I think search is a fairly simple control loop. Beam search is an example of TTC (test-time compute) in this modern era.
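To make the "simple control loop" claim concrete, here's a minimal beam-search sketch. The `expand` and `score` functions and the toy numeric usage are illustrative stand-ins, not anything from a real system:

```python
def beam_search(start, expand, score, width=3, steps=5):
    """Minimal beam-search control loop.

    Each iteration: expand every candidate on the beam, score the
    children, keep only the `width` best. That's the whole loop.
    """
    beam = [start]
    for _ in range(steps):
        candidates = [child for state in beam for child in expand(state)]
        if not candidates:
            break
        beam = sorted(candidates, key=score, reverse=True)[:width]
    return max(beam, key=score)

# Toy usage: states are integers, children are n+1 and n+2,
# the score is the value itself.
best = beam_search(0, lambda n: [n + 1, n + 2], lambda n: n)
```

In an LLM setting the states would be partial token sequences and `score` the model's log-probability, but the control structure is exactly this.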
It is a very broad term, IME, that covers anything besides "one-shot through the network".
I think the point about search formulations being amenable to domains like chess and Go, but not to other domains, is critical. If LLMs are coming up with effective search formulations for "open-ended" problems, that would be a big deal. Maybe this is what you're alluding to.
That's like saying that Darwinian evolution is simple. It's not entirely wrong, but it misses the point rather badly. The thing that makes search useful is not the search per se, it's the heuristics that reduce an exponential search space to make it tractable. In the case of evolution (which is a search process) the heuristic is that at every iteration you select the best solution on the search frontier, and you never backtrack. That heuristic produces a certain kind of interesting result (life) but it also has certain drawbacks (it's limited to a single quality metric: reproductive fitness).
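The heuristic described above (keep only the fittest variant on the frontier, never backtrack) is essentially hill climbing. A caricature in code, with `mutate`, `fitness`, and the toy objective all invented for illustration:

```python
import random

def greedy_evolve(seed, mutate, fitness, population=20, generations=50):
    """Greedy frontier search: at each generation, generate variants of
    the current best, keep the single fittest, and never revisit a
    discarded candidate. One quality metric, no backtracking.
    """
    best = seed
    for _ in range(generations):
        frontier = [mutate(best) for _ in range(population)]
        champion = max(frontier, key=fitness)
        if fitness(champion) > fitness(best):
            best = champion  # commit greedily; discarded variants are gone
    return best

# Toy objective: climb toward x = 5 with random local mutations.
random.seed(0)
result = greedy_evolve(
    0.0,
    mutate=lambda x: x + random.uniform(-1, 1),
    fitness=lambda x: -(x - 5) ** 2,
)
```

The single scalar `fitness` is the point of the analogy: the loop optimizes exactly one metric, just as the comment says evolution is limited to reproductive fitness.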
> Beam search is an example of TTC in this modern era.
That's an interesting analogy. I'll have to ponder that.
But my knee-jerk reaction is that it's not enough to say "put reactivity and deliberation together". The manner in which you put them together matters, and in particular, it turns out that putting them together with a third component that manages both the deliberation and the search is highly effective. I can't say definitively that it's the best way -- AFAIK no one has ever actually done the research necessary to establish that. But empirically it produced good results with very little computing power (by today's standards).
My gut tells me that the right way to combine LLMs and search is not to have the search manage the LLM, but to provide search as a resource for the LLM to use, kind of like humans use a pocket calculator to help them do arithmetic.
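The inversion of control suggested here can be sketched in a few lines. Everything below is hypothetical scaffolding (the `model_step` protocol, the action dict shape, the tool names); it just shows search sitting behind a tool interface the model chooses to call, rather than a loop that drives the model:

```python
def answer(model_step, tools, prompt, max_turns=5):
    """Agent loop where the model, not a search controller, is in charge.

    `model_step` returns either a tool request or a final answer; search
    is just one entry in `tools`, invoked only when the model asks.
    """
    transcript = prompt
    for _ in range(max_turns):
        action = model_step(transcript)  # the model decides what to do next
        if action["type"] == "final":
            return action["text"]
        result = tools[action["tool"]](action["arg"])  # e.g. run a search
        transcript += f"\n[{action['tool']} -> {result}]"
    return transcript

# Toy usage with a scripted "model": it calls the search tool once,
# then wraps the tool's result into a final answer.
def fake_model(transcript):
    if "[search ->" not in transcript:
        return {"type": "tool", "tool": "search", "arg": "best move"}
    found = transcript.split("-> ")[1].rstrip("]")
    return {"type": "final", "text": "done: " + found}

reply = answer(fake_model, {"search": lambda q: "e4"}, "play chess")
```

This is the pocket-calculator arrangement: the deliberative machinery is available on demand, but the model owns the loop.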
> If LLMs are coming up with effective search formulation for "open-ended" problems, that would be a big deal.
AFAICT, at the moment LLMs aren't "coming up" with anything, they are just a more effective compression algorithm for vast quantities of data. That's not nothing. You can view the scientific method itself as a compression algorithm. But to come up with original ideas you need something else, something analogous to the random variation and selection in Darwinian evolution. Yes, I know that there is a random element in LLM algorithms, and again I don't really understand the details, but the way in which the randomness is deployed just feels wrong to me somehow.
I wish I had more time to think deeply about these things.