Hacker News

Demis Hassabis is estimating 5-10 years to AGI, and is about the only person whose opinion on this I think is worth listening to. His/their (DeepMind) plan is to retain a pretrained LLM but add something like search around it (cf AlphaGo or AlphaFold). He does say a couple of "Transformer-level" major breakthroughs may be needed to get there, and is still only 50% confident because of AGI requirements he may have missed.

Hassabis has the Google DeepMind team of 5000+, many of whom are PhDs, working for him, so a lot can be done in 5-10 years.



LLMs cannot reason. We'd need a very different technology than LLMs to get there. Not to mention new "models" are basically old-model agents with access to some more tools. Training genuinely new models is currently not cost-efficient. Also, LLMs have plateaued; idk how Mr. Hassabis got to that conclusion but I call bs. A "couple" of major breakthroughs can take 1000s of years.


An LLM by itself can only regurgitate reasoning and/or reasoning steps from the training set, but I think that adding search on top of it gets you much closer. You're basically talking about the possibility of searching through all possible sequences of reasoning steps that an LLM could generate, and picking the sequence that actually works. DeepMind did it for AlphaGo and AlphaFold - I would not bet against them.
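The "search over sequences of reasoning steps" idea can be sketched as a toy beam search. Everything here is hypothetical scaffolding, not how DeepMind actually does it: in a real system `propose` would be an LLM sampling candidate next steps, `score` a learned value model, and `is_solved` an external verifier. A tiny arithmetic domain stands in for all three:

```python
import heapq
from typing import Callable, Optional

def search_reasoning(
    propose: Callable[[tuple], list],    # hypothetical LLM: candidate next steps
    score: Callable[[tuple], float],     # hypothetical value model: rates a partial chain
    is_solved: Callable[[tuple], bool],  # external verifier for a complete chain
    beam_width: int = 3,
    max_depth: int = 5,
) -> Optional[tuple]:
    """Beam search over reasoning-step sequences: keep the top-scoring
    partial chains, expand each with proposed steps, and return the
    first chain the verifier accepts."""
    beam = [()]  # start from the empty chain
    for _ in range(max_depth):
        candidates = []
        for chain in beam:
            for step in propose(chain):
                new_chain = chain + (step,)
                if is_solved(new_chain):
                    return new_chain
                candidates.append(new_chain)
        # keep only the highest-scoring partial chains
        beam = heapq.nlargest(beam_width, candidates, key=score)
        if not beam:
            return None
    return None

# Toy domain standing in for "reasoning": reach 10 from 2 using "+3" and "*2".
def apply_chain(chain):
    x = 2
    for op in chain:
        x = x + 3 if op == "+3" else x * 2
    return x

result = search_reasoning(
    propose=lambda chain: ["+3", "*2"],
    score=lambda chain: -abs(10 - apply_chain(chain)),
    is_solved=lambda chain: apply_chain(chain) == 10,
)
print(result)  # a chain of steps that reaches 10
```

The point of the sketch is just the shape of the argument: the generator produces many plausible step sequences, most wrong, and an outer search plus verifier picks out one that actually works, which is roughly the AlphaGo recipe of policy network plus tree search.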

When Hassabis says it'll take 5000+ people 5-10 years and maybe a couple of Transformer-level breakthroughs, he's clearly not talking about just adding search, and everyone realizes that things like continual/incremental learning are also required. I'd guess that Hassabis has a pretty good idea of what's missing from LLMs and needs to be added to get to AGI.

I really would not dismiss Hassabis. He is a lot smarter than you or I, and has won a Nobel Prize for his application of AI to science, as well as stunning the machine learning community with AlphaGo (most had thought beating Go would take another 10 years).

I think there are better long-term approaches to super-human AI than building upon LLMs, but just as a cog is not a car, there is no reason an LLM can't be a part of a larger system.



