I think you're reading way too much into OpenAI bungling its 15-month product lead, but also the whole "1 AGI company will take off" prediction is bad anyway, because it assumes governments would just let that happen. Which they wouldn't, unless the company is really really sneaky or superintelligence happens in the blink of an eye.
I think OpenAI has committed hard to the 'product company' path, and will have a tough time going back to interesting science experiments that may or may not work, but are necessary for progress.
Governments react at a glacial pace to new technological developments. They wouldn't so much 'let it happen' as find that it had already happened and they simply never noticed until it was too late. If you are betting on the government having your back in this then I think you may end up disappointed.
I think if any government really thought that someone was developing a rival within their borders they would send in the guys with guns and handle it forthwith.
They would just declare it necessary for military purpose and demand the tech be licensed to a second company so that they have redundant sources, same as they did with AT&T's transistor.
That was something tied to a bunch of very specific physical objects. There is a fair chance that once this thing really comes into being, especially if it takes longer than a couple of hours for it to be shut down or contained, the genie will never ever be put back into the bottle again.
Note that 'bits' are a lot easier to move from one place to another than hardware. If invented at 9 am it could be on the other side of the globe before you're back from your coffee break at 9:15. This is not at all like almost all other trade secrets and industrial gear, it's software. Leaks are pretty much inevitable and once it is shown that it can be done it will be done in other places as well.
This is generally true in a regulatory sense, but not in an emergency. The executive can either covertly or overtly take control of a company if AGI seems too powerful to be in private hands.
While generally true, a lot of governments have not only definitely noticed AI, they're getting flak for using it as an assistant and are actively promoting it as a strategic interest.
That said, any given government may be thinking like Zuckerberg[0] or senator Blumenthal[1], so perhaps these governments are just flag-waving what they think is an investment opportunity without any real understanding…
[0] general lack of vision, thinking of "superintelligence" in terms of what can be done with/by the Star Trek TNG era computer, rather than other fictional references such as a Culture Mind or whatever: https://archive.ph/ZZF3y
[1] "I alluded, in my opening remarks, to the jobs issue, the economic effects on employment. I think you have said, in fact, and I'm going to quote, ``Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity,'' end quote. You may have had in mind the effect on jobs, which is really my biggest nightmare, in the long term." - https://www.govinfo.gov/content/pkg/CHRG-118shrg52706/html/C...
Have you not been watching Trump humiliate all the other billionaires in the US? The right sort of government (or maybe wrong sort, I'm undecided which is worse) can very easily bring corporations to heel.
China did the same thing when their tech-bros got too big for their boots.
Humiliate? They're jostling for position and pushing each other out of the way to see who can buy the most government influence while giving the least. The only thing being humiliated here is the United States' reputation the world over. Those billionaires are making out like bandits; finally they really get to call the shots. That they give the doddering old fool some trinkets in return for untold access to power is the thing that should worry you, not that there is, occasionally, a billionaire with buyer's remorse. There are enough of them to replace the ones that no longer want to play the game.
* or governments fail to look far enough ahead, due to a bunch of small-minded short-sighted greedy petty fools.
Seriously, our government just announced it's slashing half a billion dollars in vaccine research because "vaccines are deadly and ineffective", and it fired a chief statistician because the president didn't like the numbers he calculated, and it ordered the destruction of two expensive satellites because they can observe politically inconvenient climate change. THOSE are the people you are trusting to keep an eye on the pace of development inside of private, secretive AGI companies?
While one in particular is speedracing into irrelevance, it isn't particularly representative of the rest of the developed world (and hasn't been in a very long time, TBH).
"irrelevance" yeah sure, I'm sure Europe's AI industry is going to kick into high gear any day now. Mistral 2026 is going to be lit. Maybe Sir Demis will defect Deepmind to the UK.
That's not what I was going for (I was more hinting at isolationist, anti-science, economically self-harming and freedoms-eroding policies), but if you take solace in believing this is all worth it because of "AI" (and in denial about the fact that none of those companies are turning a profit from it, and that there is no identified use-case to turn the tables down the line), I'm sincerely happy for you and glad it helps you cope with all the insanity!
I know, you wanted to vent about the USA and abandon the thread topic, and I countered your argument without even leaving the topic.
Like how I can say that the future of USA's AI is probably going to obliterate your local job market regardless of which country you're in, and regardless of whether you think there's "no identified use-case" for AI. Like a steamroller vs a rubber chicken. But probably Google's AI rather than OpenAI's, I think Gemini 3 is going to be a much bigger upgrade, and Google doesn't have cashflow problems. And if any single country out there is actually preparing for this, I haven't heard about it.
> I know, you wanted to vent about the USA and abandon the thread topic, and I countered your argument without even leaving the topic.
Accusing me of being off-topic is really pushing it: you want to bet on governments' incompetence in dealing with AI, and I don't (on the basis that there are unarguably still many functional democracies out there). On the other hand, the thread you started about the state of Europe's AI industry had nothing to do with that.
> Like how I can say that the future of USA's AI is probably going to obliterate your local job market regardless of which country you're in
Nobody knows what the future of AI is going to look like. At present, LLMs/"GenAI" are still very much a costly solution in need of a problem to solve/a market to serve¹. And saying that the USA is somehow uniquely positioned there sounds uninformed at best: there is no moat, all of this development is happening in the open, with AI labs and universities around the world reproducing this research, sometimes for a fraction of the cost.
> And if any single country out there is actually preparing for this, I haven't heard about it.
What is "this", effectively? The new flavour-of-the-month Gemini (and its marginal gains on cooked-up benchmarks)? Or the imminent collapse of our society brought by a mysterious deus ex machina-esque AGI we keep hearing about but not seeing? Since we are entitled to our opinions, still, mine is that LLMs are a mere local maximum on the way to any useful form of AI, barely more noteworthy (and practical) than Markov chains before them. Anything besides LLMs is moot (and probably a good topic to speculate about over the impending AI winter).
Do you mean from ChatGPT launch or o1 launch? Curious to get your take on how they bungled the lead and what they could have done differently to preserve it. Not having thought about it too much, it seems that with the combo of 1) massive hype required for fundraising, and 2) the fact that their product can be basically reverse engineered by training a model on its curated output, it would have been near impossible to maintain a large lead.
My 2 cents: ChatGPT -> Gemini 1 was their 15-month lead. The moment ChatGPT threatened Google's future Search revenue (which never actually took a hit afaik), Google reacted by merging Deepmind and Google Brain and kicked off the Gemini program (that's why they named it Gemini).
Basically, OpenAI poked a sleeping bear, then lost all their lead, and are now at risk of being mauled by the bear. My money would be on the bear, except I think the Pentagon is an even bigger sleeping bear, so that's where I would bet money (literally) if I could.