Exponential gains from AGI require recursive self-improvement and the compute headroom to realize them. It's unclear whether current LLM architectures make either possible.
People need to stop talking about "exponential" gains; these models can't even improve themselves, let alone at any particular rate. And who wants them training themselves while connected to the Internet anyway? I sure don't. All it takes for major disruption is superhuman ability at subhuman prices.