I understand this is not meant as production-level quality, but as a web engineer I was expecting at least a decent POC with some interesting design ideas; not total spaghetti that even gets the spec wrong (despite the good idea of checking the spec in the repo).
They may have solved a problem related to agent coordination, like the one you discussed in your interview about conflicts and allowing edits to merge without always compiling.
But at the end of the day, a novelty like this is only useful insofar as it produces good code; I don't see how coding agents are of any help otherwise.
The failure of the pattern should be acknowledged, so we can move on and figure out what does work.
I speculate that what does work is actually quite similar to managing an open source project: don't merge if it doesn't pass CI, and get a review from a human (the question is at what level of granularity). You also need humans in the project to decide on ways of doing things, so that the AI is relegated to its strength: applying existing patterns.
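To make that concrete, here's a rough sketch of what such a merge gate could look like for agent-authored pull requests, assuming they land as ordinary GitHub PRs: refuse to merge unless the combined CI status is green and at least one human has approved. The repository name, token handling, and the helper itself are hypothetical placeholders; only the standard GitHub commit-status, review, and merge endpoints are assumed.

```typescript
// Minimal sketch of the merge gate described above, assuming agents open
// ordinary GitHub pull requests. Repo name and token handling are placeholders.
const GITHUB_API = "https://api.github.com";
const REPO = "example-org/fastrender";        // hypothetical repository
const TOKEN = process.env.GITHUB_TOKEN ?? ""; // supplied by the environment

async function gh(path: string, init: { method?: string; body?: string } = {}): Promise<any> {
  const res = await fetch(`${GITHUB_API}/repos/${REPO}${path}`, {
    method: init.method ?? "GET",
    body: init.body,
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) throw new Error(`${path}: HTTP ${res.status}`);
  return res.json();
}

async function mergeIfGreenAndReviewed(prNumber: number): Promise<void> {
  const pr = await gh(`/pulls/${prNumber}`);

  // Gate 1: the combined CI status on the PR head must be "success".
  const status = await gh(`/commits/${pr.head.sha}/status`);
  if (status.state !== "success") {
    console.log(`PR #${prNumber}: CI is ${status.state}, not merging.`);
    return;
  }

  // Gate 2: at least one approving review from a human account (not a bot).
  const reviews = await gh(`/pulls/${prNumber}/reviews`);
  const humanApproved = reviews.some(
    (r: { state: string; user?: { type?: string } }) =>
      r.state === "APPROVED" && r.user?.type === "User"
  );
  if (!humanApproved) {
    console.log(`PR #${prNumber}: no human approval yet, not merging.`);
    return;
  }

  // Both gates passed: squash-merge the agent's branch.
  await gh(`/pulls/${prNumber}/merge`, {
    method: "PUT",
    body: JSON.stringify({ merge_method: "squash" }),
  });
  console.log(`PR #${prNumber}: merged.`);
}

mergeIfGreenAndReviewed(Number(process.argv[2])).catch(console.error);
```

The granularity question is then whether this gate sits at the level of individual agent edits or, as in this sketch, whole pull requests.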
In all seriousness, you can tell Wilson to get in touch with me. Even with only one person with domain knowledge involved in such an effort, and with some architectural choices made ahead of unleashing the herd, I think one could do amazing stuff.
I was actually thinking of your earlier comments about this from the perspective of a Servo engineer when I asked Wilson how much of his human-level effort on this project related to browser architecture as opposed to parallel agents research.
The answer I got made it clear to me that this wasn't a browser project - he outsourced almost all of the browser design thinking to the agents and focused on his research area, which was parallel agent coordination.
I'm certain that having someone in the human driving seat who understood browsers and was actively seeking to build the best possible browser architecture would produce very different results!
Even with the scope of the experiment in mind, I think we can deduce from it that AI is just not able to produce good software unsupervised. That's an important lesson.
To make a wider point, let's look at another of your predictions: that in 2026 the quality of AI code output will be undeniable. I actually think we've already reached that point. Since those agents came around, I've never encountered a case where the AI wasn't able to code what I instructed it to. But that's not the same thing as software engineering, and in fact I have never been impressed by the AI solving real problems for me.
It simply sucks at high-quality software architecture. And I don't think this is due to a lack of computing power, but rather that only humans can figure out what makes sense for them. And this matters, because if the software doesn't make sense, then beyond very simple things you can test manually it becomes impossible to know whether it works as intended.
A Web engine is a great example, because despite the extensive shared test suites and specifications, implementing one remains a challenge. You can write code that passes 90% of some sub-suite of tests, and then figure out that your architecture for that Web API is all wrong, that you'll never get to the last 10%, and that in fact your code is fundamentally broken. Unleashing AI without supervision makes this problem worse, I think. Solving it requires human judgement and creativity.
Current coding agents are not up to the task of producing a production-quality web browser.
I genuinely can't imagine a better example of a sophisticated software project than a browser. Chrome, Firefox and WebKit each represent a billion-plus dollars of engineering salaries paid out to expert developers over multiple decades.
Browsers have to deal with a bewildering array of specifications - and handle websites that don't conform fully to those specifications.
They have extreme performance targets.
They operate in the most hostile environment imaginable - executing arbitrary code from the web! - so security is diabolically difficult as well.
I expect there's not a human on earth who could single-handedly architect a production-grade new browser: it requires too much specialist knowledge across too many different disciplines.
On that basis, I should reconsider my 2029 prediction. I was imagining something that looked like FastRender but maybe a little more advanced - working JavaScript, for example.
A much more interesting question is when we might see a production-grade browser built mostly by coding agents.
I do think we could see that by 2029, but if we did it would take a very different shape from the FastRender project:
- a team of human experts driving the coding agents
- in development for at least a year
- built with the goal of "production ready Chrome competitor", not as a research project
The question then becomes who would fund such a thing - even a small team of experts doesn't come cheap, and I expect that the LLM costs for a project like this will still run to tens or hundreds of thousands of dollars in 2029, if not more.
Hard for me to definitively predict that someone will step up to fund such a project - great open source browser engines already exist, so why would someone fund one from scratch?
> The question then becomes who would fund such a thing
Historically, new web engines came about when a new challenger wanted to have a stake in web standards development. The way it happened was never from scratch but always as a fork of an existing engine. The last time this happened was with Google. The reason, I think, was wanting to evolve the web into an application-like platform (HTML5), plus a new architectural idea: multi-process.
The person who was in charge of that effort is now at OpenAI.
Today there are also projects like Ladybird and Servo, which follow a different model: started from scratch and driven by the interest of a developer community. But so far neither has real-world users, and so neither has had the kind of impact on the Web that Chromium has, yet.
Already today, both development models could benefit from the productivity gains of AI; in 2029 the game may have changed entirely. I can imagine a combination of math (TLA+, like I've done at https://github.com/w3c/IndexedDB/pull/484), web standards in their semi-formal English, and then some further guidance on code architecture (through a conversation-like iterative loop), making a FastRender-like approach actually work. Humans would still be the ones defining and solving all the hard problems, but you'd be typing a whole lot less code...
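To sketch what I mean by that combination, here is a purely hypothetical picture of the inputs such an iterative loop could feed each agent task; none of these names refer to an existing tool, and the shape of the loop is only an illustration.

```typescript
// Hypothetical sketch of the inputs an iterative, spec-driven loop might feed
// each agent task; all names are illustrative, not an existing tool or API.
interface AgentTask {
  tlaModel: string;          // formal model of the feature, e.g. a TLA+ spec
  specExcerpt: string;       // the relevant algorithm steps from the web standard
  architectureNotes: string; // human guidance: module boundaries, threading, ownership
  failingTests: string[];    // conformance tests the patch is expected to turn green
}

interface ReviewOutcome {
  ciPassed: boolean;         // did the build and conformance tests pass?
  humanApproved: boolean;    // did a human sign off on the design?
  feedback: string;          // folded back into the next round of guidance
}

// The conversation-like loop: agents produce patches, humans keep refining the
// model and the architectural guidance, and nothing lands until both gates pass.
async function iterate(
  task: AgentTask,
  runAgent: (t: AgentTask) => Promise<string>,     // produces a patch
  review: (patch: string) => Promise<ReviewOutcome>
): Promise<void> {
  for (;;) {
    const patch = await runAgent(task);
    const outcome = await review(patch);
    if (outcome.ciPassed && outcome.humanApproved) return;
    task = {
      ...task,
      architectureNotes: `${task.architectureNotes}\n${outcome.feedback}`,
    };
  }
}
```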
I've been using AI on side projects ever since, and in those I don't type any code by hand anymore and end up doing things I would not even contemplate (due to time constraints) without it.