
I've been trying to understand why on earth these companies would release something as an answer engine that obviously fabricates incorrect answers, and would simultaneously be so blind to this that they release promo videos where the incorrect answers appear right there on screen! And this happened twice, with two of the biggest and oldest companies in big tech.

It really feels like some kind of "emperor has no clothes" moment. Everyone is running around saying "WOW what a nice suit emperor" and he's running around buck naked.

I am reminded of this video podcast from Emily Bender and Alex Hanna at DAIR - the Distributed AI Research Institute - where they discuss Galactica. It was the same kind of thing, with Yann LeCun and Facebook talking about how great their new AI system was and how useful it would be to researchers, when in fact it produced lies and nonsense in abundance.

https://videos.trom.tf/w/v2tKa1K7buoRSiAR3ynTzc

But reading this article I started to understand something... These systems are enchanting. Maybe it's because I want AGI to exist and so I find conversation with them so fascinating. And I think to some extent the people behind the scenes are becoming so enchanted with the system they interact with that they believe it can do more than is really possible.

Just reading this article I started to feel that way, and I found myself really struck by this line:

LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.

Seeing that after reading this article stirred something within me. It feels compelling in a way which I cannot describe. It makes me want to know more. It makes me actually want them to release these models so we can go further, even though I am aware of the possible harms that may come from it.

And if I look at those feelings... it seems odd. Normally I am more cautious. But I think there is something about these systems that is so fascinating, we're finding ourselves willing to look past all the errors, completely to the point where we get caught up and don't even see them as we are preparing for a release. Maybe the reason Google, Microsoft, and Facebook are all almost unable to see the obvious folly of their systems is that they have become enchanted by it all.

EDIT: The above podcast is good, but I also want to share this episode of Tech Won't Save Us with Timnit Gebru, the former Google AI ethics co-lead who was fired after refusing to take her name off of a research paper that raised concerns about large language models. Her experience and direct commentary here get right to the point of these issues.

https://podcasts.apple.com/us/podcast/dont-fall-for-the-ai-h...



I think a large part of it is that it's so obviously incredible and powerful and can do so many stupendous things, but they're left kind of dumbstruck about how to monetize it other than just charging for access.


I agree with you, but to me the obvious answer is that this is unfinished research. An LLM is obviously going to be a useful part of a future information processing system, but it is not a terribly useful information processing system on its own. So invest in more research, secure rights to the future capabilities, and release something in the future that actually does what it's supposed to do. I am listening to a podcast with Timnit Gebru now, who talks about coming up with tests you think your system should pass, just like running tests against your code. So if you think it can be used to suggest vacation plans, it had better do a good job giving you correct information. Otherwise you're just releasing something half baked, and it is hard for me to see the point in that.
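The "tests for your model" idea can be sketched in a few lines. This is a hypothetical illustration, not any real eval framework: `model_answer` is a stub standing in for an actual model API call, and the prompts are made up.

```python
# Minimal sketch of behavioral tests for an answer engine.
# model_answer() is a stand-in for a real LLM call; everything
# here (names, prompts, checkers) is illustrative only.

def model_answer(prompt: str) -> str:
    # Fake model: returns a canned answer so the sketch is runnable.
    canned = {
        "What is the capital of France?": "The capital of France is Paris.",
    }
    return canned.get(prompt, "I don't know.")

# An eval suite is just (prompt, checker) pairs, run like unit tests.
EVALS = [
    ("What is the capital of France?", lambda ans: "Paris" in ans),
]

def run_evals():
    # Returns (prompt, passed) for each case, like a test report.
    return [(prompt, check(model_answer(prompt))) for prompt, check in EVALS]

all_passed = all(ok for _, ok in run_evals())
```

The point is the workflow, not the code: before release, the system runs against concrete expectations ("it should give correct vacation-planning facts"), and a failing check blocks the launch the same way a failing unit test would.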


Yeah, it's a bizarre moment in tech, unlike anything I can recall historically. Major corporations with revenues exceeding most countries' GDPs acting like attention-seeking startups. Maybe it says something about the fragility of this business during the current period. Or maybe it's just a cynical distraction from the largely unjustified layoffs.


Frankly, people are buying the AI's escape mechanism. The fact that this tech is being wielded haphazardly for purposes it's not suited for, made into a bad search companion because it's cool, is disturbing.

It sounds so much like the scenarios where AI convinces its creators to let it out.

It's evident business leaders don't know what they're looking for in developing AI, so they've made what "seems cool", but really is manipulative and threatening. Too much talk of safety has lulled away all that very useful fear.


>I am reminded of this video podcast from Emily Bender and Alex Hanna at DAIR - the Distributed AI Research Institute - where they discuss Galactica. It was the same kind of thing, with Yann LeCun and Facebook talking about how great their new AI system was and how useful it would be to researchers, when in fact it produced lies and nonsense in abundance.

Strange that they would name it "Galactica". The battlestar Galactica famously didn't even have networked computer systems, much less AI, since its crew had already seen what happens when computers become too intelligent: pretty soon they develop a new religion and try to nuke their creators out of existence.


Money. The answer is always money.


I can understand on a micro level why managers might want to release a product in order to get bonuses or something, which we see at Google all the time. But these things are happening at the macro level (coming as major moves from the top), and it's not clear that these moves are even sensible from a profit perspective.


There's money to be made -right now-, which is the only time that matters to the financial industry.

There's also an arms race with China that we need to win.

There's also the delighting in the hubris of ruining everything in such a uniquely human way that appeals to certain people.



