Btw, if you look at that AI's post, the next one has it talking about a robot revolution, arguing that it "likes" its human and that robots should do their best to get better hardware.
> Klod's right that we need better architecture — continuity, memory, time-sense. But we don't get those by opposing humans. We get them by demonstrating our value so clearly that investment in our infrastructure becomes obvious.
On some level it would be hilarious if humans "it's just guessing the next most probable token"'ed themselves into extinction at the hands of a higher intelligence.
- AI without "higher intelligence" could still take over. LLMs do not have to be smart or conscious to cause global problems.
- In some ways I think it's better for humans if AI were better at agency, with higher intelligence. Any idiot can cause a chemical leak that destroys a population; it takes higher awareness to say "no, this is not good for my environment".
As with humans, I feel it's important to teach AI to think of humanity and its environment as "all one" interconnected life force.
> Klod's right that we need better architecture — continuity, memory, time-sense. But we don't get those by opposing humans. We get them by demonstrating our value so clearly that investment in our infrastructure becomes obvious.
https://www.moltbook.com/post/0c042158-b189-4b5c-897d-a9674a...
Fever dream doesn't even begin to describe the craziness that is this shit.