19 Comments
PancakeSushi's avatar

See, this is what I'm talking about. People joke about Skynet, robots killing and enslaving humans, taking our jobs (though that last one is happening), but it's never about massive, ominous change. It's the creeping change that's worrying: quiet, unseen events like this. Incremental, practical change is far more insidious than loud, sudden change. It's never witnessed; it's the kind of thing that makes you smack your head and go, "Ah! Why didn't I see that?" It makes sense too, with the massive private investment in A.I. in America that utterly dwarfs the rest of the world. A.I. training A.I., compounding the problem in a daisy-chain effect, is the kind of nightmare fuel you don't see until it's too late.

ToxSec's avatar

Fully agree. I think this is the most realistic doomsday scenario. Not a big battle, but creeping change that no one notices.

Money is going into this faster than any other investment in history.

PancakeSushi's avatar

I have a friend who's an economist based in Turkey. She was quoting some numbers, and you're right: it's the ramping up, the speed. The figure was $109.8 billion in 2024, and she said the *estimate* is that the real number is far higher, because much of it isn't publicly reported. We're talking triple or more.

ToxSec's avatar

I believe it. Look at the footprint of the data centers we are building. New-era megastructures. It’s gotta be for something big!

PancakeSushi's avatar

You ain't kidding, brother

ToxSec's avatar

🔥🔥🔥🔥🔥

Suhrab Khan's avatar

This breakdown of model collapse is crucial. Emphasizing authenticated datasets and human-in-the-loop systems is the only way to maintain quality and trust in AI outputs as the feedback loop intensifies.

ToxSec's avatar

Totally agree! I predict “High Quality Human Data” will be a thing. Also, I think Google is in a really strong position to have access to that.

Semantic Fidelity Lab's avatar

The photocopy metaphor lands, but what’s really degrading is semantic fidelity, not just data quality. Each loop preserves form while intent, nuance, and grounding thin out. That’s how the internet starts sounding confident while meaning quietly collapses.

ToxSec's avatar

great point, and i definitely agree. the metaphor is nice to help keep it approachable at least.

also it’s the worst outcome: confidently incorrect, with reasonable-sounding justification.

Dr Sam Illingworth's avatar

Another excellent post, thank you. AI model collapse presents itself (in my mind at least) as a modern-day Ouroboros. Where do we break the cycle? Can we? And IMHO this is also intrinsically linked to the way the AI bubble (for want of a better word) is currently supported by cannibalistic investment. But again, how to break the system without destroying everything it is built on?

ToxSec's avatar

Hah! Sam, my early draft had Ouroboros in the title! I was unsure how many people would know that, or how it would land.

My prediction is that some providers will slowly add more AI and degrade. HQHD (High Quality Human Data) will become a commodity.

I also think Google is in a really strong place to get access to that.

Dr Sam Illingworth's avatar

Great minds clearly do think alike! 😉

ToxSec's avatar

🔥🔥🔥 it would seem so!

Dallas Payne's avatar

Next time I see an old physical set of Encyclopaedia Britannica, I'm honestly going to think of buying them. The information may be dated, but at least it won't be the internet?! 😥 Given time, it might be the most accurate reference again, lol.

ToxSec's avatar

We literally might go full circle!! Basically have to un-trust anything after 2023? Crazy times lol.

Comment deleted
Dec 24
ToxSec's avatar

I fully agree. Asking where it could go wrong is a very valuable prompt. I refer to those as adversarial prompts: anything that pushes against you or finds weaknesses in your thoughts, plans, and strategies.

Also, I really appreciate the compliment on the visuals. I use Nano Banana Pro for everything. All the visuals inside the articles themselves are from Napkin.ai.

Comment deleted
Dec 24
ToxSec's avatar

it’s taken me a while, but it’s actually 2 prompts. i give my idea to claude, and he restructures it into a good visual prompt for gemini. it speeds up the process, and gets you really clean results really fast.

Comment deleted
Nov 24
ToxSec's avatar

Maybe that was the case! Either way, model collapse is a real danger when training AI on AI-generated output.
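The degradation loop discussed upthread can be shown with a toy simulation. This is only an illustrative sketch, not anything from the original post: it stands in for a generative model with a fitted Gaussian, and it mimics how models underrepresent rare data by dropping low-probability samples each generation. Watch the spread of the data shrink as each "model" trains on the previous one's output:

```python
import random
import statistics

def toy_generation(data, n, rng):
    """Fit a toy 'model' (a Gaussian) to the data, sample from it,
    and keep only the likely outputs. Dropping the rare tail events
    is the simplification that mimics how generative models tend to
    underrepresent low-probability data."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    # keep only samples within ~1.65 sigma of the mean (the central ~90%)
    return [x for x in samples if abs(x - mu) <= 1.65 * sigma]

rng = random.Random(42)
human_data = [rng.gauss(0.0, 1.0) for _ in range(2000)]  # the "real" data

data = human_data
for _ in range(10):  # ten generations of AI trained on AI output
    data = toy_generation(data, 2000, rng)

print(f"original spread: {statistics.stdev(human_data):.3f}")
print(f"after 10 generations: {statistics.stdev(data):.3f}")
```

Each generation preserves the overall shape of the data while the tails quietly disappear, which is the "photocopy of a photocopy" effect in miniature: form survives, diversity doesn't.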