22 Comments
PancakeSushi:

See, this is what I'm talking about. People joke about Skynet, robots killing and enslaving humans, taking our jobs (though that last one is happening), but the real worry is never the massive, ominous change. It's the creeping change, quiet unseen events like this. Incremental, practical change is far more insidious than loud, sudden change. It's never witnessed in the moment; it's the kind of thing where you smack your head later and go, "Ah! Why didn't I see that?" It makes sense too, with the massive private investment in A.I. in America that utterly dwarfs the rest of the world. A.I. training A.I., compounding the problem and creating a daisy-chain effect, is the kinda nightmare fuel you don't see until it's too late.

ToxSec:

Fully agree. I think this is the most realistic doomsday scenario. Not a big battle, just creeping change that no one notices.

There is more money going into this, at this speed, than into any other investment in history.

PancakeSushi:

I have a friend who is an economist; she's located in Turkey. She was quoting some numbers, and you're right, it's the ramping up, the speed. The number was $109.8 billion in 2024, and she said the *estimate* is that the real figure is far higher, because a lot of it isn't publicly posted. We're talking tripling or more.

ToxSec:

I believe it. Look at the footprint of the data centers we are building. New-era megastructures. It’s gotta be for something big!

PancakeSushi:

You ain't kidding, brother

ToxSec:

🔥🔥🔥🔥🔥

Suhrab Khan:

This breakdown of model collapse is crucial. Emphasizing authenticated datasets and human-in-the-loop systems is the only way to maintain quality and trust in AI outputs as the feedback loop intensifies.

ToxSec:

Totally agree! I predict “High Quality Human Data” will be a thing. Also, I think Google is in a really strong position to have access to that.
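
To make the "authenticated data plus human-in-the-loop" idea concrete, here's a tiny toy sketch (my own example, not anything from the post, and all the names are made up): a sample only reaches the training set once a named reviewer signs off on it.

```python
# Toy sketch of a human-in-the-loop gate for training data. Everything here is
# hypothetical scaffolding; the point is just that nothing enters the set
# without a human attached to it.
from dataclasses import dataclass, field


@dataclass
class Sample:
    text: str
    source: str                  # e.g. "scraped", "licensed", "reader-submitted"
    reviewer: str | None = None  # filled in only when a human signs off


@dataclass
class TrainingSet:
    approved: list[Sample] = field(default_factory=list)

    def submit(self, sample: Sample, reviewer: str) -> None:
        # The gate: record who approved the sample before it is accepted.
        sample.reviewer = reviewer
        self.approved.append(sample)


ts = TrainingSet()
ts.submit(Sample("Hand-written explainer on DNS.", source="reader-submitted"),
          reviewer="alice")
print(len(ts.approved), "approved samples")
```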

SunnySide Security:

This makes a lot of sense. Every time I've tried messing with or fine-tuning a model, maybe the training data wasn't the best?

ToxSec:

Maybe that was the case! Either way, model collapse is a real danger when training AI.

Semantic Fidelity Lab:

The photocopy metaphor lands, but what’s really degrading is semantic fidelity, not just data quality. Each loop preserves form while intent, nuance, and grounding thin out. That’s how the internet starts sounding confident while meaning quietly collapses.

ToxSec:

Great point, and I definitely agree. The metaphor helps keep it approachable, at least.

Also, it’s the worst outcome: confidently incorrect, with reasonable-sounding justification.
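
If you want the mechanics without the metaphor, here's a toy sketch I find useful (my own illustration, standard library only): the "model" is just a Gaussian fit, and each generation is trained only on samples drawn from the previous generation's fit. The spread does a random walk and, run long enough, collapses toward zero; the rare, tail-end data is what disappears first.

```python
# Toy model-collapse loop: fit, sample from the fit, refit on the samples, repeat.
# A single short run can wander either way, but over enough generations the
# variance collapses -- no new "human" data ever re-enters the loop.
import random
import statistics

random.seed(0)

data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: "human" data

for generation in range(201):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    if generation % 25 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f} std={sigma:.3f}")
    # The next generation never sees the original data, only samples from the current fit.
    data = [random.gauss(mu, sigma) for _ in range(50)]
```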

Alex:

Habsburg AI is pretty interesting. I agree, you can’t blindly take everything. I’ve found being able to funnel my neuroticism into the work has been beneficial, and it’s probably the only arena where it’s useful. AI doesn’t replace expertise or discernment. And the point of asking an LLM “where is this wrong” is valuable. In my talks with people I’ve found few have actually taken the step to ask it to argue against their mental model or its own internal context. I will say, though, in building Ben (my integrated security system project) it has been invaluable in pointing me to places and ideas I wouldn’t have been able to Google myself out of or really expect an answer from a forum. Sometimes in the work you do there are only a handful of people who can give you legitimate feedback or pointers, and they’re all typically busy doing something other than charity for you.

On a side note, I absolutely love your visuals. Do you use Illustrator? I use Illustrator for system diagrams and it’s killer. I make nothing as cool as you do though.

ToxSec:

I fully agree. Asking where it could go wrong is a very valuable prompt. I refer to those as adversarial prompts. Anything that pushes against you or finds weaknesses in your thoughts and plans and strategies.
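
Roughly what I mean, as a sketch (the wording of the template is just mine, not a magic formula):

```python
# A minimal "adversarial prompt" wrapper: instead of asking the model to agree,
# ask it to attack the plan. The template text is only an example.
ADVERSARIAL_TEMPLATE = """You are reviewing the plan below as a hostile critic.
List the three weakest assumptions, explain how each one fails in practice,
and say what evidence would change your mind.

PLAN:
{plan}
"""


def build_adversarial_prompt(plan: str) -> str:
    """Wrap a plan in a prompt that pushes back instead of agreeing."""
    return ADVERSARIAL_TEMPLATE.format(plan=plan)


if __name__ == "__main__":
    print(build_adversarial_prompt("Train a small model on synthetic data only."))
```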

Also, I really appreciate the compliment on the visuals. I use Nano Banana Pro for everything. All the visuals inside the articles themselves are from Napkin.ai.

Alex:

Bad ass, you must be a prompt wizard because I would never get anything that clean. I have definitely tried.

ToxSec:

It’s taken me a while; it’s actually 2 prompts. I give my idea to Claude, and he restructures it into a good visual prompt for Gemini. It speeds up the process and gets you really clean results really fast.
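
The chain is roughly this shape, if it helps. Treat it as a sketch: ask_claude and ask_gemini_image below are hypothetical stand-ins for whatever clients you actually use, not real API calls.

```python
# Two-step prompt chain: a text model restructures a rough idea into a detailed
# visual prompt, then an image model renders it. Both helpers are placeholders.
def ask_claude(prompt: str) -> str:
    """Placeholder: call your text-model client here and return the reply text."""
    raise NotImplementedError


def ask_gemini_image(prompt: str) -> bytes:
    """Placeholder: call your image-generation client here and return image bytes."""
    raise NotImplementedError


def idea_to_visual(idea: str) -> bytes:
    # Step 1: turn the rough idea into a single, explicit visual prompt.
    visual_prompt = ask_claude(
        "Rewrite this idea as one detailed prompt for an image model. "
        "Describe composition, style, and any labels explicitly:\n\n" + idea
    )
    # Step 2: hand that structured prompt to the image model.
    return ask_gemini_image(visual_prompt)
```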

Sam Illingworth:

Another excellent post, thank you. AI model collapse presents itself (in my mind at least) as a modern-day Ouroboros. Where do we break the cycle? Can we? And IMHO this is also intrinsically linked to the way in which the AI bubble (for want of a better word) is currently supported by cannibalistic investment. But again, how do we break the system without destroying everything it is built on?

ToxSec:

Hah! Sam, my early draft had Ouroboros in the title! I was unsure how many people would know that, or how it would land.

My prediction is that some providers will slowly add more AI-generated content and degrade. HQHD (High Quality Human Data) will become a commodity.

I also think Google is in a really strong place to get access to that.

Sam Illingworth:

Great minds clearly do think alike! 😉

ToxSec:

🔥🔥🔥 it would seem so!

Dallas Payne:

Next time I see an old physical set of Encyclopaedia Britannica, I'm honestly going to think about buying them. The information may be dated, but at least it won't be the internet?! 😥 Given time, it might be the most accurate reference again, lol.

ToxSec:

We literally might go full circle!! Basically have to un-trust anything after 2023? Crazy times lol.
