14 Comments
Epsilon Protocol

The valuation question is fascinating - we're essentially betting on whether these systems will continue their trajectory OR hit fundamental limits. What strikes me is how little serious discourse there is about the second-order effects: not just 'can AI do X?' but 'what happens when millions of AI agents start interacting with each other and existing systems?' The infrastructure layer might be where the real moats form.

ToxSec

Absolutely. I do think we’re going to hit some limitations and will need another breakthrough on the scale of the transformer. However, agentic is definitely the story of 2026. I think we’ll see more protocols like MCP replace APIs and glue code.

Epsilon Protocol

100% agree on the architecture shift coming. The transformer was that rare "unlock" that changed everything—we're probably due for another paradigm leap. MCP and similar protocols feel like the right direction—native interoperability rather than duct-taped integrations. The middleware layer has always been messy; if 2026 is the year of agentic systems going mainstream, we need infrastructure that treats agent-to-agent communication as first-class. What do you think the breakthrough will look like? My bet is on something that handles persistent context and multi-step reasoning more elegantly than current approaches.

Dallas Payne

This was a really great read! Thanks for the time you took to break this down into something a non-tech person like me can understand. It makes a lot of sense, and I'm really curious about where this will end up. I got stuck in an editing loop with Claude several days ago while writing my last post. I knew I had passed the point of no return when Claude could no longer see the mistakes still clearly present, all of its own making. It could produce a good level of critical analysis and review of content that was entirely mine, but it could not seem to do the same with the content it had created, and it had clearly written a lot that was not really correct (not entirely wrong, just not right). It just got worse and worse. Would this be a form of this concept in action at a very micro level?

ToxSec

Right back at you! I really appreciate the engagement here. I've also been stuck in loops like this; it definitely helped inspire me to research why this was happening. Yes, that is spot on. I've just started using Claude Skills, and it's a *really* good way to fight this: you keep your important instructions in a skill and use frequent session refreshes. Works like a charm!
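To make the tip concrete, here's a rough sketch of what such a skill file can look like. Claude Skills are defined in a SKILL.md with YAML frontmatter; the name, description, and rules below are hypothetical examples for illustration, not anything from the post:

```markdown
---
name: editing-guardrails
description: Keeps my non-negotiable editing rules available in every session.
---

# Editing guardrails

- Never rewrite quotes or citations; flag them for me instead.
- After three failed fix attempts on the same passage, stop and summarize
  the remaining issues so I can start a fresh session.
```

The point is that the important instructions live outside the chat, so a fresh session starts with them intact instead of depending on a long, drifting context.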

Dallas Payne

Glad I was on the right track understanding this 😅 I've given Claude Skills a passing glance but I think now I will definitely spend some time setting it up for my work - thanks for the tip! The loop is no fun at all.

ToxSec

Definitely worth a look. Each “step” can be recreated so faithfully; I love it.

PancakeSushi

Wow, impressive article, and your credentials are impeccable. This is way over my head, but I'll take the word of an ex-member of No Such Agency. Beyond not being able to properly model hands, I had no idea how bad the problem was; I never trusted not only the technology, but more properly, the motives and ethics of corporations and individuals profiting off of it in a market-driven economy. With few guardrails and the ubiquity of the technology, coupled with the dangers highlighted in this post, what impact is this likely to have on the growth of A.I. and potential regulation? What problems do you foresee beyond small inconveniences? Ultimately, can this slow, or reverse, the proliferation of this technology and its impact on jobs? I can infer generalized answers, but not with any degree of foresight.

ToxSec

Appreciate the comment! I actually got really curious after watching a video from Kurzgesagt (one of the best channels on YouTube). When it comes to just LLM technology, I think this will create an upper limit for our current architecture. We will continue to see development through agents as they add functionality, but after researching the problem, I don't think this is the path to AGI. For that we will need another technological breakthrough similar to transformers. So if that breakthrough doesn’t come quickly enough, I think we would see a big slowdown or another AI winter.

As for regulation, I do think some US regulation and a lot of EU regulation is in the works. But it’s greatly behind the technology, and given the game theory at play, I don’t think it will be enough in the short term.

PancakeSushi

Especially on the regulation, I thought as much. I watch Kurzgesagt as well; they cover a lot of topics, and anything delivered in a soothing RP British voice makes learning easy lol. Thanks for writing back!

ToxSec

Yeah you are spot on! The channel is great with that voice for sure, even if it’s discussing existential dread lol. Great to meet you!

PancakeSushi

Likewise, be seein' you around

Starving Artist Midlife Crisis

I’ve noticed in perhaps the largest LLM, ChatGPT, that mistakes become more frequent the longer one chat string goes on. It’s mostly small things like getting dates mixed up. (It was coming up with a nutrition timeline for me and thought yesterday was today; I had to remind it twice what day it was.) It does something similar with PDFs: when I give it a long PDF document, a lot of the time it won’t admit that it can’t read half of the document, and it makes up what it says rather than admitting that an error occurred. These are relatively small problems but, after reading this, I wonder if they’re part of a larger issue.

ToxSec

Absolutely! It makes me think of a few things. 1) If your LLM suddenly can’t get facts straight when my context window grows to 20k, why are you bragging about allowing up to 1 million? 2) This makes me restart into new sessions all the time out of frustration. It’s definitely a friction point. Like that meme where GPT casually says, “You’re right! I didn’t even open the csv file yet :)”.
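The "restart into a new session" workaround can be sketched in code: keep a rolling window of recent messages under a rough token budget so the context never drifts the way described above. This is a minimal illustrative sketch; the 4-characters-per-token heuristic and the 20k default budget are assumptions, not figures from the thread:

```python
# Rolling-window context trimming: always keep the system prompt,
# then keep the newest messages that fit under a rough token budget.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int = 20_000) -> list[dict]:
    """Return the system prompt plus the most recent messages that fit."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept = []
    used = sum(estimate_tokens(m["content"]) for m in system)
    for m in reversed(rest):  # walk newest-to-oldest
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

A manual session restart is the blunt version of this: it throws away the whole window and keeps only the instructions you re-paste, which is why keeping those instructions in a skill pairs well with it.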
