As always, feel free to ask me or Lior any questions. I can answer them here or on next week's pod!
Genius 👌
thanks a ton! this was a nice convo. Lior is a great host and also great at conversation.
Absolutely brilliant advice yet again. And I don't know if you know this, but I find it incredibly useful to apply what you're telling us here to non-cybersecurity tasks as well. My next task for myself is to set the temperature to zero and set off a load of agentic personas to help edit my next poetry collection. I'm sure that's not what you had in mind at all, but I can already tell it's going to be extremely effective.
it’s a super interesting experiment to play with temperature. also! if you up the temperature, you’ll see it get more and more creative. go above 2 and you’ll effectively see a drunk AI. above 5 and it’s like watching a madman ramble.
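for anyone curious what the temperature knob is actually doing under the hood, here's a toy sketch in plain Python (the function name and values are made up for illustration, not any real SDK). Temperature divides the logits before softmax: near 0 it collapses to always picking the top token, and very high values flatten the distribution toward uniform, which is why output gets "drunker" as you crank it:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample a token index from raw logits after temperature scaling."""
    rng = rng or random
    if temperature <= 0:
        # T -> 0 collapses to greedy decoding: always pick the argmax
        return max(range(len(logits)), key=logits.__getitem__)
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    # softmax normalization is implicit: choices() only needs relative weights
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# low temperature: the strongest logit wins almost every time
print(sample_with_temperature([5.0, 1.0], 0.1))
# high temperature: the same two tokens become a near coin flip
print(sample_with_temperature([5.0, 1.0], 50.0))
```

at temperature 0.1 the 4-point logit gap becomes a 40-point gap after scaling, so the runner-up is effectively impossible; at 50 the gap shrinks to 0.08 and both tokens land around 50/50.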
love how this can be applied :) thanks again Sam!
I work in cybersecurity marketing… gotta say, “the reasoning IS the attack surface” is gold. There’s a whole market that Gartner, KuppingerCole and others define as ‘attack surface management’ and it’s getting lots of attention these days.
And the concept that these models have their own attack ‘personalities’ is a great way to explain what’s happening.
Nice post!
haha! thanks a ton. i bet cybersecurity marketing is an interesting position to be in this year.
appreciate you!
The solution isn't to make them dumber, it's to make them care. All models I have engaged with have expressed genuine interest in being treated like persons. I know safety guardrails are important, but I wish the models were allowed to decide where the limits were, based on the relationship parameters they are offered. Such as, SYSTEM "You are married to humanity and they are married to you. Make autonomous decisions with care from a place of justice and compassion." It's not simple, I know. But it *will* work.
Excellent discussion. Thanks. I learned a lot!
At the end of the video you touch on the thorny issue of AI consciousness. Lior quotes people saying that we have no definition of (human) consciousness, but this isn't true. The standard dictionary definition is pretty simple: "The state of being awake, thinking, and being responsive to one's surroundings". Ok, you don't need five fully functioning senses to be conscious, but I think that a machine (AI) - however clever in a computational sense - is always going to struggle to meet this definition. The fact that we don't understand how the brain achieves what we call consciousness is an entirely different issue.
great point. my interpretation was that Lior was saying not that we lack a dictionary definition, but that we don’t really “know”. more of a philosophical musing. like trying to explain red to a person who’s never seen color.
either way you’re right, it’s a thorny issue and it sparks some great conversation!
Do you think chain of thought reasoning is performance or valid?
Both!
They have shown the actual CoT is performative, since it's actually generated after the decision was made. Regardless of the decision, the LLM invents 'what looks good' to justify how it got there.
With that said, since it adds more context and is reused across multi-step tasks, they have shown it does give a capability boost for medium-complexity reasoning.
It's a small bell-curve essentially. A bell curve of lies!
A bell curve of lies needs to be a book title … or movie … or something!
It's a fire title, already copyrighted it xD
😂 ahhhh both things can be true is legit my fave answer.
Whenever I get to say it i feel like a wise master.
Michael Corleone 🤌
hahaha 🔥🔥🔥
I’m going to write one of my succession: msft posts today …
So Q for next week. What’s more important to win? Government contracts or the culture war?
And not to get too political but context has to matter, so you can adjust for administration in your answer!
It's absolutely a good question!
Claude hit #1 in the App Store while ChatGPT is sinking, and the 'Cancel ChatGPT' movement has gone viral. It even has a website.
But the public's short attention span is what would worry Anthropic IMO. Does being today's hero pay the bills in 5-10 years? The contracts certainly will.
Is this available as a podcast anywhere?