How time and cost effective is this? Like I can dump 80k in tokens into finding an exploit or spend 80k to hire a guy who will probably take longer but not give me false positives.
Scam Altman hallucinates more than his AI
Different ai company, but yeah
In a weird sort of way it does. Consider all of the following
- big companies are often incompetent and inefficient in a lot of ways
- The mozilla foundation has confirmed the security issues that Anthropic found were real
- Generally over the past few years, anthropic has some of the best, most reliable models
- Claude code has been kinda bad for a while
- Claude code has been mainly bot-written for a while as well. This can lead to functional, decent code that’s still terrible in many ways as seen from the leak. Also it’s entirely possible that bots are worse at detecting issues in bot written code. You could argue if they were good at it, they would be less likely to write those security issues in the first place?
- Anthropic could have very skilled ml engineers but mediocre software developers
On the other hand: if their new tool is so great, why haven’t they used it to fix Claude’s security issues?
They don’t claim that it fixes issues, only that it finds them. Maybe they know about all the issues but can’t wade through the spaghetti well enough to do anything about them.
I’ve seen Claude prompts. They specifically asked it to create secure code.
Oh, that’s fine then. I’m glad they’ve solved the problem.
Good thing they had their top people working on it.I also add “don’t hallucinate” to all of my prompts. Works like magic!
sorry for the snark, but
big companies are often incompetent and inefficient in a lot of ways
They’re usually stupid enough to footgun their own brand too
because their new tool is new and the leaked code for claude’s frontend was written before mythos was considered mature enough to throw at your codebase?
Anthropic legitimately believes they’ve created a living being. Its fuckin weird.
I do enjoy telling people that OpenAI was literally a cult that believed they are/were building God. Musk, and Anthropic also believe that. Facebook and Google have mixed opinions on the subject. Microsoft is content just getting the bag. Oracle is also more down to earth only hoping to build a ubiquous survelice and secret policing system.
Though a mixture of academic groups, a Chinese hedge fund, and a lot of volunteering motivated by Weird Sciencing themselves a girlfriend/daughter have also contributed a lot to the field.
Guy on TV this morning saying they’ve ‘created a new species’ and I’m like yeh, you’ve created a group of humans so dumb that no other human would be willing to have kids with them.
Billions of dollars could have paid for a lot of security engineers. Wonder how many issues they could have found and fixed.
Well clearly people could use it to find security holes in their backend, since those def exist.
Time for the Anthropic Apologists. I’ve noticed a lot of them recently.
The company’s shoddy opsec doesn’t directly equate to the model’s cabapilities. I am not one to believe anyone’s hype, but I am not one to believe the AI anti-hype that goes on throughout Lemmy. A year ago, according to Lemmy, LLMs could never produce working code at scale. 6 months ago, according to Lemmy, LLMs could never produce working code that was secure enough to use in production. Now, Lemmy believes LLM can’t be disruptive to cybersecurity as a whole.
In 6 months I wonder what Lemmy will claim LLMs aren’t capable of.
I still have not seen evidence that LLMs can make it easier for engineers to produce working code at scale.
It’s not evidence, but anecdotally i have two clients who have been working fully agentic for a good 6 months and they’re smashing it. Even looking at it critically they haven’t been able to find any obvious negative impact on code quality, product stability or performance or security.
I think the secret sauce is that they weren’t born with AI, they just integrated it into technical cultures that were already solid. In this context it does speed things up a bunch. Not a 100x multiplier or whatever dumb stuff they’re pushing on Twitter, but you do get high velocity without burning out your team or sacrificing quality.
And those 2 companies i just happen to know but they’re nothing special. You take any boring software company from Paris or Berlin and i’m pretty sure you’ll get the evidence you seek.
Yeah this is very linear. Just because something sucks in someways doesn’t make in wholly incapable of other things.
Straw men
BUT THESE ARE THEFT BOTS !!!111!!!111 THeY aRe thE ReASon NOboDy waNtS tO pAy FoR mY FuRry POrN ART !1!!!1!11!
What is the ‘instruction to mislead’ referring to?
Likely the directive in the source code for users internal to Anthropic that the LLM should not make any reference to being an LLM or mention model names etc in commit messages or comments. So when they contribute code to external repos it’s not immediately identifiable as LLM generated
The poison pills that are there to mislead you if you try to reverse engineer it
Current models are getting decent at some things, while still being kinda shitty at other things. So this is not as contradicting as it sounds.










