“Something like over 70% of [Anthropic’s] pull requests are now Claude-written code,” Krieger told me. As for what these engineers are doing with the extra time, Krieger said they’re orchestrating the Claude codebase and, of course, attending meetings. “It really becomes apparent how much else is in the software engineering role,” he noted.
The pair fiddled with Voss water bottles and answered an array of questions from the press about an upcoming compute cluster with Amazon (Amodei says “parts of that cluster are already being used for research”) and the displacement of workers due to AI (“I don’t think you can offload your company strategy to something like that,” Krieger said).
We’d been told by spokespeople that we weren’t allowed to ask questions about policy and regulation, but Amodei offered some unprompted insight into his views on a controversial provision in President Trump’s megabill that would ban state-level AI regulation for ten years: “If you’re driving the car, it’s one thing to say ‘we don’t have to drive with the steering wheel now.’ It’s another thing to say ‘we’ll rip out the steering wheel, and we can’t put it back in for 10 years,’” Amodei said.
What does Amodei think about the most? He says the race to the bottom, where safety measures are cut in an effort to compete in the AI race.
“The whole puzzle of running Anthropic is that we somehow have to find a way to do both,” Amodei said, meaning the company has to compete and deploy AI safely. “You might have heard this stereotype that, ‘Oh, the companies that are the safest, they take the longest to do the safety testing. They’re the slowest.’ That’s not what we found at all.”