News

This is no longer a purely conceptual argument. Research indicates that increasingly large models are already showing a ...
The new Claude Gov models offer capabilities beyond Anthropic's other enterprise models, including “enhanced proficiency” in languages critical to US national security, and a better ...
Can AI like Claude 4 be trusted to make ethical decisions? Discover the risks, surprises, and challenges of autonomous AI ...
The researchers argue that traditional benchmarks, like math and coding tests, are flawed due to “data contamination” and ...
When we are backed into a corner, we might lie, cheat and blackmail to survive — and in recent tests, the most powerful ...
The company claims its ability to tackle complex, multistep problems paves the way for much more proficient AI agents.
The artificial intelligence industry is always in full swing, and something interesting happens almost every day. This time, ...
Safety testing AI means exposing bad behavior. But if companies hide it, or if headlines sensationalize it, public trust suffers either way.
Two AI models recently exhibited behavior that mimics agency. Do they reveal just how close AI is to independent ...
The latest versions of Anthropic's Claude generative AI models made their debut Thursday, including a heavier-duty model built specifically for coding and complex tasks. Anthropic launched the new ...