News

Training Claude on copyrighted books it purchased was fair use, but piracy wasn't, the judge ruled.
New research shows Claude chats often lift users’ moods. Anthropic explores how emotionally supportive AI affects behavior, ...
New research from Anthropic suggests that most leading AI models exhibit a tendency to blackmail when it is the last resort in certain tests.
New research shows that as agentic AI becomes more autonomous, it can also become an insider threat, consistently choosing ...
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the ...
Well, Anthropic received a legal win this week when a court ruled that it didn’t break the law by training Claude on the ...
Google has released its terminal-based Gemini CLI agent which is open-source and offers an expanded usage limit for the ...
AI company Anthropic has released new research claiming that artificial intelligence models might ...
The U.S. District Court for the Northern District of California has backed Anthropic for training AI models with purchased books but not for the pirated ...
A judge ruled the Anthropic artificial intelligence company didn't violate copyright laws when it used millions of ...
A federal judge ruled that Anthropic’s use of copyrighted books to train its AI model Claude qualifies as “fair use” and is “quintessentially transformative.” Judge William Alsup found the company did ...