News

Anthropic has introduced a new feature in its Claude Opus 4 and 4.1 models that allows the AI to choose to end certain ...
VCs are tripping over themselves to invest, and Anthropic is very much in the driver's seat, dictating stricter terms for who ...
Harmful, abusive interactions plague AI chatbots. Researchers have found that AI companions like Character.AI, Nomi, and ...
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
However, Anthropic is also backtracking on its blanket ban on generating all types of lobbying or campaign content to allow for ...
By empowering Claude to exit abusive conversations, Anthropic is contributing to ongoing debates about AI safety, ethics, and ...
Claude-maker Anthropic has told investors that the AI company does not want money coming through special purpose vehicles ...
Can exposing AI to “evil” make it safer? Anthropic’s preventative steering with persona vectors explores controlled risks to ...
Anthropic has announced a new experimental safety feature, allowing its Claude Opus 4 and 4.1 artificial intelligence models ...
Anthropic has given its chatbot, Claude, the ability to end conversations it deems harmful. You likely won't encounter the ...
Anthropic has given Claude, its AI chatbot, the ability to end potentially harmful or dangerous conversations with users.
Claude won't end chats if it detects that the user may inflict harm upon themselves or others. As The Verge points out, ...