AI gains “values” with Anthropic’s new Constitutional AI chatbot approach

Anthropic's Constitutional AI logo on a glowing orange background. (credit: Anthropic / Benj Edwards)

On Tuesday, AI startup Anthropic detailed the specific principles of its "Constitutional AI" training approach that provides its Claude chatbot with explicit "values." It aims to address concerns about transparency, safety, and decision-making in AI systems without relying on human feedback to rate responses.

Claude, which Anthropic released in March, is an AI chatbot similar to OpenAI's ChatGPT.

"We’ve trained language models to be better at responding to adversarial questions, without becoming obtuse and saying very little," Anthropic wrote in a tweet announcing the paper. "We do this by conditioning them with a simple set of behavioral principles via a technique called Constitutional AI."

Source: https://arstechnica.com/?p=1937794