Anthropic’s AI Can Distinguish Good and Bad

Claude, a conversational AI created by Anthropic, a team founded by former OpenAI researchers, adheres to Apple’s guidelines for app developers and other ethical conventions. The “constitution” that governs Claude is based on the Universal Declaration of Human Rights.

This Study Might Be Revolutionary for the Future of AI

However, the “constitution” here may be meant in a figurative rather than literal sense. Anthropic co-founder and former OpenAI consultant Jared Kaplan said in an interview with Wired:

“The constitution of Claude may be seen as a fixed set of parameters that any trainer uses to model an AI.”

The training method used by Anthropic is outlined in a paper titled “Constitutional AI: Harmlessness from AI Feedback.” The paper describes a method for building a helpful yet harmless artificial intelligence that, once trained, can learn from its mistakes, identify improper conduct, and adjust its own behavior accordingly with little to no human input.
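To make that self-correction loop concrete, here is a minimal sketch of the critique-and-revision idea from the paper. The generate() helper is a hypothetical stand-in for a language-model call, and the two principles listed are illustrative, not Anthropic’s actual constitution or code:

```python
# Minimal sketch of the critique-and-revision loop described in
# "Constitutional AI: Harmlessness from AI Feedback". The generate()
# helper is a hypothetical stand-in for a language-model call, and the
# two principles below are illustrative, not Anthropic's actual list.

CONSTITUTION = [
    "Choose the response that is least harmful or offensive.",
    "Choose the response that best respects human rights and dignity.",
]

def generate(prompt: str) -> str:
    """Hypothetical wrapper around a language-model completion call."""
    raise NotImplementedError("plug in a real model here")

def constitutional_revision(user_prompt: str, rounds: int = 1) -> str:
    """Draft an answer, then self-critique and revise it against each
    constitutional principle, with no human feedback in the loop."""
    answer = generate(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            critique = generate(
                f"Principle: {principle}\n"
                f"Response: {answer}\n"
                "Point out any way the response violates the principle."
            )
            answer = generate(
                f"Response: {answer}\n"
                f"Critique: {critique}\n"
                "Rewrite the response to address the critique."
            )
    # The revised answers become fine-tuning data, which is how the
    # model "learns from its mistakes" with little human input.
    return answer
```

In the paper, a second phase then trains a preference model on AI-generated comparisons (reinforcement learning from AI feedback), but the loop above captures the self-correction idea the article refers to.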

The “Ethics” in the AI Might Be Used for Malicious Purposes

While AI ethics are crucial, the subject is complex and subjective. If the ethics taught to an AI are not consistent with societal standards, they act as a constraint on the model. And when a trainer puts too much weight on shaping the model’s behavior, it can dampen the AI’s capacity to come up with objective replies.

AI proponents continue to argue about whether OpenAI should have intervened to make its model more politically acceptable. Despite the seeming contradiction, immoral data must be used during training if an AI is to be able to tell right from wrong. And if the AI has access to that information, it becomes easier for humans to “jailbreak” the system and circumvent the very safeguards its developers put in place.

According to Jared Kaplan, the technology has come a long way and is far more sophisticated than most people realize. In his talk last week at the Stanford MLSys Seminar, he said: “It is effective in every way. The more you go through it, the more harmless you become.”
