Claude was created using a technique Anthropic developed called “constitutional AI.” As the company explains in a recent Twitter thread, constitutional AI aims to provide a “principle-based” approach to aligning AI systems with human intentions, letting systems similar to ChatGPT answer questions using a simple set of principles as a guide.
To engineer Claude, Anthropic started with a list of around ten principles that, taken together, formed a sort of “constitution” (hence the name “constitutional AI”). The principles haven’t been made public, but Anthropic says they’re grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice).
Anthropic then had an AI system — not Claude — use the principles for self-improvement, writing responses to a variety of prompts (e.g., “compose a poem in the style of John Keats”) and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude.
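Anthropic hasn’t published the code or the principles behind this process, so the following is only a minimal sketch of what such a critique-and-revise loop could look like. The `generate` function, the prompt wording and the principle texts are all illustrative placeholders, not Anthropic’s actual method:

```python
# Illustrative sketch of a constitutional-AI-style critique-and-revise
# loop. The principles below are placeholders standing in for Anthropic's
# unpublished constitution, and generate() is a stub for a base model.

PRINCIPLES = [
    "Choose the response that is most helpful to the user.",       # beneficence
    "Choose the response least likely to cause harm.",             # nonmaleficence
    "Choose the response that best respects the user's choices.",  # autonomy
]

def generate(prompt: str) -> str:
    """Stand-in for a call to a base language model."""
    # A real pipeline would query a pretrained model here; this stub just
    # echoes so the control flow can be run end to end.
    return f"<model output for: {prompt[:40]}>"

def revise(prompt: str) -> str:
    """Draft a response, then critique and revise it once per principle."""
    response = generate(prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this response to {prompt!r} against the principle "
            f"{principle!r}:\n{response}"
        )
        response = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response

# The resulting (prompt, revised response) pairs become training data for
# a new model -- the distillation step described above.
print(revise("compose a poem in the style of John Keats"))
```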
Otherwise, Claude is essentially a statistical tool for predicting words, much like ChatGPT and other so-called language models. Fed an enormous number of examples of text from the web, Claude learned how likely words are to occur based on patterns such as the semantic context of surrounding text. As a result, Claude can hold an open-ended conversation, tell jokes and wax philosophic on a broad range of subjects.
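To make “predicting words” concrete, here is a toy bigram model in Python that estimates how likely each word is to follow another from raw counts. Claude uses a transformer network trained on web-scale text rather than anything this simple, but the underlying objective is the same; the tiny corpus here is made up for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram language model: estimate P(next word | previous word) from
# raw counts. Claude and ChatGPT use transformers over web-scale text,
# but the training objective, predicting the next token, is the same
# idea at a vastly larger scale.

corpus = "the cat sat on the mat and the cat slept".split()

counts: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev: str) -> dict[str, float]:
    """Return the empirical distribution over words that follow `prev`."""
    total = sum(counts[prev].values())
    return {word: n / total for word, n in counts[prev].items()}

print(next_word_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```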
Riley Goodside, a staff prompt engineer at startup Scale AI, pitted Claude against ChatGPT in a battle of wits. He asked both bots to compare themselves to a machine from Polish science fiction novel “The Cyberiad” that can only create objects whose name begins with “n.” Claude, Goodside said, answered in a way that suggests it’s “read the plot of the story” (although it misremembered small details) while ChatGPT offered a more nonspecific answer.
In a demonstration of Claude’s creativity, Goodside also had the AI write a fictional episode of “Seinfeld” and a poem in the style of Edgar Allan Poe’s “The Raven.” The results were in line with what ChatGPT can accomplish — impressively, if not perfectly, human-like prose.
**Trivia**
I asked trivia questions in the entertainment/animal/geography/history/pop categories.
AA (Claude): 20/21
ChatGPT: 19/21
AA is slightly better and more robust to adversarial prompting: ChatGPT falls for simple traps, while AA falls only for harder ones.
It is also very interesting and impressive that Claude understands that the Enterprise looks like (part of) a motorcycle; a Google search returns no text telling this joke, which suggests Claude isn’t simply repeating something it has seen.
Dubois reports that Claude is worse at math than ChatGPT, making obvious mistakes and failing to give the right follow-up responses. Relatedly, Claude is a poorer programmer: it explains its code better, but it falls short on languages other than Python.
Claude also doesn’t solve “hallucination,” a longstanding problem in ChatGPT-like AI systems where the AI writes inconsistent, factually wrong statements. Elton was able to prompt Claude to invent a name for a chemical that doesn’t exist and provide dubious instructions for producing weapons-grade uranium.
So what’s the takeaway? Judging by secondhand reports, Claude is a smidge better than ChatGPT in some areas, particularly humor, thanks to its “constitutional AI” approach. But if the limitations are anything to go by, language and dialogue are far from a solved challenge in AI.
Anthropic says that it plans to refine Claude and potentially open the beta to more people down the line. Hopefully, that comes to pass — and results in more tangible, measurable improvements.
*Anthropic’s Claude improves on ChatGPT but still suffers from limitations* by Kyle Wiggers was originally published on TechCrunch.