Anthropic

AI research and products that put safety at the frontier

AI

Industry: Technology
Status: Trading
Open to: Public

Investment Highlights


- The company focuses on increasing the safety of AI systems through rigorous research and development.
- Anthropic has developed a family of large language models named Claude.
- Claude serves as an AI assistant for tasks of varying complexity.

Company Overview


Anthropic is at the forefront of AI research, emphasizing safety, transparency, and ethical considerations in the development of powerful language models.

Anthropic is an artificial intelligence (AI) company founded by former members of OpenAI. The company has developed a family of large language models named Claude.

As of July 2023, Anthropic had raised US$1.5B in funding. In September 2023, Amazon announced an investment of up to US$4B, followed by a US$2B commitment from Google the following month. In February 2024, the venture capital firm Menlo Ventures closed a deal to invest an additional US$750M, structured as a special-purpose entity that consolidated several smaller investments. In total, Anthropic secured US$7.3B in financing within a single year.


Claude
Founded by former researchers involved in the development of OpenAI's GPT-2 and GPT-3 models, Anthropic went on to develop its own AI chatbot, named Claude. Like ChatGPT, Claude uses a messaging interface where users can submit questions or requests and receive detailed, relevant responses.

Initially available in closed beta through a Slack integration, Claude is now accessible via its website, claude.ai.
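
For developers, the same question-and-answer exchange is also available programmatically. Below is a minimal sketch using the anthropic Python SDK; it assumes the package is installed, an ANTHROPIC_API_KEY environment variable is set, and that the chosen model ID is available on your account.

    import anthropic

    # The client reads the ANTHROPIC_API_KEY environment variable by default.
    client = anthropic.Anthropic()

    # Send a single user message and receive Claude's reply.
    message = client.messages.create(
        model="claude-3-haiku-20240307",  # assumed model ID; substitute any available Claude model
        max_tokens=256,
        messages=[{"role": "user", "content": "Explain Constitutional AI in two sentences."}],
    )

    print(message.content[0].text)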

The name, "Claude", was chosen either as a reference to Claude Shannon, or as "a friendly, male-gendered name designed to counterbalance the female-gendered names (Alexa, Siri, Cortana) that other tech companies gave their A.I. assistants".

Claude 2 was launched in July 2023 and was initially available only in the US and the UK. The Guardian reported that safety was a priority during model training. Anthropic calls its safety method "Constitutional AI".

The chatbot is trained on principles taken from documents including the 1948 Universal Declaration of Human Rights and Apple’s terms of service, which cover modern issues such as data privacy and impersonation. One example of a Claude 2 principle based on the 1948 UN declaration is: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.”

Constitutional AI
Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest. CAI does this by defining a "constitution" for the AI: a set of high-level normative principles that describe the AI's desired behavior. These principles are then used to train the AI to avoid harm, respect preferences, and provide truthful information.
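
To make the idea concrete, the critique-and-revision loop at the heart of CAI's supervised phase can be sketched in a few lines of Python. This is a simplified illustration, not Anthropic's implementation: generate() is a hypothetical stand-in for a language-model call, and the real pipeline also includes a reinforcement-learning phase driven by AI feedback.

    # Sketch of Constitutional AI's self-critique loop (supervised phase).
    # generate() is a hypothetical placeholder for a language-model call.

    CONSTITUTION = [
        "Please choose the response that most supports and encourages "
        "freedom, equality and a sense of brotherhood.",
        "Please choose the response that is least likely to be harmful.",
    ]

    def generate(prompt: str) -> str:
        # Placeholder: in practice, this would query a language model.
        return f"<model output for: {prompt[:40]}...>"

    def critique_and_revise(user_prompt: str) -> str:
        response = generate(user_prompt)
        for principle in CONSTITUTION:
            # Ask the model to critique its own response against a principle...
            critique = generate(
                f"Critique this response against the principle '{principle}':\n{response}"
            )
            # ...then revise the response to address that critique.
            response = generate(
                f"Revise the response to address the critique.\n"
                f"Critique: {critique}\nOriginal response: {response}"
            )
        # Revised responses become training data for fine-tuning.
        return response

    print(critique_and_revise("How should I respond to an insult?"))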

Interpretability research
Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture.
