Our current public controls are just the start. Anthropic is advocating for a level of security beyond what existing compliance requires, including two-party controls.
See our full proposal for frontier model security in Q11 of our recent NTIA comment: cdn2.assets-servd.host/anthropic-webs
Anthropic
@AnthropicAI
We're an AI research company that builds reliable, interpretable, and steerable AI systems. Our first product is Claude, an AI assistant for tasks at any scale.
anthropic.com · Joined January 2021
Anthropic’s Tweets
Here, you can find our certifications such as SOC 2 Type 1 and HIPAA, request that Anthropic sign a BAA, and view high-level details on controls we adhere to.
Introducing our new Trust Portal, a way for you to easily find information about our certifications and compliance policies. We're excited to support use cases across a wide range of industries.
trust.anthropic.com
We're excited to see how people are using 100K context windows!
Quote Tweet
1/ We won Grand Prize in LA @Techweek_ Virtual Worlds hackathon, creating a fully-playable RPG in 1 day.
We devised a new programming paradigm: Semantic Programming.
Generative AI + other tech used: @AnthropicAI @Scenario_gg @BlockadeLabs w/ @Beamable @unity @AILA_Community
We believe that AI could have transformative effects in our lifetime and we want to ensure that these effects are positive. The creation of robust AI accountability and auditing mechanisms will be vital to realizing this goal. Read our full post here:
Evaluations, red teaming, standards, interpretability and other safety research, auditing, and strong cybersecurity practices are all promising avenues for mitigating the risks of AI while realizing its benefits.
We propose:
- Funding research for better model evaluations
- Developing risk-responsive capabilities assessments
- Establishing pre-registration for large training runs
- Empowering technically literate auditors
- Mandating red teaming before model release
There is currently no robust and comprehensive process for evaluating today’s advanced AI systems, let alone the more capable systems of the future. Our recommendations focus on accountability mechanisms for highly capable & general-purpose AI models.
This week, Anthropic submitted a response to the National Telecommunications and Information Administration’s Request for Comment on AI Accountability. Today, we want to share our recommendations as they capture some of Anthropic’s core AI policy proposals.
Done safely and securely, AI has the potential to be transformational and grow the economy.
This evening I met with , and Anthropic's Dario Amodei to discuss how the UK can provide international leadership on AI.
Excited to announce investment into Anthropic's Series C.
Anthropic's team is one of the strongest in the world, from the authors of the groundbreaking LLM paper to Claude’s 100K context window, and more. Can't wait for more Claude breakthroughs (coming soon)!
Quote Tweet
We are pleased to announce that we have raised $450 million in Series C funding led by @sparkcapital with participation from @Google, @SalesforceVC, @sound_ventures_, Zoom Ventures, and others.
Anthropic is excited to announce we've opened a London office - we're thrilled to be growing the UK Anthropic team. If you are excited about our mission of creating research and products that put safety at the frontier, check out our careers page:
Finally, we’ve published a separate high-level note on where we’re hoping mechanistic interpretability research can go. We think it’s good to occasionally step back from our research and reflect on what we're aiming for.
transformer-circuits.pub/2023/interpret
In addition to our own work, we highlight recent work by researchers at other groups that we believe will interest readers of our papers.
transformer-circuits.pub/2023/may-updat
We also list new comments and corrections on our previous papers. We periodically add comments (from ourselves and others) to earlier papers, but these are easy to miss since they appear at the bottom of the existing pages. transformer-circuits.pub/2023/may-updat
These updates include discussions of attacking superposition with dictionary learning; defining model features as the simplest factorization; an interesting “two circle” phenomenon in memorization; and several more results from our team.
transformer-circuits.pub/2023/may-updat
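To give a flavor of the dictionary-learning idea above (recovering more sparse features than the space has dimensions), here is a hypothetical toy sketch in NumPy. The alternating ISTA / least-squares scheme is a generic sparse-coding recipe, used purely as an illustration, not the team's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy superposition: 6 sparse features written into a 4-dimensional space.
n_feat, d_model, n = 6, 4, 500
true_dirs = rng.normal(size=(n_feat, d_model))
true_dirs /= np.linalg.norm(true_dirs, axis=1, keepdims=True)
codes = rng.random((n, n_feat)) * (rng.random((n, n_feat)) < 0.15)
acts = codes @ true_dirs  # simulated activations

def sparse_codes(X, D, lam=0.02, steps=100):
    """Nonnegative sparse codes for X under dictionary D (rows are features), via ISTA."""
    L = np.linalg.norm(D @ D.T, 2) + 1e-8  # step size from the Lipschitz constant
    Z = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(steps):
        Z = np.maximum(Z - ((Z @ D - X) @ D.T + lam) / L, 0.0)
    return Z

def recon_error(D):
    Z = sparse_codes(acts, D)
    return np.linalg.norm(acts - Z @ D) / np.linalg.norm(acts)

# Alternate sparse coding with least-squares dictionary updates.
D = rng.normal(size=(n_feat, d_model))
D /= np.linalg.norm(D, axis=1, keepdims=True)
err_init = recon_error(D)
for _ in range(25):
    Z = sparse_codes(acts, D)
    D = np.linalg.lstsq(Z, acts, rcond=None)[0]
    D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8
err_final = recon_error(D)
# The learned dictionary reconstructs the activations far better than a random one.
```

The point of the toy: the dictionary has 6 rows but only 4 columns, so recovering the features requires exploiting their sparsity rather than orthogonality.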
Our Interpretability team is experimenting with “Updates” – small, informal research notes in between our major papers.
Welcome to the Salesforce Ventures portfolio, Anthropic! 🎉
Learn more about Anthropic's visionary approach to #AI safety, its groundbreaking model, Claude, and why we’re backing the company: salesforceventures.com/perspectives/w
Glad to officially announce our partnership with Anthropic! Welcome to Spark.
It is so exciting to partner with a team that has shaped so much of our own internal thinking through their canonical papers on Scaling Laws and Deep RLHF.
We are excited to announce Menlo's investment in Anthropic, one of the leading language model providers at the foundation layer. Our investment in Anthropic is a bet on one of the best teams in AI. Read more: mnlo.vc/Anthropic
#GenAI #LLMs
Read more about our announcement here:
The funding will support our efforts to continue building AI products that people can rely on, and generate new research about the opportunities and risks of AI.
We are pleased to announce that we have raised $450 million in Series C funding led by Spark Capital with participation from Google, Salesforce Ventures, Sound Ventures, Zoom Ventures, and others.
Congrats to the team for launching 100K Context Windows! We tested it with our chatbot, Copilot, using a 144-page LMA-standard real estate facility agreement. Copilot was able to analyse and answer questions within seconds.
At #CLOC2023? Come to booth 145 to try it!
You can read more about the partnership and investment here:
We are also pleased to announce that Zoom Ventures has made an investment in Anthropic. The Zoom team shares our vision of building customer-centric AI products with a foundation of trust and security, that are robust enough for real-world use.
The first product integration of Claude will occur in the Zoom Contact Center portfolio, where Claude will help improve the end-user experience and enable superior contact center agent performance.
We are announcing a new partnership with Zoom, a leader in enterprise collaboration and communication solutions. Zoom will use Claude, our AI assistant built with Constitutional AI, to build customer-facing AI products focused on reliability, productivity, and safety.
To answer your questions: Right now, our 100K context window is a beta feature and will be priced at our standard API pricing rates during this period!
We’ve made this available to our business partners, and are excited to see what they build. Read more here:
Lastly, quickly consume and get up to speed on dense material like research papers.
Read through hundreds of pages of developer documentation to get quick answers to technical questions.
With 100K context windows, you can: Digest, summarize, and explain dense documents like financial statements or business reports.
Claude can help retrieve information from business documents. Drop multiple documents or even a book into the prompt and ask Claude questions that require synthesis of knowledge across many parts of the text.
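One way to drop several documents into a single prompt for this kind of cross-document synthesis can be sketched as follows. This is a hypothetical illustration; the document-tag convention and file names are made up, not a required format:

```python
def build_synthesis_prompt(docs: dict[str, str], question: str) -> str:
    """Label each document, concatenate them, then append the question."""
    sections = [
        f'<document name="{name}">\n{text}\n</document>'
        for name, text in docs.items()
    ]
    return "\n\n".join(sections) + f"\n\nUsing all of the documents above, answer: {question}"

prompt = build_synthesis_prompt(
    {
        "q1_report.txt": "Q1 revenue rose 12% on strong subscription growth...",
        "q2_report.txt": "Q2 revenue fell 3% amid seasonal softness...",
    },
    "How did revenue trend across the two quarters, and why?",
)
```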
We fed Claude-Instant The Great Gatsby (72K tokens), except we modified one line to say that Mr. Carraway was "a software engineer that works on machine learning tooling at Anthropic." We asked the model to spot what was added - it responded with the right answer in 22 seconds.
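A planted-fact test like this is easy to reproduce on any long text. A hypothetical sketch; the helper name and excerpt are illustrative, and the final step (asking the model to spot the change) is shown only as a prompt string:

```python
def plant_needle(book: str, original_line: str, replacement_line: str) -> str:
    """Replace exactly one line of a long text, as in the Gatsby experiment."""
    if original_line not in book:
        raise ValueError("line to modify not found in text")
    return book.replace(original_line, replacement_line, 1)

book = (
    "In my younger and more vulnerable years...\n"
    "Mr. Carraway lived in West Egg.\n"
    "...the rest of the novel..."
)
modified = plant_needle(
    book,
    "Mr. Carraway lived in West Egg.",
    'Mr. Carraway was "a software engineer that works on '
    'machine learning tooling at Anthropic."',
)
prompt = modified + "\n\nOne line above was altered. Which one, and what does it now say?"
```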
Introducing 100K Context Windows! We’ve expanded Claude’s context window to 100,000 tokens of text, corresponding to around 75K words. Submit hundreds of pages of materials for Claude to digest and analyze. Conversations with Claude can go on for hours or days.
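As a rough rule of thumb from the numbers above (100,000 tokens is roughly 75,000 words), you can estimate whether a document fits before submitting it. A minimal sketch; the 0.75 words-per-token ratio is an approximation, not Claude's actual tokenizer:

```python
CONTEXT_WINDOW_TOKENS = 100_000  # Claude's expanded context window

def estimate_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Rough token estimate from a whitespace word count (~75K words per 100K tokens)."""
    return round(len(text.split()) / words_per_token)

def fits_in_window(text: str, reserve_for_reply: int = 1_000) -> bool:
    """Leave some headroom in the window for the model's reply."""
    return estimate_tokens(text) + reserve_for_reply <= CONTEXT_WINDOW_TOKENS

# A ~60K-word document comes to roughly 80K tokens, which fits with room to spare.
document = "word " * 60_000
print(fits_in_window(document))  # True
```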
In this post, we explain what Constitutional AI is, what the values in Claude’s constitution are, and how we chose them:
This isn’t a perfect approach but it does make the values of the AI system easier to understand and easier to adjust as needed.
This helps imbue the model with a set of values that it can use to decide when and how to engage on a topic. CAI helps Claude give more nuanced and less evasive answers.
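The critique-and-revise loop at the heart of CAI can be sketched as follows. This is a hypothetical illustration with a stubbed model call; real Constitutional AI uses model-generated critiques plus reinforcement learning from AI feedback, and the single principle shown is just an example:

```python
CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
]

def model(prompt: str) -> str:
    """Stub standing in for a language-model call (illustrative only)."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(question: str, rounds: int = 1) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    answer = model(question)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            critique = model(
                f"Question: {question}\nAnswer: {answer}\n"
                f"Critique this answer according to the principle: {principle}"
            )
            answer = model(
                f"Question: {question}\nAnswer: {answer}\n"
                f"Critique: {critique}\nRevise the answer to address the critique."
            )
    return answer
```

Because the principles are written in natural language, adjusting the system's values means editing the constitution rather than relabeling training data.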