$100 000 USD

DECEMBER 2024

GLOBAL

ANTHROPIC

DESCRIPTION OF EVENTS

Anthropic is an AI research and safety company based in San Francisco, focused on developing reliable, beneficial AI systems with safety at the forefront. The company’s interdisciplinary team, with expertise in machine learning (ML), physics, policy, and product design, conducts research and creates AI models that prioritize alignment and harmlessness. One of their flagship products is Claude, a series of advanced AI models, with Claude 3.5 Sonnet being the latest release. Anthropic's work involves building AI systems that can be used in a variety of applications, from custom experiences to enterprise solutions, while also ensuring that the technology adheres to strict safety protocols. Through their AI research and innovations, Anthropic aims to address critical issues related to AI alignment, fostering systems that are both useful and secure.

 

"The hacker shared both VENTI and a token pretending to be official with the ticker CLAUDE. One of the coins linked to the hack pumped immediately, leaving the exploiters with around $100K after a rapid rug pull."

 

"The hacker shared both VENTI and a token pretending to be official with the ticker CLAUDE. One of the coins linked to the hack pumped immediately, leaving the exploiters with around $100K after a rapid rug pull."

 

The exploit lasted about 30 minutes, during which the hacker promoted the Venti bot, an AI agent trained to communicate in the style of social media posts and marketed with irreverent internet jargon. The CLAUDE token associated with the hack pumped immediately, allowing the attackers to profit around $100K before the tweet was deleted.

 

"The official X account of AI startup Anthropic, backed by Amazon, appears to have been compromised, posting an unknown token contract address related to AI Agents."

 

"We have identified the root cause which resulted in unauthorized posts on this account today.

 

We have confirmed that no Anthropic systems or services were compromised or involved in this incident. We're working with @X to better understand this situation."

 

While the promotion gave Venti brief attention, it did not translate into significant success or a lasting following for the token, which has since declined in value.

 

Explore This Case Further On Our Wiki

Anthropic, an AI research and safety company based in San Francisco, is focused on developing reliable and beneficial AI systems with safety as a priority. Its team, with expertise across machine learning, physics, policy, and product design, creates AI models such as Claude, which are designed to be aligned, harmless, and secure. In December 2024, Anthropic's official X (formerly Twitter) account was compromised, and the hacker used it to promote a fake token with the ticker CLAUDE and an AI bot called Venti. The hack resulted in a rapid token pump, netting the attackers around $100K before the tweet was deleted. While Venti gained brief attention, the promotion did not lead to lasting success for the token, which has since declined in value. Anthropic later announced that it had identified the root cause of the breach, confirmed that no Anthropic systems or services were compromised or involved, and stated that it is working with X to investigate the situation further.

Sources And Further Reading

 For questions or enquiries, email info@quadrigainitiative.com.

Get Social

  • email
  • reddit
  • telegram
  • twitter

© 2019 - 2025 Quadriga Initiative. Your use of this site/service accepts the Terms of Use and Privacy Policy. This site is not associated with Ernst & Young, Miller Thompson, or the Official Committee of Affected Users. Hosted in Canada by HosterBox.