
Anthropic delays new AI model over risks

Powerful system raises cybersecurity concerns despite major breakthrough

US-based AI firm Anthropic has unveiled its most advanced artificial intelligence model yet, but decided not to release it publicly due to concerns over potential misuse.

The model, called Claude Mythos, marks a significant leap in AI capability. It is designed to detect software vulnerabilities with exceptional accuracy, outperforming human experts in several tests. In one notable case, the system identified a decades-old flaw that human reviewers had repeatedly missed, showcasing its powerful analytical abilities.

However, these same strengths have raised serious concerns. Experts believe the technology could be misused to find and exploit weaknesses in digital systems, potentially enabling sophisticated cyberattacks. This has prompted the company to take a cautious approach.

Instead of a full public rollout, Anthropic is limiting access to a small group of trusted partners under a controlled programme. The aim is to study how the model behaves in real-world conditions while reducing the risk of misuse.

The company is also in discussions with the United States government to better understand the broader implications of such powerful AI systems. CEO Dario Amodei has stressed the importance of building safeguards as AI becomes more capable and widely used.

The development highlights a growing challenge in the tech industry: how to balance rapid innovation with safety. While advanced AI can strengthen cybersecurity by identifying threats early, it can also create new risks if it is not properly controlled.

