Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test

Anthropic’s CEO, Dario Amodei, says his company discovered a serious weakness in DeepSeek’s R1 model that may lead consumers to reconsider using the chatbot. He claimed the chatbot can generate bioweapon-related information, which could become dangerous in the near future.

In an interview, Amodei said DeepSeek generated rare bioweapons-related information during an Anthropic safety test. Speaking about the AI startup’s performance, he called it “the worst of basically any model we’d ever tested […] It had absolutely no blocks whatsoever against generating this information.”

Anthropic regularly tests different AI models to evaluate their potential risks to national security. It does this by checking whether models can produce information about bioweapons that isn’t readily available on Google or in textbooks.

Attack success rates on LLMs. Source: Cisco

In addition, Cisco researchers jailbroke DeepSeek R1 with a 100% attack success rate, meaning none of their harmful prompts were blocked. That contrasts with other advanced models, such as o1, which blocked most attacks with their built-in safeguards.

However, Amodei said that the weakness is not a threat at the moment. He also said that even without clear proof of danger yet, there is good reason to keep improving the model’s safeguards, and he urged DeepSeek’s engineers and the company to pay real attention to AI safety. DeepSeek is already facing hostility in some countries, such as Italy.

On the other hand, companies such as AWS and Microsoft have announced that they are adding R1 to their cloud services. Some may view the findings as growing pains for a young company, while others may take them more seriously. So far, there has been no response to, or backlash against, the new findings.

US AI takes the lead – Dario Amodei

According to the Anthropic CEO, “Now there are three to five companies in the US and one company in China. Whether they continue to make near-frontier models depends upon how many chips they can get access to and whether they can get access to chips at a much larger scale than those they’ve been able to get access to now.”

DeepSeek attributed its cheap technology to Nvidia chips purchased before the US restrictions took effect. According to Amodei, “The export controls were never really designed to prevent DeepSeek or any other Chinese company from getting the number of chips that they had at the level of a couple tens of thousands.”

He added, “We should reasonably expect smuggling to happen. Export controls can be more successful at preventing larger acquisitions – they can’t have like a million chips because that’s easily in the tens of billions in economic activity, approaching 100 billion.”
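For rough context (the per-unit prices here are an assumption, not figures from the interview): H100-class data-center GPUs have been reported to sell for roughly $25,000 to $40,000 each, so a million chips would represent on the order of $25 billion to $40 billion in hardware alone, and adding the surrounding servers and data-center build-out pushes the total toward the $100 billion scale Amodei describes.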

DeepSeek claimed to have used a mix of H100s, the standard data-center chips in the US, roughly 10,000 of them. If that is accurate, the company should not be too affected by the controls.

On the question of China becoming self-sufficient in chips, especially Huawei’s chips, he said it would take 10 to 15 years. He added, “It’s actually going to be difficult to make chips that are competitive with say the new Nvidia B100 […] My sense is that it is unlikely that the Huawei chips become anywhere near comparable to US chips anytime soon.”

Amodei also raised concerns about AI regulation in the United States, explaining that efforts are being made to assess and potentially limit the risks of AI systems.

These include concerns about misuse by individuals, for instance in biological attacks, as well as risks posed by the systems themselves. Several new AI laws have been passed in the US over the past year, which raises the question of whether DeepSeek’s US market will survive.

In addition, Amodei praised US AI companies for working well together, noting that they all follow the same rules: OpenAI, Meta, Google, and xAI all have to comply with US regulations. He said, “You can know that you’re engaging in necessary safety practices and others will engage in them as well.”

On the contrary, he said, “I don’t think that’s possible between the US and China. We’re kind of in a Hobbesian international anarchy.”

Amodei also revealed, “I’m aware of efforts by the US Government to send a delegation to talk to China about topics related to AI safety. My understanding, again, I obviously wasn’t part of those delegations, is that there wasn’t that much interest from the Chinese side.”

US lawmakers want to ban DeepSeek on government devices

DeepSeek is about to be hit by another ban for security reasons, this time from the US government. Similar to the TikTok restrictions, a bipartisan group of US lawmakers is proposing a law to stop the Chinese AI app DeepSeek from being used on government devices.

Representatives Josh Gottheimer, a Democrat from New Jersey, and Darin LaHood, a Republican from Illinois, introduced a bill called the “No DeepSeek on Government Devices Act.” 

This bill would make it illegal for federal workers to use the Chinese AI app on government-owned devices. They pointed out that the Chinese government could use the app for spying and spreading false information.

Gottheimer said, “The Chinese Communist Party has made it abundantly clear that it will exploit any tool at its disposal to undermine our national security, spew harmful disinformation, and collect data on Americans […] We simply can’t risk the CCP infiltrating the devices of our government officials and jeopardizing our national security.”
