US-based artificial intelligence company Anthropic has said it cut off access to its AI model, Claude, for firms linked to the Chinese Communist Party (CCP), forgoing several hundred million dollars in revenue as part of efforts to safeguard America's technological lead.
In a statement issued on Thursday, Anthropic CEO Dario Amodei said the company chose to block the use of Claude by entities connected to the CCP, including some designated by the US Department of War as Chinese Military Companies, and that it shut down CCP-sponsored cyberattacks that attempted to misuse the AI system. Amodei further stated that the firm has advocated for strong export controls on advanced chips to help maintain a democratic advantage in artificial intelligence.

"Anthropic has also acted to defend America's lead in AI, even when it is against the company's short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage," the statement from Anthropic read.

The company said it has worked proactively with the US Department of War and the intelligence community, deploying its models within classified government networks and at National Laboratories. According to the statement, Claude is being used across national security agencies for applications such as intelligence analysis, modelling and simulation, operational planning, and cyber operations.

However, Anthropic reiterated that it would not support certain uses of AI, including mass domestic surveillance and fully autonomous weapons, citing concerns about democratic values and the current reliability of frontier AI systems. Amodei further stated that despite pressure from the Department of War to agree to "any lawful use" of its technology and remove specific safeguards, the company would not change its position.
"The Department of War has stated they will only contract with AI companies who accede to "any lawful use" and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk"--a label reserved for US adversaries, never before applied to an American company--and to invoke the Defense Production Act to force the safeguards' removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," the statement read. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," it added. Anthropic further noted that it remains ready to continue supporting US national security efforts while maintaining what it described as necessary guardrails on the deployment of its AI systems. (ANI)