
Soumitra Dutta, former Dean of Oxford's Saïd Business School


The AI legal battle Soumitra Dutta is watching closely


Soumitra Dutta, former dean of the Oxford Saïd Business School, recently tweeted that the contract dispute between the AI company Anthropic and the US military is a landmark case. It's a legal battle that most people have never heard of, but probably should.


Essentially, Anthropic, the creator of the AI model Claude, had a contract with the Pentagon, the US Department of Defense, and it didn't want its technology to be used to spy on Americans. Nor did it want its AI making life-or-death decisions in warfare without a human being in the loop. So the Pentagon said no deal: the government needs access to AI for any lawful purpose, full stop. No private company gets to attach ethical strings to a military contract.


The Trump administration didn't just walk away. It designated Anthropic as a "supply chain risk" to national security, effectively barring it from federal contracts. Take a moment to think about that. That label is usually reserved for foreign companies suspected of spying, as was the case with Huawei. To apply it to an American startup because it disagreed with the Pentagon about ethics was, by any measure, extraordinary. It was unprecedented, and not too long after, OpenAI's Sam Altman struck a deal with the Pentagon. The deal was on the Pentagon's terms, with no such conditions attached.


Anthropic filed lawsuits challenging the "supply chain risk" label as unconstitutional retaliation. Dutta, an AI researcher and AI strategy consultant, has been following this case closely and asked: "Does a private company have the right to limit how its technology is used once it enters a government contract? Should the government be able to designate a domestic firm as a 'supply chain risk' based on ethical disagreements?"


Here's the thing. When you sell a hammer, you don't get to control what someone builds with it. But AI isn't a hammer. It can conduct surveillance on entire populations. In theory, it could select a target in a war zone and carry out that decision faster than any human could intervene. Anthropic argues that when the technology is this powerful and this imperfect, it needs guardrails. The Pentagon stated that it could not fight wars with a vendor's ethics policy as a constraint.


Both sides have a point, which is what makes this such a hard issue.


As for OpenAI, it has accepted the "any lawful use" terms and asserted that existing laws are sufficient safeguards against abuse. This is unconvincing. The mass surveillance programs that Edward Snowden revealed were "lawful" until the moment the courts ruled they weren't.


"Whether you side with the Pentagon's need for 'any lawful use' or Anthropic's insistence on human-in-the-loop safeguards against autonomous weapons and mass surveillance, the outcome of this legal battle will set the precedent for the use of AI in society moving forward," wrote Dutta, who is also an author and entrepreneur.


© 2026 Soumitra Dutta
