
Microsoft Bans DeepSeek App for Employees Over Data and Propaganda Concerns

Image: Microsoft President Brad Smith speaking into a microphone, with the Microsoft and DeepSeek logos shown side by side. (Image Source: ChatGPT-4o)

Microsoft employees are prohibited from using the DeepSeek chatbot app due to concerns over data security and potential influence from Chinese state propaganda, company President and Vice Chairman Brad Smith told U.S. lawmakers on Thursday.

“At Microsoft we don’t allow our employees to use the DeepSeek app,” Smith said during a Senate hearing, referencing the company’s internal ban on both the desktop and mobile versions of the application.

According to Smith, the app has also been excluded from Microsoft’s own app store for the same reasons.

Microsoft Cites Chinese Data Laws and Content Censorship

Smith said the restrictions are based on two primary concerns: the risk that user data could be stored in China, and the possibility that DeepSeek’s responses might be shaped by “Chinese propaganda.”

DeepSeek’s privacy policy states that user data is stored on servers located in China, making it subject to the country’s cybersecurity laws. These laws allow government authorities broad access to data stored within the country. The company also censors politically sensitive topics in line with official government policies—another factor in Microsoft’s decision.

Although several organizations and governments have taken steps to limit DeepSeek's use, this marks the first time Microsoft has publicly disclosed an internal ban on the application.

DeepSeek’s App vs. Model: Different Use Cases, Different Risks

Despite Smith’s sharp criticism of the app, Microsoft did make DeepSeek’s R1 model available earlier this year through its Azure AI model catalog. The distinction, according to the company, lies in how the model is used.

Because DeepSeek's R1 model is open source (its weights are publicly available), users can download the model and run it on their own infrastructure, without sending any data back to Chinese servers. This allows Microsoft and its customers to experiment with the underlying AI while avoiding the data-routing risks associated with the official app.
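To make that distinction concrete, here is a minimal sketch of what self-hosting the model might look like, using Hugging Face's transformers library and one of DeepSeek's distilled R1 checkpoints. The model ID and generation settings are illustrative assumptions, not anything Microsoft or DeepSeek specified; the point is simply that prompts and outputs never leave the local machine.

```python
# Hedged sketch: running a distilled DeepSeek-R1 checkpoint locally with
# Hugging Face transformers. The model ID is an illustrative assumption;
# verify it on the Hugging Face Hub before use. Inference happens entirely
# on your own hardware, so no data is routed to external servers.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Build a chat-formatted prompt with the tokenizer's built-in template.
messages = [{"role": "user", "content": "Explain the trade-offs of self-hosting an LLM."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate locally; nothing is sent over the network.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```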

Still, Smith acknowledged that even in those settings, risks remain. These include the possibility that the model could produce biased or manipulated content, or generate insecure code. During the Senate hearing, he said Microsoft had made internal modifications to DeepSeek’s model to mitigate what he described as “harmful side effects.”

Microsoft did not provide specifics on what changes were made, referring follow-up questions back to Smith's testimony. However, when DeepSeek's R1 was first offered on Azure, Microsoft stated the model had undergone “rigorous red teaming and safety evaluations” before deployment.
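For reference, consuming a model deployed from the Azure AI model catalog typically looks something like the sketch below, using Microsoft's azure-ai-inference Python SDK. The endpoint URL and API key are placeholders standing in for your own deployment's values; none of these details come from Smith's testimony.

```python
# Hedged sketch: querying a model deployed from the Azure AI model catalog
# via the azure-ai-inference SDK. The endpoint and key are placeholders
# (assumptions) for your own deployment's values.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-endpoint>",              # placeholder: your deployment's URL
    credential=AzureKeyCredential("<your-api-key>"),  # placeholder: your key
)

# Send a chat request to the hosted model and print the reply.
response = client.complete(
    messages=[UserMessage(content="Summarize today's AI policy news.")],
    max_tokens=256,
)
print(response.choices[0].message.content)
```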

DeepSeek's Competitive Position Complicates the Picture

Microsoft's stance on DeepSeek also has a competitive backdrop. The app is a direct rival to Microsoft's own AI-powered Copilot chat products, which blend internet search and generative AI features.

However, Microsoft hasn’t banned all chatbot competitors from its platforms. Perplexity, another AI search assistant, remains available in the Windows app store. Google apps—including Chrome and Gemini—did not appear in a search of the store, though it's unclear whether that reflects a policy decision or other technical or business factors.

What This Means

Microsoft's decision to block employee access to DeepSeek adds another example to the growing list of U.S. tech companies scrutinizing, or distancing themselves from, Chinese AI products. While Smith emphasized national security concerns, the move also highlights how open-source AI models create both opportunity and risk: even when separated from their original apps, the models may still carry problematic behaviors.

The public disclosure of this ban signals that Microsoft views some AI tools not just as technical resources, but as geopolitical liabilities.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.