Accidental screen share reveals confidential roadmap as Microsoft faces live protests
An unforced error at Microsoft’s flagship developer event this week pulled back the curtain on one of the company’s most valuable AI partnerships—and set off alarm bells about corporate confidentiality in public tech showcases.
During a Tuesday night session at Microsoft Build 2025, Neta Haiby, the company’s AI security chief, briefly shared her screen with a packed audience. In that blink-and-you-miss-it moment, a confidential Teams message flashed up, revealing internal details of Walmart’s artificial intelligence plans, including rollout strategies and a potentially risky internal tool dubbed “MyAssistant.”
The session, ironically focused on best practices for AI security, quickly turned chaotic after protesters stormed the venue. But the damage was already done.
Walmart’s “Ready to Rock” AI Ambitions Laid Bare
The leaked message came from Microsoft principal cloud solution architect Leigh Samons and painted a vivid picture of Walmart’s AI integration strategy. Samons stated that Walmart was “ready to ROCK AND ROLL” with Microsoft’s Entra Web and AI Gateway systems—a statement that drew chuckles from the room but raised eyebrows once context set in.
Walmart’s internal tool, “MyAssistant,” was flagged as “overly powerful” in the chat. Developed in-house last summer, the tool relies heavily on proprietary Walmart data and the Azure OpenAI Service. It’s designed to help store workers synthesize information, generate marketing material, and streamline tasks that would otherwise take hours.
One sentence stood out like a flare in the fog: “MyAssistant is one they build that is overly powerful and needs guardrails.” That kind of candid technical judgment isn’t usually meant for public consumption—especially when it involves two corporate giants working closely behind closed doors.
Whoops on the Big Stage
The leak wasn’t the result of a hack or whistleblower. It was just bad timing—and a lapse in screen-sharing hygiene.
Haiby was presenting a developer session on AI security practices when activists from the group No Azure for Apartheid interrupted, protesting Microsoft’s cloud contracts with Israeli institutions. Amid the disruption, she inadvertently switched her shared screen to an internal Teams chat containing confidential Walmart data.
Here’s what viewers caught in a flash:
- Walmart’s readiness to adopt Microsoft’s AI stack
- Specific comments from a “distinguished” Walmart AI engineer praising Microsoft’s lead over Google in AI security
- Concerns over internal tool safety
It was a mix of marketing optimism and behind-the-scenes risk assessment—served up live to anyone watching, whether in the room or online.
Microsoft and Walmart Tight-Lipped as Fallout Begins
Neither Microsoft nor Walmart has publicly commented on the incident, but sources say internal discussions are already underway at both firms.
The Verge broke the story first. CNBC later confirmed the contents of the leak through screenshots captured during the session. Microsoft hasn’t denied the authenticity of the message, nor has it offered clarification about how it will handle the security lapse.
The slip-up adds an uncomfortable twist for Walmart, which has tried to carefully control the narrative around its AI development.
That strategy included a January press release touting MyAssistant’s ability to “revolutionize frontline productivity” while avoiding specifics about safeguards or potential overreach.
This week’s accidental reveal blew that narrative wide open.
Not Just a Leak—A Wake-Up Call on Corporate AI Overconfidence
The blunder has opened up broader questions about how corporations are approaching AI safety—especially for tools that have internal reach but public consequences.
For Walmart, an AI assistant that’s “overly powerful” isn’t just a tech curiosity. It raises concerns about internal decision-making, employee autonomy, and the risk of relying too heavily on AI-generated output in real-world settings.
It also lays bare how deeply Microsoft’s AI stack, especially Azure OpenAI, is becoming embedded in enterprise-critical workflows.
The message quoted a senior Walmart engineer as saying: “Microsoft is WAY ahead of Google with AI Security. We are excited to go down this path with you.”
For Microsoft, that’s a marketing win. For Walmart, it’s a double-edged sword: aligning so publicly with Microsoft’s ecosystem gives it technical muscle, but also leaves the company exposed if things go sideways.
And then there’s the timing.
Security Session Overshadowed by Live Protest and Tech Irony
The chaos of the protest was notable in its own right. The activist group No Azure for Apartheid disrupted several Build 2025 events, including Haiby’s.
The irony? The session they interrupted was all about “building AI securely.”
One moment, developers were discussing security frameworks. The next, a screen showed one of Microsoft’s largest clients calling for more guardrails on a tool that had already gone live.
The session was meant to highlight Microsoft’s AI trust layer. Instead, it became a case study in what not to do—starting with public screen shares.
Internal AI Tools, External Repercussions
While the leak may blow over in time, it adds to a growing stack of uncomfortable questions about how enterprise AI is being managed, tested, and rolled out.
Walmart is hardly alone. Across industries, internal tools are being spun up at warp speed, built on LLMs and proprietary data, and often with too little oversight.
One message in the chat was especially striking: “We need to lock this down before GA [general availability].”
That might have been the plan. But now the entire industry knows MyAssistant exists, and that its capabilities may be running ahead of its safeguards.