AI vendors aren't just providing a tool. They're processing your corporate data, customer data, strategic communications, and intellectual property through models that learn from inputs. When the U.S. government designates one AI vendor as a "supply chain risk" while awarding contracts to competitors, it demonstrates that AI vendor risk is now political, reputational, and geopolitical. Regardless of which company you use or which side you agree with, your board's fiduciary duty now includes AI supply chain governance. Diversify your AI vendors the way you diversify any critical supply chain.
Sycophancy in AI is when a model tells the user what they want to hear rather than what's accurate. Federal agencies flagged xAI's Grok as "overly compliant." In a corporate context, an AI that validates the CEO's assumptions instead of challenging them is a decision-making liability disguised as efficiency. Test your AI tools adversarially.
Your most significant AI governance exposure may not be the chatbot. It may be the productivity suite your entire company runs on. When Copilot processes a board memo, when Gemini analyzes a Workspace document, when Teams meetings are transcribed by AI, these are governance events most boards haven't explicitly authorized.
Adopt the EU AI Act as your internal baseline. This gives you compliance readiness across the most regulated markets and insulates you from U.S. policy volatility.
EU AI Act: What You Need to Know
If you're adopting the EU AI Act as your internal baseline, here's what it actually covers, why it's recommended, and where to be cautious.
The world's first comprehensive AI regulation, enacted by the European Union. It classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes requirements proportional to risk. Full enforcement begins August 2026.
Social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions for law enforcement), AI that manipulates people's behavior to cause harm, and AI that exploits vulnerabilities of specific groups (age, disability).
AI used in hiring and employment, credit scoring, education admissions, law enforcement, immigration, critical infrastructure, and medical devices. These systems require risk assessments, human oversight, transparency documentation, and bias testing before deployment.
Yes, if you serve EU customers, process EU residents' data, or deploy AI systems whose outputs are used in the EU. This is the same extraterritorial reach as GDPR. If you've any European business, it applies to you.
Three reasons. First, U.S. AI regulation is fragmented across 50 states with no federal standard, and the current administration is actively challenging state laws. The EU Act gives you one coherent framework. Second, it signals governance maturity to investors, customers, and partners. Third, if U.S. federal regulation does eventually pass, it'll likely borrow concepts from the EU framework. You'll be ahead.
AI-generated content must be labeled. People interacting with chatbots must be told they're talking to AI. Deepfakes must be disclosed. High-risk systems must provide documentation on training data, performance metrics, and limitations.
Up to 35 million euros or 7% of global annual turnover for deploying banned AI practices. Up to 15 million euros or 3% for violating high-risk requirements. These aren't theoretical. The EU has shown with GDPR that it enforces fines.
The EU Act doesn't cover everything. It's weaker on general-purpose AI (still being defined), it doesn't address AI-powered pricing manipulation as directly as some U.S. state laws, and enforcement mechanisms are still being established. It also doesn't replace U.S.-specific requirements like NYC's bias audit law or Illinois's video interview act. Use it as a floor, not a ceiling. Layer U.S. jurisdiction-specific requirements on top.
Inventory every AI system in your organization. Classify each by risk level using the EU framework. For high-risk systems, begin documentation: training data sources, bias testing results, human oversight procedures. For everything else, focus on transparency and disclosure. This isn't a one-time project. It's an ongoing governance function.
- Treat AI as critical supply chain. Map every AI vendor, what data they process, where it's stored, and what happens if that vendor is compromised, sanctioned, or politically targeted. Build contingency plans the same way you would for any single-source supplier. High
- Establish AI data sovereignty as a board-level priority. Your corporate data, customer data, and IP flow through AI systems daily. The board needs to know where that data goes, who can access it, and whether it's being used to train models that serve your competitors. This isn't an IT decision. It's a fiduciary responsibility. High
- Build private AI capability for strategic operations. Any conversation, document, or analysis run through a cloud-based AI could theoretically be accessed by the vendor, by subpoena, or by breach. For sensitive strategy, M&A, and legal matters, consider local/private AI deployment. See the AI Practitioners section. High
- Integrate AI political risk into enterprise risk management. Governments are designating AI companies as security risks, fighting over defense contracts, and challenging state regulations. If your AI vendor becomes the subject of a political dispute, your operations are affected. This is a new category of risk that most ERM frameworks don't yet address. High
- Audit infrastructure layer (Microsoft, Google). Your biggest AI governance exposure may not be the chatbot. It may be the productivity suite your company runs on. When Copilot processes a board memo, when Gemini analyzes a Workspace document, when Teams meetings are transcribed, these are AI governance events most boards haven't explicitly authorized. Microsoft says Copilot is safe, but AI can now surface data that sat quietly unnoticed if permissions aren't set correctly. Audit your permissions. High
- Test AI tools for sycophancy. An AI that tells the CEO what they want to hear instead of what's accurate is a decision-making liability disguised as efficiency. Test your AI tools adversarially. Ask them to challenge assumptions. If they agree with everything, they're useless for strategic decisions. Sycophancy is solvable through deliberate configuration and structured prompting. Medium
- Adopt EU AI Act as internal baseline. Even if your company operates only in the U.S., the EU AI Act represents the most comprehensive regulatory framework for AI governance globally. Adopting it as your internal baseline gives you compliance readiness, insulates you from U.S. policy volatility, and signals to customers, investors, and regulators that you take governance seriously. Strategic
Next Steps
Board-level governance starts with information.
