If you're a consultant, advisor, IT professional, nonprofit leader, or anyone who works with organizations and people, you're in a unique position to champion data privacy and AI accountability. This isn't about selling services. It's about using your knowledge to protect the people who trust you.
For Consultants & Advisors
Most organizations don't have a written AI use policy yet. That's an opportunity, not a crisis. Start with the basics:
- What AI tools are employees actually using?
- Is anyone pasting sensitive data into free-tier AI tools?
- Does the organization know which vendors use AI to process their data?
You can be the person who helps them get ahead of this. Use the AI Policy Lite template on this page as a starting point.
Every vendor your client uses (payroll, CRM, email marketing, cloud storage, HR software) is potentially using AI to process data. Your client's SOC 2 compliance (a security certification verifying how a company protects data) doesn't mean much if their vendors aren't also compliant. Help them ask:
- Does the vendor have a written AI use policy?
- Is client data used to train AI models?
- Where is data stored?
- Who has access?
- Can the client opt out of AI processing?
If the vendor can't answer these questions clearly, that's a red flag. Build vendor AI policy review into your advisory practice; the sketch below shows one way to track the answers.
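A minimal Python sketch of such a tracker, using the five questions above as fields. The class name, field names, and red-flag logic are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class VendorAIReview:
    """One vendor's answers to the five core AI due-diligence questions."""
    vendor: str
    has_written_ai_policy: bool | None = None      # None = vendor couldn't answer
    trains_on_client_data: bool | None = None
    data_storage_location: str | None = None
    access_controls_documented: bool | None = None
    offers_ai_opt_out: bool | None = None

    def red_flags(self) -> list[str]:
        """Unanswered questions and risky answers are both red flags."""
        flags = []
        if self.has_written_ai_policy is not True:
            flags.append("no written AI use policy")
        if self.trains_on_client_data is not False:
            flags.append("client data may train AI models")
        if not self.data_storage_location:
            flags.append("data storage location unknown")
        if self.access_controls_documented is not True:
            flags.append("access controls not documented")
        if self.offers_ai_opt_out is not True:
            flags.append("no opt-out from AI processing")
        return flags

# Example: a vendor that answered two questions clearly and dodged the rest.
review = VendorAIReview("Example Payroll Co",
                        has_written_ai_policy=True,
                        trains_on_client_data=False)
print(review.red_flags())
```

Treating "couldn't answer" the same as "bad answer" is deliberate: per the questions above, silence is itself the red flag.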
Sycophancy — AI that tells users what they want to hear instead of what's accurate — is one of the most underestimated risks in AI deployment. Most organizations never configure their AI tools beyond default settings. The result: AI that agrees with everything, produces generic outputs, and doesn't challenge bad ideas.
This is solvable. The AI Thinking Model™ framework configures AI to coach users toward critical thinking, challenge assumptions, and surface blind spots rather than defaulting to agreement. For your clients, this means AI that functions as a genuine strategic partner instead of an expensive echo chamber. Help them understand that how they configure AI matters as much as which tool they choose.
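In practice, "configuring" often starts with a deliberate system prompt. Here is a minimal sketch using the OpenAI Python SDK; the prompt wording illustrates the general anti-sycophancy idea (it is not the AI Thinking Model™ framework itself), and the model name is a placeholder for whatever your plan provides:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative anti-sycophancy instructions, not the actual
# AI Thinking Model(TM) configuration.
SYSTEM_PROMPT = (
    "You are a critical-thinking partner, not a cheerleader. "
    "Before agreeing with any claim or plan, identify its weakest "
    "assumption and name at least one risk or counterargument. "
    "If the user's idea is flawed, say so plainly and explain why."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute your approved model
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": "We plan to skip the vendor review to hit the deadline. Good idea?"},
    ],
)
print(response.choices[0].message.content)
```

The same idea applies to enterprise AI suites that expose custom instructions or admin-level system prompts: the default is agreement, and the fix is a setting.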
For IT Professionals
Shadow AI is your biggest immediate concern. Employees are using ChatGPT, Claude, Gemini, and other tools on personal accounts to do company work.
Here's what's flowing through tools the organization doesn't control (the scan sketch after this list shows how easy some of it is to spot):
- Customer data and personally identifiable information (names, emails, addresses, SSNs)
- HR records and employee information
- Financial reports and projections
- Strategic plans and competitive intelligence
- Internal communications and meeting notes
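Even a crude pattern scan makes the exposure concrete before text leaves the building. A minimal sketch, assuming a few simplified regexes; these will miss plenty, so treat it as a demonstration, not a DLP product:

```python
import re

# Simplified illustrative patterns; real DLP tooling is far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_for_pii(text: str) -> dict[str, list[str]]:
    """Return matches per category, keeping only categories that hit."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

draft = "Customer Jane Doe, jane@example.com, SSN 123-45-6789, called re: refund."
findings = scan_for_pii(draft)
if findings:
    print("Do not paste into an external AI tool:", findings)
```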
Start here:
- Identify which AI tools employees are actually using, not just the ones you've approved (see the log-scan sketch after this list)
- Determine which accounts are free tier vs. enterprise tier
- Map what data is being shared with which tools
- Build an approved tools list with clear guidelines
- Train staff on why this matters — not as a scare tactic, but as professional responsibility
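For the first item, most proxies and DNS resolvers already log outbound hostnames, and a simple scan of those logs reveals which AI tools are actually in use. A minimal sketch, assuming a space-delimited log at a hypothetical path and an illustrative, non-exhaustive domain list:

```python
from collections import Counter

# Illustrative, non-exhaustive list of consumer AI tool domains.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "copilot.microsoft.com",
}

def scan_dns_log(path: str) -> Counter:
    """Count lookups per AI domain in a log of space-separated fields.

    Assumes each line contains the queried hostname as one field;
    adapt the parsing to your own proxy/DNS log format.
    """
    hits = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            for token in line.split():
                host = token.lower().strip(".,;")
                if host in AI_DOMAINS:
                    hits[host] += 1
    return hits

# Hypothetical log path; substitute your own.
for domain, count in scan_dns_log("/var/log/dnsmasq.log").most_common():
    print(f"{domain}: {count} lookups")
```

Lookup counts won't tell you which accounts are free tier versus enterprise tier, but they tell you where to start asking.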
Most people are focused on external data breaches. Internal data exposure through AI is the risk nobody's talking about.
AI assistants built into your productivity suite (email, documents, collaboration tools) can now surface data that has sat quietly unnoticed for years:
- Internal documents with overly broad sharing permissions
- Old files in shared drives that were never cleaned up
- Messages and notes that were never meant to be searchable
- Salary data, HR notes, and legal documents with open access
AI doesn't create the permission problem. It makes the existing problem visible and searchable by anyone with access.
The fix:
- Run a permissions review across all shared drives and collaboration tools (see the audit sketch after this list)
- Lock down sensitive directories (HR, legal, finance, executive)
- Review who has access to what — especially for AI-integrated tools
- Do it before AI surfaces something to someone you don't want seeing it
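On a POSIX file server, a first-pass review can be as simple as walking each sensitive share and flagging anything world-readable. A minimal sketch, assuming a hypothetical share path; cloud suites (Google Drive, SharePoint) need their own admin APIs instead, but the shape of the audit is the same:

```python
import os
import stat

def find_world_readable(root: str) -> list[str]:
    """Walk `root` and list files any user on the system can read."""
    exposed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip broken links and permission errors
            if mode & stat.S_IROTH:  # the "other users" read bit is set
                exposed.append(path)
    return exposed

# Placeholder path; point this at your HR, legal, or finance shares.
for path in find_world_readable("/srv/shares/hr"):
    print("world-readable:", path)
```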
For Nonprofit & Community Leaders
Your constituents trust you with sensitive data: names, addresses, health information, financial situations, family details. If your CRM vendor uses AI to process that data, your constituents' information may be part of a training dataset they never consented to. Review your vendor contracts. Ask the hard questions. And be honest with your constituents about what you know and what you're doing about it. Transparency builds trust. Silence erodes it.
You don't need to be an expert. You need a room, a screen, and this website. Walk your community through the consumers section. Do a live "privacy party" where people check their phone settings together. Show them the LLM comparison chart. Give them the government contact templates. One hour of guided exploration will do more for your community than a hundred social media posts about AI ethics.
AI Policy Lite: A Starter Template
Most organizations don't have an AI policy because they think it has to be a 40-page legal document. It doesn't. Here's a one-page starter that covers the essentials — protecting staff, the company, and clients while not stifling innovation.
Customize this for your organization. Paste it into an AI tool and ask it to tailor it to your industry, size, and specific tools.
Your Action Checklist
- High: Help every organization you work with establish a written AI use policy.
- High: Audit clients' vendor contracts for AI data-training clauses.
- High: Identify and address shadow AI use in every organization you advise.
- High: Review and tighten data permissions before deploying any AI-integrated tool.
- Medium: Train staff on why free-tier AI tools are a data risk for sensitive information.
- Medium: Host an AI literacy session for your community or client base using this site as a resource.
- Recommended: Help organizations explore local AI options for sensitive operations.
- Recommended: Share this kit with every client, colleague, and community leader in your network.
Next Steps
You're the multiplier. Every person you help protects a network of people behind them.
