For Everyone

Your AI Accountability Checklist

Practical actions graded by urgency. Expandable answers to real questions. No jargon.

1. Stop Agreeing to Things You Haven't Read High Priority

Terms of service (often written as "TOS" or "Terms and Conditions") are the legal rules you agree to follow when you use an app, website, or AI tool. They govern what the company can do with your data, what rights you give up, and what happens if something goes wrong. Almost nobody reads them. Companies know this and count on it.

Option A: Use a tool that reads it for you.

  • ToS;DR (Terms of Service; Didn't Read) is a nonprofit, volunteer-run project that rates services from Grade A (treats you fairly) to Grade E (serious concerns). It's available as a free browser extension for Chrome, Firefox, Safari, and Edge. It shows you a rating automatically when you visit a website. Strength: trusted, transparent, peer-reviewed, no corporate funding. Limitation: it covers only around a thousand services, so your niche app may not be rated.
  • Use AI as a backup. If a service isn't covered by ToS;DR, paste the full terms of service into an AI chatbot (ChatGPT, Claude, Gemini) and ask: "Summarize this in plain language. Highlight anything that affects my privacy, my data, or my rights. Flag anything concerning." AI can give you personalized, real-time analysis and answer follow-up questions. The tradeoff: you're trusting the AI company's own terms of service while using their tool to analyze someone else's. Use both. ToS;DR for quick ratings, AI for deep dives.

Option B: If you won't read it and won't use a tool, don't agree to it. We've gotten far too comfortable clicking "I Agree" and moving on. Those days should be over.

A note on what doesn't exist yet: There's no truly consumer-grade, trustworthy, standalone app that automatically reviews terms of service for everyday people. The tools that exist are either browser extensions (like ToS;DR) or require you to manually paste text into an AI chatbot. This is a gap in the market and a genuine need. If you're a funder, technologist, or policymaker reading this: this is a grantable project. The National Science Foundation, Ford Foundation, MacArthur Foundation, and Mozilla Foundation all fund digital literacy and consumer protection tools.

What happens right after you click "I Agree"? Usually, nothing. And that's the problem. The risk is cumulative and invisible.

When you agree to terms without reading them, you may be granting a company the right to: collect and sell your behavioral data to third parties, use your conversations and inputs to train AI models, track your location continuously, change the terms at any time without notifying you, waive your right to sue (forced arbitration), and share your data with government agencies upon request.

The risk to you: Over years, your data gets collected, profiled, and aggregated. By the time it matters (a data breach that exposes your information, an insurance company that uses your data to set premiums, a hiring algorithm that screens you out, a surveillance system that flags you for something you said online), the damage is done and you can't trace it back to a single click.

The risk to people you love: Your data doesn't exist in isolation. When you share contact lists, photos, location data, and messages, you're sharing information about the people in your life who never consented to being in that company's database.

The risk to your community: Aggregate data from a neighborhood, a demographic group, or a community can be used for predictive policing, targeted advertising, political manipulation, and discriminatory pricing. Your individual data point is one piece of a much larger picture.

The risk to the future: Every "I Agree" click normalizes the expectation that companies can collect unlimited data with no accountability. It sets the baseline for what the next generation of technology will assume it can do. The terms of service you accept today shape the surveillance infrastructure of tomorrow.

This isn't fear-mongering. This is the business model. And the only way to change it is to stop accepting it without question.

Your Checklist
  • Install the ToS;DR browser extension right now. It takes 30 seconds. Do Today
  • Pick one app you use daily. Open its terms of service. Paste them into an AI chatbot and ask for a plain-language summary with red flags. This Week
  • For any new app or service, check the ToS;DR rating before signing up. Grade D or E? Think twice. Ongoing

2. Opt Out of Biometric Data Collection High Priority

Biometric data is any measurement of your physical body used to identify you: your face, fingerprints, iris pattern, voice, the way you walk, even the way you type. Unlike a password, you can't change your biometrics if they're stolen.

Once a company or government has your biometric profile, it exists forever. It can be sold, hacked, subpoenaed, or repurposed for uses you never consented to. Facial recognition databases have been used for mass surveillance in multiple countries. In the U.S., Illinois's Biometric Information Privacy Act (BIPA) is one of the strongest protections, but most states have nothing comparable. In the EU, the General Data Protection Regulation (GDPR) classifies biometrics as "special category data" with strict processing limits.

If changing these settings feels overwhelming, ask someone you trust. A friend, a family member, a tech-savvy coworker. This isn't weakness. It's practical.

Host a "privacy party." Invite two or three trusted friends over. Each person brings their phone and laptop. Spend an hour going through settings together. Make it social. You'll be surprised what you find.

Start with one device. Pick your phone. Go to Settings > Privacy & Security (iPhone) or Settings > Security & Privacy (Android). Spend ten minutes. That's enough for day one.

Your Checklist
  • Disable Face ID/facial recognition if you don't need it. Use a passcode instead. Do Today
  • Review which apps have camera and microphone access. Revoke any that don't need it. This Week
  • If a store, gym, or employer asks to scan your face, ask: What happens to this data? How long is it stored? Can I delete it? Ongoing
  • Schedule a "privacy party" with friends or family this month. Recommended

3. Find Out What Companies Know About You Medium Priority

A data subject access request (DSAR) is your legal right to ask any company for a copy of all personal data they hold about you. Under the EU's General Data Protection Regulation (GDPR) and under privacy laws in California, Virginia, Colorado, Connecticut, and other U.S. states, companies must respond within 30 to 45 days.

Google: Go to takeout.google.com. Select the data you want. Click "Create export."

Facebook/Meta: Settings > Your Information > Download Your Information.

Apple: privacy.apple.com > "Request a copy of your data."

Amazon: Your Account > Request My Data.

Any other company: Email privacy@[company].com: "Under applicable data protection law, I request a copy of all personal data you hold about me. My account email is [your email]. Please respond within 30 days."

Maybe you're thinking one request won't change anything. Fair. Here's why it still matters: requesting your data sends a signal. Companies that receive more requests take privacy more seriously because it costs them resources.

When you get data back, look for: (1) Data you didn't expect them to have. (2) Third parties they're sharing with that you didn't authorize. (3) Inaccurate information. If any are true, you can request correction or deletion.
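Exports often arrive as a folder full of files (Google Takeout, for example, unzips into dozens of subfolders). To get a quick overview of what a company actually returned before digging in, here is a minimal sketch that tallies file types and total size. The folder path and the idea of judging categories by file extension are assumptions for illustration; adjust to your own export.

```python
import os
from collections import Counter

def inventory_export(folder):
    """Tally file types and total size in an unzipped data export.

    A quick first pass: dozens of .json files may be location or
    activity logs, .jpg files are photos, .csv files are often
    ad-interest or purchase lists. Returns (extension counts, bytes).
    """
    counts = Counter()
    total_bytes = 0
    for root, _dirs, files in os.walk(folder):
        for name in files:
            # Group by lowercase extension; files without one get a label.
            ext = os.path.splitext(name)[1].lower() or "(no extension)"
            counts[ext] += 1
            total_bytes += os.path.getsize(os.path.join(root, name))
    return counts, total_bytes
```

Run it on the unzipped export folder, then open the largest or most surprising categories first. That's where the data you didn't expect tends to hide.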

If you won't request your data, at minimum delete accounts you no longer use. Every dormant account is a data liability with no benefit to you.


4. Reclaim Data You've Already Given Away Medium Priority

You can't un-ring every bell. But you can significantly reduce your exposure going forward.

Your 7-Day Data Reclamation Plan
  • Day 1: Phone Settings > Privacy. Review and revoke app permissions you don't recognize (location, contacts, photos, mic, camera). High
  • Day 2: Open email. Search "unsubscribe." Unsubscribe from 10+ mailing lists. Each is a company tracking your engagement. Medium
  • Day 3: Google Data & Privacy. Turn off Web & App Activity, Location History, YouTube History. High
  • Day 4: Social media privacy settings. Set profiles to private. Limit who sees your posts. Medium
  • Day 5: Delete apps not used in 90 days. Each collects background data. High
  • Day 6: Check haveibeenpwned.com. If breached, change those passwords immediately. High
  • Day 7: Set a monthly calendar reminder to repeat Days 1 and 5. Make privacy a habit. Recommended
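Day 6's breach check also works for passwords. The same service behind haveibeenpwned.com runs a free Pwned Passwords API designed so your password never leaves your machine: you send only the first five characters of its SHA-1 hash, get back every known breached hash with that prefix, and match locally. A sketch (the network call assumes the public `api.pwnedpasswords.com/range/` endpoint, which is documented but could change):

```python
import hashlib
from urllib.request import urlopen

def sha1_prefix_suffix(password):
    """Split a password's SHA-1 hash for the k-anonymity check.

    Only the 5-character prefix is ever sent over the network;
    the rest of the comparison happens on your own machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_breached(password):
    """Ask the Pwned Passwords range API how often this password
    has appeared in known breaches (requires internet access)."""
    prefix, suffix = sha1_prefix_suffix(password)
    body = urlopen(f"https://api.pwnedpasswords.com/range/{prefix}").read().decode()
    for line in body.splitlines():
        found_suffix, _, count = line.partition(":")
        if found_suffix == suffix:
            return int(count)
    return 0  # not found in any known breach
```

If `times_breached` returns anything above zero, change that password everywhere you use it.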

5. Know Where Your AI Companies Stand High Priority

See the LLM Policy Comparison Chart for a factual breakdown of what each major AI company has publicly stated about surveillance and autonomous weapons. Note: stated policies aren't guarantees. Every company deserves scrutiny.

Governments have their own classified AI capabilities. The NSA, CIA, and DoD have internal tools that don't depend on commercial products. The question isn't whether the government can surveil without commercial AI. It can.

The question is whether commercial AI makes surveillance cheaper, faster, and more scalable, and whether you're informed about the role your AI provider plays in that ecosystem. Different people will draw different lines. What matters is that you have the information to draw yours.


6. Protect Your AI Conversations High Priority

Read this carefully

Everything you type into an AI chatbot is stored. Every question, confession, brainstorm, health concern, business strategy, and personal struggle. Right now, years of conversations with ChatGPT, Claude, Gemini, and other AI tools can be downloaded as a JSON file. Your entire history, thought patterns, fears, vulnerabilities, intellectual property, all of it, in a single export. Treat every AI conversation the way you would treat an email or a text message. There's a record. It can be found.
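To see for yourself how much a single export contains, you can skim the titles in a downloaded history file. Export formats vary by provider; this sketch assumes the common shape of a top-level JSON list of conversation objects, each with a "title" field, which you may need to adjust for your own export.

```python
import json

def list_conversation_titles(path):
    """List conversation titles from an exported history file.

    Assumes the export is a JSON list of conversation objects
    with a "title" key; providers differ, so check your file's
    actual structure and adjust the keys accordingly.
    """
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    return [c.get("title", "(untitled)") for c in conversations]
```

Scrolling that list of titles (health questions, money worries, work drafts) is usually enough to make the point: this is a record worth protecting, and worth pruning.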

Enable multi-factor authentication (MFA) on every AI account you have. ChatGPT, Claude, Gemini, Microsoft Copilot. All of them. If someone gains access to your AI account, they don't just see your data. They see your thinking. Your intellectual property. Your vulnerabilities. Things you would never say out loud to another person.

Don't let people use your computer or phone with your AI accounts logged in. Log out when you aren't using them. This isn't paranoia. This is basic digital hygiene for a new category of risk.

Review and delete conversation history regularly. Most AI platforms let you delete individual conversations or your entire history. If you don't need it, delete it.

We need to advocate for AI conversation data to be treated as highly protected personal data. The current standard, where years of intimate conversations are downloadable as a JSON file, isn't adequate. This data should be encrypted, access-restricted, and subject to the same protections as medical records.

The answer isn't to avoid AI. It's to use it with your eyes open. AI is powerful. It augments your thinking, accelerates your work, and opens doors that didn't exist two years ago. But power without awareness is how you get exploited.

Cars are dangerous. We still drive them, but we wear seatbelts, obey traffic laws, and buy insurance. AI is the same. Use it. Benefit from it. But know the risks, take precautions, and demand that the companies building these tools protect you better than they currently do.

7. Understand the Biometric Paradox Medium Priority

Biometrics (fingerprint, face, voice) are great for convenience and for preventing unauthorized access to your devices. But there's a paradox: using biometrics to protect your accounts means giving a company your biometric data. And once they have it, you can't change it the way you change a password.

The risk isn't the login itself. The risk is what the company does with that biometric template. Is it stored on your device (relatively safe) or on a company server (higher risk)? Is it encrypted? Can it be subpoenaed? Can it be breached? If a password database is stolen, you change your password. If a biometric database is stolen, you can't change your face.

What to do: Use biometrics for device login (stored locally on your phone/laptop). Be cautious about biometric login for cloud services. Always have a strong password as a backup. And advocate for companies to store biometric templates on-device, not on their servers.

8. Understand How Companies You Never Chose Still Have Your Data Medium Priority

Data brokers. These are companies whose entire business model is buying, aggregating, and selling personal data. They collect information from public records, loyalty programs, apps you've used, websites you've visited, and the companies you do business with. They compile profiles (name, address, income estimate, health interests, purchasing habits, political leanings) and sell them to advertisers, insurers, employers, and anyone else willing to pay.

When your doctor's office uses a scheduling app, that app may share data with a broker. When your grocery store has a loyalty card, that purchase history may be sold. When you sign up for a free app, the app may sell your usage data. You never opted in. You were never told. But your data is on the market.

What to do: Search for yourself on data broker sites (Spokeo, BeenVerified, Whitepages) and request removal. Use services like DeleteMe or Kanary that automate removal requests. In some states (California, Virginia, Colorado), you have the legal right to demand deletion. Exercise it.

9. Demand That Your Organizations Have AI Policies High Priority

You probably don't know. And that's the problem. Your church, your child's school, your doctor's office, your payroll provider, your bank, your CRM, your nonprofit, your gym, every organization you interact with uses vendors, and those vendors increasingly use AI. The question is whether those organizations have asked their vendors about AI data policies.

Most haven't. A Virginia state agency that imposes criminal penalties for data breaches, including jail time, doesn't itself have an AI policy in place. If a government agency with enforcement authority hasn't figured this out, your local nonprofit almost certainly hasn't either.

What to ask any organization that has your data: Do you have a written AI use policy? Do you know which of your vendors use AI? Have you reviewed your vendor contracts for AI data-training clauses? Are you SOC 2 compliant, and does your compliance framework address AI? If they can't answer these questions, they're operating without a safety net, and your data is in their hands.

This isn't about blame. Most organizations are trying to do the right thing. They just haven't been asked the right questions yet. Be the person who asks.

10. Contact Your Representatives Medium Priority

The United States has no comprehensive federal AI law. Go to the Gov't Action section for email templates, call scripts, and specific policy asks.


11. Share This Kit Ongoing

The more people who understand these issues, the harder it is for any company or government to act without accountability. Share this site. Talk about it. The future isn't something that happens to you. It's something you build.

Keep Going

Now that you know your rights, take the next step.