1. Stop Agreeing to Things You Haven't Read (High Priority)
Terms of service (often written as "TOS" or "Terms and Conditions") are the legal rules you agree to follow when you use an app, website, or AI tool — but they also show up when you check in at a doctor's office, sign a contract, or create any account. They govern what an organization can do with your data, what rights you give up, and what happens if something goes wrong. Almost nobody reads them. This section changes that.
Use AI. Paste the full terms of service — or even a picture of them — into an AI chatbot (ChatGPT, Claude, Gemini, Copilot) and ask:
"Summarize this in plain language. Highlight anything that affects my privacy, my data, or my rights. Flag anything concerning."
AI can give you personalized, real-time analysis, answer follow-up questions, and even recommend a strategy for how to respond. This works for app terms, doctor's office check-in forms, contracts, and anything else that asks you to agree to something.
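If you'd rather script this than paste into a chat window, the same prompt works against an AI provider's API. Here's a minimal sketch using OpenAI's Python SDK; the model name and file path are placeholders, and Anthropic's or Google's APIs work the same way:

```python
# Requires: pip install openai, plus an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Load the terms of service you saved as a text file (path is a placeholder).
with open("terms_of_service.txt") as f:
    tos_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: use whatever model you prefer
    messages=[{
        "role": "user",
        "content": (
            "Summarize this in plain language. Highlight anything that "
            "affects my privacy, my data, or my rights. Flag anything "
            "concerning.\n\n" + tos_text
        ),
    }],
)
print(response.choices[0].message.content)
```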
Use ToS;DR for quick ratings. ToS;DR (Terms of Service; Didn't Read) is a nonprofit, volunteer-run project that rates services from Grade A (treats you fairly) to Grade E (serious concerns). It's available as a free browser extension. Limitation: it covers only around 1,000 services, so your niche app may not be rated. Use both tools together — ToS;DR for quick ratings, AI for deep dives.
If you won't read it and won't use a tool, don't agree to it.
A note on what doesn't exist yet: There's no consumer-grade, standalone app that automatically reviews terms of service for everyday people. The tools that exist are either browser extensions or require you to manually paste text into an AI chatbot. This is a gap in the market and a genuine need. If you're a funder, technologist, or policymaker reading this: this is a grantable project.
So what happens if you agree without reading? Usually, nothing immediately. And that's the problem. The risk is cumulative and invisible.
When you agree to terms without reading them, you may be granting a company the right to: collect and sell your behavioral data to third parties, use your conversations and inputs to train AI models, track your location continuously, change the terms at any time without notifying you, waive your right to sue (forced arbitration), and share your data with government agencies upon request.
The risk to you: Over years, your data gets collected, profiled, and aggregated. By the time it matters (a data breach that exposes your information, an insurance company that uses your data to set premiums, a hiring algorithm that screens you out, a surveillance system that flags you for something you said online), the damage is done and you can't trace it back to a single click.
The risk to people you love: Your data doesn't exist in isolation. When you share contact lists, photos, location data, and messages, you're sharing information about the people in your life who never consented to being in that company's database.
The risk to your community: Aggregate data from a neighborhood, a demographic group, or a community can be used for predictive policing, targeted advertising, political manipulation, and discriminatory pricing. Your individual data point is one piece of a much larger picture.
The risk to the future: Every "I Agree" click normalizes the expectation that companies can collect unlimited data with no accountability. It sets the baseline for what the next generation of technology will assume it can do. The terms of service you accept today shape the surveillance infrastructure of tomorrow.
This isn't fear-mongering. This is the business model. And the only way to change it is to stop accepting it without question.
- Install the ToS;DR browser extension right now. It takes 30 seconds. (Do Today)
- Pick one app you use daily. Open its terms of service. Paste them into an AI chatbot and ask for a plain-language summary with red flags. (This Week)
- For any new app or service, check the ToS;DR rating before signing up. Grade D or E? Think twice. (Ongoing)
2. Opt Out of Biometric Data Collection (High Priority)
Biometric data is any measurement of your physical body used to identify you: your face, fingerprints, iris pattern, voice, the way you walk, even the way you type. Unlike a password, you can't change your biometrics if they're compromised.
"But I use Face ID and fingerprint for my phone and bank app. Isn't that safe?"
Mostly, yes — when done right. Apple's Face ID and Touch ID store your biometric data locally on your device in a secure chip. It never leaves your phone. Your bank app typically uses that same on-device biometric, not a separate scan. That's the right way to do it.
The concern is when companies store biometric data on their servers — in the cloud, on databases that can be breached, subpoenaed, or sold. The distinction matters: on-device biometrics protect you; server-stored biometrics expose you.
Here's the reality: some of your biometric data is already out there. Photos of your face are on social media. Your voice is in video calls and voicemails. That ship has sailed for most of us.
But that doesn't mean the game is over. It means the strategy shifts:
- Control what you can. Limit new biometric collection. Opt out when given the choice. Don't give a retail store your face scan for a loyalty program.
- Know the difference. On-device biometrics (Face ID, fingerprint on your phone) are fundamentally different from cloud-stored biometrics. Support companies that keep it local.
- Push for protections. Illinois's Biometric Information Privacy Act (BIPA) has generated over $1 billion in settlements for biometric privacy violations. It works. Advocate for similar laws in your state.
- Stay informed. In the EU, General Data Protection Regulation (GDPR) classifies biometrics as "special category data" with strict limits. The U.S. is behind. You can help change that.
If this feels overwhelming, ask someone you trust. A friend, a family member, a tech-savvy coworker. This isn't weakness. It's practical.
Host a "privacy party." Invite two or three trusted friends over. Each person brings their phone and laptop. Spend an hour going through settings together. Make it social. You'll be surprised what you find.
Start with one device. Pick your phone. Go to Settings > Privacy (iPhone) or Settings > Security & Privacy (Android). Spend ten minutes. That's enough for day one.
- Disable Face ID/facial recognition if you don't need it. Use a passcode instead. (Do Today)
- Review which apps have camera and microphone access. Revoke any that don't need it. (This Week)
- If a store, gym, or employer asks to scan your face, ask: What happens to this data? How long is it stored? Can I delete it? (Ongoing)
- Schedule a "privacy party" with friends or family this month. (Recommended)
3. Find Out What Companies Know About You (Medium Priority)
A data subject access request (DSAR) is your legal right to ask any company for a copy of all personal data they hold about you. Under the EU's General Data Protection Regulation (GDPR) and under privacy laws in California, Virginia, Colorado, Connecticut, and other U.S. states, companies must respond within 30 to 45 days.
Google: Go to takeout.google.com. Select the data you want. Click "Create export."
Facebook/Meta: Settings > Your Information > Download Your Information.
Apple: privacy.apple.com > "Request a copy of your data."
Amazon: Your Account > Request My Data.
Any other company: Email privacy@[company].com: "Under applicable data protection law, I request a copy of all personal data you hold about me. My account email is [your email]. Please respond within 30 days."
You might be thinking this sounds like a hassle. Fair. Here's why it still matters: requesting your data sends a signal. Companies that receive more requests take privacy more seriously because it costs them resources.
When you get data back, look for: (1) Data you didn't expect them to have. (2) Third parties they're sharing with that you didn't authorize. (3) Inaccurate information. If you find any of these, you can request correction or deletion.
If you won't request your data, at minimum delete accounts you no longer use. Every dormant account is a data liability with no benefit to you.
4. Reclaim Data You've Already Given Away (Medium Priority)
Some data is already out there. That's reality. What you can do is significantly reduce your exposure going forward and take control of what happens next.
- Day 1: Phone Settings > Privacy. Review and revoke app permissions you don't recognize (location, contacts, photos, mic, camera). (High)
- Day 2: Open email. Search "unsubscribe." Unsubscribe from 10+ mailing lists. Each is a company tracking your engagement. (Medium)
- Day 3: Google Data & Privacy. Turn off Web & App Activity, Location History, YouTube History. (High)
- Day 4: Social media privacy settings. Set profiles to private. Limit who sees your posts. (Medium)
- Day 5: Delete apps not used in 90 days. Each collects background data. (High)
- Day 6: Check haveibeenpwned.com. If breached, change those passwords immediately. A scriptable version of this check appears just after this list. (High)
- Day 7: Set a monthly calendar reminder to repeat Days 1 and 5. Make privacy a habit. (Recommended)
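Have I Been Pwned also exposes its Pwned Passwords database through a free k-anonymity API: you send only the first five characters of your password's SHA-1 hash, so the password itself never leaves your machine. A minimal sketch in Python:

```python
# Checks a password against Have I Been Pwned's Pwned Passwords range API.
# Only the first 5 hex characters of the SHA-1 hash are sent (k-anonymity),
# so the password itself never leaves your machine.
# Requires: pip install requests
import hashlib
import requests

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # Response is lines of "HASH_SUFFIX:COUNT" for every match on the prefix.
    for line in resp.text.splitlines():
        hash_suffix, count = line.split(":")
        if hash_suffix == suffix:
            return int(count)
    return 0

count = pwned_count("P@ssw0rd!")
print(f"Seen in {count} breaches" if count else "Not found in known breaches")
```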
5. Know Where Your AI Companies Stand (High Priority)
See the AI Policy Comparison Chart for a factual breakdown of what each major AI company has publicly stated about surveillance and autonomous weapons. Note: stated policies aren't guarantees. Every company deserves scrutiny.
Governments have their own classified AI capabilities. The NSA, CIA, and DoD have internal tools that don't depend on commercial products. The question isn't whether the government can surveil without commercial AI. It can.
The question is whether commercial AI makes surveillance cheaper, faster, and more scalable, and whether you're informed about the role your AI provider plays in that process. Different people will draw different lines. What matters is that you have the information to draw yours.
6. Protect Your AI Conversations (High Priority)
Everything you type into an AI chatbot is stored. Every question, confession, brainstorm, health concern, business strategy, and personal struggle. Right now, years of conversations with ChatGPT, Claude, Gemini, and other AI tools can be downloaded as a JSON file. Your entire history, thought patterns, fears, vulnerabilities, intellectual property, all of it, in a single export. Treat every AI conversation the way you would treat an email or a text message. There's a record. It can be found.
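To see how much is really in one of those exports, you can open it yourself. A hedged sketch, assuming the export is a JSON array of conversation objects with a "title" field (ChatGPT's conversations.json roughly follows this shape; other providers differ):

```python
# Inventory an exported AI conversation history.
# Assumption: the export is a JSON array of conversation objects,
# each with a "title" field (field names vary by provider).
import json

with open("conversations.json") as f:
    conversations = json.load(f)

print(f"{len(conversations)} conversations in this export")

# Flag titles that hint at sensitive topics (illustrative keywords only).
sensitive = ("health", "salary", "password", "legal", "medical")
for convo in conversations:
    title = convo.get("title") or "(untitled)"
    if any(word in title.lower() for word in sensitive):
        print("Review:", title)
```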
Enable multi-factor authentication (MFA) on every AI account you have. ChatGPT, Claude, Gemini, Microsoft Copilot. All of them. If someone gains access to your AI account, they don't just see your data. They see your thinking. Your IP. Your vulnerabilities. Things you would never say out loud to another person.
Don't let people use your computer or phone with your AI accounts logged in. Log out when you aren't using them. This isn't paranoia. This is basic digital hygiene for a new category of risk.
Review and delete conversation history regularly. Most AI platforms let you delete individual conversations or your entire history. If you don't need it, delete it.
We need to advocate for AI conversation data to be treated as highly protected personal data. The current standard, where years of intimate conversations are downloadable as a JSON file, isn't adequate. This data should be encrypted, access-restricted, and subject to the same protections as medical records.
The answer isn't to avoid AI. The answer is to use it with your eyes open. AI is powerful. It augments your thinking, accelerates your work, and opens doors that didn't exist two years ago. But power without awareness is how you lose control of it.
Cars are powerful. We still drive them, and we wear seatbelts, follow traffic laws, and carry insurance. AI is the same. Use it. Benefit from it. But know the risks, take precautions, and hold the companies building these tools to a higher standard.
7. Lock Down the Basics (High Priority)
Every protection on this site is worthless if your passwords are weak, reused, or compromised. This isn't a lecture. It's the foundation everything else sits on. If someone gets into your email, they can reset every account you have. If someone gets into your AI account, they can read your entire conversation history, your thought patterns, your IP, your vulnerabilities. Lock it down.
The case for a password manager: If you're reusing passwords across sites (and most people are), a single data breach at any one of those sites gives attackers the keys to try everywhere else. That's called credential stuffing, and it's how most account takeovers actually happen — not sophisticated hacking, just automated password reuse.
Password managers like Bitwarden (free and open source), 1Password, or Dashlane generate and store a unique password for every account. You remember one strong master password. The manager handles the rest.
"But why would I trust one company with ALL my passwords?"
Fair question. Here's how reputable password managers work: they use zero-knowledge encryption, meaning the company itself can't see your passwords. Everything is encrypted on your device before it touches their servers. Even if their servers were breached, attackers would get encrypted data they can't read without your master password, which the company never has.
That said, you need to trust the tool you choose. Stick with established, well-audited managers (Bitwarden is open source, meaning anyone can inspect the code). Avoid obscure or brand-new password managers that haven't been independently reviewed.
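To make "encrypted on your device before it touches their servers" concrete, here is a minimal sketch of the idea using Python's `cryptography` library. This illustrates the principle only, not how any particular manager implements it; real products use more hardened key-derivation schemes and settings:

```python
# Minimal sketch of client-side ("zero-knowledge") encryption:
# the key is derived from your master password on your own device,
# so the server only ever sees ciphertext it cannot read.
# Requires: pip install cryptography
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

master_password = b"correct horse battery staple"
salt = os.urandom(16)  # stored alongside the ciphertext; not secret

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                 iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(master_password))

vault = Fernet(key)
ciphertext = vault.encrypt(b"my-banking-password")

# Only `ciphertext` and `salt` would ever be uploaded; without the
# master password, the server cannot decrypt anything.
print(vault.decrypt(ciphertext))
```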
If you're not ready for a password manager: At minimum, make sure your most important accounts (email, banking, AI tools) each have a unique password. Use long passphrases — "correct horse battery staple" is stronger than "P@ssw0rd!" Length beats complexity every time.
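The math behind "length beats complexity" is simple enough to check yourself. Assuming words are picked at random from a standard 7,776-word diceware list, each word adds about 12.9 bits of entropy:

```python
# Rough entropy comparison: random passphrase vs. short password.
import math

def passphrase_bits(words: int, wordlist_size: int = 7776) -> float:
    # Diceware-style: each randomly chosen word adds log2(7776) ~ 12.9 bits.
    return words * math.log2(wordlist_size)

def random_chars_bits(length: int, alphabet: int = 95) -> float:
    # Upper bound that holds only for a *truly random* printable-ASCII string.
    return length * math.log2(alphabet)

print(f"4 random words: {passphrase_bits(4):.1f} bits")    # ~51.7
print(f"6 random words: {passphrase_bits(6):.1f} bits")    # ~77.5
print(f"9 random chars: {random_chars_bits(9):.1f} bits")  # ~59.1
# "P@ssw0rd!" scores far below 59 bits in practice: it is a dictionary
# word with predictable substitutions, which cracking tools try first.
```

Note the catch: nine truly random characters beat four random words, but humans don't pick randomly. Words are easy to remember and easy to add, so a passphrase scales to high entropy in a way a memorable password never will.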
MFA means that even if someone has your password, they can't get in without a second verification (a code from your phone or a hardware key). Enable it on: your email (this is the most important one), your AI accounts (ChatGPT, Claude, Gemini, Copilot), your bank, your social media, and any account that holds sensitive data.
Use an authenticator app (Google Authenticator or Microsoft Authenticator) instead of SMS (text message) codes when possible. SMS can be intercepted through SIM swapping (where a scammer convinces your phone carrier to transfer your number to their device). An authenticator app can't.
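Authenticator codes are safer than SMS partly because nothing is transmitted at all: your device computes the code from a shared secret and the current time (the TOTP algorithm, RFC 6238). A standard-library sketch of the same math your authenticator app runs (the secret below is a made-up example):

```python
# TOTP (RFC 6238): the 6-digit code is an HMAC of the current 30-second
# time window, keyed by a secret shared once at setup. Nothing is sent
# over the network, so there is nothing to intercept.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real account
```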
If someone borrows your phone or laptop while you're logged into your AI accounts, your email, or your password manager, they have access to everything. Log out of sensitive accounts when you're not using them. Set your devices to lock automatically after a short period of inactivity. This isn't about paranoia. It's about recognizing that your devices now contain more intimate information about you than any diary or filing cabinet ever did.
8. Understand the Biometric Paradox (Medium Priority)
Biometrics (fingerprint, face, voice) are great for convenience and for preventing unauthorized access to your devices. But there's a paradox: using biometrics to protect your accounts means giving a company your biometric data. And once they have it, you can't change it the way you change a password.
The risk isn't the login itself. The risk is what the company does with that biometric template. Is it stored on your device (relatively safe) or on a company server (higher risk)? Is it encrypted? Can it be subpoenaed? Can it be breached? If a password database is stolen, you change your password. If a biometric database is stolen, you can't change your face.
What to do: Use biometrics for device login (stored locally on your phone/laptop). Be cautious about biometric login for cloud services. Always have a strong password as a backup. And advocate for companies to store biometric templates on-device, not on their servers.
9. Understand How Companies You Never Chose Still Have Your Data (Medium Priority)
Data brokers are companies whose business model is buying, aggregating, and selling personal information. They compile profiles from public records, loyalty programs, apps, websites, and the companies you do business with — name, address, income estimates, health interests, purchasing habits, political leanings — and sell them to advertisers, insurers, employers, and anyone else willing to pay.
Can I remove myself?
Yes, but with caveats. You can search for yourself on data broker sites (Spokeo, BeenVerified, Whitepages, and dozens more) and request removal. Services like DeleteMe ($129/year) or Kanary automate this across hundreds of brokers.
The honest reality: removal is a game of whack-a-mole. Your data repopulates from other sources within months. Automated services keep re-removing it, but you're paying forever to stay removed.
Is it worth the effort?
Depends on your situation. Removal reduces your surface area — it makes you harder to find by casual searchers, scammers, stalkers, and people buying leads. For anyone dealing with harassment, domestic violence, or public exposure, this is critical. For others, it's a judgment call about whether the ongoing effort is worth the reduction in visibility.
In states with strong privacy laws (California, Virginia, Colorado, Connecticut, and growing), you have the legal right to demand deletion. More states are adding this every year. Exercise those rights regardless — it signals to the market that people are paying attention.
10. Ensure Your Organizations Have AI Policies (High Priority)
You probably don't know. And that's the problem. Your church, your child's school, your doctor's office, your payroll provider, your bank, your CRM, your nonprofit, your gym — every organization you interact with uses vendors, and those vendors increasingly use AI. The question is whether those organizations have asked their vendors about AI data policies. Most haven't.
What to ask any organization that has your data:
- Do you have a written AI use policy?
- Do you know which of your vendors use AI to process customer/client data?
- Have you reviewed your vendor contracts for AI data-training clauses?
- What security certifications do you hold, and do they address AI?
This isn't about blame. Most organizations are trying to do the right thing. They just haven't been asked the right questions yet. Be the person who asks.
11. Protect Yourself and Your Family from AI-Enhanced Scams (High Priority)
Scams aren't new. But AI has made them dramatically more convincing. Voice cloning, deepfake video, and AI-generated text messages can now impersonate your bank, your boss, or your family members with frightening accuracy.
They look real. That's the point. Common patterns:
- Fake bank texts: A message that looks like it's from your bank asking you to "verify" your account. The link goes to a convincing fake site.
- Voice cloning: A call that sounds exactly like your grandchild saying they're in trouble and need money wired immediately.
- Phishing emails: AI-generated emails with perfect grammar, your name, and details pulled from public records or social media.
- Fake officials: Someone claiming to be from law enforcement or a government agency who "needs your information to stop a fraud."
The universal rule: No legitimate organization will ever ask you for your password, account numbers, or personal information through a text, email, or unsolicited call. Ever.
This isn't about treating anyone as incapable. It's about recognizing that scammers specifically target people who are trusting, polite, and unfamiliar with new technology.
- Have the conversation now. Not after something happens. Explain that scam calls and texts are extremely common and extremely convincing.
- Establish a family code word. If someone calls claiming to be a family member in distress, the real family member will know the code word.
- Set up caller ID and spam filtering on their phone.
- Tell them it's always OK to hang up and call back using the number on their bank card or statement — never the number the caller provides.
- No shame. If someone does get scammed, the response should be support and action, not blame. Report it to the FTC at reportfraud.ftc.gov and to their bank immediately.
Young people are digitally fluent but not always digitally wise. They share more information publicly than they realize, making them targets for social engineering.
- Never share personal information (full name, school, location, birthday) with anyone online they don't know in person.
- Screenshots aren't private. Anything sent digitally can be saved, shared, or used against them.
- Friend requests from strangers are not flattering. They're suspicious.
- If something feels wrong, tell an adult. Build a culture where reporting feels safe, not embarrassing.
- Show them how scams work rather than just saying "be careful." Understanding the mechanism builds the instinct.
12. Contact Your Reps (Medium Priority)
The United States has no comprehensive federal AI law. Go to the Gov't Action section for email templates, call scripts, and specific policy asks.
13. Share This Kit (Ongoing)
The more people who understand these issues, the harder it is for any company or government to act without accountability. Share this site. Talk about it. The future isn't something that happens to you. It's something you build.
