AI is embedded in every stage of employment: resume screening, interview scheduling, video interview analysis (facial expression, tone of voice, word choice), skills assessment, performance monitoring, and even termination decisions. Most candidates and employees don't know when AI is being used to evaluate them.
For Job Candidates
Do companies really use AI to screen resumes? Yes, at every scale. Large companies, mid-size companies, and increasingly small businesses use AI-powered applicant tracking systems (ATS) to filter resumes before a human ever sees them. These systems score candidates based on keyword matching, experience patterns, and other criteria. Studies have documented that these systems can discriminate based on name, zip code, education institution, and employment gaps, even when the criteria appear neutral.
The uncomfortable truth: if your resume doesn't match the patterns the AI was trained on, you may be filtered out before anyone reads your name. And you'll never know it happened.
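To make the failure mode concrete, here is a deliberately simplified sketch of keyword-based resume scoring. This is illustrative only, not any real ATS: the keyword list, resumes, and scoring function are all made up. But the core problem it demonstrates is real: a candidate who describes qualifying work in their own words can score lower than one who happens to use the expected vocabulary.

```python
# Illustrative only: a toy keyword-matching scorer in the spirit of how
# simple ATS filters rank resumes. Real systems are more complex, but the
# failure mode is the same: phrasing that doesn't match the expected
# keywords scores low, regardless of actual qualifications.

REQUIRED_KEYWORDS = {"python", "kubernetes", "ci/cd", "agile"}  # hypothetical job criteria

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords found verbatim in the resume."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

# Same underlying experience, described two different ways:
strong_candidate = "Led container orchestration and automated release pipelines."
keyword_candidate = "Python, Kubernetes, CI/CD, Agile."

print(keyword_score(strong_candidate))   # 0.0 -- filtered out
print(keyword_score(keyword_candidate))  # 1.0 -- passes the screen
```

A human reviewer would recognize that "container orchestration and automated release pipelines" describes Kubernetes and CI/CD work; a literal keyword filter does not.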
Could your resume become AI training data? Possibly. When you upload your resume to a job platform, an ATS, or any AI-powered hiring tool, read the terms of service. Many platforms grant themselves the right to use submitted data (including your resume, cover letter, and assessment responses) to "improve their services," which can mean training their AI models. Your work history, skills, salary expectations, and career trajectory become training data for a system that may then be used to evaluate other candidates, or to build profiles that are sold to other employers.
Ask the platform directly: Does your AI use applicant data for model training? Can I opt out? If there's no clear answer, assume the answer is yes.
Some companies use AI to analyze your facial expressions, tone of voice, word choice, and even background during video interviews. These systems claim to assess traits like "enthusiasm" and "cultural fit." Independent research has raised serious questions about the validity and bias of these assessments. If you're asked to complete an AI-scored video interview, you may have the right to ask for a human review. In Illinois, the Artificial Intelligence Video Interview Act requires employers to notify candidates when AI is used and allows candidates to request the video be deleted.
AI doesn't eliminate bias. It scales it. LLMs and hiring algorithms are trained on historical data, which reflects historical patterns of discrimination. If most software engineers in the training data are men, the system may learn to prefer male candidates. If most executives in the training data are white, the system may deprioritize candidates of color. If most successful hires came from a handful of universities, the system may screen out equally qualified candidates from other institutions.
This isn't theoretical. Amazon famously scrapped an AI hiring tool in 2018 after discovering it penalized resumes that included the word "women's." The underlying problem hasn't been solved; it has just become harder to detect. When a human rejects your resume, you can sometimes learn why. When an algorithm does it, the reason is hidden inside a model nobody can fully explain.
Your rights depend on where you live and where you're applying. Here's what exists as of February 2026:
New York City Local Law 144: Employers using automated employment decision tools must conduct annual bias audits and publish results. Candidates must be notified.
Illinois AI Video Interview Act: Employers must notify candidates when AI analyzes video interviews. Candidates can request deletion.
Colorado AI Act (effective June 2026): Requires employers to assess AI tools for algorithmic discrimination and disclose AI use to candidates.
EU AI Act: Classifies AI in employment as "high risk" with mandatory transparency, human oversight, and bias testing.
Federal level: Title VII, ADA, and ADEA apply to AI-driven hiring decisions, and the EEOC has signaled enforcement interest, but there is no federal law specifically requiring disclosure of AI use in hiring.
At minimum: you always have the right to ask whether AI is being used in the hiring process. If you don't get a clear answer, that itself is information. Document everything. If you believe you were discriminated against by an automated system, file a complaint with the EEOC or your state labor board.
For Employers
Legal obligations vary by jurisdiction, but best practices are clear: disclose when AI is used in hiring decisions, conduct regular bias audits of AI hiring tools, ensure human review of consequential decisions (rejections, terminations), maintain records of AI system performance and outcomes by demographic group, and comply with existing anti-discrimination law (Title VII, ADA, ADEA) which applies regardless of whether a human or algorithm makes the decision.
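One concrete starting point for the bias audits mentioned above is the impact ratio: each group's selection rate divided by the highest group's rate. This is the metric reported in NYC Local Law 144 bias audits and the basis of the EEOC's informal four-fifths rule, under which a ratio below 0.8 warrants scrutiny. A minimal sketch, with made-up counts for illustration:

```python
# A minimal sketch of one audit metric: the impact ratio (each group's
# selection rate divided by the highest group's rate). A ratio below 0.8
# is the traditional four-fifths-rule trigger for closer review.
# All counts below are hypothetical.

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps group name -> (selected, applicants).
    Returns each group's selection rate relative to the highest rate."""
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical resume-screening outcomes by demographic group:
outcomes = {"group_a": (60, 200), "group_b": (30, 200)}

for group, ratio in impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
# group_a: impact ratio 1.00 [ok]
# group_b: impact ratio 0.50 [REVIEW]
```

A single ratio is not a full audit (it ignores intersectional groups, small-sample noise, and stage-by-stage effects), but computing it per tool and per hiring stage is a reasonable first check, and it is the kind of number the published LL144 audits actually report.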
How do you know whether your hiring tools train on applicant data? Ask your vendor. Many AI hiring platforms use aggregate applicant data to improve their models. If your tool isn't configured to prevent this, every resume submitted to your company may be used to train a model that serves your competitors. Review your vendor contract for data use clauses. Ask explicitly: Is our applicant data used to train your models? Can we opt out? Where is our data stored? Who else has access?
If you use a general-purpose LLM (ChatGPT, Claude, Gemini) to review resumes or draft job descriptions, check whether your plan shares data for model training. Most enterprise plans don't, but free and consumer tiers often do. If your HR team is pasting resumes into a free AI tool, your applicant data is almost certainly being used for training.
AI-powered employee surveillance (keystroke logging, screen monitoring, productivity scoring, location tracking, email analysis) is expanding rapidly. While generally legal in the U.S. with employee notification, it creates significant trust, retention, and morale risks. The EU has stronger protections under GDPR. If you're considering AI monitoring tools, involve your legal team, your HR team, and your employees in the conversation. Surveillance that employees don't know about or consent to is a liability waiting to happen.
For candidates:
- Ask every employer: Is AI used in your hiring process? At what stages? (Priority: High)
- Read the terms of service on every job platform and ATS you submit to. Check for data training clauses. (Priority: High)
- If asked to do an AI video interview, ask what the AI evaluates and whether a human reviews results. (Priority: High)
- Know your jurisdiction. Look up whether your state or city requires disclosure of AI in hiring. (Priority: Medium)
- Document every interaction with an AI hiring system. Screenshots, confirmations, timestamps. (Priority: Medium)
- If you believe you were unfairly screened out, file a complaint with the EEOC or your state labor board. (Recommended)
For employers:
- Audit all AI tools used in hiring and HR for bias and compliance. (Priority: High)
- Ask your AI hiring vendor: Is applicant data used for model training? Get it in writing. (Priority: High)
- Disclose to candidates when AI is used in hiring decisions. (Priority: High)
- Ensure human review for every consequential employment decision (rejection, termination, promotion). (Priority: High)
- Ban use of free-tier AI tools for any HR function involving personal data. (Priority: High)
- Review employee monitoring practices for legal compliance and proportionality. (Priority: Medium)
- Train HR staff on AI bias, limitations, and legal requirements by jurisdiction. (Priority: Medium)
- Prepare for Colorado AI Act compliance (effective June 2026) if you hire nationally. (Priority: Medium)
Next Steps
Your career shouldn't be decided by an algorithm you can't see.
