AI in Hiring in the United States: NYC Bias Audits and 2026 Trends

Updated Nov 27, 2025
  • Federal anti-discrimination, wage, and labor laws still apply fully when you use AI or automated tools in hiring, pay, scheduling, and termination decisions.
  • NYC Local Law 144 requires annual independent bias audits for covered AI hiring tools, public posting of audit results, and advance notice to candidates and employees.
  • States are moving toward stricter rules on "digital replicas" of workers and AI-powered surveillance, including consent and notice requirements and higher statutory damages.
  • HR cannot shift liability to vendors: you must vet AI tools, negotiate strong contracts, and keep enough access to data to run or support bias audits.
  • Notice obligations now overlap: AI screening notices, background check notices, and electronic monitoring notices may all be required in the same hiring or employment flow.
  • By 2026, multi-state employers that use AI in employment decisions should expect audits, documentation, and governance similar to what is already standard for financial and data privacy controls.

What are the core employment and labor law obligations for US employers?

US employers must comply with a web of federal, state, and local laws that cover hiring, pay, working conditions, discrimination, and collective labor rights. Using AI or other automated tools does not reduce these obligations; it often amplifies risk if tools create or mask systemic bias.

Key federal frameworks that interact directly with employment and labor decisions include:

  • Anti-discrimination laws (enforced mainly by the EEOC)
    • Title VII of the Civil Rights Act: Prohibits discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, gender identity), and national origin.
    • Americans with Disabilities Act (ADA): Bars discrimination based on disability and requires reasonable accommodations, including in hiring processes.
    • Age Discrimination in Employment Act (ADEA): Protects workers 40 and older from age discrimination.
    • Genetic Information Nondiscrimination Act (GINA): Restricts use of genetic information.
  • Wage and hour laws
    • Fair Labor Standards Act (FLSA): Sets federal minimum wage, overtime, recordkeeping, and child labor rules.
    • State wage and hour laws: Often stricter; the more protective standard controls (higher minimum wages, daily overtime, meal and rest breaks).
  • Labor relations and collective rights
    • National Labor Relations Act (NLRA): Protects non-supervisory employees' rights to organize and engage in protected concerted activity, including in digital spaces.
    • Use of surveillance and algorithmic management can trigger NLRA issues if it chills organizing or targets union activity.
  • Workplace safety
    • Occupational Safety and Health Act (OSHA): General duty to provide a safe workplace, which can interact with monitoring tech and remote work.
  • Background checks and privacy
    • Fair Credit Reporting Act (FCRA): Regulates third-party background checks, including certain automated screening, and imposes specific disclosure, authorization, and adverse action steps.
    • State privacy and biometric laws: Control use of biometric identifiers and AI-heavy analytics (for example, Illinois BIPA, California privacy laws).

All of these apply whether a human manager or an AI tool makes or supports the decision. When AI processes large volumes of candidates or employees, any bias or legal defect can quickly scale into class-wide exposure.

How are AI and automated tools regulated in US hiring and employment decisions?

AI in employment is regulated through a mix of traditional anti-discrimination law, new local AI-specific statutes, and emerging state privacy and AI acts. There is no single federal AI employment statute yet, but agencies like the EEOC are treating AI the same as any other selection procedure under existing laws.

Core building blocks of AI employment regulation in the US include:

  • EEOC and federal guidance
    • The EEOC has made clear that employers are responsible for AI tools they use, even if a vendor provides the tool.
    • The agency applies existing frameworks like the Uniform Guidelines on Employee Selection Procedures to automated assessments, scoring algorithms, chatbots, and resume screeners.
    • Key risk theories: disparate treatment (intentional bias), disparate impact (neutral tool with unequal effects), failure to accommodate under the ADA, and retaliation or interference.
  • Local and state AI-specific statutes
    • NYC Local Law 144: Requires annual bias audits and candidate notice for certain automated employment decision tools used for NYC jobs.
    • Illinois AI Video Interview Act: Regulates use of AI to analyze video interviews, including notice, explanation, consent, and data deletion.
    • Colorado AI Act (SB 24-205) (effective 2026): Treats employment decisions as "high-risk" AI uses and requires risk management, impact assessments, and transparency for covered entities.
    • Additional bills targeting automated employment decisions and digital replicas are pending in California, New York, and other states.
  • Biometric and privacy laws that indirectly regulate AI tools
    • Illinois's BIPA and the Texas and Washington biometric laws require informed consent, written policies, and retention limits when collecting biometric identifiers, including face or voice prints used in AI analysis.
    • California privacy laws (CPRA) and upcoming rules on automated decision-making will likely require worker notices and opt-out or review rights for some AI uses.
  • Sectoral and contract-based controls
    • Union contracts may limit surveillance, algorithmic scheduling, or AI-driven performance management.
    • Enterprise customers are starting to demand contractual documentation of AI training data, bias controls, and audit support from HR-tech vendors.

For employers, the practical result is that AI governance must be integrated into existing EEO, privacy, and labor-relations compliance programs, not treated as a separate technical issue.

How does NYC Local Law 144 regulate AI hiring tools and what does compliance require?

NYC Local Law 144 prohibits employers and employment agencies from using covered AI hiring tools for NYC jobs unless they complete an annual independent bias audit, publish a summary, and give candidates advance notice. The law also authorizes civil penalties per violation, with enforcement handled by the NYC Department of Consumer and Worker Protection (DCWP).

Who and what does Local Law 144 cover?

  • Covered employers and agencies
    • Employers and employment agencies that use an automated employment decision tool (AEDT) to screen candidates or employees for jobs located in New York City.
    • Remote or hybrid roles are often covered if the position is associated with a NYC office or reports into NYC.
  • Definition of an AEDT (simplified)
    • Any computational process, derived from machine learning, statistical modeling, data analytics, or AI, that issues a score, classification, or recommendation used to substantially assist or replace discretionary decision making for employment decisions.
    • Examples: resume screeners that rank applicants, algorithmic scoring of tests or games, AI interview scoring, and some predictive hiring tools.

What are the main requirements under Local Law 144?

  • Annual independent bias audit
    • Must be conducted by a qualified independent auditor (internal teams are usually not sufficient if they are involved with the tool).
    • Audit must be completed within one year prior to the use of the AEDT.
    • Must evaluate selection rates and impact ratios across protected categories (sex, race or ethnicity, and intersectional groups).
  • Public posting of audit summary
    • Employers must publish a summary of the most recent bias audit results on their website.
    • The summary must identify the distribution dates of the tool, the data used (time period, number of applicants), and selection or scoring disparities.
  • Candidate and employee notice
    • Employers must give notice at least 10 business days before using an AEDT to evaluate a candidate or employee for a specific role.
    • Notice must describe the job qualifications and characteristics the tool will use and explain how to request an alternative selection process or accommodation.
  • Data and retention disclosures
    • Employers must disclose the types of data collected and retained and the general data sources.
    • You must have a reasonable data retention period and clear internal retention policies supporting auditability.
  • Penalties
    • DCWP may impose civil penalties typically ranging from $500 to $1,500 per violation per day for ongoing noncompliance.
    • Each failure to conduct an audit, publish a summary, or provide required notice can count as a separate violation.
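The 10-business-day notice window above can be sketched as a simple date calculation. This is an illustrative sketch only: it skips weekends but ignores public holidays, and whether use is permissible on the 10th business day itself is an interpretation question for counsel.

```python
# Sketch: earliest AEDT use date given a notice date, counting 10
# business days. Weekends are skipped; public holidays are ignored
# here and would need a real holiday calendar in practice.
from datetime import date, timedelta

def earliest_use_date(notice_sent: date, business_days: int = 10) -> date:
    d = notice_sent
    counted = 0
    while counted < business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            counted += 1
    return d

# Notice sent on Monday, Dec 1, 2025:
print(earliest_use_date(date(2025, 12, 1)))  # → 2025-12-15
```

In a real ATS integration, this kind of check would gate when the tool may first score a candidate after the notice is delivered.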

What does a compliant bias audit under Local Law 144 look like?

A compliant bias audit measures how an AEDT affects different demographic groups and whether there is disparate impact. It must use historical data or, in some cases, test data that approximates actual use.

  • Core components of the audit
    • Identify the AEDT and its specific use cases (screening, ranking, scoring, etc.).
    • Define the relevant employment decision (for example, "advance to interview," "offer extended," "promotion eligible").
    • Collect data for candidates or employees evaluated by the AEDT, including outcomes and demographic information where available.
    • Calculate:
      • Selection rates for each protected group, and
      • Impact ratio (selection rate of each group divided by the highest selection rate among groups).
    • Provide clear, reproducible methodology and assumptions.
  • Auditor independence and competence
    • The auditor should not be involved in developing, marketing, or deploying the AEDT.
    • They must have statistical and legal literacy sufficient to design valid tests and interpret disparate impact.
  • Data sufficiency
    • If there is not enough historical data, the auditor may use test data or pooled data, but must explain limitations.
    • Low sample sizes can reduce confidence in results and may require longer data collection periods or tool adjustments.
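The selection-rate and impact-ratio arithmetic described above is straightforward to compute. The sketch below uses hypothetical group labels and counts, not real audit data; an actual audit must follow DCWP's rules on categories, intersectional groups, and methodology.

```python
# Minimal sketch of the impact-ratio calculation: selection rate per
# group, divided by the highest group's selection rate.
# Group names and counts below are hypothetical illustration data.

def impact_ratios(outcomes):
    """outcomes maps group -> (selected, total). Returns per-group
    selection rates and impact ratios, rounded for reporting."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}
    top = max(rates.values())
    return {g: {"selection_rate": round(r, 3),
                "impact_ratio": round(r / top, 3)}
            for g, r in rates.items()}

sample = {
    "group_a": (120, 400),  # 30% advanced to interview
    "group_b": (90, 400),   # 22.5%
    "group_c": (50, 250),   # 20%
}

for group, stats in impact_ratios(sample).items():
    print(group, stats)
```

Under the EEOC's Uniform Guidelines, an impact ratio below roughly 0.8 (the informal "four-fifths rule") is a common red flag for disparate impact, though it is a rule of thumb rather than a legal threshold.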

How can employers operationalize Local Law 144 compliance?

HR, legal, and IT must work together to inventory tools, align vendors, and build an annual audit cycle. A practical process usually looks like this:

  1. Inventory all hiring and promotion tools
    • List any software that scores, ranks, filters, or recommends candidates or employees.
    • Flag tools using machine learning, AI, or complex statistical models for possible AEDT status.
  2. Determine Local Law 144 coverage
    • Identify which roles are NYC roles or report into NYC.
    • Work with counsel to decide if each tool is an AEDT under the law's definition.
  3. Align with vendors
    • Request technical and demographic data needed for audits.
    • Clarify who will pay for and manage the audit and whether multiple customers can share an audit.
    • Update contracts to require cooperation with audits and timely disclosure of model changes.
  4. Engage an independent auditor
    • Set scope by tool, role, geography, and time period.
    • Agree on methodology, sample thresholds, and reporting format that meets DCWP rules.
  5. Publish audit summary and update notices
    • Post the summary report where candidates and employees can easily find it, usually on your careers or legal page.
    • Create standard LL 144 notice text and integrate into job postings, email notices, or ATS messages at least 10 business days before use.
  6. Schedule annual reviews
    • Create a calendar so each AEDT is audited at least once every 12 months.
    • Re-audit earlier when you materially change the tool, data sources, or selection rules.

What new state-level rules are emerging on digital replicas and AI surveillance?

States are rapidly moving to regulate "digital replicas" of individuals and AI-powered surveillance at work, with a mix of right-of-publicity, privacy, and AI-specific laws. These rules can expose employers and vendors to statutory damages when they clone workers' likenesses or monitor employees without proper consent or notice.

How are states addressing "digital replicas" of workers?

Digital replica laws focus on preventing unauthorized AI-generated likenesses, especially for performers, but they also affect employers that use synthetic voices, avatars, or cloned likenesses of staff for commercial purposes.

  • Key themes across states
    • Consent requirements: Written consent is often required before creating or using a digital replica for advertising or entertainment purposes.
    • Scope extension to heirs or estates: Some statutes give rights to estates for decades after death, affecting long-term use of a worker's digital persona.
    • Statutory damages: Many laws allow fixed per-violation damages plus attorneys' fees, which can scale quickly in class cases.
  • Notable examples as of 2024 (subject to updates)
    • California and New York: Robust right-of-publicity and anti-deepfake rules affecting unauthorized digital replicas.
    • Tennessee "ELVIS Act": Targets AI-generated voice and image clones without consent, relevant to recording and entertainment employers.

How are states handling AI surveillance and workplace monitoring?

States are layering AI-specific risk language on top of existing electronic monitoring and privacy laws. Employers that use keyloggers, webcam analytics, GPS, or algorithmic productivity scoring must pay attention to state-level notice and consent rules.

  • Existing electronic monitoring laws
    • New York Civil Rights Law 52-c: Requires employers to give written notice and obtain acknowledgment before monitoring employees' telephone, email, or internet use, including through automated tools.
    • Connecticut and Delaware: Require employers to notify employees of electronic monitoring, often via conspicuous postings and written acknowledgment.
  • AI and privacy-specific state laws
    • California: Privacy laws and draft regulations on automated decision-making are likely to require worker notices, access rights, and possibly opt-out or human review for certain high-risk uses.
    • Colorado AI Act: Treats employment-related AI as high-risk, requiring risk management programs, impact assessments, and mechanisms to address discrimination by 2026.
    • Several states are proposing bills that limit continuous biometric or AI-driven monitoring of employees without clear necessity and consent.
  • Labor and union overlay
    • The NLRB has signaled greater scrutiny of employer surveillance that might chill protected concerted activity, including algorithmic monitoring of communications or organizing efforts.
    • Union contracts increasingly include constraints on electronic monitoring, data use, and algorithmic scheduling or performance scoring.
State-by-state snapshot of focus areas and key requirements for employers:

  • New York (statewide): Electronic monitoring
    • Written notice at hire and posted notice for email, phone, and internet monitoring; acknowledgment from employees.
  • New York City: AI hiring tools
    • Annual bias audits, public audit summaries, and 10-business-day notice before using AEDTs for NYC roles.
  • California: Privacy and digital likeness
    • Expanded privacy rights for workers; consent and restrictions on commercial use of name, image, and likeness.
  • Illinois: Biometrics and video interviews
    • BIPA consent and retention rules for biometric data; AI video interview notices, explanations, and deletion rights.
  • Colorado: High-risk AI systems
    • From 2026, risk management, impact assessments, discrimination mitigation, and disclosures for employment AI.

How should HR teams manage vendor liability for AI and algorithmic employment tools?

HR teams cannot outsource legal risk to vendors; if the tool discriminates, the employer typically shares liability. To manage this, employers must conduct due diligence, require contractual protections, and retain enough access and control to support audits and investigations.

What due diligence should you perform on AI HR-tech vendors?

  • Use-case and model transparency
    • Ask vendors to describe exactly what decisions their tool influences, what inputs it uses, and what outputs it produces.
    • Request plain-language documentation and, where possible, technical whitepapers on the model and training data sources.
  • Compliance posture
    • Request written confirmation of compliance with Title VII, ADA, ADEA, FCRA (if applicable), Local Law 144, and relevant state laws.
    • Ask for prior or current bias audit reports, model validation studies, and any regulatory inquiries or complaints.
  • Data access and audit support
    • Verify that the vendor can and will provide aggregate and de-identified data needed for your bias audits.
    • Clarify how quickly they can export data and whether there are technical or contractual limits.
  • Security and privacy controls
    • Review data security certifications (SOC 2, ISO 27001, etc.).
    • Understand where candidate and employee data is stored and processed, including any cross-border transfers.

What contract provisions help manage AI employment risk?

Contracts should clearly allocate responsibilities for legal compliance, data, and audits. They should also provide remedies when the vendor's tool creates legal exposure.

  • Compliance representations and warranties
    • Vendor represents that the tool is designed to comply with applicable employment, privacy, and AI laws for agreed geographies and use cases.
    • Vendor agrees to update the tool and documentation when laws change.
  • Bias audit and cooperation clauses
    • Vendor must provide data, documentation, and technical assistance needed for Local Law 144 and similar audits, at no or reasonable cost.
    • Vendor must notify you of material model changes that could impact audit results.
  • Indemnity and limitation of liability
    • Vendor indemnifies the employer for claims arising primarily from defects in the tool's design, training data, or operation, including discrimination claims stemming from tool output.
    • Consider higher liability caps or carve-outs for regulatory fines and class actions linked to AI misuse.
  • Data ownership and access
    • Employer retains ownership of its data and has export rights during and after the contract.
    • Vendor cannot reuse identifiable data for unrelated purposes without explicit consent.
  • Audit and termination rights
    • Right to suspend use or terminate if the tool fails a bias audit, is found noncompliant, or triggers significant regulatory risk.
    • Right to conduct or commission independent assessments, subject to agreed confidentiality and security limits.
How responsibility typically splits between employer and vendor, by risk area:

  • Legal compliance strategy
    • Employer: Define lawful use cases, hiring policies, and jurisdictions; ensure internal procedures follow law.
    • Vendor: Design tools consistent with law; disclose intended uses and limitations.
  • Bias audits
    • Employer: Trigger and oversee audits; interpret results; adjust practices.
    • Vendor: Provide data, technical explanations, and support; remediate tool issues.
  • Candidate notices
    • Employer: Draft and deliver notices to candidates and employees.
    • Vendor: Provide accurate descriptions of how the tool works and what data it uses.
  • Security and privacy
    • Employer: Adopt policies and select compliant vendors.
    • Vendor: Implement robust security controls and honor data protection obligations.

What notice and consent rules apply when using AI for hiring and monitoring employees?

Notice and consent rules come from several overlapping laws, including AI-specific statutes, electronic monitoring laws, biometric acts, and FCRA for background checks. Employers should build a unified playbook so that candidates and employees receive clear, timely, and legally sufficient information about AI use.

What notice is required for AI screening in hiring?

  • NYC Local Law 144
    • Provide notice at least 10 business days before using an AEDT to evaluate a candidate or employee.
    • Include the job qualifications and characteristics the AEDT will use, and information on requesting an accommodation or alternative process.
  • Illinois AI Video Interview Act
    • Inform applicants that AI may be used to analyze video interviews.
    • Explain how the AI works in general terms and what characteristics it evaluates.
    • Obtain consent before using AI to evaluate the video, and delete videos on request and within statutory timelines.
  • FCRA for background checks (not AI-specific but often intertwined)
    • Provide a stand-alone written disclosure and obtain written authorization before procuring a consumer report.
    • Follow pre-adverse and adverse action steps if taking negative action based on the report, including when reports feed AI scoring.

What notice or consent is needed for monitoring and surveillance?

  • State electronic monitoring laws
    • New York, Connecticut, and Delaware require notice when you monitor employee emails, internet use, phone calls, or similar channels, including through AI tools that scan content or metadata.
    • Best practice: obtain written acknowledgment at hire and maintain visible policy postings.
  • Biometric and facial recognition rules
    • Illinois BIPA and similar statutes typically require written notice and informed written consent before collecting biometric identifiers (for example, face scans in automated video analysis, voiceprints in call monitoring).
    • They also require a public retention and deletion policy and prohibit selling or profiting from biometric data.
  • Common law privacy and wiretap concerns
    • Recording calls, keystrokes, or screen content with AI-enhanced tools can implicate federal and state wiretap laws if done without proper consent.
    • Some states require all-party consent for certain recordings; check state law before deploying these tools.

How can employers standardize AI-related notices?

A consolidated approach reduces friction and legal risk. Many employers are creating centralized templates and workflows.

  1. Map all AI and monitoring touchpoints
    • Identify every place where AI affects candidates or employees: job ads, ATS, video interviews, assessments, productivity monitoring, and performance tools.
  2. Draft layered notices
    • Create a high-level privacy and AI usage notice on your careers and intranet sites.
    • Develop specific just-in-time notices inside the ATS, onboarding flows, and monitoring notices at login screens or on devices.
  3. Integrate consent flows
    • For jurisdictions requiring consent (for example, biometric laws), add checkboxes or digital signatures with clear language.
    • Track consents systematically so you can prove them later.
  4. Provide appeal or human review options where required
    • In higher-risk uses, offer a channel for candidates or employees to request human review or raise concerns about AI-driven decisions.
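Step 3's advice to track consents so you can prove them later can be supported with a simple, audit-friendly log record. The sketch below is hypothetical: the field names, identifiers, and channel labels are illustrative, and actual record-keeping requirements vary by statute and jurisdiction.

```python
# Hypothetical consent-log record for AI and biometric notices.
# All field names and sample values are illustrative only.
import json
from datetime import datetime, timezone

def record_consent(person_id: str, notice_id: str, channel: str) -> dict:
    """Build an audit-friendly consent record with a UTC timestamp."""
    return {
        "person_id": person_id,    # candidate or employee identifier
        "notice_id": notice_id,    # version of the notice text shown
        "channel": channel,        # e.g. "ats_checkbox", "onboarding_esign"
        "consented_at": datetime.now(timezone.utc).isoformat(),
    }

entry = record_consent("cand-0042", "bipa-notice-v3", "ats_checkbox")
print(json.dumps(entry, indent=2))
```

Versioning the notice text (the `notice_id` field here) matters because proving consent requires showing exactly which language the person saw.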

When should an employer hire a labor and employment lawyer or AI compliance expert?

Employers should bring in legal or specialized AI compliance support when they deploy AI tools that can materially affect hiring, promotion, pay, or termination, especially across multiple states or in NYC. Early advice costs less than responding to an EEOC charge, private class action, or regulatory investigation tied to AI.

  • Triggers to engage counsel or experts
    • Planning to implement or significantly expand use of AI screening, scoring, or monitoring tools.
    • Operating in or hiring for New York City, Illinois, Colorado, California, or multiple states with emerging AI laws.
    • Receiving an internal complaint that AI tools are unfair or discriminatory, or an external demand letter or agency charge.
    • Negotiating or renewing contracts with key HR-tech and AI vendors.
    • Responding to union organizing, grievances, or NLRB activity involving surveillance or algorithmic management.
  • What specialists typically provide
    • Multi-jurisdictional compliance mapping for employment, privacy, and AI laws.
    • Review of AI tool design, vendor contracts, and policy frameworks.
    • Bias audit planning and interpretation, including remediation strategies.
    • Incident response if an AI tool is alleged to cause discrimination or privacy violations.

What are the next steps for employers to build an AI-safe employment compliance program?

To align employment and labor compliance with AI adoption, employers should build a structured AI governance program, integrate it into existing HR policies, and plan ahead for 2026 state-level rules. A practical roadmap over the next 6 to 12 months can dramatically reduce legal and reputational risk.

  1. Establish ownership and governance
    • Designate a cross-functional AI governance group including HR, legal, compliance, IT, and data teams.
    • Define which decisions qualify as "high-risk" (hiring, promotion, pay, termination, scheduling, surveillance).
  2. Inventory tools and map laws
    • Create a live inventory of all AI and automated tools touching employment decisions.
    • Map each tool to applicable federal, state, and local laws, including NYC Local Law 144, Illinois and Colorado statutes, biometric laws, and monitoring rules.
  3. Upgrade vendor management
    • Standardize AI vendor due diligence questionnaires and contract clauses.
    • Prioritize renegotiation or replacement of high-risk tools with weak compliance support.
  4. Design and schedule audits
    • Identify which tools require formal bias audits (starting with NYC AEDTs) and which merit voluntary audits due to risk.
    • Engage independent auditors where needed and align internal data pipelines to support repeatable audits.
  5. Revise policies and notices
    • Update equal employment opportunity, hiring, monitoring, and privacy policies to explicitly address AI and automated tools.
    • Roll out updated candidate and employee notices, with templates for different jurisdictions.
  6. Train HR and managers
    • Train HR teams and hiring managers on appropriate AI use, red flags, and how to respond to accommodation requests and complaints.
    • Emphasize that AI is a tool, not a decision maker, and that managers remain accountable for legal compliance.
  7. Monitor, document, and improve
    • Track issues, disputes, and audit findings and feed them back into model selection, vendor choices, and process design.
    • Maintain documentation of your governance program, audits, and corrective actions to prepare for future regulatory inquiries.


Disclaimer:
The information provided on this page is for general informational purposes only and does not constitute legal advice. While we strive to ensure the accuracy and relevance of the content, legal information may change over time, and interpretations of the law can vary. You should always consult with a qualified legal professional for advice specific to your situation.

We disclaim all liability for actions taken or not taken based on the content of this page. If you believe any information is incorrect or outdated, please contact us, and we will review and update it where appropriate.