Texas AI Compliance 2026: United States Data Privacy Rules

Updated Nov 26, 2025
  • By 2026, Texas agencies and many businesses that build, host, or support AI tools for government or consumer-facing decisions will face stricter disclosure and anti-discrimination requirements.
  • Texas is pairing its new data privacy framework (Texas Data Privacy and Security Act) with AI-specific rules that target "algorithmic discrimination" in areas like hiring, lending, health, and access to public services.
  • Any business that uses AI in hiring, credit, insurance, housing, or eligibility decisions should plan to tell people that AI is involved, explain key factors, and provide a clear appeal or human-review process.
  • Texas law already allows regulators and plaintiffs to attack biased AI under existing anti-discrimination, consumer protection, and privacy statutes, even if those statutes do not use the word "AI."
  • Every company operating in Texas should update its privacy policy and vendor contracts by 2026 to: (1) map AI use, (2) describe automated decision-making and profiling, (3) add AI-specific rights and notices, and (4) allocate risk with AI vendors and customers.
  • Waiting until formal Texas AI regulations are finalized will be too late; you can bake most anticipated requirements into your 2025-2026 privacy and contract refresh cycles now.

What new AI governance rules are emerging for businesses in Texas?

Texas is moving toward a model where state agencies and the businesses that build or support their AI systems must disclose AI use, assess bias, and prevent "algorithmic discrimination," while the Texas Data Privacy and Security Act (TDPSA) adds privacy and profiling safeguards. For private-sector businesses, there is already de facto AI regulation through existing Texas and federal anti-discrimination, consumer protection, and financial laws, with targeted AI obligations growing through procurement contracts and sector guidance. By 2026, most mid-sized and large businesses operating in Texas that use AI for consequential decisions should assume they need formal AI governance, documentation, and updated notices.

Key legal building blocks in Texas and the United States

  • Texas Data Privacy and Security Act (TDPSA). Effective July 1, 2024 (with some small-business nuances), TDPSA:
    • Applies to most businesses that process Texas residents' personal data; unlike many other state privacy laws, there is no revenue or data-volume threshold, though small businesses as defined by the SBA are largely exempt.
    • Requires clear privacy notices, data protection assessments for high-risk processing, and rights to access, correct, and delete data.
    • Covers "profiling" and other automated processing that presents a reasonably foreseeable risk of unfair or deceptive treatment, which is where many AI tools fall.
  • Texas AI Advisory Council and state use of AI (HB 2060, 88th Legislature). HB 2060 (2023) created an AI Advisory Council to:
    • Study AI systems used by state agencies, including risks of algorithmic discrimination and lack of transparency.
    • Recommend policies for responsible AI use, documentation, and public disclosure.
    • Push agencies and vendors toward inventories, impact assessments, and guardrails.
    While many obligations currently fall on agencies, vendors that provide AI-enabled systems to the state will see these requirements flow into contracts.
  • Existing Texas anti-discrimination and consumer laws (already covering AI):
    • Texas Labor Code Chapter 21 (employment discrimination).
    • Texas Fair Housing Act (Property Code Chapter 301).
    • Texas Finance Code and federal ECOA and FCRA (lending and credit decisions).
    • Texas Deceptive Trade Practices Act (DTPA) for misleading claims about AI or unfair outcomes.
    If an AI system leads to discriminatory outcomes in these domains, companies can already face investigations and lawsuits.
  • Federal regulators targeting AI
    • FTC treats biased or opaque AI as a potential unfair or deceptive practice under Section 5 of the FTC Act.
    • EEOC has issued guidance on AI in hiring and testing, warning that employers remain liable for discrimination even if a vendor built the tool.
    • CFPB has warned lenders that using complex AI does not excuse them from explanation obligations for adverse decisions.

Who is most affected in Texas right now

  • Technology vendors selling AI tools to state agencies (analytics, eligibility systems, hiring platforms, chatbots for citizen services).
  • Private employers using automated or semi-automated tools for recruiting, screening, assessments, and promotions.
  • Banks, fintechs, and lenders using AI models for credit scoring, fraud detection, pricing, or underwriting.
  • Insurers, healthcare, and benefits administrators using AI to evaluate coverage, claims, utilization, or eligibility.
  • Any business using AI/ML models for high-volume profiling of Texans or behavioral advertising aimed at them.

What to expect by 2026

  • More detailed Texas government purchasing rules that require:
    • Disclosure when AI or automated decision-making is used in citizen-facing services.
    • Requirements to avoid and remediate algorithmic discrimination.
    • Impact assessments and audit rights for agencies and possibly the State Auditor or Attorney General.
  • Contractual flow-down pressure on vendors: agencies will push AI responsibilities (bias testing, reporting, data retention limits, indemnities) onto technology and consulting vendors.
  • Market-standard expectations for meaningful AI disclosures, opt-outs where feasible, explanation rights, and human review options for sensitive decisions, even where not yet strictly mandated by statute.

What AI disclosure requirements will Texas businesses face in hiring, lending, and other decisions?

Texas is expected to require agencies, and thus their vendors, to disclose when AI significantly influences eligibility or other consequential decisions, and broader US law is already pushing private employers and lenders to do the same. By 2026, Texas businesses that use AI in hiring, lending, housing, insurance, or benefits decisions should assume they must (1) tell people that AI is in use, (2) summarize what data and factors the system considers, and (3) offer a path to human review or appeal.

Where disclosure expectations are strongest

  • Hiring and employment
    • Use cases: resume screening, video interview scoring, personality or skills assessments, promotion/termination models.
    • Expected practices:
      • Tell candidates if AI or automated tools will evaluate them or their materials.
      • Explain in plain language the purpose of the tool and the types of data it considers.
      • Provide a way to request human review, especially after rejection.
    • Risk drivers: EEOC guidance, Texas Labor Code Chapter 21, and growing public scrutiny of AI in HR.
  • Lending and credit
    • Use cases: loan approvals, credit limits, pricing, fraud blocks, collections prioritization.
    • Current binding law already requires:
      • Adverse action notices (ECOA, FCRA) that explain key reasons for a denial or less favorable terms, even if an AI model made or drove the decision.
      • Non-discriminatory treatment under ECOA, Fair Housing Act, Texas Finance Code, and related rules.
    • Emerging expectation: lenders flag when AI or machine learning models significantly influence decisions and improve the specificity of explanations.
  • Insurance, health, and benefits
    • Use cases: premium setting, claim scoring, fraud detection, provider network decisions, public benefits eligibility.
    • Agencies and regulators are moving toward:
      • Notices that automated tools are used in evaluating claims or coverage.
      • Clear routes to contest or appeal automated results.

Elements of a strong AI disclosure notice

Most AI disclosures can follow a common structure, adapted to the business context and risk level; a minimal template is sketched after the list below.

  • What is automated: Identify whether the process is fully automated, semi-automated (human-in-the-loop), or uses AI as only one factor among several.
  • Why AI is used: Provide a short purpose statement, such as "to help us prioritize applications" or "to support our fraud detection analysis."
  • What data types are used: Categories (resume details, transaction history, public records, behavioral data) rather than full technical lists.
  • How this affects the individual: State whether the AI output can affect eligibility, pricing, ranking, or timing.
  • Rights and options:
    • How to request human review or additional explanation.
    • How to correct inaccurate data used by the system.
    • Where to find more detail in the privacy policy.
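
One way to keep these notices consistent is to capture the structure once and generate the plain-language text from it. The sketch below is a minimal illustration of that idea; the field names and generated wording are assumptions for this example, not mandated or counsel-approved language.

```python
# A minimal sketch of the disclosure structure described above.
# Field names and the rendered wording are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AIDisclosure:
    automation_level: str        # "fully automated", "human-in-the-loop", or "one factor among several"
    purpose: str                 # short plain-language purpose statement
    data_categories: list[str]   # categories of data, not full technical lists
    possible_effects: str        # eligibility, pricing, ranking, timing, etc.
    review_contact: str          # how to request human review or correct data

    def render(self) -> str:
        """Assemble the fields into a short plain-language notice."""
        return (
            f"We use an automated tool ({self.automation_level}) {self.purpose}. "
            f"It considers {', '.join(self.data_categories)}. "
            f"Its output can affect {self.possible_effects}. "
            f"To request human review or correct your information, contact {self.review_contact}."
        )


notice = AIDisclosure(
    automation_level="human-in-the-loop",
    purpose="to help us prioritize applications",
    data_categories=["resume details", "skills assessment results"],
    possible_effects="the order in which applications are reviewed",
    review_contact="hr-review@example.com",
)
print(notice.render())
```

Keeping the structure in one place also makes it easier to reuse consistent language across job postings, application flows, adverse action notices, and the privacy policy.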

Practical steps to implement AI disclosures in Texas

  1. Inventory AI use cases across hiring, lending, customer service, fraud, marketing, and operations.
  2. Classify "consequential" decisions (jobs, money, housing, health, education, benefits, legal rights) versus lower-risk automation.
  3. Create standard AI language for:
    • Job postings and candidate portals.
    • Credit and insurance applications and adverse action notices.
    • Consumer account portals and key transactional screens.
  4. Link disclosures to your privacy policy, HR policies, and internal AI governance documentation.
  5. Train staff in HR, lending, customer support, and compliance to explain AI-assisted decisions in consistent, non-misleading terms.

What is algorithmic discrimination under Texas law, and when can businesses be liable?

Algorithmic discrimination is the use of AI or automated systems that results in unlawful discrimination against protected groups, even if no one intended to discriminate. In Texas, businesses can be liable for algorithmic discrimination when their AI tools cause biased outcomes in employment, housing, credit, insurance, or access to services, under existing civil rights, finance, housing, and consumer protection laws.

How Texas and US law define the problem

  • No single "algorithmic discrimination" statute yet. Texas does not currently have a standalone, comprehensive "algorithmic discrimination" act, but:
    • HB 2060 explicitly directs the AI Advisory Council to examine "discriminatory impacts" of AI in state government use.
    • TDPSA and consumer protection law cover unfair, deceptive, and discriminatory data practices.
  • Existing statutes that capture algorithmic discrimination
    • Employment: Texas Labor Code Chapter 21 and federal law (Title VII, the ADEA, and the ADA) prohibit discrimination based on race, color, religion, sex, national origin, age, and disability. This applies whether the discrimination comes from a human manager or an AI screening tool.
    • Housing: Texas Fair Housing Act mirrors the federal Fair Housing Act; biased tenant screening or dynamic pricing algorithms can trigger violations.
    • Lending and credit: ECOA, Fair Housing Act, Texas Finance Code, and CFPB rules apply to AI-driven underwriting and pricing.
    • Consumer protection: Texas DTPA and FTC Act can target unfair or deceptive AI practices, including misrepresenting fairness or failing to address known biases.

Types of algorithmic discrimination risks

  • Disparate treatment: The model explicitly uses protected attributes (like race or sex) to treat groups differently.
  • Disparate impact: A model that appears neutral uses proxies or patterns that produce significantly different outcomes for protected groups without valid business justification (a common screening check is sketched after this list).
  • Proxy discrimination: Inputs like ZIP code, school attended, or social graph function as stand-ins for race, income, or other protected traits.
  • Feedback loops: Systems trained on biased historical data reinforce or amplify existing inequities.
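
Disparate impact is typically the first risk teams try to measure. A common screening heuristic is the "four-fifths rule" used in employment selection analysis: compare each group's selection rate to the highest group's rate and flag ratios below roughly 0.8 for closer review. The sketch below shows the arithmetic with made-up data; a low ratio is a signal to investigate, not by itself proof of unlawful discrimination.

```python
# A minimal sketch of a "four-fifths rule" style disparate impact check.
# The group labels, sample data, and 0.8 review threshold are illustrative
# assumptions; a low ratio flags a result for review, it does not prove liability.
from collections import Counter


def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute each group's selection (pass) rate from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}


def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}


# Made-up outcomes: (group label, whether the AI tool advanced the candidate).
outcomes = ([("A", True)] * 60 + [("A", False)] * 40 +
            [("B", True)] * 35 + [("B", False)] * 65)
rates = selection_rates(outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({status})")
```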

Illustrative liability scenarios in Texas

  • HR software vendor and employer:
    • An employer in Texas deploys a resume screening tool that was trained on historical hiring decisions favoring one demographic.
    • The tool screens out female or older candidates at a much higher rate.
    • Candidates file charges with the EEOC or Texas Workforce Commission; both the employer and possibly the vendor are drawn into investigations and private lawsuits.
  • Bank using AI underwriting:
    • A lender adopts a third-party AI model that uses location and transaction data.
    • Applicants in predominantly minority neighborhoods are denied at much higher rates, without a strong, documented business justification.
    • CFPB and Texas regulators can pursue enforcement; borrowers can sue for discrimination.
  • Landlord or property manager:
    • Tenant screening software trained on past eviction and arrest data denies tenants from certain communities more often.
    • Results may violate the Texas Fair Housing Act if not properly justified and monitored.

How to reduce algorithmic discrimination risk

  1. Conduct bias and impact assessments for high-risk AI uses (hiring, lending, housing, insurance, eligibility).
  2. Document legitimate business justifications for features that correlate with protected traits and consider less discriminatory alternatives.
  3. Restrict use of sensitive attributes (and risky proxies) where not legally necessary.
  4. Continuously monitor outcomes by demographic group where lawful and feasible.
  5. Include fairness obligations in vendor contracts, including testing, reporting, and remediation commitments.
  6. Train HR, credit, and risk teams on how AI bias manifests and how to escalate issues.

How should Texas businesses update their privacy policies for 2026 AI compliance?

Texas businesses should update their privacy policies by 2026 to explicitly address AI and automated decision-making, align with TDPSA, and anticipate stricter AI transparency rules. The policy should explain what personal data the business collects, how AI uses it for profiling and decisions, what rights people have, and how individuals can challenge or opt out of certain AI use where feasible.

Core TDPSA-driven privacy policy requirements

Under TDPSA and similar US laws, your privacy policy should already cover the following, which form the base for AI-related additions:

  • Categories of personal data collected (identifiers, financial data, biometric data, online activity, etc.).
  • Purposes of processing for each category.
  • Categories of third parties that receive personal data (including AI vendors and cloud providers).
  • Consumer rights and how to exercise them (access, correction, deletion, portability, and opt-out rights for selling or targeted advertising where applicable).
  • How you secure data and your data retention approach.

AI-specific additions for Texas privacy policies

To be ready for 2026 AI expectations, add or expand sections covering AI and profiling.

  • Automated decision-making and profiling. Add a dedicated section that:
    • States when you use AI or automated systems for decisions that affect individuals (such as hiring, fraud prevention, underwriting, personalization).
    • Explains that some decisions are supported by algorithms that evaluate personal data to predict interests, performance, risk, or eligibility.
    • Clarifies whether a human reviews AI-driven recommendations before final decisions.
  • Categories of data used for AI systems
    • At a minimum, specify that AI systems use:
      • Account and profile data.
      • Usage and behavioral data (website/app interactions, transaction patterns).
      • Potentially third-party or public data sources, if used.
  • Individual rights related to AI. Even if not strictly required, you can future-proof by offering:
    • A right to request more information about how automated tools affected a decision, where feasible.
    • A process to request human review for certain high-impact decisions (e.g., job denials, account closures, significant credit actions).
    • Opt-out options for some forms of profiling and targeted advertising.
  • AI model training and improvement
    • Explain whether you use customer data to train or improve models, and if so:
      • What categories of data are used.
      • Whether data is de-identified or aggregated.
      • How customers can object or limit use where required by law or by contract.
  • Third-party AI vendors
    • Clarify that you may share data with service providers who host, support, or provide AI tools, under contracts requiring them to protect the data and follow applicable law.

Checklist: updating your Texas business privacy policy for 2026

  1. Map AI and automated decision-making uses and tie each to specific data categories, purposes, and legal bases (if you operate globally).
  2. Add a clear "Automated Decision-Making and AI" section that:
    • Lists major AI use cases in simple, understandable language.
    • States whether decisions are fully automated or include human review.
  3. Update your data subject rights section to:
    • Explain how people can ask about AI-driven decisions.
    • Describe any available appeal or human-review process.
  4. Clarify retention for AI-related data, especially training data, logs, and model outputs, consistent with your information governance program.
  5. Align cookie and tracking disclosures with AI-powered personalization and advertising practices.
  6. Coordinate with security and incident response teams so that any AI-related data breaches or model misuse are covered in your breach playbooks and notices.
  7. Review the policy against TDPSA and at least one strict benchmark law (such as California or Colorado) to ensure multi-jurisdictional compatibility.

How should companies update vendor and AI service contracts to manage Texas AI risks?

Companies should update vendor and AI service contracts to allocate responsibility for AI compliance, bias mitigation, data protection, and disclosures, especially when dealing with Texas agencies or high-risk decisions. Contracts should include representations about compliance and non-discrimination, audit and transparency rights, data-use limits, security standards, and meaningful indemnities for regulatory and third-party claims.

Key contract terms for AI vendors and partners

  • Scope and description of AI functionality
    • Describe what the system does, what decisions it supports, and what data it processes.
    • Clarify whether the vendor supplies a model, a platform, or a full decisioning system with workflows and interfaces.
  • Compliance and non-discrimination representations
    • Vendor should represent that:
      • Its software is designed to comply with applicable civil rights, consumer, privacy, and sector laws.
      • It has taken reasonable steps to detect and reduce discriminatory outcomes.
  • Bias testing and documentation
    • Require the vendor to:
      • Provide documentation of testing methods and key results, at least on a summary level.
      • Support your own bias and impact assessments where feasible.
  • Transparency and audit rights
    • Give you a right to:
      • Access logs and configuration information related to your deployment.
      • Obtain explanations of model logic at a level suitable for regulators and affected individuals, even if not the full source code.
  • Data use and training restrictions
    • Specify:
      • What customer data the vendor can use to train or improve models.
      • Whether data must be de-identified or aggregated.
      • Whether you can opt out of training use without losing core functionality.
  • Security and incident obligations
    • Align vendor security with your baseline (for example, SOC 2, ISO 27001, or NIST-aligned controls).
    • Mandate prompt breach notification and cooperation, especially involving training data or model outputs that may include personal data.
  • Indemnity and limitation of liability
    • Seek specific indemnity for:
      • Infringement of intellectual property rights.
      • Vendor's violation of privacy or anti-discrimination laws arising from the design of the tool.
    • Negotiate higher caps or carve-outs for regulatory fines and algorithmic discrimination claims where the vendor's design is at fault.
  • Government contracting "flow-down" terms (if you serve agencies)
    • Incorporate any AI-related provisions required by the state agency, such as:
      • Disclosure and transparency to citizens.
      • Impact assessments and public reporting obligations.
      • State audit or access rights.

Example: risk allocation table for AI vendor contracts

  • Risk area: Algorithmic discrimination in a hiring tool
    • Contract clause to use: Non-discrimination warranty, bias testing obligations, and indemnity for employment claims caused by tool design.
    • Typical business impact (USD): EEOC/TWC investigations and settlements can run from tens of thousands to several million dollars per case.
  • Risk area: Biased credit model integrated from a vendor
    • Contract clause to use: Compliance warranty covering ECOA, FCRA, and the Texas Finance Code; cooperation and indemnity for regulatory actions linked to model logic.
    • Typical business impact (USD): CFPB/FTC penalties can exceed $1,000,000 per matter; Texas AG penalties under TDPSA run up to $7,500 per violation.
  • Risk area: Data misuse for AI training
    • Contract clause to use: Explicit limits on training use, data de-identification requirements, and data return/deletion on termination.
    • Typical business impact (USD): Customer churn plus TDPSA complaints; investigation defense often costs $100,000+ in legal and consulting fees.
  • Risk area: Security breach involving model training data
    • Contract clause to use: Security standard, breach notification, and cost-sharing clauses.
    • Typical business impact (USD): Incident response, credit monitoring, and remediation commonly reach $100-$200 per affected individual.

How do federal and other state AI and privacy rules interact with Texas requirements?

Texas AI and privacy developments sit on top of a broader US and global regulatory stack, so multi-state or online businesses must design AI governance to satisfy Texas while also meeting stricter regimes such as California, Colorado, and EU GDPR. The safest approach is to adopt a "highest common denominator" framework for high-risk AI decisions and then tailor details for specific jurisdictions.

Key overlapping regimes

  • Federal US law
    • FTC Act: prohibits unfair or deceptive AI practices nationwide.
    • ECOA, FCRA, Fair Housing Act, ADA, Title VII: apply across states for lending, housing, disability, and employment discrimination, regardless of your Texas location.
  • Other state privacy laws
    • California (CCPA/CPRA): strict rights around profiling, targeted advertising, and certain automated decisions; powerful enforcement and private class-action risk for security breaches.
    • Colorado Privacy Act and Colorado AI Act (2024 law with future effective dates): Colorado is moving toward explicit duties for "high-risk" AI systems, including mandatory impact assessments and notification for consequential decisions.
    • Virginia, Connecticut, Utah, Oregon, and others: similar privacy regimes with data protection assessment requirements for high-risk processing including certain profiling.
  • Global laws (if you operate internationally)
    • EU GDPR: Articles 13-15 and 22 govern transparency and certain rights regarding automated decision-making and profiling.
    • EU AI Act (phased in starting 2025-2026): high-risk AI systems must meet strict conformity, documentation, and transparency duties.

How to harmonize Texas with multi-jurisdiction obligations

  1. Classify AI systems by risk tier (low, medium, high) based on impact on individuals and regulatory exposure.
  2. For high-risk AI (hiring, lending, health, housing, government services), design governance to meet:
    • TDPSA assessment expectations.
    • Colorado and EU-style impact assessment and documentation standards.
    • Strong bias testing and explanation capabilities that satisfy EEOC, CFPB, FTC, and EU regulators.
  3. Standardize your global AI disclosures, then localize where needed:
    • Use similar wording in US and EU privacy notices about profiling and automated decisions.
    • Add jurisdiction-specific details such as opt-out rights or appeal mechanisms where mandated.
  4. Maintain a single AI inventory and risk register that tags systems by geography, data sources, purpose, and the regulatory frameworks that apply (a minimal register entry is sketched below).
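
As a sketch of step 4, a single register entry might look like the following. The field names and framework tags are illustrative assumptions, not a standard schema; the value is that the same record answers Texas, Colorado, and EU questions without rebuilding the inventory each time.

```python
# A minimal sketch of one AI inventory / risk register entry tagged by
# geography, purpose, data sources, and applicable frameworks.
# Field names and framework labels are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class RiskRegisterEntry:
    system_name: str
    purpose: str
    risk_tier: str                                   # "high", "medium", or "low"
    geographies: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    frameworks: list[str] = field(default_factory=list)
    business_owner: str = ""


register = [
    RiskRegisterEntry(
        system_name="candidate-screening-model",
        purpose="rank inbound job applications",
        risk_tier="high",
        geographies=["Texas", "Colorado", "EU"],
        data_sources=["resumes", "assessment scores"],
        frameworks=["TDPSA", "Title VII / Labor Code Ch. 21", "Colorado AI Act", "EU AI Act"],
        business_owner="HR Operations",
    ),
]

# Example query: every high-risk system touching a given jurisdiction,
# e.g. to scope an EU AI Act conformity review.
eu_high_risk = [e.system_name for e in register
                if e.risk_tier == "high" and "EU" in e.geographies]
print(eu_high_risk)
```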

When should a Texas business hire a lawyer or AI governance expert?

Texas businesses should bring in legal or AI governance experts when they deploy AI for high-stakes decisions, interact with state agencies, face investigations or complaints, or need to redesign privacy policies and contracts for 2026. Expert help is especially valuable when you implement or procure complex AI systems that affect employment, credit, housing, insurance, health, or public services.

Common triggers for outside help

  • Planning or launching high-impact AI projects
    • Automated hiring or talent management platforms.
    • AI underwriting or risk-scoring tools in financial services or insurance.
    • AI systems that determine eligibility or prioritization for government programs or contracts.
  • Major privacy policy and contract refresh cycles
    • Company-wide privacy program redesign driven by TDPSA or multi-state privacy laws.
    • Re-negotiation of core vendor or customer agreements where AI is a key component.
  • Regulatory or litigation events
    • EEOC or Texas Workforce Commission inquiries about hiring tools.
    • CFPB, FTC, or Texas AG investigations around credit, consumer protection, or TDPSA issues.
    • Class action threats related to biased outcomes or data misuse in AI systems.
  • Government contracting opportunities
    • Responding to RFPs that involve AI-enabled products or services for agencies.
    • Understanding AI-related representations, warranties, and audit terms in state contracts.

What specialists typically do for you

  • Lawyers (privacy, tech, employment, financial regulation)
    • Map AI use cases to relevant statutes and regulations (Texas and federal).
    • Draft and negotiate AI-specific contract clauses and risk allocation terms.
    • Design complaint-handling, adverse action, and appeal processes that meet legal standards.
  • AI governance and technical experts
    • Conduct bias and impact assessments and design monitoring dashboards.
    • Evaluate vendor tools and model documentation for sufficiency and risk.
    • Develop internal AI policies, standards, and training materials.

What are the practical next steps for Texas businesses using AI?

Texas businesses should spend 2024-2026 building an AI inventory, tightening disclosures and privacy policies, upgrading vendor contracts, and putting bias and impact assessments in place for high-risk systems. A structured roadmap can help you meet emerging Texas and US expectations without overburdening your teams.

Step-by-step roadmap to prepare for 2026

  1. Build your AI inventory
    • List all tools labeled as AI, machine learning, predictive analytics, or automated decision-making.
    • Record purpose, data sources, affected individuals, and business owner for each.
  2. Classify systems by risk (a simple tiering helper is sketched after this roadmap)
    • High risk: affects jobs, money, health, housing, benefits, or legal rights.
    • Medium risk: significant personalization, access throttling, or content moderation.
    • Low risk: internal analytics and efficiency tools without individual-level impacts.
  3. Update privacy notices and disclosures
    • Implement the AI-focused privacy policy checklist above.
    • Add concise AI disclosures in hiring flows, credit flows, and other high-risk touchpoints.
  4. Strengthen vendor and customer contracts
    • Prioritize contracts covering high-risk AI or state government work.
    • Layer in non-discrimination, testing, transparency, security, and indemnity provisions.
  5. Launch bias and impact assessments
    • Start with one or two critical systems (for example, candidate screening and underwriting).
    • Document methodology, findings, and mitigation steps; repeat on a regular schedule.
  6. Train your teams
    • Educate HR, compliance, risk, and product teams on Texas privacy and AI expectations.
    • Provide scripts or guidelines for explaining AI decisions to candidates and customers.
  7. Establish governance and escalation paths
    • Create a cross-functional AI committee or working group (legal, compliance, IT, HR, product).
    • Set clear rules for approving new AI projects, responding to complaints, and reporting to leadership.
  8. Monitor Texas developments
    • Track outputs from the Texas AI Advisory Council and TDPSA enforcement actions.
    • Adjust your program as Texas agencies publish more detailed guidance, procurement rules, or regulations.
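
For step 2 of the roadmap, a small tiering helper can make the classification repeatable across teams. The domain lists and tier labels below are illustrative assumptions; the actual criteria should be set by legal and compliance, not engineering alone.

```python
# A minimal sketch of the roadmap's step 2 risk-tier classification.
# Domain lists and tier labels are illustrative assumptions.
HIGH_RISK_DOMAINS = {"employment", "credit", "health", "housing", "benefits", "legal_rights"}
MEDIUM_RISK_DOMAINS = {"personalization", "access_throttling", "content_moderation"}


def classify_risk_tier(decision_domains: set[str], individual_level_impact: bool) -> str:
    """Assign a coarse risk tier to an AI use case to prioritize assessments."""
    if decision_domains & HIGH_RISK_DOMAINS:
        return "high"
    if individual_level_impact and decision_domains & MEDIUM_RISK_DOMAINS:
        return "medium"
    return "low"


print(classify_risk_tier({"employment"}, individual_level_impact=True))            # high
print(classify_risk_tier({"personalization"}, individual_level_impact=True))       # medium
print(classify_risk_tier({"internal_analytics"}, individual_level_impact=False))   # low
```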


Disclaimer:
The information provided on this page is for general informational purposes only and does not constitute legal advice. While we strive to ensure the accuracy and relevance of the content, legal information may change over time, and interpretations of the law can vary. You should always consult with a qualified legal professional for advice specific to your situation.

We disclaim all liability for actions taken or not taken based on the content of this page. If you believe any information is incorrect or outdated, please contact us, and we will review and update it where appropriate.