California's AI Disclosure Laws: What Startups Need to Know
Part 1 of an AI Regulatory Compliance Series
Everyone is talking about the EU AI Act. But before looking abroad, U.S. startups should be paying attention to California's AI disclosure rules, several of which are already in effect or becoming operative this year.
California hasn't passed one big AI law. It's built a layered regime touching model development, content outputs, automated decisions, chatbot interactions, and healthcare communications. The right question isn't "does a California AI law apply to us?" — it's which ones.
Here's a map of the obligations most likely to matter to your startup right now.
Three Levels of Disclosure Duty
California regulates AI transparency at three distinct stages: how a model was trained (AB 2013), how AI content is labeled and traced (SB 942 / AB 853), and how AI decisions and interactions are disclosed to users (CCPA's ADMT rules, chatbot law SB 243, healthcare law AB 489). Many startups that aren't model developers still have real exposure at the third level — and that's the one most companies overlook.
The Obligation Most Startups Miss: ADMT Under the CCPA
If your product uses AI to make or substantially influence significant decisions about people (hiring, lending, housing, education, healthcare), California's automated decision-making technology (ADMT) rules apply to you if your business is covered by the CCPA, whether or not you train your own models.
Covered businesses must provide pre-use notice of the ADMT's purpose and the consumer's right to opt out, offer at least two designated opt-out methods, and conduct privacy risk assessments for high-risk processing. The compliance deadline is January 1, 2027, and enforcement fines run up to $7,500 per intentional violation. This is the California AI obligation most software companies are underprepared for.
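To make that concrete, here's a minimal TypeScript sketch of how a product team might model the pre-use notice and opt-out handling the rules contemplate. Everything here is illustrative: the interfaces, field names, and decision categories are ours, not the regulation's, and none of it substitutes for counsel's read of the final rules.

```typescript
// Illustrative only: interfaces and field names are ours, not the
// regulation's. Models the pre-use notice and opt-out handling the
// ADMT rules contemplate.

interface AdmtPreUseNotice {
  purpose: string;                // plain-language purpose of the ADMT
  significantDecision: string;    // e.g. "hiring", "lending", "housing"
  optOutMethods: [string, string, ...string[]]; // at least two designated methods
  moreInfoUrl: string;            // link to the fuller disclosure
}

interface OptOutRecord {
  consumerId: string;
  receivedAt: Date;
  method: string;                 // which designated method was used
}

// Gate the automated decision on opt-out status before it ever runs.
function canRunAdmt(
  consumerId: string,
  optOuts: Map<string, OptOutRecord>
): boolean {
  return !optOuts.has(consumerId);
}
```

Note the tuple type on optOutMethods: encoding "at least two methods" in the type system is one way to keep the requirement from silently regressing as the product evolves.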
AB 2013 — Training Data Transparency
If you've trained or fine-tuned a generative AI model available to California users — free or paid, since January 1, 2022 — you must publish a disclosure on your website describing, at a high level, what data you used. Required elements include data sources and categories, approximate scale, whether copyrighted or personal data was used, and whether synthetic data was involved. The disclosure must be updated with each substantial model modification.
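The statute specifies the substance of the disclosure, not its format. As a starting point, here's one hypothetical way to structure the required elements in TypeScript; the shape and example values are ours alone, and the statutory text controls what must actually appear.

```typescript
// Hypothetical shape for an AB 2013 training-data disclosure. The statute
// dictates substance, not format, so this structure and its example values
// are ours alone.

interface TrainingDataDisclosure {
  modelName: string;
  lastUpdated: string;              // refresh on each substantial modification
  dataSources: string[];            // high-level, e.g. "licensed news archives"
  dataCategories: string[];         // e.g. "web text", "code", "images"
  approximateScale: string;         // e.g. "on the order of trillions of tokens"
  includesCopyrightedData: boolean;
  includesPersonalInformation: boolean;
  includesSyntheticData: boolean;
}

const disclosure: TrainingDataDisclosure = {
  modelName: "example-model-v2",    // hypothetical model
  lastUpdated: "2026-01-15",
  dataSources: ["publicly available web data", "licensed datasets"],
  dataCategories: ["text", "code"],
  approximateScale: "on the order of trillions of tokens",
  includesCopyrightedData: true,
  includesPersonalInformation: true,
  includesSyntheticData: true,
};
```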
There's no explicit penalty schedule, but noncompliance creates exposure under California's Unfair Competition Law — under which the AG needs no injury showing at all, and private plaintiffs need only show lost money or property. A pending constitutional challenge by xAI is worth watching, but OpenAI and Anthropic have already published compliant disclosures at a level of generality that doesn't appear to expose proprietary methodology. The practical standard: be specific enough to demonstrate real compliance, high-level enough to protect trade secrets, and have counsel review before you publish.
SB 942 / AB 853 — Content Watermarking and Provenance
Providers of generative AI systems with over 1 million monthly California users must offer users manifest (visible) and latent (metadata) disclosures on AI-generated content, provide a free public AI detection tool, and contractually require downstream licensees to preserve those capabilities. The operative date — originally January 1, 2026 — was pushed to August 2, 2026 by AB 853, and Governor Newsom's signing statement signaled further amendments are possible before then.
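What a latent disclosure carries is metadata attached to the content itself. The sketch below is a rough TypeScript illustration of that idea; the field names paraphrase the statute's general categories rather than quote them, and a real deployment would embed this through an established provenance standard such as C2PA rather than an ad hoc object.

```typescript
// Rough illustration of a latent (metadata) disclosure. Field names
// paraphrase the statute's general categories rather than quote them;
// a real system would embed this via a provenance standard like C2PA.

interface LatentDisclosure {
  providerName: string;   // who provided the generative AI system
  systemName: string;
  systemVersion: string;
  createdAt: string;      // ISO 8601 timestamp of generation
  contentId: string;      // unique identifier for this piece of content
}

// Serialize the disclosure for embedding in the file's metadata
// (e.g. XMP or a C2PA manifest) so it survives ordinary distribution.
function serializeDisclosure(meta: LatentDisclosure): string {
  return JSON.stringify(meta);
}
```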
AB 853 also added two new tiers worth noting: large online platforms (2M+ monthly users) and generative AI hosting platforms (those distributing model weights or source code) come into scope starting January 1, 2027. If you distribute open-weight models or are building a content platform at scale, that second wave applies to you.
Below those thresholds? You may still have downstream obligations. If you're building on top of a covered provider's model, check your API terms — watermarking obligations can flow through licensing agreements.
Sector-Specific Rules: Chatbots and Healthcare
SB 243 requires operators of AI companion chatbots — systems designed to meet users' social needs through human-like, persistent interaction — to clearly disclose the AI nature of the interaction, with heightened requirements for minors. Standard customer service bots, virtual assistants, and business function tools are explicitly excluded.
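In product terms, the disclosure is a gate in your chat pipeline. Here's a deliberately simplified TypeScript sketch; the disclosure wording and the reminder interval for minors are placeholders we chose for illustration, not statutory text, so check the bill's actual requirements before shipping anything like it.

```typescript
// Minimal sketch of an SB 243-style disclosure gate for a companion chatbot.
// The disclosure text and reminder interval are placeholders of our own,
// not statutory language.

const DISCLOSURE = "You are chatting with an AI, not a human.";
const REMINDER_MS = 3 * 60 * 60 * 1000; // illustrative cadence for minors

interface ChatSession {
  isMinor: boolean;
  lastDisclosureAt: number; // 0 if the disclosure hasn't been shown yet
}

// Prepends the disclosure on the first reply, and periodically for minors.
function withDisclosure(session: ChatSession, reply: string, now: number): string {
  const firstMessage = session.lastDisclosureAt === 0;
  const minorReminderDue =
    session.isMinor && now - session.lastDisclosureAt >= REMINDER_MS;
  if (firstMessage || minorReminderDue) {
    session.lastDisclosureAt = now;
    return `${DISCLOSURE}\n\n${reply}`;
  }
  return reply;
}
```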
AB 489 (effective January 1, 2026) prohibits AI systems from using terms that imply a user is receiving care from a licensed health professional when no human oversight exists — including in advertising. For health-adjacent startups, this is a marketing constraint, not just a product one.
SB 53 — Frontier AI (Probably Not You, But Worth Understanding)
SB 53's heaviest obligations fall on "large frontier developers": companies training models above 10^26 computational operations with annual revenues over $500M. Most startups aren't close. But all frontier model developers, regardless of revenue, must publish transparency reports at deployment. And unlike AB 2013, SB 53 includes an explicit redaction mechanism for trade secrets, which is notable because it shows California knows how to build that protection in when it wants to. Its absence from AB 2013 is deliberate.
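If you want a gut check on how far you are from that threshold, the common back-of-envelope estimate for dense transformer training compute is roughly 6 × parameters × training tokens. That heuristic isn't how the statute measures compute, but it puts the scale in perspective:

```typescript
// Back-of-envelope check against SB 53's 10^26-operation threshold, using
// the common ~6 * parameters * training-tokens FLOP estimate for dense
// transformers. A rough heuristic, not how the statute measures compute.

const SB53_THRESHOLD = 1e26;

function estimateTrainingFlops(parameters: number, trainingTokens: number): number {
  return 6 * parameters * trainingTokens;
}

// Example: a 70B-parameter model trained on 15 trillion tokens.
const flops = estimateTrainingFlops(70e9, 15e12); // ~6.3e24
console.log(
  flops >= SB53_THRESHOLD
    ? "Potentially in frontier territory under SB 53"
    : "Well below the SB 53 threshold"
);
```

Even at that scale you land more than an order of magnitude under the line; crossing it, not revenue, is what first pulls a developer into SB 53's transparency-report duty.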
Where to Start
The compliance question isn't "which California AI law is the big one?" It's: where in our product lifecycle does California require us to say more than we're currently saying?
Mapping that honestly — across training data, content outputs, automated decisions, and user interactions — requires a statute-by-statute analysis, not a single AI policy. The January 1, 2027 ADMT deadline is closer than it looks, and building compliant notice and opt-out flows into a product retroactively is significantly harder than designing for them from the start.
If you'd like help working through which obligations attach to your specific product and use cases, we'd be glad to talk.
Next in the series: The EU AI Act — What Startups Actually Need to Know.
This post is for informational purposes only and does not constitute legal advice.
