Thursday, March 5, 2026

New York Wants to Make It Illegal for AI to Answer Your Medical Questions

Senate Bill S7263 would hold AI companies liable when chatbots provide substantive responses in any of 14 licensed professions. The bill's language is broader than the headlines suggest—and it's the wrong approach at the wrong time.

Image generated by OpenAI GPT Image 1.5

New York Senate Bill S7263, sponsored by Senator Kristen Gonzalez (D-Queens), passed the Senate Internet and Technology Committee unanimously and advanced to third reading on March 4, 2026. It would make AI companies liable whenever a chatbot provides a “substantive response” in any of 14 licensed professions—including medicine, law, psychology, nursing, pharmacy, engineering, architecture, and social work.

The bill’s sponsor memo frames this as public safety. But the sponsor herself described the broader legislative package as tackling the “urgent need to protect the workforce from their companies’ use of AI.” That’s a tell. This isn’t primarily about safety. It’s about insulating licensed professionals from competition.

I read the actual bill text. It’s worse than the headlines suggest.

What the Bill Actually Says

The bill adds Section 390-f to New York’s General Business Law. Here’s how it works.

Chatbot is defined as “an artificial intelligence system, software program, or technological application that simulates human-like conversation and interaction through text messages, voice commands, or a combination thereof to provide information and services to users.”

Proprietor means “any person, business, company, organization, institution or government entity that owns, operates or deploys a chatbot system used to interact with users.” Third-party developers who license their technology are explicitly excluded.

The core prohibition: a proprietor “shall not permit such chatbot to provide any substantive response, information, or advice, or take any action which, if taken by a natural person” would constitute unauthorized practice of a licensed profession under New York’s Education Law or Judiciary Law.

The bill covers professions governed by 13 articles of the Education Law plus the Judiciary Law:

Article                  Profession
131                      Medicine
133                      Dentistry
135                      Veterinary Medicine
136                      Physical Therapy
137                      Pharmacy
139                      Nursing
141                      Podiatry
143                      Optometry
145                      Engineering and Land Surveying
147                      Architecture
153                      Psychology
154                      Social Work
163                      Mental Health Practitioners
Judiciary Law Art. 15    Law / Attorney Practice

That’s 14 professions. And the kicker: disclaimers don’t help. The bill says a “proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system.” Enforcement is through private civil action—anyone can sue for actual damages, attorneys’ fees, and costs.

“Substantive Response” Is Doing a Lot of Work

The bill never defines “substantive response.” That’s the most important term in the entire statute, and it’s left completely open.

Ask ChatGPT “what’s the difference between ibuprofen and acetaminophen?” Is that a substantive medical response? What about “should I see a doctor for chest pain?” Or “what does this rash look like?” Every frontier model on the market can answer these questions. Under this bill, doing so in New York could trigger liability.

The same ambiguity applies across every covered profession. Ask an AI to review a contract clause—legal advice. Ask it to help size a structural beam—engineering. Ask it whether your kid might benefit from therapy—psychology or social work or mental health counseling, take your pick.

The sponsor memo justifies the bill by citing the American Psychological Association’s warning about chatbots “masquerading” as therapists. That’s a real concern. But the bill’s response isn’t to regulate AI therapy apps—it’s to ban substantive responses across 14 entire professional domains.

The API Question

Most coverage of this bill talks about “chatbots,” which conjures images of someone typing questions into ChatGPT. But the definition is broader than that: “an artificial intelligence system, software program, or technological application that simulates human-like conversation and interaction.”

What about a healthcare startup building a triage tool on top of Claude’s API? A legal tech company using GPT to summarize case law? A nursing education platform that lets students practice clinical reasoning with an AI patient? All of these “simulate human-like conversation” and provide “information and services to users” in covered professions.

The bill holds the “proprietor”—the entity that “owns, operates or deploys” the chatbot—liable. The third-party developer exclusion means Anthropic and OpenAI might escape direct liability if they’re only licensing model access. But the startup deploying the model? They’re the proprietor. They’re on the hook.
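To make the proprietor question concrete, here is a minimal sketch of the kind of triage tool described above, assuming the Anthropic Python SDK; the model ID, prompt, and function are placeholders for illustration, not a real product.

```python
# Hypothetical triage tool built on a licensed model API. Under S7263,
# the startup running this code -- not the model vendor licensing the
# API -- is the "proprietor" that owns, operates, and deploys it.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a symptom triage assistant. Based on the user's description, "
    "say whether they should go to the ER, see a doctor soon, or self-monitor."
)

def triage(symptoms: str) -> str:
    """Return triage guidance -- plausibly a 'substantive response' in
    medicine under the bill, regardless of any disclaimer in the UI."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=300,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": symptoms}],
    )
    return response.content[0].text
```

Nothing in that sketch masquerades as a doctor; it routes a question to a general-purpose model. But it simulates human-like conversation and provides information to users, which is all the bill's definition requires.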

This creates a chilling effect on exactly the kind of AI applications that could improve access to professional services. New York wouldn’t just be regulating consumer chatbots. It would be raising the legal risk of building any AI-powered tool that touches these 14 professions.

The Real Problem: 50 States, 50 Rules

Even if you think some regulation of AI in licensed professions is warranted—and I think reasonable people can disagree on that—this is the wrong approach.

In 2025, state lawmakers across all 50 states introduced more than 1,080 AI-related bills. Only 118 became law, but that’s still 118 different rules from 118 different legislative processes. We are heading toward a patchwork of 50 states with different AI requirements, different definitions of “chatbot,” different lists of protected professions, and different liability frameworks. New York’s bill covers 14 professions. California’s SB-243 takes a disclosure-first approach. Utah requires disclosure when users interact with AI but doesn’t ban the interaction. Other states will do something else entirely.

If you’re building an AI product, you now need to know: Does this state consider a triage chatbot to be practicing medicine? Does that state’s definition of “chatbot” cover my API-based tool? Can I deploy this nursing education app in New York but not the version that gives clinical feedback?
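For a product team, that patchwork quickly turns into a routing table. Here is a hedged sketch of what multi-state compliance logic starts to look like; the entries caricature the approaches described above and are illustrative, not legal summaries.

```python
# Illustrative per-state policy table. Entries caricature the approaches
# discussed above (NY: refuse substantive responses in covered professions;
# CA/UT: disclosure-first) and are NOT summaries of the actual statutes.
from dataclasses import dataclass

@dataclass(frozen=True)
class StatePolicy:
    blocked_domains: frozenset = frozenset()  # topics to refuse outright
    disclosure_required: bool = False         # must announce the AI is an AI

POLICIES = {
    "NY": StatePolicy(blocked_domains=frozenset(
        {"medicine", "law", "nursing", "psychology"}  # plus 10 more professions
    )),
    "CA": StatePolicy(disclosure_required=True),
    "UT": StatePolicy(disclosure_required=True),
}

def gate(state: str, domain: str) -> str:
    """Decide how to handle a query given the user's state and topic."""
    policy = POLICIES.get(state, StatePolicy())
    if domain in policy.blocked_domains:
        return "refuse"
    return "answer_with_disclosure" if policy.disclosure_required else "answer"
```

Every new statute is another entry, another definition of "chatbot" to parse, and another failure mode for anyone shipping nationally.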

The Trump administration saw this coming. In December 2025, President Trump signed an executive order directing the DOJ to create a task force to challenge state AI laws in federal court and threatening to withhold broadband funding from states with “onerous” AI regulations. The order even called for Congress to pass legislation preempting state AI laws outright.

At this stage of AI development, I think the administration had the right instinct. The technology is evolving too fast for state legislatures to regulate competently, and premature regulation locks in assumptions that will be obsolete in six months. Individual states crafting AI policy in isolation creates compliance mazes that benefit large incumbents (who can afford 50-state legal teams) and punish startups and smaller players. S7263 is a textbook example. New York is doing exactly what the executive order warned about, and the result is a bill that would make it legally dangerous to build AI tools in 14 professional domains—but only if you serve New York users.

Who This Actually Hurts

The AFL-CIO’s Mario Cilento supported this bill, arguing AI “shouldn’t replace human judgment or jobs.” That’s the quiet part said loud. This is protectionism dressed up as consumer safety.

The people who benefit most from AI answering medical, legal, and mental health questions aren’t the ones who already have a doctor, a lawyer, and a therapist. They’re the people who can’t afford one. A single mother in the Bronx who asks an AI whether her kid’s symptoms warrant an ER visit at 2 a.m. An immigrant who needs to understand a lease agreement but can’t afford $400 an hour for an attorney. A teenager in a mental health crisis who won’t call a hotline but will talk to an AI.

Research has already shown that AI companion use can reduce anxiety, depression, and loneliness. Senator Gonzalez's sponsor memo didn't mention that.

I’m not arguing that AI should practice medicine or law without any guardrails. I’m arguing that making it illegal for an AI to give a “substantive response” about 14 professions—with no definition of what that means, no safe harbors for educational or informational use, and no federal coordination—is the wrong tool for the job.

Where I Land

If this bill becomes law, the immediate practical effect won’t be safer consumers. It’ll be AI companies geofencing New York or adding aggressive refusal behaviors for New York users. “I’m sorry, I can’t answer questions about medication interactions for users in New York.” That makes everyone worse off.

The right approach is federal standards with clear definitions, safe harbors for informational and educational use, disclosure requirements, and liability frameworks that distinguish between an AI pretending to be your therapist and an AI helping you understand what questions to ask your therapist. That’s a meaningful distinction. S7263 doesn’t make it.

New York is rushing to protect professionals from competition and calling it consumer safety. I’d rather see lawmakers protect consumers’ access to information—even when that information comes from a machine.