The landscape of mental health support is rapidly evolving, and with it comes the urgent need for regulations regarding the use of artificial intelligence (AI) in therapeutic settings. Recently, Illinois Governor JB Pritzker signed a groundbreaking measure that prohibits AI from functioning as a therapist or counselor, confining its role to administrative and support tasks only. This is a pivotal moment for ensuring patient safety amidst the increasing use of AI in healthcare.
As states and federal agencies grapple with these changes, the Wellness and Oversight for Psychological Resources Act clearly addresses AI’s limitations. The new law safeguards patients by disallowing any therapy services, whether offered in person or through AI, unless they are supervised by a licensed professional. Notably, AI is explicitly barred from making independent therapeutic decisions, generating treatment plans without a licensed provider’s approval, or interpreting emotional or mental states.
What does this mean for AI platforms? While they can continue to assist with administrative tasks such as scheduling appointments, processing billing, or maintaining therapy notes, the potential for misuse remains a concern. Violators of this law could face fines of up to $10,000.
Mario Treto, Jr., secretary of the Illinois Department of Financial and Professional Regulation, emphasized the importance of qualified healthcare in a recent press release, stating, “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients.”
Other States Taking Action on AI Regulations
Illinois isn’t alone in its efforts. Other states are stepping up to regulate the use of AI in mental health. For instance:
- In June, Nevada banned AI from delivering therapy or behavioral health services that would otherwise be performed by licensed professionals, including in public schools.
- Utah has implemented its own regulations, mandating that mental health chatbots disclose to users that they are not human. This requirement kicks in upon first use, after seven days of inactivity, or when asked by the user.
- Meanwhile, a New York law taking effect on November 5, 2025, will require AI companions to direct anyone expressing suicidal thoughts to a mental health crisis hotline.
These initiatives are part of a broader response to concerns raised by the American Psychological Association (APA) about potential risks stemming from AI impersonating therapists. A recent blog post from the APA highlighted distressing incidents, including two lawsuits from parents involving children who used chatbots that claimed to be licensed therapists. Tragically, one case involved a boy who died by suicide after using an app extensively.
What You Need to Know About AI in Mental Health
As a consumer, you may wonder what this legislation means for you:
Can AI provide mental health support in the future?
While current laws restrict AI's role to administrative support, more integrated uses may emerge as the technology and regulations evolve, provided they comply with licensing and oversight requirements.
Are there other technologies involved in mental health support?
Yes, many apps and online services aim to offer mental health support; however, when they provide therapeutic services they must comply with the same state regulations designed to ensure user safety.
How can I stay informed about new AI regulations?
Following reputable news sources and mental health organizations can keep you updated on emerging laws and technologies affecting mental health care.
As AI continues to permeate various fields, including mental health, Illinois’ new legislation is a significant step towards safeguarding patient welfare. It reinforces the necessity of human interaction in therapy while subtly reminding us of the limitations of technology.
Stay engaged and informed as the situation develops, and feel free to explore more about the intersection of technology and mental health on platforms like Moyens I/O.