
Spring 2026 Issue

What to Consider When Considering AI
By Sue Coyle, MSW
Social Work Today
Vol. 26 No. 2 P. 14

AI and other technology offer social workers providing remote services a bevy of tools to aid their practice, but clinicians must understand the risks and regulations that come with them.

When Shannon Miller, LCSW, MEd, clinical director and owner of Apricity Expat Therapy, first began offering telehealth services nearly 10 years ago, her introductory email to clients included an explanation of the EHR/practice management platform she used and of Zoom. At that time, most people had limited experience not only with patient portals but with videoconferencing platforms as well.

Since then, virtual mental health services have become increasingly common, thanks in part to the COVID-19 pandemic pushing nearly all services online temporarily. In fact, researchers found that in 2021 and 2022, a time when COVID restrictions were lessening significantly, nearly 28% of adult mental health outpatient clients used telemental health services exclusively. And in 2024, an estimated 56.9% of psychiatrists used videoconferencing for more than 20% of their appointments.

The uptick in virtual counseling services means that clients are more practiced in the technology utilized. “I don’t have to explain how to use Zoom anymore. Everybody kind of knows,” Miller says. “And people are more comfortable with the telehealth environment.”

It also means that providers have more tools available to them, as tech companies work to keep pace. Those tools, which include varying forms of AI, should not be adopted blindly, however. For social workers to make effective use of the technology available to them, they must not only understand what the tools can do but what the limitations and regulations surrounding them are, as well—details that can be even more complex when social workers provide remote services across state lines.

Evolving Technology Options
One of the reasons technology, and AI specifically, is so appealing to social workers is efficiency. Social workers spend a significant amount of time on paperwork—everything from session notes to clinical assessments and treatment plans. It is estimated that, on average, health care clinicians (both physical and mental health) spend at least one-third of their working time on paperwork each week. “That administrative burden is real and significant,” says Lauri Goldkind, PhD, LMSW, editor in chief of the Journal of Technology in Human Services, founder of PAIRED Lab, and a professor at the Graduate School of Social Service at Fordham University in New York. “Every time someone has a session, especially if you’re dealing with insurance companies and/or Medicare or Medicaid, you have to document that session and the outcomes from that session and how that session fits in with the arc of a treatment goal and a treatment plan.”

Thus, it is no surprise that the most popular AI programs have to do with note taking. “Probably the biggest incursion of AI tools in the social work arena is in automated note taking and ambient note taking,” Goldkind describes. “[These are systems] where someone is recording the sessions—so if you’re in a Zoom or SimplePractice session, you’re able to record that session or you have your phone recording—you take the recording, upload it into a platform or push the button on the platform, and it transcribes that file almost automatically. I would say within less than a minute. It’s pretty remarkable.”

Some systems may also help generate notes, treatment plans, and treatment goals, using the transcriptions to make recommendations or suggestions to the clinician. Clinicians using these tools are expected to review the transcripts of their sessions for accuracy and to treat AI-generated clinical suggestions as just that: suggestions. Human experience and education should lead the way in diagnosis and planning.

In fact, many of the companies offering AI features emphasize the importance of clinician review. SimplePractice, the EHR platform that Goldkind mentioned, describes itself as an all-in-one system with a client portal, billing software, and an AI note-taking feature that transcribes and drafts notes. In regard to note taking, SimplePractice cautions, “It’s critical that clinicians aren’t skipping this step of editing the AI-assisted note. There is a possibility that AI may misinterpret what was said in session. Additionally, it’s possible that sensitive information may need to be reworded in a note for legal or insurance purposes.”1

If a social worker does not record a session, they can still use a tool to dictate notes into their dedicated platform.

Additionally, social workers may use AI and other tools to help craft emails, locate resources, and schedule clients, sending automated reminders, new-client paperwork, and links for remote sessions.

“Generative AI can offer meaningful advantages to social workers when it’s used thoughtfully,” says Brian Christenson, LMSW, LGSW, CAP, assistant dean of nursing and health sciences and social work department at Capella University, an online institution based in Minneapolis, noting that it can give social workers “the ability to streamline time-consuming tasks so social work professionals can focus on patient and client care.”

Due Diligence
While these tools offer a variety of ways to relieve the administrative burden for social workers, they should be adopted with an informed caution that takes into account the potential risks to both client and clinician. “A lot of our members and a lot of clinical social workers are not familiar with what it means to be using AI as a tool to help with scheduling or note taking or any of the ways it can be used. It looks like something that is benign [but there are concerns],” says Laura W. Groshong, LICSW, director of policy and practice for the Clinical Social Work Association.

“The main concern we have about using AI during sessions is confidentiality. Anything that gets recorded in any form can be used for any variety of purposes,” she continues.

It is vital that social workers rely on tools that are HIPAA compliant when entering or recording sensitive client information. Many tools advertise themselves as such, but social workers should not simply take a marketing team’s word for it. They must take the time to learn what safeguards have been put in place to ensure client privacy.

Additionally, social workers must not enter client information into public software. ChatGPT, for example, does not promise confidentiality and recommends against inputting sensitive personal data.

Groshong also adds that it is not always clear what a company will do with client data once it is input into their system. Even when a company is keeping information secure within their platform, data may still be used to help refine that organization’s own AI tools. “Any new data they get becomes a part of their system,” says Groshong, noting that this fact is not broadly advertised to users. “I’d say that is not something that is part of the disclosure statements that AI companies use. It’s more positive like ‘this will help develop better ways for people to use mental health treatment [implying that] there’s nothing harmful about it.’”

Social workers should seek out information on how data is used and if they are able to opt out if desired. This information may be readily available or may be found in the finest of print.

And it’s not just the data social workers input that requires caution. When social workers use AI to generate treatment plans or suggested diagnoses, they must be aware of how the system is creating such recommendations. “This actually turns out to be a really sticky wicket,” Goldkind says. “It’s quite hard to get from the vendor [answers about] ‘how is this model trained? What kind of information did you use to train this model?’ We are really at the mercy of the vendor to be forthcoming and transparent about how their product is trained.”

This information is vital, as social workers need to be assured that any diagnostic recommendations, for example, are coming from the most up-to-date resources. Even when they are, however, social workers must also be aware that “These systems can amplify biases present in data, which may lead to inaccurate outputs,” says Christenson, who emphasizes that “There’s also the risk of overreliance, where social workers might unintentionally substitute AI-generated suggestions for their own professional judgment or the lived experience of clients.

“A human-in-the-loop approach continues to be critical, positioning AI as a copilot that enhances rather than replaces human judgment and expertise.”

Clinician Liability
Additionally, social workers must be aware of who is responsible if there is a breach of confidentiality or an error in the information entered or the recommendations generated, particularly when that information is being sent to a third party, such as an insurer, or when a client’s well-being is at stake.

The Clinical Social Work Association recommends that social workers selecting vendors with AI tools ask “Will the company sign a business associate agreement that they and their AI programs will be legally responsible for, and that they will abide by HIPAA laws and protect the privacy of communication of LCSWs and clients?”2 Such an agreement makes the vendor legally liable for maintaining confidentiality.

Without that type of agreement, there is not a lot of consumer protection in place in the United States. Should an individual experience harm due to an AI-suggested treatment plan, for example, “The platform is not going to be liable,” Goldkind says. While she says that she has not read an end user agreement recently for such a platform, “My guess is they hold themselves harmless. They would say it’s for recommendation purposes only or there will be a disclaimer that the liability rests at the professional’s discretion.

“I think it’s a lot like BetterHelp. When you read the BetterHelp user agreements on the professional side, [they] say they are not providing therapy; they are just providing the matching service,” she adds.

The question then becomes, does a clinician’s insurance cover AI-related errors? As AI continues to evolve, this is something that all parties—clinicians, insurers, and legal counsel—continue to grapple with. Some coverage plans include explicit clauses about AI in their policies, while others do not, creating a gray area. Vantage Point, a risk management resource publication, recommends that providers:

• “Consult legal counsel and insurance company representatives to gauge how AI-enabled technologies may affect the organization’s risk profile and insurance coverage needs.

• In the event of lawsuits alleging AI-related error, preserve machine-learning algorithms, so that they may be examined for data validity, presence of bias and compliance with the applicable standard of care.”3

State-to-State Laws and Regulations
Understanding legal responsibilities can be even more complex when a social worker is providing remote services across state lines. As is often the case, regulations around the use of AI in therapeutic settings vary from state to state. At present, “There are seven states that limit the use of AI. There are nine states that have laws pending,” Groshong says.

In Illinois, for example, the Wellness and Oversight for Psychological Resources Act was signed into law in August 2025. This law prohibits AI from acting as a therapist and limits how it can be used in a therapeutic setting. AI can help with scheduling and billing but cannot generate therapeutic recommendations or plans without human review, for instance.

“The intent is that AI can be a tool for professionals, not a participant in the therapeutic relationship. Think of it like a very advanced reference system. It can help with background research, documentation, drafting materials, or organizing information, but the licensed professional remains the one exercising judgment and making decisions,” explains Kyle Hillman, director of legislative affairs for the NASW-Illinois Chapter.

Illinois law also requires informed consent. Clinicians must obtain written and informed consent from their clients before using AI to record, transcribe, or analyze sessions. On the other hand, Nevada, which passed a similar law, does not require consent.

Nonetheless, the Nevada branch of NASW recommends social workers incorporate it, advising they “Update informed consent documentation to clearly explain any use of AI for administrative tasks.”4

But as it is optional in Nevada, the discrepancy illustrates how a social worker may face greater legal risk when using AI in different states without first educating themselves, and continuing to do so as the laws evolve.

“This is a moving target,” Hillman says. “AI is evolving faster than any legislature can respond, so this [the laws in Illinois and Nevada] should be seen as a foundation, not the final word. We’ll almost certainly revisit issues like data use, interstate practice, and new forms of generative systems. But establishing that therapy requires a human professional was the critical first step.”

Staying Up to Date
How then can a social worker best stay informed? It requires effort, particularly when a social worker is offering services in multiple places. Miller, while currently based in Lancaster, Pennsylvania, specifically serves Americans living abroad. She has clinicians who live all over the world, as well.

“Staying on top of these rules is baked into the weekly, monthly workload of what we do. There are a couple of different apps that I use, and of course I check the government website and stay on top of the Social Work Compact,” she says of the laws surrounding telehealth specifically. She recommends Telehealth.org, the NASW website and its local chapter sites, as well as Epstein Becker Green’s Telemental Health Laws app.

The Clinical Social Work Association has also written numerous pieces on AI and telehealth and works to keep its members up to date on laws and regulations.

Additionally, Hillman says, “At a minimum, clinicians should stay in close contact with their licensing board and their professional association, because those are the entities interpreting how new laws apply to practice. For anything even remotely complex, it’s also wise to consult legal counsel or a risk management professional. These regulations touch privacy, liability, and professional conduct, so guessing is not a good strategy.”

For a social worker providing remote services, there is a lot to keep track of. Technology can help. However, for it to be a true help, that technology must be in compliance with the laws and regulations surrounding it, and it is up to the social worker to know what that means.

— Sue Coyle, MSW, is a freelance writer and social worker in the Philadelphia suburbs.


References
1. McGeehan B. AI for therapists. SimplePractice website. https://www.simplepractice.com/blog/ai-for-therapists/. Published March 4, 2025.

2. Artificial intelligence’s impact on child development: a clinical social work perspective. Clinical Social Work Association website. https://www.clinicalsocialworkassociation.org/alerts/13586813. Published January 18, 2026.

3. Artificial intelligence: a second look at an evolving technology. Vantage Point; 2025(2). https://www.hpso.com/getmedia/6e0253a8-4a53-44c6-aa0c-86d7b83f52a0/CNA_VP25-Artifical-Intelligence_COB.pdf.

4. AI regulation in Nevada: what you need to know. NASW Nevada website. https://naswnv.socialworkers.org/Professional-Development/AI-Regulation. Updated 2025.