Fall 2025 Issue

Documentation Strategies

AI in Social Work: Emerging Ethical and Risk Management Issues

Social work's 19th-century pioneers could not have forecast that in the 21st century, the profession's practitioners would use AI to serve clients and others in need. In the late 1800s, during social work's inaugural years, the only way to serve vulnerable people was to meet with them face-to-face. Technology then was relatively primitive. Social work's earliest practitioners could not have imagined that someday they would be able to use AI to provide services to clients remotely without ever meeting them in person, upload AI-generated clinical documentation to something called the cloud, and use AI to provide supervision, among other uses. Yet here we are.

Today's social workers have the option to use AI to provide users, both clients and members of the general public, with clinical advice, crisis intervention, and resources; conduct client risk assessments; implement prevention efforts; document clinical services; identify systemic biases in the delivery of social services; provide social work education and supervision; and predict social worker burnout and service outcomes, among other uses.

These remarkable developments have generated considerable discussion and debate among social workers. Some practitioners embrace AI enthusiastically and celebrate its potential to extend social workers' reach to vulnerable people, accelerate the completion of administrative and research tasks, and enhance professional education and supervision. Other practitioners are wary and worry about the potential impact of this novel technology on social work's venerable commitment to human contact and relationship as core components of a helping profession. Still other social workers are ambivalent about AI, recognizing a complex mix of potential benefits and risks.

Whatever social workers' sentiments about the role of AI in the profession, it is essential that practitioners have a firm grasp of emerging ethical and risk management issues. Even social workers who reject AI as a tool are likely to work with clients who are using AI to address struggles in their lives and seek resources, as well as with colleagues who are using AI for various clinical and administrative purposes.

AI: A Precis

AI has its roots in the field of affective computing, also commonly referred to as emotion AI, a subfield of computer science that originated in the 1990s. AI options in social work include chatbots, social robots, machine translation, search engines, research tools, predictive analytics tools, speech-to-text tools, text recognition, speech generation, and image recognition and generation, all of which can assist people who struggle with mental health, substance use disorders, and other behavioral health challenges.

A chatbot is a computer program that simulates human conversation to respond to users' queries. When an individual who is struggling with a behavioral health issue reaches out, the chatbot can help them address their challenges. For example, a chatbot might provide a user who is experiencing depression or anxiety with assessment questions, self-help suggestions, resources, and referrals. Some chatbots connect users with a human agent, especially in high-risk situations. Chatbots rely on natural language processing, which entails speech recognition and text analysis to simulate human conversation and to create and understand clinical documentation.
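To make the escalation logic concrete, here is a minimal, hypothetical sketch in Python of a triage chatbot. The phrases, responses, and triage function are illustrative assumptions invented for this example, not a clinical product; real behavioral health chatbots use trained natural language processing models rather than simple keyword matching, along with clinical safety review and human oversight.

# Hypothetical sketch of a rule-based triage chatbot. Illustrative only;
# production systems use trained NLP models and clinical safety review.

# Assumed keyword lists: stand-ins for a real risk-detection model.
HIGH_RISK_PHRASES = {"suicide", "kill myself", "hurt myself", "end my life"}

RESPONSES = {
    "depress": "You mentioned feeling depressed. Would you like self-help resources or a referral?",
    "anxi": "Anxiety can feel overwhelming. Here are some grounding exercises and local resources.",
}

def triage(message: str) -> str:
    text = message.lower()
    # Escalate to a human agent in high-risk situations, as the article notes
    # some chatbots do. The 988 Suicide & Crisis Lifeline is a US resource.
    if any(phrase in text for phrase in HIGH_RISK_PHRASES):
        return "Connecting you with a crisis counselor now. If you are in immediate danger in the US, call or text 988."
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    return "Can you tell me more about what you're experiencing?"

if __name__ == "__main__":
    print(triage("I've been feeling depressed for weeks."))
    print(triage("I'm thinking about ending my life."))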
A social robot is a robot capable of interacting with humans and other robots. Social robots can provide emotional support, companionship, and personal assistance services to older adults, people with disabilities, children with special needs, and other vulnerable populations. Social robots using AI are often equipped with sensors, cameras, microphones, and other technology so they can respond to touch, sounds, and visual cues much as humans would. These robots can decipher facial expressions, engage in conversations, respond with a smile, read text and email messages, place video calls, tell stories and jokes, and track humans with their eyes.

Machine learning (ML) is a branch of AI and computer science that focuses on the use of data and algorithms to enable AI to imitate the way that humans learn. ML involves statistical learning and optimization methods that let computers analyze datasets and identify patterns; over time, ML models use these data to improve their accuracy. ML techniques use what is known as data mining to identify historical trends and inform future models. Because ML algorithms update autonomously, in theory their accuracy improves with each run as the algorithm teaches itself from the data it analyzes (a process known as iteration).

Powerful search engines use natural language processing and ML to enhance the quality of results returned in response to users' queries. Moving beyond popular search engines such as Google and Bing, sophisticated AI search engines use these techniques to generate text and image results. For example, users can post questions about behavioral health challenges they are experiencing and receive detailed information about diagnostic criteria, treatment options, and resources.

Predictive analytics tools use AI to extract insights from large volumes of data and forecast outcomes. These AI tools analyze large datasets to find patterns that can help predict future behavior, for example, the likelihood that a client with a substance use disorder will relapse or a parent will abuse a child.
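As an illustration of the predictive workflow just described, here is a minimal sketch in Python using scikit-learn. The feature names, data, and outcome labels are synthetic placeholders invented for the example; a real risk-assessment tool would require validated clinical measures, rigorous evaluation of false-positive and false-negative rates, and bias auditing.

# Hypothetical sketch of a predictive analytics workflow. All data here
# are synthetic placeholders; nothing below reflects real client records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic records with assumed, illustrative features:
# [prior_relapses, months_in_treatment, support_score]
X = rng.normal(size=(200, 3))
# Synthetic labels standing in for an observed outcome (1 = relapse).
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple model; retraining as new data arrive is the "iteration"
# by which ML accuracy is said to improve over time.
model = LogisticRegression().fit(X_train, y_train)

# The model outputs a probability, not a certainty; false positives and
# false negatives (noted later in the article) are unavoidable in practice.
print("Held-out accuracy:", model.score(X_test, y_test))
print("Relapse probability for one client:", model.predict_proba(X_test[:1])[0, 1])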
In addition, AI tools can recognize images and generate avatars that resemble human beings. Computer vision interprets images and nonverbal cues generated by clients, such as facial expressions, gestures, and eye gaze, to assess clients' communications and clinicians' responses.

Social workers who use these diverse AI tools face a number of key ethical considerations related to informed consent and client autonomy; privacy and confidentiality; competence; client surveillance; misdiagnosis; algorithmic bias; and plagiarism, dishonesty, and misrepresentation. Social workers who use AI should familiarize themselves with these ethical challenges and develop policies and protocols to protect clients and prevent lawsuits and licensing board complaints alleging negligent or unethical use of AI.

Informed Consent

If the information that the AI tool collects includes clients' personally identifiable information (PII), the social worker should confirm and disclose how long collected PII is stored; how, and for what purpose, PII is shared with or sold to other parties; and what precautions the provider of the AI tool has taken to protect collected data from inappropriate access.

Privacy and Confidentiality

Social workers who obtain confidential client data using AI tools must adhere to relevant NASW Code of Ethics standards (see section 1.07) and applicable laws. Key federal laws in the US include HIPAA; the Confidentiality of Substance Use Disorder Patient Records rule (42 CFR Part 2); the Family Educational Rights and Privacy Act (34 CFR Part 99); and laws governing social work in the US military branches and the VA. Social workers should carefully review service agreements, business associate agreements, and other agreements that govern the collection and use of client data.

Competence

The State of Utah has sponsored an ambitious and pioneering effort to define behavioral health practitioners' obligation to develop core competencies related to their use of AI. In 2024, Utah established the Office of Artificial Intelligence Policy under the auspices of the state's Department of Commerce. In 2025, this office released a comprehensive guide, "Best Practices for the Use of Artificial Intelligence by Mental Health Therapists," which includes cutting-edge recommendations to help practitioners navigate the complexities of AI technologies while safeguarding client well-being and upholding professional standards.

According to this guide, before incorporating an AI tool in the diagnosis or treatment of clients, practitioners should learn about the scope of the AI tool. When this information is not publicly available, the practitioner should ask the AI tool's provider directly whether the tool was developed with the needs and characteristics of the practitioner's client population in mind (for example, the goodness of fit between the AI tool's knowledge base and clients' age, race, ethnicity, gender, sexual orientation, gender expression, religion, socioeconomic background, and behavioral health challenges).

Further, social workers should recognize that new information about the performance, efficacy, and safety of an AI tool (including accuracy and the incidence of false-positive and false-negative outputs of tools using predictive AI technologies) may become available in the future. This information can come from various sources, including internal or external audits of AI tools or their providers, evaluations conducted by professional organizations, and studies conducted by AI tool providers, universities, and other institutions. Software updates and interactions with other software and hardware products can also affect an AI tool's effectiveness and limitations. It is incumbent upon social workers to make a reasonable effort to stay informed about the efficacy, safety, capabilities, limitations, and proper use of AI tools, including making adjustments if new information indicates that an AI tool is no longer effective or safe for its intended purpose. Social workers who use AI should regularly consult information technology specialists to ensure their continuing competence.

Client Surveillance

Algorithmic Bias

Plagiarism, Dishonesty, and Misrepresentation

Social workers and students who take advantage of these powerful AI tools must be sure to cite their sources and comply with the "fair use" doctrine to avoid allegations of plagiarism, dishonesty, and misrepresentation. Although using content generated by AI tools is not necessarily plagiarism, it is possible that these tools incorporate content from other authors whose work should be cited.

Seeking Guidance

• Codes of ethics: The current NASW Code of Ethics includes many standards addressing social workers' ethical use of technology to serve and communicate with clients.
• Social work technology practice standards: NASW, the Association of Social Work Boards, the Council on Social Work Education, and the Clinical Social Work Association jointly sponsored a task force that wrote a comprehensive set of technology-related, ethics-informed standards for the profession, "Standards for Technology in Social Work Practice."

• Social work licensing statutes and regulations: Many social work licensing and regulatory authorities have incorporated technology-related standards into their statutes and regulations.

• Statewide policies and guidance: Some states in the US have inaugurated efforts to develop and promulgate guidance for social workers and other behavioral health practitioners who use AI. The Utah Office of Artificial Intelligence Policy is a preeminent example.

• Academic resources: University-based institutes focusing on the use of AI in social work, such as the University of Southern California's Center for AI in Society, offer valuable resources and guidance, including results of research on practitioners' use of AI.

• International guidance: Several prominent international organizations have developed comprehensive and detailed guidance on the responsible use of AI. Key examples include UNESCO and the International Organization for Standardization.

Conclusion

The profession's earliest practitioners could not have imagined that today's social workers would use AI. How AI will shape social work's future is hard to forecast. As with any cutting-edge innovation, it will take time to fully identify and understand potential and actual benefits and challenges, formulate sound ethics guidelines, and create constructive risk management protocols. The emergence and proliferation of AI is yet another reminder that social work ethics challenges and related standards evolve.

— Frederic G. Reamer, PhD, LCSW, is professor emeritus in the graduate program of the School of Social Work at Rhode Island College. He's the author of many books and articles, and his research has addressed mental health, health care, criminal justice, and professional ethics.