Photo credit: BigStockPhoto
by Allan Barsky, JD, MSW, PhD
Artificial intelligence (AI) and other digital tools are playing increasingly prominent roles in social work practice—from communication and assessment to intervention, advocacy, documentation, and research (Reamer, 2025; Zhang & Yu, 2024). As these technologies evolve, it is essential for social workers not only to understand their potential uses, but also to recognize the limits of their own knowledge about various technologies, including their benefits, risks, and limitations. This balance mirrors the principles of cultural competence and cultural humility.
Standard 1.05 of the National Association of Social Workers (2021) Code of Ethics emphasizes both concepts. Cultural competence involves gaining the knowledge, skills, and awareness needed to work effectively with people from diverse backgrounds; cultural humility reminds us that learning is ongoing and requires continual self-reflection, acknowledgment of our biases (Barsky-Moore & Barsky, 2025), and respect for clients as experts in their own social identities, cultural attributes, and experiences. This article builds on these concepts by exploring the notions of technological competence and technological humility.
Definitions
Technological competence refers to a social worker’s ability to use AI, videoconferencing, electronic records, online social media, and other digital tools in ways that promote the well-being of individuals, families, groups, communities, and other social systems. Also called digital competence, it involves not only understanding how various technologies function, but also developing the practical skills needed to use specific tools for particular tasks or interventions (Fjeldheim et al., 2024). Although social workers do not need to know how to code or design AI systems, they should have a working understanding of how these technologies support practice, their potential benefits and limitations, and the ethical considerations involved in integrating them into professional work.
Technological humility complements technological competence by reminding social workers that no matter how skilled we become, our knowledge of technology—especially rapidly evolving tools like AI—will always be incomplete. Technological humility involves recognizing the limits of our expertise, staying curious about how tools actually function, and being honest with clients, colleagues, and ourselves about uncertainties or risks. It also requires ongoing learning, consultation with technology specialists when appropriate, and a willingness to adjust our practices when new information emerges. Just as cultural humility positions clients as experts in their own identities and experiences, technological humility encourages social workers to approach technology with caution, reflection, and respect for its potential impact on human lives. While it may be tempting to rely heavily on “smart machines,” social workers must remain mindful of the uncertainties and limitations of AI and other technologies, and ensure that human judgment, oversight, and accountability guide how these tools are used in practice.
Reamer (2022) suggests that ethical humility requires a form of deference—not just in being aware of one’s fallibility, but also in being deferential and modest about what one knows and does not know. Accordingly, when incorporating technology into practice, social workers should acknowledge what they do not know so that they can take appropriate precautions and proceed with due care.
A Case Example
Miranda is a graduate social work student completing her practicum at a community mental health agency. The agency has recently encouraged practicum students to integrate a CBT app that uses AI to help clients manage symptoms of depression and anxiety. Because Miranda has never used such an app before, she immediately recognizes that she has a significant learning curve before she can ethically recommend it to clients.
Drawing on principles of cultural humility, she begins by reflecting on her initial reactions. She acknowledges that she does not fully understand the algorithms used by the CBT app, so she needs to proceed cautiously, prioritizing ethical practice, client safety, and self-determination. Miranda worries that technology might replace parts of her role as a social worker and wonders whether clients will be able to build trust with an app when her training has emphasized the importance of the therapeutic relationship, human interaction, and empathy. At the same time, Miranda has read about the potential benefits of AI-assisted CBT interventions and is curious about how the app might enhance her work with clients.
After identifying her thoughts and concerns, Miranda takes the issue to her practicum supervisor, Sherese, for guidance. Sherese validates her mixed feelings. She suggests that they approach the situation as “scientist-practitioners” by gathering more information before implementing any new intervention. Together, they review the research provided by the app developers. The findings suggest that the app can be an effective supplement to in-person therapy.
Still, Miranda notes the possibility of bias, as the research has not yet been replicated by independent investigators, and there is limited information about how well the app works with different cultural groups. Even so, the materials help them understand the theoretical foundation of the app and how it was designed, allowing them to evaluate whether the CBT principles underlying the tool are evidence-informed.
Before recommending the app to clients, Miranda and Sherese decide to test it themselves. They schedule a demonstration with the developers, who show how the app can help clients in the following ways:
- create mood logs and identify patterns,
- flag negative automatic thoughts, triggers, and stressors,
- track mood trends over time, and
- receive suggestions for coping strategies such as deep breathing, yoga, or grounding exercises.
Sherese and Miranda discuss the importance of not simply referring clients to the app but integrating it thoughtfully into treatment. They note the importance of in-person counseling sessions to review what the client is learning from the app, as well as exploring possible concerns and collaboratively identifying which strategies are most helpful. In particular, clients may need to be observed as they learn how to use particular coping strategies. For instance, Miranda notes that a client practicing deep breathing incorrectly could experience hyperventilation, dizziness, or fainting.
Miranda and Sherese ask the developer specific questions about confidentiality and data security to ensure the app is HIPAA-compliant and includes appropriate privacy safeguards. They then conduct several role-plays to practice explaining informed consent, assigning homework, and processing the client’s experiences with the tool in follow-up sessions.
As they explore the app, they assess its cultural responsiveness and limitations. They notice, for example, that the app requires advanced English proficiency and is currently unavailable in other languages. They also reflect on possible cultural or socioeconomic biases embedded in the app’s prompts or suggested strategies. They check back with the developer to see if there is any additional information about potential cultural biases.
Sherese emphasizes the importance of informing clients about potential risks during the informed consent process. Research on the app’s effectiveness is still emerging, and the app does not include a mechanism for screening for suicidal ideation or other crises. As a result, Miranda will remain responsible for ongoing risk assessment, safety planning, and monitoring client well-being. Sherese also notes that apps are updated frequently, so staying informed about new features, changes, and research findings will be essential to practicing with both technological competence and humility.
Conclusion
AI and other digital technologies are tools that social workers can use in various aspects of practice while retaining human oversight and responsibility for client and community well-being. Whereas technological competence emphasizes building the knowledge and practical skills needed to use digital tools effectively, technological humility reminds us to stay mindful of the limits of what we know.
Humility helps us make informed, balanced decisions by acknowledging both the strengths and shortcomings of new and emerging technologies, as well as our uncertainties and ethical concerns. Humility invites us to seek additional research, experiment thoughtfully, monitor outcomes, and consult with supervisors, colleagues, and IT specialists to ensure our practice remains ethical, safe, and client-centered. As technology continues to evolve, technological competence and humility are not one-time achievements but, rather, lifelong commitments. Recognizing the ongoing challenges and opportunities of technology-assisted social work helps us grow as reflective, responsible social workers.
References
Barsky-Moore, J. A., & Barsky, A. E. (2025). Mitigating anchoring bias when using AI in social work research: Responsible conduct of AI-assisted research. International Journal of Social Work Values and Ethics, 22(1), Item 06. https://doi.org/10.55521/10-022-106
Fjeldheim, S., Kleppe, L. C., Stang, E., & Støren-Vaczy, B. (2024). Digital competence in social work education: Readiness for practice. Social Work Education, 43(6), 817-833. https://doi.org/10.1080/02615479.2024.2334800
National Association of Social Workers. (2021). Code of ethics. https://www.socialworkers.org/About/Ethics/Code-of-Ethics/Code-of-Ethics-English
Reamer, F. (2025). Artificial intelligence in the behavioral health professions: Ethical and risk management issues. NASW Press.
Zhang, H., & Yu, R. (2024). Considering a unified model of artificial intelligence-enhanced social work: A systematic review. Journal of Human Rights and Social Work, 9(2), 187-205. https://doi.org/10.1007/s41134-024-00326-y
Allan Barsky, JD, MSW, PhD, is Professor of Social Work at Florida Atlantic University and author of Social Work Values and Ethics (Oxford University Press).
The views expressed in this article do not necessarily represent the views of any of the organizations to which the author is affiliated, or the views of The New Social Worker magazine or White Hat Communications.