by Allan Barsky, JD, MSW, PhD
Javier, Ulla, Sylvie, and Tyrone (JUST) are working on a group presentation for an online course on the use of cognitive behavioral therapy (CBT) in social work practice. They decide to use artificial intelligence (AI) programs to develop their presentation, including the content and manner of presentation. They first ask ChatGPT to develop a written introduction to the use of CBT. They then use another AI program to develop avatars that can present the introduction, as well as a professional-looking role-play to demonstrate the use of CBT in an interaction between avatars playing the roles of social worker and client. The avatars look and sound just like Ulla and Tyrone. JUST also sets up a chatbot using AI to answer text-based questions submitted by other class members. Avatars looking and sounding like Javier and Sylvie respond to the questions. We know that these students are JUST (pun intended), but are they acting ethically?
Is It Inherently Unethical to Use AI?
People who question BSW or MSW students’ use of AI to complete assignments may view such behavior as dishonest: “Students are supposed to do the work themselves and submit their own original work. Submitting the work of others, including AI, is dishonest.” Course syllabi and university policies have long prohibited plagiarism (submitting the work of others and presenting it as one’s own). Until recently, however, syllabi and policies did not specifically address the use of AI to assist with or complete assignments.
Certainly, if a professor asks students not to use AI for assignments and students do so surreptitiously, then the students have violated the principle of academic integrity (cf. NASW, 2021, s. 4.04). The violation is not simply that the students used AI, but that they were dishonest with their professor and breached the terms of the syllabus (which functions like a contract between the professor and students). This raises the question: Are there ways students can use AI that are not only ethical, but also more beneficial than completing assignments without it?
Should Professors Prohibit or Discourage Use of AI?
Some professors have started using programs such as Turnitin to help identify whether students are using AI in their written assignments. This approach focuses on identifying students who cheat, with the aim of discouraging cheating and providing evidence when professors believe students need to be held accountable. Other professors have started embracing students’ use of AI. For some, the rationale may be that students are already using it and that, even though there are ways to detect surreptitious use, new AI programs are constantly being developed, making it harder for detection tools to identify AI-generated work accurately. For others, the rationale may be that AI can be viewed as a useful tool rather than an unethical trick.
Social workers and various health care professionals are already using AI in practice (Kumar et al., 2022). For instance, professionals can submit biopsychosocial information that they have gathered from clients and ask an AI program to provide an assessment or diagnosis. The professionals can then review the product provided by AI and ensure that its assessment or diagnosis is valid and reliable. Although some people question whether AI and digital technology can demonstrate empathy and warmth to clients, social robots and chatbots can be programmed with ambient intelligence, helping them to demonstrate presence and responsiveness to the people they are serving (Barsky, 2022; Kumar et al., 2022). If professionals and social work agencies are already using AI in practice, why not encourage rather than discourage students from using AI during their studies?
Does the Ethics of Using AI Depend on Learning Objectives?
Proponents of AI may argue that AI is simply a tool that students should learn how to use, just as they have learned how to use computers for writing papers, online search engines for identifying relevant scholarly articles and research, and stats programs for analyzing quantitative research data. Can you imagine if research professors asked students to calculate all their stats without the aid of computers, or if students were required to submit all their essays in handwritten cursive?
There was a time when educators expressed concerns about students using slide rules and calculators in math classes, and when they mourned the potential loss of cursive as students learned to rely more on digital technology for writing. Although some educators continue to mourn these changes and potential losses, educators have come to accept and embrace the use of many new forms of technology.
When educators are considering whether to allow, encourage, discourage, or prohibit the use of various forms of technology, they should consider the learning objectives for each assignment or learning activity they are constructing. If JUST’s professor wanted students to learn how to use AI to research the use of CBT, then using AI for this assignment would be appropriate. If the professor wanted them to learn how to use AI with avatars, then their use of AI would also be appropriate. If, however, the professor had reason to teach the students how to conduct their own literature reviews or how to engage clients in CBT without the use of AI, then it would be appropriate for the professor to construct assignments that students must complete without AI.
Even though AI can be used to perform a literature review, students still need to learn this competency without the aid of AI. AI is far from perfect in constructing literature reviews. AI may gather and rely on research and other information that is not valid and reliable. It may be programmed in a manner that has biases against certain individuals or groups. For instance, its algorithms may focus on research related to some cultural groups and neglect research related to others.
By using avatars to demonstrate CBT practice skills, students may lose the opportunity to practice and develop skills that they will need to engage clients in their field internships and future practice. Although some clients use AI programs for services related to social work, most social work services are still offered by social workers rather than by AI. Programming avatars certainly requires knowledge and skill; however, learning how to communicate with clients in person or through videoconferencing requires a different set of skills. Asking students to engage in live-action role-plays (rather than animated ones) can help them develop these skills.
Is the Use of AI an Either/Or Proposition?
Although questions about the use of AI and other technology may be framed as either/or questions, these questions propose a false choice. It is certainly possible for professors to integrate the use of AI with other types of learning, and it is certainly possible for social workers to integrate the use of AI with more manual (nonautomated) forms of practice with clients. A professor might ask students to use AI to write an initial draft of an assignment and then critique the essay that AI produced.
I asked ChatGPT to write an essay, with references, on whether it was ethical for social workers to engage in sexual relationships with clients. The essay appropriately described why having sex with clients was unethical. Still, the essay had various limitations. It quoted from a dated version of the National Association of Social Workers [NASW] Code of Ethics and cited psychology literature rather than social work literature. It did not differentiate between clinical and macro practice, and some of the paragraphs contained repetitive information. Additionally, it did not cite some of the leading experts on this topic.
If this were a classroom assignment, I could also ask students to rewrite the essay by attending to the limitations and concerns they had identified. With regard to JUST’s presentation with avatars, students could observe the avatars as a learning exercise, reflect on what they learned, and then practice their social work skills by performing live role-plays.
Where Do We Go From Here?
From an ethical perspective, the simple fact of “using” or “not using” AI does not determine whether social work students are acting ethically. Similarly, there are both ethical and unethical ways in which students can use videoconferencing, electronic records, virtual reality (Minguela-Recover et al., 2022), and other forms of technology (NASW et al., 2017). With regard to honesty and integrity, students should not surreptitiously submit work produced by AI when their professor’s instructions or course syllabi instruct them not to do so. When students have good reasons to use AI, they can discuss these with their professors and jointly determine whether certain assignments can be reconceptualized to allow for the use of AI.
Students may have many creative and useful suggestions for incorporating AI into their learning, and there may be ways of incorporating AI without sacrificing the benefits of performing assignments without AI. Professors may also engage students in discussions of the ethics of AI, including issues related to informed consent; managing risks and benefits; and ensuring that AI respects the dignity, worth, diversity, and confidentiality of the people it is serving (Barsky, 2023; NASW, 2021).
When AI is constructed and used appropriately, it can serve us well as a tool for education and practice. When there are problems in the ways AI is constructed or used, then clients, students, practitioners, professors, and communities may be put at risk. We can empower ourselves and the people we serve by understanding the possibilities and limitations of AI and by playing active roles in its development and use.
Acknowledgement: Thank you to my daughter, Adelle Barsky-Moore, for sharing her insights into a current university student’s perspectives on and uses of AI. For example, she taught me how students can use AI to develop study guides to prepare for exams and how they can cross-check those guides against their textbooks to ensure the information is accurate. For other examples of how students use ChatGPT, please see Terry (2023).
Full disclosure: I did not use AI to complete this article. I did use the spell-check and grammar-check utilities in MS Word. I do not know whether I have any ownership interest in any particular AI companies. I do not own their shares directly, but it is possible that I have equity in them through mutual fund investments. The JUST students described in this article are fictional. They were not members of my class. If they were, it would have been wonderful to support their creative efforts with AI and ensure that they were learning the skills they need to practice ethically and effectively.
References
Barsky, A. E. (2022, January). Ethics alive! The ethics of SocBots: Imagining Siri and Alexa as the next generation of social workers. The New Social Worker, 29(1), 4-5. https://www.socialworker.com/feature-articles/ethics-articles/ethics-socbots-imagining-siri-alexa-next-generation-social-workers
Barsky, A. E. (2023). Essential ethics for social work practice. Oxford University Press.
Kumar, Y., Koul, A., Singla, R., & Ijaz, M. F. (2022). Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. Journal of Ambient Intelligence and Humanized Computing, 13, 1-28. https://doi.org/10.1007/s12652-021-03612-z
McMurtrie, B. (2023, February 2). Teaching: Rethinking research papers, and other responses to ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/newsletter/teaching/2023-02-02
Minguela-Recover, A., Baena-Pérez, R., & Mota-Macías, J. (2022). The role of 360° virtual reality in social intervention: A further contribution to the theory-practice relationship of social work studies. Social Work Education. https://doi.org/10.1080/02615479.2022.2115998
National Association of Social Workers. (2021). Code of ethics. https://www.socialworkers.org/About/Ethics/Code-of-Ethics/Code-of-Ethics-English
National Association of Social Workers (NASW), Council on Social Work Education, Association of Social Work Boards, and Clinical Social Work Association. (2017). Standards for technology in social work practice. https://www.socialworkers.org/Practice/Practice-Standards-Guidelines
Terry, O. K. (2023, May 24). I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt
Allan Barsky, PhD, JD, MSW, is Professor of Social Work at Florida Atlantic University and author of Social Work Values and Ethics (Oxford University Press).
The views expressed in this article do not necessarily represent the views of any of the organizations to which the author is affiliated, or the views of The New Social Worker magazine or White Hat Communications.