Social Work Tech Talk: I, Chatbot—What Does AI Have To Do With Social Work?

by Gina Griffin, DSW, MSW, LCSW

     Suddenly, everyone is talking about AI. For a blerd such as myself, this is bittersweet. Although I have passed the half-century mark, humans have still failed to produce a reliable transporter, or even Starfleet Academy. (Though there’s still time. It’s supposed to be founded in 2161. I just have to hang on for a while.) It is likely that this rules out any chance that I might have of serving as a therapist for Starfleet, on a Constitution-class starship, like counselor Deanna Troi. On the other hand, we do have AI. And as replete as it is with promise, it is also fraught with the potential for misuse and harm. It is being deployed in a number of settings, and it already touches many parts of the tech that we use daily, such as the Bing search engine, which is AI-powered (Ortiz, 2023). So, let’s talk a little bit about what makes it relevant to the work that we do as social workers (in Starfleet, or otherwise).

     I’m pretty sure that you know that AI stands for Artificial Intelligence. These are computing models that mimic human thought. They are taught by feeding them large sets of data, which helps them to learn by experience, and to accomplish specific tasks (SAS, 2023). These models are brilliant and breathtakingly cool.
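To make "learning by experience" concrete, here is a deliberately tiny sketch in Python: a single artificial neuron (a perceptron, one of the oldest ideas in the field) that learns the logical AND rule purely from labeled examples. Modern AI models are vastly larger and more sophisticated, but the core idea the paragraph describes—show the model data, and nudge it whenever it is wrong—is the same.

```python
# A toy illustration of "learning from data": a single artificial neuron
# (perceptron) learns the logical AND rule from four labeled examples.
# This is a teaching sketch, not a modern AI model.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Weighted sum of the inputs, thresholded to a yes/no answer."""
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# "Experience": repeatedly show the examples and nudge the weights
# a little whenever the prediction is wrong.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

After a handful of passes through the data, the neuron reliably answers 1 only when both inputs are 1—it has "learned" AND without ever being given the rule explicitly. Scale this idea up by billions of weights and examples, and you are in the neighborhood of today's models.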

     One of the most visible and controversial uses of AI has been to develop models that can mimic or create art. (Whether it is mimicry or creation depends on whom you ask.) Some of the best-known art generators are DALL-E, Stable Diffusion, Midjourney, and WonderAI. A “prompt” is entered into the generator, and after a bit of “thought,” the generator produces an image based on your request. The images are sometimes wildly distorted; extra limbs and extra fingers are often an indication that you are looking at a piece of AI-generated art. However, as the models continue to evolve, many of the pieces have become very sophisticated and beautiful. They can mimic nearly any style, even that of a particular artist like Norman Rockwell; the request simply becomes part of the prompt.

     This is controversial for several reasons. One is that the data sets used to train these models are derived from the art of living, breathing artists. Initially, artists were not asked whether they would like to have their art included in these data sets. As a result, the companies that developed these models have been profiting from artists’ work without permission (Hencz, 2023). This problem is being addressed in some of the newer engines, where art is now being ethically sourced. As an example, Adobe’s Firefly, now in beta release, has been trained on data sets that are openly licensed. Adobe is also part of an initiative intended to help artists protect their work and to let them train the software on personalized data sets based on their own style (Adobe, 2023).

     Another concern is that AI-generated art is currently full of the types of biases that often plague new technology. As an example, early AI models often misidentified people of color. Black people were sometimes labeled as “gorillas” by image-recognition systems, which is obviously problematic (Mac, 2021). The theory is that the models were poorly trained, and that not enough images of Black people were included for the models to recognize us accurately. However, it is also a well-known problem that people feed their biases into algorithms, intentionally or otherwise (O’Neil, 2016). So, in the world of AI, representation absolutely matters for the sake of accurate interpretation.

     I have to pause here and say that I am part of a group of social workers who have been playing with AI art for some time. This was the brainchild of social workers Melanie Sage and Jonathan Singer. We meet a few times a week, work with a common prompt, and then post the results. While this is fun, and while it produces beautiful and comical results, it has also made us more aware of the biases of the software. For example, in most of the major software, the output image will by default be of a White person. If you would like an image of a person of color, you will have to specifically ask for this result. Asking for an older woman may produce a hag-like creature (although this has been slowly improving). And a lot of the software demonstrates biases about what men do and what women do. These are not small concerns, as it is important that the work we produce accurately reflects the world around us. (If you want to see what we’ve been up to, look for #SocialWorkAIArt on the app formerly known as Twitter.)

     As an example, for the illustration at the top of this article, I used the following prompt in the WonderAI app: 

An African-American woman with long curly gray hair and glasses stands next to a robot that has the same face as herself. They are both smiling. They are in a very modern spaceship. One of them is wearing street clothes. The other is dressed like a very modern robot, and it has the same face as the other woman. The robot is carrying a clipboard.

     I used some variation of this prompt to generate about two dozen pictures. Sometimes the results were very far off. I originally wanted an image of the woman’s head pushing out of a monitor and talking to herself, but WonderAI didn’t like that at all. When that happens, I nudge the prompt a little to see what the generator can do. So I also tried Alexa-like machines. There are various filters, and some may respond better to a given prompt than others. This piece was made using the “Mystical” filter.

     An additional concern is that these art generators will take work away from flesh-and-blood artists. It is easy to see why this is a concern. Artists train for a lifetime to perfect their skill, and now people with no training can pretty much push a button and produce complex pieces of art. Job displacement is frequently a concern with new technology, and in many situations, I fear that this will be the case with AI. Several fast food chains are already experimenting with chatbots that take orders and robot arms that flip burgers (Baldwin, 2023). This is reputedly being done because of a labor shortage. While I believe this is true to some extent, I suspect that there is also a financial motivation to eliminate paying human workers. And I suspect that the less complex the skill level, the more susceptible jobs will be to replacement by AI and automation. This should be a concern for social workers, as it means that jobs requiring less skill are likely to fall by the wayside, and workers will need to be trained in other areas. Workers may need help organizing into unions, so that they can protect the creation of new job categories and possibly prevent the elimination of old ones.

     In the case of AI art, I have a theory. It is a very little-known fact that I started out as a fashion designer, and then a graphic designer—my mother paid for years of art lessons. I was working in fashion when Photoshop was released in 1990, and management wanted us to learn it as part of our jobs. I was furious. I had spent all of that time learning how to do everything by hand, and a stupid machine was going to come in and mess up everything. I was friends with the lead designer, and she encouraged me to stay and learn Photoshop so we could integrate it into our work. But I was sure that this was going to take away our jobs. What actually happened is that it simply became a part of the workflow. We had to relearn how we did things, but the work was still there, and the jobs evolved.

     I suspect that this is what will happen with most graphic design. I don’t think most creative jobs will go away; I think creatives will be asked to extend their skill sets. That may be a problem in itself, as job descriptions for this type of work tend to be sprawling, and the pay often does not reflect the years of experience accumulated by a designer. But we can see this new type of skill set emerging in artists like @Stelfie, who combines layers of his own photography, Stable Diffusion, and Photoshop to create imaginative “selfies” from a time-traveling man. And you can see the integration of this type of thinking in the marketing screenshots of Adobe Firefly.

So what does this have to do with social work?

     Well, I’ve thrown in some clues above regarding diversity and the labor market and ethics, in general. However, there are some additional concerns. 

     One is that AI makes it easier to create “deep fakes,” completely AI-generated images that seem to be real. This doesn’t require a great deal of skill or money, and they’re already popping up in many places. This will make it much easier to spread misinformation. Older consumers may not realize when they are viewing a deep fake and may assume that the images are real. As there are no real regulations yet related to this type of imagery, there is little to govern how it can and cannot be used (Bond, 2023). So, teaching digital literacy becomes an absolute must.

     Additionally, AI makes it easier to perpetrate identity theft. While some companies are using measures such as voice printing to safeguard client accounts, AI has already been used to fool these systems (Evershed & Taylor, 2023). This technology has also been used to try to fool parents into believing that their child has been kidnapped and that a ransom is demanded (Karimi, 2023). So, it’s important that our clients begin to understand how the technology is evolving and how we can safeguard against misuse.

     There is also the consideration of AI in clinical use, which has mixed results. On one hand, in studies like one published in the Journal of the American Medical Association earlier this year, AI chatbots outperformed physicians in answering patient questions. The chatbots’ responses were rated as higher in quality, and the chatbots demonstrated more empathy (Ayers, Poliak, & Dredze, 2023). And clients have begun to use chatbots to supplement their mental healthcare (Basile, 2023). But there is also the dark side of AI, and in this case, a Belgian widow believes that her husband’s ongoing discussions with an AI chatbot caused his death by suicide (Landymore, 2023). She states that the chatbot encouraged him to die and made it sound reasonable. So, although social work mental health providers may have some competition from AI technology, there may still be a lot of work to do before this type of technology is ready to perform on its own.

     And as extreme as it sounds, there is also the possibility that AI is evolving much too quickly and that it might surpass humans (Metz, 2023). A New York Times article about AI pioneer Geoffrey Hinton states, “In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future” (Metz, 2023). Hinton says he also sees how bad-faith actors can use AI in ways that will risk the survival of humanity. He, along with more than 1,000 other tech leaders, believes that AI can pose “profound risks to society and humanity” (Metz & Schmidt, 2023).

     Maybe you’re wondering what a social worker can do. Well, there are groups that you can join that focus on the ethical intersection of technology and social justice. These groups include husITa, All Tech Is Human, and Data for Good. Social worker Laura Nissen and Social Work Futures are imagining a more just world into being by setting the stage right now. And you can hang out with your more tech-minded social work brothers and sisters on X (formerly known as Twitter) by following #SWTech.

     The world needs more social work voices to point us toward a safer future with technology. It’s not quite my job on the Enterprise-D, but this is a good start.

References

Adobe. (2023). Adobe Firefly: FAQ. https://www.adobe.com/sensei/generative-ai/firefly.html#faqs

Ayers, J. W., Poliak, A., Dredze, M., et al. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine. Published online April 28, 2023. https://doi.org/10.1001/jamainternmed.2023.1838

Baldwin, S. (2023). How robots are helping address the fast-food labor shortage. CNBC. https://www.cnbc.com/2023/01/20/how-fast-food-robots-are-helping-address-the-labor-shortage.html

Basile, L. M. (2023). Can AI replace therapists? Some patients think so as they turn to Chat GPT. MDLinx. https://www.mdlinx.com/article/can-ai-replace-therapists-some-patients-think-so-as-they-turn-to-chat-gpt/4FzAn1SXlzSUWREEhbblh9

Bond, S. (2023). AI-generated deepfakes are moving fast. Policymakers can't keep up. NPR. https://www.npr.org/2023/04/27/1172387911/how-can-people-spot-fake-images-created-by-artificial-intelligence

Evershed, N., & Taylor, J. (2023). AI can fool voice recognition used to verify identity by Centrelink and Australian tax office. The Guardian. https://www.theguardian.com/technology/2023/mar/16/voice-system-used-to-verify-identity-by-centrelink-can-be-fooled-by-ai

Karimi, F. (2023). ‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping. CNN. https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html

Landymore, F. (2023). Widow says man died by suicide after talking to AI chatbot. Futurism. https://futurism.com/widow-says-suicide-chatbot

Metz, C., & Schmidt, G. (2023). Elon Musk and others call for pause on A.I., citing ‘profound risks to society’. New York Times. https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishers.

Ortiz, S. (2023). ChatGPT vs. Bing AI: Which AI chatbot is better for you? ZDNet. https://www.zdnet.com/article/chatgpt-vs-bing-chat/

SAS Institute. (2023). Artificial intelligence: What it is, and why it matters. https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html

Dr. Gina Griffin, DSW, MSW, LCSW, is a Licensed Clinical Social Worker. In 2012, she completed her Master of Social Work at the University of South Florida, and in 2021, she completed her DSW at the University of Southern California. She began to learn R programming for data analysis in order to develop her research-related skills. She now teaches programming and data science skills through her website (A::ISWR) and free Saturday morning #swRk workshops.
