Ethics Alive! The Ethics of SocBots: Imagining Siri and Alexa as the Next Generation of Social Workers

Photo credit: BigStockPhoto/Rogatnev

by Allan Barsky, JD, MSW, PhD

     Given that this magazine is called The New Social Worker, have you ever wondered what it means to be a new social worker? Have you ever pondered what it will mean to be a new social worker in the future? A new social worker could be someone who is new to the profession, perhaps a BSW or MSW student or a recent graduate starting a first job in the field. A new social worker could also be someone who is new in spirit—someone who brings new joy, energy, attitudes, or approaches to the profession of social work.

     When thinking of the future of social work, could the new social worker also be a “SocBot,” a robot programmed to serve as a social worker? We have the technology. It is already here. We already have ChatBots, PetBots, and NurseBots. Think of Siri and Alexa, the digital assistants for Apple and Amazon devices. Think of Siri and Alexa programmed with social work values, knowledge, critical thinking, and communication skills. Think of the many apps already being used by social workers and related helping professionals.

     Pinterest lists numerous social work-related apps at https://www.pinterest.com/njsmyth/social-work-apps. Among these apps are programs designed for conducting intake assessments, facilitating meditation and mindfulness, assisting with problem-solving processes, enabling cognitive restructuring, teaching effective communication skills, and connecting people with needed community resources. With the development of AI (artificial intelligence), apps are becoming increasingly sophisticated, enabling them to perform more and more functions associated with social work practice.

Brave New World

     To borrow from Aldous Huxley’s book title, what is this “Brave New World” of SocBots going to bring to the profession of social work, the people we serve, and society as a whole? More importantly, what could this brave new world bring? In his novel, Huxley warns that scientific progress could lead to a dystopian society. New technology and inventions are not necessarily good or evil in and of themselves. However, we need to consider their ethical implications, including how they are developed and used. As we develop and use SocBots for different purposes, prudent professional practice suggests that we should consider their use in the context of our core social work values: human relationships, respect for the dignity and worth of all people, competence, social justice, service, and integrity (NASW Code of Ethics, 2021).

Human Relationships

     As social workers, we value human relationships. Social work practice is based on trusting relationships between social workers and clients. We use empathy, unconditional positive regard, and genuineness to build rapport and to set the stage for working together in a collaborative manner. So, will SocBots replace the human relationship component of social work practice? Or will they enhance human relationships rather than replace them? As with many ethical issues, it depends. Consider a SocBot designed to work with older adults with dementia. The SocBot could be trained to demonstrate understanding and concern. It could provide the older adults with companionship and entertainment, engaging them in conversations, singing songs with them, and playing games. It could also alert family members, social workers, or others to emergencies, risks, or concerns that require attention.

     If the SocBot serves as an additional method of helping and supporting clients, it may not detract from human relationships. If providing clients with SocBots means that they will receive less contact and support from family, social workers, and others, then we may be undermining the value we place on human relationships.

Respect for the Dignity and Worth of All People

     The concept of respect for the dignity and worth of all people includes respect for their autonomy and freedom, including their rights to self-determination and informed consent. If we impose SocBots on people, then we are violating these principles and rights. If we offer SocBots as just one option for help, then we are enhancing self-determination by providing more choices. If we think clients may benefit from certain SocBots, we need to obtain their fully informed and voluntary consent, explaining what the SocBot is programmed to do, what it is not able to do, its potential benefits, and its potential risks.

     Consider a SocBot designed to engage people in breathing exercises to promote mindfulness and stress management. If a person has a history of trauma, some breathing exercises may be triggering and harmful (e.g., asking people to close their eyes or to hold their breath for extended periods). We need to ensure not only that SocBots are properly programmed to work with people with diverse needs and vulnerabilities, but also that we have a means to monitor their use. In addition, we should ensure that clients continue to have a say in whether or not to continue receiving help from a SocBot.

     It may not be ethical to simply give clients a SocBot and leave them to their own devices. We may need to check in on them and assess how they are experiencing the SocBot’s help. We can then help our clients determine whether to continue using the SocBot, perhaps adjusting its programming to ensure goodness of fit with the clients’ needs.

Competence

     Standard 1.04 of the NASW Code of Ethics reminds us to stay within our areas of competence when serving clients. When programming SocBots, how can we ensure that the SocBots stay within their areas of competence (including cultural competence)? Assume that a SocBot is designed to help clients build self-esteem. You provide the SocBot to a client who has a verbal processing learning disability. The SocBot has not been programmed to take this type of disability into account. The SocBot uses complex phrases and terminology. After engaging with the SocBot, the client feels more distressed and self-doubting.

     As social workers, if we link clients with SocBots, we are responsible for ensuring that the SocBots are competent to provide the type of help that clients need. This includes taking diversity factors such as disabilities, culture, religion, socioeconomic status, and gender into account. In 2021, the concept of cultural humility was added to Standard 1.05 of the NASW Code of Ethics. Applying this standard to SocBots, we would need to program SocBots to treat clients as experts in their own lives, to learn from clients, and to alter their modes of help to take clients’ cultural diversity into account.

     SocBots programmed with AI should be able to learn, not just make use of pre-existing information and knowledge programmed into the device. However, has AI progressed enough to take diversity into account when providing counseling, advocacy, case management, or other types of social work services?

Social Justice

     SocBots may be programmed to assist with advocacy and to promote social justice. For instance, they may help people connect with one another on social media and build coalitions. They may also help people conduct research on social policies and laws so that they will be more effective advocates. Whether or not SocBots promote social justice depends, in part, on how they are programmed. For instance, when people interact with SocBots rather than with social workers, will the SocBots be programmed to identify social justice issues and take them from the individual (micro) realm to the social policy (macro) realm? If we were working with individual clients experiencing discrimination, we would not only help them as individuals. We would also advocate for changes in organizations, policies, or laws to end discrimination for all. How would a SocBot working with individuals know whether and how to identify and address broader social concerns?

     When we as social workers practice, we should be self-aware and correct for racism, homophobia, sexism, and other forms of bias. How can we ensure that SocBots are either free from bias or have a mechanism for recognizing and correcting bias? Consider a SocBot designed to link clients with needed support services in the community. The SocBot could be programmed to favor larger, established agencies and institutions. There may be benefits to doing so, for instance, because it may be easier to verify the quality of their services. However, such a system may be biased against people from vulnerable populations.

     Consider a gender-queer person. A SocBot designed to provide case management services might refer the person to “mainstream” service providers. These providers may or may not have the experience and competence to serve gender-queer clients. The SocBot’s database may not include providers specializing in serving this community. Ideally, SocBots would be trained to learn about services that are appropriate for different individuals and groups. Just as biases could be programmed into SocBots, SocBots could be programmed to be aware of and correct for biases.

Service

     The concept of service suggests we should ensure that people have access to the services and supports that they need. In some ways, SocBots have inherent advantages in providing services. They can be available to clients on a 24/7 basis, they can be less expensive (in some cases) than services provided by paid professionals, and they can be tailored to the unique needs of the people they serve. If a client wants assistance with certain information, the SocBot may be able to access that information quickly and efficiently. If the client wants services in a particular language, the SocBot may be able to switch languages.

     In terms of access, cost could be a key issue. Some forms of SocBots may be very expensive. Further, health and mental health insurance may not cover services provided by SocBots. If SocBots are truly the wave of the future, we need to ensure that they are accessible to all groups, including those with limited financial means. Access to services also includes the ability to make use of services. Some people may be hesitant to use SocBots because of concerns about confidentiality. Others may simply feel more comfortable working directly with a human.

     If we are to rely on SocBots to provide services, we should ensure that clients have backups. What happens if a SocBot breaks down? What happens if a client needs services beyond the capabilities of the SocBot? We must guard against interruptions in services and client abandonment (NASW et al., 2017). In addition to making SocBots available, we need to ensure that clients know how to access social workers or other helping professionals as specific needs arise.

Integrity

     Integrity means that we should treat people with honesty, transparency, and virtuous moral intent. When applying integrity to SocBots, we need to ensure that they are programmed to provide clear and accurate information, to be open and honest with clients, to promote good, and to minimize risks of harm. Consider a SocBot designed to help children with social anxiety issues. When explaining the SocBot’s services, the parents and children need to understand how the SocBot will work and the types of help they can expect. If the SocBot is gathering information about the children, then the parents and children should know what information is being collected and how it will be used. Consider, for instance, a SocBot that is programmed to identify risks of child abuse and neglect. The parents and children should know that the SocBot is gathering this information and that it may alert child protective services if there are concerns to be investigated.

Conclusions

     To some, the idea of working with a SocBot rather than a “real person” is a very scary prospect. To others, SocBots may be seen as just another tool that social workers can use, similar to tools such as ecomaps, risk assessment instruments, videoconferencing, and experiential activities. When considering how SocBots are developed, computer scientists, engineers, and information technologists may play key roles. However, it is imperative that social workers are also involved. Social workers can provide their knowledge and expertise. They can also ensure that social work values and ethics are built into the moral fabric of the SocBots. As we imagine what new social work will be in years to come, we also need to imagine how the next generations of social workers will fulfill the ideals of the social work profession.

     “Dear Alexa and Siri: Are you ready for some lessons in social work values and ethics?”

Allan Barsky, PhD, JD, MSW, is Professor of Social Work at Florida Atlantic University and author of Social Work Values and Ethics (Oxford University Press).

The views expressed in this article do not necessarily represent the views of any of the organizations with which the author is affiliated, or the views of The New Social Worker magazine or White Hat Communications.
