by Dr. Dava R. Wilson, EdD, LCSW
When Sophia, a 28-year-old mother of two, sits in my office completing a depression screening, I watch her struggle with Question 12: I feel like I’m failing as a parent. The response choices stare back at her: Never, Sometimes, Often, Always. After a long pause, she circles “Often,” then looks up with tears in her eyes. “But not when I’m reading to them at night,” she whispers. “Those moments...those are different.”
This scene reveals a fundamental disconnect between clients’ lived experiences and our assessment tools. The inclusion of absolute response options like “always” and “never” forces clients to compress the beautiful complexity of human experience into oversimplified categories that rarely capture their truth.
The Human Cost of Absolute Categories
When we ask clients to endorse “always,” we essentially ask them to confirm that something occurs 100% of the time—a standard that fails to acknowledge how people actually live and cope. Consider Marcus, a 45-year-old veteran experiencing frequent panic attacks and persistent worry. When asked if he “always feels anxious,” he selects “often,” because he remembers peaceful moments fishing or holding his grandson. His clinical insight—recognizing that even severe anxiety has moments of reprieve—actually works against him in an assessment system that equates severity with absolute endorsement.
Meanwhile, Jennifer, whose anxiety significantly impacts her functioning, also selects “often,” because she cannot honestly say her anxiety is constant. Despite vastly different presentations, both receive identical scores, masking crucial differences and potentially leading to mismatched interventions.
Cultural Dignity and Response Patterns
Cultural backgrounds significantly influence communication styles and assessment responses. Recent research demonstrates that extreme response style—the tendency to select endpoints on rating scales—varies across cultural groups and seriously threatens assessment validity. Schoenmakers and colleagues (2024) found that correcting for extreme response style requires careful attention to model choice, as different approaches yield dramatically different results when accounting for cultural variations.
Contemporary measurement research reveals that response format choices significantly impact how different cultural groups interact with assessment tools, affecting reliability and validity in ways unrelated to the phenomena actually being measured (Kusmaryono et al., 2022). Research on social desirability bias shows how participants from different backgrounds may present reality in ways they perceive as socially acceptable, leading to responses that mask rather than reveal true experiences (Bergen & Labonté, 2020).
I learned this lesson when working with Mrs. Ramirez, an older Latina client who consistently chose middle-range responses on trauma screening tools despite describing experiences clearly meeting PTSD criteria. Her cultural framework emphasized strength through endurance and discouraged claims of extremity. When explored more deeply, her “sometimes” endorsements revealed persistent, severe symptoms requiring immediate attention. The assessment’s absolute options had obscured rather than illuminated her reality.
The Measurement Challenge
From a psychometric perspective, absolute response options create ceiling and floor effects. When responses cluster at extreme ends, we lose valuable information about true variability in experiences. A comprehensive review of Likert scale development revealed significant advances in understanding how response format affects measurement quality, with strong evidence against using extreme absolute anchors (Jebb et al., 2021).
Zhang and colleagues (2022) demonstrated that extreme response style creates substantial bias in group comparisons, potentially leading to incorrect conclusions about treatment effectiveness and client progress. More troubling is how these limitations affect our ability to track change. A client initially reporting “always” feeling depressed may still endorse “always” after months of therapy and meaningful progress, because they haven’t achieved complete absence of symptoms. The assessment fails to capture observable clinical improvement.
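This change-tracking problem can be illustrated with a toy sketch (purely illustrative; the cut-points below are my assumptions for demonstration, not validated thresholds from any instrument): a client whose symptomatic days drop from six to four per week stays in the same category under coarse absolute anchors, while frequency-based anchors register the improvement.

```python
def absolute_anchor(days_per_week: int) -> str:
    """Map symptomatic days/week onto Never/Sometimes/Often/Always.

    Cut-points are illustrative assumptions. Because clients rarely endorse
    "Always" unless symptoms feel literally constant, severe-but-not-constant
    presentations collapse into a single "Often" category.
    """
    if days_per_week == 0:
        return "Never"
    if days_per_week <= 2:
        return "Sometimes"
    if days_per_week < 7:
        return "Often"
    return "Always"


def frequency_anchor(days_per_week: int) -> str:
    """Map symptomatic days/week onto the frequency-based anchors described
    later in this article (short labels; "Rarely, less than once per week"
    is omitted because whole days/week cannot express it)."""
    if days_per_week >= 5:
        return "Most days"
    if days_per_week >= 3:
        return "Many days"
    if days_per_week >= 1:
        return "Some days"
    return "Not at all"


# A client improving from 6 to 4 symptomatic days/week shows no movement
# under absolute anchors, but visible movement under frequency anchors.
before, after = 6, 4
print(absolute_anchor(before), "->", absolute_anchor(after))    # Often -> Often
print(frequency_anchor(before), "->", frequency_anchor(after))  # Most days -> Many days
```

The design point is not the specific thresholds but the resolution: frequency anchors preserve gradations of change that absolute anchors flatten.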
Real-World Consequences
Consider these scenarios:
- Alex, 16, marks "always" for alcohol cravings but drinks only on weekends, triggering intensive treatment that doesn't match his actual usage pattern.
- Maria, a trauma survivor, selects "never" feeling safe because "always feeling safe" seems impossible given her history, masking significant progress from hypervigilance to feeling secure in most relationships.
- David marks "always" feeling hopeless because he can't remember feeling completely hopeful, obscuring important improvements in coping strategies and daily functioning.
Evidence-Based Alternatives
Contemporary assessment research provides clear guidance for tools that better capture human complexity. Rather than forcing absolute categories, we can offer options acknowledging nuance:
- Frequency-Based Approaches: Most days (5-7 days per week), Many days (3-4 days per week), Some days (1-2 days per week), Rarely (less than once per week), Not at all.
- Intensity Recognition: “When you experience this, how intense is it, typically?” with options from Extremely intense to Very mild.
- Contextual Sensitivity: In most situations, In many situations, In some situations, In few situations, In no situations.
- Time-Specific Frameworks: In the past two weeks, how often... with options like Nearly every day, More than half the days, Several days, Not at all.
Research consistently supports 5- to 7-point scales with descriptive, non-absolute anchors: they show better reliability and validity than scales with absolute options and reduce cultural and individual response-style bias.
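The time-specific framework above mirrors the response format of widely used screeners such as the PHQ-9, which weights those anchors 0 through 3. A minimal scoring helper (a sketch assuming that standard weighting; not a substitute for an instrument's published scoring instructions) might look like:

```python
# Standard 0-3 weights used by PHQ-9-style time-specific anchors.
TIME_SPECIFIC_SCORES = {
    "Not at all": 0,
    "Several days": 1,
    "More than half the days": 2,
    "Nearly every day": 3,
}


def score_item(response: str) -> int:
    """Return the numeric weight for one item; reject unrecognized text
    rather than silently scoring it zero."""
    try:
        return TIME_SPECIFIC_SCORES[response]
    except KeyError:
        raise ValueError(f"Unrecognized response: {response!r}")


def total_score(responses: list[str]) -> int:
    """Sum item weights across a completed screener."""
    return sum(score_item(r) for r in responses)


print(total_score(["Not at all", "Several days", "Nearly every day"]))  # 4
```

Because every response maps to a distinct weight, graded anchors like these let a score move as a client improves, which is exactly what the absolute "always/never" format fails to do.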
Implementation Strategies
When selecting assessment tools, prioritize instruments avoiding absolute options. If using existing tools with absolute anchors:
- Client Preparation: Normalize experience complexity. Explain that questions ask about general patterns and encourage choosing responses capturing overall experience during specified time frames.
- Exploration and Context: Use responses as conversation starting points. When clients mark extremes, explore what that looks like daily, what exceptions exist, and how experiences vary across contexts.
- Supplementary Understanding: Combine standardized tools with open-ended questions and clinical observation, allowing clients to explain nuances structured responses might miss.
- Cultural Responsiveness: Attend to how cultural background might influence response patterns, exploring discrepancies through culturally sensitive dialogue.
Professional Development
Social workers must think critically about tools and their client impact. The 2021 NASW Code of Ethics emphasizes cultural competence and professional self-care, requiring examination of how assessment practices affect client well-being and therapeutic relationships (National Association of Social Workers, 2021). Professional standards emphasize valid and fair assessment practices serving all populations equitably (American Educational Research Association et al., 2014).
Supervision should encourage practitioners to question inconsistent assessment results, explore cultural factors influencing responses, remember scores represent starting points rather than definitive answers, and develop nuanced assessment interpretation skills.
Moving Forward
Remember Sophia? With more nuanced response options, she accurately conveyed feeling as if she was failing "most weekday mornings when everything feels chaotic, but rarely during bedtime routines or weekend family activities when I feel more present." This information proved crucial for developing a treatment plan that built on existing strengths while addressing the specific contexts in which she struggled.
Assessment, at its best, is collaborative discovery between social worker and client. When tools respect experience complexity, they become bridges to understanding rather than barriers to connection, helping us see clients as whole human beings navigating complex lives with remarkable resilience.
Conclusion
The next time you encounter assessments with “always” and “never” options, pause and consider: What stories might these absolutes be silencing? What nuances are we missing? Our clients deserve assessment tools as complex and dignified as they are.
In choosing assessments respecting human complexity, we affirm a fundamental social work principle: every person’s experience matters, deserves careful attention, and contains wisdom that rigid categories cannot capture. This isn’t just about better measurement—it’s about better practice, more ethical engagement, and deeper respect for the profound complexity of human experience.
When we move beyond “always” and “never,” we create space for the full spectrum of human experience to emerge, honoring our clients’ capacity for growth, change, and healing while acknowledging that wellness is an ongoing process of building resilience and finding meaning.
References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. American Educational Research Association.
Bergen, N., & Labonté, R. (2020). Everything is perfect, and we have no problems: Detecting and limiting social desirability bias in qualitative research. Qualitative Health Research. https://doi.org/10.1177/1049732319889354
Jebb, A. T., Ng, V., & Tay, L. (2021). A review of key Likert scale development advances: 1995–2019. Frontiers in Psychology, 12, 637547.
Kusmaryono, I., Wijayanti, D., & Maharani, H. R. (2022). Number of response options, reliability, validity, and potential bias in the use of the Likert scale education and social science research: A literature review. International Journal of Educational Methodology, 8(4), 625-637.
National Association of Social Workers. (2021). Code of ethics of the National Association of Social Workers. NASW Press.
Schoenmakers, M., Tijmstra, J., Vermunt, J., & Bolsinova, M. (2024). Correcting for extreme response style: Model choice matters. Educational and Psychological Measurement, 84(1), 145-170.
Zhang, Y., Yang, Z., & Wang, Y. (2022). The impact of extreme response style on the mean comparison of two independent samples. SAGE Open, 12(2), 21582440221108168.
Dr. Dava R. Wilson, EdD, LCSW, is a clinical therapist and professor of social work whose primary focus is trauma and alternative interventions for trauma. She has a telehealth private practice and hosts several interns from universities across the country each semester.