Last night, I made a terrible mistake…
I asked ChatGPT what it thought I looked like, given everything it knows about me.
Y’all ready for this?
[The acronym A.C.E. does at least make an appearance in my manager training program... 🤷🏻‍♀️]
Oof. Does anyone else have some heartburn?
I mean, LOOK AT THAT SCARF?!
Oh, and yeah, my apparently thoughtful questions in fields ranging from neuroscience to business landed me in a very different demographic.
You’ve done me dirty, ChatGPT.
I tried to explain that it got this assignment terribly wrong: not only were the gender and age incorrect, but I also don't necessarily dress as someone might expect for my career.
Then, I got this:
[I definitely own that jacket, but NO.]
Seeing my male alter ego felt more significant than just getting back a bizarre AI-generated image. It reminded me that he is the person assumed, by default, to be a self-employed leadership trainer and public speaker.
Interestingly, in terms of gender, many sources show slightly more women than men in corporate training and public speaking. (Please note the limitation that I haven’t seen data that includes employees who identify outside of "male" or "female" for these roles.) When it comes to race, every source I've seen shows "white" as the most common identity, by far, for these professions. (Work to come!)
So, what does it mean that my identity has been assumed to be that of the sharply dressed gentleman above? Is it just a matter of my name, "Jesse," or is bias showing up in AI-generated images? [Cue: rabbit hole.]
According to a thought-provoking piece in Science News Explores: "[AI] paints a world that is more biased than reality." Researchers (Luccioni, Akiki, Mitchell, and Jernite, 2023) have noted that, at least for DALL-E 2 and Stable Diffusion v1.4 and v2, "while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents." As AI becomes further ingrained in all industries, how do we more critically engage with the results we're given? I decided to ask ChatGPT itself what questions we need to be considering in the learning and development space. Here's what it suggested we reflect on:
How might we educate employees about the limitations and biases of AI, especially when these technologies are used in training or career development settings?
Are we providing opportunities for learners to discuss and reflect on the biases they might encounter in training materials, including those generated or influenced by AI?
Are we mindful of how AI-driven biases might affect marginalized groups within our organization, and what support systems can we put in place to address this?
How do we foster a learning culture that values diversity and encourages questioning and reshaping biased assumptions within both AI systems and human-led training content?
What processes can we implement to keep up with advancements in AI ethics and bias reduction, ensuring our L&D programs remain aligned with best practices in inclusivity?
We've already established that AI has its shortcomings, so what do you think? What else do we need to consider?