A new study explores how people with autism interact with ChatGPT and similar artificial intelligence tools for help and advice as they confront problems in their workplaces.
The findings show such systems sometimes dispense questionable advice. And controversy remains within the autism community as to whether this use of chatbots is even a good idea.
“What we found is there are people with autism who are already using ChatGPT to ask questions that we think ChatGPT is partly well-suited and partly poorly suited for,” says Andrew Begel, an associate professor in the department of software and societal systems and the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University. “For instance, they might ask: ‘How do I make friends at work?'”
Begel heads the VariAbility Lab, which seeks to develop workplaces where all people, including those with disabilities and who are neurodivergent, can successfully work together.
Unemployment and underemployment are problems for many adults with autism, and many workplaces either lack, or don't prioritize, the resources to help employees with autism and their coworkers overcome social or communication problems as they arise.
To better understand how large language models (LLMs) could be used to address this shortcoming, Begel and his team recruited 11 people with autism to test online advice from two sources—a chatbot based on OpenAI’s GPT-4, and what looked to the participants like a second chatbot but was really a human.
Somewhat surprisingly, the users overwhelmingly preferred the real chatbot to the human adviser in disguise. The difference wasn't the quality of the advice but the way it was dispensed, Begel says. "The participants prioritized getting quick and easy-to-digest answers."
The chatbot provided answers that were black and white, without a lot of subtlety and usually in the form of bullets. The counselor, by contrast, often asked questions about what the user wanted to do or why they wanted to do it. Most users preferred not to engage in such back-and-forth, Begel says.
Participants liked the concept of a chatbot. One explained: “I think, honestly, with my workplace… it’s the only thing I trust because not every company or business is inclusive.”
But when a professional who specializes in supporting job seekers with autism evaluated the answers, she found that some of the LLM’s answers weren’t helpful. For instance, when one user asked for advice on making friends, the chatbot suggested the user just walk up to people and start talking with them. The problem, of course, is that a person with autism usually doesn’t feel comfortable doing that, Begel says.
It’s possible that a chatbot trained specifically to address the problems of people with autism might be able to avoid dispensing bad advice, but not everyone in the autism community is likely to embrace it, Begel says.
While some might see it as a practical tool for supporting workers with autism, others see it as yet another instance of expecting people whose brains work a bit differently from most to accommodate everyone else.
"There's this huge debate over whose perspectives we privilege when we build technology without talking to people," Begel says. "Is this privileging the neurotypical perspective of 'This is how I want people with autism to behave in front of me?' Or is it privileging the person with autism's wishes that 'I want to behave the way I am,' or 'I want to get along and make sure others like me and don't hate me?'"
At heart, it’s a question of whether people with autism are given a say in research that is intended to help them. It’s also an issue explored in another CHI paper, on which Begel is a coauthor with Naba Rizvi and other researchers at the University of California, San Diego.
In that study, researchers analyzed 142 papers published between 2016 and 2022 on developing robots to help people with autism. They found that 90% of this human-robot interaction research did not include the perspectives of people with autism. One result, Begel says, was the development of a lot of assistive technology that people with autism didn’t necessarily want, while some of their needs went unaddressed.
“We noticed, for instance, that most of the interactive robots designed for people with autism were nonhuman, such as dinosaurs or dogs,” Begel says. “Are people with autism so deficient in their own humanity that they don’t deserve humanoid robots?”
Technology can certainly contribute to a better understanding of how people with and without autism interact. For instance, Begel is collaborating with colleagues at the University of Maryland on a project using AI to analyze conversations between these two groups. The AI can help identify gaps in understanding by either or both of the speakers that could result in jokes falling flat or creating the perception that someone is being dishonest.
Technology could also help speakers prevent or repair these conversational problems, Begel says, and the researchers are seeking input from a large group of people with autism to get their opinion on the kind of help they would like to see.
“We’ve built a video calling tool to which we’ve attached this AI,” says Begel, who has also developed an Autism Advisory Board to ensure that people with autism have a say in which projects his lab should pursue. “One possible intervention might be a button on this tool that says ‘Sorry, I didn’t hear you. Can you please repeat your question?’ when I don’t feel like saying that out loud. Or maybe there’s a button that says, ‘I don’t understand.’ Or even a tool that could summarize the meeting agenda so you can help orient your teammates when you say, ‘I’d like to go back to the first topic we spoke about.'”
The team presented the study results at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2024) last month in Honolulu.
Source: Carnegie Mellon University
The post ChatGPT may give bad workplace advice to people with autism appeared first on Futurity.