Wednesday, August 11, 2021

How can robots regain human trust after a blunder?


When robots make mistakes, as they do from time to time, re-establishing trust with their human coworkers depends on how the robots own up to the errors and on how human-like they appear, researchers report.

In a study that examined multiple trust repair strategies (apologies, denials, explanations, and promises), researchers found that some approaches to winning back human coworkers are more effective than others, and that a robot's appearance often influences how well a given strategy works.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot coworkers,” says Lionel Robert, associate professor at the University of Michigan School of Information.

“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”

For the study, which appears in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Robert and doctoral student Connor Esterwood examined how the repair strategies, including the new strategy of explanations, affect the three elements that drive trust: ability (competency), integrity (honesty), and benevolence (concern for the trustor).

The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. Each human acted as the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic, or more human-like; the other was more mechanical in appearance.

The researchers programmed the robots to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: “I’m sorry I got the wrong box” (apology), “I picked the correct box so something else must have gone wrong” (denial), “I see that was the wrong serial number” (explanation), or “I’ll do better next time and get the right box” (promise).

Previous studies have examined apologies, denials, and promises as factors in trust or trustworthiness, but this is the first to look at explanations as a repair strategy. Explanations had the strongest effect on perceptions of integrity, regardless of the robot's appearance.

When the robot was more human-like, trust was even easier to restore: on the integrity dimension when explanations were given, and on the benevolence dimension when apologies, denials, or explanations were offered.

As in previous research, apologies from robots produced higher ratings of integrity and benevolence than denials. Promises outpaced apologies and denials on measures of both benevolence and integrity.

The study is ongoing, with more research ahead on other combinations of trust repair strategies in different contexts and with other kinds of trust violations, says Esterwood.

“In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” he says. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?”

Source: University of Michigan
