People trust human-generated profiles more than AI-generated ones, particularly in online marketplaces, according to a study in which researchers explored whether users trust algorithmically optimised or generated representations.
When researchers told participants they were viewing either all human-generated or all AI-generated profiles, the participants didn't seem to trust one kind more than the other: they rated the human- and AI-generated profiles about the same.
That changed when participants were informed they were viewing a mixed set of profiles. Left to decide whether the profiles they read were written by a human or an algorithm, users distrusted the ones they believed to be machine-generated.
"Participants were looking for cues that felt mechanical versus language that felt more human and emotional," said Maurice Jakesch, a doctoral student in information science at Cornell Tech in America.
"The more participants believed a profile was AI-generated, the less they tended to trust the host, even though the profiles they rated were written by the actual hosts," said a researcher.
The research team from Cornell University and Stanford University found that if everyone uses algorithmically generated profiles, users trust them. But if only some hosts choose to delegate writing to artificial intelligence, those hosts are likely to be distrusted.
As AI becomes more commonplace and powerful, foundational guidelines, ethics and practices become vital.
The study also suggests there are ways to design AI communication tools that improve trust for human users. "Design and policy guidelines and norms for using AI-mediated communication are worth exploring now," said Jakesch.
(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)