
# Chatterbots

By Joe Arney


As an expert in generative artificial intelligence and ethics, when Casey Fiesler interacts with brands or commenters online, she’s very attuned to whether the person on the other end might actually be a chatbot.

More and more, regular internet users are having the same doubts. That’s because companies are increasingly turning to chatbots to solve problems and manage customer engagement, or simply because everyone else is doing it.


“I’ve heard from multiple people on social media who say the big conversations they have at work are about how to do A.I., because everyone feels like they have to integrate this new technology as quickly as possible, even if it doesn’t make sense,” said Fiesler, associate professor of information science at CMCI.

Chatbots have their use, Fiesler said. They can spark brainstorming sessions for a writer struggling with a draft, or create non-player characters in tabletop role-playing games. The problem, she said, “is the idea that chatbots and generative A.I. need to be doing everything, everywhere. Which is absurd.”

Don’t think so? Consider that chatbots have encouraged small-business owners to break the law (City of New York), advised using glue to help cheese stick to pizza (Google) and impersonated parents to offer reassurance about local schools (Meta).

“In the Meta case, to give them some credit, the account that responded to the parent was clearly labeled as being A.I.,” Fiesler said. “But at the same time, the idea that it might impersonate a parent should have been anticipated, because large language models are not information retrieval systems; they’re ‘what word comes next?’ systems. So, it’s inevitable you’re going to have some wrong responses.”

Social media interactions that should be between people are one case where Fiesler said chatbots should be off-limits; another is dispensing legal, medical or business advice. That’s not even considering the complex social and ethical concerns about A.I. (misinformation, labor rights, intellectual property, energy consumption) that are getting short shrift by an industry waxing poetic about the golden age this technology promises to usher in.

But moving slowly and asking thoughtful questions is not a strength of Silicon Valley, and companies fearful of being left behind are missing Fieslerā€™s bigger point about ethical debt.

“There’s this attitude of do this now, and deal with the consequences after we see what goes wrong,” she said. “But very often, the harm is already done.

“It blows my mind that these huge tech companies, with all their resources, could be surprised that all these things keep happening. Whereas when I describe some of these A.I. use cases to undergrads in my ethics class, they come up with all the things that could go wrong.”