# Chatterbots
By Joe Arney
As an expert in generative artificial intelligence and ethics, when Casey Fiesler interacts with brands or commenters online, she's very attuned to whether the person on the other end might actually be a chatbot.
More and more, regular internet users are having the same doubts. That's because companies are increasingly turning to chatbots, whether to solve problems, to manage customer engagement, or simply because everyone else is doing it.
"I've heard from multiple people on social media who say the big conversations they have at work are about how to do A.I., because everyone feels like they have to integrate this new technology as quickly as possible, even if it doesn't make sense," said Fiesler, associate professor of information science at CMCI.
Chatbots have their uses, Fiesler said. They can spark brainstorming sessions for a writer struggling with a draft, or create non-player characters in tabletop role-playing games. The problem, she said, "is the idea that chatbots and generative A.I. need to be doing everything, everywhere. Which is absurd."
Don't think so? Consider that chatbots have encouraged small-business owners to break the law (City of New York), advised using glue to help cheese stick to pizza (Google) and impersonated parents to offer reassurance about local schools (Meta).
"In the Meta case, to give them some credit, the account that responded to the parent was clearly labeled as being A.I.," Fiesler said. "But at the same time, the idea that it might impersonate a parent should have been anticipated, because large language models are not information retrieval systems; they're 'what word comes next?' systems. So, it's inevitable you're going to have some wrong responses."
Social media interactions that should be between people are one case where Fiesler said chatbots should be off-limits; another is dispensing legal, medical or business advice. That's not even considering the complex social and ethical concerns about A.I., including misinformation, labor rights, intellectual property and energy consumption, that are given short shrift by an industry waxing poetic about the golden age this technology promises to usher in.
But moving slowly and asking thoughtful questions is not a strength of Silicon Valley, and companies fearful of being left behind are missing Fieslerās bigger point about ethical debt.
"There's this attitude of 'do this now, and deal with the consequences after we see what goes wrong,'" she said. "But very often, the harm is already done.
"It blows my mind that these huge tech companies, with all their resources, could be surprised that all these things keep happening. Whereas when I describe some of these A.I. use cases to undergrads in my ethics class, they come up with all the things that could go wrong."