Bias, Obscurity, Absent Oversight – Are We Ready for AI Chatbots?
The practical applications of ChatGPT left academic institutions and companies scrambling to develop adoption or preventive guidelines, a social conundrum that is about to expand as technology giants announce the integration of artificial intelligence into their search engines. These advances come despite the known presence of biases, a comparative lack of transparency and the absence of a regulatory framework, warn academic and non-profit researchers.
“We always have these new technologies thrown at us without any control or an educational framework to know how to use them,” says Giada Pistilli, Principal Ethicist, Hugging Face, who worries about how quickly companies are adopting AI.
Most of the data used to train AI systems and other machine-learning models comes from human-generated, and therefore biased, internet content. Indicative of this widespread phenomenon are two investigative studies of a single widely used AI model, CLIP, which found the system “was incapable of performing without bias, and often acted out significant and disturbing stereotypes,” according to a study led by Johns Hopkins University. In short, companies' race to commercialize robotics threatens to bake these flaws into the foundation of robot design.
Extrapolating from this risk to consider AI chatbots' propensity to hallucinate, or fabricate answers to questions they cannot actually answer, the potential danger of misinformation becomes perceptible. Threatening to further compound this pitfall is humans’ tendency to place greater trust in human-sounding AI chatbots, according to a study by the University of Florida. The problem is consolidated even further by the comparative lack of transparency in AI chatbot answers, which may draw on the Encyclopedia Britannica just as readily as on a gossip blog, with no clear distinction between the two.
“It [is] completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation,” says Aleksandra Urman, Computational Social Scientist, University of Zurich.
Altogether, the potential susceptibilities of AI chatbots suggest that unintended risks may lie on the horizon, yet to fully emerge. Moreover, companies’ accelerated push to integrate these incomplete technologies into people’s daily lives calls for legislators to mobilize, providing oversight and developing proper legislation to curb foreseeable risks before they cause irreparable damage. In parallel, continued research and development efforts are crucial to advancing AI chatbot applications while circumventing such risks.