March 8, 2024
By Kiara Rasmussen
Staff Writer
Most people have heard the saying, “Nobody is perfect.” But what about the information provided by artificial intelligence?
According to a study by Vectara, a company that provides artificial intelligence-powered search technology, the public should not be so quick to trust AI chatbots. The study found that these computer programs invent answers between 3% and 27% of the time. Experts call these fabricated answers “hallucinations.”
Computer science teacher Ms. Orth said she was surprised to learn that AI makes errors as rarely as 3% of the time.
Orth said that because AI collects information from the internet without fact-checking it, the technology will never be 100% accurate. She pointed to another limitation, tone, by saying the same sentence twice: the first time it expressed sincerity, the second, sarcasm.
“It’s the tone of my voice that you get [the meaning] from, not the words that I said, because the words were exactly the same,” Orth said. “There’s stuff like that [which the chatbot] ChatGPT will never be able to pick up on.”
According to TechTarget, a technology media company that covers AI development, AI systems draw on text, images and other media gathered from the web through browsers such as Google Chrome and Microsoft Edge. They produce responses by collecting data, analyzing patterns and self-correcting.
Orth said chatbots will soon be ubiquitous.
“When Google first came out, it was scary. People didn’t want to use it,” Orth said. “You didn’t know if you could rely on that information, and now Google is a verb. We Google things.”
According to Google, its new Gemini Ultra model scored 90% on the Massive Multitask Language Understanding (MMLU) benchmark, which tests world knowledge and problem-solving skills across 57 subjects including ethics, history, law, mathematics, medicine and physics. Google says Gemini Ultra is the first AI model to surpass human experts on the benchmark.
Business and technology teacher Mrs. Huntington said she is concerned about students sharing too much personal information online, such as their names, birth dates, locations, high schools and friends.
“The questions are typically disguised as innocent questions on fun and entertaining surveys to find out which Disney princess… or which character from [the sitcom] ‘Friends’ you are most similar to, or even to get a discount on purchases,” Huntington said.
Huntington said she does not currently use AI with her students but has some projects in mind.
“I would like to teach my students how best to use AI as a tool to improve their lives, both personally and academically,” Huntington said.
Senior Joshua Machcinski said he uses the AskAI app on his phone but favors ChatGPT on his laptop.
“I will try to use AI as a last resort in the future,” Machcinski said. “I do try to stay away from it and not lean on it 24/7.”
He said last year he had a poor experience using AskAI when the app, which debuted in March 2023, was still in its early stages of development.
Machcinski said the chatbot produced incorrect information when he asked it about a specific scene in a novel he had been assigned in English class.
“I was trying to find the part of the novel that the quote happened in for my essay, and when submitting the question to the AI, it told me some of [the] context that the quote had, but it carried on with its own unique events that didn’t actually happen,” Machcinski said.
He said that even though AI can be useful, it can also be dangerous, much as it is depicted in futuristic movies about robots.
“Even from last year, AI has advanced so much up until now. I don’t even know the full capabilities it has,” Machcinski said.