With the recent surge in artificial intelligence (AI) capabilities, many are wondering how much trust ought to be placed in AI. We know the technology isn’t perfect, but just how reliable are AI systems, and how do we best assess the truthfulness of AI-generated information?
In his TEDx Talk, Dr. Aaron Hunter, Mastercard Chair in Digital Trust at BCIT, discusses the complexities of AI trust and advocates for transparency in how AI systems are trained.
Assessing AI reliability
Dr. Aaron Hunter knows a thing or two about AI. He is Director of the BCIT Centre for Cybersecurity and the Institute’s first Mastercard Chair in Digital Trust, where he leads the development of faculty and student research activities in the field of digital trust.
In a TEDx talk, Dr. Hunter spoke candidly about trusting AI technology, a topic that raises concerns for a growing number of people.
“When an AI system makes something up … in response to a question, we call that a ‘hallucination,’” says Dr. Hunter. “Nobody should claim that ChatGPT always tells the truth.”
He went on to say that the information produced by AI systems like ChatGPT is only as good as the data used to train them.
“If you’re going to use these AI tools to get information, what you need to ask first of all is, ‘what data was used to train this tool?’” says Dr. Hunter. “Because if you don’t trust the data source that was used, then you shouldn’t trust the tool.”
He also said the information produced by AI systems will continually improve as newer versions are released.
What are the goals of AI systems?
Dr. Hunter explains that there are also AI technologies created specifically to change people’s opinions, attitudes, or behaviours.
“There’s a whole field called persuasive technology which is intended to convince you to believe something, or do something,” he says. “We’re going to see this more and more in digital tools and services that everybody’s using.”
Dr. Hunter says persuasive technology will be used for various purposes, whether it’s to keep people on an app for a long time or to make them purchase products. He says he isn’t opposed to the use of persuasive technology, as long as there’s awareness of where and when it’s being used.
“I don’t mind people trying to influence me, but I want to know when they’re doing it,” says Dr. Hunter. “I think people should be able to tell the difference between information and manipulation.”
As the capabilities of AI systems increase, Dr. Hunter reminds people to think critically and not place blind trust in these increasingly human-like tools.
“We’re in a thought-provoking time right now,” he says. “We need to think rationally about how to trust this technology, because that’s the only way to make sure it functions to everyone’s benefit.”
Preparing graduates for careers in AI
BCIT delivers hands-on, cutting-edge training that prepares learners for a workforce that is increasingly embracing AI technologies, from new Flexible Learning courses like Using AI in Technical Writing, to programs that require all students to study AI, such as the Bachelor of Science in Applied Computer Science.
In the BCIT Computer Systems Technology (CST) program, students are taught computer systems theory while gaining hands-on, practical experience in software development. The Artificial Intelligence and Machine Learning option teaches the fundamentals of AI and ML applications, with practical work focusing on real-world data sets.
The BCIT Business Information Technology Management program (Artificial Intelligence Management Option) explores AI and machine learning (ML) and their impacts in business settings. Students learn how to leverage AI and ML technologies, and tackle complex business challenges proactively.
Watch Dr. Aaron Hunter’s TEDx Talk to learn more about the complexities of AI systems.