From Prime Minister Mark Carney appearing to announce a ban on vehicles made before 2000 to Premier Scott Moe seemingly promoting cryptocurrency investments, deepfake videos created by artificial intelligence are becoming increasingly prominent.
Dr. Devan Mescall has been a professor at the University of Saskatchewan’s Edwards School of Business for 15 years.
He has served in the accounting department, but over the last couple of years has developed deep expertise in AI to help Saskatchewan businesses think about how they can adapt and use the technology.
Mescall completed an AI program at MIT and co-authored the first University of Saskatchewan research to be published in the Harvard Business Review, a paper about AI implementation.
He said the technology began with the development of generative adversarial networks, known as GANs. One program would generate a fake image, while a second, discriminator program would evaluate the image to determine whether it was real or fake.
Mescall explained that these two programs would feed off each other until the image was refined enough to fool the discriminator into thinking it was real.
“By doing that it improves the image, improves the image and improves the image until the generator does fake out the discriminator and then you have something that is of a quality that the human eye can also be fooled by,” he said.
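As a rough illustration of that back-and-forth, the sketch below pits a tiny generator against a tiny discriminator on a toy one-dimensional dataset rather than real images; the specific networks, settings and code are assumptions made for illustration, not details drawn from Mescall's work.

```python
# A minimal sketch of the generator-versus-discriminator loop described above,
# trained on a toy one-dimensional Gaussian instead of images. The network
# sizes, learning rates and step count are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Real" data the generator must learn to imitate: a Gaussian around 4.0.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and generated ones 0.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator label its fakes as real,
    #    which is the feedback loop that keeps improving the fakes.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator's output mean should drift toward 4.0 as its fakes improve.
print(generator(torch.randn(1000, 8)).mean().item())
```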
That was the base technology. However, Mescall said AI has since advanced to a “diffusion” model, which uses different mechanisms to refine images until it produces a high-quality and stable result.
He said the diffusion model makes it easier for anyone to create a fake image or video.
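The sketch below gives a heavily simplified picture of that refinement idea, starting from pure noise and removing a little of it at each step; the stand-in noise predictor is an assumption made for illustration, since real diffusion systems rely on large trained networks rather than anything this simple.

```python
# A heavily simplified illustration of the "refine from noise" idea behind
# diffusion models. The tiny noise predictor below is a stand-in assumption;
# real systems train large neural networks to estimate the noise at each step.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "clean image" that stands in for what a trained model has learned.
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0

def predict_noise(x, t):
    # Stand-in denoiser: estimates the noise in x by comparing it to the target.
    # A real diffusion model infers this without ever seeing the target.
    return x - target

steps = 50
x = rng.normal(size=(8, 8))           # start from pure noise
for t in range(steps, 0, -1):
    x = x - predict_noise(x, t) / t   # remove a fraction of the estimated noise

# The refined result should end up very close to the "clean image".
print(np.abs(x - target).mean())
```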

“Each iteration of AI over time is doing two things: making the images more and more realistic and harder to identify as fake, but also making it more accessible and easier for a non-professional — or making it very easy and accessible for anyone — to create these images,” Mescall said.
This technology has also been developing rapidly. According to Mescall, the first GAN models appeared around 2014, and early deepfakes emerged in 2018 and 2019. By 2022, diffusion models were introduced, and that’s when deepfakes really took off.
In fact, the newest models allow users to make deepfakes with very little data.
“For example, if you wanted to create a deepfake putting an individual in, you would need actual video of them or video from different perspectives to get it right. Well now, it's getting to the point where, with a single photo input, they can generate an entire full-body synthesis,” said Mescall.
The problem is that the technology is advancing to the point where telltale signs — such as blurring around the eyes and the edges of a person — are becoming harder and harder to detect.
Mescall said other telltale signs of deepfake videos include blurring around the hands and hairline, irregular blinking patterns and misplaced shadows.
However, he said the best defence against deepfakes is to watch videos critically and ask yourself: is this reasonable?
“If we think about the one circulating with Premier Moe, the context is that Premier Moe is unlikely to be the person to try and get you to invest in crypto. Thinking through those things is important — and that's challenging because often, especially with scams, they are trying to take advantage of things people want to believe,” he said.
He added that it’s not just individuals who can be fooled by deepfakes. Mescall said that last year, a deepfake video of a multinational corporation’s chief financial officer led an employee to transfer $25 million out of the company.
“This is not an easy thing, but it is something that all of us have to be vigilant about,” he said.
When it comes to investing, he advises checking www.aretheyregistered.ca, a national investment registry.