Deepfake videos are a threat to the general public
How many people realized too late that it was a scam? We fear the number is very high.
Deepfake recordings existed even before the advent of artificial intelligence in the form we know it today (ChatGPT, Bard, Bing, Midjourney...). But with the rise of modern generative AI models, these recordings have become frighteningly realistic. It was only a matter of time before miscreants put this advanced technology to malicious use.
They chose the best possible target. MrBeast is known for his extravagant YouTube videos in which he delights fans and passers-by with expensive cars, private jets, phones, houses and similar gifts. A scenario in which MrBeast gives away nearly free iPhone 15s is therefore not too far-fetched, and the scammers bet on exactly that. The fake MrBeast urged viewers to click on a link in the video, but behind the scenes the link was designed to steal their data.
A deepfake video of MrBeast recently surfaced on TikTok, where it has likely caused massive damage, not only to users, but also to the platform, which will now grapple with questions about how to prevent this in the future.
MrBeast isn't the only victim of AI scams. Putin's image has also been misused in the past to spread false information.
Although celebrities are more attractive targets, ordinary mortals are also at risk. BBC journalists Matthew Amroliwala and Sally Bundock experienced this firsthand when their likenesses were used to promote a financial scam. In the fake video, one of the journalists appears to interview the world's richest man, Elon Musk, about a supposedly highly profitable investment opportunity. Elon Musk has himself been a victim of deepfake videos in the past, in which he allegedly gave away his wealth and dispensed cryptocurrency advice.
Meta (formerly Facebook) has so far attached warnings about possible false information to such videos, which were flagged by the fact-checking organization Full Fact, whose mission is to verify dubious claims appearing in the news and on social media. A Meta spokesperson commented: "We do not allow such content on our platforms and have removed it immediately. We are constantly working to improve our systems and encourage anyone who sees content they believe violates our policies to report it using the in-app tools. Then we can study it and take action."
TikTok also chimed in, removing the fake MrBeast video within hours of its posting and banning the account that violated its terms of service. TikTok's policy explicitly lists this kind of "synthetic" footage as prohibited content.
The real MrBeast, meanwhile, took to the platform X (formerly Twitter) to ask whether social media platforms are prepared for the rise of such scams.
How to identify deepfake recordings?
Although classic phishing and ransomware remain the most common and most successful forms of attack for now, a period is likely coming when deepfake and AI-driven attacks become far more frequent.
Recently, even Tom Hanks had to speak out against the misuse of artificial intelligence after his likeness was stolen to promote a controversial dental plan.
AI systems will only become more advanced, and as they develop, concerns about how to recognize such scams will grow with them. The golden rule is to be suspicious whenever you come across a video, post or message in which someone offers you something for free, especially a product or service that usually costs a significant amount.
MrBeast's case somewhat undermines this rule. Precisely because of his real videos, in which participants genuinely compete for attractive prizes, the deepfake was hard to recognize as a scam.
We have no choice but to stay careful and verify every piece of information beforehand. The most observant users will also notice other suspicious signs that can give a scam away.
In MrBeast's clip (you can watch it at bit.ly/DeepfakeRN), the attackers overlaid the famous YouTuber's name together with the blue verification check mark familiar from official profiles. Frequent TikTok users will know that this placement is unnecessary, since every video already displays the uploader's name below the TikTok logo.
In the case of the BBC journalists, the reasons for suspicion were more linguistic in nature. The presenter mispronounces the number 15 and utters the word "project" with an unusual intonation. There are also minor grammatical errors that are easy to miss (for example, the incorrect use of "was" and "were"). As the recordings become ever more realistic, these kinds of errors will matter for detecting fraud. In the same clip, it is easy to overlook that Elon Musk has an extra eye above his left one. Eyes, fingers and similar details are often distorted in AI-generated footage, and for viewers such flaws are an opportunity to spot the deception in time.
Legal experts have also weighed in, and their opinions on banning such recordings are divided: an outright ban on deepfake technology would also harm legitimate content in films and series created with special effects.
Cover image: Image by rawpixel.com on Freepik