Artificial intelligence (AI) is making its way from high-tech labs and Hollywood plots into the hands of the general population. ChatGPT, the text generation tool, hardly needs an introduction, and AI art generators (such as Craiyon and OpenAI's DALL·E) are hot on its heels in popularity. Inputting nonsensical prompts and receiving funny clip art in return is a fun way to spend an afternoon.
However, while you’re using AI art generators for laughs, cybercriminals are using the same technology to trick people with sensationalist fake news, catfish dating profiles, and malicious impersonations. Sophisticated AI-generated art can be hard to spot, but here are some signs that you may be viewing a suspicious image or dealing with a criminal behind an AI-generated profile.
What are AI Art Generators and Deepfakes?
To better understand the cyberthreats each poses, here are some quick definitions:
AI art generators. Art generators run on generative AI, a type of AI trained on billions of examples of existing artwork. When someone enters a prompt, the AI draws on the patterns it learned from that vast library to produce an image it predicts will best match the prompt. AI art is a hot topic of debate in the art world because none of the works it creates is technically original: each is derived from the works of countless artists, most of whom never gave the program permission to use their creations.
Deepfake. A deepfake is a manipulation of existing photos and videos of real people. The manipulation can create an entirely new person from a composite of real people, or it can alter the original subject to appear to be doing something they never did.
AI art and deepfakes are not technologies confined to the dark web. Anyone can download AI art or face-swap apps, such as FaceApp and Reface, from mainstream app stores. Because the technology is not illegal and has many innocent uses, it is difficult to regulate.
How Are People Using AI Art Maliciously?
It’s harmless to use AI art to create a cover photo for your social media profile or to pair with a blog post. Even so, it’s best to be transparent with your audience and include a disclaimer or caption noting that the image is not original artwork. AI art becomes malicious when people use it to deliberately deceive others for financial gain.
Catfish can use deepfaked profile pictures and videos to convince their targets that they are genuinely looking for love. Revealing their true face and identity would put a criminal catfish at risk of detection, so they may use other people’s photos or deepfake an entire photo library.
Those who spread fake news may also enlist AI-generated art or deepfakes to add “credibility” to their conspiracy theories. When they pair exciting headlines with an image that, at a glance, appears to prove the story’s legitimacy, people may be more likely to share and spread it. Fake news is destructive to society because of the strong negative emotions it can stir in so many people. The resulting hysteria or anger can, in some cases, lead to violence.
Finally, some criminals can use deepfakes to spoof facial recognition and gain entry to sensitive online accounts. To prevent someone from deepfaking their way into your accounts, protect them with multifactor authentication, which requires more than one form of identification to log in. These additional factors can include a one-time code sent to your cellphone, a password, answers to security questions, or a fingerprint scan alongside face ID.
3 Ways to Spot Fake Photos
Before you start an online relationship or share a sensational news story on social media, scrutinize photos with these three tips to pick out malicious AI-generated art and deepfakes.
1. Check the context around the picture.
Fake photos usually don’t appear on their own; there is often text or a larger article around them. Check the text for typos, poor grammar, and generally poor composition, as phishers are known for their sloppy writing. AI-generated text is harder to detect because its grammar and spelling are often correct; however, the sentences may seem disjointed or inconsistent.
2. Review the claim.
Does the image seem too strange to be true? Too good to be true? Expand this generation’s rule of “Don’t believe everything you read on the internet” to include “Don’t believe everything you see on the internet.” If a news story makes an extraordinary claim, search for the headline elsewhere. If the event is truly noteworthy, at least one other site will have reported on it.
3. Check for distortions.
AI technology often renders hands with a finger or two too many, and deepfakes tend to produce eyes that look lifeless or hollow. Shadows may also appear in places where they wouldn’t naturally fall, and skin tones may look uneven. In deepfaked videos, the voice and facial expressions may not line up exactly, making the subject look robotic and stiff.
Strengthen Your Online Safety With McAfee
Fake photos are hard to spot, and they will likely become even more realistic as the technology improves. Awareness of emerging AI threats better prepares you to take control of your online life. There are quizzes online that compare deepfakes and AI art with real people and human-created artwork. The next time you have ten minutes to spare, consider taking one and reviewing your mistakes so you can better spot malicious counterfeit art in the future.
To gain more confidence in the security of your online life, partner with McAfee. McAfee+ Ultimate is an all-in-one privacy, identity, and device security service. Protect up to six members of your family with a family plan, and receive up to $2 million in identity theft coverage. Work with McAfee to stop any threat that appears on your watch.