Technology

A tech tip: How to detect AI-generated deepfake images

LONDON – AI fakes are quickly becoming one of the biggest problems we face online. Deceptive images, videos and audio are proliferating as a result of the rise and misuse of generative artificial intelligence tools.

With AI deepfakes popping up almost daily, showing everyone from Taylor Swift to Donald Trump to Katy Perry at the Met Gala, it’s becoming increasingly difficult to distinguish what’s real from what’s not. Video and image generators like DALL-E, Midjourney and OpenAI’s Sora make it easy for even people with no technical knowledge to create deepfakes – just type in a request and the system will spit it out.

These fake images may seem harmless. But they can be used for scams and identity theft, or for propaganda and election manipulation.

How to avoid being fooled by deepfakes:

In the early days of deepfakes, the technology was far from perfect and often left telltale signs of manipulation. Fact-checkers have pointed out images with obvious errors, such as hands with six fingers or glasses with differently shaped lenses.

But as AI has improved, it has become much more difficult. Some common advice — such as looking for unnatural blinking patterns in people in deepfake videos — no longer applies, said Henry Ajder, founder of consulting firm Latent Space Advisory and a leading generative AI expert.

Still, there are some things to watch out for, he said.

Many AI deepfake photos, particularly of people, have an electronic glow, “an aesthetic kind of smoothing effect” that makes skin “look incredibly polished,” Ajder said.

However, he cautioned that creative prompting can sometimes eliminate this and many other signs of AI manipulation.
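That kind of over-smoothing can, in principle, be roughed out in a few lines of code. The snippet below is a minimal illustrative sketch, not a validated detector: it uses Pillow and NumPy to measure how much fine pixel-level texture an image retains, and an unusually low score hints at the polished look Ajder describes. The file name and the cutoff value are placeholders.

```python
# Minimal sketch: flag images with unusually little fine texture.
# Assumes Pillow and NumPy; the cutoff below is a placeholder, not a
# validated threshold, and "photo.jpg" is a stand-in file name.
import numpy as np
from PIL import Image

def high_frequency_energy(path: str) -> float:
    """Mean absolute difference between neighboring pixels,
    a rough proxy for fine texture (lower = smoother)."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    dx = np.abs(np.diff(gray, axis=1)).mean()
    dy = np.abs(np.diff(gray, axis=0)).mean()
    return (dx + dy) / 2.0

energy = high_frequency_energy("photo.jpg")
if energy < 2.0:  # hypothetical cutoff; real photos vary widely
    print(f"Very smooth image (energy={energy:.2f}) - worth a closer look")
else:
    print(f"Normal texture detail (energy={energy:.2f})")
```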

Check the consistency of shadows and lighting. Often the subject is clearly focused and appears convincingly lifelike, but elements in the background may not be as realistic or sophisticated.

Face swapping is one of the most common deepfake methods. Experts advise taking a close look at the edges of the face. Does the facial skin tone match the rest of the head or body? Are the edges of the face sharp or blurry?
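These two checks, a sharp subject against a soft background and suspicious face edges, can also be quantified. The sketch below is offered purely as an illustration, assuming opencv-python is installed: it detects a face with OpenCV’s bundled Haar cascade and compares a standard sharpness proxy for the face crop against the image as a whole. Ordinary camera depth of field can produce the same mismatch, so a difference is a prompt to look closer, not evidence.

```python
# Illustrative sketch only: compares face sharpness with the rest of
# the image. Assumes opencv-python; "photo.jpg" is a stand-in file
# name, and a mismatch is a hint, not proof of manipulation.
import cv2

def sharpness(gray_region):
    """Variance of the Laplacian: a standard blur/sharpness proxy."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for (x, y, w, h) in cascade.detectMultiScale(img, scaleFactor=1.1,
                                             minNeighbors=5):
    face = img[y:y + h, x:x + w]
    print(f"face sharpness={sharpness(face):.1f}, "
          f"whole image={sharpness(img):.1f}")
```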

If you suspect that a video of a person speaking has been doctored, look at their mouth. Do their lip movements match the audio perfectly?

Ajder suggests looking at the teeth. Are they clear or blurry and somehow inconsistent with their appearance in real life?

Cybersecurity firm Norton says the algorithms may not yet be sophisticated enough to generate individual teeth. Therefore, the lack of outlines for individual teeth could be a clue.

Sometimes context is important. Think about whether what you see is plausible.

The journalism website Poynter points out that if you see a public figure doing something that seems “exaggerated, unrealistic or out of character,” it could be a deepfake.

For example, would the Pope really wear a luxurious puffer jacket like the one depicted in an infamous fake photo? If he did, wouldn’t more photos or videos have been published by reputable sources?

The Met Gala was all about over-the-top costumes, which added to the confusion. However, such big-name events are usually covered by officially accredited photographers who produce numerous photos that can help with verification. One clue that the Perry image was fake is the carpet on the stairs, which some eagle-eyed social media users said was from the 2018 event.

Another approach is to use AI to combat AI.

OpenAI announced Tuesday that it is releasing a tool to detect content created with DALL-E 3, the latest version of its AI image generator. Microsoft has developed an authentication tool that can analyze photos or videos and provide a confidence score on whether they have been manipulated. Chipmaker Intel’s FakeCatcher uses algorithms to analyze an image’s pixels to determine whether it is real or fake.

There are online tools that promise to detect fakes if you upload a file or include a link to the suspicious material. However, some of them, such as OpenAI’s tool and Microsoft’s authenticator, are only available to select partners and not to the public. That’s partly because researchers don’t want to tip off bad actors and give them a greater advantage in the deepfake arms race.
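For illustration only, here is roughly what a client for such an upload-based checker could look like. Every specific detail in it, the URL, the upload field and the JSON response key, is invented, precisely because the vendor tools named above do not expose a public API; only the use of Python’s `requests` library is real.

```python
# Purely hypothetical sketch: the endpoint URL, upload field and
# response key below are invented for illustration; none of the vendor
# tools named in this article publish an API like this.
# Assumes the `requests` library is installed.
import requests

def check_image(path: str) -> float:
    """Upload an image to a hypothetical detector and return its
    0-1 score that the image is AI-generated."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://detector.example.com/v1/analyze",  # placeholder URL
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["ai_generated_score"]  # invented response field

score = check_image("suspect.jpg")  # stand-in file name
print(f"Score that the image is AI-generated: {score:.0%}")
```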

Open access to detection tools could also give people the impression that they are “divine technologies that can outsource critical thinking for us,” when instead we need to be aware of their limitations, Ajder said.

That said, artificial intelligence is advancing rapidly, and AI models are being trained on internet data to produce ever higher-quality content with fewer errors.

That means there is no guarantee that this advice will still be valid in a year.

Experts say putting the burden of becoming digital Sherlocks on ordinary people could even be dangerous because it could give them a false sense of confidence as deepfakes become increasingly difficult to detect, even for trained eyes.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to improve its explanatory coverage of elections and democracy. For more information about AP’s Democracy Initiative, click here. The AP is solely responsible for all content.
