Deepfake: Scary Future for Scams in 2020 [and Beyond]

October 4th, 2023
Guides, Scams & Fraud

How important is your image to you? Not only in the broader sense, such as how you are viewed by others, but also your physical representation – your face, smile, hair, body, and other physical features. When you post a selfie or video online, do you consider it your personal property, since it contains you? What if scammers could create an entire world using manipulated video, called deepfake?

While deepfake is a word you may not have heard before, you will never forget what it is once you learn more. It is the far more sinister cousin of “fake news,” as it is both innovative and terrifying. And while “fake news” is often a politically loaded term for junk news, deepfake strikes at everyone equally, regardless of their place on the political spectrum.

One thing is for sure:

It will change information technology on the web as we know it. It challenges the way we receive new information and will become one of the hottest, most controversial terms around in 2021 and beyond.

Be one of the first to understand the frightening power of deepfakes and how they harness artificial intelligence to create images that are virtually indistinguishable from real ones. Also, learn how deepfake videos can trick you out of money, frame innocent people as criminals, and fuel a sick world of pornography (even child pornography) across the Internet and the dark web.

So, What is Deepfake?


Deepfake, as a term, is a fusion of “deep learning” and “fake.” At its core, it is a misinformation technique that relies on your own eyes to deceive you: technology-fueled deceit built on artificial intelligence-based human image synthesis. An algorithm creates fake videos convincing enough that even experts struggle to determine whether they are authentic.


This technology seems like a creative godsend for filmmakers, as it gives them a way to alter video images and create fantasy without the constraints of reality. Not surprisingly, it is used in Hollywood quite frequently. One famous example is when Forrest Gump “meets” John F. Kennedy in the film “Forrest Gump.”

However, while proponents of deepfake find its methods creative and inspiring (which is why it has succeeded in moviemaking), to others it represents the end of “seeing is believing.” This is because, when a deepfake is created, existing images and videos are superimposed onto stock or source images or video, a process far more sophisticated than a simple Photoshop edit.

Deepfake relies on a machine-learning architecture called a GAN (Generative Adversarial Network). If your mind isn’t already boggled by the concept, the ramifications soon become clear: deepfake can create videos that seem to show a person participating in activities, or attending events, that never actually happened.

Because it produces believable fake video, it can appear to show someone saying something they never said, whether political, violent, embarrassing, sexual, threatening, or libelous. The result can be fake video, propaganda, misinformation, terrorism, hoaxes, threats, fake porn, revenge porn, fake cheating videos, and more.

The Origins of Deepfake Technology


It’s hard to read about any technological industry without hearing the word “artificial intelligence.” Most of us see both perks and drawbacks to an increasingly electronic world that relies on established and emerging smart technology. Deepfake fits into this paradigm perfectly.

The goal of deep learning is to create processes that mimic the skills and tasks our human brains perform naturally. These processes aim to recreate, and even transcend, human ability through artificial neural networks fueled by algorithms.

If you’ve ever used Snapchat or similar applications, you’ve probably tried Face Swap. In a way, swapping “faces” and merging photographs is a simplified version of deepfake. Before we discuss GANs, or Generative Adversarial Networks (the advanced form of this technology), let’s examine the last blockbuster film you saw.

To movie makers, deepfake technology is helpful and needed. The goal in cinema is to create believable situations for the actors.

Let’s say an actor passes away (RIP) before filming ends on a multimillion-dollar movie. Instead of scrapping the entire project, reshooting every scene with a new actor, or making significant script changes, the filmmakers can use deepfake technology to finalize a few unfinished scenes, making it appear as if the actor was in the background or even speaking.

More generally, deepfake’s generative adversarial networks (GANs) are deep architectures made up of two neural nets pitted against one another (which is why they are called “adversarial”).

GANs originated in 2014, when Ian Goodfellow and Yoshua Bengio, of the University of Montreal, released a paper describing them. The development caught the attention of social network executives like the Director of Facebook AI Research, Yann LeCun.

Since then, he has spoken many times about adversarial training. In 2018, he even posted his thoughts on Quora, referring to the new developments in deep learning (GANs) as “opening the door to an entire world of possibilities.”

How Does Deepfake Software Work?


To a beginner exploring this technology in layman’s terms, it can boggle the mind. The basic idea behind a GAN is to train two neural networks simultaneously: the Discriminator (D) and the Generator (G). The Discriminator receives an image as input and determines whether it looks authentic and “natural,” even though the image may not depict anything that happened in real life.

The Generator trains itself to produce images (not fuzzy, garbled, or ill-fitting ones) that fool the Discriminator into deeming them “natural.” The process is called “adversarial training” because the Discriminator and the Generator work against one another: the Generator tries to minimize the Discriminator’s ability to tell real from fake, while the Discriminator tries to maximize it. This tug-of-war fine-tunes the process of creating convincing “fake” images and video.
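To make the tug-of-war concrete, here is a deliberately tiny sketch in Python. It is not an image model at all: the “generator” is a single learnable number trying to imitate one real data value, and the “discriminator” is a two-parameter logistic classifier. All names, values, and learning rates here are invented for illustration; real deepfake GANs use deep convolutional networks with millions of parameters, but the alternating adversarial updates follow the same pattern.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy setup (all values invented for illustration):
# "real" data is the constant 1.0, the Generator's entire output is one
# learnable number `theta`, and the Discriminator is d(x) = sigmoid(w*x + b).
real = 1.0
theta = 0.0        # Generator's fake sample; starts far from `real`
w, b = 0.0, 0.0    # Discriminator parameters
lr = 0.1           # learning rate for both players
history = []

for _ in range(300):
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * theta + b)

    # Discriminator step: raise d(real) and lower d(fake)
    # (gradient ascent on log d(real) + log(1 - d(fake)))
    w += lr * ((1 - d_real) * real - d_fake * theta)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: move theta toward where the updated
    # Discriminator is more easily fooled (ascent on log d(fake))
    d_fake = sigmoid(w * theta + b)
    theta += lr * (1 - d_fake) * w
    history.append(theta)

print(theta)  # theta should have drifted from 0.0 toward the real value 1.0
```

Notice that neither player is told what the “real” data looks like; the Generator improves only by exploiting the Discriminator’s feedback, which is exactly why feeding a GAN more photos of you gives it more feedback to exploit.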

Another concern for those who enjoy sharing content of themselves and their families online is that the more you share, the better a GAN will be at creating a fake “you.” The algorithm analyzes all the images and videos that are shared and uploaded; in short, the more you share, the better the fake will be at mimicking you!

Each of us has characteristic expressions. We also have “microexpressions”: brief, very nuanced facial expressions that correspond with an emotion being experienced. These are often the “tells” of our personality and genuine feelings.

According to The Science of People and their coverage of Dr. Paul Ekman, these expressions are almost identical worldwide. This means that they exist apart from culture and sociological training. The seven most common include anger, disgust, happiness, surprise, contempt, sadness, and fear.

Since GAN machines can learn to recreate even microexpressions, future deepfake systems could analyze hours of video and imagery to create near-perfect fakes. These expressions matter because they are glimpses into what someone is feeling and are hard for humans to fake... until now. A deepfake video or image could include even the microexpressions humans cannot fake, making it all the more believable.

Where to Find Deepfakes


Since December 2017, the word deepfake has been used more and more broadly. This isn’t surprising: Collins Dictionary named “fake news” its word of the year in 2017. Will 2020 be the year when “deepfake” takes center stage?

Probably not.

Although awareness is increasing and experts are concerned about the potential for great harm, widespread use will more than likely take longer. The skill and technology are not yet accessible enough for the masses to use convincingly; for now, those with costly and complex systems will harness the technology first.

Currently, you may see a celebrity “face swapped” with another celebrity “mug” and laugh. Maybe your friend on social media posts a meme which showcases deepfake’s underlying technology.

However, while one might snicker at this video of Jennifer Lawrence combined with Steve Buscemi, or of Nicolas Cage added to movie roles he didn’t star in, imagine the consequences if someone created a believable deepfake video of a sitting United States President.

BuzzFeed came up with an example of this, where comedian and actor, Jordan Peele, spoke as if he was former President Barack Obama and said blunt or potentially inflammatory things about the future of the U.S.

To see current deepfake videos, you can search “#deepfake” for the best of what YouTube, Twitter, or search engines have to offer. In the case of the infamous 2016 “grab ‘em by the pussy” Access Hollywood video, Donald Trump has since suggested the video may be fake (although the video’s other participant, Billy Bush, has confirmed its authenticity).

If that same video had been released in a year when deepfake videos were commonly contested, would it change how it was received by the public? These are some of the many ethical dilemmas and debates that this revolutionary artificial intelligence technology may bring. One can imagine a world where courtrooms require forensics done on the origin of the salacious or controversial video.

Will Scammers Use Deepfake Technology?


Yes, absolutely. They already do, and will even more so in the future! Although the quality may not be great, it is often just enough to get what they want. And what do scammers typically want?

MONEY.

Scenario 1: Deepfake Cheating

Imagine that you have an ironclad prenuptial agreement with your beloved spouse. In it, if either of you is caught cheating, you lose everything. Imagine what would happen if you go to court and your former partner has submitted explicit photographs or videos of you, which prove you cheated. You know this isn’t true, but the judge does not.

What happened?

You were set up by fake images or videos created with deepfake technology! Now, a scammer could send you these same false images or videos and threaten to expose you online to your family and friends.

They might demand money to delete the pictures or remove them from video-sharing platforms, or threaten to hack your social media accounts and share the video there. They promise to do all of this unless you pay them. Worried about your reputation and your marriage, you pay, afraid that you’ll lose everything if you don’t!

Scenario 2: Deepfake Crime

A scammer has a video of you committing a crime.

The problem?

You never committed a crime or did what the video depicts. However, someone has taken your images and created a video that seems to show the above, with a near or exact likeness to you. Perhaps the video was made by someone you’ve never met or a (local or overseas) scammer who managed to record audio of you saying things they’ve now attached to a fake video.

Do you trust the cops to see it your way or will you pay the ultimate price of jail time? The scammer knows you’re worried and says that they’ll make the video “go away” if you pay them. Although you hate to give in to blackmail, because of how believable the video looks, you don’t want to risk going to jail! You’re also worried that even if you’re innocent, the controversy will destroy your business and reputation.

Scenario 3: Deepfake Pornography

There are several directions that this can go. In the more traditional pornography scam, you’ve connected with someone online and exchanged private videos of a sexual nature. Maybe it was purely a temporary adult connection, or you’re ‘in love’ and don’t care who knows it!

Unfortunately, the person you’re connecting with is a scammer and after you send them private X-rated video of yourself, they either:

  1. Blackmail you with the content, or
  2. Sell the video to pornographers who then use it to create deepfake pornography starring others.

This means that the video of you might be sold on the dark web or altered to put someone else’s face on your body (or vice versa). Another possibility occurs after a breakup and is called “revenge porn.” This happens when someone you’ve shared the explicit video with is angry that you ended the relationship and shares that video.

When it comes to deepfake revenge porn, someone takes your photo and attaches your face and likeness to pornographic content. Since your face is (believably) attached to someone else’s naked body, this can be particularly soul-crushing for the victim. You might worry this adult video could get you fired from your job or ruin your reputation. Worse, you feel humiliated, betrayed, embarrassed, and fearful.

Scenario 4: Deepfake Catfish Scams

A catfisher is someone who pretends to be another person online. When this is done to collect money from the victim, it is commonly called a romance scam, which the FBI describes as a scheme that breaks hearts and bank accounts all at once.

While some scammers avoid speaking with their victims on the phone or video chat so they won’t be caught, imagine a world where you talk to your scammer and they send you videos in which their face matches their body, along with everything they have told you about themselves.

While being catfished and lied to can destroy anyone’s faith and trust in humanity, incorporating deepfake technology can cause even worse damage, including PTSD (post-traumatic stress disorder). With deepfake technology, scammers can match their face to someone else’s body, or use computer-generated images that are only a composite and not a real person at all. This is already happening in a simplified way!

Social Catfish spoke with a Nigerian scammer who showed how he uses third-party apps to add his voice to another video and make it look as if he is the one in the video. You can witness an example for yourself:

The Negative Effects of Deepfake


Unfortunately, the abuses and ramifications of convincing deepfake videos and imaging go beyond scammers alone. As this technology advances, the possible detrimental uses will increase as well. While most of us will be able to spot poorly produced deepfake imaging in the short term, experts agree that long-term advances will be almost impossible to catch.

Hany Farid, professor of computer science at Dartmouth College, works with the United States government to help develop ways to catch deepfake technology. Called the father of digital image forensics, his approach involves looking for unusual blinking patterns in fake videos.

As he explains to the Wall Street Journal, the moment we develop one way to catch fakes, new technology evolves.

“You can put into a person’s mouth anything you want,” Farid has said, explaining that those in academia are advancing the technology without enough oversight of what could go wrong. At the very least, deepfake content is going to become so widespread that we interact with it daily.

And what could go wrong?

Revenge: You could be framed for something you didn’t do. Your likeness could be placed in videos to embarrass you, cost you your job, or try to land you a jail sentence. Videos might show you doing or saying something that will provoke the public’s anger.

Pornography: Also called “morph porn” and “parasite porn,” this is essentially a deepfake porn video. One target of morph porn was Noelle Martin, who has spoken out extensively about the negative ramifications this technology has had on her life. Her experience (with edited photographs) is one example of what victims of more realistic deepfake porn videos may face.

Child Pornography: When children are involved, the effects are even worse.

Terrorist Propaganda: Potentially, terrorist recruitment videos (such as those used for ISIS) could show Americans saying harmful or threatening things about terrorist leaders or other governments.

Warfare: If someone edited a deepfake video to show a sitting president or leader declaring war on another nation (especially if nuclear), the threatened country might respond with severe military force or nuclear power, without recognizing the threat was fake.

Government Overthrow: Especially in regions with heavy terrorist activity or in developing countries, deepfake content could be created to influence the masses into starting a coup.

Court Proceedings: One party might employ deepfake videos to allege abuse, get custody of a child, or win their case.

Fake Crimes: People could set others up for crimes they didn’t commit, or make them take the fall, with edited video.

Revenge Porn: Similar to deepfake pornography, this is created to get revenge against a former lover. Just as harassment laws were beginning to make progress, deepfake technology threatens to undo those gains. Revenge porn laws have slowly increased in severity (for those who create it) state by state. In many places it is considered a misdemeanor, while other states count subsequent revenge porn offenses as felonies.

Search the laws in your state, here:

https://www.traverselegal.com/blog/revenge-porn-laws-by-state/

A significant problem emerges when women’s faces can be added to pornographic videos with such realistic results: any woman could be shown in such videos.

Medical Misinformation: Videos of health experts discussing the coronavirus could be doctored or altered for political and economic reasons. This would allow misinformation about the coronavirus, or other diseases, to spread when people try to look up accurate information about illnesses. This is dangerous, since people could believe they have an illness they don’t, or have one without knowing it, preventing them from getting proper treatment.

Political Fake News: Fake news and videos could spread that impact the 2020 presidential election. Misinformation about certain candidates could make people not want to vote for that specific person or proposition. This could lead to a candidate getting elected, or a proposition getting passed, that people may not have wanted had they received correct information.

Does Deepfake Affect One’s Credibility?


Defense attorneys are going to love the word “deepfake.” Shown damaging video of one of their clients? “This is a deepfake video!”, they might declare.

On the flip side, imagine running for state office, the Presidency, or being a CEO of a successful company. What if one of your competitors wanted to destroy you? They could have experts create a damaging video where you insult a diverse group or say something atrocious to lose your customer base.

Since deepfake technology is coming and there is no way to avoid it altogether, companies have had to take action where the government hasn’t yet. They invest in research and algorithms (to stay ahead of the technology) and use common-sense checks, such as listening for background sounds that don’t match the video.

However, according to CNN Business, even top companies like Facebook are somewhat vague and elusive about precisely what they will do to protect users from fakes. The same is true for Twitter, YouTube, and Reddit.

While they are taking action, none of the companies want to say precisely how. This may be to avoid arming the enemy (those who create deepfake videos), or because, as the technology changes and emerges, they aren’t quite sure themselves.

To mitigate the potential downfalls and loss, start being savvy and smart NOW! How?

Make Your Accounts Private

Make sure all of your accounts are private so you can know who’s viewing your images and videos on social media.

Conduct Regular Searches

Perhaps you’re already concerned about your online reputation. However, searching your name, username, or details only “sometimes” is no longer enough. You will need to run reverse searches on your photographs and videos regularly. The best way to do so is with an algorithm-based search like Social Catfish.
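Reverse image searches generally work by reducing each image to a compact “fingerprint” and comparing fingerprints instead of raw pixels, so a re-saved or slightly edited copy of your photo still matches. As a rough illustration only (not Social Catfish’s actual proprietary algorithm), here is a toy “average hash” in Python on made-up 3x3 grayscale images:

```python
def average_hash(pixels):
    """Fingerprint a grayscale image (list of 0-255 brightness values):
    each bit records whether a pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits between two fingerprints: 0 means identical."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 3x3 "images" flattened to 9 pixels (values invented for illustration)
original     = [200, 210, 190, 40, 50, 45, 180, 175, 185]
recompressed = [198, 212, 188, 42, 48, 47, 182, 174, 186]  # same photo, re-saved
different    = [10, 20, 200, 220, 30, 40, 210, 15, 25]     # unrelated image

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(recompressed)))  # → 0 (near-duplicate)
print(hamming_distance(h_orig, average_hash(different)))     # → 5 (no match)
```

The point of the sketch: small pixel changes (compression, resizing, minor edits) leave the fingerprint intact, which is why regularly fingerprinting and searching your own photos can surface copies you never uploaded.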

Make It a Family Thing

Given the risk that your children’s innocent photographs could be used in shocking videos, make sure that you perform safety searches for your minor children and other family members. Even if they don’t use the internet, if you share photographs of them online, they are still at risk.

Take Action!

If you see anything online which attempts to frame or blackmail you, or uses your likeness in vengeful or humiliating ways, immediately contact the authorities and the Internet Crime Division of the FBI. Victims of crimes that threaten their reputation sometimes fear that attacking the problem head-on might make it worse, which allows scammers and bullies to reign. Help stop the problem by being assertive and protecting yourself.

Limit What You Share

Part of what makes social media so enjoyable is sharing your life, family’s activities, and more. While it’s not necessary to avoid it, just know that the more you share openly, the more you’re at risk. Some of us have careers (models, actors, influencers, etc.) where our image is always going to be public.

It’s unavoidable, but such people should be more diligent than the average person about doing safety searches online. As we learned above, the more images a GAN is given, the better it is at creating a fake!

Be Careful with Deepfake Videos


Have you ever encountered a deepfake? What is your biggest fear about how this technology could impact your life? Do you have any worries about your family, spouse, or children being exploited? Share this article on social media! Let us know – in the comments – if artificial intelligence will change the way you share video or images online.

The best tool to combat these deepfake developments is reverse image searches which regularly scan for content. Use Social Catfish today and take charge of your image through our proprietary facial recognition and reverse image search technology!
