Is that Facebook account real? Meta reports rise in AI profile pictures


By Nicole Sganga

CBS News

Meta reports a rapid increase in AI-generated profile pictures used by threat actors


Facebook parent Meta is witnessing a “rapid increase” in fake profile pictures generated by artificial intelligence.

Publicly available technology known as a "generative adversarial network" (GAN) allows anyone, including threat actors, to create eerily lifelike deepfakes, producing dozens of synthetic faces in a matter of seconds.

These are "basically photos that do not exist," according to Ben Nimmo, Global Threat Intelligence lead at Meta. "It's not a person in this picture. It's a computer-generated image."

"More than two-thirds of all the [coordinated inauthentic behavior] networks we disrupted this past year featured accounts with GAN-generated profile photos, which suggests that threat actors may view it as a way to appear more authentic," Meta noted in public reporting Thursday.

The social media giant says it uses a combination of behavioral signals to identify GAN-generated profile pictures, an improvement over reverse-image searches, which can only flag stock photos.

Meta's report reveals several of the fakes. When the images are superimposed over one another, all of the eyes align perfectly, betraying their artificial origin.

Artificially generated fake Facebook profile for Ali Ahmed Ghanem


AI-generated fake photo from Alice Schultz's Facebook profile


Six AI-generated photos of purportedly distinct individuals, superimposed on one another, show that the eyes in all six align perfectly, revealing them as fakes.
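The superimposition trick works because GAN face generators are typically trained on aligned face datasets, so key landmarks, the eyes especially, land at nearly fixed pixel coordinates in every output. Below is a toy sketch of the idea, not Meta's actual method: synthetic stand-in "images" with a fixed eye band are averaged, and the shared band survives while the uncorrelated backgrounds wash out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for aligned GAN output: a 64x64 "face" with
# random content everywhere except a fixed "eye" band that sits at the
# same pixel coordinates in every image.
EYE_ROWS, EYE_COLS = slice(24, 28), slice(16, 48)

def fake_face():
    img = rng.uniform(0.0, 1.0, size=(64, 64))  # uncorrelated "background"
    img[EYE_ROWS, EYE_COLS] = 1.0               # eyes always in the same place
    return img

# Superimpose six "faces" by averaging them pixel-wise.
stack = np.mean([fake_face() for _ in range(6)], axis=0)

# The aligned eye band reinforces (stays at full intensity), while the
# random backgrounds blur toward their mean, mimicking the sharp,
# perfectly aligned eyes seen when real GAN outputs are overlaid.
print(stack[25, 30])  # inside the eye band: exactly 1.0
print(stack[5, 5])    # background: blurred, strictly below 1.0
```

Real detection pipelines would overlay actual photos and check landmark positions, but the statistical effect is the same: alignment that is too perfect across supposedly unrelated people is a giveaway.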


Trained professionals can spot telltale errors in AI-generated images, such as melted backgrounds or mismatched earrings.

AI-generated image showing melting at the top of a baseball cap.


"There is a whole community of open-source researchers who just love to nerd out on finding those [imperfections]," Nimmo said. "So what threat actors might think is a good hiding place is actually a good way for the open-source community to spot it."

However, the growing sophistication of generative adversarial networks, whose algorithms may soon produce content indistinguishable from that produced by humans, has made detection a complex game of whack-a-mole for the social media giant's global threat intelligence team.

Meta said that since 2017, more than 100 countries have been targeted by what it calls "coordinated inauthentic behavior" (CIB). The term refers to "coordinated efforts to manipulate public discourse for a strategic goal, where fake accounts are central to the operation."

Meta began publishing threat reports five years ago. Since then, the tech company has disrupted more than 200 global networks, spanning 68 countries and 42 languages, that it says violated its policy. According to Thursday's report, "the United States was the most targeted country by global [coordinated inauthentic behavior] operations we've disrupted over the years, followed closely by Ukraine and the United Kingdom."

According to Thursday's report, Russia was the most "prolific" source of coordinated inauthentic behavior, with 34 networks originating from the country. Iran (29 networks) and Mexico (13 networks) also ranked high among geographic sources.

"Since 2017, we've disrupted networks run by people linked to the Russian military and military intelligence, marketing companies and entities associated with a sanctioned Russian financier," the report stated. "While most public reporting has focused on Russian operations targeting America, our investigations found that Russia had more operations targeting Ukraine and Africa."

"If you look at Russian operations, Ukraine has consistently been the single largest target they've picked on," Nimmo said, even before the Kremlin's invasion. But the United States has also violated Meta's policies against coordinated online influence operations.

Last month, in a rare attribution, Meta reported that individuals "associated with the U.S. military" promoted a network of approximately three dozen Facebook accounts and two dozen Instagram accounts focused on U.S. interests abroad, zeroing in on audiences in Afghanistan and Central Asia.

Nimmo said Meta relied on a "range" of technical indicators to attribute last month's takedown to the U.S. military.

Nimmo said the network was active across a number of platforms and posted about general events in the regions it targeted, "including describing Russia and China in those regions." Meta went "as far as we can go" in tying the operation to the U.S. military, he said, declining to name a particular service branch or military command.

The report found that two-thirds of the coordinated inauthentic behavior networks Meta removed "most often targeted people in their own country." Topping that group were government agencies in Malaysia, Nicaragua and Thailand that were found to have targeted their own citizens online.

The tech giant said it is working with other social media companies to expose cross-platform information warfare.

"We've continued exposing operations running on many different internet services at once, with even the smallest networks following the same diverse approach," Thursday's report noted. These networks have been observed operating across Twitter, Telegram, TikTok, Blogspot, YouTube, Odnoklassniki, VKontakte, Change[.]org, Avaaz and other petition sites, as well as LiveJournal.

Critics say these types of collaborative takedowns are too little, too late. Sacha Haworth, executive director of the Tech Oversight Project, criticized the report, saying such disclosures are "[not] worth the paper they're printed on."

When it comes to deepfakes and propaganda from foreign state actors, it is already too late, Haworth told CBS News. "Meta has shown that they don't care about changing their algorithms that amplify dangerous content, and this is why we need legislators to step in and pass laws that give them oversight over these platforms."

A 128-page investigation by the Senate Homeland Security Committee, obtained by CBS News, found that Meta and other social media companies prioritized user engagement, growth and profits over content moderation.

Meta reported to congressional investigators that it "remove[s] millions of violating posts and accounts every day," and that its artificial intelligence content moderation blocked 3 billion fake accounts in the first half of 2021 alone.

The company added that it invested more than $13 billion in safety and security teams between 2016 and October 2021, with over 40,000 people dedicated to moderation, or "more than the size of the FBI." But the committee noted that this investment amounted to only 1% of the company's current market value.

Nimmo, who was himself directly targeted by disinformation after 13,000 Russian bots declared him dead in a 2017 hoax, says he no longer feels like he is "screaming into the wilderness."

"These networks are being caught earlier and earlier. We have more eyes in more places. Back in 2016, there wasn't really a defender community. The only players on the field were the offense. That is no longer the case."

Nicole Sganga

CBS News reporter covering homeland security and justice.

