Meta says it is keeping generative AI deception in check
Meta says its defences against coordinated disinformation campaigns that use generative AI are holding up, even as it continues to study and disrupt “coordinated inauthentic behaviour” on its platforms and acknowledges that stopping people from abusing the technology remains difficult.
The social media giant says its strategy for countering coordinated disinformation campaigns is keeping pace with ever-improving generative AI, despite widespread concern. Worries are growing that generative AI could be used to deceive or confuse voters in upcoming elections around the world, particularly in the US, which is why Meta has published new research on “coordinated inauthentic behaviour” on its platforms.
“What we’ve seen so far is that our industry’s existing defenses, such as our emphasis on behavior rather than content in countering adversarial threats, are already in place and appear to be effective,” said David Agranovich, Meta’s threat disruption policy director, at a press briefing on Wednesday.
So far, Agranovich said, these networks have not used generative AI in very sophisticated ways, but Meta expects them to keep changing how they operate as the technology evolves.
Facebook has been criticized for years as a conduit for false information about elections. Russian operatives used Facebook and other US-based social media platforms to stoke political discord during the 2016 election, which Donald Trump won.
The European Union is investigating Meta’s Facebook and Instagram over concerns that they are not doing enough to combat disinformation ahead of the EU elections in June. Experts also worry that bad actors could flood Meta’s apps with false information, since easy-to-use generative AI tools such as ChatGPT and the DALL-E image generator can churn out content instantly and on demand.
In the report, Meta said it had seen “threat actors” use AI to generate fake photos, videos, and text, though not realistic images of politicians. Fake accounts across Meta’s family of apps have used generative AI to create profile pictures, and the report said a deceptive network based in China used the technology to create posters for a fictitious pro-Sikh activist movement called Operation K.
Meta also reported that an Israel-based network posted what appeared to be computer-generated comments about Middle East politics on the Facebook pages of news organizations and public figures.
Meta Discovers Political Manipulation and Foreign Interference on Platform
Meta said those comments, some of which appeared on the pages of US lawmakers, resembled spam, and that authentic users often pushed back by calling them propaganda. Meta attributed the campaign to a political marketing firm based in Tel Aviv. “This is an exciting space to watch,” said Mike Dvilyanski, Meta’s head of threat investigations. “At this point, we haven’t seen adversaries use generative AI tools in ways that cause problems.”
The report also found that a Russia-linked network known as “Doppelganger” is still trying to use Meta’s apps to undermine support for Ukraine, but that the platform keeps disrupting it. Over the last 20 months Doppelganger has “taken it to a new level,” Meta says, yet its operations remain crude and have failed to build authentic audiences on social media.
Meta also removed small clusters of fake Facebook and Instagram accounts originating in China that targeted the Sikh community in Australia, Canada, India, Pakistan, and elsewhere, the report said. The fake accounts tried to stir up calls for pro-Sikh protests.