“There are multiple entry points for deepfakes and unverified data to penetrate the strategic stability relationships between nuclear states,” said a panelist at a recent event organized by the United Nations Institute for Disarmament Research (UNIDIR) to discuss the impact of deepfakes on international security and stability, ways to address deepfake technologies, and other governance issues.

In times of crisis, fabricated media can escalate events and lead to violence. That is why it is important to discuss the problems deepfakes pose and the solutions various stakeholders propose to counter them.

The following panelists from the fields of international security and artificial intelligence spoke about the implications of deepfakes and proposed solutions:

  • Anita Hazenberg, Director of Innovation at Interpol
  • Alexi Drew, Senior Analyst at RAND Europe
  • Valeria Solis, Director of Drugs and Cybersecurity, Mexico’s Foreign Affairs Secretariat
  • Moliehi Makumane, Senior Policy Advisor and South Africa Delegate at UN OEWG and UN GGE
  • Saifuddin Ahmad, Assistant Professor at Nanyang Technological University in Singapore
  • Petr Topychkanov, Senior Researcher at SIPRI

Effects of deepfakes

  • Destabilizes relationships between nuclear and non-nuclear-armed states: “There are several entry points for deepfakes and unverified data to penetrate the strategic stability relationships between nuclear states. Potential deepfakes related to a growing nuclear threat against non-nuclear-weapon states can provoke significant changes and decisions, including decisions to rely on nuclear-armed allies for protection. It can also result in intelligence data being exchanged for assessment, as non-nuclear states lack the early warning and intelligence capabilities to assess potential nuclear threats against them,” Topychkanov said.
  • Erodes trust – blurring the line between what is real and what is fake: “The trust base is breaking down and being displaced by the increasing ability to ignore all evidence presented to us, from any source,” said Drew. In addition, Ahmad said that those who deal with fake news on social media become very skeptical of all kinds of information, including real information, and this is problematic. “When citizens start to doubt even real information, it threatens society because we fail to create a basic truth base. […] Social media or internet users are more likely to trust videos than text as a faithful representation of reality. When disinformation plays to our own prejudices, we often fall for it, and if the content matches our opinion, we do not question its authenticity. Therefore, as the technology behind deepfakes advances, the manipulated reality could become more persuasive, adding to the cost of this form of disinformation,” Ahmad said.
  • Deepfakes undermine the international framework for responsible state behavior: Moliehi Makumane raised an important question: can the actors responsible for exploiting this [deepfake] technology be trusted? “Mistrust between state and non-state actors is growing, and the framework highlights the discrepancies in Member States’ capacities to implement it, which creates weaknesses for the system as a whole.” Another important question Makumane raised is how Member States can implement the framework when synthetic media cannot be identified. “The solutions are not very clear, and capacities will differ significantly between the Member States that can implement the framework and those that cannot. It is difficult for countries to ensure that activities are inclusive, accessible, and do not negatively affect members of individual communities. Not every country is able to recognize the capabilities of synthetic media – owned by either state or non-state actors – and develop the ability to identify and map information and to evaluate media-led influence operations,” Makumane said.

Possible solutions

  • Regional organizations can play a bigger role: Makumane stressed that “technology is more dangerous when it is unregulated, so national governments tend to go a little extreme and regulate everything, which significantly shrinks the space for freedom of expression and other digital rights. […] This is where regional organizations consolidate different competencies and skills and play a bigger role in finding a balance between using the whole toolkit, the policy and the technology itself, to mitigate the risk.”
  • State and private sector must join forces: Ahmad suggested that governments should work with the private sector, especially social media companies, to regulate deepfakes. “Researchers have shown that it is possible to create fake satellite images of real places with fabricated details using AI technology; while this may not pose a threat to an average user, it does pose a threat to national and international security. Additionally, while social media platforms label manipulated videos and warn users about them, these techniques do not work for all citizens. Some ignore the labels, or are not attentive enough to notice them, or sometimes simply go with their own prejudices and discredit those labels when evaluating the content of the video. So regulating technology is challenging, and there is an ongoing debate about whether social media companies should be held accountable for what their users post on their platforms,” said Ahmad.
  • Educate and raise awareness: Hazenberg emphasized that police leaders need to prepare their organizations for the future because some of them have no idea what artificial intelligence really is. Drew added that instead of focusing on heads of state, the focus should be on the population, as the population needs to be educated. Solis said civil society organizations must take the initiative alongside governments. Building on Solis’ point, Ahmad said, “When it comes to digital literacy programs, they target a vulnerable group, the elderly population, but as part of my research, I have found that even those with higher digital literacy are prone to deepfakes.”
  • Clarify the mitigation strategy: We need to be very clear about which mitigation strategy we are using, such as authentication, verification, trust building, or technical and media literacy, Drew said. “We also need to be aware of the secondary effects of our actions. For example, a social media platform tried to counter manipulated content involving a certain US president by labeling it as misinformation, but that content was then further amplified on other platforms. The reason this policy did not work as intended was that only one platform issued these specific policies. If this had been done by all parties involved in this form of information dissemination to the public, the intended first-order consequence, the intended result, would have worked.”
  • Set standards for digital technologies, including synthetic media: “The ability to check authenticity and verifiability, and to see the progress of a piece of media from its inception to its final dissemination to an audience, would be a great way to accomplish this,” said Drew.
  • Multi-stakeholder system to defend against deepfake-related threats: Valeria Solis suggested that a robust network of multiple actors needs to be developed to create a dual system encompassing both prevention of and resilience to deepfake threats. “The various stakeholders can include industry, designers, and even suppliers, to embed trusted sources for creating multimedia content from the very first design of the devices. For example, camera manufacturers, media content distributors such as traditional television and radio stations, and smartphone manufacturers, as everyone has access to smartphones for media production.”
