
Barely a decade ago, few imagined a world where a person’s face and voice could be copied so convincingly that even trained eyes would struggle to tell the difference. Yet today, deepfake technology has moved from novelty to global threat, reshaping how truth is created, shared, and manipulated. The rapid spread of synthetic media now forces governments, companies, and regulators to confront a challenge that grows faster than the rules designed to contain it.
Speaking on this shifting landscape, tech law expert Kabir Adeyemo says the world must accept that deepfakes have outpaced traditional legal systems. “The danger is not only that deepfakes distort reality,” he explains, “but that they weaken trust, the very foundation of our democratic, social, and digital systems.” According to him, the explosion of misinformation, AI-generated propaganda, and manipulated political content has made regulatory intervention urgent, and any solution must blend law, technology, and international cooperation.
Kabir begins by identifying the legal vacuum that synthetic media has created. While several national laws exist, he notes that there is still no single, binding international treaty that directly regulates deepfakes. However, he points to Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which protects freedom of expression but permits restrictions necessary to respect the rights or reputations of others and to protect national security, public order, public health, or morals. “This is where many countries can anchor their deepfake regulations,” he says, because misinformation that threatens elections, public safety, or individual dignity falls within the scope of these permissible limitations.
He also references the Budapest Convention on Cybercrime, calling it one of the few widely adopted frameworks that can indirectly combat malicious deepfake use through provisions covering illegal access, data interference, and computer-related fraud. “Deepfakes used to commit identity theft, election manipulation, or financial fraud unquestionably fall into areas already covered under international cybercrime norms,” he adds.
Moving to the global players shaping the debate, Kabir mentions that the European Union has taken the lead with transparency obligations under the new AI Act, requiring clear labeling of AI-generated content. The United States, though slower, is increasingly recognizing the threat, especially after deepfake attacks on politicians during election cycles. China, for its part, has already enforced rules mandating the disclosure of synthetic media. “But these approaches cannot work in isolation,” Kabir warns, “because deepfakes do not respect borders.” He then outlines three core problems that must be solved before any meaningful regulation can succeed.
First is attribution: the growing difficulty of identifying the original creator of a deepfake. Without reliable tracing tools, enforcement becomes almost impossible. Kabir believes international cooperation on watermarking standards, similar to the Coalition for Content Provenance and Authenticity (C2PA) model adopted by Adobe, Microsoft, and others, is essential; the sketch below illustrates the basic idea of binding signed provenance claims to a piece of media.
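For readers unfamiliar with provenance signing, here is a minimal sketch of the underlying mechanism. It is not the C2PA specification itself: real C2PA manifests are binary structures signed with certificate-backed asymmetric keys, whereas this illustration uses a shared-secret HMAC from Python’s standard library, and the manifest fields (creator, ai_generated) are hypothetical.

```python
# Illustrative sketch of content-provenance signing in the spirit of
# C2PA-style manifests. NOT the C2PA spec: real manifests are CBOR/JUMBF
# structures signed with X.509 certificates. Here a shared-secret HMAC
# stands in for the asymmetric signature, and the claim fields are
# hypothetical, chosen only to show the structure of the idea.

import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # placeholder; real systems use certificate-backed keys


def make_manifest(media_bytes: bytes, creator: str, ai_generated: bool) -> dict:
    """Bind provenance claims to the media via its SHA-256 hash, then sign them."""
    claims = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Re-derive the hash and signature; any edit to media or claims fails."""
    claims = manifest["claims"]
    if claims["content_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # media was altered after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


if __name__ == "__main__":
    media = b"...raw image or video bytes..."
    manifest = make_manifest(media, creator="Newsroom A", ai_generated=False)
    print(verify_manifest(media, manifest))         # True: intact and signed
    print(verify_manifest(media + b"x", manifest))  # False: tampering detected
```

The point the example makes is structural: because the claims are bound to a hash of the exact media bytes, any post-signing edit, including a deepfake manipulation, invalidates the signature and strips the content of its provenance.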
Second is jurisdiction, since harmful synthetic media often originates in one country but spreads through servers in another. Here, Kabir proposes expanding mutual legal assistance treaties and harmonizing national laws so that deepfake crimes can be prosecuted across borders with fewer procedural delays.
Third is platform accountability, especially with big players such as Meta, X, TikTok, OpenAI, and Google at the center of global information flows. He stresses that platforms should be legally required to provide faster takedowns, clearer labeling, and stronger identity verification tools. “If the creators and distributors of synthetic content cannot be controlled, then the platforms that amplify them must step up,” he states. After laying out the problems, Kabir provides a roadmap for solutions.
Kabir explains that the first major pillar is transparency and labeling, which he calls the “frontline defense” against AI-driven misinformation. He argues that mandatory disclosure of AI-generated content is no longer optional but a necessary safeguard for global digital spaces. In his view, governments can legally anchor this requirement under the permissible limitations in Article 19 of the ICCPR, which already allows restrictions to protect public order, reputation, and the rights of others. By making transparency a legal obligation, he believes societies can reduce the spread of harmful synthetic media and empower users to recognize manipulated content before it distorts public perception.
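To show what such a labeling obligation might look like at the software level, the short sketch below builds on the provenance example above: a hypothetical rendering-time check that maps a (possibly missing) manifest onto a user-facing transparency label. The policy tiers and function names are assumptions made for illustration, not drawn from any statute or platform API.

```python
from typing import Optional


def disclosure_status(media_bytes: bytes, manifest: Optional[dict]) -> str:
    """Map a provenance manifest onto a user-facing transparency label.

    Reuses verify_manifest() from the earlier provenance sketch; the
    three label tiers below are hypothetical policy choices.
    """
    if manifest is None or not verify_manifest(media_bytes, manifest):
        return "unverified"            # no valid provenance: warn the viewer
    if manifest["claims"]["ai_generated"]:
        return "labeled-ai-generated"  # mandatory disclosure satisfied
    return "verified-authentic"
```

A check like this is where law and engineering meet: the statute defines what must be disclosed, and the platform enforces it as a gate that every piece of content passes through before display.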
The second pillar he describes is an accountability framework that places real responsibility on major digital platforms. Kabir insists that companies such as Meta, TikTok, X, Google, and other content distributors must be compelled to actively monitor, detect, and remove harmful deepfakes, especially those capable of influencing elections, inciting violence, or endangering personal safety. He explains that without clear liability rules, platforms will continue to treat deepfake threats as “optional problems” rather than legal duties. Strong accountability measures, he says, ensure that platforms no longer remain passive conduits but become active guardians of information integrity.
Finally, Kabir points out the need for cross-border enforcement, which he describes as the most challenging yet most essential part of deepfake regulation. Since malicious synthetic media often originates outside the jurisdiction where the harm occurs, he stresses the importance of strengthening global cybercrime cooperation under frameworks such as the Budapest Convention. Kabir believes this treaty already provides a workable structure for tracing digital offenders, sharing evidence, and prosecuting deepfake-assisted crimes across borders. He argues that to stop offenders from exploiting jurisdictional gaps to avoid accountability, countries must expand international collaboration and update mutual legal assistance procedures.
According to Kabir, there is also a need for global public education. “Regulation alone cannot save us,” he cautions. “People must understand that not everything they see or hear can be trusted.” He encourages governments, tech firms, and civil society to invest in media literacy campaigns that help users detect AI-manipulated content.
As he concludes, Kabir warns that the coming years will define whether societies can stay ahead of AI-driven misinformation or be consumed by it. Deepfakes will only become more realistic, more accessible, and more dangerous. But with coordinated international law, strategic technological safeguards, and strong political will, he believes the world can still protect truth in the digital age. “The future of trust,” Kabir says, “depends on the laws we make today.”