
Deepfake Technology.


What is a deepfake?

A deepfake is media produced by artificial-intelligence software capable of superimposing a digital composite face or voice onto existing video or audio of a person. Regrettably, the technology has been weaponized by malicious actors in recent years.


Current condition.

Problem 1: Ease of creating disinformation.

In 2019, Mark Zuckerberg, the founder of Facebook, was "deepfaked" in a video in which he appeared to boast about the power of big data. Zuckerberg never uttered those remarks; the underlying footage came from his 2017 address on Russian election interference on Facebook. Ironically, Facebook did not remove the video and merely declared that it would restrict exposure to such false content. As deepfake technology advances toward flawlessness, the world risks becoming rife with disinformation. At that point, it will be child's play to create a video of any of us saying something we never said, making it even easier to spread disinformation, produce propaganda, or malign someone with fabricated remarks, and we will be unable to distinguish the real from the fake.


Problem 2: Compromised cybersecurity.

Furthermore, internet users' security is substantially jeopardized by the ease with which deepfakes can impersonate a person without consent. There is a slew of ways deepfakes could be used to perpetrate cybercrime, including phishing, sextortion, and identity theft. For example, by producing real-time audio and video clones with deepfake technology, cybercriminals can effortlessly steal our identities and solicit funds from our contact lists; in this way, deepfakes erode everyone's privacy and increase the risk of being cyberstalked.
Consequently, experts have ranked deepfakes among the most serious AI-related criminal threats.

What is the ideal scenario for the future of deepfakes?

Although the pervasive misuse of deepfakes may suggest that their future is dismal and that they are doomed to negative uses, I believe that deepfakes will eventually have an enormous positive effect on a multitude of industries, including film, education, and healthcare, thanks to their exceptional potential.


Deepfake technology made it possible to "resurrect" Paul Walker, who passed away during the production of "Fast & Furious 7", and to complete the film.

Additionally, the film industry can save considerable time on dubbing, as deepfake tools need only the original footage and dialogue recorded in the target language to produce videos spoken in any desired language. Similarly, deepfakes benefit the education industry by enabling more interactive lessons. Instead of requiring students to read voluminous slides and articles, deepfake technology could generate interactive lecture videos or animated versions of real people.

Lastly, using deepfakes to help physicians become more empathetic, by analyzing patients' facial expressions with emotion-recognition technology, could have a significant impact on the healthcare industry and on the lives of patients suffering from aphantasia, the inability to visualize objects or people. Future growth of deepfakes is thus inevitable, but prompt action must be taken before their misuse becomes catastrophic.


How to bridge the present with the ideal state.

Solution 1: Implement application standards and filter deepfake content using a detector.

Despite the havoc deepfakes might wreak on the digital world, their use has not been outlawed. To avert fraudulent use, nations worldwide should screen deepfake content with detector tools and enact strict, clearly defined rules of use. In reality, not every dissemination of synthetic content or every use of deepfake software is harmful, and there are numerous benign applications. Hence, rather than prohibiting deepfakes outright, which would stifle creative use, gatekeepers could adopt AI detector tools built on the same class of models used to create deepfakes, allowing them to identify deepfake media effectively and take appropriate action. The accompanying regulations must be exhaustive, covering both the limits of permissible use and the penalties for those who exploit deepfakes for nefarious ends.
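The gatekeeping idea above splits into two parts: a detector that scores how likely a piece of media is synthetic, and a platform policy that maps the score to an action. The sketch below illustrates only the policy half; the detector itself, the threshold values, and the action names are all hypothetical placeholders, not any real platform's rules.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationDecision:
    score: float   # detector's estimated probability that the media is a deepfake
    action: str    # "allow", "label", or "remove" (hypothetical policy actions)

def moderate(media: bytes,
             detector: Callable[[bytes], float],
             label_threshold: float = 0.5,
             remove_threshold: float = 0.9) -> ModerationDecision:
    """Apply a (hypothetical) platform policy to a detector's deepfake score.

    `detector` is any callable returning a probability in [0, 1]; the
    thresholds are illustrative, not recommendations.
    """
    score = detector(media)
    if score >= remove_threshold:
        action = "remove"          # near-certain deepfake: take it down
    elif score >= label_threshold:
        action = "label"           # suspicious: keep it up but warn viewers
    else:
        action = "allow"           # likely genuine: no intervention
    return ModerationDecision(score, action)
```

Separating the score from the action keeps the policy auditable: the same detector can serve stricter or looser rules simply by moving the thresholds, which is where regulators would set the limits the paragraph describes.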

Solution 2: Blockchain technology.

Next, blockchain technology may help tackle the misuse of deepfakes. Because a blockchain is a public, tamper-resistant ledger, it is well suited to establishing the provenance of media.

Simply put, when a piece of digital media is written into the ledger, it is assigned a cryptographic hash that acts as the file's unique fingerprint. That fingerprint is then published to a blockchain, where it is globally accessible, so every user can scrutinize every entry.
Every piece of media thus carries its own cryptographic hash on the blockchain, akin to a timestamp. When the content is modified, a new hash referencing the prior hash is generated and appended to the chain, making it straightforward to trace any piece of media back to its original source.
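The fingerprint-and-chain mechanism above can be sketched in a few lines of Python. This is a toy in-memory model, not a real blockchain: there is no network, consensus, or mining, only the two properties the paragraph relies on, that each file gets a unique hash and that each new entry commits to the previous one, so any later tampering is detectable.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest acting as the media file's unique fingerprint."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceChain:
    """Toy append-only ledger: each entry hashes the media fingerprint
    together with the previous entry's hash, like a timestamped chain."""

    GENESIS = "0" * 64  # placeholder hash before any entry exists

    def __init__(self):
        self.entries = []  # list of (media_fingerprint, chained_hash)

    def add(self, media_bytes: bytes) -> str:
        """Record a media file (original or a modified version) on the chain."""
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        fp = fingerprint(media_bytes)
        chained = hashlib.sha256((prev + fp).encode()).hexdigest()
        self.entries.append((fp, chained))
        return chained

    def verify(self) -> bool:
        """Recompute every link; any tampered entry breaks the chain."""
        prev = self.GENESIS
        for fp, chained in self.entries:
            if hashlib.sha256((prev + fp).encode()).hexdigest() != chained:
                return False
            prev = chained
        return True
```

For example, registering an original clip and then an edited version yields two linked entries; rewriting either recorded fingerprint afterwards makes `verify()` fail, which is exactly the tamper-evidence that makes tracing a file back to its source "a breeze."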


Solution 3: Be skeptical and explore numerous sources.

Although people can still recognize deepfake-produced video and audio for now, the time will come when such content is indistinguishable from the real thing, and the sheer volume of conflicting information will confound us. As internet consumers, we must therefore remain vigilant and skeptical of any media content we ingest on social media, since anything could be false, even content from our digitally close companions. Because some content is created deliberately to deceive or defame, it is of paramount importance that internet users verify information against multiple sources. While many view the resulting decline of trust negatively, it could be a blessing in disguise: the overabundance of disinformation keeps us skeptical of media content, indirectly fostering independent thinking and media literacy.

Reflection.

I raise this topic because mass media exert such a powerful influence on people that deepfakes are something everyone should be aware of, understanding both the threats they pose and how to turn them to our advantage.

