Since the dawn of the computer age, humans have viewed the rise of artificial intelligence (AI) with some degree of apprehension. Popular depictions of AI often involve killer robots or all-knowing, all-seeing systems bent on destroying the human race. These sentiments have pervaded the news media as well, which tends to greet breakthroughs in AI with more alarm or hype than measured analysis. The real concern is that these overly dramatized, dystopian visions distract us from the more nuanced yet equally dangerous risks posed by the misuse of AI applications that are already available or in development today.
AI permeates our everyday lives, influencing which media we consume, what we buy, and where and how we work. AI technologies will continue to disrupt our world, from automating routine office tasks to helping address urgent challenges like climate change and hunger. But as incidents such as wrongful arrests in the U.S. and the mass surveillance of China’s Uighur population demonstrate, AI’s negative impacts are already with us. Focused on pushing the boundaries of what’s possible, companies, governments, AI practitioners, and data scientists sometimes fail to see how their breakthroughs could cause social problems until it’s too late.
Deepfakes are realistic but artificial images, audio, and videos, typically created using machine learning methods. The technology for producing such “synthetic” media is advancing at breakneck speed, with sophisticated tools now freely and readily accessible, even to non-experts. Malicious actors already deploy such content to ruin reputations and commit fraud, and it is not difficult to imagine other harmful use cases.
Deepfakes are just one example of AI technology that can have subtly insidious impacts on society. They showcase how important it is to think through potential consequences and harm-mitigation strategies from the outset of AI development.
Large language models are another example of AI technology developed with benign intentions that still merits careful consideration from a social impact perspective. These models learn to write humanlike text using deep learning methods, training on patterns in vast datasets that are often scraped from the internet. Leading AI research company OpenAI’s latest model, GPT-3, boasts 175 billion parameters, more than 100 times as many as its predecessor, GPT-2. This massive scale allows GPT-3 to generate almost any kind of text with minimal human input, including short stories, email replies, and technical documents. In fact, the statistical techniques that power these models are improving so quickly that many of their use cases remain undiscovered. For example, early users found out only by accident that GPT-3 could also write code.
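To make the point concrete, the sketch below shows just how little human input such a model needs to produce fluent text. It is a minimal illustration rather than anything specific to GPT-3, which is available only through OpenAI’s API; the example instead assumes the freely downloadable GPT-2 model accessed via the Hugging Face transformers library.

```python
# Minimal sketch: text generation with a pretrained language model.
# Assumes the open GPT-2 model via Hugging Face's `transformers`
# library, since GPT-3 itself is only reachable through OpenAI's API.
from transformers import pipeline

# Downloads the pretrained GPT-2 weights (~500 MB) on first run.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting a likely
# next token; no task-specific training or input is required.
prompt = "The real risks of artificial intelligence are"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

Swapping in a larger or more capable model is essentially a one-line change, which is part of why these capabilities, for good or ill, spread so quickly once the underlying techniques are published.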
AI may never reach the nightmare sci-fi scenarios of Skynet or the Terminator, but that doesn’t mean we can shy away from the real social risks today’s AI poses. By working with stakeholder groups, researchers and industry leaders can establish procedures for identifying and mitigating potential risks without unduly hampering innovation. After all, AI itself is neither inherently good nor bad. It can unlock many real benefits for society; we just need to be thoughtful and responsible in how we develop and deploy it.
For example, we should strive for greater diversity within the data science and AI professions, including consulting domain experts from relevant fields like social science and economics when developing certain technologies. The potential risks of AI extend beyond the purely technical; so too must the efforts to mitigate those risks. We must also collaborate to establish norms and shared practices around AI systems like GPT-3 and deepfake models, such as standardized impact assessments or external review periods. The industry can likewise ramp up efforts around countermeasures, such as the detection tools developed through Facebook’s Deepfake Detection Challenge or Microsoft’s Video Authenticator. Finally, it will be necessary to continually engage the general public through educational campaigns about AI, so that people are aware of its misuses and can identify them more easily. If as many people knew about GPT-3’s capabilities as know about The Terminator, we’d be better equipped to combat disinformation and other malicious uses.