We don’t have to wait for AI to gain sentience and go rogue because the probability of intelligent automation being used by bad people for bad purposes is one…hundred…percent. Expect chaos when con artists conspire with convoluted neural networks.
Assessing threats and preemptively developing defenses is critical if the good guys are to prepare for, prevent, and mitigate potentially existential risks. That’s the core message of a comprehensive and rather alarming report prepared by some of the brightest (ahem, human) minds working on artificial intelligence.
Aptly titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” the 100-odd-page policy paper was authored by luminaries from the Future of Humanity Institute, OpenAI, the Centre for the Study of Existential Risk, and leading universities in the U.S. and U.K. In it, the authors describe the fast-evolving threat landscape, the key areas of security risk, and high-level recommendations for actions we can take today.
Bigger, Better, And Badder: The Evolution Of Threat
The range of plausible outcomes is extremely diverse, even without considering the outcomes that are less likely, but still possible … We anticipate increased malicious use of AI just as criminals, terrorists and authoritarian regimes use electricity, software, and computer networks.
Don’t fall in love with Alexa just yet. The smart technologies you rely on daily can be commandeered and turned against you. As machine intelligence becomes more powerful, pervasive, and connected, the AI embedded in our personal and industrial computing devices could be attacked, compromising the security infrastructures that protect resources, citizens, and communities.
The Malicious AI report painted a challenging and ever-changing landscape of potential threats:
- Existing threats will get worse. AI will be used to scale up known attacks by automating the “human labor, intelligence, and expertise” they currently require. As troll botnets and spear-phishing campaigns already demonstrate, AI can a) multiply the reach of a malicious campaign, b) make attacks easier and faster to execute, and c) broaden the type and number of possible targets.
- New threats will emerge. Generative neural networks that produce hyper-realistic audio and imagery can be hijacked by criminals to spoof real-world audiovisual signatures and unlock security systems, fabricate socially destabilizing fake news, impersonate politicians and celebrities, or otherwise manipulate people’s behavior. Malicious algorithms can even be introduced into other AI or robotic systems with the aim of disrupting, destroying, or hijacking one or more of their features or capabilities.
- The nature and characteristics of threats will change. AI will fundamentally alter the arena of cyber attacks by increasing their effectiveness, precision, and untraceability. Many of these incursions will specifically target even powerful and (supposedly) secure AI systems by exploiting vulnerabilities such as adversarial inputs (a minimal sketch follows this list). One grim possibility is the remote use of autonomous weapon systems such as drones to target individuals in a crowd via facial recognition.
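To make the “exploiting vulnerabilities” point above concrete, here is a minimal sketch of an adversarial input in the style of the Fast Gradient Sign Method, written in PyTorch. The model, image tensor, and label are placeholders assumed purely for illustration; nothing here comes from the report itself.

```python
# Minimal adversarial-example sketch (FGSM) against an image classifier.
# `model`, `image`, and `label` are placeholders supplied by the reader.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge each pixel of `image` in the direction that most increases the
    classifier's loss; a tiny, human-imperceptible change is often enough to
    flip the predicted class (e.g., defeat a face-recognition check)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # one signed gradient step
    return perturbed.clamp(0, 1).detach()
```

The unsettling part is how little the attacker needs: query access to a model and a few lines like these, not a supercomputer.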
Where We Would Hurt The Most
Through wearables, standard computing devices, and the Internet of Things, AI will inevitably permeate every corner of our existence. There are three main security domains, however, in which we are most at risk of attack and in which the consequences could be disastrous.
- Digital security. Malicious entities will readily use AI to automate processes for more effective cyber attacks, including the spear phishing, digital impersonation, and automated hacking mentioned earlier. Among businesses, a new form of corporate subterfuge will arrive as coordinated, adversarial poisoning of data aimed at compromising, devaluing, or outright destroying an organization’s data architecture (a toy sketch of this poisoning idea follows this list).
- Physical security. While we spend many of our productive hours tethered to digital devices and roaming cyberspace, we still live in a material world and inhabit physical bodies. Even in that domain, malicious AI poses life-threatening hazards. In addition to weaponized drones, attackers can compromise autonomous vehicles, connected appliances, and other devices to inflict physical harm on people and property.
- Political security. As demonstrated in the 2016 US elections and elsewhere, the use of technology (including AI, predictive analytics, automation, and social media bots) can have far-ranging societal impact. Specifically, artificial intelligence can be used by hostile entities for illegal surveillance, propaganda, deception, and social manipulation. The report’s authors highlighted probable attacks driven by AI-enhanced capabilities to analyze human behavior, moods, and beliefs. This scenario fits the needs and aspirations of an authoritarian state but can also be appropriated to subvert democracies.
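To see why the data-poisoning scenario above is so cheap to pull off, here is a toy sketch, assuming scikit-learn and a synthetic dataset (none of this is from the report): flipping a small fraction of training labels is enough to quietly degrade the resulting model, and real attacks are far subtler than random label flips.

```python
# Toy data-poisoning sketch: corrupt a slice of the training labels and
# compare the resulting classifier against one trained on clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Attacker" flips 15% of the training labels.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.15 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

On most runs the poisoned model scores somewhat lower on the held-out test set than the clean one, and that is the point: a victim organization may never notice a quiet few-percent degradation until decisions built on the model start going wrong.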
How To Stop The AI-pocalypse
An AI-assisted doomsday hasn’t happened yet and hope still remains that the good guys will outpace the bad guys.
But in contrast to the low barrier to using AI for malicious ends, developing AI systems that are both beneficial and secure requires tremendous care and rigorous testing. Even well-meaning researchers and engineers have built biased and unintentionally harmful machine learning models, with alarming consequences. Ensuring benevolent technology requires robust best practices, strategic policies, corporate culture, legislation, and a competent regulatory regime. The effort will also demand the involvement of people from diverse backgrounds: consumers, public sector officials, AI researchers, cyber security experts, and industry leaders.
The authors suggest four key steps to ensure a secure AI climate:
- Policymakers, technical researchers, and cyber security experts should collaborate to investigate, prevent, and mitigate potential malicious uses of artificial intelligence.
- AI researchers, scientists, and engineers should proactively participate in the security ecosystem surrounding artificial intelligence. These professionals should be mindful of the dual-use nature of AI, consistently integrate security features and protocols into their work, and always warn users and policymakers of the potential misuse, vulnerabilities, and risks of the products they are developing.
- Best practices should be defined and established for AI research, including the implementation of more effective and comprehensive methods for addressing dual-use concerns. Ethical standards and reasonable expectations should help shape these practices.
- More people of diverse backgrounds should become stakeholders and domain experts on issues surrounding the malicious applications of AI. Meanwhile, organizations should promote a culture of responsibility when it comes to building the world’s AI security framework.
Cyber crime is a reality that will cost the world around $6 trillion annually by 2021. Hackers with evil intent lurk in the hazy corners of the dark web and in every city around the physical world.
With AI projected to intensify the impact of cyber crime, free societies have no choice but to fight back and secure every playing field that matters. Relying primarily on AI to counter malicious AI, they will have to fight fire with fire.
But no one really needs to get singed. By channeling resources now into educating all stakeholders and building preventive security measures, the good guys can stay several steps ahead.