
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate.

These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis.

Countless more such applications are being developed and can be expected over the long term.

Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.

This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies and proposes ways to better forecast, prevent, and mitigate these threats.

We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be.

We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.

In response to the changing threat landscape, we make four high-level recommendations:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  4. The range of stakeholders and domain experts involved in discussions of these challenges should be actively expanded.

For the purposes of this report, we only consider AI technologies that are currently available (at least as initial research and development demonstrations) or are plausible in the next 5 years and focus in particular on technologies leveraging machine learning.

We only consider scenarios in which an individual or an organization deploys AI technology or compromises an AI system with the aim of undermining the security of another individual, organization, or collective.
