UAV operators are urged to reassess use of AI as new report warns of exploitation risks

Commercial UAV operators have been urged to reassess their use of artificial intelligence (AI) software following a report detailing exploitation risks.

The Malicious Use of Artificial Intelligence, a 100-page forecast report, details the potential dangers of AI and outlines the risks posed by rogue operators, as well as terrorists and criminals.

The report identified digital, physical and political security as the domains in which malicious use of AI is most likely to be exploited.


In relation to drone technology, the report stated that an individual could acquire a UAV and fit it with facial recognition software to target a specific person.

The report stated: “The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks.

“We also expect novel attacks that subvert cyberphysical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones).

“Systems that examine software for vulnerabilities have both offensive and defensive applications, and the difference between the capabilities of an autonomous drone used to deliver packages and the capabilities of an autonomous drone used to deliver explosives need not be very great,” it continued.

While the report details specific areas of concern, it acknowledges that the flagged risks are not an exhaustive list, and it concludes by urging operators to consider how using AI could make their operations vulnerable.
