Misuse of AI

Think before you code.

Report now available: Download here


News

We participated in the 2018 meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) at the United Nations in Geneva.

There, we presented our findings during a side event on the topic "A Technological Perspective on Misuse of Available AI".

 

Reactions

 

Conscious Coders held a side event on the technical aspects and risks of AI, which was highly welcomed by state representatives. The latter argued that such a clear technological overview had been lacking within the debate.

Regina Surber, Advisor to the ICT4Peace Foundation and the Zurich Hub for Ethics and Technology (ZHET)

For me this look into the black box was an eye-opener to see how the algorithms behind AI work and how autonomous systems can be developed.

Col. (GS) Bruno Paulus, Military Advisor of the Permanent Representation of Germany to the Conference on Disarmament

More information about the conference: UN Group of Governmental Experts

 


What are we working on?

 

Autonomous systems are increasingly entering our daily lives. The application of novel technology, however, is not limited to civilian use; it also disrupts the military sphere and poses a significant threat of misuse. Today's discussion on Lethal Autonomous Weapons Systems (LAWS) is based mainly on ethical and legal arguments, although the revolution in decision-making processes stems from technological innovation.

In particular, algorithms and technology that are accessible from civilian sources often carry dual-use implications. The international discussion on LAWS should therefore not focus solely on possible future developments but also consider how to deal with already existing technology that could be used in a military context.

We, the group Misuse of AI, aim to contribute a well-founded technological viewpoint to the important discussion around LAWS.

 


We would be happy to hear your thoughts on this important topic:

Techies: Have you worked on the dual-use implications of technology, on data tracking, or on preventing the misuse of algorithms and technology in general?

Social Scientists: Do you have experience with technology assessment, the socially beneficial use of technology, or the ethics of war?

Everyone else: Are you interested in the topic and would like to contribute to our discussion?

Please get in touch with us: ai.misuse@consciouscoders.io

 


Who are we?

 

Valentin

The introduction of AI in war is fundamentally different from earlier developments in warfare: for the first time, machines can be given the power to decide autonomously about taking human lives. International regulation is needed now.

Industrial engineer interested in societal debates.


Alexander

The risks arising from the openness of innovation in the field of artificial intelligence call for regulation that keeps pace with development. This applies in particular to autonomous weapon systems, which already seem possible to build today with available technologies.

Computer engineer with a passion for applying cutting-edge AI technology.


Florian

Computer scientist with strong expertise in deep and reinforcement learning.


Lukas

Current civilian developments in AI-based decision-making, imagined in war-related applications, should be of great concern to every individual, since anyone could potentially become a target of such technology.

Electrical engineer gone astray as a political scientist.