If you think dual use is a new recycling program for single-use items, you are, unfortunately, mistaken. The term "dual use" refers to the possibility that a development can be put to two kinds of use: for good and for bad.
Contrary to what many scientists assume, this is highly relevant in research and is becoming ever more important as artificial intelligence enters all areas of our lives. Dual use describes the fact that technologies and goods can not only be used for civilian purposes for the benefit of all people, but can also be misused for military and political ends in order to monitor people, restrict their freedoms, oppress them, or even kill them.
To explain the problem in more detail, here are further examples from two highly topical and controversial scientific fields:
The World Economic Forum (WEF) warns that artificial intelligence, or AI for short, is a global risk. This applies above all to targeted disinformation in the super election year 2024, when the largest democratic systems will decide on their leadership for the coming years and thus also set the course for their global political behavior. According to Computerwoche, the WEF surprisingly rates the threat of AI-driven destabilization of democratic systems as even more significant than extreme weather events and cyberattacks, and no one disagrees. The threat posed by AI is real, and this is slowly dawning on the experts, as neither ethics nor politics nor legislation can keep up with the rapid pace at which AI is spreading into all areas of our lives.
According to Spiegel Netzwelt and Heise Online, hundreds of AI experts signed the following statement last year: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The statement was signed by such luminaries as Sam Altman, CEO of OpenAI, Demis Hassabis, head of Google DeepMind, and the Turing Award-winning AI researchers Geoffrey Hinton and Yoshua Bengio. Taiwan's Digital Minister Audrey Tang and Microsoft's Chief Technology Officer Kevin Scott were joined by numerous AI experts from research and industry. The statement was published by the Center for AI Safety in San Francisco.
Is this warning justified? In 2022, National Geographic reported on a study by Oxford University that addressed the possibility that the technology could become humanity's undoing if advanced AI were to take on a life of its own and turn against its inventors. That AI could one day become smarter than humans has been no secret since the match between Garry Kasparov and IBM's Deep Blue in 1997, when a computer managed for the first time to defeat the reigning world chess champion under tournament rules, winning the six-game match 3.5 to 2.5.
The current wars between Russia and Ukraine and between Israel and the Iranian-controlled terrorist militias Hamas in the south, Hezbollah in the north, and the Houthis on the Red Sea are already being fought with intensive AI support, as Deutschlandfunk radio reported in October 2023. For now, AI is probably still used mainly for defense, but the next step toward autonomous weapons has already been taken, and AI is already fully operational in gathering and evaluating intelligence.
However, researchers are often so immersed in their work that they cannot imagine, with the best will in the world, that it could be misused, especially since there is usually an advisory ethics council and they are, after all, "the good guys." For this reason, the DFG, together with the Leopoldina, has developed recommendations for both individual researchers and research institutions. Part A, which is aimed at individual researchers, warns of the risk of misuse, of which researchers must be aware. In critical cases, they must make a personal decision, based on their knowledge and experience, about what is responsible in their research. In doing so, the opportunities of the research must be weighed against its risks to human dignity, life, and other important goods.
"The road to hell is paved with good intentions."
The many good intentions in research and innovation are no longer sufficient. Researchers bear responsibility and must face questions such as: What if these results fall into the wrong hands? How dangerous is this research? Should such results even be published? Perhaps they even have to be published so that the public has a better chance of responding appropriately. Researchers are obliged to ask themselves these questions first and to consider in advance what impact their creations could have, and please do not respond with, "It is always easy to be wise after the event." The same applies to killer arguments such as "If I don't do it, others will," "It's all dual use anyway," and "My research isn't dangerous because we're the good guys." All researchers must continually question themselves and review their results for potential misuse. It is fundamentally necessary to weigh whether the potential benefit outweighs the potential harm. And to do that, you have to think outside the box and ask whether the sensor you have just developed might be far better suited to building bombs, or whether the blueprint for a potential drug that might reach the market in twenty years could be used tomorrow for the mass destruction of human beings.
Dual use isn't about banning research, but rather about raising awareness of personal responsibility among researchers!
The decision perhaps not to pursue something to its conclusion must come from the researchers themselves. Especially when their own field of research becomes the focus of international criticism, researchers in that field are called upon not to dodge the issue, but to take the fears and warnings seriously and to develop mechanisms that prevent potential misuse. Financial gain, ambition, the pursuit of power, and the pressure to succeed and perform represent the dark side of the Force in these considerations!