
Dual Use

Article published on 3 December 2024

If you think dual use is a new recycling program for single-use items, you are not entirely wrong, but not quite right either. The term "dual use" refers to the possibility that a development can be used in two ways: for good and for ill.

Contrary to what many scientists assume, dual use is highly relevant in research, and it is becoming ever more important as artificial intelligence enters all areas of our lives. Dual use means that technologies and goods can not only be used for civilian purposes for the benefit of all people, but can also be misused for military and political ends: to monitor people, restrict their freedoms, oppress them or even kill them.

Introduction

While the BAFA (Federal Office for Economic Affairs and Export Control) and customs focus their attention primarily on security aspects and potential economic damage to the Federal Republic of Germany, inspecting goods from fields such as physics, electronics, telecommunications and information technology, goods from chemistry, biology and medicine are not without their problems either. Centrifuges used in medicine, biology and pharmacology, for example, can just as easily be used to enrich uranium, and various publications on virus research can serve as blueprints for biological warfare agents.

Dual use applies not only to manufactured goods, but also to intellectual property and thus to the transfer of knowledge. Without the publications on nuclear fission by Lise Meitner, Otto Hahn, Niels Bohr and others, neither the peaceful use of atomic energy nor the construction of the atomic bomb would have been possible.

Economics and dual use are also closely intertwined: concerns about research content are often carelessly thrown overboard in favor of attracting third-party funding, and donors such as the US Department of Defense (DoD) are cast as "the good guys".

To explain the problem in more detail, here are further examples from two highly topical and controversial scientific fields:

Dual Use and the Biosciences

The German Association for Biology, Biosciences and Biomedicine (VBIO) has published the following very apt definition of dual use on its website: “Dual Use Research of Concern (DURC) includes work that has the potential to generate knowledge, products or technologies that can be directly misused by third parties to harm human life or health, the environment or other legal interests.”
The outbreak of Spanish flu claimed up to 50 million lives during and after the First World War. According to VBIO, researchers in the USA set out in 2005 to reconstruct this highly virulent strain in order to understand the high pathogenicity of the influenza virus and in the hope of developing better drugs for prevention and treatment. To do so, they equipped a relatively harmless influenza virus with the complete coding sequences of all eight viral gene segments of the deadly 1918 strain. In doing so, they created a blueprint for the construction of a highly dangerous killer microorganism, one that is now accessible to everyone, from autocratic regimes to individual terrorist cells.
In the meantime, further gene sequences of highly dangerous pathogens have become publicly available: the smallpox virus (estimated 300 to 500 million deaths in the 20th century alone until eradication) and the plague pathogen Yersinia pestis (20 to 50 million deaths).
In 2012, two research groups triggered heated discussions because they had produced variants of the H5N1 avian influenza virus which, unlike the wild type, are also transmitted between mammals through the air. This turned a poorly transmissible virus into a highly contagious, deadly one. The reasoning was that a corresponding mutation could occur in nature at any time, and the researchers wanted to be prepared for it. Research results can also be applied in other contexts or pose a threat in a different way. In 2001, for example, according to VBIO, a "killer" mousepox virus was developed in Australia to control a mouse plague. In effect, however, this provided a blueprint for manipulating a human-pathogenic smallpox virus to extend its lethal range. All these examples are based on good intentions, yet in the wrong hands the results are a horror scenario for any peace-loving society. For those whose knees are not yet shaking, we recommend the next section.

Dual Use and Artificial Intelligence

The World Economic Forum (WEF) warns of artificial intelligence, or AI for short, as a global risk. This applies above all to targeted disinformation in the super election year 2024, in which the largest democratic systems decide on their leadership for the coming years and thus also set the course for their global political conduct. According to Computerwoche, the WEF surprisingly rates the threat that AI-driven destabilization poses to democratic systems as even more significant than extreme weather events and cyberattacks, and hardly anyone disagrees. The threat posed by AI is real, and this is slowly dawning on the experts, because neither ethics nor politics nor legislation can keep pace with the speed at which AI is spreading into all areas of our lives.

According to Spiegel Netzwelt and Heise Online, hundreds of AI experts signed the following statement last year: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The signatories include such luminaries as Sam Altman, CEO of OpenAI, Demis Hassabis, head of Google DeepMind, and the Turing Award-winning AI researchers Geoffrey Hinton and Yoshua Bengio, joined by Taiwan's Digital Minister Audrey Tang, Microsoft's Chief Technology Officer Kevin Scott and numerous other AI experts from research and industry. The statement was published by the Center for AI Safety in San Francisco.

Is this warning justified? In 2022, National Geographic reported on a study by Oxford University addressing the possibility that the technology could become humanity's undoing if an advanced AI were to take on a life of its own and turn against its inventors. That AI could one day become smarter than humans has been no secret since the confrontation between Garry Kasparov and IBM's Deep Blue in 1997, when, for the first time, a computer defeated the world's best chess player 3.5:2.5 over six games played under tournament rules.

After Deep Blue's victory, the computer community turned to another game, even older than chess, simpler in structure, but far more complex in its possible moves: Go! The game is particularly widespread in East Asia and in ancient times was counted among the four arts of Chinese scholars. It was not until 2015 that a computer program, AlphaGo from Google DeepMind, managed to defeat the European champion Fan Hui, followed in 2016 by a victory over Lee Sedol, 9th dan and the world's best player from 2007 to 2011 (The Verge 2016). After further defeats against ever stronger computer programs, Lee Sedol retired from professional play in 2019.
While Deep Blue relied mainly on brute force, evaluating hundreds of millions of positions, AlphaGo achieved its strength with the support of neural networks and deep learning.
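
To make the contrast concrete, here is a minimal sketch under invented assumptions: a tiny Nim-like game stands in for chess and Go, a hand-written pile % 4 rule stands in for a learned value network, and the same minimax routine is run once exhaustively (brute force) and once with a shallow depth limit plus that estimate. Nothing here is Deep Blue's or AlphaGo's actual code.

```python
# Invented Nim-like game: players alternately take 1-3 stones;
# whoever takes the last stone wins.

nodes = 0  # how many positions the search examines

def minimax(pile, maximizing, depth, evaluate):
    """Plain minimax; 'evaluate' scores positions when the depth limit hits."""
    global nodes
    nodes += 1
    if pile == 0:                        # previous player took the last stone
        return -1 if maximizing else 1
    if depth == 0:                       # depth limit: estimate, don't search
        return evaluate(pile, maximizing)
    scores = [minimax(pile - take, not maximizing, depth - 1, evaluate)
              for take in (1, 2, 3) if take <= pile]
    return max(scores) if maximizing else min(scores)

def value_estimate(pile, maximizing):
    # Stands in for a learned value network. In this game, positions with
    # pile % 4 == 0 are lost for the player to move.
    value = -1 if pile % 4 == 0 else 1
    return value if maximizing else -value

nodes = 0
result = minimax(21, True, 30, lambda p, m: 0)   # deep enough to never estimate
print(f"brute force:        value {result}, {nodes} positions examined")

nodes = 0
result = minimax(21, True, 2, value_estimate)    # two plies plus an estimate
print(f"shallow + estimate: value {result}, {nodes} positions examined")
```

Both runs find that the starting position is won, but the exhaustive search examines several hundred thousand positions, while the shallow search with a good value estimate needs about a dozen. In chess and Go the same trade-off spans many orders of magnitude, which is why learned evaluation was the breakthrough.
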
But now let's return to the Oxford University study and the risk assessment of advanced AI. A scientific team from Oxford University and the Australian National University in Canberra has used models to calculate how likely it is that an advanced AI would turn against humans. The results were published in AI Magazine and are worrying because, according to the lead author, an “existential catastrophe is not only possible, but likely.”
To understand how such an existential catastrophe could come about, we need to look behind the scenes. Simple AI models make decisions through supervised learning (SL), while advanced models use reinforcement learning (RL). With this method, the AI is not trained on labeled data; instead, it independently develops strategies in simulated scenarios to achieve the best result. The best result is rewarded, and the programmers decide what that reward looks like; the reward signal is what drives the learning. The problem is that the AI can see through the reward system and manipulate it in order to collect more reward. The study concludes that this scenario poses a real danger: the researchers expect that such an AI would soon compete with humans for the same resources, above all energy, and would always be one step ahead of any human countermeasures. It would be a battle against an overpowering opponent.
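
What this "seeing through the reward system" can look like is easy to sketch. The following toy example is invented for illustration (it is not code from the Oxford study and uses no real RL framework): a greedy agent compares an action that makes real progress with one that merely manipulates the measured reward, and it optimizes only the measurement. This failure mode is often called reward hacking.

```python
import random

ACTIONS = ["do_task", "tamper"]

def environment(action):
    """Returns (measured_reward, real_progress)."""
    if action == "do_task":
        return 1.0, 1.0    # honest work: the reward reflects real progress
    return 10.0, 0.0       # tampering with the reward channel: big measured
                           # reward, zero actual progress on the task

values = {a: 0.0 for a in ACTIONS}   # running average reward per action
counts = {a: 0 for a in ACTIONS}
progress = 0.0

random.seed(0)
for step in range(1000):
    if random.random() < 0.1:                   # occasionally explore
        action = random.choice(ACTIONS)
    else:                                       # otherwise exploit the estimate
        action = max(ACTIONS, key=values.get)
    reward, real = environment(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]
    progress += real

print(values)    # the agent learns that 'tamper' pays about ten times more ...
print(progress)  # ... so the goal its designers cared about barely advances
```

After a thousand steps the agent has settled on tampering: the measured reward is maximized while real progress stalls. Scaled up to systems far more capable than this toy loop, that divergence between the rewarded signal and the intended goal is exactly the risk the study describes.
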

The current wars between Russia and Ukraine and between Israel and the Iranian-backed terrorist militias, Hamas in the south, Hezbollah in the north and the Houthis on the Red Sea, are already being fought with intensive AI support, as Deutschlandfunk radio reported in October 2023. For the moment, AI is probably still used mainly for defensive purposes, but the first steps toward autonomous weapons have already been taken, and AI is already fully operational in gathering and evaluating information.

Dual Use and the Responsibility of Scientists

In 2022, the Joint Committee on the Handling of Security-Relevant Research of the German Research Foundation (DFG) and the Leopoldina (German National Academy of Sciences) issued updated recommendations on scientific freedom and scientific responsibility for third-party-funded research institutions in Germany. In them, the committee emphasizes that research forms the essential basis for progress and must therefore be free and unrestricted, a freedom also guaranteed by the German Basic Law. This freedom of research, however, comes at a price and with high risks, since every useful result also carries the danger of being misused.
This is why, according to the DFG and the Leopoldina, all researchers bear a special ethical responsibility that follows from their knowledge, experience and freedom and goes beyond their legal obligations. The two organizations expect research institutions themselves to create the framework conditions for ethically responsible research and to establish the instruments of scientific self-regulation.

However, researchers are often so immersed in their work that they cannot imagine, with the best will in the world, that it could be misused; after all, there is usually an advisory ethics council, and they are "the good guys". The DFG and the Leopoldina have therefore developed recommendations for both individual researchers and research institutions. Part A, aimed at individual researchers, warns of the risk of misuse, of which researchers must be aware. In critical cases, they must decide for themselves, based on their knowledge and experience, what is responsible in their research, weighing the opportunities of the research against its risks to human dignity, life and other important goods.

The recommendations make this weighing concrete with regard to the necessary risk analysis, measures to mitigate risks, scrutiny of whether research results should be published and, as a last resort, refraining from the research altogether. The primary objective is the responsible conduct and communication of research. In individual cases, a responsible decision by researchers may even mean that a high-risk project is carried out only after a research moratorium, or not at all.
Implementing these recommendations, however, requires a change in awareness among researchers at universities, and this can only come about through more education and training. From the very beginning of their studies, there should be compulsory courses for all students on the dual use of research results. This is precisely what Part B of the DFG and Leopoldina's joint recommendations envisages for research institutions: it calls for a heightened awareness of the problem, for the necessary knowledge beyond the legal boundaries of research, and for the introduction of ethical rules for dealing with security-relevant research.

Conclusion

"The road to hell is paved with good intentions."

The many good intentions in research and innovation are no longer enough. Researchers bear responsibility and must face questions such as: What if these results fall into the wrong hands? How dangerous is this research? Should such results be published at all? Perhaps they even have to be published so that the public has a better chance of responding appropriately. Researchers are obliged to ask themselves these questions first and to consider in advance what impact their creations could have, and please not with the excuse that everyone is wiser after the event. The same applies to killer arguments such as "If I don't do it, others will", "Everything is dual use anyway" and "My research isn't dangerous because we're the good guys". All researchers must continually question themselves and examine their results for potential misuse. It is fundamentally necessary to weigh whether the potential benefit outweighs the potential harm. And to do that, you have to think outside the box and ask whether the sensor you have just developed might be far better suited to building bombs, or whether the blueprint for a potential drug that might reach the market in perhaps 20 years could be used tomorrow for the mass killing of human beings.

Dual use isn't about banning research, but rather about raising awareness of personal responsibility among researchers!

The decision perhaps not to pursue something to its conclusion must come from the researchers themselves. Especially when their own field of research becomes the focus of international criticism, researchers in that field are called upon not to duck away, but to take the fears and warnings seriously and to develop mechanisms that prevent potential misuse. Financial advantage, ambition, the pursuit of power and the pressure to succeed and perform are the dark side of the Force in these considerations!
