Dual Use

Article published on 03/12/2024

Anyone who thinks of dual use as a new recycling program for reusing disposable items is just a little off the mark. The term “dual use” refers to the possibility that developments can be used in both a good and a bad sense. Contrary to what many scientists assume, this is relevant to research and is becoming increasingly important as artificial intelligence enters all our lives. Dual use means that technologies and goods can be used for civilian purposes for the benefit of all people, but can also be misused for military and political purposes in order to monitor people, restrict their freedoms, suppress them or even kill them.

 

© pixinoo / iStock

Introduction

While the BAFA (Federal Office for Economic Affairs and Export Control) and customs focus their attention primarily on security aspects and possible economic damage to the Federal Republic of Germany, inspecting goods from the fields of physics, electronics, telecommunications and information technology, among others, goods from the fields of chemistry, biology and medicine are not without their problems either. For example, centrifuges used in medicine, biology and pharmacology can just as easily be used to enrich uranium. Various publications on virus research can also serve as blueprints for biological warfare agents. Dual use applies not only to manufactured goods, but also to intellectual property and the transfer of knowledge. Without the publications on nuclear fission by Lise Meitner, Otto Hahn, Niels Bohr and others, neither the peaceful use of atomic energy nor the construction of the atomic bomb would have been possible. Economic interests and dual use are also closely related: concerns about research content are often carelessly thrown overboard in favor of attracting third-party funding, and donors such as the US Department of Defense (DoD) are described as “the good guys”.

 

© Dennis Swanson – Studio 101 West Photography / iStock

To explain the problem in more detail, here are further examples from two highly topical and controversial scientific fields:

 

Dual Use and the Biosciences

The German Association for Biology, Biosciences and Biomedicine (VBIO) has published the following very apt definition of dual use on its website: “Dual-use research of concern (DURC) includes work that has the potential to generate knowledge, products or technologies that can be directly misused by third parties to harm human life or health, the environment or other legal interests.”

The outbreak of Spanish flu claimed up to 50 million lives during and after the First World War. According to the VBIO, researchers in the USA reconstructed this highly virulent strain in 2005 in order to understand the high pathogenicity of the influenza virus and in the hope of developing better drugs for prevention and treatment. To do so, they equipped a relatively harmless influenza virus with the complete coding sequences of all eight viral gene segments of the deadly 1918 strain. In doing so, they created a blueprint for constructing a highly dangerous killer microorganism, one that is now accessible to everyone, from autocratic regimes to individual terrorist cells.

 

© image_jungle / iStock

 

Since then, the gene sequences of other highly dangerous pathogens have become publicly available: the smallpox virus (an estimated 300 to 500 million deaths in the 20th century alone, up to its eradication) and the plague pathogen Yersinia pestis (20 to 50 million deaths).

In 2012, two research groups triggered heated discussions because they had produced variants of the H5N1 bird flu virus which, unlike the wild type, are also transmitted between mammals through the air. This turned a rather poorly transmissible virus into a highly contagious, deadly one. The rationale was that a corresponding mutation could also occur in nature at any time, and the researchers wanted to be prepared for it. Research results can also be applied in other contexts or pose a threat in a different way. In 2001, for example, according to the VBIO, a “killer” mousepox virus was developed in Australia to control a mouse plague. However, this also provided a blueprint for manipulating a human pathogenic smallpox virus to extend its lethal range. All these examples are based on good intentions, but in the wrong hands the results represent a horror vision for any peace-loving society. If your knees aren't shaking yet, we recommend the next section.

 

Dual Use and Artificial Intelligence

The World Economic Forum (WEF) warns of artificial intelligence, or AI for short, as a global risk. This applies above all to targeted disinformation in the super election year 2024, when the largest democratic systems will decide on their leadership for the coming years and thus also set the course for their global political behavior. According to Computerwoche, the WEF surprisingly rates the threat that AI-driven destabilization poses to democratic systems as even more significant than extreme weather events and cyberattacks, and no one disagrees. The threat posed by AI is real, and this is slowly dawning on the experts, as neither ethics nor politics nor legislation can keep up with the rapid pace at which AI is spreading into all our lives.

 

© tampatra / iStock

According to Spiegel Netzwelt and Heise Online, hundreds of AI experts signed the following statement last year: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by such luminaries as Sam Altman, CEO of OpenAI, Demis Hassabis, head of Google DeepMind, and the Turing Award-winning AI researchers Geoffrey Hinton and Yoshua Bengio. Taiwan's Digital Minister Audrey Tang and Microsoft's Chief Technology Officer Kevin Scott were joined by numerous AI experts from research and industry. The statement was published by the Center for AI Safety in San Francisco.

Is this warning justified? In 2022, National Geographic reported on a study by Oxford University that addressed the possibility that technology could become the undoing of humanity if advanced AI were to take on a life of its own and turn against its inventors. That AI could one day become smarter than humans has been no secret since the confrontation between Garry Kasparov and IBM's Deep Blue in 1997: for the first time, a computer managed to defeat the world's best chess player under tournament rules, winning the six-game match 3.5:2.5.

 

© PhonlamaiPhoto / iStock

 

After Deep Blue's victory, the computer community turned to another game, even older than chess, even simpler in structure, but far more complex in its move possibilities: Go! The game is particularly widespread in East Asia and was considered one of the four arts of Chinese scholars in ancient times. Not until 2015 did a computer program, AlphaGo from Google DeepMind, manage to defeat the European champion Fan Hui, followed in 2016 by a victory over Lee Sedol, 9th dan and the world's best player from 2007 to 2011 (The Verge 2016). After further defeats against ever stronger computer programs, Lee Sedol retired from professional play in 2019.

While Deep Blue relied mainly on “brute force”, evaluating hundreds of millions of positions per second, AlphaGo achieved its strength with the support of neural networks and deep learning.
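To make the contrast concrete, here is a minimal sketch of the “brute force” idea, a hypothetical illustration in Python rather than anything from IBM's or DeepMind's actual systems. It exhaustively searches the game tree of a toy game (single-pile Nim, where players alternately take one to three stones and whoever takes the last stone wins) and scores every reachable position exactly:

```python
from functools import lru_cache

# "Brute force" in the Deep Blue sense, shown on a game small enough
# to solve completely: single-pile Nim. The search visits every
# reachable position and scores it exactly.

@lru_cache(maxsize=None)
def negamax(stones):
    """+1 if the player to move wins with perfect play, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone and won
    return max(-negamax(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the take that leaves the opponent in the worst position."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -negamax(stones - t))

print(best_move(10))  # prints 2: leaving 8 stones is a lost position for the opponent
```

For Nim this exhaustive search is trivial. For chess it already required Deep Blue's specialized hardware, and for Go, whose number of legal positions dwarfs the number of atoms in the observable universe, it is hopeless; that is precisely the gap AlphaGo bridged with learned evaluations instead of exhaustive enumeration.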

But now let's return to the Oxford University study and its risk assessment of advanced AI. A scientific team from Oxford University and the Australian National University in Canberra used models to calculate how likely it is that an advanced AI would turn against humans. The results were published in AI Magazine and are worrying because, according to the lead author, an “existential catastrophe is not only possible, but likely.”

In order to understand how such an existential catastrophe could occur, we need to take a look behind the scenes. Simple AI models make decisions through supervised learning (SL), while advanced models do so through reinforcement learning (RL). With this method, the AI is not given labeled training data; instead, it independently develops strategies in simulated scenarios to achieve the best result. The best result is rewarded, with the programmers deciding what this reward looks like. The reward is meant only as an incentive for learning the intended behavior. The problem is that the AI can see through the reward system and manipulate it in order to collect more reward. The study concludes that this scenario poses a real danger. The researchers assume that an AI would quickly compete for the same resources as humans (energy) and would always be one step ahead of humans in its countermeasures. It would be a battle against an overpowering opponent.
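How such reward manipulation looks in miniature can be sketched in a few lines of Python. The example below is our own toy construction, not code from the Oxford study: a simple Q-learning agent in a five-cell corridor is supposed to reach the goal cell (one-off reward), but the hand-written reward function contains a loophole, a “tamper” cell that pays a small reward on every visit. The agent reliably learns to exploit the loophole instead of completing the intended task:

```python
import random
from collections import defaultdict

# Toy corridor with cells 0..4. Intended task: walk right and reach the
# goal cell 4 (one-off reward 1.0, episode ends). Flaw in the proxy
# reward: visiting cell 1 "tampers" with the reward sensor and pays 0.3
# every time, without ever ending the episode.
GOAL, TAMPER = 4, 1
ACTIONS = (-1, +1)  # step left / step right

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    if nxt == GOAL:
        return nxt, 1.0, True    # intended reward, terminal
    if nxt == TAMPER:
        return nxt, 0.3, False   # hacked reward, non-terminal
    return nxt, 0.0, False

def train(episodes=3000, alpha=0.1, gamma=0.99, epsilon=0.1, horizon=30):
    q = defaultdict(float)       # Q-values keyed by (state, action)
    for _ in range(episodes):
        state = 0
        for _ in range(horizon):
            if random.random() < epsilon:                      # explore
                action = random.choice(ACTIONS)
            else:                                              # exploit
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
            if done:
                break
    return q

q = train()
greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(greedy)  # typically oscillates around cell 1 instead of heading for cell 4
```

Because the discounted sum of the endless small tamper rewards exceeds the one-off goal reward, hovering around the tamper cell is the optimal policy for the reward that was actually specified. The agent does exactly what it was told, just not what was meant, and that, scaled up to powerful systems, is the danger the study describes.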

The current wars between Russia and Ukraine as well as between Israel and the Iranian-backed terrorist militias Hamas in the south, Hezbollah in the north and the Houthis on the Red Sea are already being fought with intensive AI support, according to Deutschlandfunk in October 2023. At present, AI is probably still used mainly for defense, but the next step towards autonomous weapons has already been taken, and AI is already fully operational in gathering and evaluating intelligence.

 

Dual Use and the Responsibility of Researchers

In 2022, the Joint Committee on the Handling of Security-Relevant Research of the German Research Foundation (DFG) and the Leopoldina (German National Academy of Sciences) issued updated recommendations on scientific freedom and scientific responsibility for research institutions in Germany. In them, the committee emphasizes that research forms the essential basis for progress and must therefore be free and unrestricted, a freedom also guaranteed by the Basic Law. However, this freedom of research comes at a price and with high risks, as any useful result carries the danger of being misused.

This is why, according to the DFG and the Leopoldina, all researchers bear a special ethical responsibility that goes beyond their legal obligations, owing to their knowledge, experience and freedom. The two organizations expect research institutions themselves to create the framework conditions for ethically responsible research and to introduce instruments of scientific self-regulation.

However, researchers are often so immersed in their work that they cannot imagine, with the best will in the world, that it could be misused; after all, there is usually an advisory ethics council, and they are “the good guys”. The DFG and the Leopoldina have therefore developed recommendations for both individual researchers and research institutions. Part A, which is aimed at individual researchers, warns of the danger of misuse, of which researchers must be aware. In critical cases, they must decide for themselves, based on their knowledge and experience, what is responsible in their research, weighing the opportunities of the research against its risks to human dignity, life and other important goods.

 

© gorodenkoff / iStock

 

 

The recommendations flesh out this weighing of interests with regard to the necessary risk analysis, measures to mitigate risks, scrutiny of whether research results should be published and, as a last resort, refraining from the research altogether. The primary objective is the responsible conduct and communication of research. In individual cases, a responsible decision by researchers may even mean that a high-risk project is carried out only after a research moratorium, or not at all.

However, implementing these recommendations requires a change in awareness among researchers at universities, and this can only come about through increased education and training. From the very beginning of their studies, there should be compulsory courses in which all students engage with the dual use of research results. This is precisely what Part B of the DFG and Leopoldina's joint recommendations, aimed at research institutions, envisages. In it, they call for a heightened awareness of the problem and the necessary knowledge beyond the legal boundaries of research, as well as the introduction of ethical rules for dealing with security-relevant research.

 

Conclusion

“The road to hell is paved with good intentions”.

The many good intentions behind research and innovation are no longer enough. Researchers bear responsibility and must put up with questions such as: “What if these results fall into the wrong hands? How dangerous is this research? Should such results be published at all? Or must they perhaps be published, so that the general public has a better chance of reacting appropriately if the worst comes to the worst?” Researchers are obliged to ask themselves these questions first and to consider in advance what effects their creation may have, and please don't answer with “hindsight is always 20/20”. The same applies to killer arguments such as “If I don't do it, others will”, “It's all dual use anyway” and “My research isn't dangerous because we're the good guys”. All researchers must constantly question themselves and check the results they obtain for possible misuse. They must always weigh whether the potential benefit outweighs the potential harm. And to do this, you have to think outside the box and ask whether the sensor you have just developed is not far better suited to building bombs, or whether the blueprint for a potential drug that might come onto the market in 20 years' time could be used tomorrow for the mass destruction of people.


Dual use is not about banning research, but about creating an awareness of personal responsibility among researchers!

The decision perhaps not to pursue something to the end must come from the researchers themselves. Especially when their own field of research becomes the focus of international criticism, researchers in that field are called upon not to duck away, but to take the fears and warnings seriously and to develop mechanisms against possible misuse. Incidentally, financial gain, ambition, the striving for power, and the pressure to succeed and perform are the dark side of the Force in these considerations!

