The Concept of Autonomy and the Changing Character of War

There has been an immense development in unmanned aircraft technology in the past three decades or so. The percentage of unmanned versus manned aircraft in combat operations is only predicted to grow in the future. The public's aversion to risk and the endurance facilitated by modern unmanned systems have both played important roles in the growth of unmanned aircraft in modern warfare. Increasingly complex warfare scenarios call for increasingly complex weapons systems, and autonomous aircraft are predicted to play a crucial role in meeting tomorrow's operational challenges. The article argues that even though autonomous systems will be able to make tactical decisions by themselves, these decisions will not be acted upon in a vacuum – even autonomous machines will be part of the military and political chain-of-command. Operational concepts such as 'loyal wingman', Manned-Unmanned Teaming, motherships and swarming are the beginning of a new autonomous way of warfare. It is important that we tailor our autonomous machines to operate inside the realm of military and political control. It is thus crucial to have a broad debate among policy makers, technology developers, scholars and civil society in order to decide how the weapons of the future will be programmed, and what place and scope human control should have therein.


INTRODUCTION
Autonomous unmanned aircraft attacking in swarms, huge motherships sending out smaller aircraft to execute missions, submarines releasing drones from the abyss and flying them deep inside enemy territory to gather information. These are not ideas from a sci-fi novel, but military tactics currently being implemented or planned for the near future. Several inter-related questions arise from the advent of fully autonomous weapons systems. Are autonomous weapons changing the way we fight wars? Is the new technology revolutionary? Will autonomous weapons systems take over and render the military chain-of-command obsolete? The military requirement for autonomy in weapons systems is based on real challenges that must be overcome by military units in future wars. Such autonomy is expected to help meet many of these challenges.
This article will discuss the fundamental aspects of autonomy in weapons systems and some of the challenges that the military hopes autonomy will help overcome. In the latter context, the article will provide some insights on modern warfare concepts that are facilitated by autonomy in weapons systems, in particular flying autonomous systems. The controversial practice of targeted killings is not discussed in this article, although some of the operational concepts facilitated by autonomy are. While not addressing in detail the moral and ethical aspects of leaving decisions on the application of lethal power to machines, this article will touch on the context in which such decisions will be made.
The second chapter discusses briefly some of the factors that form the background of the development of unmanned systems as we know them today. The third chapter discusses what autonomy is, and where it fits into the discussion of command and control in military operations. The fourth chapter then gives a short presentation of some of the challenges that modern militaries are seeing, and a few of the operational concepts that autonomy is facilitating. Finally, the fifth chapter concludes by presenting a few thoughts on the way ahead for autonomy in military operations.

THE DEVELOPMENT OF UNMANNED AIRCRAFT TECHNOLOGY
There has been an immense development in unmanned aircraft technology in the past three decades or so. Although unmanned aerial systems have supported warfare for almost as long as manned aircraft, the scope of utilisation and the number of assets involved in combat in recent years are unequalled in history. The percentage of unmanned versus manned aircraft in combat operations is only predicted to grow in the future. Two important factors have played a critical role as premises for this growth in focus, use, and development of unmanned systems in the past couple of decades, namely risk and endurance.
The first important underlying factor behind the development of unmanned technology is the general public's and politicians' unwillingness to accept risk to one's own military forces in the application of military power. Beginning with the first Gulf War in 1991, which much of the world could follow on the television in their living rooms, the 1990s shaped a psyche in the West expecting short wars with few, or preferably no, casualties of one's own. The first Gulf War set something of a modern precedent for how many losses the public was willing to accept, and how quickly a war was to be concluded. Even the Chinese write admiringly about the Gulf War of 1991 in their doctrinal writings.2 But then came the killing of American soldiers in Mogadishu in 1993, after which the Clinton administration suffered such a backlash in public opinion that it did not intervene in Rwanda the following year. In 1999, NATO intervened militarily in Kosovo through Operation Allied Force, enforcing a hard requirement for military aircraft to stay above 15,000 feet in order to avoid being shot down by sophisticated Serbian air defence systems. The decade thus produced both implicit and explicit demands for low risk in military operations, and this collective low-risk psyche has shaped the demand for unmanned aircraft that, by their very nature, expose our own soldiers to little or no risk.3

Another significant factor is the requirement for endurance in surveillance assets. Modern surveillance demands significant presence over time above the object being observed in order to build sufficient situational awareness and understanding.
The way asymmetric warfare has developed puts an increasing demand on intelligence and on a timely tactical, operational, and strategic foundation for decision making. There are several reasons for intelligence, surveillance and reconnaissance (ISR) being given such a prominent place in asymmetric warfare. One factor is that clashes between one's own troops and the enemy often take place in areas with many civilians present, such as cities and other densely populated areas. In such cases there are obvious requirements for accuracy in targeting in order to avoid collateral damage and the loss of civilian lives. Another factor is the common modus operandi of insurgents and terrorists of deliberately blending into the civilian populace and its activities. This poses significant challenges to ISR assets and entities with regard to building a pattern-of-life and a general understanding of the enemy's movements and actions. It means that surveillance assets must remain above the target under surveillance for as much time as possible in order to build the required situational awareness.
The development of unmanned systems is now focussing on improved sensors, increased endurance, higher speed, reduced radar cross section, and, perhaps most of all, the integration of unmanned systems into warfare. The factor that will increasingly facilitate this integration is rising levels of autonomy.

WHAT IS AUTONOMY?
In order to better grasp and analyse the concept of autonomous weapons systems, we must have a common understanding of what autonomy is. Reading articles in the media, popular science magazines, or descriptions by weapons systems manufacturers, one may get the impression that there are already weapons systems operating autonomously. This stretches the definition, if it is not an outright falsehood. If the degree of automation in a system is sufficiently encompassing, we tend to describe the system as something more than automatic – and the tendency is to designate the system autonomous. The US Air Force describes several of its unmanned systems as partly autonomous or semi-autonomous, where elements of the system function autonomously. One example is the stabilisation of unstable aircraft, which is done by computers in modern aircraft. The computer assists the pilot in flying the aircraft, so that he or she can focus on operational tasks, rather than having the inherent instability of the aircraft demand the pilot's full attention simply to fly. The stabilising effort is carried out by the computer, and some will claim that the aircraft is autonomously stabilising itself. In the debate over the appropriate common definition of autonomy, there are in principle two schools of thought. On the one side, the US view is that a system is autonomous if it is capable of operating without human input. NATO also operates with this understanding, where a distinction is made between automatic actions and processes, defined as 'the execution of a predefined process or event that requires UAV system crew initiation', and autonomous actions and processes, defined as 'the execution of predefined processes or events that do not require direct UAV system crew initiation and/or intervention'.4 The delineation is thus between whether or not an action or process has been initiated or directed by an operator.
On the other side, the British view is that a system is autonomous only if it is 'capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.'5 In this case we have a system that operates based on its own understanding of the circumstances and surroundings. The system understands its surroundings and its place within them, and it is this understanding, rather than pre-programmed algorithms, that provides the framework for action. This definition is much closer to artificial intelligence, and demands a contextualised grasp of superior entities' higher intent for the mission, as is the case with any human military operator. This is the terrain we are exploring when discussing what systems the future will bring, and where the responsibility for actions executed by machines belongs. To further confuse matters, there are on-going discussions on scales of automation and partial autonomy on the way towards full autonomy. If we compare current systems with the ones we can expect in the future, we see a continuum from the first-generation unmanned systems, where every single action is controlled from afar, through modern systems that are partially autonomous, to what may come in the future: fully autonomous weapons systems in the sense of the British definition above.
To underline the differentiation between automatic and autonomous systems, Gary Schaub and Jens Kristoffersen at the Centre for Military Studies, University of Copenhagen, have presented a simple model, as depicted below. The clear delineation and high threshold for describing any system as truly autonomous raise important matters of transparency, responsibility, and accountability for lethal autonomous weapons systems. This article will not go into this discussion (it being addressed in some detail by Møgster in his article in this issue of the Oslo Law Review), but it seems fundamental to grasp what autonomous operations really entail before venturing into the problematic ethical, moral, and legal aspects of such operations. What is the foundation for a given weapons system's decision making? Can we understand the rationale or not? If we can, we can program the system otherwise if required, and we can direct it, change it, and so on. If we cannot understand the basis for an action, especially in situations where the system is executing lethal operations, we must be very careful indeed about utilising such systems in operations. But as it stands today, there are no systems in war that operate free from specific algorithms pre-programmed by humans, which in turn means that although the algorithms may be complex, we have a full overview of the system's programming and potential actions. A reason for discussing a scale of automation is that a drone will not necessarily be limited to one level of autonomy, but will be capable of being adjusted to suit different scenarios. If there is a high level of trust in the system in a given scenario, a higher level of autonomy can be given to the system. A higher degree of trust in the system will facilitate more complex operations, which in turn demand a higher degree of autonomy from the respective systems.
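The adjustable scale described above can be illustrated with a minimal sketch. The level names, the numeric trust score, and the thresholds below are purely illustrative assumptions made for this article, not any fielded or standardised classification; the one substantive point the sketch encodes is that the autonomy level granted to a system is always capped by what the chain-of-command has authorised.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative points on the continuum from remote control to full autonomy."""
    REMOTE_CONTROL = 0    # every action initiated by a ground operator
    AUTOMATED = 1         # pre-defined processes run after crew initiation
    SEMI_AUTONOMOUS = 2   # sub-functions (e.g. stabilisation) run without intervention
    FULLY_AUTONOMOUS = 3  # acts on its own understanding of higher-level intent

def permitted_level(trust: float, max_allowed: AutonomyLevel) -> AutonomyLevel:
    """Map operator trust (0..1) in a given scenario to an autonomy level,
    capped by the level the chain-of-command has authorised.
    The thresholds are arbitrary illustrations, not doctrine."""
    if trust < 0.25:
        level = AutonomyLevel.REMOTE_CONTROL
    elif trust < 0.5:
        level = AutonomyLevel.AUTOMATED
    elif trust < 0.75:
        level = AutonomyLevel.SEMI_AUTONOMOUS
    else:
        level = AutonomyLevel.FULLY_AUTONOMOUS
    # The human-set ceiling always wins over the trust-derived level.
    return min(level, max_allowed)
```

Even a high-trust scenario cannot raise the system above its authorised ceiling, which is the model's rendering of the point that trust widens, but never replaces, human authority.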
From a military-philosophical leadership standpoint, we can exemplify this distinction by distinguishing between detailed micro-management and mission command. The former entails a superior officer or entity directing a military unit's every move, whereas in the latter case only overarching direction and guidance is given. Autonomous weapons will operate in the mission command context, but the decision whether humans should be in, on, or out of the loop in deciding to apply lethal power through the use of autonomous weapons is still up to us, including the politicians, the developers, the military, the public, and more generally humanity as a whole.7 A continuum from direct remote control to full autonomy covers everything from a system where every single action is initiated and directed by an operator on the ground, to a system that is able to manoeuvre, orient itself, and make its own decisions, based on its understanding of its surroundings, its mission, and the higher intent of its superiors. But will the drones of the future make their own decisions on life and death at their own discretion? It is hard to imagine a targeting scenario in which we operate outside the framework of a targeting list that has been explicitly approved by humans, and in which the prioritisation of what will be targeted is set by the chain-of-command. In the future we will continue to operate with very specific attack criteria that explicitly state the level of certainty required for finding the right target and the acceptable level of collateral damage, which will also be set by the chain-of-command. But it is likely that the framework for these decisions will change from one conflict to another, and from one phase of a conflict to another.
A fighter pilot is also in many respects operating autonomously in operations today, but he or she has very strict and specific limitations as to what has been ordered, and restraints and constraints with regard to these orders. Even if technology would allow it, it does not seem operationally advisable, nor politically palatable, to remove the human from the loop altogether. No military units are exempt from command and control – the application of military power is by definition under political control and authority, and no military unit is outside this chain. All military units and personnel must operate within the framework and rules of engagement that direct any given operation or mission they take part in, and no single person or entity has the authority to make decisions on life and death at his, her, or its own discretion.
One fundamental question thus becomes what ability and capacity to make such decisions, within the military chain-of-command, we want to give autonomous systems. Autonomous systems will, through significant control and programming of the framework for an operation, be able to identify, understand and execute their assigned mission on the level of, and in the future better than, human operators and manned systems. We can adjust the level of autonomy allowed in a system and can retain the authority to apply lethal power at a higher level, or as a minimum be able to remove such a possibility from the system if necessary. In 2012, the United States Department of Defence issued a policy with regard to autonomous systems and their ability to apply lethal power, stating that 'all autonomous systems must be constructed to facilitate commanders and operators to exercise appropriate levels of judgment over the use of force'.8 War is inherently a human activity, and one would expect that few people would want to move away from having humans responsible for a military mission, even if that mission is executed by an autonomous system – just like any other military mission executed by humans and manned platforms. Some claim that without human oversight and control, applying the principles of necessity, proportionality, distinction, minimisation of collateral damage and avoidance of unnecessary suffering is problematic. This is where the concept of meaningful control comes in. With different levels of automation and autonomy come varying levels of control. But when it comes to decisions of life and death, this control must be meaningful in order to give legitimacy to the delegation of decision-making from the human to the machine.
Heather Roff and Richard Moyes underline some key elements of meaningful control in the context of autonomous weapons systems, namely:9
• predictable, reliable and transparent technology;
• accurate information for the user on the outcome sought, the operation and function of the technology, and the context of use;
• timely human action and a potential for timely intervention;
• accountability to a certain standard.
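As a purely illustrative sketch of how such meaningful control might be encoded in software, the fragment below gates a lethal release behind human-set attack criteria (an approved target list, a certainty threshold, a collateral damage limit) and an explicit human approval that can be withheld at any time. All names, values and the structure itself are assumptions made for illustration; no real targeting system is being described.

```python
from dataclasses import dataclass

@dataclass
class EngagementRequest:
    target_id: str            # identifier the system believes it has found
    certainty: float          # system's confidence in identification (0..1)
    collateral_estimate: int  # predicted collateral damage score

# Illustrative mission parameters, set by the chain-of-command, never by the system.
APPROVED_TARGET_LIST = {"T-101", "T-204"}
MIN_CERTAINTY = 0.95
MAX_COLLATERAL = 0

def release_authorised(req: EngagementRequest, human_approval: bool) -> bool:
    """A lethal release proceeds only if every human-set criterion is met
    AND a human on the loop has not withheld approval."""
    return (
        req.target_id in APPROVED_TARGET_LIST
        and req.certainty >= MIN_CERTAINTY
        and req.collateral_estimate <= MAX_COLLATERAL
        and human_approval
    )
```

The design choice the sketch makes explicit is that the machine evaluates criteria but does not author them: the target list, thresholds and the approval flag are all inputs from the human chain-of-command, which is one reading of Roff and Moyes's requirement for timely human action and intervention.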
The main challenge seems to lie in achieving transparency into the technology that is working towards 'the outcome sought', and in combining this with a predictability that seems to work against the very definition, at least in the strictest sense, of autonomy. As Schaub and Kristoffersen point out, the main concern is that 'only if the technology is designed in such a way to permit a typical user to understand its operation can they make informed, conscious, and meaningful decisions about the use of the weapon system'.10 The design of autonomous systems must thus include options for meaningful control by humans if we are to be able to make any meaningful policies for the use of autonomous weapons at all. Furthermore, we must discuss regimes of validation and verification before we allow the use of fully autonomous systems in military operations.11 In short, the concept of validation and verification concerns our threshold for claiming trust in the system, not only as an operator, but as the owner of the system. Can we trust that the autonomous drone will operate in accordance with its assigned tasks and its legal framework? When we arrive at the point where we can call a system fully autonomous, the system will be so complex, and the algorithms so comprehensive, that what we consider sufficient operational testing and evaluation of new systems today will only cover a small fraction of the infinite choices of action that come with autonomous systems. Today's regimes for validation and verification are directly related to the very specific technical and operational requirements that were presented ahead of system production and procurement. Future systems will be so complex that there is simply no way of testing every single functionality and option of the system with regard to full autonomy.
We will be able to build test scenarios that challenge the system to a very high degree, but we will not be able to guarantee that the system makes the same decision every time. Validation and verification is about what we need to have validated and verified of the system without being able to validate and verify everything. The discussion surrounding the necessary regime for validation and verification focusses on where the threshold should lie for us to claim trust in the system without having tested all its possible actions. A new approach to the testing, validation and verification of the autonomous weapons systems of the future is thus required in order to enable them to operate. Developers even claim that it is possible today to implement a fairly high degree of autonomy in systems; however, the lack of a regime for proper testing, validation and verification prevents a high level of autonomy from being fully implemented.12 Here we touch on an interesting intersection between what is possible and what is necessary: it is conceivable that in the near future we may confront an adversary that has lowered his threshold for testing, validation and verification of autonomous systems in order to gain the upper hand in a conflict. We thus risk fighting autonomous systems that apply lethal power based on more vague or ill-defined attack criteria, with less respect for collateral damage than what we demand from our own troops and weapons systems. We have to recognise the complexity of autonomous systems, which in turn will demand a new approach to testing, validation, verification, and the implementation of systems into an operational context. The question, therefore, does not seem to be whether we will get there, but how our legal and operational framework should be construed for operating such systems.

WHY AUTONOMY? NEW OPERATIONAL CONCEPTS
In order to engage in the discussion on the dangers of, and even the suggested (potential) ban on, autonomous weapons, it seems beneficial to understand the background to the military requirements for such systems. The requirement, or desire, for autonomous weapons systems is driven by strategic military and political considerations. There are complex technological and tactical challenges that must be overcome in modern warfare, especially in high-intensity conflicts and international peer-to-peer wars, the possibility of which cannot be discounted. The immense technological development of the past three decades, especially within information processing capability, has led to astonishing systems, be they military or civilian. Famous military systems such as the Predator and Global Hawk drones have been facilitated by capable processors, satellite communications technology and ever-evolving sensor resolution. However, these systems have operated mainly in benign environments, where the threat to the systems themselves has been negligible. The future, however, looks different. Future airborne weapons systems must be capable of operating in environments with a constant threat to themselves from the enemy, and with continuous threats to and disruption of the utilisation of the electromagnetic spectrum. Gone are the days when one could set up an unencrypted satellite link to a drone that would fly in circles over a perceived threat with impunity, mapping every move of the enemy. Contested environments, where the qualities of automated and autonomous systems are increasingly appreciated, are here to stay. Every ten years the US Air Force issues its technological vision for the coming decades with respect to possibilities within technological development, operational requirements, and underutilised technological potential in modern warfare.
In its vision for 2010–2030, autonomy is emphasised as the most prominent factor for the future with regard to operational capabilities, efficient manning of systems, and lowered costs.13 The momentum among military technology developers towards autonomy in systems is intended not to bypass decision-makers, or to opt out of moral decision-making and target prioritisation, but to meet the increasing complexity of modern warfare with all available technological means. The following sections briefly discuss two of the main operational challenges currently experienced by military planners and operators, and a few of the emerging operational concepts in which autonomy will play a fundamentally facilitating role.

Sensor processing
Part of the background for the increased requirement for unmanned systems in modern warfare is the need for timely operational information. Intelligence in support of operations must be able to create an understanding of the adversary's networked systems, be they electronic, urban, resource-based or informational. This type of understanding forms the basis for much of today's warfare. Proper situational awareness requires the collection of so much data through high-resolution sensors that operational units must devote a significant amount of resources to sensor processing and structuring alone. The intelligence officer must build an understanding of human networks and local logistical connections for small groups of terrorists or organised crime gangs, and must create a pattern-of-life for secondary and tertiary persons in order to understand the modus operandi of the primary person. These processes are time and resource demanding, especially in the form of information analysis.
These factors are the foundation of the requirement for autonomous sensor processing. The single field of research that receives the most attention within autonomy is therefore the systems' ability to process sensor data on board, before it is transmitted to other units or to the ground station.14 Modern warfare and wars are so information intensive and, by extension, so personnel and resource demanding, that many nations are struggling to educate and develop sufficient numbers of analysts for sensor processing and analysis.15 Even the most basic processing task – the mere structuring of incoming data in order to see whether it is worth further investigation – demands analysts specifically trained on the respective sensor type. This is where many military organisations see the potential for approaching sensor analysis more efficiently. Sensor processing may sound technical and fascinating only to the IT enthusiast, but it is the potential in autonomous sensor processing that will lay the foundation for the decision-making of the drones that many fear will take place in the future.
Fully automatic – and then autonomous – sensor processing will provide the foundation for targeting and other crucial decisions and prioritisations in military operations. The advantage of autonomous sensor processing is that the machine will be able to process enormous quantities of data to find what is relevant. The machine will analyse this data, decide what to communicate back to higher authorities, and make decisions based on this analysis. Knowing that one of the biggest challenges for communications in modern operations is the availability of satellite bandwidth, it becomes evident that this offers the potential to scale down the size and frequency of transmissions to and from drones on a surveillance and reconnaissance mission. The drone will process its own sensor data on board and decide by itself what to transmit back to its superior unit on the ground. This is entirely different from the current situation, in which drones usually transmit everything picked up by the sensors, raw and unprocessed. In summary, autonomous sensor processing will streamline and improve the decision-making process, especially in situations where time is of the essence. In addition, there is satellite bandwidth to be saved by reducing transmissions to the ground station to the bare essentials. Finally, it is autonomous sensor processing that will form the foundation for drones assessing attack criteria, and ultimately attacking based on their own collected and analysed sensor data.
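The bandwidth argument above can be rendered as a minimal sketch: instead of downlinking raw sensor frames, the platform transmits only the detections its on-board processing scores as mission-relevant. The `Detection` record, the relevance score, and the byte figures below are illustrative assumptions, not measurements from any real system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    frame_id: int     # which sensor frame the object was found in
    label: str        # classifier output for the detected object
    relevance: float  # mission-relevance score assigned on board (0..1)

def downlink_summary(detections: List[Detection], threshold: float = 0.8) -> List[Detection]:
    """Keep only detections judged mission-relevant on board,
    most relevant first, and transmit those instead of raw frames."""
    kept = [d for d in detections if d.relevance >= threshold]
    return sorted(kept, key=lambda d: d.relevance, reverse=True)

# Rough illustration of the saving: one assumed raw frame versus one summary record.
RAW_FRAME_BYTES = 2_000_000  # assumed size of an unprocessed sensor frame
SUMMARY_BYTES = 200          # assumed size of one detection record
```

Under these assumed sizes, each detection downlinked in place of its raw frame cuts the transmission by four orders of magnitude, which is the essence of why on-board processing eases the satellite bandwidth constraint.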

Denied areas
Operational and strategic planners are preparing for a future in which we will operate in areas where an adversary either denies us access to an area (anti-access), or denies the free utilisation of areas through, for example, disturbing the electromagnetic spectrum or placing a minefield to restrict movement (area denial). Denied areas and an adversary's planned and deliberate tactics in this respect are often referred to as measures for anti-access/area denial (A2/AD).16 Existing and future methods for a holistic approach to such denial of areas include jamming, surface-to-air missiles, and denial of the use of the electromagnetic spectrum. The threat to airborne assets flying in operations in, for example, Afghanistan in the early 2000s was more or less negligible; yet most of today's unmanned aircraft are completely dependent on some sort of electromagnetic connection between the flying asset and the controlling ground station, either direct line-of-sight communications or communications via satellite systems. Autonomous drones will be able to operate without inputs from ground control stations and will be able to fly into denied areas unaffected by satellite communication being shut down.17 Autonomous drones will tackle the challenges posed by denied areas through speed, manoeuvrability, endurance, and the ability to fly into areas contaminated by nuclear, chemical and biological elements. Radar is today the primary sensor for establishing an air picture by ground air defence units. Future drones will likely have varying degrees of stealth attributes and will be able to operate in denied areas more or less invisible to radar sensors. All these improved characteristics will facilitate a freedom of manoeuvre far beyond what drones are capable of today. An autonomous system will be able to react to incoming threats in a significantly shorter time than is considered humanly possible.
A well-trained fighter pilot takes around 0.3 seconds to react to a situation or stimulus, and approximately double that time to decide on the appropriate action to take. A robot is able to react and decide on a response within a millionth of a second.18 With laser weapons and other types of weapons installed on the drones of the future, the reaction times and efficiency of self-protection measures will be many times better than those of a manned system. Autonomous drones of the future will be capable of detecting incoming missiles and other threats far faster than any human could and, through manoeuvres that would be damaging to the human body, will be able to evade the threat or neutralise it through self-protection measures that require less time and power to employ. Key in this context is the willingness to accept risk: we should expect to be increasingly willing to send a weapons system into denied areas, both because it is unmanned and because of its improved survivability.

Manned-Unmanned Teaming (MUM-T)
An important early step towards autonomous drone warfare is improved integration through Manned-Unmanned Teaming operations.19 In such operations, manned and unmanned systems fly together to carry out a mission. One example is an offensive, weapons-carrying, manned platform that requires assistance from an electronic warfare-capable unmanned system, where the latter has to work autonomously because of the hostile electronic countermeasures in the area.20 We are likely to see an increase in these types of operations in the years to come. There are plans to develop communications options for the new F-35 fighter aircraft, which many Western countries are procuring, whereby the fighter pilot will control drones in support of his or her own operations, whether for information gathering, weapons delivery or other support measures.21 The concept of manned-unmanned collaboration includes the term loyal wingman, where one or more unmanned systems operate in close coordination and cooperation with a manned platform. Such unmanned systems will fly close to the manned aircraft and support it in gathering sensor data, will be able to carry weapons in addition to or instead of the manned platform, and some will be capable of functioning as an in-flight fuel station, flying in the rear of the operations area and supporting the manned platform with fuel in order to extend the endurance and operational time-window of the mission.22

Multi-Aircraft Control (MAC)
The concept of Multi-Aircraft Control, where more than one drone flies under the control of a single human operator, is gaining increasing attention in operations. A basic example in use today is the transfer of several drones under the control of one person, where the drones are flown from point A (for example the factory airfield) to point B (for example the operational squadron). To streamline the utilisation of frequencies and the reservation of airspace, this can be done in a single operation. A more advanced example might be an operator who controls a mission with several drones in formation, where the mission is to gather intelligence on a target in an area with several objects of interest. The drones will take care of vertical and horizontal separation themselves, and they will cue each other on movements and incidents that might be of interest to the other drones. The operator, meanwhile, is mainly focussed on monitoring the flight path or keeping the group within the assigned operations area. Today's technology enables us to link advanced, automated drones to each other, and advances in autonomy will further enhance the ability of drones to work in groups. This leads us to the concept of swarming, which is discussed below.
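The separation logic described above, where the drones deconflict among themselves rather than being managed individually by the operator, can be illustrated with a minimal sketch. This is a purely hypothetical toy model, not drawn from any real system; all names and values are invented for illustration.

```python
# Hypothetical sketch: drones in a Multi-Aircraft Control mission self-assign
# non-overlapping altitude bands for vertical separation, so a single operator
# can supervise the group without managing separation directly.

def assign_altitude_bands(drone_ids, base_altitude_ft=5000, band_ft=500):
    """Give each drone a unique altitude band (illustrative values only)."""
    return {drone_id: base_altitude_ft + i * band_ft
            for i, drone_id in enumerate(sorted(drone_ids))}

bands = assign_altitude_bands(["drone-1", "drone-2", "drone-3"])
# No two drones share an altitude band, so vertical separation is guaranteed.
assert len(set(bands.values())) == len(bands)
```

The point of the sketch is only that deconfliction is computed by the group itself from a shared rule, which is what frees the operator to focus on the mission rather than on flying each aircraft.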

Mothership
Several contractors are now constructing airborne systems in which drones are controlled by a parent aircraft, conceptually referred to as a mothership. Mothership concepts involve larger, manned or unmanned systems that carry smaller unmanned systems into the operations area. These can be sent out to operate together with each other and/or with the mothership. The smaller drones can, for example, perform 'cueing' against the mothership's main target. In peacetime, a manned aircraft may fly along the territorial border of a hostile country and send a drone across the border, at significant risk, to investigate specific items or objects – a risk that would not be taken with a manned aircraft. A maritime patrol aircraft can carry an anti-submarine drone and have it investigate a submarine contact over time while the mothership flies off to investigate another target of interest. 23 Another example is a seagoing vessel, for example a submarine, that sends out autonomous underwater vehicles to support it; or the submarine may release flying drones underwater, which ascend to the surface, take off, fly to a point of interest to gather information, fly back to a designated rendezvous point, submerge, and re-enter the mothership. 24

Swarming
The concept of swarming is gaining traction in visionary statements about future warfare and drone operations. It concerns a group of partly or fully autonomous drones that cooperate to carry out a mission in a given area. The drones create an internal network among themselves in order to avoid collisions, optimise search angles, cue targets to each other, and process and analyse gathered information together. Drones in swarms will be able to attack targets in such great numbers and through such complex approach patterns that none of today's self-defence systems will be able to handle them: the sheer number of attacking units and their complex modus operandi will simply overwhelm hostile air defence radar and missile systems. Swarms of drones will also be able to function as communications relays, creating flying information networks that accumulate data. Information collected by one drone in the swarm is immediately available to the others, which creates redundancy should one or more drones be neutralised. Drones that cooperate in this manner will also be able to cover a much larger area in a much shorter time than is possible with either manned or unmanned systems today. Several institutions are already using swarms of small drones for research purposes, and such systems will go into mass production as the technology is made available.
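The redundancy mechanism described above, in which information gathered by one drone is immediately replicated across the swarm so that it survives the loss of the collector, can be illustrated with a minimal sketch. The model below is hypothetical and greatly simplified; the class and method names are invented for illustration and do not describe any real system.

```python
# Hypothetical sketch: each drone replicates every detection to all peers,
# so the swarm's shared picture survives the loss of any single drone.

class SwarmDrone:
    def __init__(self, drone_id):
        self.drone_id = drone_id
        self.shared_picture = {}  # target_id -> latest report, held locally

    def detect(self, target_id, report, swarm):
        """Record a detection locally and broadcast it to every peer."""
        self.shared_picture[target_id] = report
        for peer in swarm:
            if peer is not self:
                peer.receive(target_id, report)

    def receive(self, target_id, report):
        self.shared_picture[target_id] = report


# Three drones form a swarm; one detects a target.
swarm = [SwarmDrone(i) for i in range(3)]
swarm[0].detect("radar-site-1", {"pos": (12.5, 47.1)}, swarm)

# Every drone now holds the detection ...
assert all("radar-site-1" in d.shared_picture for d in swarm)

# ... so the picture survives the loss of the original detector.
swarm.pop(0)
assert all("radar-site-1" in d.shared_picture for d in swarm)
```

Because every drone carries a full copy of the shared picture, neutralising any one drone removes a sensor but not the information it contributed, which is the redundancy the swarming concept relies upon.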

Airpower roles in general
The drones of the future will be able to operate in areas that an adversary is fully engaged in defending. Drones will be able to fly at many times the speed of sound, and many will likely be developed with stealth attributes. The general trend in their development points towards future autonomous drones being able to execute all of today's airpower roles. 25 The most common type of military operation that today's drones are engaged in is the ISR mission. This core airpower mission is currently carried out by everything from small, hand-held aircraft to large, multi-million-dollar systems such as the Global Hawk. Although there will likely be a continued market for small and relatively elementary systems, we will also see a significant increase in drone surveillance capabilities. Already there are systems in operation that have stealth attributes and are thus difficult – impossible for some – to detect by radar. Survivability continues to improve through better sensors and self-protection systems, and sensor capacity is increasing at a high rate.
We already see systems being deployed with what is termed persistent presence, such as DARPA's Integrated Sensor Is Structure autonomous airship, which is intended to deliver sensor data 24 hours a day for 10 years. 26 Another system being fielded is the Autonomous Real-Time Ground Ubiquitous Surveillance (ARGUS) system, which is capable of following and tracking individual human beings in an area of 40 square kilometres from a position at 20,000 feet. 27 It was the stealthy RQ-170 that penetrated Pakistani sovereign airspace and provided real-time full-motion video to the White House as the operation to capture or kill Osama bin Laden was executed in April and May 2011. 28 Decision makers throughout the chain-of-command – from the tactical team leader on the ground, to the general command, to the politicians deciding on going to war – have a more or less insatiable demand for information. The development of surveillance drones will only continue.
Probably the most significant factor under discussion with respect to drone warfare is the air-to-ground role of some of today's drones. While the controversial targeted killings are not discussed in this article, the air-to-ground role in armed conflict will not disappear from the portfolio of modern military drones anytime soon. The benefits of being able to attack adversary forces with little to no risk to one's own troops are simply too significant to disregard. An example is the Israeli Harpy system of so-called fire-and-forget drones, which are capable of loitering above an operations area for a long period of time and then reacting to and attacking emerging military targets based on pre-programmed radar signatures. 29 Additionally, Lockheed Martin is developing the SR-72, successor to the Cold War-era high-speed surveillance aircraft SR-71. It will likely be capable of flying at hypersonic speeds, will be stealthy, and will be capable of executing both surveillance and ground attack missions. 30 A third, classic airpower role is the air-to-air mission. While no air-to-air drones have been fielded to date, we should expect to see highly capable drones that can neutralise hostile aircraft very efficiently in the years and decades to come. Some even suggest that the F-35 will be the last manned aircraft the West procures. 31 The technological leap that we will see in these types of missions lies in the capability for situational awareness and sensor fusion that will be inherent in autonomous drones. Sensor fusion technology, in which all sensor inputs to the aircraft are merged into one seamless picture for a superior understanding of the operational environment, is already present in the F-22 and F-35 fighter aircraft. This is just one early step towards the autonomous drones of the future, capable of performing the full range of airpower roles.
The combination of risk aversion and increasingly complex operating environments is driving requirements for drones that can execute risky missions in dangerous places, not least air-to-ground missions. Increasingly complex environments call for increasingly complex platforms to tackle them, giving rise to the military requirement for autonomous drones for the most dangerous and complex mission sets in future operations.

CONCLUSION: DOES AUTONOMY FUNDAMENTALLY CHANGE ANYTHING?
Most of the weapons systems employed today are automatic and automated; none are truly autonomous. They are evolutionary products of a long line of technological development within their respective fields of warfare. Yet, if not revolutionary, the technological development facilitating the operational concepts discussed in this article is immense. The use of remotely controlled unmanned aircraft today is not revolutionary technology in itself, but it does seem to lower somewhat the threshold for applying military power as a political tool. And, as the concepts discussed above show, drone technology will likely also change the way we fight wars in the near future.
This 'robotisation of conventional warfare' may change the way we fight wars to such a degree that, over time, we might consider it revolutionary. Higher levels of automation have led to a trend in which military drones are to a large extent described as autonomous, and in which drones are increasingly able to execute actions without human input. Many factors have contributed, and continue to contribute, to the demand for increasingly autonomous weapons systems. The overall lowered willingness to accept risk to our own troops is one factor; the allure of technology that allows aircraft to be flown for 30 hours or more at a time while being controlled from the other side of the world is another. The military demand for autonomous weapons is very real and very specific, and it is based on actual challenges that the military faces in operations today.
The concepts described in this article are meant to meet increasingly complex problem sets in operational planning and execution. There is not, however, a push to release military commanders from decision making or from being in or on the decision loop for applying lethal force. It is crucial to have a broad debate among policy makers, technology developers, scholars and civil society in order to decide how the weapons of the future will be programmed and the place and scope that human control should have therein. Technology, once invented, remains there to be used until it becomes obsolete. We can try to regulate the use of sophisticated military technology, but efforts to develop new weapons that provide clear advantages will most likely continue unabated. Autonomous drones will be an integrated part of military operations in the near future, and by then we must have calibrated our ethical compass and developed a legal and political framework for the use of autonomous drones in war. War is not the appropriate place for silent afterthought.