Predictions about the military future of machine learning
The second half of 2022 was dominated by breakthroughs in machine learning (ML) that, among other things, enabled systems to produce both visual and written art. In light of this, questions abound concerning the potential military applications of the technology. With the United States Department of Defense (DoD) set to spend $874 million on artificial intelligence (AI) research between last fall and this coming October, there is no shortage of interest from the defence industry and, commentators say, no shortage of distinct paths the technology's development could take.
To speculate about where military-funded ML systems may go from here, it is important to understand how DoD-sponsored research in this field works. While in-house and university-based research remain prominent, much of the DoD's recent investment in ML and AI research has been directed towards private companies that occupy a niche between Silicon Valley and traditional defence contractors. Some of these companies are valued at over $1 billion, and some of the youngest are less than half a decade old. The group includes Anduril, Shield AI, and Rebellion Defense, the last of which former Google CEO Eric Schmidt was previously involved with.
The dedicated $874 million, which comes primarily from the $2.3 billion the Pentagon spends on science and technology research, was until recently managed by the DoD's Joint Artificial Intelligence Center (JAIC), founded in 2018 and folded into the Chief Digital and Artificial Intelligence Office (CDAO) in early 2022. This funding typically flows into research projects aimed at building a general understanding of broad scientific questions that might later lead to practical developments.
Currently, the DoD appears to be investing in two overarching goals and capabilities through the 'SHARPE' cohort, an initialism for six major start-ups: Shield AI, HawkEye360, Anduril, Rebellion Defense, Palantir, and Epirus. The first goal is the development of ML-based systems that improve operational knowledge and aid strategic tasks carried out by humans, although some have suggested that a fully autonomous strategic AI might be possible in the near future. The second, which has also garnered DoD attention, is autonomous weaponised vehicles, primarily airborne drones.
Rebellion's flagship product, Nova, and its older counterpart, Iris, both fall into the former camp, with Nova offering "adversary emulation" that would allow for more robust planning of military operations. Iris appears to be an AI-enabled system for big-data analysis in a military context, serving much the same strategic end as its counterpart. HawkEye360, founded in 2015, is similar in that it provides a satellite surveillance service that may, going forward, be aided by ML systems.
Anduril's Lattice OS also falls into the first camp: a technology designed to detect and highlight objects of interest within a given area. While Anduril is more 'boots on the ground' than counterparts such as Rebellion or HawkEye360, it is still, at its core, a military technology company that hopes to use ML in the collection and analysis of data. Palantir, for its part, focuses on centralised software platforms for data integration, according to its website.
Shield AI falls under the second DoD goal, airborne technology: it is invested in the development of autonomous militarised drones, employing an AI pilot ominously titled Hivemind. The prospect of AI-powered, automated militarised robots has lately come under far more scrutiny than data-management systems designed for strategic use. Part of the anxiety surrounding such technology is its potential application in a domestic setting, a fear legitimised by the New York Police Department's past lease of a dog-like robot named Spot from Boston Dynamics, a robotics company, to conduct searches and aid in hostage situations.
While public backlash has rarely deterred the military-industrial complex, it is likely to compound a pre-existing aversion to autonomous militarised robots, an aversion rooted in, among other things, the potential for misuse by enemy forces, the risk that development in this area yields little to show for the resources spent, and a general lack of incentive. With cheap, unmanned technologies like drones already commonplace and ready to replace human operators, there is likely little impetus for the military to invest in products like those offered by Shield AI.
ML-enabled strategic management systems capable of interpreting big data are already receiving significant attention from the defence community, as they appear to be a key technology in ongoing global arms races. The United States Air Force already appears to be working towards autonomous ML programs for solving complex problems. It is noteworthy that most 'SHARPE' companies are dedicated to development in this area, not autonomous weaponised vehicles.
Regardless of its possible employment in future conflicts, investment in machine learning from the defence sector is likely to open the floodgates for developments in the consumer market.