Read Time: 1 Hr 20 Mins

Originally Published: 2018

The Future of Intelligent Decision Making within Armaments & Warfare

Good or bad, right or wrong, sentience and dominance have always been at the forefront of human evolution. Whenever there are two or more species living in close proximity there is always a battle over the space and resources required for continued survival and proliferation. Usually the smartest and most adaptable of the competing species will either dominate or obliterate the others.

Darwin made it clear with his theory based around ‘survival of the fittest’, effectively stating that as life moves forward, regardless of species, only those able to adapt as the world changes around them are likely to pass their genes on to the next generation.

As highly evolved as we like to think we are as a race or species, humans are still part of the same ‘adapt to survive’ scenario as the rest of the creatures here on earth. The only real difference between us and the animals over which we have apparent dominance is that we know the contest exists but choose to believe we have moved beyond it, whilst other animals simply live by their instincts.

This disconnect from the reality of evolution is one of the reasons why we as a race continuously sleepwalk into potential ‘evolution-ending’ situations, and we do it with a smile on our faces. You only need to look at the nuclear arms race for a clear indication of how our intelligence and urge to dominate each other can bring about potentially world-ending situations and a full stop to the Darwinism that drives our collective development. Thinking about it logically, why on earth would any species in its right mind pursue a course of mutually assured destruction as a means to protect what we claim as our own land or property?

There will always be two sides to every coin and a yin for every yang; neither side will be absolutely correct 100% of the time and likewise, neither will be 100% incorrect, either. The entire atmosphere of earth wasn’t set aflame by the first detonation of a nuclear bomb, but we did create a source of energy by harnessing atomic fission. The earth wasn’t swallowed by a black hole when scientists switched on the LHC beneath the Franco-Swiss border, but we did make great discoveries about the universe around us as a result. The earth hasn’t become entirely uninhabitable as a result of global warming, not yet at least, but we have collectively signed up to various accords designed to prevent it, and are right now looking to alternative energy sources to reduce the impact of our fossil fuel driven society.

Over the years it hasn’t just been the immediate or direct effects of possible events or current activities that have attracted negative speculation; the long term and indirect effects of ongoing technological or political movements have drawn it too. The usual result comes in the form of predictions of dystopian futures.

While fighting against the abolition of slavery, the pro-slavery lobby predicted that the whole British economy would fail as a long term result of ending the trade with Africa. Those who fought against votes for women (the anti-suffragists) argued that it would threaten the institution of the family (amongst other things) and lead to the general downfall of society. The Luddites of Nottinghamshire and Yorkshire (UK) destroyed labour-saving machinery for fear that it would diminish employment opportunities and damage their long term standard of living.

Throughout history people have fought against change, whether their resistance was misguided or entirely justified. The simple truth is that whenever change is coming, and in whatever form, there will always be someone fighting it or forewarning of heinous results if it is implemented. Though the days of all-out rebellion and blood on the streets have mostly been relegated to the annals of history, the concept of cataclysmic results from reform and change has not.

In today’s technological and digital world, you are never more than a mouse-click away from some dystopian conspiracy theory or another. With the pace at which our day to day technologies are developing, and the way ever more powerful technologies are being integrated into our daily lives, it is hardly surprising that a growing number of people look bleakly at the future and let their imaginations run riot. It doesn’t help when the media stoke the embers of worry with portrayals of worst case scenarios, in which myopic and overbearing governments crush dissenting peoples, apparently for their own protection.

The films ‘The Hunger Games’ and ‘Divergent’ are a couple of examples where governments suppress the people in a dystopian-style future, their ambition being to keep the masses down whilst protecting their own privileged existence at the other end of the spectrum. In these imagined dystopias, technology is used to suppress even if it is rarely orchestrating the suppression; the main driver is the government’s own desire to rule, and technology is simply the tool used to carry out its plans. The main message from this type of far-fetched tale is that we should never be too trusting of governments, ideologies or rulers that seek to control their people too closely.

Bashing governmental style and oppression has always been a major force in fiction; it seems people just love being reminded that they are only one dictator away from life in a really dark place. Over time, fictional representations of dictatorships and dystopian societies have had more fuel added to their fire by the inclusion of real life ‘baddies’: communism and various religious movements around the globe have each inspired their own brand of potential dystopia. As a result, those able to extrapolate from current events and foresee potential catastrophe for society have imagined and shared their worst thoughts.

It isn’t simply governments being taken apart in fiction; technologies falling into the wrong hands could also be the harbinger of a dystopian future. You only need to watch a couple of films like ‘Spectre’ (a Bond movie) or ‘Fast and Furious 7’ to be reminded of the types of surveillance technologies which, if controlled by the wrong people, could spell the end of personal freedom and privacy for those being monitored. In Spectre, James Bond was trying to stop the switching-on of an extremely powerful surveillance system which would spell the end of privacy as we know it, whilst at the same time preventing the villains from closing down the 00 programme forever. In Fast and Furious 7, the car-mad team were trying to steal a hacking program called God’s Eye, capable of hacking into any digital video or audio device and tracking people down across the globe, before terrorists intent on causing harm could use it (in my opinion, we lost our privacy, both digital and physical, a few years ago). As ever though, there is always someone on hand to stop the bad guys winning in the end.

It is all well and good when the focus of a dystopian outlook is based on general human frailties such as greed or hunger for power; it is then relatively simple, at least in Hollywood, to overthrow the bad guys and restore a semblance of normality or freedom to the masses. But what of those imagined dystopian futures where the fate of humanity is removed from its own hands? What of a future where technology has taken over and is attempting to destroy or subjugate humanity itself? Films such as ‘I, Robot’, ‘Terminator’ and ‘The Matrix’ paint a truly horrific image of Artificial Intelligence gone crazy.

The AI-controlled humanoid robots in ‘I, Robot’ were initially sent out as ever-ready and untiring assistants to make people’s lives better. Over time, however, the sentient AI controlling them came to believe that humans needed far more assistance than they were getting, and a newly programmed range of robots was dispatched to enact martial law, purely and simply because the AI, by its own logic, saw that humans were effectively their own worst enemy and needed saving from themselves.

The AI in ‘Terminator’ simply wanted to take over the entire planet, and so waged all-out war against a beleaguered human race, leaving itself free to continue growing and developing. Finally, in ‘The Matrix’, the AI had gone beyond both ‘I, Robot’ and ‘Terminator’: it had already subjugated almost the entire human race and was in the process of using people as living batteries to power itself, keeping them alive in liquid-filled pods and feeding their minds with computer generated simulations of real life.

It isn’t all doom and gloom when it comes to visions of how technology will shape future societies. The TV series ‘Person of Interest’ portrays an AI system capable of predicting future crimes and sending an agent to the location to try to prevent them from taking place. Likewise, in the film ‘Minority Report’, three psychics are suspended in a flotation tank while technology is used to capture their visions of future crimes, allowing a pre-crime team to be deployed to prevent those crimes from being committed.

Apart from technologies being used to oppress, or taking over and attempting to suppress humans, another element that is rife within post-apocalyptic, dystopian-style fiction is transhumanism: in layman’s terms, the adoption of technology into, or genetic manipulation of, the human body in order to improve skills and abilities beyond anything that even tens of thousands of years of evolution could achieve.

The film ‘Gattaca’, from 1997, portrays a future where genetic manipulation has created a two-tier society, with genetically engineered humans at the top and naturally born humans at the bottom. Star Trek depicts a future where a race of human/machine hybrids, the Borg, is constantly attempting to assimilate and merge with other races.

Robocop depicts a future where robots are used heavily by the military, but in order for them to be accepted as part of civilian society (for policing etc.) they need to be humanised, as people are wary of unfeeling machines making life or death decisions over citizens. The result is the creation of a human/machine hybrid that still uses its remaining humanity to gauge right and wrong in any given situation (until its masters decide to change the game plan). In the film ‘Iron Man’, transhumanism is part and parcel of Tony Stark becoming Iron Man, when the electromagnet and arc reactor are inserted into his chest to prevent shrapnel from destroying his heart. He goes on to use the arc reactor as the power source for his exoskeleton suit and becomes the superhero we know and love today. The Matrix has an element of transhumanism too, in that the human batteries are kept placated via a digital interface installed in their bodies, through which they are plugged into the simulated world of the Matrix.

Additionally, there was an element of transhumanism in ‘I, Robot’, where detective Del Spooner, the character played by Will Smith, is revealed to have a robotic prosthetic arm. One of the first transhumanist fictions I was ever exposed to was the TV series ‘The Six Million Dollar Man’, in which Colonel Steve Austin, played by Lee Majors, had bionic implants that gave him super strength, speed and sight.

All of these varying visions in fiction, both good and bad, can not only distract you from the reality of what is happening in the world of technology today, but also blind you to the rapidly approaching future. To a certain degree, pretty much every technology featured in the movies and on TV over the years, from AI to transhumanism, is coming ever closer to reality in laboratories and research centres around the globe.

Let’s take a closer look at what I mean.

Aside from the obvious growth of AI and genetic research in the commercial sector, whether to automate machinery and processes, speed up data analysis, or, in the case of biotech companies, study the genome in order to tailor targeted pharmaceuticals, there are effectively two global industries driving technological advancement in transhumanism more rapidly than any other: the defence and medical sectors. Between them, they are making advances in biotechnology, cybernetics, prosthetics and AI at an unprecedented rate.

Transhumanism in medicine is becoming increasingly mainstream with the continued growth of genetic research and the ability to screen foetuses for genetic or chromosomal disorders. Once these technologies mature, they will open the doors to precisely the type of society imagined in the film Gattaca: not necessarily supercharged humans, but something closer to a near perfect specimen of humanity, with parents possibly able to choose hair and eye colour, height, and even IQ. I am sure that if it is even remotely possible, there will be geneticists somewhere looking at ways to ‘improve’ the human genome in its entirety and create their own version of human perfection.

Another area where transhumanism is steadily gaining ground, and which is responsible for much of the forward movement, is medical rehabilitation (though the doctors won’t call it transhumanism). Merging technology with biology is seen as the way forward in aiding the disabled, injured or impaired, and mainstream medical practitioners are adapting and using technology on or within the human body to restore or improve various functions.

Prosthetics in general are heading towards the type of technology imagined in ‘I, Robot’ or ‘Robocop’. Prosthetics now exist that can be operated by signals sent from the brain through the nerves into the limb itself, allowing wearers to control and generate movement just as they would with a natural arm, hand, foot or leg. Researchers at the University of Illinois unveiled a control algorithm last year that they claim delivers significantly more reliable electrical feedback to nerve endings than current prosthetic technologies, essentially restoring the sense of touch. Combining these two advances will enable future amputees to feel whole again: no longer a lump of senseless technology attached to the body, but a life-like technological replacement that ‘feels’ like part of it.
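
As a concrete illustration of the control side of such a limb, here is a minimal, hypothetical sketch of a myoelectric control loop: window the muscle/nerve signal, extract a few classic time-domain features, decode an intent, and emit an actuator command. The window size, thresholds and decoder are invented for illustration; this is not the Illinois algorithm.

```python
import numpy as np

# Illustrative sketch of a myoelectric prosthetic control loop. The window
# size, features, thresholds and commands are invented; real decoders are
# trained on data and are far more sophisticated.

WINDOW = 200  # samples per decision window (e.g. 100 ms at 2 kHz)

def extract_features(emg_window):
    """Classic time-domain features used in myoelectric control."""
    mav = np.mean(np.abs(emg_window))                # mean absolute value
    rms = np.sqrt(np.mean(emg_window ** 2))          # root mean square
    zc = np.sum(np.diff(np.sign(emg_window)) != 0)   # zero crossings
    return mav, rms, zc

def decode_intent(mav, rms, zc):
    """Toy threshold decoder standing in for a trained classifier."""
    if mav < 0.05:
        return "rest"
    return "close_hand" if zc < 20 else "open_hand"

def control_loop(emg_stream):
    """Consume EMG windows, emit one actuator command per window."""
    for window in emg_stream:
        intent = decode_intent(*extract_features(window))
        yield {"command": intent,
               "grip_force": 0.4 if intent == "close_hand" else 0.0}

# Feed the loop fake data: one quiet 'rest' window, one active window.
fake_stream = [np.random.randn(WINDOW) * 0.01, np.random.randn(WINDOW) * 0.5]
for command in control_loop(fake_stream):
    print(command)
```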

Hearing aid technology is another such area. Hearing aids are becoming smaller and more powerful by the year, and where simple amplification is insufficient, cochlear implants skip the ear canal entirely, passing sounds directly through to the cochlea to restore a sense of hearing. Tweaking that technology could allow supernatural hearing if the processing and receiving hardware were upgraded accordingly. Taking the idea further, and using a similar interface to that described for controllable limbs, could it be possible to ‘think’ about hearing over a greater distance and have the implants focus in on certain sounds?

Using technology as a cure or an aid for people with hearing disabilities is a great step forward, and with the way technology is moving, deafness could become a thing of the past within a relatively short period. Blindness is another area in which the technology sector is working on a cure, and so-called ‘bionic eyes’ are now available for certain types of blindness. There are currently two main approaches. The first is already commercially available and works by replacing the eyeball with cameras and sensors around the eye, feeding visual information directly to the retinal cells and on into the optic nerve; it is limited in the number of people it can help because it requires a fully functioning optic nerve. Far more people suffer from blindness caused by problems with the optic nerve itself, so the next stage of research is looking at ways to bypass the eye and optic nerve altogether and install some kind of transmitter directly onto the visual cortex of the brain.

Admittedly, testing of this technology is very fresh; human trials are only due to start on five people before the end of this year. Infancy of testing aside, we can all see the potential of this technology not only to improve the lives of millions of people, but also to be adapted for more militaristic purposes. In the TV series ‘The Six Million Dollar Man’, Colonel Steve Austin used his bionic vision to zoom in and view things over a great distance. This type of ‘super-vision’ would change the face of espionage and reconnaissance dramatically.

Prosthetics in general are all going down the same path: connecting directly to the brain and either sending or receiving signals. As far as linking minds with computers goes, however, there is no greater push to achieve such a thing than in research into curing ‘locked-in syndrome’, a condition in which a patient is fully cognisant but completely unable to move (occasionally the eyes can still move) or communicate. Researchers from the Toyohashi University of Technology in Japan are working on a device that reads brain waves (electrical signals in the brain) and then analyses them with artificial intelligence (RNN-based machine learning) in order to translate the data into speech. If this technology were to reach a level where it was truly possible to read a person’s mind by mapping and analysing brainwaves with AI, then perhaps the reverse could be attempted, actually making people think certain things, in a similar way to that portrayed in the Matrix trilogy. Or perhaps it could lead to the ultimate dream of ultra-transhumanists: living forever, with minds uploaded into supercomputers.
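
For a feel of what ‘RNN-based machine learning’ means in this context, here is a toy sketch of the general shape of such a decoder: sequences of brain-signal feature vectors go in, per-step phoneme predictions come out. The channel counts, rates and class counts are invented; this is not the Toyohashi team’s model.

```python
import torch
import torch.nn as nn

# Toy sketch of an RNN-based neural decoder: sequences of brain-signal
# feature vectors in, per-time-step phoneme logits out. All dimensions
# are invented for illustration.

class SpeechDecoder(nn.Module):
    def __init__(self, n_channels=64, hidden=128, n_phonemes=40):
        super().__init__()
        self.rnn = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_phonemes)

    def forward(self, x):          # x: (batch, time, n_channels)
        out, _ = self.rnn(x)       # hidden state at every time step
        return self.head(out)      # (batch, time, n_phonemes) logits

model = SpeechDecoder()
fake_eeg = torch.randn(1, 250, 64)   # one second of made-up 250 Hz data
phoneme_logits = model(fake_eeg)     # a phoneme guess per time step
print(phoneme_logits.shape)          # torch.Size([1, 250, 40])
```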

Going even further down the medicine and transhumanism rabbit hole, you could be forgiven for looking at the modern-day ICUs (Intensive Care Units) in some of the more advanced hospitals and claiming that most patients become a type of cyborg while in there, simply due to the variety and number of machines keeping them alive. There are machines cleaning blood, re-oxygenating it and pumping it through the body, machines breathing for patients, machines stimulating muscles, machines and catheters controlling bowel and bladder movements, and more. Modern medicine, it seems, has a machine to support or provide most of the essential functions a human requires to stay alive. As with pacemakers for heart conditions, how long will it take for miniaturisation to reach a level where the machines you find clicking, beeping and whirring in hospital wards are so diminutive that they can be inserted into the body permanently?

As another example of electronics being merged with the human body, it was revealed in September 2018 by Dr Claudia Angeli, of the Kentucky Spinal Cord Injury Research Center at the University of Louisville (USA), that three patients had begun to learn to walk again after being fitted with electronic implants that stimulated their spinal cords. Previously paralysed, these patients are among the first to benefit from the drive to bring electronics to bear on major spinal injuries. The technique, called ‘epidural stimulation’, works by stimulating the spinal cord via 16 electrodes fitted around the damaged section.

These technologies, in their current state, are and will remain a godsend for those afflicted with some terrible conditions. Combining all of the currently available prosthetic technologies with where they could plausibly be in a few short years, it is highly probable that humans will soon be in the fortunate position of being able to replace most parts of their anatomy, and boost their senses, with some variety of technology.

Don’t get me wrong, I am not saying that there will be armies of Robocops wandering around or that we will all look like something out of Star Trek, but there is a revolution in biomechanical, transhuman engineering just around the corner that is going to change the way we all live (and possibly, not die!).

Above, I set out how I see the medical sector driving the growth of transhumanism; now I want to look at the other area that has been ripe for dystopian fictionalisation: AI. Looked at simply as a way to speed up everyday data management and analysis, there is pretty much no sector of industry or commerce that will remain untouched by this technology.

The medical fraternity is, again, one of the leading drivers of AI development. More and more frequently, researchers across the globe are using AI, specifically machine learning techniques, to perform clinical diagnoses and suggest treatments. AI can detect meaningful relationships in data sets and has been widely used in clinical settings to diagnose, treat and predict outcomes. Ever striving for more accurate results and faster diagnoses, millions of pounds are being poured into the ongoing development of AI in areas such as disease identification and diagnosis, personalised treatment and behavioural modification, radiology and radiotherapy, and epidemic outbreak prediction. Companies such as IBM (Watson), Google (DeepMind) and Microsoft (Project Hanover) are pushing the envelope to become the very best in their chosen areas of AI-connected medical research. Whether it is mapping the human genome, trying to obliterate cancer, or properly keeping track of and cross-referencing medical records, AI is the sharpest edge of the sword that modern medicine is using in its attempt to slice suffering and death away from humanity. Given the egalitarian nature of this drive, very few people actually object to what the medical sector is doing.

Before moving on to another sector, I want to talk briefly about a convergence of AI and transhumanism in medicine that isn’t related to patients’ direct treatment or therapy: AI-assisted Augmented Reality (AR) and Mixed Reality (MR) for the medical staff themselves. When you look at how hospitals operate, you will notice a heavy reliance on patient records being available for study by nurses, doctors and surgeons alike. In a lot of hospitals these are still handwritten notes on a clipboard attached to the end of a patient’s bed, or are available electronically via some kind of tablet (Electronic Patient Records, or EPR). As the professionals approach any given patient, they need to access the files to understand the current situation.

AR smart glasses connected to an AI-driven EPR system would be able to deliver a patient’s records, triggered by a facial recognition algorithm, instantly projecting all relevant information onto the lenses of the glasses. Factor in a natural language processing (NLP) module too, and you have a system that can be updated verbally by doctors and nurses as they carry out the standard checks required for each patient. From a surgeon’s standpoint, we would be looking at a combination of AR and MR for use during surgery. As the surgeon moves into position to perform an operation, relevant data and images for the particular patient are fed to the glasses: perhaps an ultrasound image of the patient’s lungs overlaid directly onto the chest, highlighting the area requiring attention, along with guide lines for incisions. With additional systems feeding vital signs into a small corner of the lenses, this would allow smoother and more rapid operations, and hopefully quicker recovery times for patients, as the period under anaesthetic would be shortened dramatically. Again, with the addition of an NLP module, the surgeon could not only update notes as he goes along, but also switch between relevant imagery depending on the complexity of the surgery being carried out.
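
A hypothetical sketch of the glue logic behind that workflow might look like the following: a face embedding from the glasses triggers a record lookup, and dictated speech is appended to the notes. Every function, record and threshold here is an invented stand-in for a real recognition, records or speech-to-text service.

```python
import math
from dataclasses import dataclass, field

# Hypothetical glue code for the AR/EPR workflow described above.

@dataclass
class PatientRecord:
    patient_id: str
    name: str
    vitals: dict
    notes: list = field(default_factory=list)

RECORDS = {"p-001": PatientRecord("p-001", "J. Doe", {"hr": 72, "spo2": 98})}
FACE_INDEX = {"p-001": [0.12, 0.87, 0.33]}   # enrolled face embeddings

def cosine(a, b):
    """Similarity between two face embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def on_gaze(face_embedding, threshold=0.9):
    """Called when the glasses detect a face; returns the record to project."""
    pid, score = max(
        ((pid, cosine(face_embedding, emb)) for pid, emb in FACE_INDEX.items()),
        key=lambda t: t[1],
    )
    return RECORDS[pid] if score >= threshold else None

def on_dictation(record, transcript):
    """Called with speech-to-text output; appends the spoken note."""
    record.notes.append(transcript)

record = on_gaze([0.12, 0.87, 0.33])   # exact match in this toy index
on_dictation(record, "Obs stable, continue 4-hourly checks.")
print(record.name, record.vitals, record.notes)
```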

Though there is a constant push within the medical sector to further knowledge in both transhumanism and AI, it is far from the biggest influencer in either field; the military are beginning to throw more money than anyone else at both.

AI and transhumanism in warfare have been a major focus for many years, the ultimate ambition being to move human beings as far away as possible from the battlefield, and in doing so protect soldiers from harm. While aiming to design and realise a full-blown AI system that can plan and orchestrate every element of a war, the powers that be are exploring every conceivable possibility along the way, and it is in this in-between territory that transhumanism is coming to the fore.

Regardless of the weapons a regular soldier can be provided with, they will always be limited in the field by their speed, strength, intellect and agility, and it is these areas that the military have been looking at ‘enhancing’ for decades. One agency renowned for continually pushing the boundaries of human ability is America’s Defense Advanced Research Projects Agency (DARPA). DARPA’s Defense Sciences Office has been experimenting in various fields, trying to expand soldiers’ abilities in diverse ways, the ultimate aim being to make soldiers virtually unkillable and able to withstand a host of deadly situations: chemical, biological and radioactive weapons, infections and diseases, extreme altitudes and temperatures, and harsh natural environments. They have looked towards nature’s own well provisioned beasts for inspiration and genetic material; animals such as sea lions, geese, cockroaches and apes have all been under the microscope for their innate capabilities and resilience to harsh environs. One experiment involved feeding soldiers an alternative form of vitamin A, made from the livers of walleyed pike, in order to improve night vision. They were achieving a certain level of success with it too, until someone invented the night-vision scopes we see today, at which point the experiments were halted…as far as we know.

Beyond the possible genetic manipulation that could be used to ‘improve’ soldiers, scientists have also looked at alternative methods of external stimulation to lessen the impact of sleeplessness and so extend the periods over which soldiers can remain active and alert. The methods employed have ranged from administering amphetamines to applying electromagnetic fields to the cranium.

The activities surrounding physical and genetic experiments to ‘improve’ soldiers have been ongoing for many years; however, another area that has seen increased activity over that time is research into augmenting soldiers with technology that extends and increases their abilities. It may seem an odd assertion to make, but in real terms the miniaturisation of radio technology brought about the first non-physical, non-genetic approach to the transhuman adoption of technology. As soon as radios became small enough to be fitted into every soldier’s kit, without requiring a huge battery pack, the physical act of communication across greater distances became much easier. Communication between soldiers on patrol is, of course, absolutely essential for coordinating and sharing real-time information about their own situation and positions, as well as those of the enemy.

As soon as instant communication across greater distances became possible, without the need for runners or messengers, defensive and offensive campaigns could be coordinated far more easily. Increasing the distance between soldiers allowed wider areas to be surveilled and protected, and sped up the flow of information between them, freeing more time to prepare for whatever action was required.

With communications covered, what about the more physical elements of soldiering? Lifting, carrying and running have always formed a huge part of any soldier’s life; anyone who has served and been on exercise with all their kit on their back will tell you that.

Exoskeleton suits and similar systems have long been an ambition of the military, with research and prototypes dating back to the 1960s, when General Electric and the United States Armed Forces worked together to create the Hardiman suit. Working via a combination of electricity and hydraulics, the Hardiman suit was designed to multiply the wearer’s lifting ability by about 25 times, enabling them to lift a 250 kg object as easily as a 10 kg one. Ultimately, Hardiman was shelved as unwieldy, heavy and slow to react, and the major issues surrounding any exoskeleton suit in today’s army remain much the same as in the sixties: weight, power and speed of action. As power sources shrink with the ongoing development of smaller and more powerful batteries, these challenges could well be overcome in the not too distant future. In real terms, the medical sector is far more likely than the military to field full exoskeleton suits, for use in civilian rehabilitation, because the peak-performance requirements are lower: it is easier to build a machine that restores everyday levels of movement to someone with reduced physical capability than it is to massively increase the capabilities of a fit, strong soldier in the heat of battle. The sheer flexibility required of soldiers in battle is almost impossible to recreate with the type of wearable machinery currently in development. The ultimate aim is a system that can be worn continuously without impeding the wearer’s usual activities, whilst always being ready to kick in and provide additional speed or power when required.

When it comes to physical and technical equipment to aid and improve the average soldier, nature again becomes the laboratory technician’s unwitting muse. The humble kangaroo is serving as inspiration for a special type of footwear: military researchers are looking to develop a boot modelled on kangaroo physiology that would allow soldiers to run faster and jump more than double their own height with ease. Geckos, too, have been under the microscope for their almost supernatural ability to climb most surfaces and even run across ceilings. The result of this research is gloves and shoes with grip so powerful that a 200 lb man could scale the outside of a tower block, or a cliff face, without any regular climbing kit.

Creating usable kit that can get soldiers moving faster, jumping higher, enduring longer and carrying more equipment, while still allowing for the ever-present possibility of hand to hand combat, requires a massive leap forward in mechanical engineering that is still some way off.

One of the simplest obstacles to the development of these exoskeletons is human diversity in body shape and build. Imagine the expense of individually fitting each and every soldier with a bespoke exoskeleton. The differences between soldiers’ arm lengths alone would cause massive issues, and would be enough to give the boys in finance aneurysms, never mind muscle girth, chest dimensions and the rest.

Perhaps the way forward is to concentrate on developing flexible ‘smart fabrics’ rather than machinery: exosuits rather than exoskeletons. Imagine a suit combining electroactive polymer fabrics, super-thin ballistic materials (researched at MIT and Rice University in the USA), piezoelectric fabric (courtesy of Chalmers University of Technology, in collaboration with the Swedish School of Textiles) and some form of solar cell fabric (various), all coordinated by an AI system that learns as the individual wears it. Worn during regular activities and exercise, the AI would adjust the suit as it learns, ultimately instructing the electroactive polymers how to mould to the wearer’s physique and flex in a way that boosts movement under certain conditions, by applying a piezoelectric-generated charge to the relevant areas. The solar fabric would keep the suit charged even during periods of inactivity, the batteries would store excess charge, and the ballistic fabric would lessen the chances of soldiers being injured by shrapnel or bullets. Linking the suit to the variety of health-monitoring smart applications readily available today would add the ability to monitor soldiers’ health and wellbeing as they fought. From a purely speculative point of view, I like the idea of the AI embedded in the suit having a return-to-base mode, which would see an injured or unconscious soldier transported to safety for medical treatment.
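
Since this is my own speculation, here is an equally speculative sketch of how the suit’s decision layer might work: sensor readings in, per-panel stiffening commands out, with the return-to-base failsafe included. Every sensor name and threshold is invented.

```python
# Purely speculative sketch of the exosuit's decision layer described
# above. All sensor names and thresholds are invented for illustration.

def assist_policy(sample):
    """Map one sensor sample to actuation commands for each fabric panel."""
    commands = {"mode": "assist"}
    for panel, strain in sample["strain"].items():
        # stiffen panels under high load via the piezo charge; relax the rest
        commands[panel] = round(min(1.0, strain), 2) if strain > 0.6 else 0.0
    if sample["heart_rate"] == 0:            # wearer unresponsive
        commands["mode"] = "return_to_base"  # the speculative failsafe
    return commands

sample = {"strain": {"left_knee": 0.82, "right_knee": 0.31}, "heart_rate": 74}
print(assist_policy(sample))
# {'mode': 'assist', 'left_knee': 0.82, 'right_knee': 0.0}
```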

Whilst looking at technology to be incorporated into kit and worn by soldiers in battle, I would be remiss not to mention the latest consumer-ready technology being rolled out into military use, known collectively as ‘wearables’. The wearables label covers a range of products worn to monitor or provide data, making life easier for the average user; adapted for military use, they can give soldiers a new level of freedom from mundane but necessary tasks.

The range of technology and applications is extremely diverse, but I will try to cover a good selection here. Product-wise, the use of smart glasses and watches, and of fitness-monitoring equipment such as Fitbits, is on the increase. In times of war there can never be too much intelligence or data to work from, so pieces of kit like these will prove invaluable both to the men on the ground and to the overseeing coordinators. Watches and Fitbits are probably the easiest pieces of kit to issue, requiring little more fitting than an adjustable strap, but they can be very useful. Most watches can be used as a compass for direction finding, and they can be programmed with instructions, or receive them when needed, if connected to a mobile phone or some sort of tablet or handheld device. Instructional messages and images of suspects (for counterinsurgency operations) can be sent as mobile data, keeping soldiers and intelligence services up to date without their having to reach into a pocket for a hand-held device; in a live situation or on patrol, it remains essential to have full access to one’s weapons. With military-grade wearables, geo-location data fed through to a live map on the wrist would let you keep a close eye on where your colleagues are. Another element built into a lot of wrist-based systems is health monitoring (heart rate, blood sugar, etc.), which could feed back to base when soldiers are in need of relief or medical attention.
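
To make that concrete, here is an illustrative sketch of the kind of telemetry packet such a wearable might stream to the squad’s live map; all field names and thresholds are invented.

```python
import json
import time

# Illustrative shape of the telemetry a military wearable might stream
# to a shared live map; every field name and threshold is invented.

def telemetry_packet(soldier_id, lat, lon, heart_rate):
    return json.dumps({
        "id": soldier_id,
        "ts": time.time(),
        "pos": {"lat": lat, "lon": lon},               # feeds the live map
        "health": {"hr": heart_rate},
        "alert": heart_rate > 180 or heart_rate < 40,  # flag for the medics
    })

print(telemetry_packet("alpha-3", 51.5007, -0.1246, 92))
```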

Having said that, the US military recently imposed a ban on Fitbits and the like, as they were regularly updating publicly viewable websites with the locations of secret training bases while soldiers were being put through their paces. So perhaps a private, smart network for exclusive military use would allow them to optimise the benefits that come with wearable tech?

Glasses with built-in cameras and lens-projection technology give commanders remote, real-time awareness of a situation, and the ability to feed data directly to the soldiers wearing them. Maps or directions could be displayed directly in front of the wearer’s eyes, along with enemy troop numbers or fresh orders based on previously gathered and analysed imagery. It may also be possible to transmit step-by-step instructions, with line drawings or images, for tasks an individual soldier may not already know how to perform: a medical procedure to save a teammate, perhaps, or even disabling an IED (improvised explosive device) when there is no time to call in the bomb disposal team.

Quickly following on from the military’s desire to use technology built into tactical glasses to give their troops an edge: how about enabling soldiers to see through walls? Until quite recently this suggestion would have been ridiculed; now, according to researchers in America, it isn’t such a distant dream. The tech wizards at MIT have been toying with radio signals, using their wall-penetrating qualities to create a radar-like system that can not only tell you whether there is someone on the other side of a wall, but also identify who it might be. Using an AI system with access to a database of possible candidates, their system, which they call RF-Pose, was running at an accuracy rate of 83% when last tested.

Currently the tech is limited in scope by the fact that radio signals cannot pass through particularly thick walls. While this technology is reaching the point where it can track and identify humans through walls, a similar kind of technology is already available to the building trade: Walabot is a device that couples with a smartphone and is used to ‘scan’ walls to identify the structural elements within, whether wooden beams, pipes, cables or metal studs. Combining MIT’s RF-Pose system with the Walabot tech would create a system that could see through walls, identify people and highlight weapons (or at least objects made of metal or wood, etc.).

Further enhancements could involve voice-recognition technology, perhaps using a miniature parabolic listening device bouncing light off walls to pick up vibrations. Connecting the sensing system to multiple databases and an advanced AI could enable it to recognise weapons as well as people, strengthening its capabilities further. As the amount of data generated and collected increases, so would the AI’s ability to accurately ‘recognise’ what and who is on the other side of the wall: friend or foe, armed or unarmed, standing or sitting, and so on. All this information could be streamed directly to a soldier’s smart glasses, and could prove essential in a hostage-release situation, or help locate a sniper inside a block of flats by viewing through the walls.

Beyond in-theatre use of smart glasses, another element of wearable equipment is virtual reality (VR). Currently VR is used for elements of basic training such as skill drills and physical fitness, but its real benefit lies in its ability to provide realistic battle scenarios that acclimatise soldiers to the sights and sounds of a warzone. Exposure to this type of training means soldiers are less disoriented when they first deploy, having already experienced similar situations many times in VR. It is also a highly mobile system: all that is needed is a headset and a laptop loaded with the latest training programs. New systems are coming out all the time, and the programmers work constantly to make the scenarios they produce as realistic as possible. In the future it won’t be necessary for programmers to create the VR programmes at all; Artificial Intelligence Virtual Reality (AIVR) is the next step. AIVR uses real footage from real warzones and battlefield activities (recorded via every available image and audio capture device), which the AI knits together into a dynamic, interactive programme.

Removing the need for human programmers to create these virtual worlds would make training packages almost instantly available. Another plus of an AIVR system is that a realistic battle scenario could be generated for any area on earth: send in drones to capture 360-degree images of a region, feed them into the AI along with content from other areas, and it would select the most suitable material and recreate its very own virtual war for the new region, speeding up operations and reducing training costs.

External augmentation and physical or genetic manipulation are among the better-known enhancements the defence industry is attempting to bring to warfare, but what of transhumanism involving the insertion of technology into the body itself? Where is the focus there?

There has been research into a synthetic blood made of respirocytes (theoretical red blood cells built from diamondoid material) that could contain gases at pressures of nearly 15,000 psi and exchange carbon dioxide and oxygen in a similar way to naturally occurring red blood cells. Soldiers infused with this would be able to endure harsh conditions for longer and possibly stay underwater for hours on end, without breathing equipment.

DARPA has been investigating pain immunisation serums that dramatically limit a soldier’s ability to feel pain. After the initial shock of taking a bullet, and depending on the severity of the injury, the pain would wear off almost immediately, allowing the soldier to treat themselves and carry on fighting until proper medical attention could be sought.

Another DARPA programme is the ‘Brain Machine Interface’ project: quite literally, an attempt to create a system whereby soldiers can not only direct and interact with technology, but communicate ‘telepathically’ with each other. An addendum to this project is looking at ways to install memory chips in soldiers too. Of course, the stated purpose of the project, dubbed Restoring Active Memory (RAM), is to give soldiers with brain injuries normally functioning memories, but it isn’t too far a stretch to imagine memory chips implanted into uninjured soldiers simply to boost memory capacity. And if they are adding memory, why not additional processing power as well?

So far I have looked at biotechnology, cybernetics and prosthetics in medical and military applications, and at AI applications in the medical sector. However, by far the fastest growing, most highly anticipated, and most feared area of technological development is militarised AI.

AI-driven warfare has been at the core of the most dystopian references in fiction over the years: everything from an AI ‘deciding’ that humanity needs protecting from itself and placing it under curfew for its own good, through to AI-driven machines taking over the world and using humans as living battery cells. So it is understandable that the public perception of AI is so out of sync with reality. And when widely respected individuals such as the late Professor Stephen Hawking and Elon Musk talk about the dangers posed by AI, altering the general opinion of this technology will be an uphill struggle. It may help to understand a little of the history of AI in war, or at least some of the reasoning behind the continued push for AI integration in all areas of warfare.

In spite of all the doom and gloom about robot overlords suppressing humanity, I believe AI offers a means to protect soldiers from being hurt in battle. As much as I would love to see an end to war, it seems an unlikely dream: whether it is hunger for power, desire for scarce resources, religious difference or simple racism, there is a never-ending supply of ‘reasons’ for those in power to declare war. War is, and always has been, a dirty affair; the blood of soldiers has watered the land, in the heat of battle, across the globe for millennia. When most people think of death in war they usually focus on the soldiers, and some argue that soldiers put themselves in that position and have only themselves to blame, since they ‘chose’ to fight. However, whilst military service is a choice in many countries, in others there is no choice; national service remains a major part of certain countries’ military make-up. In real terms, the vast majority of soldiers dying in war are only fighting because their governments compel them to, or because enemies are attempting to take control of the land on which they live.

War has evolved over the years. It started as an up close and personal affair, both sides battling face to face with hand-held weapons such as clubs, axes, daggers and swords, then progressed to shooting each other across battlefields with bows and arrows, then cannons, mortars and guns. The next stage was mechanisation, with tanks and aircraft becoming the most common means of delivering death to enemies. With aircraft came unpowered bombs that could be dropped in huge numbers from aeroplanes or zeppelins with little or no accuracy bar the pilot’s skill and judgement. From those ubiquitous drop-and-forget bombs we moved on to powered bombs that could propel themselves once released, and from there we ‘upgraded’ to rocket-powered missiles.

The driving force behind this progression has almost always been to put distance between those doing the killing and those being killed; the problem is that with distance comes inaccuracy. Once weapons changed from hand-held varieties into ordnance-based projectiles, the opportunities for non-combatant injury and death increased dramatically. Without the ability to accurately target specific areas or enemies, it became necessary to make these ordnance devices ever more powerful; increased explosive power was seen as a way to ensure maximum damage and the destruction of the actual target, be it a building or an enemy’s weapon. In wars where masses of such ordnance is used, there have been frequent news reports of collateral damage, of hospitals, market squares and schools hit accidentally, especially against an enemy that has craftily set up operations in built-up areas.

Over many years, reported civilian casualties (mostly women and children) and the subsequent public outcry were key drivers in the creation of smart bombs and guided missiles. Incorporating a form of artificial intelligence guidance into bombs and missiles enables the devices themselves to adjust their own trajectory, in a manner similar to the autopilot on a passenger jet. This new breed of weapon can still decimate enemy forces, but with the added advantage of fewer civilian casualties from ordnance landing and detonating in public areas; nowadays you hear more stories about airstrikes carried out with surgical precision than about inadvertent civilian casualties. Some are programmed to target heat (heat-seeking missiles) and will lock onto the exhaust of an aircraft in flight, or the heat signature of a tank, and simply chase until impact. Other, more complicated systems can be programmed with specific coordinates as targets, and once launched will fly straight there and detonate. This technology has lessened the chances of civilian casualties, but it remains subject to human input and human error.
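
The classic form of that trajectory adjustment is proportional navigation: command a turn proportional to how fast the line of sight to the target is rotating. Below is a minimal 2D sketch of the idea; every number is invented, and missile speed stands in for true closing velocity for simplicity.

```python
import math

# Minimal 2D proportional-navigation sketch: steer so that the line of
# sight (LOS) to the target stops rotating. All values are invented.

N, DT = 4.0, 0.01                               # navigation gain, time step
mx, my, mvx, mvy = 0.0, 0.0, 300.0, 0.0         # interceptor state
tx, ty, tvx, tvy = 2000.0, 1000.0, -150.0, 0.0  # target state

los = math.atan2(ty - my, tx - mx)
for _ in range(3000):
    mx, my = mx + mvx * DT, my + mvy * DT       # advance both craft
    tx, ty = tx + tvx * DT, ty + tvy * DT
    if math.hypot(tx - mx, ty - my) < 5.0:
        print("intercept")
        break
    new_los = math.atan2(ty - my, tx - mx)
    los_rate, los = (new_los - los) / DT, new_los  # how fast the LOS turns
    speed = math.hypot(mvx, mvy)
    a_cmd = N * speed * los_rate                   # lateral accel command
    heading = math.atan2(mvy, mvx) + (a_cmd / speed) * DT
    mvx, mvy = speed * math.cos(heading), speed * math.sin(heading)
```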

Now that we have looked at the first stepping stones of AI in warfare, and its benefit in reducing casualties on both the civilian and military fronts, let’s take a closer look at the AI systems currently operating in the world and how they are being adapted into military technologies.

As mentioned previously, autopilot-like systems have revolutionised the ability to accurately target enemy forces while reducing the risk of injuring civilians. The technology is similar to that which you trust every time you board a passenger plane. Some may not realise it, but the majority of national and international flights are controlled by autopilot: a form of artificial intelligence operating the aircraft in reaction to feedback from the various sensors and data systems on the aircraft itself, and from signals coming in from air-traffic control. Modern aircraft can practically fly themselves, and if it weren’t for the desire to have humans in control for take-off and landing, they could; the technology for fully automated flight is available. Although autopilot is a widely used AI, it can’t really be called truly autonomous: flight paths are pre-programmed by humans, and the auto function is more a reactive system than a genuinely self-directing one. In real terms, the pilots are always ready to take over should an event occur that breaks from the pre-programmed flight parameters and requires emergency action, such as another aircraft on a collision course or a severe weather incident.
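
That ‘reactive’ character is easy to see in code. Here is a toy altitude-hold loop built on a PID controller: the system never plans, it only corrects the gap between a pre-programmed setpoint and what the sensor reports. The gains and dynamics are invented for illustration.

```python
# Toy altitude-hold loop: a PID controller reacting to the gap between
# a pre-programmed setpoint and the sensed altitude. Gains are invented.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

altitude, climb_rate, dt = 9000.0, 0.0, 0.1
hold = PID(kp=0.02, ki=0.001, kd=0.05)
hold.prev_error = 10000.0 - altitude      # avoid a derivative kick at start
for _ in range(600):                      # one minute of simulated flight
    error = 10000.0 - altitude            # the pre-programmed setpoint
    climb_rate += hold.update(error, dt)  # simplified elevator response
    altitude += climb_rate * dt
print(round(altitude))                    # should settle near 10,000 ft
```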

The military use of autopilot and guidance systems is not limited to missiles and bombs, though; it has long served as an aid during airborne refueling missions. The pilot guides the tanker as it approaches the aircraft to be refueled, then engages the autopilot once the tanker is within the refueling ‘envelope’, stabilising the tanker on a synchronous flight path with the target craft and enabling refueling to take place in relative safety. It is within this field of in-flight refueling that the next step in the application of AI is taking place: Boeing recently released details of a current project, the MQ-25 tanker drone, a UAV (Unmanned Aerial Vehicle) that can either be flown by a land-based controller or fully automated (i.e. pre-programmed) to carry out refueling missions. Of course, if you have followed the news these last few years, you will have heard many times about UAVs, also known as drones, being used in war zones. UAVs were originally used for missions deemed ‘boring, of dubious intent or hazardous for humans’, predominantly reconnaissance, but these days that is no longer the case. Many UAVs in use by military groups around the world serve as unmanned airborne combat and reconnaissance platforms, for example the General Atomics MQ-9 Reaper. This platform, which functions as a hunter-killer, carries a vast sensor suite used to assess and acquire targets from high altitude, which are then relayed back to the controllers for analysis. Onboard computers and supporting AI enable the drone to perform mission-critical tasks such as battlefield management, survival and evasive manoeuvres, and target engagement.

Imagine a scenario where troops operating in a mountainous or forested region are plagued by guerrilla attacks, the enemy breaking cover only long enough to launch an attack before disappearing back into it. Such a situation is difficult to combat on the ground, let alone with manned aerial assets whose crews fatigue and whose hover time is limited. A combination of an MQ-25 tanker supporting a fleet of MQ-9 Reapers in a close-air-support role, however, would provide an almost permanent eye in the sky above the area of interest, with the ability to strike almost instantly as enemies are spotted moving within the supported area.

Some of the larger, heavy-lifting UAVs can be used as supply vehicles, delivering whatever soldiers in need require: food, ammunition, weapons or medical supplies. A programmed drone enhanced with AI would be able to fly in under radar, automatically avoiding obstacles such as trees, hills and enemy outposts, deliver its payload, and fly out quickly and quietly. Previously, this type of supply run would have been carried out by a piloted helicopter.

It isn’t only the big hunter-killer UAVs being researched and developed, either. The smaller variety are also proving their value, with many used for more localised reconnaissance when a bigger system is simply unviable, perhaps due to cost per flight or the need to reconnoitre inside a building. These smaller systems can be launched by hand or from a small vehicle and flown by remote control (or pre-programmed AI) into the target zone, streaming data gathered via sensors and cameras back to base or to the pilot, providing actionable intelligence (troop numbers, positions, weapon types, etc.) around which to plan an assault.

Indeed, these smaller UAVs could well become a staple of any army’s offensive force thanks to their low manufacturing costs, relative ease of adaptation and ability to fly almost anywhere, inside or out. For example, such UAVs could be carried high into the sky above an enemy base by a stealth carrier drone, released and activated as they fell; with little to no radar signature they would proceed undetected to their target location, lock onto all aircraft and detonate. As offensive weapons, a hundred small UAVs carrying explosive devices could take out an entire squadron of enemy aircraft and disable the enemy’s ability to launch any airborne attack of their own.

Recently, in the US, F-18 fighter jets released a swarm of Perdix micro-UAVs during a Naval Air Systems Command demonstration at China Lake. At just six inches in diameter, these micro drones possess no radar signature and are capable of confusing enemy defences and even blocking radar signals. The drones are not pre-programmed, synchronised individuals, but a collective organism sharing one distributed brain for decision-making, adapting to each other like a swarm of bees.
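
The classic starting point for that kind of collective behaviour is a boids-style flocking rule set, in which each drone steers only by its neighbours yet the group moves as one. A toy sketch follows; the weights and distances are invented, and the real Perdix logic is undoubtedly far more sophisticated.

```python
import random

# Toy flocking rules of the kind a distributed swarm builds on: each
# drone reacts only to its neighbours. All constants are invented.

def dist(a, b):
    return ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5

def step(drones):
    for d in drones:
        near = [o for o in drones if o is not d and dist(d, o) < 50]
        if not near:
            continue
        # cohesion: drift toward the local centre of the flock
        d["vx"] += 0.01 * (sum(o["x"] for o in near) / len(near) - d["x"])
        d["vy"] += 0.01 * (sum(o["y"] for o in near) / len(near) - d["y"])
        # alignment: match the neighbours' average velocity
        d["vx"] += 0.05 * (sum(o["vx"] for o in near) / len(near) - d["vx"])
        d["vy"] += 0.05 * (sum(o["vy"] for o in near) / len(near) - d["vy"])
        # separation: back away from anyone closer than 10 m
        for o in near:
            if dist(d, o) < 10:
                d["vx"] += 0.05 * (d["x"] - o["x"])
                d["vy"] += 0.05 * (d["y"] - o["y"])
    for d in drones:
        d["x"] += d["vx"]
        d["y"] += d["vy"]

swarm = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "vx": 1.0, "vy": 0.0} for _ in range(20)]
for _ in range(100):
    step(swarm)
```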

A swarm of a thousand micro-UAVs with hive AI could sniff out the chemical signatures of explosive materials, and could then be used to clear acres of land rendered uninhabitable by landmines planted during previous conflicts. The same UAVs could also scout for IEDs as troops make their way through enemy-held territory. In failed states where national security has lapsed, illegal mining operations run by ‘terrorists’ or rogue militias have grown up in remote locations, and a majority of these operations use forced labour (kidnapped and enslaved families and children).

Deploying a swarm of drones into this difficult terrain, tasked with disrupting communication networks and precision-striking to neutralise weapons as a prelude to ground operations, would limit collateral damage.

UAVs of varying sizes could be deployed in many different scenarios, but what about a good old-fashioned aerial dogfight? Air combat manoeuvring, positioning a combat aircraft to attack another aircraft, is a tactical aerial ballet that is demanding on both pilot and airframe. It relies on aggressive offensive and defensive basic fighter manoeuvres to gain an advantage over an aerial opponent, and success is almost entirely reliant upon the pilot’s situational awareness, decision process, reaction time, fitness and ability to withstand high gravitational forces. Such high-g environments place an enormous strain on human physiology and consequently impair the ability to make effective tactical decisions. All these factors mean the pilot (and more importantly his health) requires additional intelligent life support to maintain his fighting ability, which limits both man and machine to a narrow envelope of operations.

Even in a one-to-one standoff this is exceedingly challenging, and in reality it gets further complicated, and more dangerous, in a multiple-threat engagement where friend-or-foe is difficult to establish. This is where AI-controlled Unmanned Combat Aerial Vehicles (UCAVs), such as Dassault’s nEUROn, could be brought to full effect in medium-to-high-threat combat zones. A full squadron of small, agile and heavily armed UCAVs could assist in combat air patrols, or provide a further sphere of cover allowing the primary, manned aircraft to carry out mission particulars, or be directed by the pilot as needed. This new breed of AI-enhanced UCAV, designed for airborne conflict, can be built with all the firepower of a modern fighter but with improved agility, longer flight times, and situational awareness, tactical decision-making and reaction times far superior to any human’s. In the end, this enhances the survivability of both man and machine and ensures mission and operational effectiveness.

Although I have spoken about the removal of humans from the danger zone, thus potentially saving their lives, one area that is often forgotten is the mental health of the soldiers in charge of the remote drones: the pilots. Death is something the human psyche has issues dealing with at the best of times, but for these people it is something they have to deal with on a day-to-day basis. These ground-based aircrew are the ones making life-or-death decisions based on the information they receive from the drone's sensor suite. Right now, it is these people who are put in the unenviable position of deciding whether or not to order a kill strike. Although safe in their control rooms, these crew still suffer from a variety of stress-related disorders as a result of the enormous pressures placed on them. A fully automated AI, equipped with real-time analysis (facial recognition, object detection and so on) and with access to satellite feeds and relevant databases of enemy personnel, weapons, bases and movements, is entirely capable of preparing a strike on the intel it gathers and presenting it for final approval at joint-command level. This would remove most 'human' stresses from warfare almost entirely.

Aviation always seems to grab the headlines when it comes to stories about these unmanned vehicles but, to be clear, it isn't only the aviation sector that is being automated in this fashion. Multiple ground- and water-based vehicles are being developed too. UUVs (Unmanned Underwater Vehicles), USVs (Unmanned Surface Vehicles) and UGVs (Unmanned Ground Vehicles) are all types of automated system that can either operate entirely autonomously or be operated manually via remote control.

The majority of ROVs (Remotely Operated Vehicles) in this arena are used for a wide range of military and commercial applications. For the military these include Mine Countermeasures (MCM), Intelligence, Surveillance and Reconnaissance (ISR), Anti-Submarine Warfare (ASW), and simulating Fast Inshore Attack Craft (FIAC) for combat training. Commercial applications include oil and gas exploration and construction, oceanographic data collection, and hydrographic and environmental surveys. All ROVs have a degree of AI-assisted autonomous guidance, allowing the human operator to concentrate on the task at hand rather than trying simultaneously to steady the craft in the water and operate the additional equipment.

Autonomous Undersea Vehicles (AUVs) have seen increased attention and integration with naval warfare groups globally, as an additional layer within network-centric warfare doctrine. General Dynamics recently deployed two Bluefin SandShark micro-AUVs from a larger, deep-water-rated Bluefin-21 Unmanned Undersea Vehicle (UUV) at the U.S. Navy's Annual Naval Technology Exercise. The mission successfully demonstrated intelligence, surveillance, reconnaissance and neutralization of a potential threat.

Back on solid ground, UGVs are widely used for tasks such as ordnance disposal, equipped with maneuverable arms, pincers and other tooling to perform a variety of tasks. Remote-controlled ground vehicles have been appearing on the frontline since WWII, when the German-controlled 'Goliath' demolition machines were used to destroy enemy tanks, bridges and buildings. The Russians also had a remote-controlled weapon called the 'Teletank', which was used for both reconnaissance and attack missions. Since it was fully armed, it became an effective asset in fire-support roles, directly engaging enemy troops.

Nowadays there is also quite a wide variety of armed UGVs which can be sent into battle in extremely hostile areas where sending human-operated vehicles and weapons would almost certainly be a suicide mission for those driving. In true DARPA fashion, the agency has accelerated development of programs demonstrating AI-enhanced systems, from Ground X-Vehicle Technologies (GXV-T) to the close-support Legged Squad Support System (LS3), that can be deployed in contested environments.

OK, now that unmanned and AI-enhanced vehicles have been covered, let us take a look at the robot side of the equation. Yes, I know all the previous machines are types of robot, but I am now referring to the bipedal and quadrupedal variety. No one wants to see a Terminator roaming down the street or an out-of-control Robocop, but what kinds of technology are out there and where does AI fit in?

One of the main commercial organisations continuously featured when talking about robots is Boston Dynamics. As a company, Boston Dynamics appear to be doing more forward thinking in the field of robotics than most others. So far they have developed machines that can walk unaided on two feet with a human-like gait, lift and carry objects, jump onto and off different objects and even do backflips (e.g. Atlas). They have also developed a series of dog-like machines on four legs (conveniently named 'Spot' and 'SpotMini') that can go up and down stairs or navigate rough terrain with relative ease. Additionally, SpotMini can be fitted with a maneuverable arm, which gives it the ability to lift and carry small objects. Their most dynamic machine yet (named 'Handle') has two wheels attached to leg-like limbs; it can pirouette and leap with ease and is also capable of lifting packages up to 100 lbs in weight. All three machines employ various forms of artificial intelligence to analyse the world around them, fed by a rich suite of sensors and cameras. This enables them to traverse different terrains and avoid a wide selection of obstacles. In addition to advanced obstacle detection and avoidance algorithms, they also have the ability to self-correct and maintain balance, countering unbalanced forces from events such as being kicked or slipping on difficult terrain.

Similar technology to that utilised within these vehicles and robots is also used in autonomous defence systems at sea and on land, usually in the form of anti-missile systems set up to protect military bases and sovereign territory around the world. Once triggered, the AI systems are designed to track and destroy incoming projectiles, acting as 'safety nets' that prevent surprise missile attacks from damaging important assets and bases, or killing people. Introduced in 1979, the Dutch 'Goalkeeper' system was installed on warships to provide short-range defence against highly maneuverable missiles, aircraft and fast-moving surface vessels. Once activated, the system automatically undertakes the entire air-defence process, from surveillance and detection to destruction, including selection of the next priority target. A more recent autonomous weapons system of this kind is installed on the British Royal Navy's Type 45 Destroyers. Known as the Sea Viper (PAAMS) air-defence system, it is capable of tracking over 2,000 moving targets whilst simultaneously controlling and coordinating multiple missiles in the air at once, even if the projectiles are travelling at supersonic speeds. Globally, there are multiple land- and space-based radar and tracking systems set up to monitor for all possible instances of missile launch and either provide warning or react defensively.

Watching the progressive development of these artificially intelligent machines on land, on water and in the air, it is easy to see a future where most, if not all, operations in the military arena will be either remotely controlled by human operators or carried out autonomously with AI at the helm. AI's ability to act and react at speeds far beyond human capabilities, whilst operating in environments dangerous to soldiers, is the main reason that AI is becoming the de facto resource for militaries around the world.

I have used the word 'autonomously' throughout this article. What is meant by that is that all actions and reactions carried out automatically by the machines themselves will have been pre-programmed (or trained) into the AI itself as a kind of 'if this, then that' protocol (to really simplify it). What this means is that if, at any point during a mission, these machines are confronted with something that isn't part of their programming, they will be at a loss as to what to do.

The truth of the matter is that although all these machines are AI-driven, they are fairly limited in scope, as they rely heavily on their pre-programmed actions and reactions to go about their tasks. In some cases, QR codes placed on objects are used as a way for the machines to identify what they see and how they should act (one code means a box to be picked up, another means a wall, a third means a door, etc.). This type of directional requirement is due to the specific type of artificial intelligence employed in these machines.
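
As a minimal sketch in Python of this kind of rule-driven lookup, with invented QR payloads and action names (no real robot's software is being quoted), the structural limitation is clear: anything outside the table produces a fallback, not an improvised response.

```python
# A minimal, hypothetical sketch of 'if this, then that' control.
# The QR payloads and action names are illustrative inventions.

ACTIONS = {
    "QR:BOX":  "pick_up",
    "QR:WALL": "avoid",
    "QR:DOOR": "open_and_pass_through",
}

def decide(qr_payload: str) -> str:
    """Map a scanned QR payload to a pre-programmed action.

    Anything outside the lookup table falls through to a generic
    'halt and await operator' response; the machine has no way to
    improvise beyond its programming.
    """
    return ACTIONS.get(qr_payload, "halt_and_request_operator")

if __name__ == "__main__":
    for tag in ["QR:BOX", "QR:DOOR", "QR:FALLEN_TREE"]:
        print(tag, "->", decide(tag))
```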

Artificial Narrow Intelligence (ANI) is the most commonly used form in unmanned vehicles, systems and robots alike; all require hours of training before being able to carry out new tasks or recognise new elements in the world around them. In a nutshell, this represents a 'linear' style of machine learning that has evolved from a supervised to a semi-supervised state and simply correlates specific data (e.g. shapes and signatures) acquired through sensory sources against 'known' information. An extension of this semi-supervised training process leads into the beginnings of unsupervised states, or Artificial General Intelligence (AGI), where limited real-time abstraction and tactical decisions are developed and executed.

Clearly the individual machines themselves can go on and do some remarkable things within the confines of what they have been taught, but (as mentioned earlier) if they come across something that is not part of their core programming, they will struggle to act in a suitable way. Even with the currently limited scope of ANI programming, it can remove people from a theatre of war and save the lives of millions of fighting-age people. Automation and AI are definitely the way to go, but it will ultimately take a different type of artificial intelligence to allow these machines to operate in a truly flexible manner. They require a system that allows for continual learning, near-instant reactions to new situations and constant access to editable, dynamic databases; in other words, something that more closely resembles the human mind and its natural processes.

Over the years, as artificial intelligence has developed, there has been a shift in the levels of complexity at which the systems can operate. In the beginning it was simply a matter of a system being taught to carry out mundane data-manipulation tricks or something akin to basic algebra, i.e. A+B=C, A*B=D. In order to program any type of action, programmers had to outline every specific action or reaction, completely and directly.

It was, and still is, exceptionally useful in a lot of data-rich environments, but as with any basic program, good data going in results in good data coming out, and any drop in the quality of the data going in can ruin the end results. Generally speaking, this is the type of programming you would use for any type of automation, be it data sifting, basic guidance programs, or controlling the repetitive actions of a machine.

Next came the growth of Machine Learning (ML), which revolves around the application of linear algebra: a more complex approach that relies on an ability to remember and recall specific results from previous activities, i.e. if A+B=C and X*Y=Z, what does C*Y equal? Here C and Y have already been calculated or observed and remembered, and are available for use in other elements of ongoing actions.
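
As a loose illustration of that remember-and-recall idea, here is a toy cache of previously computed results that later calculations can draw on; this is a sketch of the principle, not of any particular ML framework.

```python
# Toy sketch of 'remember and recall': intermediate results are cached
# and reused by later computations. Illustrative only.

memory: dict = {}

def compute(name: str, fn, *args) -> float:
    """Compute a value once, store it, and reuse it thereafter."""
    if name not in memory:
        memory[name] = fn(*args)
    return memory[name]

# First pass: C and Z are calculated and remembered.
C = compute("C", lambda a, b: a + b, 2.0, 3.0)   # A + B
Z = compute("Z", lambda x, y: x * y, 4.0, 5.0)   # X * Y

# Later work can combine a remembered result (C) with a known input (Y)
# without recomputing anything from scratch.
CY = memory["C"] * 5.0
print(CY, memory)  # memory is the 'ever-growing database' of results
```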

Continual learning processes such as these create ever-growing databases of pertinent data connected directly to the tasks in hand.

As powerful as this learning process can become, it is still limited by the data that is input. Or, to put it more accurately, systems are restricted by the quality of data which is accessible to any individual algorithm within the entire framework.

These linear-algebraic instructions form the bedrock of AI (specifically Machine Learning) and are used to create the multiple algorithms that actually carry out the tasks. They have become progressively more intricate over the years, allowing AI programs to evolve into multi-layered systems capable of correlating multiple data points across very specific information channels and generating signals for action in response to certain stimuli. For instance, one of the above-mentioned reconnaissance UAVs spots a large machine being transported on the ground and 'recognises' it as a machine, but one of unknown purpose or origin. As a result of this incomplete recognition, a signal is sent back to the controller showing that something has been spotted. The controller then recognises it as a new kind of weapon and directs his own troops to take some form of action, such as intercepting and destroying it, changing course to avoid coming under fire, or possibly directing another UAV system to engage and destroy the weapon.
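
That decision chain can be sketched very simply. The toy below uses invented class labels and an invented confidence threshold; nothing here reflects a real targeting system. It only shows the 'incomplete recognition, so signal the controller' behaviour described above.

```python
# Illustrative toy: escalate low-confidence or unknown detections
# to a human controller instead of acting on them autonomously.

KNOWN_CLASSES = {"truck", "tank", "artillery", "civilian_vehicle"}
CONFIDENCE_THRESHOLD = 0.85  # invented value for illustration

def triage(label: str, confidence: float) -> str:
    """Return an action for a single detection."""
    if label in KNOWN_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
        return f"log:{label}"                 # routine, handled locally
    return "alert_controller:unknown_object"  # escalate to a human

print(triage("truck", 0.97))           # log:truck
print(triage("large_machine", 0.41))   # alert_controller:unknown_object
```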

The AI systems themselves are becoming faster and more accurate with every year that passes. In reality, though, they are still limited by something extremely basic: they all operate in individual data silos. A UAV only ever has access to, and an understanding of, data that has been directly learned or fed into it and which relates directly to the task in hand. However complicated that task is, it is limited by the data the UAV is sent on its mission with, plus the live information captured by its sensors in flight. Likewise, if any of the robots Boston Dynamics have created ever encounter something that hasn't formed part of their initial training and can't be worked out from the various inputs they receive themselves, they are effectively unable to react in an effective manner. Note: as this article was being written, researchers at MIT announced the development of an AI system they say is capable of recognising items it hasn't been trained with. This will have an impact on a whole range of AI applications.

Since the final destination for AI is to fully mimic human capabilities and then perform at a greater capacity, it only makes sense to start incorporating a better flow of data and an ability for multiple platforms to communicate with each other. The current systems in play do not allow for this; any additional data elements required mid-mission are generally input by human operators upon notification by the machines themselves. Breaking down the information-silo element of these systems is key to broadening their abilities and creating a truly autonomous system that is fully capable of learning and reacting in the same way humans do.

As computing power grows and our understanding of how memory and recall improve speed and accuracy deepens, efforts to create more powerful systems able to work with data in multiple forms are ongoing. Researchers and programmers are looking at AI architectures and how they create and use information 'within themselves'.

The aforementioned basic 'if this, then that' AI will never offer a different result when given the same enquiry on multiple occasions; in other words, it functions in a linear fashion (perfect for robotic production lines). Once you move into the realm of Machine Learning (ML) and start working with systems that can learn from previous actions and apply that learning to their current activities, then you have something called a 'Recurrent Neural Network' (RNN). RNNs are able to analyse information and come to conclusions in real time, while also accessing their own memory banks for information relevant to the task they are performing. It is these RNNs working alongside each other that enable the types of complex behaviour seen throughout the unmanned vehicle and robot world. To move into the next phase, where human interaction is removed from the equation entirely, a system is required that will automate and link all available databases and systems, in what can only be described as something similar to a hive mind: a system that not only learns from its individual platforms, sensors and databases, but also allows for cross-pollination between them, with open and proactive gateways allowing a smooth flow of pertinent and relevant data between them in real time.
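
For readers who want to see what the RNN 'memory' mentioned above means mechanically, here is a minimal vanilla recurrent cell in Python/NumPy with random, untrained weights, purely for illustration: the hidden state h carries information from every earlier input into the current step.

```python
# Minimal vanilla RNN cell: h_t = tanh(W_x x_t + W_h h_{t-1} + b).
# Random, untrained weights -- this only illustrates how the hidden
# state carries memory of earlier inputs forward in time.

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

W_x = rng.normal(scale=0.5, size=(hidden_size, input_size))
W_h = rng.normal(scale=0.5, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

def step(x, h):
    """One time step: combine the new input with the remembered state."""
    return np.tanh(W_x @ x + W_h @ h + b)

h = np.zeros(hidden_size)            # empty memory
for t, x in enumerate(rng.normal(size=(5, input_size))):
    h = step(x, h)                   # h now depends on all inputs so far
    print(f"t={t}, |h|={np.linalg.norm(h):.3f}")
```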

A phrase used to describe this type of system is Automated Machine Learning (AutoML), and there are a few systems already providing AutoML, although in a somewhat limited capacity. Google, for instance, offers Cloud AutoML, which allows companies to plug in and upload their own data, from which an AI network can be created out of previously designed and built blocks of code. This type of service opens the door for organisations to forgo their own internal research and development but still use the power of AI (via Google-created network blocks that are constantly updated) for their own data-analysis requirements.
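
Cloud AutoML's internals are proprietary, but the core idea, automatically searching over candidate models and hyperparameters rather than hand-designing one, can be sketched with off-the-shelf tools. The snippet below uses scikit-learn purely as an illustrative stand-in, not as a description of Google's service.

```python
# Toy stand-in for the AutoML idea: automatically search over several
# model families and hyperparameters instead of hand-designing a model.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = [
    (LogisticRegression(max_iter=2000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
    (SVC(), {"C": [1.0, 10.0], "gamma": ["scale", 0.01]}),
]

best_score, best_model = -1.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=4)   # 4-fold cross-validation
    search.fit(X_tr, y_tr)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print("selected:", best_model)
print("test accuracy:", best_model.score(X_te, y_te))
```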

Automating the ability to create networks around data is a fine way of expanding the reach of AI into a lot of alternative areas. However, it doesn't go far enough when it comes to interlinking all aspects of military or medical applications of AI. For this you need a system that gives free rein to the AI itself and an unfettered ability to teach itself from anywhere and everywhere; a system that, like the human mind, can recognise its own deficiencies and requirements and take action to fix the issue, or use information from alternative datasets where useful; a system where multiple RNNs are connected and can interact and exchange data at a moment's notice, adding to an ever-growing repository of data that covers multiple fields of expertise and experience.

A further evolution of ML, Deep Learning (DL), is offering, in part at least, some movement in the right direction. However, data structuring, database storage and processing still have to be improved before the next leap, into Artificial Super Intelligence (ASI), can be achieved. A separate article on this subject can be found here: Applied vs. Generalised ai | where do we fit, and what is the difference?

The human mind is quite clearly the most complicated intelligence system in nature. Its ability to carry out multiple actions and absorb so much information simultaneously is truly amazing. Even babies are able to learn new things far faster than any AI system available today. In 2014, dozens of AI systems had their IQ tested and rated against three humans, aged six, twelve and eighteen. Two years later they were tested and rated again to check how rapidly their IQ had improved. Google's AI was both the smartest and the most improved, finishing with a score of 47.28 (up from 26.5) after two years, but it still failed to beat the youngest human, who finished with a score of 55.5!

As an aside, comparing the IQ of AI with that of humans does segue into awareness, and raises several questions, such as: what is consciousness? And why do humans acquire it at a certain age?

If we are to say that we, as humans, are capable of becoming self-conscious because we compare ourselves to others and attempt to improve our status, what stops a bot from doing the same thing? If we try to look at it from the same perspective from which we view children, should we believe that an AI will acquire this trait on its own by learning about the world? How do we know that it understands what it is learning?

Another theory suggests that consciousness is nothing more than an advanced algorithm that coordinates all our sub-processes. If the human mind boils down to nothing more than a vast number of adversarial neural networks, then it may be possible to simulate it in the coming years.

Back to the story in hand. It is precisely because of the brain's own processing and flow of information that this learning is possible. Everything a baby learns from the moment it is born is accessible at any time to add to its ongoing learning curve; the actions and reactions of those around it inform how it should react in different situations: learning a language simply by being there while it is spoken, or understanding what is edible by observing what its parents eat. Nobody teaches a baby how to learn; those around it only provide a constant stream of stimuli and information, and the baby absorbs and learns.

All forms of data stored in the brain (and by data I mean thoughts, memories, images, calculations, etc.) are labelled and logged but remain accessible as required, with their storage pattern generally determined by similarity. The human mind does not keep individual pieces of information separate or set aside only for access when performing certain tasks; everything is openly available for use at any point. Because of this, humans are capable of learning further skill sets based on their similarity to pre-practised and remembered activities; they don't have to learn everything from scratch, as their memory baseline is always expanding. That is the main difference between current AI systems and the human mind, and the main sticking point that has almost stalled the progression of AI. Sure, there are many AI systems out in the world with access to almost unlimited data, but they are restricted in scope. I have to return to my earlier theme of ANI and AGI, as it is here that the sticking point occurs.

As referenced previously, ANI involves a system carrying out specific actions based on all the relevant data that has been programmed into the system (robotic controls). AGI is a system that can learn from its own memories as well as the data it receives from its own sensors (within its own defined speciality), and which can then operate and act in accordance with the world around it, e.g. unmanned vehicles. All these systems can work brilliantly but, as highlighted earlier, their ability to learn as they go is limited: they can only access their own data and apply it in the context in which it was presented or expected. There is no capability to draw on alternative data sources in times of need. Even if they had been programmed to recognise and climb over a table, they might not be able to climb out of a hole if they fell into one. If they were able to access data from other machines and systems that had fallen into a hole before and successfully climbed out, there would be a chance that they could learn from that event and act accordingly.

It is with this ability to cross-pollinate in mind that Catena have designed and created our AI system. We have incorporated a system, referred to internally as the 'Hoop-Loop-Bus' process, that takes the ideology of RNNs to the next level, coupled with an ever-evolving dataset and a dynamic scaling and processing framework; essentially, 'to infinity and beyond'.

There are no silos within Catena's neural framework, so all information is accessible by any of the multiple modules. Data pulled in and analysed from satellites for market-movement projections is available to any other function, in any arena the system is set to work within. Likewise, data points and decisions created while processing facial-recognition tasks are available to all the other systems too. Catena is able to read the world and learn in a much more humanistic way, analysing and correlating information from all corners of the globe as well as its own databanks, weighing up possible outcomes and making decisions. With an in-built ability to recognise data deficiencies and errors, Catena is able to 'hunt and gather' required information without any human interaction. Similarly, if Catena recognises that there is some form of issue in a specific dataset, it is able to rectify it through its own internal processes and verify any new data gathered along the way.

Analysis happens simultaneously on multiple levels, and there is a constant flow of information in and around the entire system. Unlike other AI systems that we are aware of, Catena's AI is entirely scalable in multiple directions and is able to build its own neural networks around any elements required, with baseline algorithms re-purposed, cut or clipped on demand.

Catena has been built from the ground up with a vast number of interactive modules, each individually capable of performing a huge number of functions and tasks, such as facial recognition, object detection (weapons, vehicles, aircraft, ships, etc.), NLP, pattern recognition, emotion detection, motion detection, financial analysis and market-movement projections, to name a small sample from a list too long to write here.

The adoption of AI for use in warfare has been a bone of contention for many years. There are some who argue that the automation of warfare is morally wrong, as it removes decision-making ability from humans and puts our lives in the hands of 'unfeeling' machines. At Google, for example, staff staged a revolt when it was revealed that the company was involved with Project Maven (a US military program for AI-controlled drones). As a result, it has been reported that Google will not be renewing the contract, for fear that their main programmers would leave if they continued. This may have some connection with the more recent story that Stryke Industries (run by Walter O'Brien) will be licensing Scorpion technologies for use by the US Army Contracting Command, Redstone Arsenal, Alabama, on their universal ground control station (UGCS) and other drone platforms. It is reported that a selection of weaponized drones is run out of this facility, including MQ-1C Gray Eagle, RQ-7B Shadow and MQ-5B Hunter drones.

For others, Catena included, the argument is that, regardless of people's best wishes, warfare is something that will continue for as long as there are humans on the planet who can find something to fight over. Perhaps the best way to prevent human casualties and safeguard our way of life would be the full implementation of AI in warfare, thereby removing humans, their egos and their emotions from the equation. Eventually we would reach a point where wars are fought by one machine against another. The only way to achieve such an objective is to push ahead with connecting the various systems into one collective system which can oversee, act and react in real time to remove an enemy's ability to fight back as swiftly as possible. Whether this is through targeting their weapons, supplies, infrastructure or key personnel with surgical precision, or through highly coordinated machine action that limits and restricts their possible aggressive activities, AI could orchestrate and execute with precision, and strike decisively.

Regardless of which side of the argument you take, efforts are continuing globally to push AI into all corners of military activity. Billions upon billions are being spent in a drive to have the fastest, most reactive AI systems attached to the most powerful weapons, and some countries are further ahead than others. The very real threat of these weapons falling into the wrong hands, or being developed by people of malintent, is growing year on year. If reports are to be believed, the Russian military have already had to deal with UAV swarm attacks at their Khmeimim air base and Tartus naval facility, which were attacked with 13 explosive-laden UAVs (10 at Khmeimim, 3 at Tartus). Russian officials believe the devices were armed at, and launched from, a site some 50 miles from their targets, and were guided by a combination of GPS and altitude-control sensors. The top brass believe the attack was carried out by a well-funded terrorist group. If terrorists are already creating crude weapons with this technology and launching attacks on Russia, then it really is only 'a matter of when, not if' they do something even larger in scale; at that point it will be essential that those under attack are able to defend themselves adequately. As it was, the attack was fairly small: the human operators of the Pantsir-S anti-aircraft systems took down around seven of the drones, with Electronic Warfare (EW) units taking out some of the others. Had the attack been carried out with a much higher number of drones, it is fairly certain the reaction times of the soldiers manning the defences would not have been sufficient to thwart it, and there would almost certainly have been substantial casualties.

With an AI-controlled 'detect and protect' system watching over the base, they would be better able to act and react if, or when, this type of attack next occurs.

It appears the best way to ensure an outcome in line with our society's best interests is to push harder and faster to become the best in the field. Possession is the best deterrent.

Providing the type of hive-mind AI with the deep-seated connectivity required to properly coordinate and empower military forces, and to remove soldiers from the horrors of warfare, is something we have been dedicated to for the last 10 years through the design, creation and construction of Catena.

Within our overall system there are multiple types of script and environment.

These are just a few examples of how they are used:

- Automated data preparation and ingestion (from raw data and miscellaneous formats)
- Automated column type detection; e.g., boolean, discrete numerical, continuous numerical, or text
- Automated column intent detection; e.g., target/label, stratification field, numerical feature, categorical text feature, or free text feature
- Automated task detection; e.g., binary classification, regression, clustering, or ranking

- Automated feature engineering, including feature selection and feature extraction
- Meta learning and transfer learning
- Detection and handling of skewed data and/or missing values
- Automated model selection
- Hyperparameter optimization of the learning algorithm and featurization
- Automated pipeline selection under time, memory, and complexity constraints
- Automated selection of evaluation metrics/validation procedures
- Automated problem checking, including leakage detection and misconfiguration detection
- Automated analysis of results obtained
- User interfaces and visualizations for automated machine learning

Feeds are an ever-growing part of the data process, be they fintech feeds or surveillance feeds. Wherever we look, from ATMs to our smartphones, there are cameras. These cameras are either automatically programmed to record or stream, or are manually switched on for Instagram and the like. However, if we narrow down specifically to CCTV, we're looking at billions of cameras: petabytes of useful visual data created in an instant. As things stand, though, CCTV feeds are only properly monitored or reviewed during major public events, or after a terrorist attack or a bank heist, for instance. Some CCTV systems even record and store on-site without ever transferring the data for storage or review. This is where Catena sees its capabilities best being harnessed from a state-security perspective.

If we look at the statistics for the USA, for example, there are tens of millions of CCTV feeds; these feeds send collected images to data banks without being processed or reviewed, just flowing in. These feeds sometimes inadvertently capture activities that clearly indicate a crime is about to happen, yet generally it is only after a crime has taken place that the data feeds get any kind of true attention. When crimes do happen, if they are serious enough, an analyst will either rewind and play through the recordings manually or run an automated script to pick up relevant visual data. This type of post-event analysis just doesn't go far enough. There are approximately 100k non-fatal and fatal events within the US per year; this number could be substantially reduced if the feeds were monitored live, 24/7, by an AI system with the capability to spot different guns, knives, criminals or criminal activity, and react accordingly.

Here is a generic overview of one element within our weapons-detection system; more specifically, our Knife Detection Module.

We analysed publicly accessible CCTV feeds (recordings released onto YouTube) featuring crimes committed using a dangerous object. Two observations were made:

1. Real-life CCTV recordings are usually of poor quality.
2. Dangerous objects are generally visible for only a short period of time.

Based on these observations, we have created a set of requirements for our process flow.

First, we decided that our algorithm needed to cope well with poor-quality input, meaning a low-resolution input image and a small dangerous object within it. We also decided that the algorithm should work in real time.

One of the most important requirements is to keep the number of false alarms as low as possible (high specificity), even at the cost of missing some events (i.e. at the cost of sensitivity). This is because if an automated algorithm generates too many alarms, the operator starts to ignore them, in turn rendering the whole system useless.

Moreover, an algorithm that misses some events is obviously better than running the system blind. Frequent false alarms are unacceptable in practical application due to the high costs they generate: each alarm has to be verified by a human operator, causing stress and overload. Still, while maintaining a low number of false alarms, we try to achieve as high a sensitivity as possible. To this end, the automated system is required to pull on other modules and data streams to assess the context, and thus the potential severity, of the situation.
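
To make that trade-off concrete, here is a small illustrative helper in Python (toy scores and labels, not our production metrics) showing how raising the decision threshold buys specificity at the price of sensitivity.

```python
# Toy illustration of the specificity-vs-sensitivity trade-off.
# Scores and labels are invented; a real detector's outputs differ.

def confusion(scores, labels, threshold):
    """Count TP/FP/TN/FN for a given decision threshold."""
    tp = fp = tn = fn = 0
    for score, knife_present in zip(scores, labels):
        if score >= threshold and knife_present:
            tp += 1        # correct alarm
        elif score >= threshold:
            fp += 1        # false alarm (operator fatigue)
        elif knife_present:
            fn += 1        # missed event
        else:
            tn += 1        # correctly quiet
    return tp, fp, tn, fn

scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [True, True, False, True, False, True, False, False]

for threshold in (0.25, 0.50, 0.75):
    tp, fp, tn, fn = confusion(scores, labels, threshold)
    sensitivity = tp / (tp + fn)   # fraction of real events caught
    specificity = tn / (tn + fp)   # fraction of benign frames passed
    print(f"th={threshold:.2f}  sensitivity={sensitivity:.2f}  "
          f"specificity={specificity:.2f}")
```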

We designed the knife-detection algorithm based on visual descriptors and Machine Learning (ML). Part of the algorithm's flow is described below. The first step is to choose candidate images as cropped sections of the input frame. We choose candidates using a modified sliding-window technique: in contrast to the original sliding window, we look for knives near the human silhouette only, and only when at least one human silhouette appears in the image. We believe that a knife is only dangerous when held by a person. In addition, detecting a knife held in the hand within a limited part of the image is faster. Furthermore, a hand holding a knife has more characteristic visual features than a knife on its own, so we can expect better results. We distinguish two areas in the image, one near the potential offender and the other close to the potential victim. In those areas we can expect the knife to appear, owing to the general dynamics of a knife attack: usually, a knife is held in the hand and used against the body of another person.

It is impossible to distinguish between the offender and the defender automatically during processing, because of the dynamics of such events. For this reason, both areas are observed for each human silhouette found in the image (each human silhouette is considered to be both a potential offender and a potential victim).
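
A rough sketch of this candidate-selection step might look like the following; the window size, stride and helper names are placeholders of my choosing, as the module's actual parameters are not published here.

```python
# Rough sketch of the modified sliding window: candidate crops are
# generated only around detected human silhouettes, not over the whole
# frame. Window size, stride and the detector itself are placeholders.

from typing import Iterator, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

def windows_near_silhouette(person: Box, win: int = 64,
                            stride: int = 16) -> Iterator[Box]:
    """Yield candidate windows within a margin around one silhouette.

    Each silhouette is treated as both potential offender and potential
    victim, so the search region surrounds the whole body box.
    """
    x, y, w, h = person
    margin = win  # search slightly beyond the body box
    for wy in range(y - margin, y + h + margin, stride):
        for wx in range(x - margin, x + w + margin, stride):
            yield (wx, wy, win, win)

def candidates(frame_people: List[Box]) -> List[Box]:
    """No silhouettes means no candidates: the frame is skipped."""
    crops: List[Box] = []
    for person in frame_people:
        crops.extend(windows_near_silhouette(person))
    return crops

print(len(candidates([(120, 40, 60, 160)])))  # windows for one person
```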

The next step is to convert each candidate image into its numerical representation. We use the sliding-window mechanism to find parts of images containing features characteristic of knives; this way, we are able to determine the approximate position of the knife in an image. We do not need to detect the knife's edges, which is non-trivial when images with variable and non-homogeneous backgrounds are considered.

Due to the specific knife image pattern, we chose two descriptors: edge histogram and homogeneous texture.

These two descriptors provide complex information about features characteristic of knives, such as the edge, the peak and the steel surface of the blade. The first contains information about the various types of edge in the image; it is a numerical vector with 80 different values. The second describes specific image patterns, such as directionality, and the coarseness and regularity of patterns in the image. The edge histogram and homogeneous texture descriptors are represented by vectors of 80 and 62 elements respectively. The edge histogram defines five edge types: four directional edges and one non-directional edge. The four directional edges are vertical, horizontal, 45-degree and 135-degree diagonal edges.

These directional edges are extracted from image blocks: if an image block contains an arbitrary edge without any directionality, it is classified as a non-directional edge. To extract both directional and non-directional edge features, we define a small square image block and apply edge-detection filters to calculate the edge strengths. The homogeneous texture descriptor characterizes the region's texture using the mean energy and the energy deviation computed in each of 30 frequency channels. The energy e_i of the i-th feature channel is defined from Gabor-filtered Fourier transform coefficients, and the energy deviation d_i of the i-th feature channel is defined in a similar form.
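
The original formula references did not survive formatting. For completeness, the standard MPEG-7 homogeneous-texture definitions, which I believe these descriptors follow (a reconstruction, so treat the exact normalisation as an assumption rather than a quotation of the module's code), are:

```latex
% Reconstruction of the standard MPEG-7 homogeneous texture energies.
% G_{s,r} is the Gabor filter for scale s and orientation r, and
% F(\omega,\theta) is the Fourier transform of the image patch.
p_i = \sum_{\omega}\sum_{\theta}
      \bigl[\,G_{s,r}(\omega,\theta)\,\lvert F(\omega,\theta)\rvert\,\bigr]^{2},
\qquad
e_i = \log_{10}\bigl(1 + p_i\bigr)

q_i = \sqrt{\sum_{\omega}\sum_{\theta}
      \Bigl(\bigl[\,G_{s,r}(\omega,\theta)\,\lvert F(\omega,\theta)\rvert\,\bigr]^{2}
      - p_i\Bigr)^{2}},
\qquad
d_i = \log_{10}\bigl(1 + q_i\bigr)
```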

Our objective was to avoid using color- and keypoint-based descriptors because of the many potential distortions and errors they introduce. Color-based descriptors are unable to deal with light reflections and the different color balances of image sensors.

Keypoint-based descriptors were also unsuitable for the problem, since knives do not have many characteristic features; key points were frequently detected around the object rather than on the knife itself. Because of the great number of different types of knife, we decided on similarity-based descriptors rather than those based on keypoint matching or exact shape.

The numerical representations of the descriptors are stored as binary vectors for shorter access times and easier processing. These feature vectors are used in the decision-making part of the system: the extracted feature vector is the input to a Support Vector Machine (SVM).

We used a nonlinear version of this algorithm with a Gaussian radial basis function (RBF) kernel.

To find the best SVM parameters, we used a simple grid-search algorithm guided by four-fold cross-validation results. The final decision about action or non-action is made based on the SVM result.
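
As an illustration of that training step, a grid search over an RBF-kernel SVM with four-fold cross-validation looks like this; scikit-learn, the synthetic data and the parameter ranges are my stand-ins, not the module's actual implementation.

```python
# Illustrative stand-in for the described training step: an RBF-kernel
# SVM tuned by grid search with four-fold cross-validation.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 142))   # stand-in for the 80+62-element descriptors
y = rng.integers(0, 2, size=400)  # 1 = knife present, 0 = absent

param_grid = {
    "C": [0.1, 1, 10, 100],               # margin softness
    "gamma": [1e-3, 1e-2, 1e-1, "scale"]  # RBF kernel width
}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=4)
search.fit(X, y)

print("best parameters:", search.best_params_)
# Final alarm decision for a new descriptor vector:
alarm = bool(search.best_estimator_.predict(X[:1])[0])
print("raise alarm:", alarm)
```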

All of these individual modules, and many more, can be used either on their own or as components of a larger, more cohesive system, and applied within any aspect of the technologies outlined in the article above. Individual AI modules can be recreated by programmers given enough time, or are already available in the market; where Catena differs from our competitors is in the flow of data between modules. Our system is entirely flow-driven, with all modules interacting and reacting to create an unencumbered, automatically learning entity that can direct its own analysis engines towards virtually any subject matter and understand it.