
We are much closer than you think to letting AI launch the missiles; there is a better way

The rapid development of AI has created a new risk that would have been unimaginable only a few years ago. It comes from autonomous AI weapons that will be allowed to kill humans on their own. Today, aerial drones already pick targets at their own discretion. Tomorrow, if we allow it, independent AI will be able to launch missiles. This thought keeps me awake at night.

I hold a Ph.D. in materials science, specializing in the fabrication of microelectronics. Currently, I apply these skills to building devices that work like neural networks; in other words, I develop lifelike artificial brains for autonomous weapons.

Lifelike AI was supposed to have a civilian use. It was intended for autonomous means of production, which would increase labor productivity and raise the profitability of the manufacturing sector, helping to cope with a declining working-age population.

Today, autonomous AI is also being considered for military purposes, because artificial brains can independently control aerial and submersible drones and can replace humans in armored fighting vehicles. The military believes that autonomous weapons can open new theaters of war and stimulate the development of new methods of warfare.

The big question now is whether autonomous weapons should be authorized to use deadly force at their sole discretion. I think that would be a terrible thing to allow, because no one, including these machines’ creators, can predict how artificial lifelike brains will behave in every situation.


First, the structure of any brain capable of analyzing natural environments is too complex to fully comprehend. Its workings involve too many variable parameters to build an exact mathematical model that predicts its behavior.

Second, the structure of the brain changes during operation under the influence of unknown external factors. Autonomous AI systems must be adaptable, but this means they can learn undesirable behaviors while operating in rapidly changing environments. Even worse, malicious actors could deliberately teach them the wrong lessons without our knowledge.

The creators of these lifelike brains are now in a tough situation. We have difficulty designing autonomous systems with safe behaviors, and we cannot predict how these systems will change in the course of operation. At the same time, we are being pressured to put them on the battlefield in an unsafe manner that may cause more deaths.

Is there a safe way to deal with autonomous AI? In my opinion, yes. Under normal conditions, lifelike AI should be developed through evolution and artificial selection. It is a slow process, in which manufacturers fabricate a large variety of different AI species, put them in various situations, and pick the right ones for the right jobs. The whole process must take place in a secure environment. Basically, when we cannot trust our own creations, we must play a god-like role, letting them run in the wild, providing the conditions for the survival of the fittest, and selecting the good ones for further use and multiplication.

Here is the most important point: The best habitat for this artificial wildlife is virtual reality. Only the righteous ever see the real world. And even these chosen ones should be subjected to additional periodic testing, to ensure that they still work as intended.
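
For readers who want a concrete picture, the selection process I have in mind resembles the loop below. It is only a minimal conceptual sketch in Python, with an invented toy scoring task standing in for the simulated environment; it is not the actual software my colleagues or I use. The point it illustrates is simple: every candidate is evaluated only inside the virtual habitat, and only the fittest are copied, with small variations, into the next generation.

import random

# Toy illustration (not the author's actual system): candidate controllers
# are scored only inside a simulated trial, and only the fittest survive to
# be copied, with small mutations, into the next generation.

POPULATION_SIZE = 50
GENERATIONS = 20
SURVIVORS = 10
MUTATION_SCALE = 0.1

def simulate(controller):
    """Stand-in for a virtual-reality trial: score one candidate.

    Here a 'controller' is just a short list of numbers and the task is to
    match a fixed target; a real system would run the candidate through many
    simulated scenarios and score its behavior in each.
    """
    target = [0.5, -0.2, 0.8]
    error = sum((p - t) ** 2 for p, t in zip(controller, target))
    return -error  # higher score = closer to the desired behavior

def mutate(controller):
    """Copy a surviving candidate with small random changes."""
    return [p + random.gauss(0, MUTATION_SCALE) for p in controller]

# Start with a random population of candidate controllers.
population = [[random.uniform(-1, 1) for _ in range(3)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Every candidate is evaluated inside the secure, simulated environment.
    ranked = sorted(population, key=simulate, reverse=True)
    survivors = ranked[:SURVIVORS]
    # Only survivors are multiplied; nothing is deployed to the real world here.
    population = [mutate(random.choice(survivors)) for _ in range(POPULATION_SIZE)]

best = max(population, key=simulate)
print("best candidate after selection:", best)

The periodic re-testing mentioned above amounts to running the same kind of simulated trials again on candidates already in service, and retiring any that no longer pass.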

I strongly doubt that the development of autonomous weapons can be effectively restricted; they will be created anyway. It would be more prudent to develop sensible safety regulations. The most important safeguard, then, is to thoroughly test all forms of autonomous AI in secure environments. If we follow this simple rule, we will avoid many complications, and perhaps tragedies, while still achieving remarkable results.

Dmitry Kukuruznyak is a researcher developing autonomous AI systems for military applications.