Tuesday, April 30, 2019

I, For One, Welcome Our New Robot Commanders?

I believe (if my memory is correct) that the Soviets tried to develop computer programs to make tactical decisions for lower-level officers, based on their notion that "scientific socialism" could quantify the decision process and make it predictable (allowing higher echelons of command to have confidence that sound decisions were being made below them). Can AI do the job for American officers?

The Pentagon wants AI to assist human combatants, not replace them. The issue is what happens once humans start taking military advice — or even orders — from machines. ...

Future “decision aids” might automate staff work, turning a commander’s general plan of attack into detailed timetables of which combat units and supply convoys have to move where, when. And since these systems, unlike Aegis, do use machine learning, they can learn from experience — which means they continually rewrite their own programming in ways no human mind can follow.

Sure, a well-programmed AI can print a mathematical proof that shows, with impeccable logic, how its proposed solution is the best, assuming the information you gave it is correct, one expert told the War College conference. But no human being, not even the AI’s own programmers, possesses the math skills, mental focus, or sheer stamina to double-check hundreds of pages of complex equations. “The proof that there’s nothing better is a huge search tree that’s so big that no human can look through it,” the expert said.

Developing explainable AI — artificial intelligence that lays out its reasoning in terms human users can understand — is a high-priority DARPA project.
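To get a feel for the scale that expert is describing, here's a back-of-the-envelope sketch. The branching factors, depths, and review rate are my own toy numbers, not anything from the article; the point is only how fast a decision tree outruns human patience:

# Toy numbers of my own, not the article's: how fast a search tree
# outgrows human review. With b options at each step and d steps deep,
# there are roughly b**d end states to verify.
def tree_leaves(branching: int, depth: int) -> int:
    return branching ** depth

# Assume a tireless reviewer checking one branch per second, nonstop.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for branching, depth in [(5, 10), (10, 15), (20, 20)]:
    leaves = tree_leaves(branching, depth)
    years = leaves / SECONDS_PER_YEAR
    print(f"b={branching:2d}, d={depth:2d}: {leaves:.2e} end states, "
          f"~{years:.1e} years to scan at one per second")

Even the smallest case (five options per step, ten steps deep) is months of nonstop checking; the larger ones are geological time.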

As the article suggests, you don't want the military equivalent of a driver who follows a computer's directions into a lake.

But how does a program explain to humans what the humans don't understand?
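For what it's worth, here is what "lays out its reasoning" looks like in miniature: a toy rule-based recommender entirely of my own invention (none of this is DARPA's actual design) that records the reasons behind its output. The hard problem the article points at is that a real learned system's "reasons" are millions of weights, not three readable rules like these:

# Toy sketch of "explainable" output (my invention, not DARPA's XAI work):
# a rule-based recommender that records why it recommended what it did.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str
    reasons: list = field(default_factory=list)

def recommend_route(enemy_contact: bool, fuel_pct: float,
                    bridge_intact: bool) -> Recommendation:
    rec = Recommendation(action="advance via main road")
    if enemy_contact:
        rec.action = "hold and request recon"
        rec.reasons.append("enemy contact reported on main road")
    if not bridge_intact:
        rec.reasons.append("bridge down; main road unusable")
        if rec.action == "advance via main road":
            rec.action = "advance via ford"
    if fuel_pct < 25:
        rec.action = "halt and refuel"
        rec.reasons.append(f"fuel at {fuel_pct:.0f}% is below 25% threshold")
    return rec

rec = recommend_route(enemy_contact=False, fuel_pct=18.0, bridge_intact=False)
print(rec.action)
for reason in rec.reasons:
    print(" -", reason)

A human can audit that in seconds because the rules were written by hand. A system that learned its own rules has no such short list to print, which is the whole problem.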

Do read it all.

And I'd love to see how an AI would run the complicated Japanese plan for the Battle of Midway. Could AI have exploited a level of complexity that was beyond the ability of human commanders to manage?

Related thoughts on simulating an enemy AI. My first thought is that you have to decide what advantages AI brings, and perhaps look to simulate the gap between AI and humans rather than simulating the AI itself.

For example, if AI allows for faster decisions, slow down the rate at which the human players can make decisions to reflect the gap. The gap is the issue, not the absolute speeds of humans or AI, right?
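A crude way to prototype that idea (this is just my own sketch of the mechanic, nothing from the linked piece): run a turn-based duel between two identical forces, where the only asymmetry is how often each side may update its targeting decision, then vary the gap to see how much decision latency alone is worth:

# Toy sketch of the "decision gap" mechanic (my own illustration):
# two identical sides fight; the slower side keeps firing at a stale
# target in between its decision points.
import random

def duel(fast_every: int, slow_every: int,
         trials: int = 2000, seed: int = 1) -> float:
    rng = random.Random(seed)
    fast_wins = 0
    for _ in range(trials):
        fast_units = [10, 10, 10]   # three units of strength 10 per side
        slow_units = [10, 10, 10]
        fast_target = slow_target = 0
        for turn in range(1, 201):
            # Each side may retarget only on its own decision interval;
            # both concentrate fire on the weakest surviving enemy unit.
            if turn % fast_every == 0 and any(u > 0 for u in slow_units):
                fast_target = min(range(3),
                    key=lambda i: slow_units[i] if slow_units[i] > 0 else 99)
            if turn % slow_every == 0 and any(u > 0 for u in fast_units):
                slow_target = min(range(3),
                    key=lambda i: fast_units[i] if fast_units[i] > 0 else 99)
            # Both sides fire; shots at already-dead targets are wasted.
            if slow_units[fast_target] > 0:
                slow_units[fast_target] -= rng.randint(0, 3)
            if fast_units[slow_target] > 0:
                fast_units[slow_target] -= rng.randint(0, 3)
            if not any(u > 0 for u in slow_units):
                fast_wins += 1
                break
            if not any(u > 0 for u in fast_units):
                break
    return fast_wins / trials

for gap in (1, 2, 4, 8):
    print(f"slow side re-decides every {gap} turn(s): "
          f"fast side wins {duel(1, gap):.0%}")

With identical forces and weapons, widening the decision interval alone swings the win rate, which is the gap a wargame designer would want players to feel.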

Although I'm sure the people working on this have thought of far, far more than my first impressions turn up.