The Army is integrating artificial intelligence tools to help write its doctrine, the service announced Wednesday.
The Combined Arms Doctrine Directorate (CADD), the Army organization responsible for producing the foundational publications that guide how Soldiers operate, is training doctrine writers to "immediately apply approved AI tools to their work," including idea generation, according to a service press release.
The military has aggressively adopted large language models across virtually every part of the force, as Pentagon officials tout the emerging technology as a boon to operations. Experts have widely warned of AI's risks both to society and in military use, including its tendency to fabricate, or "hallucinate," information, which risks eroding public trust in institutions.
In the statement, the Army acknowledged AI’s “critical flaws in an area where accuracy is paramount,” particularly its propensity to invent facts or “confuse sources,” but said it was improving over time and would not be used as a “crutch” for doctrine writers.
In one case, the Army noted that an AI tool used an outdated manual when writers were developing a doctrine test, “an error that was only caught because the user who created the test was an expert” on the subject.
"You treat it like a resourceful, motivated young officer who may not know all the information, but can certainly help you take shortcuts and be a little more efficient," said Lt. Col. Scott McMahan, a military doctrine editor, according to the release.
The service said changes have been minimal so far, but writers have used the technology to search "hundreds of texts for historical vignettes that illustrate a complex doctrinal point" to save time.
“The large language model tools under development now have access to databases that we needed access to in the past,” said Richard Creed, Jr., director of CADD. “Access to data is the fundamental measure of whether the tools are useful to us.”
It was unclear what data the unnamed AI tools were trained on. Information security is a major concern among experts in this field. DefenseScoop recently reported that the military's top authority on explosive ordnance disposal technology has warned its bomb technicians against uploading highly sensitive technical material into AI models, including Pentagon platforms.
AI tools are also intended to assist with grammar, readability, and idea generation in doctrine writing.
“We were looking for a little more meat for an idea,” McMahan said. “We were able to feed this tool with some early thoughts, and out of the three paragraphs it spit out, one sentence was used, but it was a really powerful and useful sentence.”
Officials developed a "four-pronged strategy" to train Army doctrine writers on AI capabilities, according to the service. The plan includes onboarding a "master gunner," someone already trained in using large language models, to help writers adopt the technology.
“We have been clear that AI tools are not intended to be a crutch for not doing the work we expect of our people,” Creed said. “Humans will examine every line of what an LLM produces for accuracy. Ensuring that happens requires making sure your employees know their business.”
