The policy is ahead of the tooling
In February 2020, the Department of Defense adopted five ethical principles for artificial intelligence: Responsible, Equitable, Traceable, Reliable, and Governable. The Responsible AI (RAI) Toolkit followed, requiring explainability for high-risk AI decisions and mandating that 'no AI solution will be operationalized by the DOD without explainability.' Traceability requires documenting all data and decisions, including training data, processing methods, and outputs.
These are good principles. The problem is that the tooling to implement them at the tactical edge does not exist in any fielded system. Enterprise AI platforms can log model versions and query histories in cloud databases. Edge platforms running on disconnected laptops and phones have no equivalent infrastructure for model provenance, inference audit trails, or decision chain reconstruction.
The gap between policy and operational tooling is not hypothetical. It manifested in the Maven/Anthropic crisis of 2026, and it will manifest again every time a program office tries to field AI at the edge without a governance architecture.
The Maven/Anthropic supply chain lesson
Palantir's Maven Smart System relied heavily on Anthropic's Claude for intelligence analysis and weapons targeting workflows. In early 2026, the Pentagon mandated removal of all Anthropic AI products from military systems within 180 days. Palantir now has to replace Claude with another model and rebuild the parts of the software that depend on it.
The Pentagon, in effect, created a situation in which its flagship AI system depended on technology it subsequently declared a supply chain risk. The result is a program of record with 20,000+ users across three classification domains that now has to swap its core AI engine on an aggressive timeline.
This happened because Maven's architecture treats the model as a fixed dependency rather than a swappable component. When the model vendor becomes unacceptable, the entire system is at risk. There is no model abstraction layer. There is no governance framework that makes model replacement a routine operation rather than a crisis.
Any edge AI platform that hard-codes a single model vendor into its architecture will eventually face the same problem. Vendor relationships change. Export restrictions change. Model capabilities change. The platform has to be designed so that swapping models is an administrative action, not an engineering project.
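As a minimal sketch of what that abstraction looks like in practice, with hypothetical names throughout: the platform codes against a vendor-neutral interface, never against a vendor SDK, so retiring a vendor becomes a registry change rather than a rewrite.

```python
# Minimal sketch of a model abstraction layer (hypothetical names throughout).
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class InferenceResult:
    text: str
    confidence: float
    model_id: str          # recorded so every output is traceable to a model


class InferenceModel(ABC):
    """Vendor-neutral contract every deployed model must satisfy."""

    @abstractmethod
    def infer(self, prompt: str) -> InferenceResult: ...


class VendorAModel(InferenceModel):
    def __init__(self, model_id: str):
        self.model_id = model_id

    def infer(self, prompt: str) -> InferenceResult:
        # The vendor-specific SDK call lives here, behind the interface.
        return InferenceResult(text="...", confidence=0.9, model_id=self.model_id)


# Swapping vendors is a one-line registry change, not an engineering project.
MODEL_REGISTRY: dict[str, InferenceModel] = {
    "analysis": VendorAModel("vendor-a/analyst-v3"),
}
```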
What model governance means at the edge
Model governance at the enterprise level is well understood. You track model versions in a registry. You log inference queries in a database. You maintain training data provenance. You run evaluation benchmarks before deployment. These processes assume network connectivity, centralized infrastructure, and administrative staff.
At the tactical edge, none of those assumptions hold. The node is disconnected. There is no centralized database to log queries. There is no admin team monitoring model performance. The operator may be running a model that was deployed via physical courier three weeks ago on a device that has not connected to any network since.
Edge model governance requires a different architecture. The model catalog has to travel with the node, not live in a cloud registry. The inference audit trail has to be recorded locally, not streamed to a central database. The model provenance, including version, approval authority, export restrictions, and approved use classifications, has to be verifiable on the device without network access.
EdgeLance's product spine provides this. Every model deployed to an EdgeLance node carries its provenance metadata: version, approval status, approval authority, export classification, approved device types, and approved mission types. The product spine records which loadout was deployed to which node for which mission via which delivery method. When the node connects to fleet management, this data syncs. When it does not connect, the provenance data is still on the device and auditable locally.
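A sketch of what a locally verifiable provenance record might look like. The field names mirror the prose above; the signing scheme (Ed25519 over the record's canonical JSON, checked against an approval-authority public key that ships with the node image) is an illustrative assumption, not EdgeLance's documented format.

```python
# Hypothetical provenance record, verifiable on-device with no network access.
import json
from dataclasses import dataclass, asdict

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


@dataclass(frozen=True)
class ModelProvenance:
    model_id: str
    version: str
    approval_status: str        # e.g. "approved", "revoked"
    approval_authority: str
    export_classification: str
    approved_device_types: tuple[str, ...]
    approved_mission_types: tuple[str, ...]


def verify_offline(record: ModelProvenance, signature: bytes,
                   authority_key: Ed25519PublicKey) -> bool:
    """Check the record against the approval authority's public key.

    The key ships with the node image, so no network access is needed.
    """
    canonical = json.dumps(asdict(record), sort_keys=True).encode()
    try:
        authority_key.verify(signature, canonical)
        return True
    except InvalidSignature:
        return False
```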
Mission-specific loadouts as a governance mechanism
General-purpose model deployment is the enemy of model governance. When every node runs the same model permanently, you have one configuration to audit but no ability to tailor capability to the mission, no way to restrict sensitive models to appropriate classification levels, and no mechanism to wipe specialized models after the mission that required them.
EdgeLance packages models and RAG (retrieval-augmented generation) reference libraries into mission-specific loadouts. Before an operation, the mission planner selects the detection model, language model, transcription model, and reference material appropriate for the mission type, threat environment, and classification level. The loadout is approved through the product spine governance process and deployed to target nodes.
During the mission, the loadout is the AI configuration. After the mission, it can be wiped: models, RAG content, and cached inferences are all purged from the node. Classified reference material does not persist beyond the operation that required it. The next mission starts with a fresh, purpose-built loadout.
This addresses multiple governance requirements simultaneously. Model appropriateness: the right model for the right mission. Classification enforcement: sensitive models restricted to appropriate classification levels. Data minimization: reference material does not accumulate on edge devices over time. Auditability: the product spine records exactly which loadout ran on which node for which mission.
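As a sketch, a loadout manifest might look like the following. The structure follows the lifecycle described above; all identifiers and field names are hypothetical.

```python
# Hypothetical loadout manifest: assembled per mission, approved through the
# product spine, wiped after the operation.
from dataclasses import dataclass, field


@dataclass
class Loadout:
    mission_id: str
    classification: str                 # e.g. "SECRET"
    detection_model: str
    language_model: str
    transcription_model: str
    rag_libraries: list[str] = field(default_factory=list)
    approved: bool = False              # set by the product spine approval gate

    def wipe(self) -> None:
        """Post-mission purge: models, RAG content, and cached inferences."""
        self.rag_libraries.clear()
        # On a real node this would securely erase model weights and caches;
        # here it only models the lifecycle step.


sample = Loadout(
    mission_id="OP-EXAMPLE-01",         # illustrative identifier
    classification="SECRET",
    detection_model="detector/vehicles-v7",
    language_model="llm/field-analyst-v2",
    transcription_model="asr/multilingual-v4",
    rag_libraries=["ref/threat-catalog", "ref/area-study"],
)
```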
Evidence coupling closes the explainability requirement
The RAI Toolkit's explainability requirement is difficult to implement on most edge platforms because AI outputs are disconnected from the sensor data that produced them. An alert on a map says 'hostile contact.' The audit trail says the detection model produced a classification with a certain confidence. But the camera frame, the RF signature, the NFC check result, and the behavioral analysis that informed the classification are stored separately, if they are stored at all.
EdgeLance couples every AI output to its source evidence as a core architectural feature. A threat assessment references the specific camera frame, the correlated RF/TPMS signatures, the NFC IFF check result, the acoustic data, the detection confidence from each sensor modality, and the entity's track history. The operator sees the evidence, not just the conclusion.
This coupling serves three purposes. Operational: the operator can verify, challenge, and override AI recommendations with documented rationale. Legal: the decision chain from sensor input through AI assessment through operator action is preserved for JAG review or AR 15-6 investigation. Institutional: after-action review can examine not just what happened but why the AI recommended one course of action and the operator chose another; that comparison is the data that makes AI systems improve over time.
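A sketch of the data structure this coupling implies: each assessment carries references to its per-modality evidence and records the operator's decision and rationale. Field names follow the prose; the representation is assumed, not EdgeLance's actual schema.

```python
# Hypothetical evidence-coupled assessment record.
from dataclasses import dataclass, field


@dataclass
class SensorEvidence:
    modality: str          # "camera", "rf", "tpms", "nfc_iff", "acoustic"
    artifact_ref: str      # local path or content hash of the raw capture
    confidence: float      # per-modality detection confidence


@dataclass
class ThreatAssessment:
    entity_track_id: str
    classification: str                      # e.g. "hostile contact"
    evidence: list[SensorEvidence] = field(default_factory=list)
    operator_decision: str | None = None     # verify / challenge / override
    operator_rationale: str | None = None    # preserved for after-action review
```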
The compute policy as a governance boundary
Model governance is not only about which model runs. It is also about where inference executes. In a multi-classification environment, a SECRET model processing classified imagery cannot send that data to a cloud endpoint, even if the cloud has a better model available. The compute routing decision is a classification decision.
EdgeLance's compute policy engine enforces this boundary. Inference routing modes include local-only (all processing on-device), base GPU (offload to a trusted local server), cloud-augmented (use approved cloud resources when policy and connectivity allow), classification-gated (restrict routing based on data classification), and DDIL-only (local processing only, no network transmission of inference data under any circumstances).
The compute policy is set per mission loadout, per classification level. A UNCLASS training loadout might allow cloud augmentation. A SECRET operational loadout restricts to local-only. The policy is enforced at the platform level, not left to operator discipline. This is the kind of governance boundary that enterprise platforms handle through network segmentation. At the edge, where there is no network to segment, it has to be enforced on the device itself.
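A sketch of classification-gated routing under these assumptions: the mode names come from the list above, the policy table keys on classification and mission type, and anything without an explicit entry defaults to DDIL-only, so the gate fails closed. The enforcement logic itself is illustrative, not EdgeLance's implementation.

```python
# Hypothetical per-loadout, per-classification compute policy enforcement.
from enum import Enum


class ComputeMode(Enum):
    LOCAL_ONLY = "local_only"
    BASE_GPU = "base_gpu"
    CLOUD_AUGMENTED = "cloud_augmented"
    DDIL_ONLY = "ddil_only"


# Policy table keyed on (classification, mission type); values illustrative.
POLICY = {
    ("UNCLASS", "training"): ComputeMode.CLOUD_AUGMENTED,
    ("SECRET", "operational"): ComputeMode.LOCAL_ONLY,
}


def route_inference(classification: str, mission_type: str,
                    connectivity: bool) -> ComputeMode:
    # Deny-by-default: no policy entry means no network transmission at all.
    mode = POLICY.get((classification, mission_type), ComputeMode.DDIL_ONLY)
    if mode is ComputeMode.CLOUD_AUGMENTED and not connectivity:
        return ComputeMode.LOCAL_ONLY   # degrade gracefully when disconnected
    return mode
```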
From principles to fielded capability
The DoD's five AI ethical principles are the right principles. The RAI Toolkit asks the right questions. The Maven/Anthropic crisis demonstrated what happens when the tooling does not match the policy.
EdgeLance provides the operational tooling that converts governance principles into fielded capability: a model catalog with provenance tracking, mission-specific loadouts with approval gates, evidence-coupled AI with operator override, compute policy enforcement at the device level, and a product spine that maintains the audit chain from model approval through deployment through mission execution through after-action review.
This is not governance as a compliance exercise bolted onto an existing platform. It is governance built into the architecture from the start, designed to work on disconnected devices in contested environments where the operator cannot phone home to a cloud registry to check whether their model is approved. That is where governance matters most and where current tooling fails.