Destroyers (DDGs). Frigates (FFGs). Aircraft carriers (CVNs). The modern U.S. Navy is built around a historically small force of these highly advanced vessels, which patrol the world’s waterways to project maritime superiority and deter adversarial action. For a number of reasons, including the impractical cost and lengthy time needed to replace these legacy platforms in conflict, the Navy is shifting to a “hybrid force” that augments the fleet with distributed, unmanned assets.
What does that shift mean in practice? Take the example of medium unmanned surface vessels (USVs). A fleet of 10 medium USVs can match the missile capacity of a DDG. But because that capacity is distributed, an adversary would need at least 10 times as many strikes to neutralize the threat. And unlike a manned destroyer, those USVs carry no crew, so losing one puts no sailors at risk.
The combination of manned and unmanned platforms is referred to as a hybrid fleet, and this emerging force design will require new forms of collaboration between robotic systems and their manned counterparts. The algorithms that govern communication and decision-making, which must remain resilient in both permissive and contested environments, are a critical software-defined challenge to solve.
This blog post will examine that software-defined collaboration, called “collaborative autonomy,” and explore how simulation is key to testing, integrating, and scaling unmanned systems into fleet operations.
The Challenge of Collaborative Autonomy
Collaborative autonomy is deceptively challenging. The USV example above shows how distributed unmanned forces inherently involve a scaling factor. But as the size of the swarm grows, getting all of those vehicles to collaborate becomes exponentially harder. Why is that?
Autonomous swarms represent a distributed system, connected by a network of data links that is imperfect by the nature of wireless communication. At a given time, each vehicle can be assumed to have only partial knowledge of its surroundings and other vehicles, due to communication delays or degradation from latency, bandwidth limits, dropouts, or adversarial effects. Because autonomy software cannot rely on perfect situational awareness, engineers must bake in a rigorous ability to make decisions under uncertainty. As conditions change, the software must reason through incomplete or potentially conflicting information.
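One common way to reason under this uncertainty is to timestamp every update received from a peer and discount it as it goes stale. The sketch below illustrates the idea under stated assumptions; the `PeerTrack` structure, the exponential decay, and the 5-second half-life are all hypothetical choices for illustration, not details of any fielded autonomy stack.

```python
from dataclasses import dataclass

@dataclass
class PeerTrack:
    """Last-known state of another vehicle, as received over the data link."""
    position: tuple          # (x, y) in meters, local frame
    received_at: float       # local clock time the update arrived

def confidence(track: PeerTrack, now: float, half_life_s: float = 5.0) -> float:
    """Discount a peer's last-known state as it goes stale.

    Confidence decays exponentially with the age of the update, so a
    vehicle naturally down-weights peers it has not heard from recently
    (due to dropouts, latency, or jamming) when planning its next move.
    """
    age = max(0.0, now - track.received_at)
    return 0.5 ** (age / half_life_s)
```

A planner can then weight each peer's reported position by this confidence, falling back to conservative behavior (wider standoff distances, slower maneuvers) when the swarm's shared picture has decayed.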

Testing these swarms at scale follows a similarly steep curve in difficulty. Every new vehicle or vendor adds integration overhead as well as O(n²) lines of communication, where n is the number of vehicles being integrated. A software bug or API mismatch can quickly become a bottleneck in overall coordination. Additionally, the financial and personnel cost to test dozens of vehicles at once is out of reach for all but the most well-funded organizations.
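The O(n²) growth follows directly from counting pairs: a fully connected swarm of n vehicles has n(n−1)/2 point-to-point links, each of which is an interface that can fail and must be tested.

```python
def pairwise_links(n: int) -> int:
    """Number of point-to-point communication links in a fully
    connected swarm of n vehicles: n choose 2, i.e. n*(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the swarm roughly quadruples the links to integrate and test:
# pairwise_links(10) -> 45, pairwise_links(20) -> 190
```

A 10-boat test event already involves 45 distinct links; scaling to 20 boats more than quadruples that surface area, which is why integration, not any single vehicle, tends to become the bottleneck.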
Finally, live testing is limited by fixed constraints: the time of day, weather conditions, maximum sea state, and commercial maritime traffic that must be worked around. It isn't uncommon to see teams of engineers and operators stuck at a remote test range with little to do because conditions prohibit testing. That's an expensive delay.
Even if money were unlimited, iterating on software only through the feedback loop of live testing is simply too slow to keep pace with evolving threats. So what would it take to shift testing toward a digital-first approach?
Modeling Multi-domain Autonomy in Maritime Environments
Developing a digital twin of the ocean for testing USV autonomy requires modeling a large feature space. Realistic motion and decision-making must be computed at both the vessel level and for individual payloads, such as a swiveling pan-tilt camera.
The table below shows a non-exhaustive list of simulation features required to test collaborative autonomy onboard USVs.
On top of this long list of phenomena to be modeled, a hidden but essential challenge is computing all of them with low latency. Onboard autonomy software typically has just milliseconds to process inputs, decide, and act, and therefore its simulator must generate data at the same pace. If it cannot maintain real-time performance, the results may not reflect operational reality. Real-time generation of hydrodynamic effects, RF propagation, and sensor degradations is an essential but extremely difficult aspect of digital autonomy development.
Building all of these components also requires a multidisciplinary team. For instance, hydrodynamicists are needed to model wave forces and vessel motion. Sensor modeling teams must verify that fog leads to a drop in thermal camera contrast and attenuates lidar returns. 3D content artists build ports, coastlines, and vessels with accurate radar cross-sections. And simulation engineers integrate all of this into a single, high-performance product with a friendly user interface to design, run and analyze simulations. No small feat!
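As a flavor of what the sensor-modeling work involves, atmospheric attenuation of an active optical sensor like lidar is often approximated with a Beer-Lambert falloff. The function below is a simplified sketch under that assumption; `extinction_per_m` is a hypothetical fog-density parameter, and production sensor models are considerably richer.

```python
import math

def fog_transmittance(range_m: float, extinction_per_m: float) -> float:
    """Beer-Lambert attenuation of a lidar return through fog.

    A lidar pulse traverses the path twice (out and back), so its
    intensity falls off with 2x the one-way path length. Denser fog or
    longer range yields a weaker return; once the return drops below
    the detector threshold, the sensor model drops the point entirely.
    """
    return math.exp(-2.0 * extinction_per_m * range_m)
```

A validated model lets testers sweep fog density in simulation and verify, for example, that an ATR module degrades gracefully as lidar returns thin out, instead of waiting for a foggy day at the range.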
Dual-use Technology: The Path to Viable Sim
It turns out that the core simulation technology (a low-latency simulation engine and generic sensor models that can be tuned to a particular design) is largely identical across domains as diverse as maritime and on-road autonomy. Consider that many of the core features described above have analogues in other domains:
Examples of analogous features across two very different domains for unmanned systems
Because these challenges are not unique to the ocean, Applied Intuition was able to build an initial solution to simulate USVs realistically in a matter of weeks. And because Applied Intuition invests strategically across multiple domains, features developed for one use case often transfer seamlessly to another. Certainly there are new aspects like hydrodynamics and littoral environments, but the core requirements and infrastructure remain the same: high-fidelity environment modeling, accurate sensor simulation, and real-time performance that keeps pace with the autonomy stack, all behind friendly APIs and interfaces.
Axion is exactly that solution, built on the same simulation technology that Applied Intuition uses to accelerate its commercial customers in heavy industries such as automotive, trucking, mining, construction, and agriculture. Axion has been extended to test USV autonomy, from critical submodules like automated target recognition (ATR) to end-to-end collaborative autonomy stacks in which USVs and UAVs work together to execute mission commands.

Axion users can integrate autonomy for testing at the payload or platform level, including different software packages for multiple vehicles in one test. Simulation can be built and run ad-hoc in a desktop environment or scaled up to 24/7 regression testing over a standard rack of GPU-enabled servers.
Axion for maritime autonomy has already been adopted by a number of leading maritime autonomy companies, including Saronic and Scientific Systems, Inc., as well as the U.S. Navy, the Defense Innovation Unit (DIU), and the Chief Digital and Artificial Intelligence Office (CDAO) within the Department of Defense.

As collaborative autonomy advances, the primary challenge will shift toward complex hardware-software integration. New USV classes, cross-domain teaming, and real-time communications layers will succeed only after continuous iteration and evaluation of how these systems interact under realistic and stressful conditions.
The decisive factor will be rapid systems integration, and simulation is the essential tool for achieving it. The side that can integrate and adapt fastest will prevail, and a digital-first evaluation framework is the only path to feedback at the tempo of a modern battlefield.