Abstract: Artificial Intelligence is transforming warfare on land, in the air, and above all—for this paper—at sea. AI is not only enhancing non-kinetic dimensions, such as information dominance, cyber operations, and decision-making processes, but is also amplifying the effectiveness of kinetic means. Recent conflicts have demonstrated how AI-enabled systems are used to deter, degrade, and destroy enemy assets, reshaping both the physical and virtual battlefields. This change necessitates a new perspective on military operations, where AI and digital technology are seamlessly integrated from planning to logistics.
Problem statement: Is AI modifying the international maritime scenario?
So what?: Expecting universal adherence to ethical frameworks in the use of AI is unrealistic. Strategic cultures differ widely, and authoritarian states such as the PRC, Russia, and the DPRK have shown little restraint in deploying autonomous systems, prioritising operational advantage over ethical concerns.

Revolutionising Naval Warfare and Operational Efficiency
The military integration of artificial intelligence (AI) stems from the need to overcome the traditional limitations of kinetic weapon systems.[1] This integration facilitates a synergistic interaction between manned and unmanned assets, enabling rapid decision-making in highly volatile environments.[2]
Contemporary AI units employ high-performance processors (often based on GPUs or architectures specialised for deep learning), which, together with convolutional neural networks (CNNs), process sensor data in real-time. While AI provides recommendations and operational scenarios, human judgment remains indispensable for evaluating and implementing decisions in complex and ambiguous settings. This approach maximises both elements’ strengths by integrating machines’ computational capacity with human experience and creativity. For instance, Human-in-the-Loop (HITL) systems enable operators to retain full control over target selection, ensuring ethical compliance and situational awareness. In Human-on-the-Loop (HOTL) configurations, artificial intelligence performs real-time data processing and threat identification autonomously, while the human operator supervises and can override decisions if necessary. This enables rapid responses in high-stakes environments while maintaining human oversight.
However, Human-out-of-the-Loop (HOOTL) systems operate autonomously without direct human input, relying on pre-programmed parameters and sensor-driven targeting algorithms. While these offer great speed and efficiency, their deployment raises significant ethical and legal concerns, highlighting the need for a well-calibrated balance between autonomy and human control.[3]
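The difference between the three oversight configurations can be made concrete with a minimal sketch. The names, thresholds, and decision rule below are illustrative assumptions for exposition, not drawn from any fielded system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class OversightMode(Enum):
    HITL = auto()    # Human-in-the-Loop: every engagement needs approval
    HOTL = auto()    # Human-on-the-Loop: system acts unless vetoed
    HOOTL = auto()   # Human-out-of-the-Loop: pre-programmed parameters only

@dataclass
class Track:
    track_id: str
    classification: str  # e.g. "hostile", "unknown", "friendly"
    confidence: float    # classifier confidence in [0, 1]

def engagement_permitted(track, mode, human_approves=False, human_vetoes=False,
                         confidence_threshold=0.9):
    """Return True if the system may engage `track` under the given mode."""
    machine_recommends = (track.classification == "hostile"
                          and track.confidence >= confidence_threshold)
    if not machine_recommends:
        return False               # no mode engages without a recommendation
    if mode is OversightMode.HITL:
        return human_approves      # positive human authorisation required
    if mode is OversightMode.HOTL:
        return not human_vetoes    # proceeds unless the operator vetoes in time
    return True                    # HOOTL: pre-programmed parameters alone decide
```

Even in this toy form, the structural point is visible: HITL fails closed (no action without approval), HOTL fails open (action unless vetoed), and HOOTL removes the human variable entirely.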
Implementing either type of system requires significant resources—both human and material. From an organisational standpoint, it is essential to invest in specialised training programmes designed to update and expand the skills of military personnel. Training must focus on digital technologies and the ability to effectively interface with AI systems, as well as manage scenarios in which rapid response is critical. In addition to training, an appropriate technological infrastructure is necessary, including dedicated data centres, secure communication networks, and advanced cybersecurity systems, to ensure the safety and reliability of AI-based weapon systems.
In recent years, numerous practical examples have demonstrated the effectiveness and potential of AI in naval systems. A prime example is the use of intelligent drones capable of operating in electronic warfare (EW)-constrained environments, as observed in the war in Ukraine.[4] Although AI-operated drones have demonstrated enhanced resilience against electronic countermeasures compared to traditional systems, this resilience is not absolute and depends on several critical factors. Chief among these is the degree of autonomy engineered into the system. Semi-autonomous or fully autonomous drones can maintain functionality—by relying on onboard decision-making algorithms—even when GPS signals are jammed or communications are disrupted. This allows drones to execute missions without human control and reduces vulnerability to conventional EW tactics.[5]
Another important factor is the presence of system redundancy. Advanced drones integrate multiple navigation and communication systems that operate across different frequencies or rely on alternative mechanisms, such as inertial navigation systems or fibre-optic data links.[6] This layered architecture ensures that the drone can continue to function effectively even if one channel is compromised, increasing operational resilience.
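The layered-fallback idea can be sketched in a few lines. The priority order here (satellite navigation, then a fibre-optic datalink, then inertial dead reckoning) and the data layout are assumptions for illustration, not any specific platform's architecture:

```python
def select_navigation_source(sources, priority=("gnss", "fibre_link", "inertial")):
    """Return the first healthy source in priority order, else fail safe.

    `sources` maps a source name to a dict with a 'healthy' flag and a
    'position' estimate; the names and structure are illustrative only.
    """
    for name in priority:
        source = sources.get(name)
        if source is not None and source["healthy"]:
            return name, source["position"]
    # Every channel compromised: a real system would switch to a
    # pre-planned loiter or abort behaviour rather than return None.
    return "none", None
```

The resilience described above comes from the ordering: jamming the satellite channel merely demotes the platform to its next self-contained source instead of disabling it.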
The integration of active countermeasures—such as real-time threat detection and adaptive frequency hopping communication protocols—has further strengthened the defensive capabilities of AI-driven drones. Such systems have demonstrated the ability to navigate, detect, and engage targets without external input, even in environments saturated with electronic interference.[7]
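Frequency hopping of this kind can be illustrated with a toy scheme in which both ends derive each timeslot's channel from a shared secret, so the hop sequence never has to be transmitted and cannot be predicted without the key. This is a sketch of the principle only, not any fielded waveform:

```python
import hashlib

def hop_channel(shared_key: bytes, timeslot: int, n_channels: int = 64) -> int:
    """Derive the channel for a given timeslot from a shared secret.

    Both transmitter and receiver run the same computation against a
    synchronised clock, so they land on the same channel each slot while
    a jammer without the key sees an unpredictable sequence.
    """
    digest = hashlib.sha256(shared_key + timeslot.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % n_channels
```

A narrowband jammer must then either guess the next hop or spread its power across all 64 channels, diluting its effect per channel.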
However, AI-operated drones remain susceptible to sophisticated electronic attacks. Techniques such as adversarial signal injection, where false data inputs manipulate AI behaviour, or cyber-intrusions into command algorithms, highlight the ongoing need for layered security approaches.[8]
While these innovations are enhancing the resilience of unmanned systems, it is important to note that they are neither mature nor universally effective. The war in Ukraine clearly illustrates that, despite the deployment of countermeasures and AI-based navigation, drones still experience significant disruption under intense electronic warfare conditions. Therefore, this analysis does not contradict the realities observed on the battlefield; rather, it emphasises the technological trajectory and the evolving efforts to mitigate vulnerabilities, which are still being tested and refined in real-world operational environments.
In parallel, the navies of various nations are experimenting with the use of remotely controlled naval vehicles—both surface and subsurface[9]—that integrate AI-based systems to perform surveillance, gather information, and conduct targeted attacks. Such systems are designed to operate autonomously, reducing the exposure of human personnel to dangerous situations and increasing overall operational efficiency.
The challenges and opportunities of integrating AI into decision-making processes are not exclusive to naval forces but extend across all domains of modern warfare. However, the focus of this paper is on the maritime domain, where these dynamics take shape within platform-centric operations and naval-specific command and control structures. An illustrative case is the application of AI in the “Naval Tactical Kill Chain,” a decision-making process that, through data analysis and the recognition of behavioural patterns, optimises the phases of observation, orientation, decision, and action (OODA) loop in maritime operations.[10] Studies conducted at institutions such as the Naval Postgraduate School have mapped specific AI methods to support these functions, reducing uncertainty and improving reaction times in combat scenarios.[11]
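The OODA structure underlying the kill chain can be sketched as a simple pipeline in which each phase is a pluggable function. This is purely illustrative of the control flow, not a representation of any actual tactical system:

```python
def ooda_step(observe, orient, decide, act, sensor_feed):
    """Run one pass of the OODA loop: raw sensor feed -> executed action."""
    observations = observe(sensor_feed)   # Observe: collect raw contacts
    picture = orient(observations)        # Orient: fuse into a tactical picture
    action = decide(picture)              # Decide: choose a course of action
    return act(action)                    # Act: execute (or hold)
```

Framed this way, the AI methods mapped by the Naval Postgraduate School studies slot into individual phases (e.g. pattern recognition in Orient, decision support in Decide) while the loop itself, and the human's place in it, stays unchanged.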
Automation and the use of complex algorithms raise significant ethical and security concerns, including the potential for errors in decision-making systems, dependence on digital infrastructures, and vulnerability to cyberattacks. Studies have shown that AI systems used in simulated military scenarios can exhibit more aggressive and inconsistent behaviours than human experts, leading to a higher risk of escalation.[12] Conversely, humans are susceptible to automation bias,[13] where over-reliance on AI recommendations can result in significant errors, as evidenced by increased prescribing mistakes when AI systems incorrectly flagged medications.[14] These findings underscore the importance of a synergistic approach that leverages the computational power of AI while maintaining critical human oversight to navigate complex and ethically charged military decisions. In this light, cybersecurity becomes a key element, requiring constant investment to update and strengthen systems against potential intrusions or external tampering. Moreover, in Western democracies, it is considered crucial to maintain a balance between system autonomy and human control, ensuring that operators can intervene in critical situations to avoid catastrophic errors. Yet while ethical concerns surrounding the integration of AI in military operations have become a prominent issue within Western democracies, such considerations are not shared globally.
Indeed, authoritarian regimes such as Russia, the People’s Republic of China (PRC), and the Democratic People’s Republic of Korea (DPRK) demonstrate far less restraint regarding the use of autonomous systems in warfare.[15] For example, the PRC’s military doctrine explicitly advocates the rapid development and deployment of intelligentised warfare capabilities, aiming to leverage AI across all dimensions of conflict without being hindered by Western notions of human oversight or accountability.[16] Similarly, Russian military theory increasingly embraces the idea of “algorithmic warfare,” integrating autonomous systems into its concepts of information dominance and hybrid operations, again with limited ethical debate.[17]
Moreover, even democratic states outside the traditional Western sphere, such as India, may place less emphasis on ethical frameworks when engaged in regional rivalries or when a threat is perceived as existential. In the context of its strategic competition with China and Pakistan, India’s primary concern remains military efficacy and deterrence rather than adherence to emerging global ethical standards for AI.[18] Therefore, although the ethical regulation of AI in warfare is gaining traction in Europe, North America, and select allied nations, it remains a principle that is not universally accepted. It is thus essential to clarify that the debate on ensuring human centrality in AI-driven command and control processes predominantly reflects Western political and ethical values. In a multipolar world where antagonists and even some neutral actors may not adhere to the same principles, the ethical regulation of military AI remains a regionally concentrated concern rather than a globally uniform one.
Several countermeasures have been developed to address the threats posed by adversarial use of AI. Foremost among these is the adoption of defence systems that themselves integrate AI technologies; these systems are capable of monitoring and analysing enemy activities in real-time, identifying suspicious patterns, and activating automatic response protocols.
Cyber warfare constitutes a parallel battlefield where the ability to intercept and neutralise digital attacks can determine the outcome of operations. In addition to AI-integrated defence systems and cyber countermeasures, other solutions have been developed to address the threats posed by adversarial AI. Among these are deception techniques designed to confuse enemy systems by generating false signals or altering incoming data, thereby compromising the adversary’s decision-making capacity. Although these countermeasures are also based on advanced algorithms, they must be integrated with traditional defensive systems to ensure multi-level and dynamic protection.
Military personnel must be continuously updated on emerging technologies and methods for analysing and interpreting data. Adopting advanced simulations and virtual environments based on the “digital twin” concept represents an effective tool for training operators to interface with complex systems and make rapid decisions in a crisis. Collaboration among military institutions, universities, and research centres is therefore essential to develop an innovative ecosystem that fully leverages the potential of AI while ensuring the safety and operational efficiency of the armed forces.
The Growing Role of AI in Naval Operations
The application of AI in maritime intelligence has drastically changed how naval forces monitor and analyse vast amounts of data in real-time. Machine learning algorithms have enabled naval intelligence agencies to identify suspicious patterns, detect potential threats, and track anomalous behaviour across global maritime domains. This has improved the ability to proactively respond to emerging risks rather than simply reacting to incidents after they occur.[19]
In particular, AI has revolutionised surveillance at sea. Integrating AI into unmanned systems, such as drones and autonomous ships, has bolstered the efficiency and reach of naval intelligence operations. For instance, drones equipped with AI-powered image recognition software can autonomously identify and track targets, providing naval forces with a significant edge in operational planning and response.[20]
Nodalpoint Systems has introduced SatShipAI, an advanced satellite surveillance system that uses AI and high-quality satellite imagery to detect and track maritime vessels. It works with data from the Sentinel-1 Earth observation satellite, part of the European Space Agency’s (ESA) Copernicus programme, along with other satellites as needed.
SatShipAI uses geospatial AI to assess suspicious ship behaviour in international waters. By analysing satellite images and tracking patterns, the system can identify interactions between vessels (such as those involved in illegal activities like drug trade or illegal fishing). It offers near-real-time monitoring, providing actionable information that security agencies can use to intervene swiftly. The system identifies suspicious vessels based on their proximity, movement patterns, and behaviours, which helps authorities make informed decisions on intervention strategies.
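The proximity-and-duration logic behind flagging ship-to-ship interactions can be sketched with synthetic track data. The thresholds, data layout, and detection rule below are assumptions for illustration, not SatShipAI's actual method:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def detect_rendezvous(tracks, max_km=0.5, min_steps=3):
    """Flag vessel pairs that stay within max_km for min_steps consecutive samples.

    `tracks` maps a vessel id to a list of (lat, lon) positions sampled at the
    same timestamps for every vessel (a simplified AIS-like feed).
    """
    ids = sorted(tracks)
    events = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            run = 0  # count consecutive close samples for this pair
            for p, q in zip(tracks[ids[i]], tracks[ids[j]]):
                run = run + 1 if haversine_km(p, q) <= max_km else 0
                if run == min_steps:
                    events.append((ids[i], ids[j]))
                    break
    return events
```

A sustained close approach between two vessels in open water, which this rule captures, is exactly the signature of the transshipment and smuggling rendezvous the article describes.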
The system is particularly useful for monitoring and countering maritime intrusions, such as illegal, unreported, and unregulated (IUU) fishing, piracy, drug trafficking, and human trafficking. In regions like the Bay of Bengal, where such activities are frequent, SatShipAI can track illicit operations, such as drug smuggling conducted through fishing vessels, by analysing the movement of vessels between countries’ territorial waters and international zones.
By using AI and satellite data, SatShipAI improves operational decisions, reducing costs and increasing the accuracy and speed of maritime security interventions.[21]
Revolutionising Naval Operations: The Impact and Challenges of AI
AI is transforming naval operations at every level, from ship automation and logistics to surveillance, threat detection, and maritime strategy. While it is reshaping various industries, its role in the maritime sector is particularly impactful, offering the potential to revolutionise how naval forces operate. This includes defence strategies, resource management, logistics optimisation, and real-time decision-making in critical security environments. However, integrating AI into naval operations surfaces a series of technical, strategic, ethical, and security challenges that require careful evaluation.
One of the most significant applications of AI in naval operations is the automation of ships. Autonomous vessels, designed to operate with minimal or no human intervention, are becoming an integral part of the maritime landscape. These unmanned ships are equipped with AI systems capable of making real-time decisions, navigating through challenging environments, and adapting to changing circumstances without the need for human pilots. However, this very absence of crew raises a critical vulnerability: how can effective damage control be carried out on an unmanned vessel? At sea, fires, flooding, and battle damage are inevitable realities, and without personnel, the ability to contain and recover from such events is severely limited. This has led some analysts to suggest that unmanned ships may need to be conceived as “attritable fleets”, designed to be lower-cost, expendable assets that can be risked in high-threat environments where the loss of a traditional manned ship would be unacceptable. While this offers operational advantages, it also underscores the trade-offs and limitations inherent in the adoption of fully autonomous naval platforms.[22] Autonomous vessels can be used for various missions, including surveillance, search and rescue, and defence operations. Additionally, AI-powered maritime drones, such as autonomous submarines, can gather oceanographic data, monitor suspicious activities, and conduct intelligence operations without exposing human crews to high-risk environments. In mixed fleets, these unmanned platforms complement manned vessels by operating in contested areas where deploying personnel would be too dangerous.
Strategic framework: Project 33
The importance of AI in the naval sector is confirmed by initiatives such as the US Navy’s Project 33, an implementation plan aimed at strengthening the Navy as a distinct service and enhancing its contributions to the joint warfighting ecosystem.[23]
One of the plan’s main points is to “operationalise robotic and autonomous systems,” building on unmanned systems (UxS) for their rapid deployment, stealth capabilities, and ability to carry diverse payloads. Sea denial and sea control are central objectives of Project 33, and AI plays a crucial role in supporting these missions, particularly in contested environments such as the Indo-Pacific. Here, AI can support operations by enhancing command-and-control efficiency, real-time intelligence sharing, and coordinated responses across vast maritime theatres. In particular, AI-driven systems contribute to the planning and execution of military exercises, such as Pacific Sentry and Northern Edge, improving joint coordination and the Navy’s ability to respond swiftly to crises.
In terms of detecting threats, AI’s predictive capabilities enable the anticipation of enemy behaviour. AI can forecast future actions and behaviours by analysing historical data and identifying patterns in maritime traffic or hostile group activities. This predictive analysis allows naval forces to counter threats preemptively, optimise responses, and refine defence strategies.
AI’s potential in optimising logistics operations is also noteworthy. Naval forces can utilise AI to streamline fleet management, predict equipment failures, and ensure resource allocation is handled effectively. AI can analyse data from ships to forecast maintenance needs, which improves the fleet’s availability and efficiency. Furthermore, AI can enhance supply chain management, ensuring real-time optimisation of planning, resource distribution, stock management, and mission planning.
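Forecasting maintenance needs can be illustrated with a toy trend projection: fit a line to a degradation signal (for example, bearing vibration readings) and project when it crosses an alarm threshold. This is a sketch of the idea only; production prognostics models are far more sophisticated:

```python
def estimate_steps_to_failure(readings, threshold):
    """Project when a linearly trending degradation signal crosses `threshold`.

    Returns the number of future sampling steps until the fitted line reaches
    the threshold, or None if the signal is not trending upward.
    """
    n = len(readings)
    if n < 2:
        return None
    # Ordinary least-squares fit of readings against sample index 0..n-1.
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    sxx = sum((x - mean_x) ** 2 for x in range(n))
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
    slope = sxy / sxx
    if slope <= 0:
        return None  # no upward trend, so no projected failure
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope  # index where the line hits threshold
    return max(0.0, crossing - (n - 1))        # steps remaining from the last sample
```

The operational payoff is scheduling: a part projected to reach its threshold in five days can be replaced in port rather than failing at sea.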
Despite its considerable advantages, the implementation of AI in naval operations raises several challenges. One primary concern is the reliability of autonomous systems. While AI can handle many tasks with efficiency and precision, its ability to operate reliably in unpredictable and complex environments remains a challenge. These challenges have been observed in various contexts. For example, the US Navy’s Sea Hunter, an autonomous submarine-tracking vessel, was tested three times in 2017 to integrate its systems and ensure compliance with the International Regulations for Preventing Collisions at Sea during realistic scenarios.[24] The tests went well, but the reliability of these systems cannot be guaranteed, especially in scenarios involving congested maritime traffic or adverse weather conditions. The challenge is that AI must be able to make critical decisions, such as avoiding collisions, without human intervention—an inherently complex task when unexpected or unplanned situations arise. Yet it is essential to acknowledge that collisions have always been a part of naval operations; even experienced human crews have been unable to prevent them, as the collisions involving HMAS Melbourne demonstrate. What is therefore expected from AI is not the impossible elimination of accidents, but rather a reduction in their likelihood. The key question becomes whether there is evidence that AI can already outperform human decision-making in collision avoidance, or at least provide more consistent reliability under certain conditions. While AI enhances safety and operational efficiency, human oversight remains crucial, particularly in critical or emergency situations.[25]
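Collision-avoidance logic of the kind Sea Hunter must perform conventionally starts from closest point of approach (CPA) and time to CPA (TCPA) calculations, which determine whether a manoeuvre is required at all. A minimal two-dimensional, constant-velocity sketch of that computation (coordinates and units are illustrative):

```python
def cpa_tcpa(p_own, v_own, p_tgt, v_tgt):
    """Closest point of approach between two constant-velocity tracks.

    Positions and velocities are 2-D tuples in consistent units.
    Returns (cpa_distance, time_to_cpa); time is clamped to >= 0 because
    a closest approach in the past poses no future risk.
    """
    rx, ry = p_tgt[0] - p_own[0], p_tgt[1] - p_own[1]  # relative position
    vx, vy = v_tgt[0] - v_own[0], v_tgt[1] - v_own[1]  # relative velocity
    vv = vx * vx + vy * vy
    # If relative velocity is zero, separation never changes.
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + vx * t, ry + vy * t
    return (dx * dx + dy * dy) ** 0.5, t
```

The hard part, as the article notes, is not this geometry but what follows it: choosing a COLREGs-compliant manoeuvre when several vessels, weather, and uncertain sensor data interact.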
Ethical and legal concerns arise with integrating AI into naval warfare, particularly with autonomous weapon systems. These systems raise critical questions about accountability, as decisions made by AI without human intervention can blur the lines of responsibility.[26] Additionally, there are moral dilemmas surrounding the automation of military operations, as it may lead to the loss of human judgment in life-and-death situations. Privacy and data protection issues are also significant, especially in intelligence gathering and surveillance operations.
These concerns are not just hypothetical. While AI is already used in military systems, such as Israel’s Iron Dome, for defensive purposes, there is still a lack of clear accountability when autonomous systems make decisions. Furthermore, antagonist powers may not adhere to ethical guidelines, making it even more difficult to enforce international standards. As such, international treaties and stronger regulations are necessary, but their effectiveness depends on the commitment of all nations.[27]
The Italian Navy and NexTech Partnership
The Italian Navy is embracing transformation through its partnership with Fincantieri NexTech. One of the most striking examples is their work aboard the PPA “Francesco Morosini”, where NexTech supported the crew in mastering the Naval Cockpit. This integrated, user-friendly interface allows a small team to control an entire warship’s core systems.[28] Through dedicated training, officers learned how to use the new tools, think differently, and collaborate with intelligent systems in real-time.
Another standout case is the Trieste LHD, which NexTech has equipped with a full suite of digital systems for command, control, and communications. These systems are not only technologically sophisticated but also designed for the complexity of modern missions.
SIMAP and the Future of Naval Officer Education
Born from an innovative vision for military education, the SIMAP (Simulatore di Manovra Plancia, or bridge manoeuvring simulator) has long been a cornerstone of officer training at the Livorno Naval Academy. While not a brand-new system, SIMAP remains highly relevant and increasingly sophisticated, evolving with new layers of intelligent and immersive technologies.
The SIMAP is designed to provide realistic and high-impact training experiences. It faithfully replicates a warship’s bridge and simulates complex operational scenarios, such as coastal navigation, severe weather, emergency response, and maritime traffic management. The SIMAP’s ongoing integration with intelligent, AI-driven immersive systems makes it especially significant today.
Thanks to these advancements, the SIMAP has grown beyond a static simulator into a dynamic, adaptive training environment. The incorporation of AI modules, either currently in place or under development, enables:
- The dynamic generation of mission scenarios, tailored to the trainee’s skill level;
- Real-time performance monitoring and feedback;
- The personalisation of training paths through machine learning techniques;
- The simulation of other naval and civilian assets, governed by realistic artificial agents.
This intelligent, adaptive component sets the SIMAP apart from earlier generations of simulators. It doesn’t just replicate; it interacts, evaluates, and adapts the experience based on the individual. As a result, trainees develop not only technical and operational skills but also cognitive and decision-making abilities, practising in a setting that mirrors the complexity and unpredictability of real-world maritime operations.[29]
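The skill-level tailoring described above can be sketched as a simple feedback rule that raises or lowers scenario difficulty to keep the trainee near a target success rate. The parameters and scale are illustrative assumptions, not SIMAP's actual algorithm:

```python
def next_difficulty(current, scores, step=1, target=0.7, band=0.1, window=3,
                    lo=1, hi=10):
    """Adapt scenario difficulty from recent performance scores in [0, 1].

    If the trainee's recent average is well above the target success rate,
    the scenario gets harder; well below, easier; otherwise it holds steady.
    """
    recent = scores[-window:]
    avg = sum(recent) / len(recent)
    if avg > target + band:
        return min(current + step, hi)   # too easy: increase difficulty
    if avg < target - band:
        return max(current - step, lo)   # too hard: decrease difficulty
    return current                       # in the productive zone: hold
```

In a full system the "difficulty" knob would drive concrete scenario parameters (traffic density, weather, casualty injects) and the scoring would come from the simulator's performance monitoring, but the feedback structure is the same.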
The SIMAP thus stands as a bridge between the present and future of naval training, where immersive environments, artificial intelligence, and advanced interfaces come together to deliver training that is increasingly effective, secure, and readiness-oriented. It is a concrete example of how technological innovation, supported by industrial partners such as Fincantieri NexTech, is actively transforming the Italian Navy’s preparation of its future leaders.
Conclusion
Integrating AI into naval operations has reshaped military strategies, significantly enhancing decision-making, efficiency, and automation. Autonomous systems have enhanced operational effectiveness, but challenges such as accountability, cybersecurity risks, and striking a balance between human oversight and machine autonomy persist. To tackle these challenges, robust regulations, continuous training, and ethical frameworks are necessary for the responsible application of AI in military settings. Moreover, collaboration between navies and industry, such as the partnership between the Italian Navy and Fincantieri’s NexTech, is crucial for advancing naval operations. This collaboration fosters innovation, prepares personnel for emerging technologies, and strengthens maritime defence systems.
The convergence of AI, human expertise, and ethical principles defines the future of naval warfare. With the right oversight, technological advancements can enhance operational efficiency, refine strategic capabilities, and ensure global security in an increasingly complex and interconnected world.
Midshipman Junior Second Class Marco Francavilla and Midshipman Junior Second Class James Filippo Armstrong are midshipmen currently in their third year of the five-year officer training programme at the Italian Naval Academy. Both Midshipmen are focused on technological transformations reshaping the naval domain and have chosen to explore these dynamics in relation to their future employment within the Italian Navy. The views expressed in this article are solely those of the authors and do not represent the official position of the Italian Navy or the Italian Naval Academy.
[1] Istituto Superiore di Stato Maggiore Interforze, Applicazione di Artificial Intelligence per fini militari: individuazione dei criteri relativi al passaggio dall’approccio Human in the Loop allo Human on the Loop, Roma: Centro Alti Studi per la Difesa, 2023, 9-10, 12-13.
[2] Ibid., 27–28, 68–69, 30–31.
[3] Ibid., 22–24, 31.
[4] “Fighting the war in Ukraine on the electromagnetic spectrum,” The Economist, February 2025.
[5] “Swarm Wars: The Shaky Rise Of AI Drones In Ukraine – Analysis,” Eurasia Review, August 16, 2024.
[6] “Ukraine’s Drone Revolution: AI-Powered, EW-Resistant, and Fiber-Optic Innovations,” TechUkraine, January 8, 2025.
[7] “Shield AI MQ-35 V-BAT,” Wikipedia, accessed April 27, 2025.
[8] On adversarial signal injection and cyber-attacks in AI systems, see James Anderson, Cybersecurity Challenges in AI-Operated Military Drones (Cambridge: MIT Press, 2024), 89–92.
[9] C.A. (ris) Michele Cosentino, L’intelligenza artificiale e le operazioni aeronavali (Eng. trans. Artificial Intelligence and air and naval operations), May 2022, Rivista Marittima, 63-65.
[10] The OODA loop is a decision-making model developed by United States Air Force Colonel John Boyd.
[11] Bonnie Johnson, John M. Green, Gregory Burns, Todd Collier, Richard Cornish, Kyle Curley, Allan Freeman, and Jared Spears, “Mapping Artificial Intelligence to the Naval Tactical Kill Chain,” Naval Engineers Journal 135, no. 1 (2023): 156–159.
[12] P. Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton & Company, 2018).
[13] N. B. Sarter, “Automation Bias in Decision Support Systems,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 1859–1863.
[14] S. J. Hoffman and L. M. S., “The Irony of Automation: Why Clinicians Let Computers Make Mistakes,” Wired, April 01, 2015, https://www.wired.com/2015/04/the-irony-of-automation-why-clinicians-let-computers-make-mistakes.
[15] Samuel Bendett, “Russian Perspectives on Autonomy in Warfare,” Center for Naval Analyses, 2019.
[16] Elsa Kania, “Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power,” Center for a New American Security, November 2017.
[17] Dmitry Adamsky, “Russian AI Warfare: Projecting Power and Subverting Trust,” Texas National Security Review, Summer 2021.
[18] Abhijnan Singh, “Artificial Intelligence in India’s National Security Strategy: Prospects and Challenges,” Observer Research Foundation, October 2022.
[19] National Intelligence University, “Artificial Intelligence and National Security,” 2020.
[20] Department of the Navy, Artificial Intelligence in the U.S. Navy, 2020.
[21] Sadhli Roomy, “Geospatial AI for Maritime Surveillance,” https://www.acmeai.tech/.
[22] C.A. Enrico Vignola, Future combat naval system 2035 nelle operazioni multi dominio – Il ruolo cruciale dell’Unmanned (Eng. trans. Future combat naval system 2035 in multi-domain operations – The crucial role of the Unmanned), February 2022.
[23] ADM Lisa M. Franchetti, USN, Chief of Naval Operations Navigation Plan for America’s Warfighting Navy 2024; and James Holmes, “The Navy’s New NavPlan Sets Its Sights on China, from a Sea Denial Stance,” U.S. Naval Institute Proceedings 150, no. 9, September 2024.
[24] Julian Turner, “Sea Hunter: inside the US Navy’s autonomous submarine tracking vessel,” Naval Technology, 2018.
[25] C.A. Enrico Vignola, Future combat naval system 2035 nelle operazioni multi dominio – La persona al centro (Eng. trans. The person at the centre), February 2022.
[26] Filippo Santoni de Sio, “Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight,” Academia, 2020.
[27] Yasmeen Serhan, “How Israel Uses AI in Gaza—And What It Might Mean for the Future of Warfare,” Time, December 2024.
[28] Redazione, “Fincantieri NexTech ha abilitato gli ufficiali del PPA Morosini alla condotta del Naval Cockpit,” Analisi Industria, October 20, 2022.
[29] Gian Luigi Cantini, “Tecnologia per l’addestramento in continua evoluzione,” 28 Gennaio 2020, https://www.marina.difesa.it/media-cultura/Notiziario-online/Pagine/20200128_evoluzione_simulatore_plancia.








