Abstract
Current control systems for autonomous surface vessels (ASVs) often disregard model uncertainties and lack mechanisms to adapt dynamically to varying model parameters. This limitation hinders reliable performance under complex and frequently changing maritime conditions, highlighting the need for more adaptive and robust approaches. This study therefore introduces an approach that integrates deep reinforcement learning (DRL) with nonlinear model predictive control (NMPC) to jointly optimize the control performance and model parameters of ASVs. The primary objective is to keep the digital twin of the ASV continuously synchronized with its physical counterpart, thereby enhancing the twin's accuracy, reliability, and adaptability in representing the vessel under complex and dynamic maritime conditions. By leveraging digital twins, agents for safety-critical applications can be trained in a risk-free virtual environment, minimizing the hazards associated with real-world experimentation. The DRL framework optimizes the NMPC by tuning its parameters for peak performance and by identifying unknown model parameters in real time, ensuring precise and dependable vessel control. Extensive simulations confirm the effectiveness of this approach in improving the safety, efficiency, and reliability of ASVs. The proposed methods address critical challenges in ASV control by enhancing reliability and adaptability under dynamic conditions, providing a foundation for future advances in autonomous maritime navigation and control-system development.