Abstract
This thesis presents an autonomous robotic system for multi-view 3D scanning of complex objects using an RGB-D (Red, Green, Blue and Depth) camera mounted on a robot arm. The system addresses the challenges of scanning irregular and occluded structures, such as strawberry plants, by implementing and evaluating multiple Next-Best-View (NBV) planning strategies, including sample-based, frontier-based, and gradient-based methods. To enhance exploration and avoid local minima, the gradient-based method was extended with two global optimization heuristics: Simulated Annealing (SA) and Particle Swarm Optimization (PSO). A complete 3D stitching pipeline was developed using the Georgia Tech Smoothing and Mapping (GTSAM) framework, integrating robot odometry and Generalized Iterative Closest Point (GICP) registration to align partial scans. Both simulation and real-world experiments were conducted using Intel RealSense D435, L515, and Zivid 2 depth sensors on synthetic models and real strawberry plants. Quantitative results demonstrate that the gradient-based planner enhanced with SA significantly reduces path length and execution time while achieving comparable or superior coverage and F1 scores relative to the other methods. The stitching pipeline consistently reconstructs high-fidelity models, particularly when odometry and GICP constraints are combined. System performance was validated using Chamfer and Hausdorff distances, RMSE, and F1-score metrics, demonstrating high geometric accuracy and robustness. This work delivers a complete, autonomous pipeline for robotic 3D scanning and reconstruction, suitable for deployment in agricultural environments.