Abstract
This paper introduces a novel method for autonomous navigation of mobile robots that combines functional scene categorization with tailored navigation strategies. Scene categorization is a computer vision task that identifies the type of environment depicted in an image. Our approach establishes a scene-categorization taxonomy designed for use in mobile robots. We trained several neural networks on a motion-skill dataset comprising 19,923 images labeled with nine navigation strategies, demonstrating the feasibility of coupling environment recognition with neural networks. Three architectures (ResNet-50, BEiT-Base, ConvNeXt-Tiny) achieved notable performance, with ConvNeXt-Tiny leading at a weighted F1 score of 0.934. Pre-training on the SUN397 dataset significantly improved model performance, underscoring the value of leveraging existing scene-categorization datasets. This study highlights the synergy between general scene categorization and functional scene categorization, offering insights for designing efficient, context-aware autonomous navigation systems.