Deep learning on images, videos, and 3D data

The computer vision group develops AI-based vision systems built on an extensive understanding of the image generation process and cutting-edge machine learning algorithms. We specialize in developing robust and trustworthy deep learning networks for a wide range of vision applications (e.g. subsea, agriculture, industry, and robot vision).

Our research focuses on designing, developing, and training networks for a wide range of 2D and 3D vision applications, such as object detection, inspection, semantic segmentation, pose estimation, classification, prediction, and anomaly detection. We also emphasize explainability through physics-aware deep learning models and uncertainty estimation, which ensures that these models can be safely deployed in autonomous systems and other high-risk applications.

We collaborate closely with our customers to integrate domain-specific knowledge into their deep learning models, which helps to produce robust solutions, reduce model size, enhance explainability, and improve AI performance.

A successful deep learning-based system relies on being trained on a large and representative dataset, which can be costly to acquire. Our group exploits data-efficient learning methods to lower the cost of acquiring the labeled data needed to train deep learning models, including the use of simulated labeled datasets and semi- and self-supervised learning techniques that leverage unlabeled data.
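
As a rough, illustrative sketch of the self-supervised part of this approach (assuming a PyTorch setup; the tiny backbone, toy augmentations, and sizes below are placeholders rather than our actual pipeline), a SimCLR-style contrastive pre-training step on unlabeled images could look like this:

```python
# Minimal SimCLR-style contrastive pre-training step (illustrative only).
# The small CNN, toy augmentations, and random batch are stand-ins for a
# real backbone and a real unlabeled dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy backbone plus projection head producing normalized embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.head(self.backbone(x)), dim=1)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Two augmented views of the same image are positives; all others are negatives."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                  # (2N, dim)
    sim = z @ z.t() / temperature                   # cosine similarities (embeddings are normalized)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = Encoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.rand(16, 3, 64, 64)                       # stand-in for a batch of unlabeled images
view1 = x + 0.05 * torch.randn_like(x)              # toy augmentation: noise
view2 = torch.flip(x, dims=[3])                     # toy augmentation: horizontal flip
optimizer.zero_grad()
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
optimizer.step()
```

After pre-training on unlabeled data in this way, the backbone can be fine-tuned on a much smaller labeled set for the downstream task.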

Our expertise

  • Deep learning on 3D data (point clouds), 2D images, and video for various applications.
  • Deep learning for inspection and situational awareness in autonomous systems such as robots and drones.
  • Data-efficient learning (techniques that reduce the amount of labeled data required).
  • Improved explainability of deep learning models through physics-aware modeling and uncertainty estimation (see the sketch after this list).
  • Efficient deep-learning models running on embedded processors.
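
As a concrete illustration of the uncertainty-estimation point above, the sketch below uses Monte Carlo dropout, one common technique for estimating predictive uncertainty; the model, feature dimensions, and sample count are hypothetical and not tied to any of our deployed systems.

```python
# Uncertainty estimation with Monte Carlo dropout (illustrative only).
# The small classifier and random features are placeholders for a real model.
import torch
import torch.nn as nn

class DropoutClassifier(nn.Module):
    """Small classifier whose dropout layer is kept active at inference time."""
    def __init__(self, in_features=128, num_classes=5, p=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Average several stochastic forward passes; the spread across passes
    serves as a simple per-class uncertainty estimate."""
    model.train()                                    # keep dropout active
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

model = DropoutClassifier()
features = torch.rand(4, 128)                        # stand-in for extracted image features
mean_probs, uncertainty = mc_dropout_predict(model, features)
print(mean_probs.argmax(dim=1), uncertainty.max(dim=1).values)
```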

Other expertise

Intelligent Autonomy

Smart design of sensors and onboard AI enables safer, more capable, and more efficient autonomy for unmanned ground vehicles and drones.

Selected projects

Transformer models for point cloud analysis

Transformers have had a major impact on Natural Language Processing (NLP), prompting researchers to investigate their potential in point cloud processing. Our latest research delves into the use of transformer-based networks for pre-training a 3D...
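
Purely as an illustration of the general idea (not the project's actual architecture), the sketch below tokenizes a point cloud into fixed-size groups of points and feeds them to an off-the-shelf PyTorch transformer encoder; the naive index-based grouping and all sizes are placeholder assumptions.

```python
# Feeding a point cloud to a standard transformer encoder (illustrative only).
# Points are grouped into fixed-size patches by index order, a naive stand-in
# for the spatial grouping a real method would use.
import torch
import torch.nn as nn

class PointPatchTransformer(nn.Module):
    def __init__(self, patch_size=32, dim=128, heads=4, layers=4):
        super().__init__()
        self.patch_size = patch_size
        self.embed = nn.Linear(patch_size * 3, dim)   # flatten the xyz coordinates of each patch
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, points):                        # points: (batch, num_points, 3)
        b, n, _ = points.shape
        patches = points.reshape(b, n // self.patch_size, self.patch_size * 3)
        tokens = self.embed(patches)                  # one token per patch
        return self.encoder(tokens)                   # per-patch features from self-attention

cloud = torch.rand(2, 1024, 3)                        # two synthetic point clouds
features = PointPatchTransformer()(cloud)
print(features.shape)                                 # torch.Size([2, 32, 128])
```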

SIGHT

Project SIGHT aimed to create new technologies that capture and understand real-world physical spaces the way humans do, with the goal of providing the world's first intelligent platform for monitoring...

Deep learning for visual guidance of autonomous agri-robots

Autonomous navigation technology allows agricultural robots to move through fields on their own and carry out tasks such as examining soil and monitoring crops. We have worked with NMBU to improve the latest technology for visually tracking...

Self-supervised learning when you don’t have enough labeled data

In a manufacturing process, a burr is the excess material that remains attached to the workpiece. In our most recent publication, we introduced a method that relies on unlabeled data to accurately detect burrs and estimate their size.

SFI Manufacturing

SFI Manufacturing's vision is to show that sustainable manufacturing in a high-cost country like Norway is possible, given the right products, technologies and people. Cross-disciplinary research will provide a knowledge-based toolbox for future...

Underwater 6D Pose Estimation

A research focus for us is to develop Deep Learning (DL) methods for accurate detection of relevant objects and prediction of their 6-Degrees-of-Freedom (6-DoF) pose in the challenging underwater environment.
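
To make the 6-DoF formulation concrete, the sketch below packs a rotation and a translation into a homogeneous transform and scores a synthetic prediction with the ADD metric commonly used in 6D pose estimation; all values are made up for illustration and do not come from our underwater experiments.

```python
# Representing and evaluating a 6-DoF pose (illustrative only).
# A pose is a rotation R plus a translation t; the ADD metric is the mean
# distance between model points under the predicted and ground-truth poses.
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def add_metric(points, T_pred, T_gt):
    """Average distance between object model points under the two poses."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])   # (N, 4) homogeneous points
    diff = (pts_h @ T_pred.T)[:, :3] - (pts_h @ T_gt.T)[:, :3]
    return np.linalg.norm(diff, axis=1).mean()

# Synthetic object model and poses: a ground truth and a slightly perturbed prediction.
points = np.random.rand(500, 3)
R_gt, t_gt = np.eye(3), np.array([0.1, 0.0, 1.2])
angle = np.deg2rad(2.0)                                          # 2 degree rotation error about z
R_pred = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_pred = t_gt + np.array([0.005, 0.0, -0.01])
print("ADD error:", add_metric(points, to_homogeneous(R_pred, t_pred), to_homogeneous(R_gt, t_gt)))
```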

H2020 Smartfish – TrawlMonitor

By developing a smart 3D camera mounted at the trawl opening, we will be able to report the quantity, size, and species of fish entering the trawl. This can be used to optimize the catch and make trawl fishing more sustainable.