Our research

Research projects

Goal-directed Feature Extraction from Terrestrial Mobile Mapping Laser Scanning Data

Leader: Abdul Awal Md Nurunnabi

Supervisor: Prof. Geoff West

Co-supervisor: Dr David Belton

Associate supervisor: Ireneusz Baran, AAM

Sponsor/supported by: International Postgraduate Research Scholarship (IPRS) at Curtin University, and a top-up scholarship from the CRCSI.

Total duration of the project: 3 years – Part of the CRCSI project 2.01 (Terrestrial Mapping Project): Multimodal Terrestrial Data Acquisition and Feature Extraction

Abstract: Point cloud data obtained from laser-scanner-based mobile mapping systems inevitably contain outliers and noise. In the presence of outliers and noise, the most frequently used methods for point cloud analysis (e.g. surface fitting, reconstruction, modelling and segmentation) are not robust and give inaccurate results. We are researching automatic feature extraction and error analysis in terrestrial mobile mapping laser scanning (TMMLS) data, investigating state-of-the-art computer vision, pattern recognition, photogrammetry and remote sensing techniques as well as classical, diagnostic and robust statistical techniques. We have developed an algorithm for statistically robust local planar surface fitting based on diagnostic-robust statistical approaches. The algorithm outperforms classical methods such as least squares (LS) and PCA, and shows distinct advantages over current methods including RANSAC in terms of computational speed, sensitivity to the percentage of outliers and the number of points, and approximation quality for thick planes. Two region-growing-based segmentation algorithms have been developed for extracting multiple planar and non-planar complex surfaces (e.g. long cylindrical and approximately cylindrical surfaces such as poles, lamps and sign posts).
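
As a rough illustration of the diagnostic-robust idea (a minimal sketch, not the published algorithm; the cutoff of 2.5 and the synthetic data are illustrative assumptions), the following Python fragment fits a local plane by PCA, diagnoses outliers from robust z-scores of the orthogonal residuals (median/MAD), and refits on the clean subset:

    import numpy as np

    def fit_plane_pca(points):
        # Least-squares plane via PCA: the normal is the right singular
        # vector with the smallest singular value.
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        return centroid, vt[-1]

    def fit_plane_diagnostic_robust(points, cutoff=2.5):
        # Diagnostic step: flag outliers by robust z-score of the
        # orthogonal residuals, then refit on the remaining points.
        centroid, normal = fit_plane_pca(points)
        r = np.abs((points - centroid) @ normal)
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        inliers = np.abs(r - np.median(r)) / scale < cutoff
        return fit_plane_pca(points[inliers]), inliers

    # Example: a noisy z = 0 plane with 20% gross outliers.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, (200, 3))
    pts[:, 2] = rng.normal(0, 0.01, 200)
    pts[:40, 2] += rng.uniform(0.5, 1.0, 40)   # simulated outliers
    (c, n), mask = fit_plane_diagnostic_robust(pts)
    print(n, mask.sum())

A plain LS or PCA fit on the same data would tilt toward the outlying points, which is the failure mode the robust refit avoids.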

We have proposed two highly robust outlier detection algorithms that identify outliers and allow reliable estimation of local saliency features (e.g. curvature and normals). Results from artificial and real 3D point cloud data show that the methods have advantages over existing popular techniques: (i) they are computationally simpler; (ii) they successfully identify a high percentage of both uniform and clustered outliers; and (iii) they are more accurate, more robust and faster than existing robust methods such as RANSAC and MSAC. Applying them within region growing shows that the proposed methods reduce segmentation errors and provide better segmentation and more robust feature extraction results. The developed methods also have potential for surface edge detection, surface reconstruction and fitting, sharp feature extraction, registration, and covariance-statistics-based point cloud processing. Research is continuing on processing large volumes of data and on merging several segments into one where required. Further research addresses robust fitting of non-planar geometric primitives (e.g. cylinders) for complex feature extraction and modelling in 2D and 3D point cloud data.
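
For context, local saliency features of the kind mentioned above are commonly estimated from the eigenstructure of each point's neighbourhood. The sketch below is a generic PCA formulation (not our algorithms), computing per-point normals and the "surface variation" curvature measure; this is exactly the estimation step that outlier contamination corrupts and that robust pre-filtering protects:

    import numpy as np

    def local_saliency(points, k=12):
        # Per-point normal and curvature from PCA of the k nearest
        # neighbours; curvature is the surface variation
        # lambda_0 / (lambda_0 + lambda_1 + lambda_2).
        n = len(points)
        normals = np.empty((n, 3))
        curvature = np.empty(n)
        for i in range(n):
            d = np.linalg.norm(points - points[i], axis=1)
            nbrs = points[np.argsort(d)[:k]]
            w, v = np.linalg.eigh(np.cov(nbrs.T))   # eigenvalues ascending
            normals[i] = v[:, 0]                    # smallest-eigenvalue direction
            curvature[i] = w[0] / max(w.sum(), 1e-12)
        return normals, curvature

Curvature is near zero on flat regions and grows near edges and corners, so neighbourhoods contaminated by outliers yield misleading values for both segmentation and feature extraction.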

Automated and Generic Identification and Analysis of High Level Features from Multi-Modal Spatial Data

Leader: Richard Palmer

Supervisor: Prof. Geoff West

Co-supervisor: Assoc Prof Tele Tan

Sponsor/supported by: CRCSI – Part of the CRCSI project 2.01 (Terrestrial Mapping Project): Multimodal Terrestrial Data Acquisition and Feature Extraction

Abstract: Object recognition research has thus far focussed predominantly on recognising objects from preordained categories in well-defined data sourced from single sensors. The increasing availability of large image and depth datasets of urban areas, collected via many different kinds of sensor, and the ubiquity of high-powered computers are motivating research into less restrictive domains. Existing high-accuracy object recognition methods require a system to be trained prior to deployment on live data; once in the field, there is little possibility of the system being “reprogrammed” to recognise objects and their locations from user-defined categories.

This research aims to develop state-of-the-art object recognition methods that leverage the increased information content available from multi-modal fused datasets and, further, allow users to define new object categories post-deployment for the system to recognise. This will be accomplished by demonstrating how multiple low-level features, innovative object representation schemes and modern machine learning techniques can best be combined to maximise the potential for object discrimination, recognition and pose estimation, even when dealing with previously unknown object categories.
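
A toy sketch of the multi-modal fusion idea (all names and feature choices here are illustrative assumptions, not the project's design): a descriptor concatenating image and depth statistics feeds a minimal classifier, and a new category can be added after deployment simply by refitting with new exemplars included.

    import numpy as np

    def patch_descriptor(intensity_patch, depth_patch, bins=8):
        # Toy early fusion: an intensity histogram concatenated with a
        # histogram of depth-gradient magnitudes.
        h_img, _ = np.histogram(intensity_patch, bins=bins, range=(0, 255), density=True)
        gy, gx = np.gradient(depth_patch.astype(float))
        h_dep, _ = np.histogram(np.hypot(gx, gy), bins=bins, density=True)
        return np.concatenate([h_img, h_dep])

    class NearestCentroid:
        # Minimal stand-in learner; any modern classifier would replace it.
        def fit(self, X, y):
            y = np.asarray(y)
            self.labels = sorted(set(y))
            self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])
            return self
        def predict(self, X):
            d = np.linalg.norm(X[:, None] - self.centroids[None], axis=2)
            return [self.labels[j] for j in d.argmin(axis=1)]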

Feature Extraction from Multi-modal Mapping Data

Leader: Michael Brock

Supervisor: Prof. Geoff West

Co-supervisor: Assoc Prof Tele Tan

Sponsor/supported by: CRCSI – Part of the CRCSI project 2.01 (Terrestrial Mapping Project): Multimodal Terrestrial Data Acquisition and Feature Extraction

Abstract: Automated recognition and analysis of objects in images from urban transport corridors are important for many applications including asset management, measurement, location, analysis and change detection. Current object recognition algorithms are not robust enough to automatically label all objects solely from images, and interactive tagging tools require significant manual effort. The availability of registered 2D images and 3D scanner data (“3D images”) of real-world environments has created new opportunities for automated object labelling. Automatically tagging objects in large 3D images is complex and computationally demanding. It is proposed to segment the images into regions and then classify the objects within these regions. An interactive interface for selecting region exemplars will be developed. Extracting features from these exemplars and using machine learning, relevance feedback and other techniques will allow similar regions of interest within the data set to be identified and labelled. Algorithms will be developed to enable efficient search through the data set, with features recognised in 2D imagery as well as in 3D point clouds. Techniques and workflows will be developed to allow the selection of exemplars and the development of algorithms that search the image space to locate and segment regions of interest from terrestrially scanned 3D point clouds and 2D imagery of an urban environment.
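
One standard way such an interactive exemplar-and-feedback loop is realised (a sketch under assumptions; the actual features and learning machinery are the subject of the research) is cosine-similarity ranking of region feature vectors combined with a Rocchio-style relevance-feedback update:

    import numpy as np

    def rank_regions(query, region_features):
        # Rank candidate regions by cosine similarity to the exemplar query.
        f = region_features / np.linalg.norm(region_features, axis=1, keepdims=True)
        q = query / np.linalg.norm(query)
        return np.argsort(f @ q)[::-1]          # best matches first

    def rocchio_update(query, relevant, irrelevant,
                       alpha=1.0, beta=0.75, gamma=0.15):
        # Classic Rocchio step: move the query toward the features of
        # regions the user accepted and away from those rejected.
        q = alpha * np.asarray(query, dtype=float)
        if len(relevant):
            q += beta * np.mean(relevant, axis=0)
        if len(irrelevant):
            q -= gamma * np.mean(irrelevant, axis=0)
        return q

Each round of user feedback refines the query vector, so the ranking converges toward the regions the user actually means by the exemplar.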

3D Reconstruction of the HMAS Sydney II Shipwreck - completed

Leader: Joshua Hollick

Supervisor: Dr Petra Helmholz

Co-supervisor: Andrew Woods (Centre for Marine Science and Technology) and Andrew Hutchison (School of Design and Art)

Sponsor/supported by: iVEC and Curtin University

Total duration of the project: 3 months, finished Feb 2013

Abstract: HMAS Sydney II and HSK Kormoran were sunk during World War II and currently lie approximately 200 km off the west coast of Australia at a depth of 2.5 km. The wrecks were discovered in 2008 and, shortly afterwards, the Finding Sydney Foundation performed a series of ROV dives capturing significant amounts of video and still images. The aim of this project is to investigate the possibility of creating 3D models of the shipwrecks and to provide recommendations for planning future dives, which are expected to take place in 2014. Creating 3D models from the current data poses significant challenges, as 3D reconstruction was not a consideration when the data were collected: the dataset is large and mostly unstructured, and the images and video are often blurry or contain noise from various sources. To address these issues we have developed a processing pipeline. We first filter the videos to remove unusable frames, then extract a minimal number of frames while maintaining connectivity. The images and video are then classified based on image features before matching. Images are matched using temporal information from the video, location data (where available) and the classification; using this information to guide matching reduces the complexity from O(n²) to something more manageable. Finally, point clouds and textured models are generated. Using these techniques, the results of the project have been positive: several parts of the ship have been modelled that either were not modelled before or extend previous models. Several recommendations have also been made for future dives, which should allow a dataset better suited to photogrammetry to be collected next time.
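
The guided-matching idea can be illustrated with a small sketch (illustrative thresholds and structure, not the project's code): pairs of frames are proposed for the expensive feature-matching step only when they are close in time, close in space, or share a classification, so most of the O(n²) pair space is never matched.

    import numpy as np
    from itertools import combinations

    def candidate_pairs(times, positions, labels, dt=2.0, radius=10.0):
        # Enumerating pairs is cheap; the saving comes from skipping the
        # expensive feature matching for every pair that fails all tests.
        pairs = set()
        for i, j in combinations(range(len(times)), 2):
            if abs(times[i] - times[j]) < dt:
                pairs.add((i, j))               # temporally adjacent frames
            elif positions[i] is not None and positions[j] is not None and \
                    np.linalg.norm(np.subtract(positions[i], positions[j])) < radius:
                pairs.add((i, j))               # nearby locations, if known
            elif labels[i] == labels[j]:
                pairs.add((i, j))               # same scene classification
        return pairs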

A Vision System for Mobile Maritime Surveillance Platforms - completed

Leader: Dr. Thomas Albrecht

Supervisor: Prof. Geoff West

Co-supervisor: Assoc Prof Tele Tan

Sponsor/supported by: DSTO PhD scholarship

Total duration of the project: 3 years, finished Jan 2012

Abstract: Mobile surveillance systems play an important role in minimising security and safety threats in high-risk or hazardous environments. Providing a mobile maritime surveillance platform with situational awareness of its environment is important for mission success. An essential part of situational awareness is the ability to detect and subsequently track potential target objects.

Typically, the exact type of target object is unknown, hence detection is addressed as the problem of finding parts of an image that stand out relative to their surrounding regions or are atypical for the domain. In contrast to existing saliency methods, this thesis proposes a domain-specific visual attention approach for detecting potential regions of interest in maritime imagery. For this, low-level features that are indicative of maritime targets are identified. These features are then evaluated with respect to their local, regional and global significance and, together with a domain-specific background segmentation technique, combined in a Bayesian classifier to direct visual attention to potential target objects.
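
In spirit (the actual cues and densities in the thesis differ; every name here is an illustrative stand-in), such a Bayesian combination can be written as naive-Bayes fusion in log-odds form:

    import numpy as np

    def target_posterior(feature_scores, likelihoods, prior=0.01):
        # Start from the prior log-odds of a target being present and add
        # one log-likelihood ratio per low-level cue.
        log_odds = np.log(prior / (1 - prior))
        for name, s in feature_scores.items():
            p_target, p_background = likelihoods[name]
            log_odds += np.log(p_target(s) + 1e-12) - np.log(p_background(s) + 1e-12)
        return 1.0 / (1.0 + np.exp(-log_odds))

    # Example with one made-up cue modelled by Gaussian densities.
    def gauss(m, s):
        return lambda x: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    likelihoods = {"contrast": (gauss(0.8, 0.1), gauss(0.3, 0.2))}
    print(target_posterior({"contrast": 0.75}, likelihoods))

Regions whose posterior exceeds a threshold become candidates for the tracker; adding a cue is just another term in the sum.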

The maritime environment introduces challenges for the camera system: gusts, wind, swell or waves can cause the platform to move drastically and unpredictably. Pan-tilt-zoom cameras, often used for surveillance tasks, can adjust their orientation to provide a stable view of the target; however, in rough maritime environments this requires high-speed and precise inputs. In contrast, omnidirectional cameras provide a full spherical view, which allows the acquisition and tracking of multiple targets at the same time, although each target occupies only a small fraction of the overall view. This thesis proposes a novel, target-centric approach to image stabilisation. A virtual camera is extracted from the omnidirectional view for each target and is adjusted based on the measurements of an inertial measurement unit and an image feature tracker. Combining these two techniques in a probabilistic framework allows stabilisation of both rotational and translational ego-motion. Furthermore, it has the specific advantage of being robust to loosely calibrated and synchronised hardware, since the fusion of tracking and stabilisation means that tracking uncertainty can be used to compensate for errors in calibration and synchronisation. This eliminates the need for tedious calibration phases and mitigates the adverse effects of assembly slippage over time.
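
A minimal sketch of that fusion step (a complementary-filter caricature under assumed interfaces, far simpler than the probabilistic framework in the thesis): the IMU rate predicts the counter-rotation of the virtual camera, and the tracker's measured target offset, weighted by its confidence, corrects residual drift.

    import numpy as np

    def stabilise_view(view_angles, gyro_rate, dt, track_offset, track_conf):
        # view_angles:  current virtual-camera pan/tilt (radians).
        # gyro_rate:    IMU angular rate of the platform (radians/s).
        # track_offset: target's angular offset from the view centre as
        #               measured by the image feature tracker.
        # track_conf:   tracking confidence in [0, 1]; a poor track leaves
        #               the IMU prediction dominant, which is what makes the
        #               scheme tolerant of loose calibration/synchronisation.
        predicted = np.asarray(view_angles) - np.asarray(gyro_rate) * dt
        return predicted + track_conf * np.asarray(track_offset)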

Finally, this thesis combines the visual attention and omnidirectional stabilisation frameworks and proposes a multi-view tracking system that is capable of detecting potential target objects in the maritime domain. Although the visual attention framework performed well on the benchmark datasets, the evaluation on real-world maritime imagery produced a high number of false positives. An investigation revealed that benchmark datasets are unconsciously influenced by human shot selection, which greatly simplifies the problem of visual attention. Despite the number of false positives, the tracking approach itself remains robust even when many false positives are tracked.