Our projects

Below is a list of projects undertaken by the Photogrammetry and Laser Scanning (PaLS) research group, covering projects from 2012 onwards.

If you are interested in further information about any of the projects listed below, please contact us.

HDR student projects (PhD, MPhil)

Fast - 3D

Leader: Richard Palmer

Supervisor: Hedwig Verhoef

Co-supervisors: Dr Gareth Baynam (GSWA), Mark Walters (CMF at PMH), Dr Petra Helmholz, Dr David McMeekin

Commissioned by: Genetic Services of WA – Part of the CRCSI project 4.404

Total duration of the project: 1 year

Abstract: Accurate 3D facial analysis provides a powerful diagnostic, treatment monitoring, surgical planning and audit tool, based on information obtained in a non-invasive and non-irradiating manner. To support clinical use of 3D facial analysis, a digital stereo photogrammetric platform (3DMD, Atlanta) is available at the Cranio-Maxillo Facial unit (CMF) at Princess Margaret Hospital for Children (PMH). However, the current process from data capture to evaluation is too time-consuming for ready translation, limiting its clinical use. With the support of Genetic Services of Western Australia (GSWA), this project improves the current workflow by reducing the time needed to produce image-analysis input files from the raw data. The time reduction is achieved by (semi-)automating the laborious manual processes of “landmark detection” and “facial segmentation” through the 3D FAST app. The project thus provides the foundation for enhanced clinical services at PMH and GSWA.

In situ calibration of a mobile mapping system

Leader: Hoang Long Nguyen

Supervisor: Dr Petra Helmholz

Co-supervisors: Dr David Belton, Prof Geoff West

Total duration of the project: 3 years

Abstract: A Mobile Mapping System (MMS) allows us to obtain 3D measurements of objects at close range from a vehicle, revealing more detail than traditional satellite mapping or surveying. In many mobile mapping applications, there is a need to scan the same area multiple times in order to obtain the required point density, or to capture objects of interest that are occluded by other objects during a scan.

These collected point clouds may not perfectly overlap with each other, or with the real world, for two reasons: (1) the system is not well calibrated, and/or (2) positioning errors are present. Theoretically, calibration and registration procedures are performed to compensate for these problems.

In urban areas, especially urban canyons, the collected point clouds will be affected by both calibration errors and positioning errors, and these two parameter groups are highly correlated with each other. As a result, a traditional calibration procedure cannot be applied in this case (in a traditional calibration procedure, data is collected in an open area and positioning errors are considered to be zero). To compensate for this problem, this study will investigate methods which first match different point clouds with each other; the registered point clouds will then be matched with reference data.

Recently, point cloud matching and fitting has been utilised to obtain registration and calibration parameters. There has been much research on point cloud matching, with the most popular algorithm based on iteratively matching and aligning points; however, there is no guarantee of finding the correct corresponding points between different MMS point clouds. To overcome this, higher-order primitives (e.g. cylinders, planes) should be considered as matching primitives.

In addition, positioning errors may also distort point clouds, which may affect the feature extraction and detection process. Hence, this study will first investigate distortion in MMS point clouds and its effect on feature extraction. Furthermore, unlike Terrestrial Laser Scanning (TLS), which uses static scanners, a rigid body transformation is not suitable for MMS point cloud data. Based on the results of the investigated feature extraction and detection process, methods which utilise feature-based alignment will be investigated. The transformation parameters will be recalculated and updated whenever corresponding features are detected in both point clouds. In situ calibration can give different calibration parameters at different times during the scan. This research will investigate methods for in situ calibration of an MMS by matching the registered point cloud with reference data from other data sources.

In many mobile mapping applications, there is a need to scan the same area multiple times in order to obtain the required point density, or to capture objects of interest that are occluded by other objects during a scan. In urban areas, especially urban canyons, the collected point clouds will be affected by both calibration errors and positioning errors. Each of these errors can cause discrepancies between the point clouds and the real world. The ultimate goal of this research is to develop methods to compensate for this problem by investigating novel methods that first match different point clouds with each other, and then match the registered point cloud with reference data. The two main objectives of this research are identified as:

  • Investigate methods to automate MMS point cloud registration. This objective can be split into three sub-objectives:
      • Investigate scenarios where, and possibly why, distortion of point cloud data occurs due to positioning errors in MMS point clouds, and investigate its effect on feature extraction.
      • Investigate which extracted high-level features are appropriate for the matching/fitting procedure.
      • Investigate methods that utilise feature-based alignment. The transformation parameters will be recalculated and updated whenever corresponding features are detected in both point clouds.
  • Investigate methods for in situ calibration of mobile mapping systems in order to improve point cloud quality.
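The iterative point-matching baseline referred to above (and whose correspondence problem motivates the move to higher-order primitives) can be illustrated with a minimal point-to-point ICP sketch. This is a generic NumPy illustration, not the project's implementation; the function names are our own:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst.
    src, dst: (N, 3) arrays of corresponding points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: nearest-neighbour matching + rigid update.
    Note the matching step offers no guarantee that the pairs are the
    true correspondences, which is the limitation discussed above."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (use a k-d tree for real data)
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Matching planes or cylinders instead of raw points replaces the nearest-neighbour step with feature correspondence, which is the direction this research takes.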

Grammar-based automatic 3D model reconstruction from terrestrial laser scanning data and model matching

Leader: Cynthia Yu

Supervisor: Dr Petra Helmholz

Co-supervisors: Dr David Belton, Prof Geoff West

Associate supervisor: Thomas Werner, AAM

Sponsor/supported by: Australian Postgraduate Awards (APA) at Curtin University and Top Up Scholarships from CRCSI

Total duration of the project: 3 years – Part of the CRCSI project 2.01 (Terrestrial Mapping Project): Multimodal Terrestrial Data Acquisition and Feature Extraction

Abstract: In recent years, 3D models have been used in a huge variety of applications, and the demands in quality and quantity are steadily growing. The project will investigate a grammar-based method for automatic 3D model reconstruction and further explore model matching using the reconstructed results. Terrestrial laser scanning (TLS) data, which contains the location and intensity information of each 3D point, and the corresponding images with RGB information will be used as the primary source for the reconstruction. The proposed method will combine a stochastic algorithm with formal grammars to find the optimised 3D models that fit the point clouds. The grammars will define a set of mathematical expressions of the basic structures and the structure attributes, as well as the rules to manipulate these elements. The stochastic algorithm will provide a feasible way to find the best combinations of the basic structures and corresponding rules. Further, a proof-of-concept automatic model reconstruction framework based on the proposed grammar-based method will be explored. The reconstructed results will also be compared with given models to identify any differences for model matching purposes.

The proposed research programme is to automatically reconstruct 3D models from images and 3D point cloud data acquired from laser scanners, and to explore model matching using the reconstructed results. The reconstructed 3D models will mainly focus on various buildings and facades with depth visible from different angles. Other man-made objects in the urban environment, such as street lights, road signs, mail boxes and so on, will also be taken into account in the proposed automatic method. The main objectives of the research are as follows:

  • Exploit a grammar-based method that can be used in the process of 3D building model reconstruction.
  • Pre-process raw point cloud data using one or more existing robust segmentation methods, such as those by Nurunnabi et al. (2012).
  • Define/learn geometry-based grammar and rules.
  • Invoke the proposed grammar and rules to generate parameterised 3D building models and 3D façade models.
  • Investigate and implement efficient automatic methods to discover the best parameter values.
  • Explore methods to define and visualise accuracy and quality criteria for the reconstructed models.
  • Explore matching across different data/model representations, e.g. reconstructed model and given model.
  • Potentially extend the proposed grammar-based method to reconstruct other man-made objects and match them with actual models for evaluation purposes.
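As a toy illustration of the grammar-plus-stochastic-search idea described above, the following sketch derives a parameterised facade with two split rules and searches the rule parameters against a scoring callback that stands in for point cloud fit. The rules, names and scoring are hypothetical, not the project's grammar:

```python
import random

# Toy split grammar for a facade: a hypothetical illustration of the
# grammar-based reconstruction idea, not the project's rule set.
# A region is a tuple (label, x, y, width, height).

def split_floors(region, n_floors):
    """Rule 1: facade -> n horizontal floor strips."""
    _, x, y, w, h = region
    fh = h / n_floors
    return [("floor", x, y + i * fh, w, fh) for i in range(n_floors)]

def split_tiles(region, n_tiles):
    """Rule 2: floor -> n vertical window tiles."""
    _, x, y, w, h = region
    tw = w / n_tiles
    return [("window_tile", x + i * tw, y, tw, h) for i in range(n_tiles)]

def derive_facade(w, h, n_floors, n_tiles):
    """Top-down derivation: facade -> floors -> window tiles."""
    tiles = []
    for floor in split_floors(("facade", 0.0, 0.0, w, h), n_floors):
        tiles.extend(split_tiles(floor, n_tiles))
    return tiles

def random_search(w, h, score_fn, trials=100, seed=0):
    """Stand-in for the stochastic optimiser: sample rule parameters
    and keep the derivation with the best score; in the real system the
    score would measure agreement with the point cloud."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        params = (rng.randint(1, 8), rng.randint(1, 8))
        s = score_fn(derive_facade(w, h, *params))
        if s > best_score:
            best, best_score = params, s
    return best
```

A real system would use richer rules (doors, balconies, setbacks) and a guided sampler such as reversible-jump MCMC rather than uniform random search.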

Automated and generic identification and analysis of high level features from multi-modal spatial data

Leader: Richard Palmer

Supervisor: Prof Geoff West

Co-supervisor: Assoc Prof Tele Tan

Sponsor/supported by: CRCSI as part of the CRCSI project 2.01 (Terrestrial Mapping Project): Multimodal Terrestrial Data Acquisition and Feature Extraction

Abstract: Object recognition research has thus far been predominantly focused on the task of recognising objects from preordained categories in well-defined data sourced from single sensors. The increasing availability of large image and depth datasets of urban areas collected via many different kinds of sensor, and the ubiquity of high powered computers is motivating research into less restrictive domains. Existing high accuracy object recognition methods require a system to be trained prior to deployment on live data; once in the field, there is little possibility of the system being “reprogrammed” to recognise objects and their location from user defined categories.

This research aims to develop state of the art object recognition methods that leverage the increased information content available from multi-modal fused datasets and further, allow users to define new object categories post-deployment for the system to recognise. This will be accomplished by demonstrating how the use of multiple low-level features, innovative object representation schemes and modern machine learning techniques can best be combined to maximise the potential for object discrimination, recognition and pose estimation even when dealing with previously unknown object categories.

Feature extraction from multi-modal mapping data

Leader: Michael Borck

Supervisor: Prof Geoff West

Co-supervisor: Assoc Prof Tele Tan

Sponsor/supported by: CRCSI as part of the CRCSI project 2.01 (Terrestrial Mapping Project): Multimodal Terrestrial Data Acquisition and Feature Extraction

Abstract: Automated recognition and analysis of objects in images from urban transport corridors are important for many applications including asset management, measurement, location, analysis and change detection. Current object recognition algorithms are not robust enough to automatically label all objects solely from images, and interactive tagging tools require significant manual effort. The availability of registered 2D images and 3D scanner data (“3D images”) of real-world environments has created new opportunities for automated object labelling. Automatically tagging objects in large 3D images is complex and computationally demanding. It is proposed to segment the images into regions and then classify the objects within these regions. An interactive interface to select region exemplars will be developed. Extracting features from these exemplars and using machine learning, relevance feedback and other techniques will allow similar regions of interest within the data set to be identified and labelled. Algorithms will be developed to enable efficient search through the data set. Features will be recognised in 2D imagery as well as in 3D point clouds. Techniques and workflows will be developed for the selection of exemplars, and algorithms will be developed to search the image space in order to locate and segment regions of interest from terrestrially scanned 3D point clouds and 2D imagery of an urban environment.
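At its simplest, the exemplar-driven labelling step described above amounts to nearest-exemplar classification in feature space. The sketch below is a generic illustration of that idea (the feature vectors, labels and distance threshold are placeholders, not the project's algorithm):

```python
import numpy as np

def label_regions(region_feats, exemplar_feats, exemplar_labels, max_dist):
    """Assign each region the label of its nearest exemplar in feature
    space, or None if no exemplar is close enough (the 'unlabelled'
    case a relevance-feedback loop would then refine)."""
    labels = []
    for f in region_feats:
        d = np.linalg.norm(exemplar_feats - f, axis=1)
        i = int(d.argmin())
        labels.append(exemplar_labels[i] if d[i] <= max_dist else None)
    return labels
```

In the proposed workflow the user's accept/reject feedback on these labels would be fed back to retrain the classifier and tighten the match threshold.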

Estimating growth rates and above ground biomass using remote sensing in the sub-tropical climate zones of Australia (completed in 2015)

Leader: Charity Mundava

Supervisor: Dr Petra Helmholz

Co-supervisor: Dr Robert Corner

Associate Supervisor: Dr Brendon McAtee, Landgate

Sponsor/supported by: Scholarship from CRCSI

Total duration of the project: 3 years; part of the CRCSI project 4.21

Abstract: Western Australia covers approximately one third of the total land mass of Australia, and rangelands constitute 87% of the land area in the state. Remote sensing can be used as an aid in assessing and mapping total standing above ground biomass in rangelands. This in turn provides producers with information concerning the availability of feed in pastures and potentially optimal stocking rates. Efforts have been made in the past to develop tools that help in forage assessment. Direct (clipping quadrats) and indirect methods (pasture height, visual estimates, remote sensing) have been used for above ground biomass estimation. Indirect methods are faster, minimise sampling time and can be scaled from the site to the landscape scale. However, current methods to measure above ground biomass do not deliver adequate results in relation to the extent and spatial variability that characterise rangelands.

In this context, the thesis focused on assessing total standing above ground biomass for rangeland stations in northern Western Australia. The focus was on Liveringa station in the Kimberley. The research investigated both empirical and semi-empirical approaches, in combination with remote sensing and environmental data, to derive above ground biomass assessment models. These models, covering standing above ground biomass and growth rates, were calibrated and validated with field-based measurements to assess the amount of above ground biomass available as a function of the land system, in order to optimise grazing management. Field-based measurements were taken during an extensive field campaign covering two seasons.

Remotely sensed data used in this study were obtained from medium- and coarse-resolution satellites. The starting point was to develop a field data collection protocol to measure above ground biomass in heterogeneous environments using a combination of field data from visual estimates, a rising plate meter and a hand-held radiometer (Crop Circle). The protocol provided accurate assessments of total above ground biomass for sites dominated by Bunch grass and Spinifex vegetation (“Leave-Site-Out” Q2 values of 0.70-0.88), while assessment of green AGB was accurate for all vegetation types (“Leave-Site-Out” Q2 values of 0.62-0.84). The protocol described could be applied at a range of scales while considerably reducing sampling time. Relationships were also developed between remotely sensed indices and total and green AGB using Landsat ETM+ data. Single and multiple regression relationships between single-date vegetation indices and green and total AGB were calibrated and validated. The cross-validation results for green AGB improved for a combination of indices for the Open plains and Bunch grass sites, but not for Spinifex sites. When rainfall and elevation data were included, cross-validation results improved slightly, with Q2 values of 0.49-0.72 for Open plains and Bunch grass sites respectively. Cross-validation results for total AGB were moderately accurate (Q2 of 0.41) for Open plains but weak or absent for other site groups despite good calibration results.

Time series of NDVI were temporally filtered and smoothed with a Savitzky-Golay filter using TIMESAT software for the years 2010 to 2013. Sites with the vegetation types Spinifex, Bunch grass and Open plains clearly differed in their multi-temporal NDVI patterns, with typically a limited range in amplitude for sites with Spinifex vegetation and the largest range in amplitude for Open plains sites, which senesce earlier than sites with Bunch grass.
Landsat ETM+ NDVI correlated moderately strongly with Crop Circle NDVI (R2 of 0.6). Landsat ETM+ and MODIS NDVI were moderately strongly correlated, indicating that atmospheric noise is substantial. NDVI explained up to 88% of the variation in the green components of the vegetation; however, relationships were only significant when aggregating to vegetation types. It was found that green AGB can be monitored accurately with cumulative temporally smoothed MODIS NDVI for the vegetation types Open plains and Bunch grass, but not for Spinifex. Strong curvilinear relationships between cumulative NDVI and cumulative green AGB were found (R2 = 0.89) for the vegetation types Open plains and Bunch grass. Fitted NDVI with green above ground biomass had an R2 of 0.6 for the greener part of the season (Open plains). The overall goal of predicting AGB with remote sensing could be achieved for some vegetation types (Open plains and Bunch grass) but not for Spinifex.
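The temporal smoothing and cumulative-NDVI steps described above can be reproduced outside TIMESAT with the same filter family. The sketch below uses SciPy; the window length, polynomial order and baseline are illustrative choices, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_ndvi(ndvi, window=7, polyorder=2):
    """Savitzky-Golay smoothing of an NDVI time series (TIMESAT applies
    the same filter family); window must be odd and > polyorder."""
    return savgol_filter(ndvi, window_length=window, polyorder=polyorder)

def cumulative_ndvi(ndvi, baseline=0.0):
    """Cumulative baseline-subtracted NDVI, the predictor the study
    relates to cumulative green AGB; negative excursions are clipped."""
    return np.cumsum(np.clip(ndvi - baseline, 0.0, None))
```

The cumulative curve would then be regressed against field-measured green AGB per vegetation type, as in the relationships reported above.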

Goal-directed feature extraction from terrestrial mobile mapping laser scanning data (completed in 2014)

Leader: Abdul Awal Md Nurunnabi

Supervisor: Prof Geoff West

Co-supervisor: Dr David Belton

Associate supervisor: Ireneusz Baran, AAM

Sponsor/supported by: International Postgraduate Research Scholarship (IPRS) at Curtin University, and Top up scholarship from CRCSI.

Total duration of the project:  3 years – Part of the CRCSI project 2.01 (Terrestrial Mapping Project): Multimodal Terrestrial Data Acquisition and Feature Extraction

Abstract: It is impractical to imagine point cloud data obtained from laser-scanner-based mobile mapping systems without outliers and/or noise. In the presence of outliers/noise, the most frequently used methods for point cloud analysis (e.g. surface fitting, reconstruction, modelling and segmentation) are non-robust and give inaccurate results. We are researching automatic feature extraction and error analysis in terrestrial mobile mapping laser scanning (TMMLS) data. We have been investigating state-of-the-art computer vision, pattern recognition, photogrammetry and remote sensing techniques, as well as classical, diagnostic and robust statistical techniques. We have developed an algorithm for statistically robust local planar surface fitting based on diagnostic-robust statistical approaches. The algorithm outperforms classical methods (such as LS and PCA) and shows distinct advantages over current methods including RANSAC in terms of computational speed and sensitivity to the percentage of outliers and the number of points, and it gives a better approximation in the case of thick planes. Two region-growing based segmentation algorithms have been developed for the extraction of multiple planar and non-planar complex surfaces (e.g. long cylindrical and approximately cylindrical surfaces such as poles, lamps and sign posts).

We have proposed two highly robust outlier detection algorithms that are able to identify outliers and are efficient for reliable estimation of local saliency features (e.g. curvatures and normals). Results from artificial and real 3D point cloud data show that the methods have advantages over other existing popular techniques: (i) they are computationally simpler; (ii) they successfully identify a high percentage of uniform and clustered outliers; and (iii) they are more accurate, robust and faster than existing robust methods such as RANSAC and MSAC. Application to region growing shows that the proposed methods reduce segmentation errors and provide better segmentation and robust feature extraction results.

The developed methods have potential for surface edge detection, surface reconstruction and fitting, sharp feature extraction, registration, and covariance-statistics-based point cloud processing. Research is continuing on processing large volumes of data and on merging several segments into one where required. Further research will address robust fitting of non-planar geometric primitives (e.g. cylinders) for complex feature extraction and modelling in 2D-3D point cloud data.
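For context, the PCA and RANSAC baselines that the abstract compares against can be sketched as follows. This is a generic implementation of RANSAC plane fitting, not the group's diagnostic-robust estimator:

```python
import numpy as np

def fit_plane_pca(pts):
    """Total least-squares plane through pts: the normal is the
    direction of smallest variance (last right singular vector of the
    centred data). Returns (unit_normal, centroid). Non-robust:
    a few gross outliers can tilt the result badly."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    return Vt[-1], c

def fit_plane_ransac(pts, n_iter=200, tol=0.01, seed=0):
    """RANSAC plane fit: repeatedly sample 3 points, count points
    within tol of the implied plane, then refit on the best consensus
    set with PCA."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        i = rng.choice(len(pts), 3, replace=False)
        n, c = fit_plane_pca(pts[i])
        d = np.abs((pts - c) @ n)
        inliers = d < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane_pca(pts[best_inliers])
```

The diagnostic-robust approach in the abstract replaces the random sampling with statistically principled outlier diagnostics, which is where the reported speed and thick-plane advantages come from.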

Graduate projects (Master by Coursework, Honours)

As Constructed Surveys of Fabrications using Photogrammetry

Leader: Brendon Barnes

Supervisors: Dr David Belton, Dr Petra Helmholz, Dr Sten Claessens

Total duration of the project: 1 year (pending)

Abstract: The goal of this research is to develop a method to produce 3D models of as-constructed fabrications. These data are needed in the fabrication industry to verify that the produced fabrication meets the specifications of the plan. The method developed in this research is intended to be used repeatedly to produce as-constructed surveys of different fabrications before they are transported to site. The difficulty is that the fabrication industry does not want to shut down to produce 3D models; therefore accurate models need to be developed in less-than-perfect conditions, with minimal time and cost. The photogrammetric data will be collected using a digital SLR camera and a scale bar with automatic magnet targets. A set 1 total station will also be used to coordinate a distribution of targets from different set-ups, and the observations will then be put through a least squares program. The aim is to achieve 3D coordinates with an accuracy of better than 2 mm. The angles and distances between targets can then be calculated using the 3D model and the observed total station coordinates; these angles and distances can then be compared to give an indication of the accuracy achieved. The 3D model will then be compared to the fabrication plans to check whether it is within the plans' specification.
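The accuracy check described above (comparing inter-target distances between the photogrammetric model and the total station survey) can be sketched as follows. This is a generic illustration with our own function names, not the project's software:

```python
import numpy as np

def pairwise_distances(pts):
    """All inter-target distances for a set of 3D coordinates (N, 3)."""
    d = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

def max_distance_discrepancy(model_pts, survey_pts):
    """Largest difference between corresponding inter-target distances
    in the photogrammetric model and the total-station survey. Using
    distances rather than coordinates makes the check independent of
    the two datums' position and orientation."""
    return float(np.abs(pairwise_distances(model_pts)
                        - pairwise_distances(survey_pts)).max())
```

A result comfortably under the 2 mm target across all target pairs would indicate the accuracy goal has been met.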

Analysis of different spectral laser scanning data on indigenous artwork at Walga Rock

Leader: Bjorn Skoog

Supervisors: Dr David Belton, Dr Petra Helmholz

Total duration of the project: 1 year (pending)

Abstract: During the mid-year 2014 Curtin University Student Survey Expedition, one project consisted of performing high-density laser scanning and photogrammetric capture of Indigenous artwork at Walga Rock near Cue. Walga Rock itself is a large granite monolith approximately 1.5 km long and 5 km around the base. It is listed in the Register of the National Estate due to the large gallery of Indigenous artwork located in a large depression on one side. The section is close to 100 m long and contains an impressive display of motifs with varying degrees of style. The artwork also contains significant levels of overlay and varying amounts of degradation.

During two survey campaigns, the following data were captured over the entire rock face:

  • Very high density laser scanning data from multiple setups using Leica’s C10
  • Very high density laser scanning data from multiple setups using Trimble TX5
  • A high number of images using a digital SLR camera
  • A high number of images using Trimble’s image rover V10

In addition, a control network was established. The immediate results from these expeditions are 3D models and imagery to maintain a record of the artwork, as it will degrade over time. However, the goal of this research project is specifically to analyse the use of the different wavelengths captured with the different devices in order to optimise the output for plotting the rock art for heritage preservation. While multi-spectral analysis of images is common in remote sensing, these approaches have not been transferred to 3D data captured by laser scanning devices. The instruments used to capture the rock operate at different single wavelengths, i.e. green light (532 nm, Leica C10) and infrared light (905 nm, Trimble TX5), as well as bands of wavelengths within the red, green and blue spectrum of visible light (digital SLR, Trimble's V10).

The project includes the processing of the data from all devices following best practice, including geometric and radiometric calibration (if required) and the co-registration of the data. Afterwards, approaches suitable for considering the different wavelengths when plotting the rock art will be explored. Finally, a (semi-)automatic workflow will be developed, implemented and evaluated.

Effects of solar radiation and the Sun’s elevation angle on structural deformation

Leader: Shane Leknys

Supervisors: Dr David Belton, Dr Petra Helmholz

Total duration of the project: 1 year (pending)

Abstract: Using building 111 at Curtin University, otherwise known as the Dome, two data capture techniques are utilised in an attempt to detect any deformation of the building's roof. These two methods comprise a laser scan of the whole building, inside and out, and images taken with a camera inside the building for photogrammetric analysis. Since it is not actually known whether the building deforms, the first objective is to see if there are any changes within both data sets and, if so, to compare the magnitude and direction of the deformation in both sets of data.

As the building has a primarily metal roof, it is expected that deformation will occur when any type of heat is present. As the main heat source is the sun, the next step will be to see if there is any correlation between the sun's elevation angle and the direction of deformation. In the past, deformation has been analysed over periods of days to months; however, in this project any deformation present will be analysed at separate epochs over a period of one day.

Crop monitoring and management using UAV

Leader: John Long

Supervisors: Dr David Belton, Dr Petra Helmholz, Darren Wilkinson (Land Surveys), Dr Ayalsew Zerihun (Dept. of Agriculture and Environment)

Total duration of the project: 1 year (pending)

Abstract: The recent introduction of Unmanned Aerial Vehicles (UAVs) and the development of photogrammetric software to process the imagery have provided the surveying industry with the ability to capture and process mass data within acceptable timeframes. A project has been developed in conjunction with the Department of Agriculture and Environment at Curtin University to analyse the suitability of UAV technology for detecting changes in the growth rate of crops. The UAV data captured will be merged with additional remote sensing data in order to predict areas within the crop fields which require site-specific management.

The goal of this project is to create 3D models of the crop fields constructed from the images captured with the UAV. These models will be developed for the different times at which the images were captured and will display the crop growth stages over time. The 3D models will then be analysed and compared against the different treatments applied to the separate crop trials to determine appropriate methods for achieving maximum crop growth/yield based on the fertiliser and disease management applied.

The main questions to be answered by this project are:

  1. Can plant height data be captured from aerial images and correlated across different time periods?
  2. What is the coarsest image resolution at which plant height data can still be captured and calculated?
  3. What is the correlation between the plant height data (DEM) and the treatments applied to the separate trial crops?
  4. Can the method of processing aerial crop images to determine the required treatment be used by non-professionals in the future?

The data needed to achieve the objectives will consist of multiple UAV flights over the same trial crops at different stages of the crop's growth cycle. The areas imaged will contain well-spaced and easily visible ground control points (GCPs). These GCPs will be conventionally surveyed with GPS to determine their positions and to support the orientation of the images. The GCPs will also be used in computation to help calculate the scale and orientation between the different images captured and to provide accurate data on the models created. Multiple UAV flights will be conducted on the same day at different heights, for example 30 m, 50 m and 100 m. These flying heights will be held constant throughout the project to allow easy comparison between crop growth models and to determine an appropriate overall flying height that gives the finest data and information. As an independent check of the computed plant growth heights in each trial crop, physical measurements will be taken in the field in multiple sections of the separate crop trials.

The methodology for this project includes not only the image capture but also the processing of the aerial images. The preferred program for image assessment and model creation will be 3DM Analyst from Adam Technology. 3DM Analyst offers advanced functions for model orientation using both camera positions and control points, along with the option to use natural image points to improve the image and model orientations. The processing of the aerial images will include determining the camera parameters and orientation, applying these to the images, and overlaying the images to form 3D models. 3DM Analyst will also allow different information to be extracted from the images to verify the overall figures on crop growth, such as the Normalised Difference Vegetation Index (NDVI). This will act as supporting evidence of crop growth and volume. The models will then be analysed to determine the growth of the separate crop trials and compared to models formed from images captured later, to give the difference in growth over time.
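The plant-height computation from the photogrammetric models can be sketched as a raster difference: a canopy height model (surface model minus bare-earth terrain), averaged per trial plot. This is a generic illustration, not the project's specific workflow:

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Crop height raster: digital surface model minus bare-earth
    terrain model, with negative values clipped to zero."""
    return np.clip(dsm - dtm, 0.0, None)

def plot_mean_heights(chm, plot_ids):
    """Mean crop height per trial plot; plot_ids is an integer raster
    labelling each cell with its plot number (0 = outside any plot)."""
    return {p: float(chm[plot_ids == p].mean())
            for p in np.unique(plot_ids) if p != 0}
```

Differencing the per-plot means between flight dates gives the growth-over-time figures that are then compared against the applied treatments.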

3D Reconstruction of the HMAS Sydney II shipwreck (completed 2014)

Leader: Joshua Hollick

Supervisor: Dr Petra Helmholz

Co-supervisor: Andrew Woods (Centre for Marine Science and Technology) and Andrew Hutchison ( School of Design and Art)

Sponsor/supported by: iVEC and Curtin University

Total duration of the project: 3 months, finished Feb 2013

Abstract: HMAS Sydney II and HSK Kormoran were sunk during World War II and currently lie approximately 200 km off the west coast of Australia at a depth of 2.5 km. The wrecks were discovered in 2008 and, shortly after, the Finding Sydney Foundation performed a series of ROV dives capturing significant amounts of video and still images. The aim of this project is to investigate the possibility of creating 3D models of the shipwrecks and to provide recommendations for planning future dives, which are expected to take place in 2014. There are significant challenges in creating 3D models from the current data, as 3D reconstruction was not a consideration when the data were collected. The dataset is large and mostly unstructured, which in itself poses significant challenges. In addition, the images and video are often blurry or contain noise from various sources. To combat these issues we have developed techniques to process the dataset. We first filter the videos to remove unusable frames, then extract a minimal number of frames while maintaining connectivity. At this point we classify the images and video based on image features, then proceed to match images. Images are matched based on temporal information from the video, location data (if available) and the classification. Using this information to guide matching reduces the complexity from O(n²) to something more manageable. Finally, point clouds and textured models are generated. Using these techniques, the results of the project have been positive, and several parts of the ship have been modelled that either were not modelled before or extend previous models. Several recommendations have also been made for future dives, which should allow a dataset better suited to photogrammetry to be collected next time.
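The complexity reduction mentioned above can be illustrated with a minimal sketch of temporally guided pair selection: instead of matching every frame against every other, each frame is matched only against its temporal neighbours. The window size and function names are ours, not the project's:

```python
from itertools import combinations

def all_pairs(n):
    """Exhaustive matching: every frame against every other, O(n^2)."""
    return list(combinations(range(n), 2))

def guided_pairs(n, window=5):
    """Temporal guidance: match each frame only against the next
    `window` frames, O(n * window). Location data and image
    classification would prune the candidate list further."""
    return [(i, j) for i in range(n)
            for j in range(i + 1, min(i + 1 + window, n))]
```

For 1,000 frames this cuts the candidate pairs from roughly 500,000 to under 5,000, which is what makes matching a large unstructured dive dataset tractable.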

A comparative analysis of land administration systems in Western Australia and North Rhine Westphalia to identify good practice fundamentals for land administration modernisation (completed 2014)

Leader: Noel Taylor

Supervisor: Dr Petra Helmholz

Total duration of the project: 1 year

Abstract: A critical component of land administration reform initiatives has been the introduction or upgrading of systems for documenting information on the location and extent of immovable real property objects; the rights, interests and restrictions attached to these property objects; and the parties (individuals, communities, legal entities) with whom these rights, interests and restrictions are associated. Immovable property registers and cadastres provide the frameworks not only for the storage of property records, but also for the rules, procedures and systems necessary for transparent security of tenure, support for efficient formal land markets, and effective governance of land resources. In the context of wider benefits stemming from land administration reform, a well-functioning property register and cadastre contribute to increased revenue collection by governments, provide security for access to credit, and support improved land use planning and development, sustainable natural resource management, and protection of publicly owned lands.

When examining options for upgrading or reforming immovable property registration and cadastral systems, government policy makers and other stakeholders are generally presented with three major land registration systems from which to choose.

High levels of informality in the real property sector are quite often what drive governments, particularly in developing and emerging economies, to undertake comprehensive cadastre and registration reform programs. Given the role of private conveyancing in enabling such informality, these programs generally choose between systems of deeds and title registration. While deeds registration continues to be used as the land registration system in a majority of countries around the world, the primary options for enhancement include replacement of person-based indexes with parcel-based indexes (which may entail significant cadastral surveying and mapping reform), simplification of transaction processes, and computerisation of registers.

When deeds registration systems are deemed to no longer adequately serve the needs of a country’s land administration system, whether for political, legal, financial, cultural or technical reasons, the introduction of title registration systems has been the choice of many countries. The need to make choices does not stop with the simple selection of title registration as the designated system to introduce. First, a government must decide whether it will adopt a German or Torrens/English style title registration system. From there comes consideration of the myriad policy, legal, institutional, technical and financial aspects of the system for implementation.

With the convergence of advancements in technology, from which the land administration sector has benefited enormously, and the emergence of a globally connected society, there is now far greater emphasis on systems of good governance providing transparent access to current and concise data, and the delivery of efficient and cost effective land administration services.

The aim of this research, therefore, is to demonstrate that the selection of a German style title registration system over the Torrens/English style system, or vice versa, is ultimately inconsequential. The two variations are so fundamentally similar that it is possible to identify a core set of factors and principles inherent in all well-functioning title registration systems. These could then be adapted and adopted as components of wider undertakings by other jurisdictions wanting to modernise their existing land administration systems.

Underwater photogrammetry: investigating ruggedized camera equipment and robotic vision data processing methods (completed 2013)

Leader: Carolyn Martin

Supervisor: Dr Petra Helmholz

Total duration of the project: 1 year

Awards: Australasian Hydrographic Society (AHS) Education Award 2014; SSSI WA Postgraduate of the Year Award 2014

Abstract: This project explores the suitability of ruggedized adventure camera equipment and robotic vision software to create photorealistic 3D models of underwater objects.

It addresses questions surrounding the viability of using photogrammetry underwater, and improving the mapping and display of underwater objects, particularly for diver location and navigation. The research uses mainstream photogrammetric equipment and software to provide baseline data, which are compared to data collected by a ruggedized adventure camera (GoPro) and processed with robotic vision software (VisualSFM) using structure-from-motion techniques. The study shows that, when used underwater, the GoPro is capable of producing accuracy similar to that of a mainstream in-air camera. Combined with its affordability, usability and prevalence in the diving community, this makes the GoPro a suitable and attractive option for wider-scale underwater data collection. The study also shows that automatic processing of images using robotic vision software is faster and more effective than the comparable functionality available in commercial photogrammetry software, with its additional output of photorealistic 3D models a further advantage of the approach.
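The kind of accuracy comparison described above is often reported as a root-mean-square error over common control points once both models are in the same datum. The sketch below is a hedged illustration only (not the thesis's actual procedure); the coordinates and function name are invented.

```python
import math

def rmse(reference, test):
    """Root-mean-square 3D error over corresponding control points (metres)."""
    errs = [math.dist(r, t) ** 2 for r, t in zip(reference, test)]
    return math.sqrt(sum(errs) / len(errs))

# Invented control-point coordinates: baseline survey vs GoPro-derived model.
baseline = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.5)]
gopro    = [(0.002, -0.001, 0.0), (1.003, 0.0, 0.001), (0.998, 1.002, 0.499)]
print(f"RMSE: {rmse(baseline, gopro) * 1000:.1f} mm")  # → RMSE: 2.8 mm
```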

A tool for feature extraction and classification of Point Cloud (completed 2013)

Leader: Sukanya Bhasin

Supervisors: Dr David Belton, Dr Petra Helmholz, Dr David McMeekin

Total duration of the project: 1 year

Abstract: Laser scanners provide highly accurate survey data for surveyors to work with. The captured data take the form of a cloud of points, usually defined by X, Y and Z coordinates in a three-dimensional coordinate system. The data file consists of millions of measurable points which can be further scrutinised for a number of applications, such as the modelling of buildings (Landes et al., 2012) or of trees (Raumonen et al., 2013).

The increasing use and significance of 3D point clouds has created a need for useful and feasible methods to process the data. One particular issue which has to be solved is the calibration of 3D scanning systems. The establishment of a calibration field for this purpose is a work in progress within the Department of Spatial Sciences at Curtin University.

However, in order to be able to apply a calibration method, suitable objects/object parts must first be extracted by identifying regions as different geometrical objects. For this report, we focus on three geometrical shapes – plane (any region which has a flat, reasonably smooth surface), pole (any region which is elongated and stick-like) and sphere (any region which has no dominant direction and looks like a sphere or ball). Afterwards, points of the same class are segmented (grouped together) into objects or object parts. These objects/object parts extracted from a point cloud acquired by a laser scanner are then compared to a reference data set in order to find the calibration parameters. The overall purpose of this project is to create a robust tool to extract prominent features and classify the points into classes such as plane, pole and sphere. This can be considered the initial step for laser scanner calibration.
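A common way to realise this plane/pole/sphere classification is the eigenvalue-based shape test: the covariance of a point's local neighbourhood is decomposed, and the relative sizes of the eigenvalues λ1 ≥ λ2 ≥ λ3 indicate whether the neighbourhood is elongated (pole), flat (plane) or direction-free (sphere). The sketch below illustrates that standard technique; it is not the project's actual tool, and the synthetic neighbourhoods are invented for demonstration.

```python
import numpy as np

def classify_neighbourhood(points):
    """points: (n, 3) array of a local neighbourhood; returns a class label."""
    cov = np.cov(np.asarray(points).T)            # 3x3 covariance of the neighbourhood
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    # Standard dimensionality features derived from the eigenvalues.
    scores = {
        "pole":   (l1 - l2) / l1,   # linearity: one dominant direction
        "plane":  (l2 - l3) / l1,   # planarity: two dominant directions
        "sphere": l3 / l1,          # sphericity: no dominant direction
    }
    return max(scores, key=scores.get)

# Synthetic neighbourhoods: a thin flat patch, a thin stick, and an isotropic blob.
rng = np.random.default_rng(0)
flat  = np.c_[rng.uniform(0, 1, 200), rng.uniform(0, 1, 200), rng.normal(0, 0.001, 200)]
stick = np.c_[rng.uniform(0, 1, 200), rng.normal(0, 0.001, 200), rng.normal(0, 0.001, 200)]
blob  = rng.normal(0, 1, (200, 3))
print(classify_neighbourhood(flat), classify_neighbourhood(stick), classify_neighbourhood(blob))
# → plane pole sphere
```

In practice the neighbourhood would come from a k-nearest-neighbour or fixed-radius query around each scanned point, and the per-point labels would then be segmented into the objects/object parts used for calibration.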