NZGRC-2019 – Abstracts



What progress have we made in the Spatial Sciences?


Peter Whigham


This presentation will start by revisiting a paper I wrote for GeoComputation 2001 that outlined 23 areas of research that I argued were important for the progress of the spatial sciences. In a somewhat light-hearted way I’ll look at several aspects where progress has been made, some concepts that are now well handled by current technology, and a few examples where there still appears to be a mismatch between the methods and the vision I presented 18 years ago.
The second part of the talk will address more technical aspects of space and spatio-temporal modelling. We will present ideas related to: using a correlated variable in space to allow randomisation of point patterns when testing clustering; accounting for spatial metric bias; and the perils of hidden and exogenous variables with spatio-temporal data. I will conclude by asking the question: Why are we still building models of correlation when we often want to understand causality?
Short Bio: Peter A. Whigham is an Associate Professor in the Information Science Department, University of Otago, having moved to Dunedin in 1999. He was the director of the Spatial Information Research Centre (SIRC) from 2001 till 2013, and the coordinator of the Spatial Information Processing Theme (a university research theme) from 2003-2006. Peter first worked in GIS in the late 1980s while at CSIRO (Australian Government research centre), developing a spatial expert system for modelling natural resource problems and building decision support systems for land use planning. He has worked in research areas such as ecology, public health, finance, theoretical biology, geology, surveying and evolutionary computation. He is currently the coordinator of the Master of Business Data Science at Otago University and has PhD students working in areas such as genetic programming, modelling fish distribution, theoretical spatio-temporal models, manipulation in financial systems and the relationship between housing and public health.

Plenary talks


Multiple Environments for Visualising Spatial Reality: Blurring the lines between geospatial visualisation and virtual reality


Keri Niven


Visualisation tools very effectively tell the story of a project, by giving context to complex information and providing a powerful mechanism for engagement. Just as in real stories, the way we present our information is incredibly important in defining how that information is “made real” to users. So, too, is our selection of tools.
In engineering the conversation has matured from “how can we model our designs using BIM” to “how can we integrate our BIM models with other datasets to create collaborative environments that make review and design processes more efficient”.
More and more, we are presented with multiple options for creating these environments. Virtual and Mixed Reality tools are now widespread – enabling designers, engineers, planners and architects to come together into a single virtual environment where designs can be viewed in their “digital world”. This “democratisation of the design process” – where design is no longer the single preserve of experts – has created a basic requirement for visual environments that transcend multiple levels of technical understanding.
In virtual reality you truly get a feel for the geometry and space represented by your designs. Multi-player virtual environments create next-level digital storytelling, by engaging multiple parties in a manner more “realistic” and immersive than can ever be achieved on a screen. Add to this the growing importance of parametric design – which, as it matures, is streamlining and automating the design-model-to-visual-model process, making collaboration possible in near-real time.
Where does this leave geospatial technology? Are geospatial environments simply a tool to prepare datasets for use in VR? Perhaps the basic concepts that lie at the heart of geospatial – including real-time updates, advanced analysis, modelling and data manipulation – may offer significant advantages when leveraged in a visualisation environment.

Creating impact from spatial information


Nathan Quadros


Data and spatial information are now an important component of virtually every operational workflow across all sectors in Australia and New Zealand, and this integration is becoming critical for informed real-time decision making. Global technology trends have heavily influenced the way we acquire, process and manage data. The way we work and collaborate is changing, meaning that organisations which can build processes to collaborate and organise efficiently are best placed to address this growing need.
FrontierSI, a not-for-profit company, was established following 15 years of operations as the Cooperative Research Centre for Spatial Information. Over those 15 years it developed the capabilities needed to collaboratively tackle big, cross-sectoral spatial research challenges. It exists to deliver major benefits to governments, industry and the community, using its deep expertise in spatial mapping, infrastructures, positioning, geodesy, analytics and standards.
Dr Nathan Quadros, FrontierSI’s Chief Commercial Officer, will present some of the key research initiatives of FrontierSI – past, present, and future – that are helping to inform real time decision making. This will include examples of the way FrontierSI extracts value to create impact from spatial information across multiple sectors and disciplines.

Lowering the Barriers to Scalable Geospatial Computation


Eric Shook, Coleman Shepard and Tyler Buresh


Achieving scalable geospatial computation is a massive challenge due to the technical, methodological and conceptual barriers in integrating geographic information science and computational science. This talk will highlight two projects aiming to lower those barriers and provide information on how scholars can learn more about each project.
The Hour of Cyberinfrastructure (Hour of CI) is a US National Science Foundation supported project aiming to introduce hundreds of diverse undergraduate and graduate students to cyberinfrastructure and to help them develop Cyber Literacy for Geographic Information Science. The project will produce 17 interactive, problem-based lessons that cover foundational concepts and introduce core skills. Each lesson will be freely available online as a Jupyter Notebook. Simply put, the Hour of CI serves as a gateway for students and early career professionals from diverse backgrounds to begin exploring geospatial computing.
The second project aims to lower barriers to entry by establishing a domain-specific language for geospatial computing. The language is designed For Expressing Spatial-Temporal computation and called ForEST. The talk will review key features of the ForEST language and how it is designed to lower barriers to entry for scalable parallel computation. Two ongoing research projects that are in active development will be discussed as case studies. The first project is using ForEST to leverage Graphics Processing Units (GPUs) to simulate the spread of the invasive Brown Marmorated Stink Bug across the state of Minnesota in the United States. The second project is using ForEST to create a scalable workflow to delineate and map farm fields across the state of Minnesota using satellite imagery, with the aim of achieving continental-scale mapping in the long term.

Regular talks


Identifying high-risk agricultural activity in New Zealand hill country with remote sensing


Alexander Amies, Stella Bellis, Heather North, David Pairman, John Dymond, Jan Zoerner, James Shepherd and John Drewry


Soil erosion can have a significant impact on the sediment and associated nutrient content of waterways. Limiting practices which cause erosion is thus desirable to preserve water quality. Agricultural hill country land which is allowed to be grazed down to near-bare soil in winter has been identified as a major cause of soil erosion. This project for the Ministry for the Environment sought to identify locations of bare ground in paddocks planted with pasture or winter forage during winter 2018 through large-scale spectral and temporal analysis of Sentinel-2 imagery.
Suitably cloud-free satellite passes from September 2017 to November 2018 were manually identified. Manaaki Whenua’s cloud-masking procedures, using a time-series-controlled layer (TMASK), were then used to remove any remaining cloud, cloud shadow, and snow. The imagery was clipped to regional boundaries to allow for parallelised processing on the New Zealand eScience Infrastructure (NeSI) system.
A paddock boundary vector layer was generated to enable robust land cover classification, using an algorithm based on the identification of high-standard-deviation linear features through directional filters (North, Pairman et al. 2019). Each pixel was classified using a maximum likelihood (ML) classifier trained with spectral responses and covariance matrices of 10 land cover classes developed from previous Hawke’s Bay winter forage cropping analysis.
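A Gaussian maximum likelihood classifier of this kind assigns each pixel to the class whose mean spectrum and covariance matrix make its spectral response most probable. The sketch below is illustrative only; the band count and class statistics are placeholders, not the project's trained classes:

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    pixels : (n, b) array of per-pixel spectral responses (b bands)
    means  : list of (b,) class mean spectra
    covs   : list of (b, b) class covariance matrices
    """
    scores = np.empty((pixels.shape[0], len(means)))
    for k, (mu, cov) in enumerate(zip(means, covs)):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # log-likelihood up to a constant: -0.5 * (log|C| + d' C^-1 d)
        maha = np.einsum('ij,jk,ik->i', d, inv, d)
        scores[:, k] = -0.5 * (logdet + maha)
    return np.argmax(scores, axis=1)
```

The covariance term means the classifier accounts for how tightly each class clusters in spectral space, not just how close a pixel is to the class mean.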
Risky agricultural activity was identified by selecting paddocks with a mean slope greater than 7°, a land cover of either pasture or winter forage, and at least one image of the paddock showing bare soil in winter. These paddocks were manually checked to eliminate false positives. Comparisons of winter forage land areas with previous studies were used to validate the ML classification. High-risk activity was identified on 0.69% of New Zealand’s agricultural land steeper than 7°, and likely led to 689,921 tonnes of soil loss.

Augmented GNSS – benefits to the NZ economy and its innovation opportunities


Matt Amos


Since January 2017 Land Information New Zealand and Geoscience Australia have been collaborating on a test bed for demonstrating how current and future Satellite Based Augmentation System (SBAS) technology could be used across Australia and New Zealand. The test bed commissioned 27 projects across 10 sectors to demonstrate its practical and developmental usage. The outcome of the work was recently published in an economic benefits report that indicates the potential value of SBAS to both economies.
New Zealand and Australia are now working to implement a regional SBAS over the coming years. The service will support the aviation, maritime, rail and road transport sectors, which have a requirement for high-integrity positioning with guaranteed performance. The SBAS will also support a precise correction capability that can deliver decimetre-accurate services across the region, with anticipated applications in the agriculture, maritime, mining, spatial and construction sectors. This will provide a unique opportunity for research organisations and businesses to develop innovative use cases with this world-leading technology.

Mapping NZ 2025 – integrating land and sea


Graeme Blick


Land Information New Zealand’s Mapping New Zealand 2025 programme of work aims to seamlessly map New Zealand from the top of Aoraki Mt Cook to the edge of the continental shelf. There are two overarching projects: better utilisation of Earth observations for mapping, and Joining Land and Sea (JLAS). The mapping projects focus on the land and sea, together with a new project on coastal mapping.
Key to joining all of these datasets into a seamless map is the JLAS project, which aims to develop transformations between the land and marine datums using NZVD2016 as a common reference surface, thereby enabling the integration of land and sea spatial datasets. The project will also include the integration of an improved New Zealand tidal model to enable marine datum values (e.g. MSL) to be determined away from tide gauges using GNSS.
This paper gives an overview of the Mapping NZ 2025 programme of work and the JLAS project.

One Billion Trees: A spatial analysis of reforestation scenarios for multiple benefits


Bradley Case


The New Zealand government has announced the goal of making New Zealand carbon neutral by 2050 and will be aiming to address the biodiversity crisis via a revamped Biodiversity Strategy. Further, the latest Parliamentary Report on the Environment paints a dismal picture of the state of our environment and calls for urgent action on issues such as water quality, erosion, and the potential effects of ongoing climatic changes. From an ecological perspective, targeted and science-based revegetation of privately-owned agroecosystems could address a large number of the above issues, leading to multiple beneficial outcomes. Could the One Billion Trees (1BT) programme, launched in early 2018, provide a key mechanism by which this could be achieved? To achieve a step-change via 1BT, one of the first requirements is the exploration of ecologically-designed spatial scenarios for identifying where and how agricultural landscapes could be reforested to improve multiple objectives while minimising economic impacts for farmers. In this talk, I present some recent spatial analyses that our team has been carrying out towards these goals as part of New Zealand’s Biological Heritage National Science Challenge project “Farming and Nature Conservation”. Country-scale analyses indicate that over 300,000 hectares of farmland across the country would be considered very high priority for native revegetation to improve environmental outcomes. I will discuss the relevance of these results in terms of the potential for achieving enhanced biodiversity, and other aligned goals such as carbon sequestration, within New Zealand’s sheep and beef agroecosystems.

A spatially explicit analysis of health inequalities using spatial microsimulation and self-organizing maps


Ricardo Crespo and Claudio Alvarez


This work aims to study the spatial dynamics of health inequalities at a small-area level by combining two powerful statistical modelling tools: spatial microsimulation and self-organizing maps (SOM). Spatial microsimulation offers a powerful technique for coping with limited survey data by simulating attributes of the whole population using census data together with the survey data. In turn, a SOM is a type of artificial neural network trained using unsupervised learning to produce a discrete spatial representation of the previously simulated data. Put differently, spatial microsimulation generates the input to be used in the SOM clustering process. Unlike some traditional methods whose analyses are mostly based on the spatial interaction and spatial proximity of agents (that is, on spatial autocorrelation), SOM can spatially identify groups of people with similar features despite being located far from each other. Consequently, we attempt to examine health inequality in a spatially explicit and more realistic manner in order to support health policymakers, particularly with regard to the elderly.
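As an illustration of the SOM step, a minimal 1-D map can be trained with plain NumPy. This is a toy sketch only (the grid size, learning rate and neighbourhood width are arbitrary choices, not those used in the study):

```python
import numpy as np

def train_som(data, n_units=4, epochs=30, lr=0.5, sigma=1.0, seed=0):
    """Train a tiny 1-D self-organizing map.

    data : (n, d) array of simulated individual-level attributes
    Returns the (n_units, d) codebook of prototype vectors.
    """
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    grid = np.arange(n_units)
    for t in range(epochs):
        rate = lr * (1.0 - t / epochs)  # decaying learning rate
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
            w += rate * h[:, None] * (x - w)  # pull BMU and its neighbours
    return w

def assign(data, w):
    """Label each record with its best-matching unit (cluster)."""
    return np.argmin(((data[:, None, :] - w[None]) ** 2).sum(axis=2), axis=1)
```

Because records compete for the nearest prototype in attribute space rather than geographic space, two people with similar simulated attributes receive the same cluster label regardless of where they live, which is exactly the property exploited in the analysis.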
We chose Santiago, the capital city of Chile, as our case study. Santiago is located near the centre of Chile and has an approximate population of six million, accounting for 35 per cent of the total Chilean population. Despite being ranked as a high-income country by the World Bank and having become an OECD member in 2010, Chile, with a Gini index of 0.47, is the most unequal OECD country and the seventh most unequal in the world. This inequality has also been observed in the health domain. As Chile is still a developing country experiencing a transformation from a poor to a rich country, this case study may also be particularly useful for understanding the socioeconomic dynamics of rapidly changing urban systems.

Disaster risk reduction: frameworks and data


Robert Deakin, Kasey Oomen, Susan Shaw and Matthew Wilson


The United Nations’ Sendai Framework for Disaster Risk Reduction seeks to substantially reduce the risks and losses caused by disasters to people, communities and economies.
The first of its four priority areas for action is to improve our understanding of disaster risk; without an adequate understanding of current risks our ability to plan effective actions to reduce them is compromised.
The framework was adopted in 2015 and runs through to 2030. In 2017 the United Nations Committee of Experts on Global Geospatial Information published a “Strategic Framework on Geospatial Information and Services for Disasters”. This was co-designed by member states specifically to guide and support the application of geospatial information in delivering the outcomes of the Sendai Framework.
In 2019 the New Zealand Government laid out its new National Disaster Resilience Strategy. This directly references the Sendai Framework, recognising the need to identify and understand risk to inform policy and planning decisions.
There is recognition that within New Zealand there are many gaps in and barriers to the use of data that could be used by decision makers to better understand and manage risk.
Land Information New Zealand is committed to improving the effective use of geographic information to deliver value for New Zealand, and has a clear focus on supporting those working in the domains of resilience and climate change.
This paper explores the key frameworks that set policy and priority action areas for risk reduction and highlights some of the needs analysis and data improvement work that has been undertaken relating to the use of key geospatial datasets in this important area of work. It is based on research undertaken by the Resilience Team at Land Information New Zealand in conjunction with the Geospatial Research Institute at the University of Canterbury.

Using alpha-shapes to robustly map distributions of geographic phenomena from spatially biased citizen science data


Thomas Etherington


Citizen science data in the form of point events are becoming more widely used to map the distribution of geographic phenomena. However, these unstructured data contain a spatial sampling bias, meaning that analytical techniques based on density and distance could produce misleading results. Rather than try to remove the unknown spatial sampling biases to allow the use of bias-sensitive techniques, an alternative approach is to consider different techniques that are robust to sampling bias. Alpha-shapes are a computational geometry technique developed for delineating the spatial boundary of objects from a point cloud. The basis of an alpha-shape is a Delaunay triangulation, with the interior of the alpha-shape defined by those Delaunay triangles whose circumcircle’s radius is less than an alpha distance. A large alpha distance creates an alpha-shape that is simply the convex hull containing all the points, while reducing the alpha distance produces increasingly detailed shapes that capture more detail in the point cloud. Previous uses of alpha-shapes have focused on trying to find an ‘optimal’ alpha distance that best describes the shape. In some instances, such as constructing physical objects from scanning data, this will be appropriate. However, for non-physical phenomena that do not have an obvious spatial boundary between existence and non-existence, it may make more sense to embrace the uncertain nature of the phenomena distribution. Then, rather than chasing an optimum alpha distance that does not exist, different alpha distances can be explored to establish a fuzzier boundary that is likely to be a better realisation of the spatial boundary of many geographic processes. Using virtual geography experiments based on neutral landscape models and spatial point processes with varying levels of spatial sampling bias, I demonstrate that alpha-shapes can reliably map distributions of geographic phenomena from spatially biased observation data.
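The triangle-filtering rule described above is straightforward to sketch. The following illustrative implementation uses SciPy's Delaunay triangulation and keeps only those triangles whose circumradius is below the alpha distance:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(points, alpha):
    """Return the Delaunay triangles whose circumradius is below `alpha`.

    The union of the kept triangles forms the interior of the alpha-shape;
    a very large `alpha` recovers the convex hull of the points.
    """
    tri = Delaunay(points)
    keep = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # side lengths and (signed) triangle area
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        # circumradius R = (|ab| * |bc| * |ca|) / (4 * area)
        if area > 0 and (la * lb * lc) / (4.0 * area) < alpha:
            keep.append(simplex)
    return keep
```

Sweeping `alpha` from large to small then traces out the family of shapes described above, from the convex hull down to increasingly fragmented detail.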

Sea ice drift estimated using high-resolution satellite images in comparison with low-resolution data set in the Ross Sea region, Antarctica


Usama Farooq, Wolfgang Rack and Adrian McDonald


Sea ice drift is a key driver of spatiotemporal variations in sea ice area, concentration and thickness distributions. Consequently, drift affects surface roughness, surface albedo, moisture and heat fluxes between the ocean and atmosphere, the freshwater budget, and sea ice melt and growth rates. Furthermore, an accurate representation of sea ice in climate models requires realistic parameterisation of sea ice motion and deformation rates. This study uses sequential high-resolution Synthetic Aperture Radar (SAR) images to calculate sea ice motion in the western Ross Sea region, where the most significant increase in sea ice extent has been observed in recent decades. By combining the available low-resolution sea ice motion vectors with high-resolution drift data, we can quantify the uncertainties of satellite-derived sea ice dynamics. The drift velocity is calculated in centimetres per second using a phase-correlation technique. The images are downsampled from 75 m to 150 m spatial resolution, and the outcome is validated against manually drawn vectors.
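Phase correlation recovers the displacement between two image patches from the phase of their cross-power spectrum. A minimal single-patch sketch (integer-pixel shifts only, not the full SAR processing chain) might look like:

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Estimate the integer-pixel (row, col) shift of img2 relative to img1.

    Normalising the cross-power spectrum to unit magnitude keeps only the
    phase, so the inverse FFT has a sharp peak at the displacement.
    """
    cross = np.fft.fft2(img2) * np.conj(np.fft.fft2(img1))
    cross /= np.abs(cross) + 1e-12  # whiten: keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint wrap around to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

In practice a drift field is built by tiling the image pair into many patches and estimating a shift per patch, often with sub-pixel refinement of the peak; this sketch shows only the core estimator.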

Fine-grained automated data provenance for transparent environmental modelling


Alexander Herzig, Ben Jolly, Raphael Spiekermann, Tom Burleigh and David Medyckyj-Scott


Spatial decision-making and policy development rely on high quality spatial data and assessments processed or modelled by geospatial processing workflows or environmental models. The quality of the generated results depends on the input data, the algorithms, and the parameters used to process these data. Typically, the applied methodology, data, and parameters are summarised in an accompanying report, but a complete account of what had actually been done to which dataset in what order using which algorithms and parameters is rarely provided to end users. However, there is increasing demand for greater transparency of the science underpinning decision-making processes in land resource management. In this presentation, we introduce implementations of automated fine-grained data provenance tracking for two different environmental modelling frameworks, pyluc (1) and LUMASS (2), present results, and discuss associated challenges and unresolved issues. Pyluc is a Python-based framework to generate spatial land-use classifications with increased levels of transparency and repeatability. LUMASS is a spatial modelling and optimisation framework. Both frameworks provide automated fine-grained data provenance tracking for each model implemented in the respective framework. The provenance information generated by both frameworks is structured according to the W3C PROV data model and recorded in text files using the W3C PROV-N notation. The generated information helps end-users to understand the sources and transformations that were applied to data. It can be used for monitoring the usage and production of data, to reproduce the data from its original sources, and to provide context for reusing data published to the community. It is also useful during model development for verification and debugging purposes. To facilitate the exploration of large and complex fine-grained provenance data, we developed the interactive web-based visualisation tool provis (3).
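As a toy illustration of the idea (not the pyluc or LUMASS implementation), a provenance recorder can wrap each processing step and emit simplified PROV-N-style statements; the `ex:` namespace and the reduced statement set here are placeholders for the full W3C PROV vocabulary:

```python
import datetime

class ProvLog:
    """Record which inputs each processing step used and what it generated,
    emitting simplified PROV-N-style statements."""

    def __init__(self):
        self.statements = []

    def run(self, name, func, inputs, output):
        """Execute `func` on the named inputs and log the provenance."""
        start = datetime.datetime.now().isoformat()
        result = func(*inputs.values())
        end = datetime.datetime.now().isoformat()
        self.statements.append(f"activity(ex:{name}, {start}, {end})")
        for in_name in inputs:
            self.statements.append(f"entity(ex:{in_name})")
            self.statements.append(f"used(ex:{name}, ex:{in_name}, -)")
        self.statements.append(f"entity(ex:{output})")
        self.statements.append(f"wasGeneratedBy(ex:{output}, ex:{name}, -)")
        return result

    def provn(self):
        return "document\n  " + "\n  ".join(self.statements) + "\nendDocument"
```

Real PROV-N adds namespace declarations, attributes and typed literals; the point of the sketch is only that each step's inputs, outputs and timing can be captured automatically as the model runs, rather than reconstructed afterwards in a report.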

Change in community water fluoridation, childhood dental ambulatory sensitive hospitalisations (ASH) and the moderating effect of area-level deprivation


Matthew Hobbs, Alicia Wade, L Marek, M Tomintz, P Jones, K Sharma, J McCarthy, B Mattingley, M Campbell and S Kingham


Background: Little is known about how change in community water fluoridation (CWF) is related to dental ambulatory sensitive hospitalisation (ASH) rates and little if any evidence has considered the moderating effect of area-level deprivation.
Methods: Dental ASH conditions (dental caries and diseases of the pulp and periapical tissues), age, gender and home address identifier (meshblock) were extracted from pooled (2011 [Q3] to 2017 [Q2]) cross-sectional data on children aged 0-4 and 5-12 from the National Minimum Dataset (NMDS). CWF status was obtained for 2011 and 2016 from the Institute of Environmental Science and Research (ESR). Dental ASH rates for children aged 0-4 and 5-12 (/1000) were calculated for census area units (CAU). Multilevel negative binomial models investigated associations between CWF, dental ASH rates and moderation by area-level deprivation.
Results: Relative to CWF (2011 and 2016), no CWF (2011 and 2016) was associated with increased dental ASH rates (children aged 0-4: IRR=1.171 [95% CI 1.064-1.288]; children aged 5-12: IRR=1.181 [95% CI 1.084-1.286]). An interaction with area-level deprivation showed that the association between CWF and dental ASH rates was more pronounced for children in the most deprived quintile among those aged 0-4 (IRR=1.316 [1.052-1.645]).
Conclusions: CWF was associated with reduced dental ASH rate for children aged 0-4 and 5-12 years. Those who live in the most deprived areas have the most to gain from CWF. Variation in CWF contributes to structural inequalities in oral health outcomes for children.

Situating social and ecological influence process to combat invasive species: a socio-ecological approach


Audrey Lustig, Alex James and Michael Plank


Controlling invasive species across increasingly complex social landscapes requires not only a sound understanding of ecological processes (e.g. pest population dynamics) but also the support and coordinated action of many landholders, who may have widely differing views and expectations influencing their engagement and participation in environmental management practices. The question is: how can we trigger behaviour change and galvanise collective action to ensure the success of pest management? Insights from behavioural science provide clues by exploring the ecological, economic and social drivers that transform awareness into action and action into sustained behaviour change. However, these behavioural aspects are seldom integrated into forecasting models for environmental management. We will introduce a socio-ecological agent-based model that allows social and ecological systems to be linked, and scenarios to be tested, to help predict their influence on each other and improve invasive species management practices. We will discuss some conditions under which a behaviour (engaging or not in pest control) diffuses and becomes persistent in a community. Our findings show the potential of this socio-ecological approach and serve as guidelines for developing such models to predict the distribution of other mammals across different management scenarios.
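As a minimal illustration of how behaviour diffusion can be represented in an agent-based model (a generic threshold-adoption sketch, not the authors' model), consider landholders arranged on a ring who adopt pest control once enough of their neighbours have:

```python
import numpy as np

def simulate_adoption(n=100, threshold=0.3, seeds=5, steps=100, rng_seed=1):
    """Threshold-based diffusion of a pest-control behaviour on a ring of
    n landholders. An agent adopts (permanently) once the adopting fraction
    among itself and its two neighbours reaches `threshold`.
    """
    rng = np.random.default_rng(rng_seed)
    state = np.zeros(n, dtype=bool)
    state[rng.choice(n, seeds, replace=False)] = True  # initial adopters
    for _ in range(steps):
        s = state.astype(float)
        frac = (np.roll(s, 1) + np.roll(s, -1) + s) / 3.0
        state = state | (frac >= threshold)
    return state
```

With a low adoption threshold a handful of early adopters is enough for the behaviour to diffuse through the whole community, while a high threshold leaves it confined to the initial adopters; coupling such a social layer to a pest population model is the essence of the socio-ecological approach described above.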

Use of Geospatial tools during response and recovery for rapid disaster impact and risk assessment in the remote and dynamic West Coast region of New Zealand


Jo Paterson and Ed Cook


The West Coast Civil Defence & Emergency Management Group (WC CDEM) is undergoing a rapid shift in how it plans for, responds to and recovers from natural events – with geospatial intelligence a major enabler. Innovating in partnership with Eagle Technology, WC CDEM has made GIS a central decision-making tool, not just for emergency managers but for emergency services, lifeline operators, agencies and the public.
The Geospatial Common Operational Picture for Response & Recovery Tool is a practical example of what the future holds for geospatial science and technology.
Hosted on a SaaS product, ArcGIS Online, the tool creates a dynamic geospatial view of operations that can be shared over the web – giving local and national stakeholders access to geointelligence as an event unfolds, increasing situational awareness and supporting better decision-making during response and recovery.
The information being viewed is innovative too; one example is QuickCapture, a mobile app for rapid data collection that has been configured for monitoring and post-impact geointelligence collection.
During Severe Weather in March 2019, this application was used by CDEM Volunteers, Helicopter Operators and Staff to capture information including:
  • Flood Observations
  • Welfare Assessments
  • Geotechnical observations
  • Impacted infrastructure (buildings, bridges and flood protection schemes)

Locally, the tools were used by community members to share intelligence with WC CDEM.


These sources all stream real-time geointelligence into the tool, with incoming information automatically disseminated in both 2D and 3D views. Being web-based, monitoring is accessible to Central Government and Emergency Services, supporting sound decisions based on spatial information.


The data collected during a response increases in value for recovery and reduction decision-making across the built, natural, social and economic sectors. It supports future hazard and risk modelling for both infrastructure and the population of the West Coast, while opportunities for communicating science through these tools increase overall hazard awareness.



What can a drone tell us about snow depth, and how can we decipher it?


Todd Redpath, Pascal Sirguey, Nicolas J. Cullen and Sean J. Fitzsimons


Dynamic in time and space, seasonal snow represents an important resource but a difficult target for ongoing in situ measurement and characterisation. These difficulties hamper efforts to accurately model seasonal snow and associated hydrological and climatic processes. Advances in remotely piloted aircraft system (RPAS) technologies and photogrammetry techniques now permit the mapping of snow depth at very high spatial resolution, albeit over relatively small areas. Accompanying this new capability is the challenge of providing relevant and useful insights that improve understanding of seasonal snow processes and beneficially inform modelling efforts. This study exploits high-resolution snow depth maps, captured over two winters by an RPAS, to resolve patterns of spatial variability in snow depth and to assess the role of associated topographic controls on snow distribution for a small (0.4 km²) alpine basin in the Pisa Range, Central Otago. Spatial variability was characterised by semi-variograms of snow depth at each epoch. Topographic controls were assessed via standard regression and regression tree analysis between snow depth and popular terrain indices, including the topographic position index (TPI), relative solar exposure (RSE), and Sx (maximum upwind slope). Despite substantial differences in both total snow volume and spatial distribution, the range of spatial autocorrelation for snow depth was comparable for both winters, at 20–30 m. No direct relationships were found between snow depth and topographic controls. By resolving complex relationships between controlling parameters, however, regression tree modelling performed well at reproducing the spatial structure of snow depth. The regression tree approach also revealed temporal variability in the relative importance of controlling parameters, particularly the impact of varying wind regimes on the spatial distribution of snow.
These results demonstrate the utility of high-resolution snow depth mapping for improved understanding of seasonal snow processes and highlight the need to robustly capture dynamic processes in spatial snow models.
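An empirical semi-variogram of the kind used to characterise spatial variability can be computed directly from the depth maps. The following NumPy sketch (simple lag binning, illustrative only) shows the idea:

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """gamma(h) = half the mean squared difference in `values` over all
    point pairs whose separation falls within `tol` of lag h.

    coords : (n, 2) point locations; values : (n,) e.g. snow depths
    lags   : 1-D array of lag distances; tol : half-width of each lag bin
    """
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sqdiff = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        pairs = np.triu(np.abs(dist - h) <= tol, k=1)  # count each pair once
        gamma.append(0.5 * sqdiff[pairs].mean() if pairs.any() else np.nan)
    return np.array(gamma)
```

Plotting gamma against lag and noting where it levels off (the sill) gives the spatial autocorrelation range; for full-resolution depth rasters a subsample or FFT-based method would be needed, since the pairwise matrix here grows quadratically with the number of points.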

GeoAI: the future of feature extraction and classification


Sagar Soni


Advancements in earth observation data from satellites, planes and drones mean that big data is a big part of our world.
Orbica turns this big data into information by combining the best of geoprocessing with the latest advancements in artificial intelligence (AI) to extract and classify features of the earth’s surface from 3-band imagery. We call it GeoAI.
Orbica’s GeoAI engine extracts – with high accuracy and very fast performance – building outlines, roads, surface water types and vegetation, to name a few features. This presentation will focus on the extraction and classification of surface water types, as management of water resources is arguably the most pressing issue for global planners.
Our advancements include:
  • Ability to consume data at high speed
  • Ability to automate AI and geoprocessing to output raster and vector datasets
  • Agnostic of dataset: it can be tuned to extract information from imagery at many resolutions
  • Very scalable due to the models’ ground-up architecture
  • Levels of certainty provided against all datasets through confusion matrices
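The certainty measures mentioned in the last point can be derived from a confusion matrix in the usual way. The matrix below is hypothetical (the class names follow the water-type example later in the abstract; the counts are invented), but the accuracy calculations are standard.

```python
import numpy as np

# Hypothetical confusion matrix for four surface-water classes
# (rows = reference class, columns = predicted class); counts are illustrative.
classes = ["lake", "pond", "river", "canal"]
cm = np.array([
    [95,  2,  2,  1],
    [ 4, 90,  5,  1],
    [ 1,  3, 94,  2],
    [ 0,  2,  3, 95],
])

overall_accuracy = np.trace(cm) / cm.sum()         # correct / total
producers_accuracy = np.diag(cm) / cm.sum(axis=1)  # recall per reference class
users_accuracy = np.diag(cm) / cm.sum(axis=0)      # precision per predicted class
```

Reporting per-class producer's and user's accuracy alongside the overall figure shows where the classifier confuses, say, ponds with lakes, rather than hiding it in a single percentage.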


We applied deep learning techniques to rasters and vector files. This means that we can extract all surface water from a raster and classify the vectorised results as to their type. For instance, in less than two minutes we can label the entire Land Information New Zealand Topo50 Water Polygons dataset as lakes, ponds, rivers, canals etc. at 95% testing accuracy. This is a novel application of deep learning and geospatial techniques, offering a full service from pixels to classified, vectorised spatial datasets.


One of the key goals of GeoAI was to provide near real-time information from imagery datasets. Manually, it can take upwards of 8 hours per image to digitise and draw polygons for one tile of water bodies. We can produce a digitised raster in 30 seconds.

Lightning talks


A new workflow for spatially enabling low-cost UAV Full Motion Video


Graham Hinchliffe


The development of low-cost fully integrated UAVs, combined with advances in Structure from Motion (SfM) photogrammetric processing, has generated a paradigm shift in the geospatial industry by allowing anyone to easily create GIS-ready orthophotos, Digital Surface Models and 3D point clouds. However, applications of UAV-collected video data have not been thoroughly investigated outside of high-end commercial or military surveillance systems. This is mainly because the use of video data presents various challenges, such as processing difficulties related to the lack of metadata for platform and sensor 3D location, orientation and rotation. This research presents a new workflow for the capture and post-processing of low-cost (e.g. DJI Phantom / Mavic) High Definition video to create Motion Imagery Standards Board (MISB) compliant Full Motion Video (FMV), allowing for full integration into a GIS for measurement, georeferenced frame extraction and feature tracking. Initial development of the workflow has targeted marine applications, where the dynamic environment of waves, reflections and other water column effects prevents the use of existing SfM techniques. Absolute accuracy trials are still to be undertaken; however, initial results indicate metre-scale accuracy suitable for feature location, local-level mapping and dynamic tracking. The presentation will include a live demonstration of processed FMV data and highlight the wide range of potential application areas.

A hi-fidelity approach for raster to vector conversions


Robbie Price


In the field of spatial analysis we often find ourselves in a situation where our modelling is best done in the raster domain, but our results are only acceptable or useful to people when presented in the vector domain.
Converting a raster dataset to a vector map is one of those seemingly trivial exercises that never quite provides us with what we need. Literal representations of raster data (referred to herein as fishnets) do not make pleasing maps, and non-spatial people looking at them are wont to note that the lines look rubbish and conclude that the data they represent are therefore rubbish.
It is not unusual, therefore, that as part of the conversion from raster to vector an analyst may choose to modify the linework to make the data look more natural in the vector representation. There exist a number of different methods for this using the ESRI suite, QGIS, GRASS or R. These provide varying degrees of smoothing and topological risk.
We discuss the common pitfalls of using these tools and present the results of a prototype approach using constrained displacement to overcome them.
Our approach enables us to maintain both the topology and the fidelity of the original raster-to-vector fishnet whilst providing acceptably smooth linework after just a single pass through the algorithm.
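To illustrate the kind of smoothing the existing tools apply, here is a sketch of Chaikin corner cutting, one common line-smoothing scheme. This is not the constrained-displacement prototype described in the abstract: notably, it places no bound on how far the smoothed line may drift from the original raster cells, which is exactly the fidelity risk the abstract's approach is designed to avoid.

```python
def chaikin_smooth(points, iterations=1):
    """Chaikin corner cutting: replace each segment with two points at
    1/4 and 3/4 of its length, rounding off the stair-step corners."""
    for _ in range(iterations):
        out = [points[0]]  # keep the first endpoint fixed
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        out.append(points[-1])  # keep the last endpoint fixed
        points = out
    return points

# A stair-stepped "fishnet" boundary as produced by a literal raster trace
fishnet = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
smoothed = chaikin_smooth(fishnet, iterations=1)
```

Each pass roughly doubles the vertex count and pulls corners inward; applied independently to adjacent polygons it can also open gaps or overlaps, which is the topological risk mentioned above.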

Digital Surface Model from aerial imagery for the Horizons Region


Andrew Steffert


Horizons Regional Council (HRC) has a requirement to update the current indicative flooding layer that was captured from 1:50,000 scale maps in the late 1990s. A viable elevation model for the region was needed for this, but the cost of an aerial lidar survey covering 22,800 km² was considered prohibitive. One of the many datasets used in analysis of the land after the Kaikoura earthquake on 14 November 2016 was a point cloud generated from recent aerial photography.
On the strength of this, HRC invested in the extraction of elevation data from the full regional 2016/17 aerial photography (0.3 m) capture. The delivery was a dense matched point cloud extracted from every second pixel in X,Y,Z,C,R,G,B format. The classification allows the removal of isolated trees and structures, shelterbelts and some buildings.
The main purpose for this dataset is to update the current indicative flooding/ponding layer. These areas provide ‘flags’ for further investigation. The products from a regional elevation model have numerous uses and, for the first time, provide region wide information to support planning rules such as cultivation of slopes greater than twenty percent.
The elevation model has perhaps 40 percent utility (in clear and open spaces) and costs approximately 2.5 percent of a lidar survey. Is this a cost-effective solution?
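The slope-based planning rule mentioned above (cultivation of slopes greater than 20 percent) can be evaluated directly from a gridded elevation model. The sketch below uses an invented ramp-shaped grid and an assumed 10 m cell size purely for illustration.

```python
import numpy as np

# Illustrative: flag cells of a gridded elevation model whose slope
# exceeds 20 percent. The DSM here is a synthetic uniform 30 percent ramp.
cell = 10.0                            # grid spacing in metres (assumed)
x = np.arange(0, 200, cell)
dsm = np.tile(0.3 * x, (20, 1))        # elevation rises 0.3 m per metre east

dzdy, dzdx = np.gradient(dsm, cell)    # elevation change per metre, per axis
slope_pct = 100 * np.hypot(dzdx, dzdy) # rise over run, as a percentage
steep = slope_pct > 20                 # cells subject to the cultivation rule
```

In practice the flagged cells would be polygonised and intersected with land parcels; the 40 percent utility caveat above means such flags are indicative only where the point cloud resolved bare ground.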

Designing virtual reality environments to study vapers’ behavioural and psychophysiological reactions in New Zealand


Melanie Tomintz, Maria Vega Corredor, Simon Hoermann and Nawam Karki


There is a continuous increase in the use of electronic cigarettes (e-cigarettes), also known as vaping, worldwide. Little is known about the long-term health effects, and much controversial literature is being published highlighting the pros and cons of vaping for people’s health. In New Zealand, the sale of e-cigarettes was legalised with the aim of supporting current smokers to quit tobacco smoking. However, this may pose detrimental long-term health effects in children and adolescents or non-adult smokers.
The aim of this study is to build a multi-sensory environment that allows us to measure people’s behavioural and psychophysiological reactions (e.g. increased heart rate, sweating) when placing them into different virtual environments using virtual reality technology and simulating different exposures (e.g. tobacco, different flavours of e-cigarette liquids, food, weather conditions) that can evoke different subconscious reactions. Creating these virtual environments is challenging, as little is yet known about vapers’ behaviour. We will therefore run focus groups, conduct a nationwide online survey and carry out the virtual reality experiences with people in the human interface technology laboratory at the University of Canterbury.
This study is the first internationally to build virtual environments and use additional new technologies to understand more about the vaping behaviour of people living in New Zealand when exposed to different situations. The results will help design innovative smoking interventions and support policy makers in their decision making.



Understanding tsunami evacuation dynamics to improve tsunami evacuation modelling, using the case study of the evacuation dynamics in Christchurch’s coastal communities during the 2016 Kaikōura Earthquake


Danielle Barnhill, Laura Tilley, Thomas Wilson, Matthew Hughes and Sarah Beaven


Coastal areas, while long-favoured locations for development, are at risk from tsunami. Tsunami can have disastrous impacts on the built and natural environments, and can cause injuries and fatalities. Christchurch, New Zealand, is exposed to tsunami. Prompt evacuation from exposed coastal areas is crucial in reducing the risk of injuries and fatalities during a tsunami. Evacuation planning can be used to achieve efficient evacuations; however, it is most effective when there is knowledge of how people react to warnings and the evacuation decisions they subsequently make. The relative rarity of tsunami events, globally and locally, means there is limited knowledge of tsunami evacuation behaviour, and therefore local and global tsunami evacuation planning has been largely informed by hurricane evacuations.
The 2016 Kaikōura Earthquake generated the largest local source tsunami in New Zealand since 1947. The response to this event provided an opportunity to survey community members of Christchurch’s coastal suburbs to gain an understanding of the decisions made during this evacuation, including where people evacuated to and from, how they travelled and the time taken to evacuate.
This research will improve understanding of human behaviour during tsunami evacuations by analysing detailed survey responses of Christchurch community members who experienced the 2016 Kaikōura Earthquake. The research will contribute to evacuation planning in Christchurch’s coastal communities by utilising an Agent Based Model (ABM) to simulate tsunami evacuations. This model will be informed by the evacuation behaviour documented in the surveys, and will inform and refine future tsunami evacuation planning for Christchurch. This will allow emergency managers to improve their evacuation planning and the decisions they make during an evacuation, ensuring that future tsunami evacuations are more efficient and safer for exposed communities.
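An agent-based evacuation simulation of the kind described can be sketched very simply. This is not the authors' model: the safe-zone distance, travel speeds, agent counts and one-dimensional movement below are all invented assumptions, chosen only to show the basic mechanic of agents clearing a hazard zone over time.

```python
import random

# Minimal agent-based evacuation sketch (illustrative assumptions throughout):
# agents start at coastal positions and step inland each simulated minute.
random.seed(1)
SAFE_X = 2000.0                          # metres inland; assumed safe-zone line
SPEED = {"walk": 80.0, "car": 400.0}     # metres per minute; assumed speeds

agents = [{"x": random.uniform(0, 500),  # starting position in the hazard zone
           "mode": random.choice(["walk", "car"]),
           "evacuated_at": None}
          for _ in range(100)]

for minute in range(1, 60):
    for a in agents:
        if a["evacuated_at"] is None:
            a["x"] += SPEED[a["mode"]]       # move inland at the mode's speed
            if a["x"] >= SAFE_X:
                a["evacuated_at"] = minute   # record clearance time

evacuated = sum(a["evacuated_at"] is not None for a in agents)
```

A survey-informed ABM replaces these uniform assumptions with observed departure delays, route choices and mode shares, which is precisely what the Christchurch survey data would supply.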

Temporal drivers of Disaster Risk and Resilience in Rural New Zealand


Becca Fraser, Thomas Wilson, Sarah Beaven, Nicholas Cradock-Henry and Matthew Hughes


Aotearoa-New Zealand’s rural communities are an essential part of the nation’s economy, society and culture. They face key challenges to their resilience such as the impacts of hazards, alongside the compounding impacts of social, cultural and economic change. Disruption to rural communities can reveal vulnerabilities and strengths previously unknown, and catch communities and disaster decision makers unaware and unprepared. This indicates a need for identifying and understanding the drivers of rural change and the implications of this for the future.
Whilst there is a growing body of rural-focused disaster resilience research in New Zealand, there is not yet a cohesive summary investigating the drivers and outcomes of resilience over multiple dimensions in the rural sector. Additionally, a cohesive summary of the impacts of this on current and future disaster risk is lacking. This study addresses aspects of these gaps by identifying and assessing the factors which influence resilience in New Zealand rural communities, and the impact of this on current and future disaster risk.
This research will focus on quantifying the evolution of communities through dynamic longitudinal social, economic and physical change, primarily through the use of national geospatial datasets. Geospatially analysing these factors will allow for a more complete understanding of how these communities have changed and the interaction of these factors. This will enable researchers to explore how this may have impacted disaster resilience, while also allowing community members, policy makers and disaster decision makers (such as Civil Defence and Emergency Management Groups) to make more effective decisions.
Ultimately the research proposed here aims to effectively evaluate the implications of rural change over time and explore how this information could be more useful and usable for community members, policy makers and disaster decision makers.

Understanding Tsunami Evacuation Dynamics: Informing evacuation modelling through a case study of the 2016 Kaikoura Earthquake


Laura Tilley


Tsunami events including the 2004 Indian Ocean Tsunami and the 2011 Tohoku Earthquake and Tsunami resulted in 230,000 and 15,894 deaths respectively. Scientists believe an event of similar magnitude to the 2011 Tohoku Earthquake is possible if the Hikurangi subduction zone ruptures, affecting the east coast of New Zealand. This highlights the need for comprehensive risk mitigation, including effective tsunami evacuation planning. New Zealand is highly exposed to tsunamis from multiple sources and continues to invest in tsunami risk assessments, awareness and response.
Evacuation is the most important risk reduction strategy for preventing casualties. Understanding how people respond to warnings and natural cues is an important element to improving evacuation modelling techniques, however, there is still limited research internationally and in New Zealand on understanding how people evacuate during and following a ‘real-event’ evacuation response. This research aims to address this gap by analysing evacuation behaviour and movements of the Kaikoura community following the 2016 Kaikoura Earthquake and subsequent tsunami evacuations.
Stage 1 of this research is a tsunami risk assessment of the Kaikoura community to determine the risk to the population and the assets exposed to tsunami hazard. GIS will be used to determine the spatio-temporal patterns of population movements, including visitors and transient populations, in order to identify where risk is highest. A multi-method approach will be used to determine visitor and transient trends in the area.
Stage 2 of this research uses the results of the data collected in stage 1, together with information provided by Kaikoura residents on their evacuation movements and behaviour during the 2016 event, to develop a network evacuation model and an agent-based model. This will determine evacuation times across multiple scenarios to develop an optimal evacuation model. This is a unique opportunity to use real-event behaviour to inform evacuation modelling methods.

Development of a geospatial framework for analysis of water quality in Canterbury, New Zealand


Maria Vega-Corredor and Matthew Wilson


Water pollution has significant environmental impacts and can be detrimental to human health, particularly for water users undertaking recreational activities such as swimming or whitebait fishing. Particular problems for waterways include contamination from faecal bacteria, excess sediment, nutrients, and heavy metals resulting from storm water runoff from roads, roofs and carparks. Environment Canterbury completes regular monitoring of nutrients, metals, E. coli and Enterococci, and polycyclic aromatic hydrocarbons (PAHs), among others, at numerous sites across the region. However, the lack of an easily accessible geospatial database makes analysis of the collected data challenging. Here, we present the development of a geospatial database and framework, which is facilitating detailed mapping and analysis of water pollution with respect to the upstream contributing area of sample locations.