GISTAM 2018 Abstracts


Area 1 - Data Acquisition and Processing

Full Papers
Paper Nr: 15
Title:

Automatic Tree Annotation in LiDAR Data

Authors:

Ananya Gupta, Jonathan Byrne, David Moloney, Simon Watson and Hujun Yin

Abstract: LiDAR provides highly accurate 3D point cloud data for a number of tasks such as forest surveying and urban planning. Automatic classification of this data, however, is challenging since the dataset can be extremely large and manual annotation is labour intensive if not impossible. We provide a method of automatically annotating airborne LiDAR data for individual trees or tree regions by filtering out the ground measurements and then using the number of returns embedded in the dataset. The method is validated on a manually annotated dataset for Dublin city with promising results.

Paper Nr: 19
Title:

Improvements to DEM Merging with r.mblend

Authors:

Luís Moreira de Sousa and João Paulo Leitão

Abstract: r.mblend is an implementation of the MBlend method for merging Digital Elevation Models (DEMs). This method produces smooth transitions between contiguous DEMs of different spatial resolution, for instance, when acquired by different sensors. r.mblend is built on the Python API provided by the Geographic Resources Analysis Support System (GRASS) and is fully integrated in that GIS software. It introduces improvements to the original method and provides the user with various parameters to fine-tune the merging procedure. This article showcases the main differences between r.mblend and two conventional DEM merge methods: Cover and Average.

Paper Nr: 33
Title:

Outdoors Mobile Augmented Reality Application Visualizing 3D Reconstructed Historical Monuments

Authors:

Chris Panou, Lemonia Ragia, Despoina Dimelli and Katerina Mania

Abstract: We present a mobile Augmented Reality (AR) tourist guide to be used while walking around cultural heritage sites located in the Old Town of Chania, Crete, Greece. Instead of the static images or text presented by traditional location-aware mobile tourist guides, the main focus is to seamlessly and transparently superimpose geo-located 3D reconstructions of historical buildings, in their past state, onto the real world while users hold consumer-grade mobile phones and walk on-site, without markers placed on the buildings, offering a Mobile Augmented Reality experience. We feature three monuments: the ‘Giali Tzamisi’, an Ottoman mosque; part of the south side of a Byzantine wall; and the ‘Saint Rocco’ Venetian chapel. Advances in mobile technology have brought AR to the public by utilizing the camera, GPS and inertial sensors present in modern smartphones. Technical challenges, such as accurate registration of 3D content with the real world in outdoor settings, have prevented AR from becoming mainstream. We tested commercial AR frameworks and built a mobile AR app which offers users, while visiting these monuments in the challenging outdoor environment, a virtual reconstruction displaying each monument in its past state superimposed onto the real world. Position tracking is based on the mobile phone’s GPS and inertial sensors. Users explore areas of interest and unlock historical information, earning points. By combining AR technologies with location-aware, gamified and social aspects, we enhance interaction with cultural heritage sites.

Paper Nr: 41
Title:

Optimal Estimation of Census Block Group Clusters to Improve the Computational Efficiency of Drive Time Calculations

Authors:

Damon Gwinn, Jordan Helmick, Natasha Kholgade Banerjee and Sean Banerjee

Abstract: Location selection determines the feasibility of a new location by evaluating factors such as the drive time of customers, the number of potential customers, and the number and proximity of competitors to the new location. Traditional location selection approaches use census block group data to determine average customer drive times by computing the drive time from each block group to the proposed location and comparing it to all competitors within the area. However, since companies need to evaluate on the order of hundreds of thousands of potential locations and competitors, traditional location selection approaches prove to be computationally infeasible. In this paper we present an approach that generates an optimal set of clusters to speed up drive time calculations. Our approach is based on the insight that in urban areas block groups comprise a few adjacent city blocks, making the differences in drive times between neighboring block groups negligible. We use affinity propagation to initially cluster the census block groups. We use population and the average distance between the cluster centroid and all points to recursively re-cluster the initial clusters. Our approach reduces the census data for the United States by 80%, which provides a 5x speedup when computing drive times. We sample 200 randomly generated locations across the United States and show that there is no statistically significant difference in the drive times when using the raw census data and our recursively clustered data. Additionally, for further validation we select 300 random Walmart stores across the United States and show that there is no statistically significant difference in the drive times.

Paper Nr: 68
Title:

Minimum Collection Period for Viable Population Estimation from Social Media

Authors:

Samuel Lee Toepke

Abstract: Using volunteered geographic information for population estimation has shown promise in the fields of urban planning, emergency response and disaster recovery. A high volume of geospatially enabled Tweets can be leveraged to create population curves and/or heatmaps delineated by day of week and hour of day. When making these estimations, it is critical to have adequate data, or the confidence of the estimations will be low. This is especially pertinent to disaster response, where Tweet collection for a new city/town/locale may need to be rapidly deployed. Using previously leveraged data removal methods, temporal data quantity is explored using sets of data from increasingly longer collection periods. When generating these estimates, it is also necessary to identify and mitigate data from automated Twitter bots. This work examines the integration of a modern, web services based, Twitter bot assessment algorithm, executes data removal experiments on collected data, describes the technical architecture, and discusses results/follow-on work.

Short Papers
Paper Nr: 27
Title:

Standardized Big Data Processing in Hybrid Clouds

Authors:

Ingo Simonis

Abstract: There is a growing number of easily accessible Big Data repositories hosted on cloud infrastructures that offer additional sets of cloud-based products such as compute, storage, database, or analytics services. The Sentinel-2 earth observation satellite data available via Amazon S3 is a good example of a petabyte-sized data repository in a rich cloud environment. The combination of hosted data and co-located cloud services is a key enabler for efficient Big Data processing. When the transport of large amounts of data is not feasible or cost efficient, processes need to be shipped and executed as closely as possible to the actual data. This paper describes standardization efforts to build an architecture featuring high levels of interoperability for provisioning, registration, deployment, and execution of arbitrary applications in cloud environments. Based on virtualization mechanisms and containerization technology, the standardized approach allows any type of application or multi-application workflow to be packed into a container that can be dynamically deployed on any type of cloud environment. Consumers can discover these containers, provide the necessary parameterization and execute them online even more easily than on their local machines, because no software installation, data download, or complex configuration is necessary.

Paper Nr: 58
Title:

Towards Rich Sensor Data Representation - Functional Data Analysis Framework for Opportunistic Mobile Monitoring

Authors:

Ahmad Mustapha, Karine Zeitouni and Yehia Taher

Abstract: The rise of new lightweight and cheap sensors has opened the door wide for new sensing applications. Mobile opportunistic sensing is one such application, and it has been adopted in multiple citizen science projects including air pollution monitoring. However, the opportunistic nature of sensing, along with campaigns being mobile and sensors being subject to noise and missing values, leads to asynchronous and unclean data. Analyzing this type of data requires cumbersome and time-consuming preprocessing. In this paper, we introduce a novel framework to handle such data by seeing data as functions rather than vectors. The framework introduces a new data representation model along with a high-level query language and an analysis module.

Paper Nr: 65
Title:

Investigating the Use of Primes in Hashing for Volumetric Data

Authors:

Léonie Buckley, Jonathan Byrne and David Moloney

Abstract: Traditional hashing methods used to store 3D volumetric data utilise large prime numbers in an attempt to achieve well-distributed hash addresses and so minimise addressing collisions. Hashing is an attractive method for storing 3D volumetric data, as it provides a simple method to store, index and retrieve data. However, implementations provide no evidence as to why they utilise large primes [1], [2], [3], [4], [5], which act to create a hash address by randomising key values. The coordinates of quantised 3D data are unique. This paper therefore investigates whether this randomisation through the use of large primes is necessary. The history of the use of primes for hashing 3D data is also investigated, as is whether their use has persisted due to habit rather than methodical investigation.

Paper Nr: 71
Title:

Unmanned Aerial Survey for Modelling Glacier Topography in Antarctica: First Results

Authors:

Dmitrii Bliakharskii and Igor Florinsky

Abstract: For Antarctic research, one of the most important support tasks is rapid and safe monitoring of sledge routes, snow/ice airfields, and other visited areas, both to detect open crevasses, reveal hidden, snow-covered ones, and study their dynamics. We present the first results from a study applying unmanned aerial systems (UASs) and UAS-derived data to model glacier topography in the context of detecting crevasses and monitoring changes in glacier surfaces. The study was conducted in East Antarctica in the austral summer of 2016/2017. The surveyed areas included an eastern part of the Larsemann Hills, an airfield of the Progress Station, an initial section of a sledge route from the Progress to Vostok Stations, and a north-western portion of the Dålk Glacier before and after its collapse. The surveying was performed with Geoscan 201, a flying-wing UAS. For the photogrammetric processing of imagery, we used the Agisoft PhotoScan Professional software. High-resolution digital elevation models (DEMs) of the surveyed areas were produced. For the Dålk Glacier, we derived two DEMs corresponding to the pre- and post-collapse glacier surface. Further analysis will be performed with methods of geomorphometry, focusing on revealing crevasses.

Posters
Paper Nr: 22
Title:

Evaluation of AW3D30 Elevation Accuracy in China

Authors:

Fan Mo, Junfeng Xie and Yuxuan Liu

Abstract: The AW3D30 dataset is a publicly available, high-accuracy digital surface model; the model’s cited nominal elevation accuracy is 5 m (1σ). In order to verify the accuracy of AW3D30, we selected China as the test area, and used field measurement points in the national control point image database as control data. The elevation accuracy of the field measurement points in the national control point image database is better than 1 m. The results show that the accuracy of AW3D30 satisfies the requirement of 5 m nominal accuracy, and its elevation accuracy reached 2 m (1σ). Accuracy is related to both terrain and slope. Accuracy is better in flat areas than in areas of complex terrain, and the eastern region of China is characterized by better accuracy than the western region.

Area 2 - Remote Sensing

Full Papers
Paper Nr: 21
Title:

Mapping and Monitoring Airports with Sentinel 1 and 2 Data - Urban Geospatial Mapping for the SCRAMJET Business Networking Tool

Authors:

Nuno Duro Santos, Gil Gonçalves and Pedro Coutinho

Abstract: SCRAMJET is an online tool that allows business travellers to connect and plan to meet in any of the airports included in their trip. To successfully deliver, SCRAMJET needs accurate and up-to-date worldwide airport mapping information. This paper describes an assessment of the use of Earth Observation (EO) products, in particular the Sentinel program, for improving airport mapping and monitoring its changes. The first step is to verify the data availability of Sentinel-1 and Sentinel-2 at a global scale, and then evaluate their adequacy for airport mapping. For monitoring airport changes, the analysis tested multispectral change detection methods and interferometry processing techniques. The main conclusion was that the acquisition frequency of both Sentinels is a great benefit for assuring up-to-date information at a global scale. The recommended approach for a target of 200 airports is to perform the airport mapping assisted by Sentinel data for validation and improvements, and to monitor changes by integrating a Sentinel-2 change detection chain (using NIR/SWIR bands) in parallel with OpenStreetMap change detection processing.

Paper Nr: 39
Title:

Land-use Classification for High-resolution Remote Sensing Image using Collaborative Representation with a Locally Adaptive Dictionary

Authors:

Mingxue Zheng and Huayi Wu

Abstract: Sparse representation is widely applied in the field of remote sensing image classification, but sparsity-based methods are time-consuming. Unlike sparse representation, collaborative representation can improve the efficiency, accuracy, and precision of image classification algorithms. Thus, we propose a high-resolution remote sensing image classification method using collaborative representation with a locally adaptive dictionary. The proposed method includes two steps. First, we use a similarity measurement technique to separately pick out the most similar images for each test image from the total training image samples. In this step, a one-step sub-dictionary is constructed for every test image. Second, we extract the most frequent elements from all one-step sub-dictionaries of a given class. In this step, a unique two-step sub-dictionary, that is, a locally adaptive dictionary, is acquired for every class. The test image samples are individually represented over the locally adaptive dictionaries of all classes. Extensive experiments (OA (%) = 83.33, Kappa (%) = 81.35) show that our proposed method yields competitive classification results with greater efficiency than other compared methods.

Short Papers
Paper Nr: 23
Title:

Comparison of Landsat and ASTER in Land Cover Change Detection within Granite Quarries

Authors:

R. S. Moeletsi and S. G. Tesfamichael

Abstract: This study evaluated and compared the utility of Landsat and ASTER in land cover change detection within granite quarries. The Landsat data used were acquired in 1998 and 2015, while the ASTER data used were acquired in 2001 and 2013. Both Landsat and ASTER were classified using supervised maximum likelihood classification. Post-classification and Normalized Difference Vegetation Index change detection techniques were applied to assess and measure changes in land cover caused by granite quarries. Overall classification accuracy of ASTER was slightly higher than that obtained for Landsat (overall accuracy (OA) = 79% and kappa 0.75 vs. OA = 75% and kappa 0.71). Both Landsat and ASTER were able to assess land cover changes within granite quarries. Change detection results revealed an increase in granite quarries, which subsequently resulted in a decrease in vegetation and bare land and an increase in water bodies within the quarries. The study found ASTER to be better at discriminating granite quarries from other land cover features and able to detect small water bodies within granite quarries due to the higher spatial resolution of bands in the VNIR subsystem. In contrast, Landsat was found better at detecting changes in vegetation within granite quarries.

Posters
Paper Nr: 24
Title:

Quantifying Land Cover Changes Caused by Granite Quarries from 1973-2015 using Landsat Data

Authors:

Refilwe Moeletsi and Solomon Tesfamichael

Abstract: Environmental monitoring is an important aspect of sustainable development. The use of remote sensing in the mining industry has evolved significantly and allows for improved mapping and monitoring of environmental impacts related to mining activities. The aim of this study was to measure land cover changes caused by granite quarrying activities located between the towns of Rustenburg and Brits, North West Province, South Africa, using Landsat time series data. The Landsat data used in the study were acquired in the years 1973, 1986, 1998 and 2015. Each image was classified using supervised classification, and change detection was subsequently applied to measure land cover changes. Furthermore, the normalized difference vegetation index (NDVI) was used to highlight the dynamics of vegetation in the quarries. Accuracy assessment of the classification resulted in an overall accuracy and Kappa coefficient of 75% and 0.71, respectively. The results of post-classification change detection revealed a significant increase of 907.4 ha in granite quarries between 1973 and 2015. The expansion in granite quarries resulted in the development of water bodies (2.07 ha) within the quarries. Correspondingly, there were significant losses in vegetation (782.1 ha) and bare land (119 ha). NDVI results showed variability in mean NDVI values within the digitized quarries. The overall trends in mean NDVI values showed that most granite quarries had the highest vegetation cover in 1998, while the least vegetation cover was observed in 1986.

Paper Nr: 49
Title:

Delimitation of Urban Areas with Use of the Google Earth Engine Platform

Authors:

Sherlyê Francisco de Carvalho, Jhonatha Fiorio Conceição Guimarães, Carla Bernadete Madureira Cruz and Elizabeth Maria Feitosa da Rocha de Souza

Abstract: Google Earth Engine is a cloud computing platform for developing and hosting web applications that allows for the automatic classification and mapping of land cover. The objective of this research is to evaluate the tool's potential for the generation of thematic maps of urban areas, using a big data platform with data in the cloud. The proposed methodology evaluates the CART classifier at different scales. The local scale considered the area of the city of Rio de Janeiro. A simplified legend (urban and non-urban) and another with greater detail (different types of urban intensity) were tested. The main input was the Landsat TOA (Top of Atmosphere) mosaic. Potential, classification time, and results were evaluated. The main products generated were the temporal classifications, in which one can observe the expansion of urban areas and some confusion between classes. In such cases, editing is necessary. The rapidity of the classification and generation of products is one of the most important positive points of the analysis. The tool is very interactive and easy to handle, even by users with little experience. The delimitation and identification of urban areas were promising, requiring more research on the best techniques to be adopted at each geographic scale.

Area 3 - Modeling, Representation and Visualization

Full Papers
Paper Nr: 8
Title:

A Newly Emerging Ethical Problem in PGIS - Ubiquitous Atoque Absconditus and Casual Offenders for Pleasure

Authors:

Koshiro Susuki

Abstract: Thanks to the recent technological advances of cellular phones, the practical realization of GeoAPI and SNS, and the consolidation of wireless LAN networks, hardware has become capable of providing portable high-speed Internet access and interactive SNS, and people can now communicate far more easily, casually and unboundedly via the Internet. Currently, PGIS studies mainly look at the ‘sunny side’ of GIT progress. Although there are also relevant studies on online ethics, they rely unduly on a spontaneously arising equilibrium maintained by mutual surveillance among the people involved. This, however, is an over-optimistic and ingenuous perception of this exponential technological advance. In this paper, the author illustrates the existence of ‘casual offenders for pleasure’ by referring to two recent online cyberbullying incidents. Because the implications of technology-aided ubiquitous mapping can be very hard to see or grasp, especially for people not educated and trained to see them, these advances prompt people to nonchalantly lower technical and ethical barriers. Further studies are essential to establish geographic information ethics and offer a clear-cut answer to this newly emerging problem.

Paper Nr: 36
Title:

An Interactive Story Map for the Methana Volcanic Peninsula

Authors:

Varvara Antoniou, Paraskevi Nomikou, Pavlina Bardouli, Danai Lampridou, Theodora Ioannou, Ilias Kalisperakis, Christos Stentoumis, Malcolm Whitworth, Mel Krokos and Lemonia Ragia

Abstract: The purpose of this research is the identification, recording, mapping and photographic imaging of the special volcanic geoforms as well as the cultural monuments of the volcanic Methana Peninsula. With the use of novel methods, the aim is to reveal and study the impressive topographic features of the Methana geotope and discover its unique geodiversity. The proposed hiking trails, along with Methana’s archaeology and history, are highlighted through the creation of an ‘intelligent’ interactive map (Story Map). Two field trips were conducted to collect further information and to digitally map the younger volcanic flows of Kammeni Chora with drones. From the compiled data, thematic maps were created depicting the lava flows and the most important points of the individual hiking paths. The thematic maps were created using a Geographic Information System (GIS). Finally, those maps were the basis for the creation of the main Story Map. The decision to use Story Maps was based on the numerous advantages on offer, such as user-friendly mapping, ease of use and interaction, and user-customized displays.

Paper Nr: 64
Title:

VOLA: A Compact Volumetric Format for 3D Mapping and Embedded Systems

Authors:

Jonathan Byrne, Léonie Buckley, Sam Caulfield and David Moloney

Abstract: The Volumetric Accelerator (VOLA) format is a compact data structure that unifies computer vision and 3D rendering and allows for the rapid calculation of connected components, per-voxel census/accounting, Deep Learning and Convolutional Neural Network (CNN) inference, path planning and obstacle avoidance. Its hierarchical bit array format allows it to run efficiently on embedded systems and maximizes the level of data compression for network transmission. The proposed format allows massive-scale volumetric data to be used in embedded applications where it would be inconceivable to utilize point clouds due to memory constraints. Furthermore, geographical and qualitative data are embedded in the file structure to allow it to be used in place of standard point cloud formats. This work examines the reduction in file size when encoding 3D data using the VOLA format. Four real-world Light Detection and Ranging (LiDAR) datasets are converted, producing data an order of magnitude smaller than the current binary standard for point cloud data. Additionally, a new metric based on a neighborhood lookup is developed that measures an accurate resolution for a point cloud dataset.

Short Papers
Paper Nr: 4
Title:

Improving Urban Simulation Accuracy through Analysis of Control Factors: A Case Study in the City Belt along the Yellow River in Ningxia, China

Authors:

Rongfang Lyu, Jianming Zhang, Mengqun Xu and Jijun Li

Abstract: Spatial heterogeneity of urban expansion and the macro-scale influence of socioeconomic development are the two main problems in urban-expansion modelling. In this study, we used the SLEUTH-3r model to simulate urban expansion at a fine scale (30 m) for a large urban agglomeration (22,000 km2) in north-western China. Multiple spatial constraint factors were integrated into the model through Ordinary Least Squares Regression and Binary Logistic Regression to simulate the spatial heterogeneity in urban expansion. A critical parameter, the diffusion multiplier (DM), was used to simulate the macro-scale influence of socioeconomic development in the urban model. These two methods have greatly enhanced the ability of the SLEUTH-3r model to simulate urban expansion with high heterogeneity, and to adapt to urban growth driven by socioeconomic development and government policy.

Paper Nr: 18
Title:

Creating a Likelihood and Consequence Model to Analyse Rising Main Bursts

Authors:

Robert Spivey and Sivaraj Valappil

Abstract: A model was created that analysed the likelihood and consequence of a sewage rising main bursting at any given time. Likelihood of failure was analysed through factor analysis using GIS data and historical rising main burst data. Consequence was analysed through spatial analysis in GIS using multiple spatial joins, property density and a cost-of-tankering model that was created using data from GIS. This analysis produced a likelihood and consequence score for each section of rising main, which were then combined into an overall risk score. These outputs were then used to develop a rising main planning tool in the data presentation programme Tableau to identify high-risk sites and target asset maintenance and rehabilitation works. This paper explains how the tool was created and the benefits of the final outputs.

Paper Nr: 40
Title:

Positional Accuracy Assessment of the VGI Data from OpenStreetMap - Case Study: Federal University of Bahia Campus in Brazil

Authors:

Elias Nasr Naim Elias, Vivian de Oliveira Fernandes and Mauro José Alixandrini Junior

Abstract: Geographic information is a crucial part of the daily lives of billions of people and is constantly used in decision-making processes for geospatial problems. With the increasing dissemination of information technology in society, there has been a great gain in terms of the quantity and quality of spatial information available to internet users. This paper evaluates the positional quality of data from the OpenStreetMap (OSM) platform, using data meeting a cartographic accuracy standard for the same region as reference for this evaluation. The methodology section presents the methodology proposed by the Brazilian PEC-PCD, which divides cartographic products into accuracy classes. For the data evaluated, the vectors from OSM obtained better accuracy at the 1:30,000 scale.

Paper Nr: 47
Title:

Logic Modeling for NSDI Implementation Plan - A Case Study in Indonesia

Authors:

Tandang Yuliadi Dwi Putra and Ryosuke Shibasaki

Abstract: The importance of sharing and reusing geographic information for national development programs has led many countries to establish National Spatial Data Infrastructures (NSDIs). Indonesia is one of the early adopters of NSDI, having begun the initiative in the 1990s. Some achievements have been made; nevertheless, there are also constraints on NSDI implementation identified by the stakeholders. Considering recent improvements in geospatial technology that have changed the landscape of NSDI towards more user-driven location services, the NSDI coordinator needs to compose a comprehensive framework that integrates requirements and detailed activities as the realization of strategies. This paper presents strategic planning using a logic model, incorporating components of the NSDI in Indonesia including policy, institutional arrangements, technology, standards and human resource issues. A logic model visualizes systematic programs and connects related activities with the projected outcomes. The model started with the identification of requirements through in-depth interviews and document study to provide insight for NSDI implementation. Subsequently, it determines the intended impact and outcomes, analyses activities, defines expected outputs from the NSDI initiative and identifies the resources for the operation. Our proposed model can be useful for the implementation of NSDI, particularly for countries that do not yet have strategic management or are considering improving it.

Paper Nr: 50
Title:

An Alternative Raster Display Model

Authors:

Titusz Bugya and Gábor Farkas

Abstract: In this paper we present an alternative, vector based coverage model, which could extend the traditional raster model. As the coverage model only changes the representation model behind rasters, it needs minimal effort to implement, mitigates current raster limitations, and has minimal performance impact due to modern computers’ increased computing capacities. Moreover, coverages could still get the benefits of the traditional raster model. As the data model remains the same, traditional raster based operations can still be applied on rectangular coverages, while other patterns can still benefit at least from matrix algebra. Finally, not only current raster operations could be kept, but there would not be any limitations of developing new ones optimized for different coverage patterns (e.g. hexagonal operations).

Paper Nr: 59
Title:

Evaluation of Two Solar Radiation Algorithms on 3D City Models for Calculating Photovoltaic Potential

Authors:

Syed Monjur Murshed, Alexander Simons, Amy Lindsay, Solène Picard and Céline De Pin

Abstract: Different algorithms are used to calculate solar irradiance on the horizontal and vertical surfaces of 3D city models. The goal of this paper is to evaluate the hourly solar irradiance calculated by two widely used algorithms in order to assess the photovoltaic (PV) potential of 3D city models. Both algorithms are implemented in an open source software infrastructure consisting of a PostgreSQL database connected with PostGIS, Python, etc. The results show a significant variation of solar irradiances on horizontal, vertical and tilted surfaces. Finally, the choice of a particular algorithm to assess citywide PV potential is justified.

Paper Nr: 69
Title:

Synthetic Images Simulation (SImS): A Tool in Development

Authors:

Carlos Alberto Stelle, Francisco Javier Ariza-López and Manuel Antonio Ureña-Cámara

Abstract: Remote sensing images are the main and most relevant data produced by this technology, owing to their numerous applications in the most diverse areas of knowledge. In this context, simulating these products can mean a significant reduction in costs and time, as well as assisting in the design stages of future sensors in the laboratory. One of the challenges of simulation is to reduce as much as possible the gap between the simulation and the reality one wishes to study. The purpose of this work is, starting from a brief review of methods for simulating passive sensor images, to propose a classification of these methods, to cite some examples of each, to present the conceptual model being developed, and to mention aspects that provide versatility and functionality, as well as some results.

Posters
Paper Nr: 20
Title:

Identifying the Impact of Human Made Transformations of Historical Watercourses and Flood Risk

Authors:

Thomas Moran, Sivaraj Valappil and David Harding

Abstract: In the past, many urban rivers were piped and buried either to simplify development, hide pollution or in an attempt to reduce flood risk; these factors define a culverted watercourse. A large number of these watercourses are not mapped, and if they are, their original nature is not clearly identifiable because they are recorded as part of the sewer network. Where these culverted watercourses are not mapped, having been lost to time and development, we refer to them as so-called ‘lost rivers’. There is a lack of awareness of the flood risk in catchments housing these rivers, and because many of them are incorrectly mapped as sewers, there is often confusion over their legal status and responsibility for their maintenance. To identify the culverted watercourses, many datasets were used, including LiDAR data (ground elevation data), historical maps (the earliest from the 1840s), asset data (the sewer network), and the river network. Automatic and manual identification of potential culverted watercourses was carried out, and the mapped assets were then analysed with flooding data to understand the impacts. A GIS map has been created showing all potential lost rivers and sites of culverted watercourses in the North London area.

Area 4 - Knowledge Extraction and Management

Full Papers
Paper Nr: 11
Title:

Enhanced Address Search with Spelling Variants

Authors:

Konstantin Clemens

Abstract: The process of resolving names of spatial entities like postal addresses or administrative areas into their whereabouts is called geocoding. It is an error-prone process for multiple reasons: names of postal address elements like cities, streets, or districts are often reused for historical reasons; structures of postal addresses are only coherent within countries or regions - around the globe, addresses are not structured in a canonical way; human users might not adhere even to locally common formats for specifying addresses; also, humans often introduce spelling mistakes when referring to a location. In this paper, a log of address searches from human users is used to model user behavior with regard to spelling mistakes. This model is used to generate spelling variants of address tokens, which are indexed in addition to the proper spelling. Experiments show that augmenting the index of a geocoder with spelling variants is a valuable approach to handling queries with misspelled tokens. It enables the system to serve more such queries correctly than a geocoding system supporting edit distances: while the recall of the system is improved, its precision remains on par.

Paper Nr: 37
Title:

GIS and Geovisualization Technologies Applied to Rainfall Spatial Patterns over the Iberian Peninsula using the Global Climate Monitor Web Viewer

Authors:

Juan Antonio Alfonso Gutiérrez, Mónica Aguilar-Alba and Juan Mariano Camarillo Naranjo

Abstract: Web-based GIS and geovisualization are increasingly expanding, but few examples yet exist with regard to the diffusion of climatic data. The Global Climate Monitor (GCM) (http://www.globalclimatemonitor.org), created by the Climate Research Group of the Department of Physical Geography of the University of Seville, was used to characterize the spatial distribution of precipitation in the Iberian Peninsula. Concern about the high spatial-temporal variability of precipitation in Mediterranean environments is accentuated in the Iberian Peninsula by its physiographic characteristics. However, despite its importance in water resources management, this variability has scarcely been addressed from a spatial perspective. Precipitation is characterized by positively skewed frequency distributions, so conventional statistical measures lose representativeness. For this reason, a battery of robust and non-robust statistics, little used in the characterization of precipitation, has been calculated and evaluated quantitatively. The results show important differences that might have significant consequences for the estimation and management of water resources. This study was carried out using Open Source technologies and involved the design and management of a spatial database. The results are mapped through a GIS and incorporated into a web geoviewer (https://qgiscloud.com/Juan_Antonio_Geo/expo) in order to facilitate access to them.

Paper Nr: 42
Title:

ResPred: A Privacy Preserving Location Prediction System Ensuring Location-based Service Utility

Authors:

Arielle Moro and Benoît Garbinato

Abstract: Location prediction and location privacy have received a lot of attention in recent years. Predicting locations is the next step for Location-Based Services (LBS) because it provides information based not only on where you are but on where you will be. However, obtaining information from an LBS has a price for the user, because she must share all her locations with the service that builds a predictive model, resulting in a loss of privacy. In this paper we propose ResPred, a system that allows an LBS to request location predictions about the user. The system includes a location prediction component containing a statistical location trend model and a location privacy component aimed at blurring the predicted locations by finding an appropriate tradeoff between LBS utility and user privacy, the latter being expressed as a maximum percentage of utility loss. We evaluate ResPred from a utility/privacy perspective by comparing our privacy mechanism with existing techniques using real user locations. Location privacy is evaluated with an entropy-based confusion metric for an adversary during a location inference attack. The results show that our mechanism provides the best utility/privacy tradeoff and a location prediction accuracy of 60% on average for our model.

Paper Nr: 61
Title:

Elcano: A Geospatial Big Data Processing System based on SparkSQL

Authors:

Jonathan Engélinus and Thierry Badard

Abstract: Big data are at the heart of many scientific and economic issues, and their volume is continuously increasing. As a result, the need for management and processing solutions has become critical. Unfortunately, while most of these data have a spatial component, almost none of the current systems are able to manage it. For example, while Spark may be the most efficient environment for managing big data, it is only used by five spatial data management systems. None of these solutions fully complies with ISO standards and OGC specifications in terms of spatial processing, and many of them are neither efficient enough nor extensible. The authors seek a way to overcome these limitations. Therefore, after a detailed study of the limitations of the existing systems, they define a system in greater accordance with the ISO 19125 standard. The proposed solution, Elcano, is an extension of Spark complying with this standard and allowing the SQL querying of spatial data. Finally, the tests demonstrate that the resulting system surpasses the solutions currently available on the market.

Paper Nr: 70
Title:

Spatiotemporal Data-Cube Retrieval and Processing with xWCPS

Authors:

George Kakaletris, Panagiota Koltsida, Manos Kouvarakis and Konstantinos Apostolopoulos

Abstract: Management and processing of big data is inherently interweaved with the exploitation of their metadata, which are also "big" in their own right, not only due to the increased number of datasets generated at continuously increasing rates, but also due to the need for deeper and wider description of those data, which yields metadata of higher complexity and volume. Taking into account that data generally cannot be processed unless enough description is provided on their structure, origin, etc., accessing those metadata becomes crucial not only for locating the appropriate data but also for consuming them. The instruments used to access those metadata must be tolerant of their heterogeneity and loose structure. In this direction, xWCPS (XPath-enabled WCPS) is a novel query language that targets the spatiotemporal data cube domain and tries to bring metadata and multidimensional data processing together under a single syntax paradigm, limiting the need to use different tools to achieve this. It builds on the domain-established WCPS protocol and the widely adopted XPath language and yields new facilities for spatiotemporal datacube analytics. Currently in its 2nd release, xWCPS represents a major revision over its predecessor, aiming to deliver an improved, clearer syntax and to ease implementation by its adopters.

Short Papers
Paper Nr: 31
Title:

An Automated Approach to Mining and Visual Analytics of Spatiotemporal Context from Online Media Articles

Authors:

Bolelang Sibolla, Laing Lourens, Retief Lubbe and Mpheng Magome

Abstract: Traditionally, spatio-temporally referenced event data were made available to geospatial applications through structured data sources, including remote sensing and in-situ and ex-situ sensor observations. More recently, with the growing importance of social media, web-based news media and location-based services, it is an increasing trend that geo-spatio-temporal context is extracted from unstructured text or video data sources. Analysts, on observation of a spatio-temporal phenomenon from these data sources, need to understand, timeously, the event that is happening, its location and temporal existence, as well as to find other related events, in order to successfully characterise the event. A holistic approach involves finding the information relevant to the phenomenon of interest and presenting it to the analyst in a way that can effectively answer the "what, where, when and why" of a spatio-temporal event. This paper presents a data mining based approach to automated extraction and classification of spatiotemporal context from online media publications, and a visual analytics method for providing insights from unstructured web-based media documents. The results of the automated processing chain, which includes extraction and classification of text data, show that the process can be automated successfully once a sufficiently large amount of data has been accumulated.

Paper Nr: 46
Title:

Representing GeoData for Tourism with Schema.org

Authors:

Oleksandra Panasiuk, Zaenal Akbar, Thibault Gerrier and Dieter Fensel

Abstract: A large amount of tourism data on the web, representing different touristic services, refers to information that is geographically located. With the intensive development of artificial intelligence, interest in the annotation of data is continuously increasing, and it is therefore important to describe all tourist needs. To be understandable to search engines, chatbots or other personal assistant systems, content data should be structured, well-formed and semantically consistent. Schema.org is a de-facto standard for marking up structured data on the web. In this paper we show how to annotate geographical information related to different touristic services and activities (e.g. hotels, restaurants, events, hiking and climbing trails) available on an interactive map using the schema.org vocabulary.

Paper Nr: 57
Title:

Location Intelligence for Augmented Smart Cities Integrating Sensor Web and Spatial Data Infrastructure (SmaCiSENS)

Authors:

Devanjan Bhattacharya and Marco Painho

Abstract: Spatio-temporal aspects of data lead to critical information. Sensors capture data at all scales continually, so it is imperative that useful information be extracted ubiquitously and regularly. Location plays a vital part by helping understand relations between datasets. It is crucial to link developmental works with spatial attributes, and the current challenge is to create an open platform that manages real-time sensor data and provides critical spatial analytics on top of expert domain knowledge provided in the system. This is a two-sided problem, where the solution not only tackles data from multiple sources but also runs a data management platform, with a spatial data infrastructure (SDI) as a backbone framework able to harness the sensor web (SW). The paper proposes the development of such a globally shared open spatial expert system (ES), SmaCiSENS, a first-of-its-kind geo-enabled knowledge-based (KB) ES for multiple fields, from smarter cities to climate modeling. SmaCiSENS is an integration of SW and SDI with a domain KB on data and problems, ready to infer solutions. The paper describes an architecture for the semantic enablement of SW and SDI; it connects interfaces, functions of the SDI and SW, and sensor data application program interfaces (APIs) to better manage climate modeling, geohazards, global changes, and other vital areas of attention and action.

Posters
Paper Nr: 51
Title:

Geospatial Data Sharing Barriers Across Organizations and the Possible Solution for Ethiopia

Authors:

Habtamu Sewnet Gelagay

Abstract: Geospatial data sharing across organizations is a well-recognized challenge. Due to the absence of an appropriate space to share geospatial assets, they often remain scattered and locked in various sectors of Ethiopia: no data sets are maintained and updated regularly, efforts are duplicated, and finding the available data sets is difficult. Exploiting the full socio-economic benefit of geospatial information is thus impossible. This paper therefore aims to assess inter-organizational geospatial data sharing challenges and the possible solutions in Ethiopia. Lack of coordination, poor data quality and incompatibility, and institutional, legal, policy, and technological issues were identified as major challenges. ENSDI, already initiated, should be promoted further as the collaborative entity meant for effective inter-organizational geospatial data sharing. A national strategy to take over informal SDI initiatives, a clear ENSDI development approach (top-down), and investment in the building blocks of ENSDI are suggested for the successful execution of ENSDI.

Area 5 - Domain Applications

Full Papers
Paper Nr: 7
Title:

Hotspot Analysis of the Spatial and Temporal Distribution of Fires

Authors:

Chien-Yuan Chen and Qi-Hua Yang

Abstract: Fire can take lives and destroy structures. However, modern technology can assist authorities in making decisions on fire disaster prevention. Geographic information systems can play a vital role in fire prevention and mitigation by predicting potential hotspots for fires. This study collected and analysed data on fires in Tainan City in southern Taiwan. Spatial statistics analysis tools employing average nearest neighbour analysis and global analysis through Moran's I were used to determine whether the fires had a clustered pattern, and a fire hotspot map was plotted using Getis-Ord Gi* analysis. The results showed that the highest fire risk index is that for people over 80 years old, followed by those between the ages of 60 and 80. The spatial distributions of fire locations, injuries, deaths, factory fires, house fires, and wild fires have clustered patterns in the city. The fire hotspots surround the downtown districts, which have high population density and highly developed commercial and industrial areas. The fire cold spots are located in the less developed mountainous and coastal areas, which have lower population density. Residents in hotspots should be able to better understand their fire risk by studying the hotspot map. Moreover, authorities can identify hotspots for decision making on fire prevention and urban development planning.

Short Papers
Paper Nr: 34
Title:

An Example of Multitemporal Photogrammetric Documentation and Spatial Analysis in Process Revitalisation and Urban Planning

Authors:

Agnieszka Turek, Adam Salach, Jakub Markiewicz, Alina Maciejewska and Dorota Zawieska

Abstract: Urban space is undergoing permanent and dynamic transformations resulting from economic and social changes, technological development and the migration of people from rural areas to cities. This strongly affects the evolution of the landscape and city structures. Current photogrammetric techniques allow for the acquisition of data for large areas within a relatively short time, and thus allow for fast updating and verification of existing data files. The authors of this paper have focused their research on the possibility of using multi-variant spatial analysis in the process of revitalisation of degraded areas in cities. An industrial district of Warsaw was selected for this study, where problems concerning the development of formerly industrial areas located in an attractive part of the city (close to the city centre) are extremely visible. As a result of urban development, degraded areas have been included within the administrative boundaries of the city and have created urban wastelands.

Paper Nr: 52
Title:

A Concept for Fast Indoor Mapping and Positioning in Post-Disaster Scenarios

Authors:

Eduard Angelats and José A. Navarro

Abstract: This work presents an early concept for a low-cost, lightweight fast mapping and positioning system suitable for civil protection and emergency teams working in post-disaster scenarios. The concept envisages continuous, seamless tracking in both indoor and outdoor environments; this would be possible because of the low geometric requirements set by emergency teams: knowing the floor and room where they are located is enough. The authors believe that current technologies (both hardware and software) are powerful enough to build such a system; this opinion is backed by an assessment of several currently available sensors (IMUs, RGB-D cameras, GNSS receivers), embedded processors, and mapping and positioning algorithms, as well as by a feasibility study taking into account the various factors playing a role in the problem.