
Wrapping Up: Seeing Earth, Talking to the Geon

To wrap up this bespoke residency, now that our six months (part-time) have come to an end, I thought it would be helpful to cover the ideas and workflows we’ve established, some problems, and future directions to explore. There’s a lot to unpack, but I’ll try to be succinct – with a few digressions.

Here’s a video that covers the main points, which I expand upon in the text below.

During the course of this residency I’ve conducted lots of experiments and had a great dialogue with Chris about how to approach things, leading down some inspiring pathways for future work.

Phillip Island Multispectral QGIS

2D Visualisation

Initially we looked at how to visualise hyperspectral and multispectral satellite data – and it turns out to be pretty straightforward using Python, with some programming help from AI – a great way to develop ‘explained’ code for libraries one may be unfamiliar with.

You need to know how to iterate and debug a program and work through some documentation, but the data can then be pulled out of complex file formats and transformed into usable images. This can be done in Jupyter notebooks, running locally in e.g. Anaconda, or remotely on Google Colab or NCRIS Cloudstor notebooks. Pretty straightforward. These programming assistants will improve significantly over time, and complex tasks will become even simpler to prompt.

HDF View File Data
ChatGPT 3.5 HDF Python advice

What the data captures is an entirely different matter – and hyperspectral data is a lot more complicated than multispectral data. I explored a raycasting volume system for that in UE, but that is beyond the remit of this wrap-up. There are lots of different ways to approach it – the question is which one is most useful. It’s a bit “chicken and egg.”

In concert, the question arises as to what one can infer from the data – which is in itself a huge field of scientific and engineering research. It’s not just a matter of applying some kind of GIS colour palette to spectral data – there are whole fields of analytics that can be applied, ranging from naïve naked-eye approaches, through quantitative and statistical analyses, to fascinating work in deep learning.

What we have attempted to do is move this data from the exclusive purview of the specialist to the generalist, from the desktop GIS system to the spatially located mobile device where an observer may actually be at a moment in time, and to establish how a human–AI interaction might be created that supports dialogue, queries and observations about the data and the immediate environment.

3D+ Visualisation

Similar approaches can be undertaken in Python for creating 3D+ representations of data, using e.g. Matplotlib or VisPy (amongst many others).
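For instance, a minimal Matplotlib sketch (using a synthetic band as a stand-in for real data) might render a single band of a cube as a 3D surface:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for one band of a hyperspectral cube
# (in practice this would come from h5py, as in the Colab examples below)
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
band = np.sin(6 * x) * np.cos(4 * y)

# Plot the band as a 3D surface, with the data value as height
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(projection='3d')
ax.plot_surface(x, y, band, cmap='viridis')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('value')
plt.show()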

For our purposes, Python scripts can run in Unreal Engine and Omniverse (as well as Unity) and become involved in the creation of geometry, textures, actors and a whole range of actions and narrative entities. It is very interesting to see how this is opening up as a result of Virtual Production pipelines, upon which art/science projects can piggyback. This includes creating complex time-series animations, physics simulations and volumetric rendering, including interactions with other toolsets used in scientific visualisation and creative-industries workflows. Python is the lingua franca.
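As an indicative sketch only (not project code), Unreal’s editor Python API can be driven like this – assuming the built-in unreal module and the long-standing EditorLevelLibrary entry point; the actor label is just a hypothetical example:

import unreal

# Spawn a static mesh actor at the world origin via the editor Python API
# (EditorLevelLibrary is the long-standing entry point; newer UE versions
# expose the same functionality via EditorActorSubsystem)
location = unreal.Vector(0.0, 0.0, 0.0)
rotation = unreal.Rotator(0.0, 0.0, 0.0)
actor = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.StaticMeshActor, location, rotation)

# Label the actor so it is easy to find in the World Outliner
actor.set_actor_label('Geon_DataMarker')
print(actor.get_name(), actor.get_actor_location())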

Cesium

Cesium for Unreal has progressed a great deal. It is currently more stable and flexible than the implementations for Omniverse or Unity. I surmise that this is because of the source-available nature of Unreal, as opposed to the closed-source model of the other platforms. Smart move by Epic. I hope it stays that way and becomes more open over time. Cesium itself is open source, which is crucial to its flexibility and widespread adoption.

AI and ML

Of course, the elephant in the room is Artificial Intelligence and Machine Learning. It has been fascinating to watch this evolve over the past six months – a huge hype cycle reminiscent of the blockchain frenzy of the last few years – but at least blockchain was never represented as an ‘existential risk’. Nevertheless, quantum blockchain technologies will, some day, become human-actor authentication and provenance authorities for lots of different types of data – it may become the only way to distinguish between ‘real’ and generative datasets.

AI is the definitive enabling technology of our time. It presents risks (not yet existential ones) and great opportunities. Like any powerful technology it must be treated with great circumspection and aligned with scientific and ethical interests for the benefit of ‘humanity’. It’s a mirror of humanity, and humanity is not all good. Artistic engagement can help explore and critique this new domain.

XR

David Chalmers’ Reality+ (2022) stimulates thoughts about the interfusion of world, data and intention, as do Jeff Malpas’ Place and Experience: A Philosophical Topography (2nd ed., 2018) and Peter Otto’s Multiplying Worlds: Romanticism, Modernity and the Emergence of Virtual Reality (2011).

Apple’s Vision Pro is the latest technological offering in this long history of the intermediation of self, panorama and place. And it is compelling – not because of the artificial gaze projected to the outside world (deixis to the other), but because it dispenses with controllers – it can operate by coded hand gesture alone. Its parts are not new, but the bringing together of systems is. It looks very interesting.

Yet the price of all ‘complete’ XR is total surveillance, even with a ‘secure enclave’: it’s still a head in a box, inherently panoptic and performative.

One wonders how long it will be before we all need to start wearing tinfoil hats to resist implanted thoughts or inception. Quite a long time, I expect, but not forever. Besides, the notion of implanted thoughts is epistemically ambiguous – often these are simply referred to as ‘culture’, ‘beliefs’ and ‘language’. It’s all quite problematic in the post-truth, post-human world of the Novacene. More to contemplate.

Some Practical Examples

In these naive and early days of AI XR, the world that is opening up is fascinating, as I hope the brief sketches above demonstrate. I think of them simply as sketches exploring how XR will become continuous across mobile devices, HMDs/spectacles, desktops and large immersive screens. Each device format has its own affordances and content, and interaction and UI/UX design needs to be cognizant of that – lots of interesting design considerations. Natural interactions seem the most compelling, as the premise of ubiquitous/ambient computing is that it will disappear into the background and essentially become invisible – except for intermediation with the world via AI agents such as our idea of the Geon.

I hope you’ve found the material I’ve covered here as interesting, useful and thought-provoking as I have! My sincere thanks to Prof. Chris Fluke, the SmartSat CRC and ANAT for facilitating this absorbing residency. Lots to think about and lots of ideas for future work.

Visualising Satellite data using Google Colab

Having spent a few hours reading documentation and having an ongoing conversation with ChatGPT, I’m getting the hang of the HDF5 file structure and can now visualise some multispectral data in Google Colab:


from google.colab import drive
drive.mount('/content/drive')

import h5py
import numpy as np
import matplotlib.pyplot as plt

# Path to the HDF5 file in Google Drive
file_id = '/content/drive/MyDrive/DATA/file_name.he5'

with h5py.File(file_id, "r") as f:
    # List all top-level groups/keys
    print("Keys: %s" % f.keys())
    a_group_key = list(f.keys())[0]

    # Get the members of the first group
    data = list(f[a_group_key])

# This gives us some idea about the groups/keys in the HDF file and the
# datasets contained therein - but will become more detailed as we go along

# Open the HDF5 file and inspect the data cube
with h5py.File(file_id, 'r') as f:
    # Currently this path is hard-coded, as I know from HDFView that it is the
    # dataset I want to look at - but really we want to find this programmatically.
    data_field = f['/path_to/Data Fields/Cube']
    # Print the attributes, data type and shape of the data cube
    print(f'Attributes: {dict(data_field.attrs)}')
    print(f'Dtype: {data_field.dtype}')
    print(f'Shape: {data_field.shape}')

# This gives us some idea about the data cube we are examining - its
# attributes, data type and shape (typically rows, columns and bands)

# Open the HDF5 file and plot one band of the cube
with h5py.File(file_id, 'r') as f:
    data_field = f['/path_to/Data Fields/Cube']
    # Take a single band so we have a 2D array to display
    # (the band axis/index may differ between products - adjust to suit)
    data = np.array(data_field[:, :, 0])
    # Rescale the values to the 0-255 range for display
    data = np.uint8(np.interp(data, (data.min(), data.max()), (0, 255)))
    # Create an 8 x 8 inch figure (roughly 800 x 800 pixels at 100 dpi)
    fig = plt.figure(figsize=(8, 8))
    # Render the band with the viridis colourmap
    plt.imshow(data, cmap='viridis')
    # Display the figure
    plt.show()

Satellite data plotted with viridis colourmap

Next steps involve developing a way of iteratively traversing the HDF5 directory structure, so that I can identify relevant data fields within the file – they’re not explicitly identified as ‘image files’. This can be done using h5py functions. Another thing to explore is GDAL: once I’ve identified the correct data in the geolocation fields, it should become possible to output GeoTIFFs or UE-friendly PNG files with geolocation metadata.
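As a first sketch of that traversal, h5py’s visititems() walks every group and dataset in the file, which makes it easy to list candidate data fields and their shapes (the file path below is the same placeholder as above):

import h5py

file_id = '/content/drive/MyDrive/DATA/file_name.he5'

def describe(name, obj):
    # Called once for every group and dataset in the file
    if isinstance(obj, h5py.Dataset):
        print(f'DATASET {name}  shape={obj.shape}  dtype={obj.dtype}')
    else:
        print(f'GROUP   {name}')

with h5py.File(file_id, 'r') as f:
    # Recursively visit every object below the root group
    f.visititems(describe)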

Here are some other Matplotlib colourmaps applied to the same dataset.
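Something along these lines will generate such comparisons, reusing the scaled data array from the snippet above:

# Compare a few Matplotlib colourmaps side by side
# (data is the scaled 2D band from the previous snippet)
cmaps = ['viridis', 'plasma', 'inferno', 'cividis']
fig, axes = plt.subplots(1, len(cmaps), figsize=(16, 4))
for ax, cmap in zip(axes, cmaps):
    ax.imshow(data, cmap=cmap)
    ax.set_title(cmap)
    ax.axis('off')
plt.show()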

It’s all pretty crude at this point – just figuring out how this stuff might work.

Working with ChatGPT & Hyperspectral Data

Thus far I have been playing around with HDFView to extract HDF5 image data – but I am very interested in working with advanced machine learning systems and developing processes to interface them with Unreal Engine and online data sources. It seems the most tractable way to do this is through a Python API – Python being pretty much the lingua franca of scientific computing.

As I am a neophyte with hyperspectral image data, let’s use ChatGPT to find out what might be an ‘easy’ or ‘best’ way to access image data within an HDF5 file:

Q: what is the best way to extract images from an hdf5 file containing hyperspectral image data?

ChatGPT:

One way to extract images from an HDF5 file containing hyperspectral image data is to use the h5py library in Python. This library allows you to open the HDF5 file and access the data stored within it, including the images. You can then use the numpy library to manipulate and extract the images as needed. Here is an example of how you might extract an image from an HDF5 file using h5py:

import h5py
import numpy as np

# Open the HDF5 file
with h5py.File('hyperspectral_image_data.hdf5', 'r') as hdf5_file:
    # Access the dataset containing the images
    images = hdf5_file['images']
    # Extract an image by its index
    image = np.array(images[0])
    print(image)

Another way is to use the gdal library in python, specifically with the gdal_translate utility, which can convert HDF5 to other raster formats like tiff or geotiff, and then use any image processing library to open and read the image.

gdal_translate -of GTiff HDF5:"file.hdf5"://dataset output.tif

In this way you will be able to use any library of your preference to read and extract the images.


I have no idea currently whether this will work, so let’s look up the relevant documentation:

Both look promising – and this has saved me a heap of time looking for ways to do it!

The first outputs images as a NumPy array – meaning that we can examine/export each image by its index – which would be useful for selecting certain λ (wavelength) values and conducting operations upon them.

The second uses GDAL (Geospatial Data Abstraction Library), which provides powerful utilities for the translation of geospatial metadata – enabling correct geolocation of the hyperspectral image data, for instance.

So perhaps a combination of both will be useful as we proceed.
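For instance, the GDAL Python bindings can perform the same subdataset-to-GeoTIFF conversion programmatically – a sketch, assuming the internal dataset path is already known (the filename and path here are placeholders matching the earlier examples):

from osgeo import gdal

# Subdataset path follows GDAL's HDF5 naming convention:
# HDF5:"<filename>"://<internal dataset path>
src = 'HDF5:"file_name.he5"://path_to/Data Fields/Cube'

# Convert the subdataset to a GeoTIFF, carrying across whatever
# georeferencing metadata GDAL can recover from the HDF5 file
gdal.Translate('output.tif', src, format='GTiff')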

But of course, any code generated by ChatGPT, OpenAI Codex or other AI assistants must be taken with several grains of salt. For instance, a recent Stanford study shows that users may write more insecure code when working with an AI code assistant (https://doi.org/10.48550/arXiv.2211.03622). Perhaps there are whole API calls and phrases that are hallucinated? I simply don’t know at this stage.

So – my next step will be to fire up a Python environment – probably Google Colab or Anaconda – and see what happens.

A nice overview of Codex here:

OpenAI Codex Live Demo


Technical Webinar: Can satellites monitor crop and pasture quality across Australia?

From the SmartSat CRC Newsletter:

This Technical Presentation, hosted by Ha Thanh Nguyen, Research Scientist in Digital Interactions (Agriculture & Food) at CSIRO, explores the outcomes of SmartSat Project P3.25: Can satellites monitor crop and pasture quality across Australia?

Knowledge of crop and pasture quality can provide the industry with insights to assist with the grazing management of pastures and input management decisions for crops. Handheld and lab-based spectroscopy have been extensively employed to monitor quality-based plant attributes. The methods employed are time consuming and expensive to implement and do not provide the industry with insights into the temporal trends of the critical variables. High resolution and frequent return time can overcome numerous deficiencies affecting equivalent visible IR and SWIR platforms, that limit the ability to create a viable product around crop and pasture quality. This project conducted a feasibility analysis capitalising on existing and planned satellite missions, including the Aquawatch satellites and precursors to test development of new high frequency products for crop and pasture quality across the Australian landscape. This project is led by Dr Roger Lawes, Principal Research Scientist at CSIRO Agriculture Flagship and includes participants from CSIRO and the Grains Research and Development Corporation. For more information visit https://smartsatcrc.com/research-prog…

Scope: Hyperspectral Imaging, Earth Model, Unreal Engine

Our initial scope is to examine how we can use hyperspectral satellite data within Unreal Engine (UE).

Hyperspectral Imaging

Wikipedia entry for Hyperspectral Imaging:

Hyperspectral imaging collects and processes information from across the electromagnetic spectrum.[1] The goal of hyperspectral imaging is to obtain the spectrum for each pixel in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes.[2][3] There are three general branches of spectral imagers. There are push broom scanners and the related whisk broom scanners (spatial scanning), which read images over time, band sequential scanners (spectral scanning), which acquire images of an area at different wavelengths, and snapshot hyperspectral imaging, which uses a staring array to generate an image in an instant.

Whereas the human eye sees color of visible light in mostly three bands (long wavelengths – perceived as red, medium wavelengths – perceived as green, and short wavelengths – perceived as blue), spectral imaging divides the spectrum into many more bands. This technique of dividing images into bands can be extended beyond the visible. In hyperspectral imaging, the recorded spectra have fine wavelength resolution and cover a wide range of wavelengths. Hyperspectral imaging measures continuous spectral bands, as opposed to multiband imaging which measures spaced spectral bands.[4]

Hyperspectral imaging satellites are equipped with special imaging sensors that collect data as a set of raster images, typically visualised as a stack, with each layer representing a discrete wavelength captured by the sensor array.

By Dr. Nicholas M. Short, Sr. - NASA Link

This represents a 3-dimensional hyperspectral data cube, consisting of x, y coordinates that correspond to Earth coordinates within a defined geodetic reference system, and a λ coordinate which corresponds to spectral wavelength.
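In practical terms, a single (x, y) location in the cube indexes a full spectrum. A minimal sketch with a synthetic cube (assuming rows × columns × bands ordering, which varies between products):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical cube: 100 x 100 pixels, 200 spectral bands
cube = np.random.rand(100, 100, 200)

# The spectrum at a single ground location (x, y) is a 1D slice along λ
x, y = 42, 17
spectrum = cube[y, x, :]

plt.plot(spectrum)
plt.xlabel('band index (λ)')
plt.ylabel('value')
plt.title(f'Spectrum at pixel ({x}, {y})')
plt.show()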

Earth Model

An Earth Model means precisely that – a model of the Earth!

More specifically, the Earth is understood as a highly complex three-dimensional shape. It can be simply represented as a sphere, or more accurately as an oblate spheroid, reflecting the difference between the equatorial and polar diameters caused by the rotation of the planet.

The scientific study of the shape of the Earth is called Geodesy. Arising from this is the World Geodetic System, about which Wikipedia notes:

The World Geodetic System (WGS) is a standard used in cartography, geodesy, and satellite navigation including GPS. The current version, WGS 84, defines an Earth-centered, Earth-fixed coordinate system and a geodetic datum, and also describes the associated Earth Gravitational Model (EGM) and World Magnetic Model (WMM). The standard is published and maintained by the United States National Geospatial-Intelligence Agency.[1]

Most relevant to creating a digital model of the Earth is the Earth-centred, Earth-fixed coordinate system (or ECEF):

The Earth-centered, Earth-fixed coordinate system (acronym ECEF), also known as the geocentric coordinate system, is a cartesian spatial reference system that represents locations in the vicinity of the Earth (including its surface, interior, atmosphere, and surrounding outer space) as X, Y, and Z measurements from its center of mass.[1][2] Its most common use is in tracking the orbits of satellites and in satellite navigation systems for measuring locations on the surface of the Earth, but it is also used in applications such as tracking crustal motion.

The distance from a given point of interest to the center of Earth is called the geocentric distance, R = (X² + Y² + Z²)^0.5, which is a generalization of the geocentric radius, R₀, not restricted to points on the reference ellipsoid surface. The geocentric altitude is a type of altitude defined as the difference between the two aforementioned quantities: h′ = R − R₀;[3] it is not to be confused with the geodetic altitude.

Conversions between ECEF and geodetic coordinates (latitude and longitude) are discussed at geographic coordinate conversion.
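As a worked illustration of that geodetic-to-ECEF conversion, here is a small sketch using the WGS 84 ellipsoid constants (not a production geodesy routine – libraries such as pyproj handle this robustly):

import math

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert WGS 84 geodetic coordinates (degrees, metres) to ECEF (metres)."""
    a = 6378137.0                # WGS 84 semi-major axis
    f = 1 / 298.257223563        # WGS 84 flattening
    e2 = f * (2 - f)             # first eccentricity squared

    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)

    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1 - e2) + h) * math.sin(lat)
    return x, y, z

# Example: Phillip Island, Victoria (approximate coordinates)
print(geodetic_to_ecef(-38.49, 145.23, 0.0))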

Unreal Engine

Unreal Engine 5.1 provides a comprehensive system for managing ECEF georeferenced data, including ‘flat’ and ’round’ planet projections:

https://docs.unrealengine.com/5.1/en-US/georeferencing-a-level-in-unreal-engine/

In addition to this, there are a variety of plugins available for managing georeferenced data.

For this research we are evaluating two initial approaches:

Cesium for Unreal: https://cesium.com/platform/cesium-for-unreal/

Landscaping Plugin:  https://landscaping.ludicdrive.com

More soon.