
Wrapping Up: Seeing Earth, Talking to the Geon

To wrap up this bespoke residency, now that our 6 months (part-time) has come to an end, I thought it would be helpful to cover a bunch of ideas and workflows we’ve established, some problems, and future directions to explore. There’s a lot to unpack, but I’ll try and be succinct – with a few digressions.

Here’s a video that covers the main points, which I expand upon in the text below.

During the course of this residency I’ve conducted lots of experiments and had a great dialogue with Chris about how to approach things, leading down some inspiring pathways for future work.

Phillip Island Multispectral QGIS

2D visualisation

Initially we started off by looking at how to visualise hyperspectral and multispectral satellite data – and it turns out to be pretty straightforward using Python and some programming help from AI – a great way to develop ~explained~ code for libraries one may be unfamiliar with.

You need to know how to iterate and debug a program and work through some documentation, but the data can be accessed from complex file formats and transformed into usable images. This can be done in Jupyter notebooks, running locally in e.g. Anaconda, or remotely on Google Colab or NCRIS Cloudstor notebooks. Pretty straightforward. These programming assistants will significantly improve over time, and complex tasks will become even simpler to prompt.
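For example, a notebook cell along these lines can turn an HDF5 granule into a quick-look image. This is only a sketch: the file name, dataset path and band indices are hypothetical and differ between products and sensors.

```python
import h5py
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical HDF5 granule and dataset path; real products differ.
with h5py.File("scene.h5", "r") as f:
    cube = f["/data/reflectance"][:]        # e.g. shape (bands, rows, cols)

def stretch(band):
    """Simple 2-98 percentile contrast stretch to the 0-1 range for display."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo), 0, 1)

# Pick three bands to stand in for R, G, B (indices depend on the sensor).
rgb = np.dstack([stretch(cube[i]) for i in (29, 19, 9)])

plt.imshow(rgb)
plt.axis("off")
plt.title("Quick-look RGB composite")
plt.show()
```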

HDF View File Data
ChatGPT 3.5 HDF Python advice

What the data captures is an entirely different matter – and hyperspectral data is a lot more complicated than multispectral data. I explored a raycasting volume system for that in UE, but that is beyond the remit of this wrap-up. There are lots of different ways to approach it – the question is which one is most useful. It’s a bit “chicken and egg.”

In concert, the question arises as to what one can infer from the data – which is in itself a huge field of scientific and engineering research. It’s not just a matter of applying some kind of GIS colour palette to spectral data – there are whole fields of analytics that can be applied. These range from naïve naked-eye approaches, through quantitative and statistical analyses, to fascinating work in deep learning.

What we have attempted to do is to move this data from the exclusive purview of the specialist to the generalist, from the desktop GIS system to the spatially-located mobile device where an observer may ~actually~ be in a moment in time, and to establish how a human-AI interaction might be established that can create dialogue, queries and observations about the data and their immediate environment.

3D+ Visualisation

Similar approaches can be undertaken in Python for creating 3D+ representations of data, using e.g. Matplotlib or VisPy (amongst many others).
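As a rough sketch of what that looks like, the snippet below renders a sub-sampled (synthetic) data cube as a Matplotlib 3D scatter, with band number standing in for wavelength on the vertical axis:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
cube = rng.random((20, 50, 50))        # synthetic cube: (bands, rows, cols)

b, y, x = np.indices(cube.shape)
sel = (slice(None, None, 4),) * 3      # sub-sample so the plot stays light

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(x[sel].ravel(), y[sel].ravel(), b[sel].ravel(),
                c=cube[sel].ravel(), cmap="viridis", s=4)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("band (λ)")
fig.colorbar(sc, label="value")
plt.show()
```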

For our purposes, Python scripts can run in the Unreal Engine and Omniverse platforms (as well as Unity) and become involved in the creation of geometry, textures, actors and a whole range of actions and narrative entities. It’s very interesting to see how this is opening up as a result of Virtual Production pipelines, upon which art/science projects can piggyback. This includes creating complex time-series animations, physics simulations and volumetric rendering, including interactions with other toolsets used in scientific visualisation and creative-industries workflows. Python is the lingua franca.
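As a hedged illustration of that kind of scripting, the sketch below spawns placeholder actors in the Unreal Editor from a made-up time series. The mesh path and sample values are hypothetical, and in newer engine versions the EditorActorSubsystem is preferred over EditorLevelLibrary.

```python
import unreal

# Hypothetical (time, value) samples; in practice these would come from data.
samples = [(0.0, 120.0), (1.0, 180.0), (2.0, 90.0)]

mesh = unreal.load_asset("/Engine/BasicShapes/Sphere.Sphere")

for i, (t, value) in enumerate(samples):
    location = unreal.Vector(i * 200.0, 0.0, value)   # map value to height (cm)
    actor = unreal.EditorLevelLibrary.spawn_actor_from_class(
        unreal.StaticMeshActor, location, unreal.Rotator(0.0, 0.0, 0.0))
    actor.static_mesh_component.set_static_mesh(mesh)
    actor.set_actor_label(f"Sample_{i}")
```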

Cesium

Cesium for Unreal has progressed a great deal. It is currently more stable and flexible than implementations for Omniverse or Unity. I surmise that this is because of the source-available nature of Unreal, as opposed to the closed-source model of the other platforms. Smart move by Epic. I hope it stays that way and becomes more open over time. Cesium is opensource, which is crucial to its flexibility and widespread adoption.

AI and ML

Of course, the elephant in the room is Artificial Intelligence and Machine Learning. It has been fascinating to watch this evolve over the past 6 months – a huge hype-cycle reminiscent of the Blockchain frenzy of the last few years – but at least blockchain was never represented as an ‘existential risk’. Nevertheless, quantum blockchain technologies will, some day, become human-actor authentication and provenance authorities for lots of different types of data – it may become the only way to distinguish between ‘real’ and generative datasets.

AI is the definitive enabling technology of our time. It presents risks (not yet existential ones) and great opportunities. Like any powerful technology it must be treated with great circumspection and aligned with scientific and ethical interests for the benefit of ‘humanity’. It’s a mirror of humanity, and humanity is not all good. Artistic engagement can help explore and critique this new domain.

XR

David Chalmers’ Reality+ (2022) stimulates thoughts about the interfusion of world, data and intention, as do Jeff Malpas’ Place and Experience: A Philosophical Topography (2nd Ed. 2018) and Peter Otto’s Multiplying Worlds: Romanticism, Modernity and the Emergence of Virtual Reality (2011).

Apple’s Vision Pro is the latest technological offering in this long history of the intermediation of self, panorama and place. And it is compelling – not because of the artificial gaze projected to the outside world (deixis to the other), but because it dispenses with controllers – it can operate by coded hand-gesture alone. Its parts are not new, but the bringing together of systems is. It looks very interesting.

Yet the price of all ‘complete’ XR is total surveillance, even with a ‘secure enclave’: it’s still a head in a box, inherently panoptic and performative.

One wonders how long it will be before we all need to start wearing tinfoil hats to resist implanted thoughts or inception. Quite a long time, I expect, but not forever. Besides, the notion of implanted thoughts is epistemically ambiguous – often these are simply referred to as ‘culture’, ‘beliefs’ and ‘language’. It’s all quite problematic in the post-truth, post-human world of the Novacene. More to contemplate.

Some Practical Examples

In these naive and early days of AI XR, the world that is opening up is fascinating, as I hope the brief sketches above demonstrate. I think of them simply as sketches exploring how XR will become continuous across mobile devices, HMDs/spectacles, and desktop and large immersive screens. Each device format has its own affordances, and content, interaction and UI/UX design need to be cognizant of that – lots of interesting design considerations. Natural interactions seem the most compelling, as the premise of ubiquitous/ambient computing is that it will disappear into the background and essentially become invisible – except for intermediation with the world via AI agents such as our idea of the Geon.

I hope you’ve found the material I’ve covered here as interesting, useful and thought-provoking as I have! My sincere thanks to Prof. Chris Fluke, the SmartSat CRC and ANAT for facilitating this absorbing residency. Lots to think about and lots of ideas for future work.

NVIDIA Omniverse or Unreal Engine 5?

Throughout this residency I’ve been working with two development platforms to try and evaluate which is the ‘best’ (easiest? optimal?) workflow for the ideas we’ve been exploring. Even at this penultimate stage, things are unclear, so I thought I’d unpack some of the similarities, differences and interactions between the approaches.

Here are two takes (after a few unsatisfactory tries) from ChatGPT 4 and Google Bard, using the prompt:

“You are an expert computer artist and programmer. Explain what are the differences and similarities between NVIDIA Omniverse platform and Unreal Engine 5? Create a list of 5 (or so) dot points that clearly indicate when each approach would be the optimal one.”

They’re quite useful – generic (as expected) – and indeterminate because there is so much replication, cross-over and interaction of features.

<aside>reading these responses, I suppose this inevitably speeds the end of some national, regional, alternative and idiomatic versions of English – “ise/ize”, “colour/color”, “mould/mold” etc. </aside>

It seems to come down to a few points that I’ll detail at the end.


GPT4:

As a computer artist and programmer, you are looking at a blend of aesthetics, functionality, and the ability to customize and extend the features of a platform. As of my knowledge cutoff in September 2021, both Unreal Engine 5 and NVIDIA Omniverse have features that will appeal to artists and programmers, albeit with different strengths.

**Unreal Engine 5:**

1. **Game Development**: Unreal Engine 5 is built primarily for the development of games, with a wealth of tools for creating, scripting, and animating game worlds.

2. **Blueprint and C++**: UE5 provides both visual scripting through Blueprint and full source code access with C++, giving you a range of options for scripting and customizing your projects.

3. **Nanite and Lumen**: Nanite allows you to import high-fidelity, cinema-quality art assets directly into the engine, while Lumen provides dynamic, real-time global illumination.

4. **Multi-platform Deployment**: UE5 supports a wide range of platforms, making it ideal for game and application development across devices.

5. **Community and Resources**: The Unreal Engine has a large and active community, which means extensive resources, tutorials, and support.

**Optimal for**: Artists and programmers focused on game development or creating other interactive, real-time experiences. If you need access to source code or want a large community for support, Unreal Engine is a strong choice.

**NVIDIA Omniverse:**

1. **Collaboration and Interoperability**: Omniverse is designed as a platform for collaboration, allowing multiple users to work together in real-time across different applications, thanks to its use of USD (Universal Scene Description).

2. **Physically Accurate Rendering**: Omniverse provides highly accurate, real-time ray tracing rendering with the NVIDIA RTX platform, which is ideal for projects requiring high-quality visualization.

3. **AI Integration**: Omniverse leverages NVIDIA’s AI technologies, offering unique possibilities for AI-enhanced simulation, animation, and more.

4. **Infrastructure Management**: Omniverse can be used to visualize and manage data center infrastructure, an application unique to NVIDIA’s platform.

5. **Integration with other NVIDIA tools**: Omniverse is designed to work seamlessly with other NVIDIA tools, such as Deep Learning Super Sampling (DLSS), enhancing the platform’s capabilities.

**Optimal for**: Collaborative projects requiring real-time contributions across different applications, projects requiring high-fidelity, physically accurate rendering, and tasks involving AI-enhanced graphics or simulation.

Please keep in mind that both platforms continue to evolve and have likely added new features or improvements since my last training data in September 2021.


BARD:

Sure. Here are the similarities and differences between NVIDIA Omniverse platform and Unreal Engine 5:

Similarities

  • Both are powerful 3D creation tools.
  • Both can be used to create realistic and immersive 3D experiences.
  • Both support a wide range of features, including 3D modeling, animation, rendering, and simulation.
  • Both are used by a wide range of professionals, including game developers, architects, and filmmakers.

Differences

  • NVIDIA Omniverse platform is a more collaborative platform, designed to help teams work together on 3D projects.
  • Unreal Engine 5 is a more game-oriented platform, designed to create high-performance, real-time 3D experiences.
  • NVIDIA Omniverse platform is currently only available for Windows, while Unreal Engine 5 is available for Windows, macOS, and Linux.

Here are 5 dot points that clearly indicate when each approach would be the optimal one:

  • NVIDIA Omniverse platform is the optimal choice for:
    • Collaborative 3D projects: The platform’s real-time collaboration features make it easy for teams to work together on 3D projects, regardless of their location.
    • Large-scale projects: The platform’s scalability makes it well-suited for large-scale projects that require multiple users and high-performance computing.
    • Complex projects: The platform’s powerful tools and features make it well-suited for complex projects that require a high level of detail and realism.
  • Unreal Engine 5 is the optimal choice for:
    • Game development: The platform’s real-time rendering and performance make it well-suited for developing high-performance, visually stunning games.
    • Virtual reality and augmented reality: The platform’s support for VR and AR makes it well-suited for developing immersive experiences.
    • Filmmaking: The platform’s powerful tools and features make it well-suited for creating realistic and immersive visual effects for films.

I hope this helps!


It does indeed – thanks Machine Intelligence! 🙂

The main differences are subtle and strategic, and perhaps more effectively understood in the context of industrial competition and the territorialisation of artistic and developer activity – driven by business. It’s basically technological evolution driven by capitalism,  not social good or woo-woo philosophical outcomes. Was it ever not thus? Well, yes – opensource provides an important alternative model, but it has significant infrastructural constraints.

Omniverse provides – at a basic level – an incredible toolset that enables the development of bespoke applications for realtime visualisation and simulation. The workflow is quite different to UE, but it provides access to all its features in Python (above) and C++ (below) – and a visual graph language that is easy to use, including standardised UI features. So it is nice and fast, even if it’s a bit crashy at this stage. It’s pretty compelling and I am interested to learn more, especially as it develops in the direction of UIs for XR streaming across multiple platforms and the integration of AI and physics simulations (e.g. via ParaView). It’s very modular – reminiscent of OpenDoc – but not open! Forgotten dreams of a better world. Really, Omniverse could be access-controlled opensource, like Unreal is. And much more free (not only in $$, but in principle).
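To give a flavour of that modularity, here is a minimal sketch of an Omniverse Kit extension, following the standard extension template; the window contents are placeholders and details can vary between Kit versions.

```python
import omni.ext
import omni.ui as ui

class VisualisationExtension(omni.ext.IExt):
    """Minimal Kit extension that opens a placeholder control window."""

    def on_startup(self, ext_id):
        self._window = ui.Window("EO Visualisation", width=300, height=120)
        with self._window.frame:
            with ui.VStack():
                ui.Label("Hyperspectral layer controls go here")
                ui.Button("Reload data", clicked_fn=self._reload)

    def _reload(self):
        print("Reload requested")           # placeholder action

    def on_shutdown(self):
        self._window = None
```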

Of course, all these systems ~could~ be like a better, improved version of OpenDoc. But $$$ – it’s in their interests, currently, to be non-interoperable.

Unreal Engine is more mature – and much more stable, in my experience. But it is far more monolithic – perhaps it could be modularised. There is an emergent UI library, but UE UI (UMG) seems counter-intuitive to me – it’s complex to get your head around (I understand why it is as it is, because of the C++ legacy, but I feel it needn’t be this way – UI could be an intuitive MVC plastic layer, not casting, binding and widgets).

Cesium runs both in Omniverse and UE – is it possible to create an Omniverse USD scene using Cesium and pipe that into UE  using an Omniverse connector or is it best just to use the UE Cesium plugin?

An interesting Omniverse/UE co-simulation here.

It would be nice if UE had ~easy~ realtime server capabilities like Omniverse, without all the asset issues of version control with Git, LFS, Bitbucket, GitHub, Plastic, Perforce, Subversion etc. cross-platform. This is something I need to investigate to find an optimal solution for our use-case. Of course, everything is complicated in a non-commercial research context.

A useful talk comparing the two approaches can be seen here (registration may be required):

https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s42162/

Omer Shapira, a Senior Omniverse Engineer from NVIDIA, discusses: “Learn about designing and programming spatial computing features in Omniverse. We’ll discuss Omniverse’s real-time ray-traced extended reality (XR) renderer, Omniverse’s Autograph system for visual programming, and show how customers are using Omniverse’s XR tools to do everything together — from design reviews to hangouts.”

A useful diagram from the PDF slides:

It provides an insightful abstract overview of the relationships between data sources, pipelines and developer/artist/user activity – and, of course, computers. Somewhere there must be cloudy-bits and AI that will soon complicate the picture.

I must try and find the abstract view from an Unreal Engine engineer somewhere – I’m sure the diagram would be different. Nevertheless, the territorial play is the same – a stack from hardware engineering, through software engineering, to ‘experiences’ is basically the monopoly. It comes down to engineering choices – the physical constraints, scientific research, economics and politics  of engineering. That defines everything. But it is driven by human desire, a seeming-paradox, top-down but bottom-up.

USD (Universal Scene Description) seems to be on track to be the lingua franca of the metaverse/omniverse/whatevs. A universal file format is an incredibly important idea – not only because there are literally thousands of incompatible, difficult to translate formats, but because of obsolescence. Like human language, digital ‘file language’ evolves and changes; it is currently much more fragile. A significant solar storm might wipe out AI and all digital knowledge. Biology might not be fried in the deep ocean. We’ve had a few billion years of experience.
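As a small illustration of how approachable the format is, here is a sketch using the open-source USD Python bindings (installable via the usd-core package); the file and prim names are arbitrary.

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("earth_scene.usda")
world = UsdGeom.Xform.Define(stage, "/World")
globe = UsdGeom.Sphere.Define(stage, "/World/Globe")
globe.GetRadiusAttr().Set(6371.0)            # kilometres, purely illustrative
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()                  # human-readable .usda on disk
```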

Nevertheless, the Hyperscale/Game Engine counterpoint is instructive. From my research into Cloud XR – whether via Google Immersive Stream, Azure Cloud XR, Amazon or others – the problem at scale is simply that you need lots and lots of individual virtual machines to run instances of a ‘world’, and lots of low-latency network traffic to synchronise apparent time (with a sprinkling of predictive AI for the ~20ms perceptual lag barrier).

Strangely, this sounds like an ecosystem.

This is clearly a problem that Omniverse tries to address, but it won’t work until it is much more open. UE may beat it via the ‘omniverse’ of the expanded Fortnite platform, if Epic can colonise the hardware and mindware at speed. Or they may collaborate or parasitise each other – hard to tell. Are the metaphors appropriate? There are, of course, many other potential players in this space – even, I expect, disruptors or disruptive technologies that have yet to emerge from someone’s loungeroom.

No doubt agentive AGI systems would approach this in entirely different ways, given their own interests.

 

AR for iOS/Android in Unreal Engine 5.1

Unreal Engine is a robust platform for exploring how to develop applications in XR. So, a few weeks ago I worked through the documentation and various tutorials for developing an Augmented Reality app using Unreal Engine (UE) 5.1.

It’s confusing to say the least – as there is lots of advice available coming from different quarters, and the processes of compilation and deployment are quite complicated. Unnecessarily complicated, in my opinion – but I understand why such requirements are involved.

UE comes with an AR Template that can cross-compile for both iOS and Android. My initial thought – for ease of use – was to go and buy a cheapish Android device (tablet or phone) and deploy locally on that. But of course it isn’t that simple – there are a whole bunch of technical requirements for the platform e.g. Android OS version, CPU, GPU etc – and these must be matched with the correct Android SDK. So, a whole lot of time required to figure all that stuff out – surely there is an easier way?

There is, to a degree, on iOS, as the range of models is much more limited, and iOS seems more standardised and slightly simpler to target – despite a whole bunch of security requirements, such as developer signing certificates. Plus I already have a few iOS devices around to test on – my iPhone XR and an iPad Pro.

The next problem is that I require a recent Apple computer on which to run UE and Xcode in a recent version of MacOS. These days, as I mainly develop using Windows 11 (for graphics) and Ubuntu Linux 22.04 LTS (for machine learning), this presented a problem. I have a bunch of MacPro 5,1 towers that I’ve kept as a render farm – but the most up-to-date one is a mid-2010 model, with a Metal-capable Radeon RX580 GPU, running MacOS 10.14 Mojave. This was the last OS officially supported for the ‘Classic’ Mac Pro desktop (cMP) in 2018, last updated in 2019. Lots of software was becoming ‘non-updateable’ on it, not only because it is a 13-year-old computer, but because Mojave was no longer officially supported by Apple.

So what to do?

Fortunately there is a community of hackers who have developed work-arounds for supporting newer OS versions on unsupported machines, ranging from dosdude1’s easy-to-use MacOS patchers, to OCLP, to Martin Lo’s cMP OpenCore package. Having read around the subject and having contemplated updating my old machines for the last few years (!), I decided to take the plunge and use Martin Lo’s package, as it is specifically targeted at Mac Pro ‘cheese-graters’ like mine – with minimal patching of the underlying OS (unlike OCLP) – it is principally a modified bootloader and relatively easy to install and uninstall without bricking your machine. There’s even a very friendly and helpful Facebook group for technical discussions.

Happily, it’s worked perfectly. That means I can develop as usual in Windows and use my ancient machine to deploy on iOS devices. So – I’ve got the UE 5.1 AR Template demo compiled on MacOS Monterey 12.6.1 and Xcode 14.2, running on my old cheese-grater cMP – and installed on my 2015 iPad Pro running iOS 16.3. It runs fine. Problem solved.

I have no loyalty to any particular platform – they’re just systems with which to make things, and they each have their strengths and weaknesses. If possible I try to avoid vendor lock-in by exploring opensource approaches – but some degree of it seems to be unavoidable. I’m all for open standards and cross-platform compatibility. Having said this, UE is my current engine of choice (though I keep abreast of Unity and Godot), and it works well enough with a broad opensource/source-available ecosystem.

Here’s a shot of it showing the default architectural model, on a placemat on a kitchen table. It works fine and has been useful for learning – but looks kind of clunky.

UE 5.1 AR Demo showing architectural model

The following is a screenshot of the impressive AR app by Handbuilt Creative – demonstrating a photorealistic animated Psittacosaurus. There is obviously a lot of work behind this, and it indicates the level of detail that can be achieved. Of course, I would expect more recent devices would be capable of a lot more – but I don’t have the budget to just pop out and buy new tech, unfortunately, so it’s a matter of working with what I have available.

Famous Fossil AR App Screenshot

Things have progressed a great deal from here (thanks to a bunch of developments announced at GDC 2023) – I’ll cover them in a future post.


Here’s a summary of the steps to get UE 5.1 AR working, courtesy of ChatGPT:

Provisioning for iOS

If you want to distribute your iOS app made with Unreal Engine 5.1 through the Apple App Store, you will need to create a new certificate and provisioning profile for each version of your app. This is because each version of your app is treated as a separate and distinct application by Apple.

To publish your app to the App Store, you will need to create an App ID, a Distribution Certificate, and a Provisioning Profile for each version of your app. You will also need to configure your Unreal Engine 5.1 project to use these credentials for building and packaging your app.

Here are the steps you need to undertake on your Apple Developer account to create the necessary certificates and provisioning profiles for your Unreal Engine 5.1 app, as well as how to enter these details into Unreal Engine:

  1. Create an App ID:
    • Log in to your Apple Developer account.
    • Click on “Certificates, Identifiers & Profiles” in the sidebar.
    • Click on “Identifiers” and then click the “+” button to create a new App ID.
    • Choose “App IDs” from the “Register” dropdown menu, then select “iOS App” as the App ID type.
    • Enter a name for your App ID and a unique Bundle ID (e.g. com.mycompany.myapp).
    • Click “Continue” and review your App ID details, then click “Submit”.
  2. Create a Distribution Certificate:
    • In the “Certificates, Identifiers & Profiles” section, click on “Certificates”.
    • Click on the “+” button to create a new certificate.
    • Select “App Store and Ad Hoc” as the certificate type and click “Continue”.
    • Follow the instructions to create a certificate signing request (CSR) using Keychain Access on your Mac.
    • Upload the CSR and click “Continue”.
    • Download the distribution certificate and install it on your Mac by double-clicking the downloaded file.
  3. Create a Provisioning Profile:
    • In the “Certificates, Identifiers & Profiles” section, click on “Provisioning Profiles”.
    • Click on the “+” button to create a new profile.
    • Select “App Store” as the provisioning profile type and click “Continue”.
    • Choose the App ID you created in step 1 and click “Continue”.
    • Choose the distribution certificate you created in step 2 and click “Continue”.
    • Give your provisioning profile a name and click “Continue”.
    • Download the provisioning profile and install it on your Mac by double-clicking the downloaded file.
  4. Enter the details into Unreal Engine:
    • Open your Unreal Engine project and go to the “Project Settings” section.
    • Under “Platforms”, select “iOS”.
    • Enter your Bundle ID in the “Bundle Identifier” field.
    • Click the “Import” button next to the “Certificate” field and select the distribution certificate you created in step 2.
    • Click the “Import” button next to the “Provisioning Profile” field and select the provisioning profile you created in step 3.
    • Save your settings and build your iOS app in Unreal Engine.

Unreal Engine 5.1 AR Documentation:

Scripting the Unreal Editor Using Python

An important aspect of our visualisation pipeline involves chaining together Unreal Engine and Python workflows.

The UE 5.1 Python introductory documentation lists some common examples of what might be done:

  • Construct larger-scale asset management pipelines or workflows that tie the Unreal Editor to other 3D applications that you use in your organization.
  • Automate time-consuming Asset management tasks in the Unreal Editor, like generating Levels of Detail (LODs) for Static Meshes.
  • Procedurally lay out content in a Level.
  • Control the Unreal Editor from UIs that you create yourself in Python.

Extensive UE Python API documentation is listed here.
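As a simple, hedged example of the asset-management style of task in the list above, the snippet below lists assets under a (hypothetical) content folder and logs their names. It assumes the Python Editor Script Plugin is enabled and is run inside the editor.

```python
import unreal

folder = "/Game/EarthObservation"     # hypothetical content folder
asset_paths = unreal.EditorAssetLibrary.list_assets(folder, recursive=True)

for path in asset_paths:
    data = unreal.EditorAssetLibrary.find_asset_data(path)
    unreal.log(f"{data.asset_name}: {path}")
```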

A useful introductory video on YouTube:

Scope: Hyperspectral Imaging, Earth Model, Unreal Engine

Our initial scope is to examine how we can use hyperspectral satellite data within Unreal Engine (UE).

Hyperspectral Imaging

Wikipedia entry for Hyperspectral Imaging:

Hyperspectral imaging collects and processes information from across the electromagnetic spectrum.[1] The goal of hyperspectral imaging is to obtain the spectrum for each pixel in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes.[2][3] There are three general branches of spectral imagers. There are push broom scanners and the related whisk broom scanners (spatial scanning), which read images over time, band sequential scanners (spectral scanning), which acquire images of an area at different wavelengths, and snapshot hyperspectral imaging, which uses a staring array to generate an image in an instant.

Whereas the human eye sees color of visible light in mostly three bands (long wavelengths – perceived as red, medium wavelengths – perceived as green, and short wavelengths – perceived as blue), spectral imaging divides the spectrum into many more bands. This technique of dividing images into bands can be extended beyond the visible. In hyperspectral imaging, the recorded spectra have fine wavelength resolution and cover a wide range of wavelengths. Hyperspectral imaging measures continuous spectral bands, as opposed to multiband imaging which measures spaced spectral bands.[4]

Hyperspectral imaging satellites are equipped with special imaging sensors that collect data as a set of raster images, typically visualised as a stack, with each layer representing a discrete wavelength captured by the sensor array.

Hyperspectral data cube – image by Dr. Nicholas M. Short, Sr. (NASA Link)

This represents a 3-dimensional hyperspectral data cube, consisting of x, y coordinates that correspond to Earth coordinates within a defined geodetic reference system, and a λ coordinate that corresponds to spectral wavelength.
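To make the cube structure concrete, here is a small numpy sketch; the data are synthetic, and the axis order and wavelength range are chosen only for illustration, as these vary between sensors and products.

```python
import numpy as np

rows, cols, bands = 100, 120, 224            # hypothetical dimensions
cube = np.random.rand(rows, cols, bands)     # axes: (y, x, λ) in this sketch
wavelengths = np.linspace(400, 2500, bands)  # nominal wavelengths in nm

# The spectrum for a single ground pixel is a 1D slice along λ.
spectrum = cube[40, 60, :]

# A single-band image is a 2D slice nearest a chosen wavelength.
nir = cube[:, :, np.argmin(np.abs(wavelengths - 860))]   # band nearest 860 nm
red = cube[:, :, np.argmin(np.abs(wavelengths - 660))]   # band nearest 660 nm

# A simple derived product, e.g. an NDVI-like index per pixel.
ndvi = (nir - red) / (nir + red)
```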

Earth Model

An Earth Model means precisely that – a model of the Earth!

More specifically, the Earth is understood as a highly complex three-dimensional shape. It can be simply represented as a sphere or, more accurately, as an oblate spheroid, reflecting the difference between the equatorial and polar diameters caused by the rotation of the planet.

The scientific study of the shape of the Earth is called Geodesy. Arising from this is the World Geodetic System, about which Wikipedia notes:

The World Geodetic System (WGS) is a standard used in cartography, geodesy, and satellite navigation including GPS. The current version, WGS 84, defines an Earth-centered, Earth-fixed coordinate system and a geodetic datum, and also describes the associated Earth Gravitational Model (EGM) and World Magnetic Model (WMM). The standard is published and maintained by the United States National Geospatial-Intelligence Agency.[1]

Most relevant to creating a digital model of the Earth is the Earth-centred, Earth-fixed coordinate system (or ECEF):

The Earth-centered, Earth-fixed coordinate system (acronym ECEF), also known as the geocentric coordinate system, is a cartesian spatial reference system that represents locations in the vicinity of the Earth (including its surface, interior, atmosphere, and surrounding outer space) as X, Y, and Z measurements from its center of mass.[1][2] Its most common use is in tracking the orbits of satellites and in satellite navigation systems for measuring locations on the surface of the Earth, but it is also used in applications such as tracking crustal motion.

The distance from a given point of interest to the center of Earth is called the geocentric distance, R = √(X² + Y² + Z²), which is a generalization of the geocentric radius, R₀, not restricted to points on the reference ellipsoid surface. The geocentric altitude is a type of altitude defined as the difference between the two aforementioned quantities: h′ = R − R₀;[3] it is not to be confused with the geodetic altitude.

Conversions between ECEF and geodetic coordinates (latitude and longitude) are discussed at geographic coordinate conversion.
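As a minimal illustration of that relationship, here is a sketch of the standard WGS 84 geodetic-to-ECEF conversion in Python. The constants come from the WGS 84 definition; for real work a tested library such as pyproj is preferable.

```python
import math

A = 6378137.0                # WGS 84 semi-major axis (m)
F = 1 / 298.257223563        # WGS 84 flattening
E2 = F * (2 - F)             # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Latitude/longitude in degrees, ellipsoidal height h in metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(-37.8136, 144.9631, 30.0))    # roughly Melbourne
```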

Unreal Engine

Unreal Engine 5.1 provides a comprehensive system for managing ECEF georeferenced data, including ‘flat’ and ’round’ planet projections:

https://docs.unrealengine.com/5.1/en-US/georeferencing-a-level-in-unreal-engine/

In addition to this, there are a variety of plugins available for managing georeferenced data.

For this research we are evaluating two initial approaches:

Cesium for Unreal: https://cesium.com/platform/cesium-for-unreal/

Landscaping Plugin:  https://landscaping.ludicdrive.com

More soon.

Introduction

Hi and welcome to our creative research journal for Playable Earth: Disrupting the status quo of Earth Observation Visualisation through immersion and interaction.

I’m going to keep this pretty informal – basically a blog with posts about our research progress, with links and references to relevant materials.

Chris and I have been meeting regularly since the beginning of the project in late November 2022, and will continue posting throughout the duration of the project until late June 2023. We’ve agreed to run the project on a 0.5 basis, allowing us to extend the residency from the usual 3-month full-time duration to ~6 months half-time. This fits in well with our various commitments to other projects and facilitates some cross-hybridisation.

Broadly speaking, the objective of our research is to explore novel ways of using a computer game engine – specifically Unreal Engine – in the deployment and visualisation of Earth Observation data. Our interest is in exploring various aspects of ‘immersive’ visualisation through XR (eXtended Reality). XR is a catch-all abbreviation for an array of technologies including augmented reality (AR) and virtual reality (VR).

An important caveat here is that we need to exemplify the utility of ‘immersive’ media. It’s a vague term that does not explicitly identify the utility of the form – if there is one, beyond obvious entertainment applications. My intuition is that there is, and that it is not terribly complicated – but we need to articulate knowledge about prospective users, human visual perception, semiotics and human-computer interaction as well. More about this in a later post.

Modern computer game engines provide a robust platform for this kind of research, subtended by the huge financial resources of the games industry and its investment in the technological infrastructure of software and compute resources, which are developing at a startling rate. Interaction with advanced machine learning systems – AI – is obvious and imperative. It makes sense to use game engines for sciartistic visualisation, as they provide many useful features consolidated within a systematic approach to programming, content development and human-computer interaction (HCI).

Some difficulties arise in interfacing the data formats and approaches commonly used by creative industries technologies (CIT) with the more specialised forms used in scientific visualisation and visual analytics.

Furthermore, game engines can enable the development of novel aesthetic approaches to visualisation that often fall outside the remit of strictly ‘scientific’ visualisation conventions. Thus a prospectively fertile intersection of art and science emerges – the two can become much more conversant modalities for eliciting knowledge and insight about the world.

As an artist who has worked extensively with Unreal Engine on a number of personal projects, and as someone with a PhD in computational geophysics focused upon scientific data visualisation, the chance to work with Chris and the SmartSat CRC at Swinburne is opportune and exciting – truly a way to draw together these strands of art and science in some blue-sky research.

We also express our gratitude to the Australian Network for Art and Technology (ANAT), CEO Melissa Delaney and her staff for facilitating this novel approach to supporting this research – we hope it paves the way for future initiatives of this type.