Category Archives: CloudXR

NVIDIA Omniverse or Unreal Engine 5?

Throughout this residency I’ve been working with two development platforms to try to evaluate which is the ‘best’ (easiest? optimal?) workflow for the ideas we’ve been exploring. Even at this penultimate stage, things are unclear, so I thought I’d unpack some of the similarities, differences and interactions between the approaches.

Here are two takes (after a few unsatisfactory tries) from ChatGPT-4 and Google Bard, using the prompt:

“You are an expert computer artist and programmer. Explain what are the differences and similarities between NVIDIA Omniverse platform and Unreal Engine 5? Create a list of 5 (or so) dot points that clearly indicate when each approach would be the optimal one.”

They’re quite useful – generic (as expected) – and indeterminate because there is so much replication, cross-over and interaction of features.

<aside>reading these responses, I suppose this inevitably speeds the end of some national, regional, alternative and idiomatic versions of English – “ise/ize”, “colour/color”, “mould/mold” etc. </aside>

It seems to come down to a few points that I’ll detail at the end.


GPT4:

As a computer artist and programmer, you are looking at a blend of aesthetics, functionality, and the ability to customize and extend the features of a platform. As of my knowledge cutoff in September 2021, both Unreal Engine 5 and NVIDIA Omniverse have features that will appeal to artists and programmers, albeit with different strengths.

**Unreal Engine 5:**

1. **Game Development**: Unreal Engine 5 is built primarily for the development of games, with a wealth of tools for creating, scripting, and animating game worlds.

2. **Blueprint and C++**: UE5 provides both visual scripting through Blueprint and full source code access with C++, giving you a range of options for scripting and customizing your projects.

3. **Nanite and Lumen**: Nanite allows you to import high-fidelity, cinema-quality art assets directly into the engine, while Lumen provides dynamic, real-time global illumination.

4. **Multi-platform Deployment**: UE5 supports a wide range of platforms, making it ideal for game and application development across devices.

5. **Community and Resources**: The Unreal Engine has a large and active community, which means extensive resources, tutorials, and support.

**Optimal for**: Artists and programmers focused on game development or creating other interactive, real-time experiences. If you need access to source code or want a large community for support, Unreal Engine is a strong choice.

**NVIDIA Omniverse:**

1. **Collaboration and Interoperability**: Omniverse is designed as a platform for collaboration, allowing multiple users to work together in real-time across different applications, thanks to its use of USD (Universal Scene Description).

2. **Physically Accurate Rendering**: Omniverse provides highly accurate, real-time ray tracing rendering with the NVIDIA RTX platform, which is ideal for projects requiring high-quality visualization.

3. **AI Integration**: Omniverse leverages NVIDIA’s AI technologies, offering unique possibilities for AI-enhanced simulation, animation, and more.

4. **Infrastructure Management**: Omniverse can be used to visualize and manage data center infrastructure, an application unique to NVIDIA’s platform.

5. **Integration with other NVIDIA tools**: Omniverse is designed to work seamlessly with other NVIDIA tools, such as Deep Learning Super Sampling (DLSS), enhancing the platform’s capabilities.

**Optimal for**: Collaborative projects requiring real-time contributions across different applications, projects requiring high-fidelity, physically accurate rendering, and tasks involving AI-enhanced graphics or simulation.

Please keep in mind that both platforms continue to evolve and have likely added new features or improvements since my last training data in September 2021.


BARD:

Sure. Here are the similarities and differences between NVIDIA Omniverse platform and Unreal Engine 5:

Similarities

  • Both are powerful 3D creation tools.
  • Both can be used to create realistic and immersive 3D experiences.
  • Both support a wide range of features, including 3D modeling, animation, rendering, and simulation.
  • Both are used by a wide range of professionals, including game developers, architects, and filmmakers.

Differences

  • NVIDIA Omniverse platform is a more collaborative platform, designed to help teams work together on 3D projects.
  • Unreal Engine 5 is a more game-oriented platform, designed to create high-performance, real-time 3D experiences.
  • NVIDIA Omniverse platform is currently only available for Windows, while Unreal Engine 5 is available for Windows, macOS, and Linux.

Here are 5 dot points that clearly indicate when each approach would be the optimal one:

  • NVIDIA Omniverse platform is the optimal choice for:
    • Collaborative 3D projects: The platform’s real-time collaboration features make it easy for teams to work together on 3D projects, regardless of their location.
    • Large-scale projects: The platform’s scalability makes it well-suited for large-scale projects that require multiple users and high-performance computing.
    • Complex projects: The platform’s powerful tools and features make it well-suited for complex projects that require a high level of detail and realism.
  • Unreal Engine 5 is the optimal choice for:
    • Game development: The platform’s real-time rendering and performance make it well-suited for developing high-performance, visually stunning games.
    • Virtual reality and augmented reality: The platform’s support for VR and AR makes it well-suited for developing immersive experiences.
    • Filmmaking: The platform’s powerful tools and features make it well-suited for creating realistic and immersive visual effects for films.

I hope this helps!


It does indeed – thanks Machine Intelligence! 🙂

The main differences are subtle and strategic, and perhaps more effectively understood in the context of industrial competition and the territorialisation of artistic and developer activity – driven by business. It’s basically technological evolution driven by capitalism, not social good or woo-woo philosophical outcomes. Was it ever not thus? Well, yes – open source provides an important alternative model, but it has significant infrastructural constraints.

Omniverse provides – at a basic level – an incredible toolset that enables the development of bespoke applications for realtime visualisation and simulation. The workflow is quite different to UE, but it provides access to all its features in Python (above) and C++ (below) – and a visual graph language that is easy to use, including standardised UI features. So it is nice and fast, even if it’s a bit crashy at this stage. It’s pretty compelling and I am interested to learn more, especially as it develops in the direction of UIs for XR streaming across multiple platforms and the integration of AI and physics simulations (e.g., via ParaView). It’s very modular – reminiscent of OpenDoc – but not open! Forgotten dreams of a better world. Really, Omniverse could be access-controlled open source, like Unreal is. And much more free (not only in $$, but in principle).
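As a rough illustration of what that Python workflow feels like, here’s a minimal sketch of a Kit extension using omni.ext and omni.ui – the class name, window title and button are my own placeholders, not from any shipped sample:

```python
import omni.ext
import omni.ui as ui


class SimplePanelExtension(omni.ext.IExt):
    """Minimal Kit extension: builds a small window with standardised UI widgets."""

    def on_startup(self, ext_id):
        # Create a window using Omniverse's standard UI toolkit
        self._window = ui.Window("Simple Panel", width=300, height=120)
        with self._window.frame:
            with ui.VStack(spacing=5):
                ui.Label("Hello from a Kit extension")
                ui.Button("Print to console",
                          clicked_fn=lambda: print("Button clicked"))

    def on_shutdown(self):
        # Release the window when the extension is unloaded
        self._window = None
```

In practice this gets wrapped up with an extension.toml and loaded by the Kit extension manager, but the same pattern – a class with on_startup/on_shutdown plus omni.ui widgets – scales up to the kind of bespoke visualisation tools mentioned above.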

Of course, all these systems ~could~ be like a better, improved version of OpenDoc. But $$$ – it’s in their interests, currently, to be non-interoperable.

Unreal Engine is more mature – and much more stable, in my experience. But far more monolithic – perhaps it could be modularised. There is an emergent UI Library, but UE UI (UMG) seems counter-intuitive to me – it’s complex to get your head around (I understand why it is as it is, because of C++ legacy, but feel it needn’t be this way – UI could be an intuitive MVC plastic layer, not casting, binding and widgets).

Cesium runs in both Omniverse and UE – is it possible to create an Omniverse USD scene using Cesium and pipe that into UE using an Omniverse connector, or is it best just to use the UE Cesium plugin?

An interesting Omniverse/UE co-simulation here.

It would be nice if UE had ~easy~ realtime server capabilities like Omniverse, without all the cross-platform asset issues of version control with Git, LFS, Bitbucket, GitHub, Plastic, Perforce, Subversion etc. This is something I need to investigate to find an optimal solution for our use-case. Of course, everything is complicated in a non-commercial research context.

A useful talk comparing the two approaches can be seen here (registration may be required):

https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s42162/

Omer Shapira, a Senior Omniverse Engineer from NVIDIA, discusses: “Learn about designing and programming spatial computing features in Omniverse. We’ll discuss Omniverse’s real-time ray-traced extended reality (XR) renderer, Omniverse’s Autograph system for visual programming, and show how customers are using Omniverse’s XR tools to do everything together — from design reviews to hangouts.”

A useful diagram from the PDF slides:

It provides an insightful abstract overview of the relationships between data sources, pipelines and developer/artist/user activity – and, of course, computers. Somewhere there must be cloudy-bits and AI that will soon complicate the picture.

I must try and find the abstract view from an Unreal Engine engineer somewhere – I’m sure the diagram would be different. Nevertheless, the territorial play is the same – a stack from hardware engineering, through software engineering, to ‘experiences’ is basically the monopoly. It comes down to engineering choices – the physical constraints, scientific research, economics and politics of engineering. That defines everything. But it is driven by human desire, a seeming paradox, top-down but bottom-up.

USD (Universal Scene Description) seems to be on track to be the lingua franca of the metaverse/omniverse/whatevs. A universal file format is an incredibly important idea – not only because there are literally thousands of incompatible, difficult to translate formats, but because of obsolescence. Like human language, digital ‘file language’ evolves and changes; it is currently much more fragile. A significant solar storm might wipe out AI and all digital knowledge. Biology might not be fried in the deep ocean. We’ve had a few billion years of experience.
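To make the attraction of USD concrete, here is a minimal sketch using the pxr Python bindings (bundled with Omniverse, or installable standalone as usd-core) – the file and prim names are just placeholders:

```python
from pxr import Usd, UsdGeom

# Author a tiny scene in a human-readable .usda file
stage = Usd.Stage.CreateNew("hello_world.usda")
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()

# Re-open and inspect it – any USD-aware tool can do the same
stage = Usd.Stage.Open("hello_world.usda")
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```

Because the .usda layer is plain text and the schemas are openly documented, the same file can be opened in Omniverse, Unreal’s USD importer, Houdini, Blender and so on – which is precisely what makes it a plausible lingua franca.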

Nevertheless, the Hyperscale/Game Engine counterpoint is instructive. From my research into Cloud XR – whether via Google Immersive Stream, Azure Cloud XR, Amazon etc. – the problem at scale is simply that you need lots and lots of individual virtual machines to run instances of a ‘world’, and lots of low-latency network traffic to synchronise apparent time (with a sprinkling of predictive AI for the ~20ms perceptual lag barrier).

Strangely, this sounds like an ecosystem.

This is clearly a problem that Omniverse tries to address, but it won’t work until it is much more open. UE may beat it via its own omniverse – the expanded Fortnite platform – if they can colonise the hardware and mindware at speed. Or they may collaborate or parasitise each other – hard to tell. Are the metaphors appropriate? There are, of course, many other potential players in this space, even, I expect, disruptors or disruptive technologies that have yet to emerge from someone’s loungeroom.

No doubt agentive AGI systems would approach this in entirely different ways, given their own interests.

 

AI XR in the Cloud

Abstract XR. SD Image by Peter Morse.

Over the last couple of weeks I’ve spent some time virtually attending Nvidia’s GTC Developer Conference, which has been very illuminating. The main take-aways for me have been that, now that we’re in the Age of AI, it’s time to really start working with cloud services – and that they’re actually becoming affordable for individuals to use.

Of course, like most computer users, I use cloud services every day – most consumer devices already use them – like Netflix, iCloud, AppleTV, Google Drive, Cloudstor, social media etc. These are kind of passive, invisible services that one uses as part of entertainment, information or storage systems. More complicated systems for developers include things like Google Colab, Amazon Web Services (AWS), Microsoft Azure and Nvidia Omniverse, amongst others.

In Australian science programmes there are National Computing Infrastructure (NCI) services such as AuScope Virtual Research Environments, Digital Earth Australia (DEA), Integrated Marine Observing System (IMOS), Australian Research Data Commons (ARDC), and even interesting history and culture applications built atop these such as the Time Layered Cultural Map of Australia (TLCMap). Of course, there are dozens more scattered around various organisations and websites – it’s all quite difficult to discover amidst the acronyms, let alone keep track of, so any list will always be partial and incomplete.

So this is where AI comes in in a strong way – providing the ability to ingest and summarise prodigious volumes of data and information – and hallucinate rubbish – and this is clearly going to be the way of the future. The AI race is on – here are some interesting (but probably already dated) insights from the AI Index by the Stanford Institute for Human-Centered Artificial Intelligence that are worth absorbing:

  • Industry has taken over AI development from academia since 2014.
  • Performance saturation on traditional benchmarks has become a problem.
  • AI can both harm and help the environment, but new models show promise for energy optimization.
  • AI is accelerating scientific progress in various fields.
  • Incidents related to ethical misuse of AI are on the rise.
  • Demand for AI-related skills is increasing across various sectors in the US (and presumably globally).
  • Private investment in AI has decreased for the first time in the last decade (but after an astronomical rise in that decade).
  • Proportion of companies adopting AI has plateaued, but those who have adopted continue to pull ahead.
  • Policymaker interest in AI is increasing globally.
  • Chinese citizens are the most positive about AI products and services, while Americans are among the least positive.

Nevertheless, it is clear to me that the so-called ‘AI pause‘ is not going to happen – as Toby Walsh observes:

“Why? There’s no hope in hell that companies are going to stop working on AI models voluntarily. There’s too much money at stake. And there’s also no hope in hell that countries are going to impose a moratorium to prevent companies from working on AI models. There’s no historical precedent for such geopolitical coordination.

The letter’s call for action is thus hopelessly unrealistic. And the reasons it gives for this pause are hopelessly misguided. We are not on the cusp of building artificial general intelligence, or AGI, the machine intelligence that would match or exceed human intelligence and threaten human society. Contrary to the letter’s claims, our current AI models are not going to “outnumber, outsmart, obsolete and replace us” any time soon.

In fact, it is their lack of intelligence that should worry us”

The upshot of this, for the work Chris and I are doing, is clearly that we need to embrace AI-in-XR in the development of novel modalities for Earth Observation using Mixed Reality. It’s simply not going to be very compelling doing something as simple as, for example, visualising data in an XR application (such as an AR phone app) that overlays the scene. I’ve now seen many clunky examples of this, and none of them seem especially compelling, useful or widely adopted. The problem really comes down to the graphics capabilities of mobile devices, as this revealing graph demonstrates:

(screenshot from Developing XR Experiences for Every Device (Presented by Google Cloud))

A potential solution arrives with XR ‘in the cloud’ – and it is evident from this year’s GTC that all the big companies are making a big play in this space and a lot of infrastructure development is going on – billions of dollars of investment. And it’s not just ‘XR’ but ‘XR with AI’ and high-fidelity, low-latency pixel-streaming. So, my objective is to ride on the coat-tails of this in a low-budget arty-sciencey way, and make the most of the resources that are now becoming available for free (or low-cost) as these huge industries attempt to on-board developers and explore the design-space of applications.

As you might imagine, it has been frustratingly difficult to find documentation and examples of how to go about doing this, as it is all so new. But this is what you expect with emergent and cutting-edge technologies (and almost everyone trying to make a buck off them) – and it’s thankfully something I am used to from my own practice and research: chaining together systems and workflows in the pursuit of novel outcomes.

It’s been a lot to absorb over the last few weeks, but I’m now at the stage where I can begin implementing an AI agent (using the OpenAI API) that one can query with a voice interface (and yes, it talks back), running within a cloud-hosted XR application suitable for e.g. VR HMDs, AR mobile devices, and mixed-reality devices like the HoloLens 2 (I wish I had one!). It’s just a sketch at this stage, but I can see the way forward – if/when I can get access to the GPT-4 API and plugin architecture – to creating a kind of Earth Observation ‘Oracle’ and a new modality for envisioning and exploring satellite data in XR.

Currently I’m using OpenAI GPT-3.5-turbo and playing around with a local install of GPT4-x_Alpaca and AutoGPT, and local pixel-streaming XR. The next step is to move this over to Azure CloudXR and Azure Cognitive Services. Of course, it’s all much more complicated than it sounds, so I expect a lot of hiccups along the way, but nothing insurmountable.
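For the curious, the current sketch is roughly this shape – a simplified sketch assuming the openai 0.x Python client, the SpeechRecognition package for voice input and pyttsx3 for the spoken reply (the ‘Earth observation assistant’ system prompt is just a placeholder, and the real thing is wired into the XR application rather than a console loop):

```python
import openai                     # pip install openai  (0.x client)
import speech_recognition as sr   # pip install SpeechRecognition (needs pyaudio for the mic)
import pyttsx3                    # pip install pyttsx3

openai.api_key = "sk-..."         # your own API key

recognizer = sr.Recognizer()
tts = pyttsx3.init()
history = [{"role": "system",
            "content": "You are an assistant for exploring Earth observation data."}]

def listen() -> str:
    """Capture one spoken question from the microphone and transcribe it."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def ask(question: str) -> str:
    """Send the question (plus conversation history) to GPT-3.5-turbo."""
    history.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    answer = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    question = listen()
    answer = ask(question)
    print("Q:", question, "\nA:", answer)
    tts.say(answer)               # speak the reply back to the user
    tts.runAndWait()
```

Swapping the speech and TTS pieces for Azure Cognitive Services, and hosting the whole thing alongside the pixel-streamed XR session, is the next step mentioned above.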

I’ll post some technical details and (hopefully) some screencaps in a future post.