Industrial metaverse

How the Nvidia Omniverse makes AI factories a reality

How does the Nvidia Omniverse accelerate AI factories? The digital ecosystem is transforming planning, production, and robotics in record time. An interview with Timo Kistner, EMEA Industry Business Lead - Manufacturing & Industrial.

The Nvidia Omniverse transforms factories into learning, optimized AI systems. (Symbolic image)

Timo Kistner at the Industrial Metaverse Conference

Dr. Timo Kistner, EMEA Industry Business Lead – Manufacturing & Industrial, Nvidia.

Timo Kistner specializes in driving Nvidia's business growth in the manufacturing and industrial sector. With a focus on areas such as semiconductors, electronics, mechanical engineering, aerospace, chemicals, and transport and logistics, he maintains strategic partnerships and builds trusted relationships with key customers.

His role includes not only managing Nvidia's strategic customer relationships but also maintaining a partner ecosystem and supporting key technology and business trends within the industry.

At the Industrial Metaverse Conference on February 10 and 11, Timo Kistner will speak on the topic "Agentic, Physics, and Physical AI in the industrial environment." More information and tickets are available here!

Nvidia refers to Omniverse as an "operating system" for physical AI and AI factories. Why is that, and what specific business benefits can manufacturers expect within three to five years if they adopt this ecosystem today?

Timo Kistner: Nvidia Omniverse is best understood as a collection of open libraries that allow developers to create OpenUSD-based applications and workflows. This distinction is crucial: instead of committing manufacturers to a single new tool, Omniverse enables them to connect their existing 3D tools and data from CAD, PLM, simulation, and automation systems into a unified, physically accurate pipeline. In this environment, engineering, operations, and suppliers can collaboratively design, simulate, and optimize before deploying physical hardware.
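To make the composition idea concrete, here is a minimal sketch using the open-source OpenUSD Python bindings (the usd-core package). The file paths and prim names are hypothetical, and a production pipeline built on Omniverse libraries would be considerably richer; the point is only that independently maintained assets are referenced, not copied, into one shared stage:

```python
# Minimal OpenUSD composition sketch: a top-level stage references assets
# exported from different tools without duplicating their data.
# Requires: pip install usd-core
from pxr import Usd, UsdGeom, Gf

# Create the shared factory stage that all teams collaborate on.
stage = Usd.Stage.CreateNew("factory_line.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

# Reference independently maintained assets (paths are hypothetical).
building = stage.DefinePrim("/Factory/Building", "Xform")
building.GetReferences().AddReference("./cad_exports/hall_3.usd")

robot = stage.DefinePrim("/Factory/Cell01/Robot", "Xform")
robot.GetReferences().AddReference("./vendor_assets/arm_v2.usd")

# Non-destructive override: reposition the robot cell in the shared stage
# without touching the vendor's source file.
UsdGeom.Xformable(robot).AddTranslateOp().Set(Gf.Vec3d(12.0, 4.5, 0.0))

stage.GetRootLayer().Save()
```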

By first validating automation, robotics, and AI workflows virtually in this digital factory, manufacturers can identify and resolve most issues in simulation and then roll out to the physical factory with a high degree of confidence. This approach leads to measurable benefits: shorter commissioning times, fewer change cycles, faster introduction of new products, and higher overall equipment effectiveness (OEE), enabled by AI-based optimization of throughput, energy consumption, and maintenance strategies.

Join the Industrial Metaverse Conference!

The Industrial Metaverse Conference explores the latest developments and innovations at the intersection of industry and virtual worlds. 

The conference brings together leading experts, technologists, and business strategists to share insights on the use of metaverse technologies in manufacturing, automation, and digital transformation.

The next conference is on February 10 and 11, 2026, in Munich.

More information and tickets: Industrial Metaverse Conference.

Instead of relying on costly trial and error in manufacturing, companies can use applications developed with Omniverse libraries to play through "what-if" scenarios on the digital twin, increase capacity, and reduce cost per unit - without additional investments in buildings or facilities.
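As a purely illustrative example (not an Omniverse API), such a "what-if" question boils down to evaluating scenarios against a model of the line. In practice, that model is the calibrated digital twin rather than the toy formula below; every number here is invented:

```python
# Toy "what-if" model: higher line speed raises throughput, but scrap
# grows nonlinearly with speed. All parameters are made-up illustrations.
def cost_per_unit(line_speed_pct, base_rate=120.0, base_scrap=0.02,
                  fixed_cost_per_h=1800.0, variable_cost_per_unit=3.5):
    throughput = base_rate * line_speed_pct / 100.0          # units per hour
    scrap_rate = base_scrap * (line_speed_pct / 100.0) ** 2  # nonlinear penalty
    good_units = throughput * (1.0 - scrap_rate)
    return (fixed_cost_per_h + throughput * variable_cost_per_unit) / good_units

# Compare scenarios virtually before touching the physical line.
for speed in (90, 100, 110, 120):
    print(f"line speed {speed:>3}%: {cost_per_unit(speed):6.2f} per good unit")
```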

Nvidia's "three-computer" model - dgx for training, ovx and omniverse for simulation, and jetson/agx for edge deployment - covers the entire lifecycle of physical ai. What are the best practices to unlock the full potential, and what steps are required for a successful project?

Kistner: The "three-computer" model describes how nvidia platforms are mapped to the physical ai lifecycle:

  • The first computer - DGX systems for training and post-training large models.
  • The second computer - RTX PRO servers for large-scale simulation, synthetic data generation, and digital twin applications.
  • The third computer - Jetson or IGX/AGX systems for low-latency, high-throughput onboard computation in machines and robots.

Together, they form an architecture in which data flows from the cloud to the edge, models are trained and validated centrally, and optimized behaviors are fed back into the physical world. The three phases of physical AI projects - training, simulation, and deployment at the edge - work best when treated as a continuous cycle rather than separate measures, as the sketch below illustrates.
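The cycle can be summarized in a few lines of schematic Python. Every function below is a stand-in stub, not an Nvidia API; only the control flow - train, gate in simulation, deploy, collect - reflects the model described above:

```python
import random

def train(policy, data):             # stand-in for DGX-scale (post-)training
    return min(policy + 0.05 * len(data), 0.99)

def simulate(policy, n_scenarios):   # stand-in for digital twin validation
    passed = sum(random.random() < policy for _ in range(n_scenarios))
    return passed / n_scenarios

def deploy_and_collect(policy):      # stand-in for edge rollout and field logging
    return [f"field_log_{i}" for i in range(5)]

policy, field_data = 0.80, ["seed_dataset"]
for iteration in range(5):
    policy = train(policy, field_data)               # 1) training
    pass_rate = simulate(policy, n_scenarios=1_000)  # 2) simulation gate
    if pass_rate < 0.90:
        continue                                     # iterate virtually, not on the shop floor
    field_data = deploy_and_collect(policy)          # 3) edge deployment
    print(f"iteration {iteration}: pass rate {pass_rate:.1%}, new logs: {len(field_data)}")
```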

In the training phase, teams use large amounts of real-world data, such as camera streams, sensor logs, and failure cases, to teach AI models how to see, understand, and act in complex environments. For example, warehouse robots learn to recognize pallets more reliably, or delivery robots better assess when it is safe to cross a street.

In the simulation phase, these models are tested and refined in physically accurate digital twins of real environments, such as factories, hospitals, or neighborhoods. Thousands of "what-if" scenarios can be safely played out there, like emergency stops in a crowded hallway or heavy rain on a construction site. Only models that function reliably under a variety of conditions are approved for the next phase.
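A hedged sketch of what "thousands of what-if scenarios" means in code: enumerate variations, run each in simulation many times, and only promote models that clear a strict pass rate. The scenario names and the stand-in simulator below are invented for illustration and are not the Isaac Lab API:

```python
import itertools
import random

LIGHTING = ["bright", "dim", "backlit"]
WEATHER  = ["dry", "heavy_rain"]
EVENTS   = ["nominal", "emergency_stop", "crowded_hallway"]

def run_in_sim(lighting, weather, event):
    """Stand-in for one physically accurate simulation run; True = handled safely."""
    return random.random() > 0.02

results = [run_in_sim(*combo)
           for combo in itertools.product(LIGHTING, WEATHER, EVENTS)
           for _ in range(100)]       # repeat each combination with varied seeds
pass_rate = sum(results) / len(results)
print(f"{pass_rate:.1%} of {len(results)} runs passed")
if pass_rate >= 0.99:
    print("approved for the next phase")
else:
    print("back to training - not ready for deployment")
```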

In the deployment phase at the edge, these proven models are rolled out to robots and machines in the field, where they run alongside a simple, highly reliable safety logic. This ensures that basic protective functions like collision avoidance remain predictable even when the overarching intelligence is updated.
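One way to read "runs alongside a simple, highly reliable safety logic" is the filter pattern sketched below: the learned policy proposes a command, and a small deterministic layer can always override it. Names and thresholds are illustrative assumptions, not a real robot API:

```python
from dataclasses import dataclass

@dataclass
class Command:
    velocity: float  # m/s, forward speed requested for the robot base

MIN_CLEARANCE_M = 0.5  # hard safety threshold, independent of the AI model

def safety_filter(proposed: Command, nearest_obstacle_m: float) -> Command:
    """Deterministic guard: never drive forward into an obstacle."""
    if nearest_obstacle_m < MIN_CLEARANCE_M and proposed.velocity > 0:
        return Command(velocity=0.0)  # hard stop overrides the policy
    return proposed

# The learned policy can be updated freely; the guard's behavior never changes.
policy_output = Command(velocity=1.2)  # e.g. proposed by a learned model
print(safety_filter(policy_output, nearest_obstacle_m=0.3))  # Command(velocity=0.0)
```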

By continuously collecting new data from deployed systems, feeding it back into the training environment, re-running demanding simulations, and carefully updating the software on the machines, companies can steadily improve accuracy, safety, and autonomy in a controlled and repeatable manner.

What essential data, connectivity, and workflow foundations must factories have already established to truly benefit from physically accurate digital twins with Omniverse Blueprints and integrations in the next 12 to 24 months?

Kistner: Blueprints combine Nvidia's libraries, models, and frameworks into reference workflows. Developers can either adopt the entire workflow or select parts of it for their own pipeline. Our blueprints support developers in creating digital twins, generating synthetic data, or validating robotic systems.

Production facilities first need reliable, well-structured data from machines, production systems, and energy measurement devices so that the digital twin can map in real time what is actually happening and simulate "what-if" scenarios. Omniverse libraries help developers merge machine, quality, and maintenance data into a unified view at scale. This allows teams to analyze how changes in line speed would affect scrap and downtime before intervening in real manufacturing.
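As a hedged illustration of merging those data sources (column names, values, and the correlation step are invented; real pipelines would pull from historians, MES, and maintenance systems at far larger scale):

```python
import pandas as pd

# Hypothetical hourly samples from two separate source systems.
machine = pd.DataFrame({
    "ts": pd.date_range("2025-01-01", periods=4, freq="h"),
    "line_speed_pct": [90, 100, 110, 120],
})
quality = pd.DataFrame({
    "ts": pd.date_range("2025-01-01", periods=4, freq="h"),
    "scrap_rate": [0.015, 0.020, 0.031, 0.052],
})

# One unified view keyed on time - the basis for "what-if" analysis.
merged = machine.merge(quality, on="ts")
print(merged)
print(f"speed/scrap correlation: "
      f"{merged['line_speed_pct'].corr(merged['scrap_rate']):.3f}")
```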

Current 3D or CAD models of buildings, production lines, and central machines must be available and maintained throughout the entire project lifecycle - from planning to operation. Factory layouts, plant models, and automation assets are integrated into a common OpenUSD workspace, allowing teams to virtually test new layouts, cooling concepts, or camera positions before budgets are approved.

Finally, the digital twin delivers its greatest benefit as a shared work environment where operations, engineering, IT, and security teams come together to test ideas and make decisions. In examples from AI factories and industrial digital twins, teams jointly examine how introducing a new production line or adjusting a process would affect throughput, energy consumption, and occupational safety - all in a consolidated view. This enables faster, better-informed decisions.

Platforms like Nvidia Cosmos and Isaac Lab now close the loop from generating synthetic data to deployment in the real world. How does this end-to-end process concretely work in practice for robotics tasks like palletizing or inspection - from data collection to deployment?

Kistner: Nvidia Cosmos, our world foundation model platform, and Nvidia Isaac, an open development platform for robotics, complement each other ideally. Isaac includes Isaac Lab, an environment for reinforcement learning and for training control strategies in photorealistic, physically accurate simulations.

Developers can use Cosmos models to generate the synthetic data needed for post-training robot policy models. At the same time, they can teach robots new skills in Isaac Lab, such as palletizing or navigating complex factory environments - initially entirely virtually. Together, these platforms enable robots and autonomous machines to learn from both real factory data and synthetically generated scenarios before being deployed on the shop floor.

In practice, the closed loop for tasks such as palletizing, machine loading, or inspection begins with data collection - for example, via cameras or lidar sensors, using Nvidia Jetson or similar edge AI systems for real-time perception. This data is used to build or refine a digital twin and to generate diverse synthetic scenes in Isaac Lab.

There, control strategies are trained and intensively tested across numerous product variants, lighting conditions, and edge cases. The validated models are then rolled back out to the robots via the edge platform, monitored during operation, and regularly retrained when layouts or operating conditions change.
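The "monitored during operation, regularly retrained" step can be pictured as a simple drift check like the one below; the thresholds and confidence values are invented for illustration and are not part of any Nvidia product:

```python
from statistics import mean

VALIDATION_CONFIDENCE = 0.94  # perception quality measured at simulation sign-off
DRIFT_TOLERANCE = 0.05        # acceptable gap before intervening

def needs_retraining(recent_confidences: list[float]) -> bool:
    """Flag the model when live performance drifts below the validated level."""
    return mean(recent_confidences) < VALIDATION_CONFIDENCE - DRIFT_TOLERANCE

# e.g. pallet-detection confidences observed after a layout change on the line:
print(needs_retraining([0.91, 0.85, 0.88, 0.87]))  # True -> trigger the loop again
```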

What specific trends and hurdles do you see for German manufacturers - especially medium-sized companies - in the adoption of physical AI and digital twin technologies by 2026? How crucial will the local ecosystem of industry partners, suppliers, universities, and political decision-makers in Germany be in turning pilot projects into scaled AI factories?

Kistner: Physical AI refers to artificial intelligence that perceives, infers, and acts in the physical world, enabling applications such as industrial robotics, autonomous machines, and smart factories. In Germany, this aligns with a long tradition of Industry 4.0, where cyber-physical systems, connectivity, and automation are already central components of the manufacturing base.

Looking towards 2026, many German manufacturers - especially medium-sized suppliers - face a dual challenge: they must scale AI-driven automation, digital twins, and sovereign AI infrastructures to remain globally competitive. At the same time, they must address the shortage of skilled workers, complex integration into existing facilities (brownfield), and demanding security and regulatory requirements. This drives companies towards "simulation-first approaches" and AI factories, where product design, factory planning, and operations are increasingly model-driven, rather than relying on trial and error on the shop floor.

The local ecosystem becomes a strategic advantage. OEMs, ISVs like Siemens, engineering experts like EDAG and EXP, German automotive and mobility companies, including BMW, Mercedes-Benz, and Schaeffler, as well as robotics firms like Agile Robots, Kuka, idealworks, Neura, and Wandelbots are already working with Nvidia's open technologies to develop digital twins, AI-enabled robots, and domain-specific applications tailored to German manufacturing standards. Universities and applied research institutions provide expertise and talent, while policymakers create frameworks for AI security, data protection, and sovereignty, giving the industry clear regulatory foundations.

A central component of this ecosystem is the Industrial AI Cloud from Nvidia and Deutsche Telekom - a sovereign, Germany-based "AI factory" specifically designed for industrial workloads. It is located in German data centers and is expected to scale to around 10,000 Nvidia GPUs, including DGX B200 systems and RTX PRO servers. Manufacturers can run CUDA-X, RTX, Omniverse libraries, and partner applications from companies like Siemens, Ansys, Cadence, and SAP on sovereign infrastructure.

Early access is planned for 2026. For German companies, especially medium-sized businesses, this means they can train, test, and deploy AI models, digital twins, and robotics applications close to their data - in compliance with European regulations and without building their own hyperscale infrastructure. In this way, isolated pilot projects become scalable AI factories "Made in Germany, for Germany and Europe."

Source: Nvidia

FAQ: Physical AI, digital twins, and Nvidia Omniverse

1. Why is Nvidia Omniverse referred to as the "operating system" for physical AI and AI factories?

Nvidia Omniverse is not a single piece of software but a collection of open libraries for creating OpenUSD-based applications and workflows. This allows existing 3D tools and data from CAD, PLM, simulation, and automation systems to be connected in a physically accurate, shared pipeline. Engineering, operations, and suppliers can collaboratively design, simulate, and optimize virtually before deploying physical hardware.

2. What measurable business benefits does Omniverse offer manufacturers in the next three to five years?

By virtually validating automation, robotics, and AI workflows, issues can be identified and resolved early in the simulation. This leads to shorter commissioning times, fewer change cycles, faster product launches, and higher overall equipment effectiveness. Additionally, manufacturers can use digital twins to explore "what-if" scenarios, increase capacities, and reduce costs per unit without additional investments in buildings or facilities.

3. How does Nvidia's "three-computer" model work in the lifecycle of physical AI?

The model includes DGX systems for training large models, RTX-based servers for simulation, synthetic data, and digital twins, and Jetson or IGX/AGX systems for edge deployment in machines and robots. These three phases - training, simulation, and deployment - form a continuous cycle where data from the real world flows back into training, models are validated in simulations, and then safely deployed in the physical environment.

4. What prerequisites must factories meet to benefit from digital twins with Omniverse?

Reliable, well-structured data from machines, production systems, and energy measurements is required, as well as current 3D and CAD models of buildings, facilities, and production lines. This data is integrated into a shared OpenUSD workspace. The greatest benefit arises when the digital twin is used as a shared work environment for engineering, operations, IT, and security teams to make informed decisions virtually.

5. What trends and hurdles do German manufacturers face until 2026?

German manufacturers face the challenge of scaling AI-driven automation, digital twins, and sovereign AI infrastructures while addressing skills shortages, brownfield integration, and security and regulatory requirements. The local ecosystem of industry partners, research institutions, and political actors becomes a strategic advantage. A central element is the Industrial AI Cloud by Nvidia and Deutsche Telekom, which aims to provide a scalable, sovereign AI infrastructure for industrial applications in Germany from 2026.
