Generating useful training data for AI can be tedious and resource-heavy: relevant data has to be acquired, preselected and labeled. Our synthetic data pipeline solves this by combining automated scenario generation with physics-based sensor simulation, all tailored to our customers' use cases and data structures.
Our data factory is a fully automated process for generating synthetic data. Once parametrized to match a customer's use case, it can produce large quantities of ultra-realistic training data for autonomous systems.
Are you interested in using synthetic data for your MLOps?
To match real-world data in both content and quality, we host an extensive library of high-quality models to populate our metaworlds:
Depending on the project, we integrate our customers' own 3D models and scene descriptions. Map and road-network data can be easily integrated. Read more about custom scene virtualization in our digital twin section.
Our automated scene generation assembles the input data into defined scenarios. This step is shaped by a rich set of tools and parameters:
Procedural algorithms and specialized AI models enable ultra-realistic, diverse and variable environments
Parametrizable lighting, environment conditions, terrain profiles, object density, trajectories, animations and materials
Variations across consecutive image sequences or on a frame-by-frame level
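To make the parametrization concrete, here is a minimal sketch of what a scenario configuration and its randomized variations could look like. All names and value ranges are hypothetical and purely illustrative; they do not reflect our actual pipeline API.

```python
from dataclasses import dataclass
from random import Random

# Hypothetical scenario parameters; names and ranges are illustrative only.
@dataclass
class ScenarioParams:
    sun_elevation_deg: float = 45.0   # lighting
    fog_density: float = 0.0          # environment condition
    terrain_roughness: float = 0.2    # terrain profile
    object_density: float = 0.5      # relative density of placed objects
    frame_count: int = 100            # length of the image sequence

def sample_variation(base: ScenarioParams, seed: int) -> ScenarioParams:
    """Draw a randomized variant of a base scenario for sequence-level diversity."""
    rng = Random(seed)  # seeded for reproducible datasets
    return ScenarioParams(
        sun_elevation_deg=rng.uniform(5.0, 85.0),
        fog_density=rng.uniform(0.0, 0.3),
        terrain_roughness=rng.uniform(0.0, 1.0),
        object_density=base.object_density * rng.uniform(0.5, 1.5),
        frame_count=base.frame_count,
    )

# Generate three varied scenarios from one base configuration.
variants = [sample_variation(ScenarioParams(), seed=i) for i in range(3)]
```

In the same spirit, per-frame variation could reseed and resample a subset of these fields on every frame rather than once per sequence.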
Physics-based camera, lidar and radar models, parametrized to mimic our customers' sensors, capture the created metaworlds. Our sensor models build on the latest advances in ray tracing and are partially open source.
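As a small illustration of sensor parametrization, a simulated pinhole camera's intrinsics can be derived directly from a customer's sensor spec sheet (resolution and field of view). This is a generic textbook relation, not a description of our internal sensor models; the function name is hypothetical.

```python
import math

def focal_length_px(image_width_px: int, horizontal_fov_deg: float) -> float:
    """Focal length in pixels for a pinhole camera with the given horizontal FOV.

    Derived from the pinhole model: tan(FOV/2) = (width/2) / f.
    """
    return image_width_px / (2.0 * math.tan(math.radians(horizontal_fov_deg) / 2.0))

# Example: a 1920x1080 sensor with a 90-degree horizontal field of view.
fx = focal_length_px(1920, 90.0)  # tan(45 degrees) = 1, so fx = 960.0
```

Analogous parameters for lidar (beam count, angular resolution, range noise) and radar (carrier frequency, antenna pattern) would let the simulated sensors mirror the customer hardware the same way.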