Continuous V&V for Sensor Simulation: How to Build Credible Sensor Models Iteratively

  • clemenslinnhoff
  • Jan 6
  • 3 min read

We apply a development method that embeds continuous verification & validation (V&V) into every iteration of our perception sensor model development, so lidar and radar models become more credible, traceable, and automation‑ready as they evolve. Full details are in our new open‑access paper https://www.mdpi.com/1424-8220/25/24/7642.


Why continuous V&V?

For automated driving, simulation is only as valuable as it is credible. Regulations increasingly demand proof of that credibility: documented model scope, calibration and validation data, tolerances, and limitations. We address this by integrating V&V continuously throughout development, not as an afterthought.


How we do it

We work with a four‑step iteration loop that runs for every effect, cause, or function we add to the sensor model:

  1. Implementation: Introduce one isolated element (e.g., distance noise, beam pattern) via a defined approach; calibrate if needed; generate simulated data.

  2. Verification: Execute verification test cases to confirm the implementation matches its specification.

  3. Validation: Compare simulated and real measurements using quantitative metrics and acceptance criteria that we derive systematically.

  4. Revision: If a test fails, we identify the root cause — implementation, experiment, or test definition — fix it, and re‑run all checks. Every new iteration also re‑validates previous steps to prevent regressions.

This way, every iteration ends with a validated simulation that can already support downstream development.
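The loop above can be sketched in code. This is a minimal illustration, not the paper's implementation; the class and function names (`SensorEffect`, `develop_model`) are made up for the example:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorEffect:
    """One isolated element added per iteration (e.g. distance noise)."""
    name: str
    verify: Callable[[], bool]    # step 2: implementation matches its specification
    validate: Callable[[], bool]  # step 3: simulated vs. real data within tolerance

def develop_model(effects: List[SensorEffect], max_revisions: int = 3) -> List[str]:
    """Add one effect per iteration; re-run ALL earlier checks to catch regressions."""
    accepted: List[SensorEffect] = []
    for effect in effects:
        for _attempt in range(max_revisions):
            # Steps 2+3: check the new effect AND every previously accepted one
            under_test = accepted + [effect]
            if all(e.verify() and e.validate() for e in under_test):
                accepted.append(effect)  # iteration ends with a validated model
                break
            # Step 4: revision (fix implementation, experiment, or test) goes here
        else:
            raise RuntimeError(f"effect '{effect.name}' still failing after revisions")
    return [e.name for e in accepted]
```

The key detail is that `under_test` always includes every previously accepted effect, which is what makes each iteration a regression check on all earlier ones.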


Iterative development method for sensor simulations with continuous verification and validation (V&V). Each iteration adds one effect, cause, or function, followed by verification, validation, and revision if needed. Source: Hofrichter et al., Sensors 2025, 25, 7642, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).

Making requirements actionable

Before iterating, we define requirements and turn them into testable cases:

  • External vs. internal requirements (interfaces, fidelity vs. sensor effects/functions).

  • Test cases grouped as primary, secondary, and external for full traceability.

  • A structured taxonomy — campaign → suites → tests → samples — so validation is automation‑ready.

This structure is what enables us to run everything in CI pipelines using Persival Avelon.
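As a sketch of why this hierarchy is automation-friendly, the taxonomy can be modeled as nested records where a pass/fail verdict rolls up from samples to the campaign. The class and field names here are illustrative assumptions, not Avelon's API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sample:
    sample_id: str
    passed: bool

@dataclass
class Test:
    name: str
    samples: List[Sample] = field(default_factory=list)

    def passed(self) -> bool:
        return all(s.passed for s in self.samples)

@dataclass
class Suite:
    name: str
    tests: List[Test] = field(default_factory=list)

    def passed(self) -> bool:
        return all(t.passed() for t in self.tests)

@dataclass
class Campaign:
    name: str
    suites: List[Suite] = field(default_factory=list)

    def passed(self) -> bool:
        # A CI pipeline can gate a merge on this single boolean
        return all(s.passed() for s in self.suites)
```

Because every level exposes the same `passed()` interface, a CI job only needs the campaign verdict, while a failing sample stays traceable down the hierarchy.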


Structured taxonomy for organizing validation test cases: Campaign → Suites → Tests → Samples. This hierarchy enables automation and traceability in CI pipelines. Source: Hofrichter et al., Sensors 2025, 25, 7642, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).

Data integrity first

We validate our validation data — reference sensors, setups, and recordings — through sanity and plausibility checks before metrics are computed. Trustworthy inputs mean trustworthy results.
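Such checks are typically simple but decisive. The following sketch shows two hypothetical examples for recorded lidar data; the field names and thresholds are illustrative assumptions, not the checks from the paper:

```python
# Sanity check: flag physically implausible range readings
# (negative, below the sensor's minimum range, or beyond its maximum).
def implausible_ranges(ranges_m, min_range=0.1, max_range=300.0):
    return [r for r in ranges_m if not (min_range <= r <= max_range)]

# Plausibility check: recording timestamps should be strictly increasing;
# duplicated or reversed stamps indicate logging problems.
def timestamps_monotonic(timestamps_s):
    return all(b > a for a, b in zip(timestamps_s, timestamps_s[1:]))
```

Only recordings that pass such gates are fed into the metric computation.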


Acceptance criteria you can defend

We apply two strategies:

  • Empirical thresholds derived from multiple real sensor units: the simulation must stay within the real‑to‑real deviation range.

  • Spec‑based thresholds when empirical data is limited, using manufacturer accuracy specs.

As the metric, we use the Double Validation Metric (DVM), which captures bias and scatter differences between empirical CDFs of parameters such as distance or angle.
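To make the empirical-threshold idea concrete, here is a deliberately simplified stand-in, not the DVM itself: it splits a deviation into bias and scatter components and accepts the simulation only if its bias stays within the spread observed between real sensor units. All numbers and function names are illustrative:

```python
import statistics

def bias_and_scatter(sim, real):
    """Bias: difference of means; scatter: difference of standard deviations."""
    return (abs(statistics.mean(sim) - statistics.mean(real)),
            abs(statistics.stdev(sim) - statistics.stdev(real)))

def empirical_threshold(real_units):
    """Largest pairwise mean deviation among several real sensor units:
    the simulation must not deviate more than the real sensors do
    from each other."""
    means = [statistics.mean(u) for u in real_units]
    return max(abs(a - b) for a in means for b in means)

def accept(sim, real_units):
    tol = empirical_threshold(real_units)
    reference = [x for unit in real_units for x in unit]  # pooled real data
    bias, _scatter = bias_and_scatter(sim, reference)
    return bias <= tol
```

Usage: with two real units measuring a 10 m target, `accept([10.02, 10.04, 10.01], [[10.00, 10.02, 9.98], [10.05, 10.07, 10.03]])` passes because the simulation's bias is well inside the real-to-real spread, while a simulation biased by 0.25 m would fail.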


Credible Sensor Model: A real lidar example

We have already used this method to build and validate a ray‑tracing‑based lidar simulation. Two exemplary iterations are included in the open‑access paper:

  • Iteration 1: beam pattern (+ distance‑dependent deviation) and distance measuring (+ offset).

  • Iteration 2: distance noise (which also induces beam‑pattern noise).

Each iteration ended with verification and validation, and all new effects passed without breaking previous ones: exactly what continuous V&V is meant to achieve.
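As a sketch of the kind of effect added in iteration 2, zero-mean Gaussian distance noise can be applied to simulated lidar returns. The noise standard deviation below is a made-up calibration value, not one from the paper:

```python
import random

def add_distance_noise(distances_m, sigma_m=0.02, rng=None):
    """Perturb each simulated distance with zero-mean Gaussian noise.
    Because each return is displaced along its beam, distance noise
    also induces beam-pattern noise, as noted above."""
    rng = rng or random.Random(0)  # seeded for reproducible validation runs
    return [d + rng.gauss(0.0, sigma_m) for d in distances_m]
```

Seeding the generator matters for continuous V&V: a re-run of the validation suite must reproduce the same simulated data it is judging.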


CI‑ready with Avelon

Because our validation taxonomy is machine‑executable, we run it in CI. Our Persival Avelon software automates the full validation procedure, applying metrics and acceptance criteria to simulated and measured data, so every commit gets a credibility check.


Read the full paper: https://www.mdpi.com/1424-8220/25/24/7642
