Design Verification of Airborne AI/ML Systems
The verification process for safety-critical systems must ensure that the system design performs all intended functions within the required output ranges and safety limits. It must also ensure that no unintended functionality is present that poses a risk greater than allowed by the stated development assurance level. The objective of the AI/ML-based system is to assist with the detection of unintended behavior during operation, enabling enhanced online hazard analysis and risk mitigation. Validation and verification techniques must be developed for these systems, with the longer-term goal of adopting them in airborne operations.
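To make the idea of checking outputs against required ranges and safety limits concrete, the following is a minimal sketch of a runtime range monitor. The output channels, limit values, and fallback behavior are hypothetical placeholders for illustration, not taken from any certified airborne system or from the source text.

```python
# Illustrative sketch only: a runtime monitor that flags ML model outputs
# falling outside their declared ranges and safety limits. All channel
# names and limit values below are hypothetical.
from dataclasses import dataclass

@dataclass
class OutputLimit:
    name: str
    lower: float  # required lower bound for this output channel
    upper: float  # required upper bound for this output channel

class RangeMonitor:
    """Checks each model output against its declared limit."""
    def __init__(self, limits):
        self.limits = limits

    def check(self, outputs):
        """Return a list of violation messages; an empty list means all outputs pass."""
        violations = []
        for limit, value in zip(self.limits, outputs):
            if not (limit.lower <= value <= limit.upper):
                violations.append(
                    f"{limit.name}={value:.3f} outside [{limit.lower}, {limit.upper}]"
                )
        return violations

if __name__ == "__main__":
    # Hypothetical example: two output channels of an ML-based estimator.
    limits = [
        OutputLimit("pitch_cmd_deg", -15.0, 15.0),
        OutputLimit("airspeed_est_mps", 30.0, 120.0),
    ]
    monitor = RangeMonitor(limits)

    model_outputs = [3.2, 145.0]  # second value violates its limit
    violations = monitor.check(model_outputs)
    if violations:
        # In operation, a detected violation would feed the online hazard
        # analysis and trigger a mitigation, e.g. reverting to a backup function.
        print("Unintended behavior detected:", violations)
```

Such a monitor addresses only the output-range aspect of verification; detecting broader classes of unintended behavior would require additional techniques beyond simple bounds checking.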