Products and Services

AI Verification Library

Ensure robustness and reliability of AI models

As AI models become part of engineered systems, particularly safety-critical applications, it is crucial to ensure their reliability and robustness. AI Verification Library for Deep Learning Toolbox lets you rigorously assess and test AI models.

With AI Verification Library, you can:

  • Verify properties of your AI models such as robustness to adversarial examples
  • Estimate how sensitive your AI model predictions are to input perturbations
  • Create a distribution discriminator that separates data into in- and out-of-distribution for runtime monitoring
  • Deploy a runtime monitoring system that oversees AI model performance
  • Walk through a case study to verify an airborne deep learning system

What Is AI Verification?

Traditional verification and validation (V&V) workflows, such as the V-cycle, often fall short for AI models. AI verification involves rigorously testing a model to confirm its intended behaviors and rule out unintended ones. Adaptations such as the W-shaped development process extend traditional V&V to address adversarial robustness, out-of-distribution detection, uncertainty estimation, and network property verification.

Verify an Airborne Deep Learning System (Case Study)

Explore a case study to verify an airborne deep learning system in line with aviation industry standards such as DO-178C, ARP4754A, and prospective EASA and FAA guidelines. This case study provides a comprehensive view of the steps necessary to fully comply with industry standards and guidelines for a deep learning system.

Verify Deep Neural Network Robustness for Classification

Verify your network’s robustness against adversarial examples (subtly altered inputs designed to mislead the network) using formal methods. Formal verification effectively tests an infinite set of inputs at once, proving that predictions remain consistent under perturbation and guiding training improvements that make the network more reliable and accurate.
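
As a minimal sketch of this workflow, the code below builds a small untrained classifier and checks its robustness to an L-infinity perturbation around one test image with verifyNetworkRobustness from the verification library. In practice you would verify your own trained network; the image, label, and perturbation size here are illustrative placeholders.

    % Small untrained classifier so the sketch runs end to end; in
    % practice, verify your trained network.
    layers = [
        imageInputLayer([28 28 1],Normalization="none")
        convolution2dLayer(3,8,Padding="same")
        reluLayer
        fullyConnectedLayer(10)];
    net = dlnetwork(layers);

    % Illustrative test image and its assumed true label.
    X = dlarray(rand([28 28 1],"single"),"SSCB");
    label = categorical(3,0:9);

    % Define an L-infinity ball around the image and verify formally.
    % The result for each observation is "verified", "violated", or "unproven".
    perturbation = 0.01;
    result = verifyNetworkRobustness(net, X-perturbation, X+perturbation, label)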

Estimate Deep Neural Network Output Bounds for Regression

Estimate the lower and upper output bounds of your network for given input ranges using formal methods. This gives you insight into the network’s possible outputs under input perturbations, helping ensure reliable performance in scenarios such as control systems and signal processing.
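
A minimal sketch of this step uses estimateNetworkOutputBounds on a small untrained regression network; the architecture, nominal input, and perturbation range are illustrative.

    % Small untrained regression network; in practice, use your trained one.
    layers = [
        featureInputLayer(2)
        fullyConnectedLayer(16)
        reluLayer
        fullyConnectedLayer(1)];
    net = dlnetwork(layers);

    % Nominal operating point, with each input perturbed within +/- 0.1.
    X0 = dlarray(single([0.5; 0.5]),"CB");
    [YLower,YUpper] = estimateNetworkOutputBounds(net, X0-0.1, X0+0.1)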

Build Safe Deep Learning Systems with Runtime Monitoring

Incorporate runtime monitoring with out-of-distribution (OOD) detection to build safe deep learning systems. By continuously checking whether incoming data aligns with the training data, you can decide whether to trust the network’s output or redirect the input for safe handling, improving system safety and reliability.
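
The sketch below shows the basic pattern: fit a distribution discriminator with networkDistributionDiscriminator, then gate each runtime prediction with isInDistribution. The small network, random stand-in data, "energy" method, and fallback handling are illustrative choices; in practice you would use your trained network and real training inputs.

    % Small classifier and stand-in training data so the sketch runs.
    layers = [
        featureInputLayer(4)
        fullyConnectedLayer(16)
        reluLayer
        fullyConnectedLayer(3)
        softmaxLayer];
    net = dlnetwork(layers);
    XTrain = dlarray(randn(4,200,"single"),"CB");

    % Fit a discriminator that separates in- from out-of-distribution data.
    discriminator = networkDistributionDiscriminator(net,XTrain,[],"energy");

    % At runtime, only trust predictions on in-distribution inputs.
    XNew = dlarray(randn(4,1,"single"),"CB");
    if isInDistribution(discriminator,XNew)
        scores = predict(net,XNew);   % safe to use the network's output
    else
        disp("Out-of-distribution input; routing to safe fallback.")
    end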

Use Explainability Techniques

Understand the decision-making process of your network by using explainability techniques. Leverage methods such as Detector Randomized Input Sampling for Explanation (D-RISE) to compute saliency maps for object detectors and visualize which regions of the input data most influence the network’s predictions.
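
As an illustrative sketch, the following uses a pretrained YOLO v4 detector with the drise function from Computer Vision Toolbox; treat the exact signature, output layout, and support-package requirements as assumptions based on recent releases.

    % Compute a D-RISE saliency map for a pretrained object detector.
    % Requires Computer Vision Toolbox and its YOLO v4 support package.
    detector = yolov4ObjectDetector("csp-darknet53-coco");
    I = imread("peppers.png");   % example image that ships with MATLAB

    % D-RISE randomly masks the input and scores how each region affects
    % the detections, returning one saliency map per detected object.
    scoreMap = drise(detector,I);

    % Overlay the saliency map for the first detection on the image.
    figure
    imshow(I)
    hold on
    imagesc(scoreMap(:,:,1),AlphaData=0.5)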

Integrate Constrained Deep Learning

Constrained deep learning trains deep neural networks while enforcing domain-specific constraints, such as monotonicity or boundedness, as part of the learning process. By building these constraints into the construction and training of the network, you can guarantee desirable behavior in safety-critical applications where such guarantees are paramount.
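
One simple constrained-training technique, sketched below under toy assumptions: a custom training loop that projects the weights of a small network to be nonnegative after each update, which, combined with ReLU activations, makes the output monotonically nondecreasing in the input by construction. This hand-rolled projection stands in for dedicated constrained deep learning tooling; the data and architecture are toy examples.

    % Small network whose output we constrain to be monotonic in its input.
    layers = [
        featureInputLayer(1)
        fullyConnectedLayer(16)
        reluLayer
        fullyConnectedLayer(1)];
    net = dlnetwork(layers);

    % Toy data with a monotonically increasing trend.
    x = linspace(0,1,256);
    X = dlarray(single(x),"CB");
    T = dlarray(single(x.^2 + 0.05*randn(size(x))),"CB");

    vel = [];
    for iter = 1:500
        [loss,grad] = dlfeval(@modelLoss,net,X,T);
        [net,vel] = sgdmupdate(net,grad,vel,0.05);

        % Projection step: nonnegative weights plus ReLU activations
        % guarantee the network output is nondecreasing in its input.
        idx = net.Learnables.Parameter == "Weights";
        net.Learnables(idx,:) = dlupdate(@(w) max(w,0), net.Learnables(idx,:));
    end

    function [loss,grad] = modelLoss(net,X,T)
        Y = forward(net,X);
        loss = mse(Y,T);
        grad = dlgradient(loss,net.Learnables);
    end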