See how Trustworthy AI® can help you develop, analyze, and improve your safety-critical software

Black-box risk analysis: Testing on a static set of regression tests can give a false sense of reliability for an autonomous system. At the other extreme, exhaustive parameter sweeps over a scenario or operational design domain (ODD) are expensive and slow.

With our TrustworthySearch API, you can understand how often and where your algorithms fail, and you can objectively measure your team's progress. Use the API on top of your simulation framework to guide the search for failure cases in an efficient and scalable way. The TrustworthySearch API is well-suited for A/B testing and including safety assessment as part of your continuous-integration loop. The API is completely black-box: it never needs to know what happens inside the simulation or what the simulation results mean.
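To make the black-box idea concrete, here is a minimal sketch of the kind of loop such a search service drives. Everything below is hypothetical: `run_simulation`, its parameters, and the adaptive-sampling strategy are illustrative stand-ins, not the actual TrustworthySearch API. The key property is that the search only ever sees a scalar safety margin from each simulation run, never the simulator's internals.

```python
import random

# Hypothetical stand-in for your simulator. The search treats it as a black
# box: it passes in parameters and reads back only a scalar safety margin.
def run_simulation(params):
    # Toy scenario: failure when the lead car brakes hard (high `brake`)
    # while the following gap (`gap`) is small. Margin <= 0 means failure.
    brake, gap = params
    return gap - 0.8 * brake

def black_box_failure_search(simulate, n_rounds=20, batch=50, seed=0):
    """Adaptive random search: concentrate sampling near observed failures."""
    rng = random.Random(seed)
    center, spread = [0.5, 0.5], 0.5
    failures = []
    for _ in range(n_rounds):
        batch_params = [
            [min(1.0, max(0.0, c + rng.uniform(-spread, spread)))
             for c in center]
            for _ in range(batch)
        ]
        found = [p for p in batch_params if simulate(p) <= 0]
        failures.extend(found)
        if found:
            # Recenter the proposal on the mean failure point and tighten it,
            # so later rounds spend their simulation budget near failures.
            center = [sum(p[i] for p in found) / len(found) for i in range(2)]
            spread *= 0.7
    return failures

failures = black_box_failure_search(run_simulation)
```

Because the loop consumes only (parameters in, margin out), the same driver works unchanged across simulators and metrics, which is what makes it suitable for A/B testing and CI-based safety assessment.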

Learn more and try the demo at the GitHub repository.

Realistic agents for simulation: To accurately test an autonomous system in simulation, you need realistic models of the other agents (cars, pedestrians, bicycles, etc.) to interact with.

Instead of relying on basic rule-based models, work with our realistic agent models, developed through a combination of imitation learning on real-world data and simulated self-play. We work with our own data and public datasets, or we can use your data if you have specialized agent-modeling needs.

Diversify existing agent behaviors: If you already have agent models that you use in simulation, you know that modifying their parameters to change behavior can be a cumbersome process, especially if the agent models are based on deep networks.

The TrustworthyAgents API exposes the same self-play algorithms we use internally to develop our own diverse agent models. Use this API to expand your suite of agents to cover a spectrum of behaviors, from passive through aggressive, while maintaining realism. Like the TrustworthySearch API, the TrustworthyAgents API is completely black-box: it never needs to know what the parameters of your agents mean or what happens inside your simulation stack.
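A rough sketch of what black-box diversification can look like, under illustrative assumptions: `rollout_aggressiveness` is a hypothetical stand-in for a behavior statistic measured from simulated rollouts, and the agent's parameter vector is treated as opaque. The idea is to perturb a base agent, score each variant only by its observed behavior, and keep an evenly spread set from passive to aggressive.

```python
import random

# Hypothetical black-box behavior metric. In a real stack this would be
# measured from rollouts (e.g. average headway, hard-braking rate), not
# computed from the parameters themselves.
def rollout_aggressiveness(params):
    return sum(params) / len(params)

def diversify(base_params, n_variants=5, n_samples=200, seed=0):
    """Perturb a base agent and keep an evenly spaced passive-to-aggressive
    spread, judged only by the black-box behavior metric."""
    rng = random.Random(seed)
    pool = [[p + rng.gauss(0, 0.2) for p in base_params]
            for _ in range(n_samples)]
    pool.sort(key=rollout_aggressiveness)
    # Pick evenly spaced quantiles of the observed behavior distribution.
    step = (len(pool) - 1) / (n_variants - 1)
    return [pool[round(i * step)] for i in range(n_variants)]

variants = diversify([0.5, 0.5, 0.5])
```

Because variants are selected by observed behavior rather than by interpreting parameters, the same recipe applies whether the agent is a handful of tunable gains or a deep network.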

Realistic behavior logs: A comprehensive set of regression tests is often used as a first sanity check for performance as you update an autonomous system. As the algorithm gets better, it becomes harder to find useful data in real life or simulation.

We develop synthetic logs of realistic multi-agent failure scenarios with a variety of agent types (human or otherwise). We use external data sources together with the TrustworthySearch and TrustworthyAgents APIs to develop an ever-growing library of failure cases.
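One way such logs slot into a regression suite, sketched under assumptions: the JSON log format and the `min_separation` check below are hypothetical illustrations, not a documented log schema. The pattern is to replay a logged failure scenario against an updated system and assert a safety property, such as a minimum separation between agents.

```python
import json
import math

# Hypothetical log format: each entry holds synchronized 2D agent positions
# per timestep. Real logs would carry richer state (velocity, heading, etc.).
log = json.loads("""[
  {"t": 0.0, "agents": {"ego": [0.0, 0.0], "ped": [5.0, 0.5]}},
  {"t": 0.1, "agents": {"ego": [1.0, 0.0], "ped": [4.0, 0.4]}},
  {"t": 0.2, "agents": {"ego": [2.0, 0.0], "ped": [3.0, 0.3]}}
]""")

def min_separation(log, a="ego", b="ped"):
    """Smallest distance between two agents anywhere in the log."""
    return min(math.dist(f["agents"][a], f["agents"][b]) for f in log)

# A CI regression check would replay the scenario with the updated planner
# and assert that the ego now keeps a safety margin in this failure case.
margin = min_separation(log)
```

Checks of this form turn a library of failure cases into a first sanity check that runs on every change, complementing the search-based analysis above.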

Custom services: Utilize our expertise across robotics, autonomy, simulation, formal verification, machine learning, optimization, and statistics to improve your approach to developing safety-critical autonomous software. We have consulted on projects involving adversarial methods in machine learning, differentially private and federated learning, as well as optimization-based motion planning and SMT-based model-checking. Contact us to see how we can help you.