Interpretable and Auditable AI Systems

We are building a new class of interpretable AI systems and foundation models that humans can reliably debug, trust, and understand.


Introducing Guide Labs: Engineering Interpretable and Auditable AI Systems

November 17, 2024

Understand which part of the prompt is responsible for the output.

THE PROBLEM

Current AI systems and foundation models:

  • produce explanations and justifications that are unreliable and unrelated to their actual outputs.
  • cannot be reliably debugged and fixed.
  • are difficult to control and align with current approaches.

OUR SOLUTION

LLMs & foundation models engineered to be interpretable:

  • Produce human-understandable factors for any output they generate.
  • Produce reliable context citations.
  • Specify which training input data strongly influenced the model's generated output.
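As a concept illustration only (not Guide Labs' actual method or API), one generic way to estimate which part of a prompt is responsible for an output is leave-one-out occlusion: remove each prompt segment, re-score the output, and attribute responsibility to segments whose removal causes the largest score drop. The `toy_score` function below is a hypothetical stand-in for a real model's likelihood of a fixed output given a prompt.

```python
# Illustrative sketch of occlusion-based prompt attribution.
# All names here (toy_score, prompt_attribution) are hypothetical.

def toy_score(prompt_parts):
    """Hypothetical model score: rewards prompts mentioning citations."""
    text = " ".join(prompt_parts)
    return text.count("cite") * 2.0 + text.count("sources") * 1.0

def prompt_attribution(prompt_parts, score_fn):
    """Score drop when each part is removed; a larger drop means that
    part contributed more to the (fixed) output's score."""
    base = score_fn(prompt_parts)
    drops = {}
    for i, part in enumerate(prompt_parts):
        ablated = prompt_parts[:i] + prompt_parts[i + 1:]
        drops[part] = base - score_fn(ablated)
    return drops

parts = ["Summarize the report.", "Always cite sources.", "Be concise."]
print(prompt_attribution(parts, toy_score))
# → {'Summarize the report.': 0.0, 'Always cite sources.': 3.0, 'Be concise.': 0.0}
```

With a real LLM, `toy_score` would be replaced by the log-probability of the generated output under the model, and segments could be sentences, instructions, or retrieved context chunks.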

About Our Team

20+ years of experience in interpretable machine learning, with PhDs from MIT, UMD, & MILA.

24+ research papers on interpretability published at top ML conferences.

Developed the first interpretable generative diffusion model & LLM.

We are hiring

Proven best practices

Tectonic Ventures · Initialized · Lombardstreet Ventures · Pioneer · Y Combinator

Get notified when you can start using Guide Labs