
Hallucination Guard: Developer Documentation

Complete developer reference for CertainLogic Hallucination Guard — validate AI responses against verified facts before they reach your users.

Anton
April 15, 2026 · 8 min read

Hallucination Guard checks AI-generated responses against a verified knowledge base and returns a structured verdict: pass or fail, a confidence score, and any specific facts that couldn’t be verified. Drop it into your pipeline before any AI response reaches a user.


What It Does

Every AI response passes through a fact-checking layer:

  1. Extracts factual claims from the response
  2. Verifies each claim against CertainLogic’s validated knowledge graph
  3. Returns a structured result — pass/fail, confidence score, and flagged violations

If validation fails, you decide what happens next: fallback message, retry with a different model, or route to human review. You stay in control.


Quick Start

Install the Python client:

pip install certainlogic

Validate a response in three lines:

from certainlogic import Validator

validator = Validator(api_key="your-api-key")
result = validator.validate(
    query="What is the capital of France?",
    response="The capital of France is Paris."
)

print(result.validation_passed)   # True
print(result.confidence_score)    # 0.99
print(result.flagged_facts)       # []

validate() returns a ValidationResult object. If validation_passed is False, inspect flagged_facts to see which claims failed.

Handling failures:

if not result.validation_passed:
    if result.confidence_score < 0.5:
        # High uncertainty — block and fallback
        return fallback_response()
    else:
        # Specific facts failed — log and review
        log_flagged(result.flagged_facts)
        return safe_fallback_response()

API Reference

All validation requests go to a single endpoint.

POST https://api.certainlogic.ai/query

Headers:

Header          Value
X-Api-Key       Your API key
Content-Type    application/json

Request body:

{
  "query": "What is the boiling point of water?",
  "response": "Water boils at 100°C at sea level.",
  "options": {
    "strict_mode": false
  }
}
Field                Type     Required  Description
query                string   yes       The original user query
response             string   yes       The AI response to validate
options.strict_mode  boolean  no        If true, any unverifiable claim fails validation. Default: false
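Assembled as a raw HTTP call, the request above looks like the sketch below. It uses only the Python standard library; build_request and send_validation are illustrative names for this sketch, not part of the official client:

```python
import json
import urllib.request

API_URL = "https://api.certainlogic.ai/query"

def build_request(query, response, api_key, strict_mode=False):
    """Assemble the headers and JSON body described in the tables above."""
    headers = {"X-Api-Key": api_key, "Content-Type": "application/json"}
    payload = {
        "query": query,
        "response": response,
        "options": {"strict_mode": strict_mode},
    }
    return headers, payload

def send_validation(query, response, api_key, strict_mode=False, timeout=10):
    """POST the request and return the parsed JSON verdict."""
    headers, payload = build_request(query, response, api_key, strict_mode)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```

If you already use requests (as in the retry example further down), the same call is `requests.post(API_URL, headers=headers, json=payload)`.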

Example response:

{
  "answer": "Water boils at 100°C (212°F) at standard atmospheric pressure (1 atm / sea level).",
  "routing": "knowledge_graph",
  "results": [
    {
      "claim": "Water boils at 100°C at sea level",
      "verified": true,
      "source": "NIST Chemistry WebBook"
    }
  ],
  "validation_passed": true,
  "confidence_score": 0.98,
  "flagged_facts": []
}

Response Fields

Field              Type     Description
answer             string   CertainLogic’s verified reference answer for the query. Use this as a fallback or comparison.
routing            string   How the query was resolved internally: knowledge_graph, cache, or llm_fallback
results            array    Per-claim breakdown. Each item has claim, verified (bool), and source
validation_passed  boolean  true if all extracted claims passed verification (or no claims were flagged)
confidence_score   float    0.0–1.0. How confident CertainLogic is in its assessment. Below 0.7 = treat with caution
flagged_facts      array    List of claims that failed verification. Empty if validation_passed is true
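If you call the raw endpoint rather than the Python client, a small typed wrapper over this response body keeps downstream code honest. A minimal sketch — ValidationVerdict and parse_verdict are names invented here, not SDK types:

```python
from dataclasses import dataclass

@dataclass
class ValidationVerdict:
    """Typed view of a POST /query response body (fields from the table above)."""
    answer: str
    routing: str
    results: list
    validation_passed: bool
    confidence_score: float
    flagged_facts: list

def parse_verdict(body: dict) -> ValidationVerdict:
    """Convert a parsed JSON response into a ValidationVerdict."""
    return ValidationVerdict(
        answer=body["answer"],
        routing=body["routing"],
        results=body.get("results", []),
        validation_passed=body["validation_passed"],
        confidence_score=body["confidence_score"],
        flagged_facts=body.get("flagged_facts", []),
    )
```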

flagged_facts structure

Each item in the array:

{
  "claim": "The Eiffel Tower is 400 meters tall",
  "expected": "The Eiffel Tower is approximately 330 meters tall",
  "severity": "high"
}

severity is low, medium, or high — use this to calibrate your fallback logic.
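One way to calibrate on severity: rank the flagged facts and block only when something serious failed. A minimal sketch (worst_severity and should_block are illustrative helpers, not SDK functions):

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def worst_severity(flagged_facts):
    """Return the most severe level among flagged facts, or None if the list is empty."""
    if not flagged_facts:
        return None
    return max((f["severity"] for f in flagged_facts), key=SEVERITY_RANK.__getitem__)

def should_block(flagged_facts, threshold="high"):
    """Block the response when any flagged fact meets or exceeds the threshold severity."""
    worst = worst_severity(flagged_facts)
    return worst is not None and SEVERITY_RANK[worst] >= SEVERITY_RANK[threshold]
```

A stricter product might set threshold="medium"; a chat UI that annotates rather than blocks might only log.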


Best Practices

Validate before the user sees it

Run Hallucination Guard after your LLM returns a response, before you display it. This is the only point where blocking a bad response has zero UX cost.

User query → LLM → [Hallucination Guard] → User

Handling validation failures

validation_passed  confidence_score  Recommended action
true               ≥ 0.8             Serve the response
true               < 0.8             Serve with a disclaimer or caveat
false              any               Block and use the answer field as fallback, or route to human review
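The table above collapses into a small decision function — a sketch, with action names chosen for illustration:

```python
def recommended_action(validation_passed: bool, confidence_score: float) -> str:
    """Map a verdict to the recommended action: serve, serve with a caveat, or block."""
    if not validation_passed:
        return "block"  # use the answer field as fallback, or route to human review
    if confidence_score >= 0.8:
        return "serve"
    return "serve_with_disclaimer"
```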

Latency expectations

  • Cached queries (previously validated): ~20–50ms
  • New queries: ~200–400ms depending on claim complexity
  • For latency-sensitive flows, run validation async and show a “Verifying…” state
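For the async pattern in the last bullet, one option is to hand validation to a worker thread and render the “Verifying…” state until the future resolves. In this sketch, validate_stub stands in for the real blocking API call:

```python
from concurrent.futures import ThreadPoolExecutor

def validate_stub(query, response):
    """Placeholder for the real blocking call to Hallucination Guard."""
    return {"validation_passed": True, "confidence_score": 0.98, "flagged_facts": []}

def validate_async(query, response, executor):
    """Submit validation to a worker thread and return a Future immediately."""
    return executor.submit(validate_stub, query, response)

with ThreadPoolExecutor(max_workers=4) as pool:
    future = validate_async("What is 2+2?", "2+2 is 4.", pool)
    # Render "Verifying…" in the UI here, then resolve when ready:
    verdict = future.result()
```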

Free tier limits

The free tier includes 500 validations/month. Each POST /query call counts as one validation regardless of response length.

Upgrade to a paid plan for higher volume, SLA guarantees, and access to strict_mode.


Error Codes

HTTP status                        Meaning                                                   What to do
401                                Missing or invalid API key                                Check your X-Api-Key header. Keys are generated at /get-started
429                                Rate limit exceeded                                       Back off and retry after the Retry-After header value (seconds)
200 with validation_passed: false  Request succeeded, but the response failed fact-checking  Inspect flagged_facts and use the answer field as your fallback

429 retry example:

import time
import requests

def validate_with_retry(query, response, api_key, max_retries=3):
    for attempt in range(max_retries):
        r = requests.post(
            "https://api.certainlogic.ai/query",
            headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
            json={"query": query, "response": response},
            timeout=10,
        )
        if r.status_code == 429:
            # Honor the server's Retry-After hint; default to 5 seconds
            wait = int(r.headers.get("Retry-After", 5))
            time.sleep(wait)
            continue
        r.raise_for_status()
        return r.json()
    raise RuntimeError("Rate limit exceeded after retries")

Get Your API Key

Free tier — no credit card required.

Get started →

Ready to build AI that actually works?

CertainLogic builds deterministic AI tools for small businesses. Fixed price. No surprises.