InformedVoter

Methodology

InformedVoter compiles civic information from authoritative public sources, then routes every datum through a human review queue before it appears on a public page. This page explains exactly how that pipeline works so voters and reviewers can audit it.

Where our data comes from

We rely on a small, named set of upstream sources. The full registry — including license terms — is published at /about/data-sources. For v1 (Justin, Texas), our primary sources are:

From source to canonical: the review pipeline

Every change — whether scraped, fetched from an API, or synthesized by an LLM — is written to a proposed_changes staging table. Nothing reaches a public page until a human reviewer approves it. The Publisher service is the only path from staging to canonical data.

  1. Ingest: an automated job fetches raw data from a registered source.
  2. Stage: the change is written as a ProposedChange with citations and a diff.
  3. Review: a human reviewer in /admin/review approves, edits, or rejects it.
  4. Publish: on approval, the Publisher writes to canonical tables and invalidates caches.
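The four steps above can be sketched in code. This is a minimal illustration, not the production schema: the `ProposedChange` field names, the `ReviewStatus` values, and the in-memory `canonical` store are all assumptions made for the example. The one invariant it does demonstrate is the real one: the publish path refuses anything a reviewer has not approved.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class ProposedChange:
    # A staged edit awaiting human review; field names are illustrative.
    record_id: str
    diff: dict            # proposed field -> new value
    citations: list       # source URLs backing the change
    status: ReviewStatus = ReviewStatus.PENDING


def publish(change: ProposedChange, canonical: dict) -> bool:
    """Apply an approved change to the canonical store; refuse anything else."""
    if change.status is not ReviewStatus.APPROVED:
        return False
    canonical.setdefault(change.record_id, {}).update(change.diff)
    return True
```

Because `publish` is the only function that writes to `canonical`, a scraped or LLM-synthesized change that never clears review simply never lands.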

Field-level provenance

Every canonical record carries the source and timestamp it was last verified. Where applicable, fields surface a source_url and last_verified_at so any visitor can trace a fact back to its origin.
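A field-with-provenance record might look like the sketch below. The class name and the rendering helper are hypothetical; only the `source_url` and `last_verified_at` fields come from the description above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ProvenancedField:
    # Hypothetical shape: each published fact carries its own provenance.
    value: str
    source_url: str
    last_verified_at: datetime


def provenance_line(f: ProvenancedField) -> str:
    """Render the trace a visitor sees next to a fact on a public page."""
    return f"Verified {f.last_verified_at.date().isoformat()} via {f.source_url}"
```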

Freshness SLA

We display a freshness banner on every public page indicating when its underlying data was last verified. Election-day races are re-checked at least daily during the active window; historical records are re-checked when the upstream source publishes amendments.
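The banner logic reduces to a staleness check against `last_verified_at`. The daily re-check during the active window is from the SLA above; the 30-day tolerance for historical records is an assumed placeholder, since the real trigger is upstream amendments rather than a fixed clock.

```python
from datetime import datetime, timedelta, timezone


def is_stale(last_verified_at: datetime, now: datetime, active_window: bool) -> bool:
    """Daily re-check during the active election window; assumed
    30-day tolerance for historical records (illustrative only)."""
    max_age = timedelta(days=1) if active_window else timedelta(days=30)
    return now - last_verified_at > max_age
```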

Public corrections

Spotted a mistake? Use our public correction form at /correction. Submissions are reviewed by a human and replied to from a real address. We do not gatekeep corrections behind an account.
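A correction intake consistent with the policy above could be as simple as the sketch below: no account check, every submission gets a ticket and lands in the same human review queue. The `Correction` fields and `enqueue_correction` helper are assumptions for illustration.

```python
from dataclasses import dataclass, field
import uuid


@dataclass
class Correction:
    # Hypothetical intake record for the /correction form; no login required.
    page_url: str
    message: str
    reply_to: str  # submitter's address, answered by a human
    ticket_id: str = field(default_factory=lambda: uuid.uuid4().hex)


def enqueue_correction(queue: list, c: Correction) -> str:
    """Add a submission to the human review queue and return its ticket id."""
    queue.append(c)
    return c.ticket_id
```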

What we do not do