Methodology v1.0

How Indexera evaluates API quality based on publicly observable signals

What We Measure

Indexera evaluates APIs across 10 dimensions that collectively represent the quality of the public developer experience. Each dimension receives a raw score (0–100), which is adjusted by a confidence multiplier and weighted to produce a composite score out of 100.
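
For example, a dimension with a raw score of 80 at MEDIUM confidence (0.85 multiplier, defined below) has an effective score of 80 × 0.85 = 68; at an 18% weight it contributes 68 × 0.18 = 12.24 points to the composite.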

All scores are derived from publicly observable signals — documentation pages, OpenAPI specs, status pages, pricing pages, SDK repositories, and changelogs. We never call private endpoints or test behind authentication.

What We Do Not Measure

  • API security posture or vulnerability testing
  • Runtime performance, latency, or throughput
  • Data privacy or compliance certifications
  • Internal documentation or private portals
  • Customer satisfaction or support ticket quality

A low score means insufficient public evidence to integrate confidently — it does not mean the API is deficient or insecure.

Confidence Model

Each dimension score carries a confidence level that reflects how much evidence was available for that dimension. Confidence directly affects the effective score:

  • HIGH: multiplier 1.0 (3+ evidence signals)
  • MEDIUM: multiplier 0.85 (2 evidence signals)
  • LOW: multiplier 0.65 (0–1 evidence signals)

The report-level confidence is the weighted average of the dimension confidence multipliers, using each dimension's scoring weight.
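
A minimal sketch of how the multipliers adjust a dimension score (the function names are illustrative, not Indexera's internal API):

```python
# Confidence multipliers as defined above.
MULTIPLIERS = {"HIGH": 1.0, "MEDIUM": 0.85, "LOW": 0.65}

def confidence_for(signal_count: int) -> str:
    """Map the number of evidence signals to a confidence level."""
    if signal_count >= 3:
        return "HIGH"
    if signal_count == 2:
        return "MEDIUM"
    return "LOW"  # 0-1 signals

def effective_score(raw: float, confidence: str) -> float:
    """Raw dimension score (0-100) adjusted by its confidence multiplier."""
    return raw * MULTIPLIERS[confidence]

# Example: a raw score of 80 with two evidence signals -> 80 * 0.85 = 68.0
print(effective_score(80, confidence_for(2)))
```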

Scoring Dimensions

Weights sum to 100. Each dimension is scored 0–100 internally and contributes its weight percentage to the composite; a worked sketch of the composite calculation follows the dimension list.

Documentation completeness

18% weight

Quality and comprehensiveness of API documentation

What we evaluate:

  • Quick start guide for rapid onboarding
  • Authentication setup instructions
  • Error handling documentation
  • Rate limiting information
  • SDK integration guides
  • Changelog and versioning info
  • Code examples and tutorials
  • Comprehensive reference documentation

OpenAPI/spec quality

12% weight

Quality and completeness of OpenAPI/Swagger specification

What we evaluate:

  • Valid OpenAPI specification available
  • Complete endpoint documentation with descriptions
  • Schema definitions for request/response objects
  • Request/response examples
  • Modern OpenAPI version (3.0+)

Authentication clarity

12% weight

Clarity and completeness of authentication documentation

What we evaluate:

  • Clear authentication overview
  • Multiple auth methods documented
  • API key / OAuth 2.0 / Bearer token guides
  • Working code examples for auth flows

Error handling

10% weight

Documentation of error codes and handling procedures

What we evaluate:

  • Comprehensive error code reference
  • HTTP status code documentation
  • Error response examples
  • Retry logic recommendations

SDK availability & quality

10% weight

Availability and quality of official SDKs

What we evaluate:

  • Official SDKs for 3+ languages
  • Published to package managers (npm, PyPI, etc.)
  • Source code on GitHub/GitLab
  • Active maintenance signals

Pricing transparency

10% weight

Clarity and transparency of pricing information

What we evaluate:

  • Public pricing page with numeric values
  • Tiered pricing with usage limits
  • Free tier information
  • Overage pricing details

Status/reliability signals

10% weight

Availability of status page and reliability information

What we evaluate:

  • Public status page
  • Historical uptime data
  • Incident history and post-mortems
  • SLA guarantees and API status endpoints

Changelog/versioning

8% weight

Versioning practices and changelog maintenance

What we evaluate:

  • Public changelog with multiple entries
  • Semantic versioning (SemVer)
  • Release notes documentation
  • Breaking change notifications

Support/policy clarity

6% weight

Clarity of support channels and policies

What we evaluate:

  • Support documentation and contact info
  • Multiple support channels
  • FAQ and knowledge base
  • Response time guarantees

Onboarding friction

4% weight

How quickly a developer can get started

What we evaluate:

  • Quick start guide exists
  • Package manager install available
  • Auth example code provided
  • Runnable code samples within 5 minutes
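
Combining the weights above with the confidence-adjusted dimension scores, the composite is a straightforward weighted sum. A minimal sketch (the shorthand dimension keys are illustrative):

```python
# Dimension weights as listed above; they must sum to 100.
WEIGHTS = {
    "documentation": 18, "openapi_spec": 12, "authentication": 12,
    "error_handling": 10, "sdk": 10, "pricing": 10, "status": 10,
    "changelog": 8, "support": 6, "onboarding": 4,
}
assert sum(WEIGHTS.values()) == 100

def composite(effective: dict[str, float]) -> float:
    """Weighted composite of confidence-adjusted dimension scores (0-100)."""
    return sum(WEIGHTS[d] * effective[d] for d in WEIGHTS) / 100

def report_confidence(multipliers: dict[str, float]) -> float:
    """Report-level confidence: dimension multipliers averaged by weight."""
    return sum(WEIGHTS[d] * multipliers[d] for d in WEIGHTS) / 100
```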

Grade Bands

  • A+ (90–100): Exceptional public documentation and developer experience signals
  • A (80–89): Strong public documentation with minor gaps
  • B (70–79): Adequate documentation with notable gaps
  • C (60–69): Below-average public documentation coverage
  • D (50–59): Significant gaps in publicly observable documentation
  • F (below 50): Insufficient public evidence to integrate confidently
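
For reference, a minimal sketch of the band mapping, using exactly the thresholds listed above:

```python
def grade(score: float) -> str:
    """Map a composite score (0-100) to its grade band."""
    for threshold, letter in [(90, "A+"), (80, "A"), (70, "B"), (60, "C"), (50, "D")]:
        if score >= threshold:
            return letter
    return "F"  # below 50
```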

Anti-Gaming Protections

Indexera runs automated checks to detect attempts to artificially inflate scores. If a flag fires, the related dimension score is capped; a sketch of the capping follows the list below.

  • Keyword stuffing — many pages but very few code examples
  • Stub OpenAPI spec — spec exists but has ≤1 path and no descriptions
  • Empty status page — page exists but no uptime or incident data
  • Thin SDK — one language only, not on package managers
  • Non-numeric pricing — pricing page but no actual tier or currency info
  • Single-entry changelog — changelog exists but only one entry, no semver
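
A minimal sketch of how a fired flag might cap a dimension score (the cap value and flag predicate shown are illustrative assumptions, not Indexera's actual thresholds):

```python
# Assumed cap applied when an anti-gaming flag fires; the value 40 is
# an illustration only, not Indexera's actual cap.
FLAG_CAP = 40.0

def stub_openapi_flag(path_count: int, has_descriptions: bool) -> bool:
    """Stub OpenAPI spec flag: <=1 path and no descriptions (per the list above)."""
    return path_count <= 1 and not has_descriptions

def apply_flags(raw_score: float, flags_fired: list[str]) -> float:
    """Cap a dimension's raw score if any anti-gaming flag fired."""
    return min(raw_score, FLAG_CAP) if flags_fired else raw_score
```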

Report Statuses

Unverified Public Scan

Default status. Automated scan of public signals, not reviewed by the API vendor.

Vendor Reviewed

The API vendor has reviewed the report and confirmed or corrected the findings.

Indexera Verified

Indexera team has independently verified the report with additional manual checks.

No Pay-to-Play

Indexera does not accept payment in exchange for higher scores, preferential ranking, or suppression of findings. All assessments are based on objective, publicly observable criteria.

API providers may request updates or dispute findings at no cost. Disputes are reviewed against the methodology and updated if warranted.

Safe-Harbor Disclaimer

Indexera reports are informational assessments based on publicly observable documentation and signals. They are not legal advice, security certifications, compliance audits, or guarantees of fitness for any purpose.

Scores reflect the availability and clarity of public documentation at the time of scanning. A low score indicates insufficient public evidence to integrate confidently — not that the API is deficient, insecure, or non-compliant.

Developers should use Indexera assessments as one input among many when evaluating APIs, alongside their own testing, security review, and due diligence.

Dispute, Update, or Feedback

API providers can request re-scans or dispute findings. We welcome methodology feedback from everyone.
