Editorial Policy

How we source, evaluate, and present evidence for dietary supplements.

Our Approach

Priolix is an automated evidence platform. We do not employ medical writers or editors. Instead, we use a reproducible computational pipeline to discover, score, and present research evidence alongside product data. Every step of this pipeline is documented below so you can evaluate our methods and their limitations.

Evidence Sources

We index the following source classes, each with distinct provenance and quality characteristics:

| Source Class | Origin | Count | Verification |
| --- | --- | --- | --- |
| PubMed Articles | NLM E-utilities API | ~9,000 | PMID-verified, metadata from API |
| Cochrane Reviews | PubMed (Cochrane publication type) | 725 | PMID-verified, Cochrane flag set |
| Clinical Trials | ClinicalTrials.gov v2 API | 3,477 | NCT ID-verified, status tracked |
| Guidelines | PubMed (guideline publication type) | 1,313 | PMID-verified, MeSH-enriched |
| ODS Fact Sheets | NIH Office of Dietary Supplements | 65 | URL-verified, curated |
| EFSA Opinions | EFSA Journal (regulatory) | 30 | DOI-verified, regulatory class |
| FDA CAERS | FDA adverse event reports | 62 | Substance-aggregated from raw events |
| DRI Tables | IOM/NAM nutrient reference values | 31 | Curated from official publications |
| Drug-Supplement Interactions | Curated from literature | 30 | Severity-rated, source-cited |

All PubMed-sourced records are verified via the NCBI E-utilities API using valid PMIDs. We do not fabricate or hallucinate citations. Our original corpus was rebuilt from scratch in April 2026 after discovering 20 hallucinated PMIDs from an earlier AI-generated seed set.
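Batch verification against the NCBI E-utilities ESummary endpoint can be sketched as follows. This is an illustrative sketch, not our exact implementation: the function name is hypothetical, and only URL construction is shown (a PMID absent from the ESummary response would be treated as unverified).

```python
from urllib.parse import urlencode

ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def esummary_url(pmids):
    """Build an ESummary request for a batch of PMIDs.

    Fetching this URL returns JSON metadata for every PMID that
    actually exists in PubMed; any PMID missing from the response
    fails verification.
    """
    params = {
        "db": "pubmed",                              # search the PubMed database
        "id": ",".join(str(p) for p in pmids),       # comma-separated PMID batch
        "retmode": "json",                           # machine-readable response
    }
    return f"{ESUMMARY}?{urlencode(params)}"
```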

Quality Scoring Methodology

Each research source receives a quality score (0–100) computed from five weighted dimensions:

| Dimension | Max Points | What It Measures |
| --- | --- | --- |
| Study Design | 28 | Meta-analysis (28), guideline (28), systematic review (24), regulatory (22), RCT (20), etc. |
| Citation Impact | 10 | OpenAlex citation count, normalized by age. Defaults by study type if unavailable. |
| Journal Quality | 10 | Top-tier journals (Lancet, JAMA, NEJM, BMJ, Cochrane, EFSA) get 10; mid-tier 7; other 4. |
| Metadata Completeness | 2 | Bonus for MeSH terms, DOI, OpenAlex enrichment, Cochrane/regulatory flags. |
| Sample Size | 10 | Log-scaled participant count. Defaults by study type if not reported. |

Recency bonus: up to +5 for sources published within the last 3 years.
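The scoring above can be sketched roughly as follows. The per-dimension point values mirror the table, but the helper names, the citation and sample-size scaling, and the linear rescaling of raw points (max 65 including the recency bonus) onto the 0–100 scale are assumptions for illustration, not our exact formula.

```python
import math

# Study-design points from the table; unlisted designs get a
# hypothetical mid-range default.
DESIGN_POINTS = {
    "meta-analysis": 28, "guideline": 28, "systematic-review": 24,
    "regulatory": 22, "rct": 20,
}
JOURNAL_POINTS = {"top": 10, "mid": 7, "other": 4}

def quality_score(design, journal_tier, citations_per_year,
                  sample_size, metadata_bonus, years_old):
    raw = DESIGN_POINTS.get(design, 10)                   # study design (max 28)
    raw += min(10, citations_per_year)                    # citation impact (max 10)
    raw += JOURNAL_POINTS.get(journal_tier, 4)            # journal quality (max 10)
    raw += min(2, metadata_bonus)                         # metadata completeness (max 2)
    raw += min(10, math.log10(max(sample_size, 1)) * 3)   # log-scaled sample size (max 10)
    if years_old <= 3:
        raw += 5                                          # recency bonus
    # Assumed: raw points (max 65) rescaled linearly to 0-100.
    return round(raw / 65 * 100, 1)
```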

Evidence Tiers

Sources are classified into tiers based on their quality score:

| Tier | Score Range | Typical Content | Count |
| --- | --- | --- | --- |
| A | 75–100 | Meta-analyses in top journals, Cochrane reviews, clinical guidelines | 318 |
| B | 58–74 | Well-conducted RCTs, systematic reviews, high-quality trials | 5,044 |
| C | 40–57 | Observational studies, pilot trials, narrative reviews | 8,480 |
| D | 0–39 | Case reports, letters, editorials, low-metadata records | 322 |
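The tier boundaries above translate directly into a threshold check; the function name is illustrative:

```python
def evidence_tier(score: float) -> str:
    """Map a 0-100 quality score onto tiers A-D using the
    published boundaries (75, 58, 40)."""
    if score >= 75:
        return "A"
    if score >= 58:
        return "B"
    if score >= 40:
        return "C"
    return "D"
```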

Product Safety Scoring

Each product receives a safety score (0–100) computed from three components:

  • Unsupported Claims (0–40 pts deducted): Health claims on the product label that lack Tier-A or Tier-B evidence support. More unsupported claims → lower score.
  • UL Exceedance (0–40 pts deducted): Ingredients that exceed the Tolerable Upper Intake Level (UL) established by IOM/NAM. Graduated penalty based on how far above UL.
  • Proprietary Blends (0–20 pts deducted): Products containing proprietary blends where individual ingredient doses are not disclosed. Penalty proportional to the number of undisclosed ingredients.

Score interpretation: 80–100 = no significant concerns; 50–79 = caution warranted; 0–49 = concerning (rare; typically products containing ephedra or similar flagged ingredients).
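A minimal sketch of this deduction model, assuming hypothetical per-item penalties (the exact per-claim, per-ingredient, and UL-graduation rates are not specified above, so the constants here are placeholders that respect only the 40/40/20 caps):

```python
def safety_score(unsupported_claims, worst_ul_ratio, undisclosed_ingredients):
    score = 100
    # Unsupported claims: capped at 40 pts; 8 pts per claim is an assumption.
    score -= min(40, unsupported_claims * 8)
    # UL exceedance: graduated penalty above the UL; the linear ramp
    # (20 pts per multiple of the UL) is an assumption.
    if worst_ul_ratio > 1:
        score -= min(40, (worst_ul_ratio - 1) * 20)
    # Proprietary blends: capped at 20 pts; 4 pts per undisclosed
    # ingredient is an assumption.
    score -= min(20, undisclosed_ingredients * 4)
    return max(0, round(score))
```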

AI Evidence Summaries

Supplement detail pages include AI-generated evidence summaries produced by an open-weight language model (Gemma 4 26B) running locally via Ollama. The generation process:

  1. Context assembly: Top 15 research sources for the supplement (by quality score) are selected, including title, abstract snippet, quality tier, and evidence level.
  2. Prompt construction: A structured prompt asks the model to produce a JSON summary with: overview, evidence by condition (with strength rating), key findings, limitations, and safety notes.
  3. Generation: The model generates a response with temperature 0.3 and repeat penalty 1.5 to minimize hallucination.
  4. Validation: JSON output is parsed with robust extraction (handles markdown fences, nested objects). Required fields are validated; missing fields are filled with "Not available."
  5. Storage: Summaries are stored in SQLite with the generation timestamp. They can be regenerated on demand.
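The validation step (4) can be sketched as follows. This is an illustrative sketch of fence-tolerant JSON extraction, not our exact parser, and the required field names are assumed from the prompt structure described in step 2:

```python
import json
import re

# Field names assumed from the prompt description above.
REQUIRED_FIELDS = ("overview", "evidence_by_condition", "key_findings",
                   "limitations", "safety_notes")

def extract_summary(raw: str) -> dict:
    """Pull a JSON object out of model output, tolerating ```json fences,
    then backfill any missing required fields."""
    # Strip opening and closing markdown fences if present.
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip(),
                  flags=re.MULTILINE)
    # Parse from the first "{" to the last "}" to skip stray prose.
    start, end = text.find("{"), text.rfind("}")
    summary = json.loads(text[start:end + 1])
    for field in REQUIRED_FIELDS:
        summary.setdefault(field, "Not available")
    return summary
```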

Limitations: AI summaries may oversimplify complex evidence, omit important caveats, or produce inaccurate strength ratings. They are grounded in real sources but should not be treated as authoritative. Always review the underlying research directly.

Data Freshness

Our pipeline is designed for incremental updates:

  • Every 6 hours: Incremental PubMed scan for new publications matching our supplement verticals.
  • Weekly: Full sweep for new Cochrane reviews, meta-analyses, and RCTs.
  • Monthly: DSLD product refresh for new and updated product labels.
  • On demand: AI evidence summaries can be regenerated for any supplement.
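The scheduled cadences above could be expressed as a conventional crontab; the script paths are placeholders, not our actual job names:

```shell
# Hypothetical crontab mirroring the update cadence
0 */6 * * *  pipeline/incremental_pubmed_scan.sh   # every 6 hours
0 3 * * 0    pipeline/weekly_review_sweep.sh       # weekly (Sunday 03:00)
0 4 1 * *    pipeline/dsld_product_refresh.sh      # monthly (1st, 04:00)
```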

Each source's retrieval date is tracked in our database. The "last updated" timestamp on supplement pages reflects the most recent source addition or AI summary generation.

Corrections & Feedback

If you find an error in our data — a misattributed study, an incorrect dose, or a broken citation — we want to know. Our pipeline is automated but not infallible. Contact us with the specific product or supplement page URL and a description of the issue.