Editorial Policy
How we source, evaluate, and present evidence for dietary supplements.
Our Approach
Priolix is an automated evidence platform. We do not employ medical writers or editors. Instead, we use a reproducible computational pipeline to discover, score, and present research evidence alongside product data. Every step of this pipeline is documented below so you can evaluate our methods and their limitations.
Evidence Sources
We index the following source classes, each with distinct provenance and quality characteristics:
| Source Class | Origin | Count | Verification |
|---|---|---|---|
| PubMed Articles | NLM E-utilities API | ~9,000 | PMID-verified, metadata from API |
| Cochrane Reviews | PubMed (Cochrane publication type) | 725 | PMID-verified, Cochrane flag set |
| Clinical Trials | ClinicalTrials.gov v2 API | 3,477 | NCT ID-verified, status tracked |
| Guidelines | PubMed (guideline publication type) | 1,313 | PMID-verified, MeSH-enriched |
| ODS Fact Sheets | NIH Office of Dietary Supplements | 65 | URL-verified, curated |
| EFSA Opinions | EFSA Journal (regulatory) | 30 | DOI-verified, regulatory class |
| FDA CAERS | FDA adverse event reports | 62 | Substance-aggregated from raw events |
| DRI Tables | IOM/NAM nutrient reference values | 31 | Curated from official publications |
| Drug-Supplement Interactions | Curated from literature | 30 | Severity-rated, source-cited |
Every PubMed-sourced record is verified against the NCBI E-utilities API by its PMID. We do not fabricate or hallucinate citations. The corpus was rebuilt from scratch in April 2026 after we discovered 20 hallucinated PMIDs in an earlier AI-generated seed set.
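As an illustration of this verification step, the sketch below checks a batch of PMIDs against the NCBI ESummary endpoint and keeps only the IDs the API recognizes. The function name and batch handling are our own illustration, not the production pipeline.

```python
import requests

ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def verify_pmids(pmids: list[str]) -> set[str]:
    """Return the subset of PMIDs that NCBI ESummary recognizes."""
    resp = requests.get(
        ESUMMARY,
        params={"db": "pubmed", "id": ",".join(pmids), "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json().get("result", {})
    # Invalid PMIDs come back with an "error" field instead of metadata.
    return {p for p in pmids if p in result and "error" not in result[p]}
```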
Quality Scoring Methodology
Each research source receives a quality score (0–100) computed from five weighted dimensions:
| Dimension | Max Points | What It Measures |
|---|---|---|
| Study Design | 28 | Meta-analysis (28), systematic review (24), RCT (20), guideline (28), regulatory (22), etc. |
| Citation Impact | 10 | OpenAlex citation count, normalized by age. Defaults by study type if unavailable. |
| Journal Quality | 10 | Top-tier journals (Lancet, JAMA, NEJM, BMJ, Cochrane, EFSA) get 10; mid-tier 7; other 4. |
| Metadata Completeness | 2 | Bonus for MeSH terms, DOI, OpenAlex enrichment, Cochrane/regulatory flags. |
| Sample Size | 10 | Log-scaled participant count. Defaults by study type if not reported. |
Recency bonus: up to +5 for sources published within the last 3 years.
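Putting the table together, a minimal sketch of the scoring arithmetic might look like the following. Note that the published maxima sum to 60 points plus the 5-point recency bonus; the rescaling of that raw total onto the 0–100 range is an assumption on our part, since the exact normalization is not stated above.

```python
# Points from the "Study Design" row, abbreviated to the types listed above.
DESIGN_POINTS = {"meta-analysis": 28, "guideline": 28, "systematic-review": 24,
                 "regulatory": 22, "rct": 20}

def quality_score(design: str, citation_pts: float, journal_pts: float,
                  metadata_pts: float, sample_pts: float, years_old: int) -> int:
    raw = (DESIGN_POINTS.get(design, 10)  # assumed fallback for other designs
           + citation_pts + journal_pts + metadata_pts + sample_pts)
    if years_old <= 3:
        raw += 5  # "up to +5"; a flat bonus is assumed here for simplicity
    # Assumption: rescale the 0-65 raw total onto the published 0-100 scale.
    return round(raw / 65 * 100)
```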
Evidence Tiers
Sources are classified into tiers based on their quality score:
| Tier | Score Range | Typical Content | Count |
|---|---|---|---|
| A | 75–100 | Meta-analyses in top journals, Cochrane reviews, clinical guidelines | 318 |
| B | 58–74 | Well-conducted RCTs, systematic reviews, high-quality trials | 5,044 |
| C | 40–57 | Observational studies, pilot trials, narrative reviews | 8,480 |
| D | 0–39 | Case reports, letters, editorials, low-metadata records | 322 |
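The tier cutoffs translate directly into a threshold lookup; a minimal sketch:

```python
def evidence_tier(score: float) -> str:
    """Map a 0-100 quality score to the tiers in the table above."""
    if score >= 75:
        return "A"
    if score >= 58:
        return "B"
    if score >= 40:
        return "C"
    return "D"
```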
Product Safety Scoring
Each product receives a safety score (0–100), computed by subtracting three penalty components from a base of 100:
- Unsupported Claims (0–40 pts deducted): Health claims on the product label that lack Tier-A or Tier-B evidence support; the more unsupported claims, the lower the score.
- UL Exceedance (0–40 pts deducted): Ingredients that exceed the Tolerable Upper Intake Level (UL) established by IOM/NAM. Graduated penalty based on how far above UL.
- Proprietary Blends (0–20 pts deducted): Products containing proprietary blends where individual ingredient doses are not disclosed. Penalty proportional to the number of undisclosed ingredients.
Score interpretation: 80–100 = no significant concerns; 50–79 = caution warranted; 0–49 = concerning (rare; typically products containing ephedra or similar flagged ingredients).
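A minimal sketch of this deduction scheme follows. The exact penalty curves are not specified above, so the per-item increments and the UL ramp here are illustrative assumptions, not the production formula.

```python
def safety_score(unsupported_claims: int, max_ul_ratio: float,
                 undisclosed_ingredients: int) -> int:
    """Start at 100 and apply the three capped deductions."""
    claims_pen = min(40, 8 * unsupported_claims)          # assumed 8 pts per claim
    ul_pen = 0
    if max_ul_ratio > 1.0:  # highest ingredient dose relative to its UL
        ul_pen = min(40, round(20 * (max_ul_ratio - 1)))  # assumed graduated ramp
    blend_pen = min(20, 4 * undisclosed_ingredients)      # assumed 4 pts per hidden dose
    return max(0, 100 - claims_pen - ul_pen - blend_pen)
```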
AI Evidence Summaries
Supplement detail pages include AI-generated evidence summaries produced by an open-weight language model (Gemma 4 26B) running locally via Ollama. The generation process:
- Context assembly: Top 15 research sources for the supplement (by quality score) are selected, including title, abstract snippet, quality tier, and evidence level.
- Prompt construction: A structured prompt asks the model to produce a JSON summary with: overview, evidence by condition (with strength rating), key findings, limitations, and safety notes.
- Generation: The model generates a response at temperature 0.3 with a repeat penalty of 1.5, settings chosen to reduce the likelihood of hallucinated content.
- Validation: JSON output is parsed with robust extraction (handles markdown fences, nested objects). Required fields are validated; missing fields are filled with "Not available."
- Storage: Summaries are stored in SQLite with the generation timestamp. They can be regenerated on demand.
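A condensed sketch of steps 3–4 is shown below, using Ollama's local HTTP generate endpoint. The model tag and the JSON field names are assumptions for illustration; match them to your local setup and the actual prompt schema.

```python
import json
import re

import requests

def generate_summary(prompt: str) -> dict:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={
            "model": "gemma-4-26b",  # hypothetical tag; use your local model name
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": 0.3, "repeat_penalty": 1.5},
        },
        timeout=300,
    )
    resp.raise_for_status()
    raw = resp.json()["response"]
    # Robust extraction: grab the outermost JSON object, ignoring markdown fences.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    summary = json.loads(match.group(0))
    # Validate required fields; fill gaps exactly as the policy describes.
    for field in ("overview", "evidence_by_condition", "key_findings",
                  "limitations", "safety_notes"):  # assumed field names
        summary.setdefault(field, "Not available")
    return summary
```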
Limitations: AI summaries may oversimplify complex evidence, omit important caveats, or produce inaccurate strength ratings. They are grounded in real sources but should not be treated as authoritative. Always review the underlying research directly.
Data Freshness
Our pipeline is designed for incremental updates:
- Every 6 hours: Incremental PubMed scan for new publications matching our supplement verticals.
- Weekly: Full sweep for new Cochrane reviews, meta-analyses, and RCTs.
- Monthly: DSLD product refresh for new and updated product labels.
- On demand: AI evidence summaries can be regenerated for any supplement.
Each source's retrieval date is tracked in our database. The "last updated" timestamp on supplement pages reflects the most recent source addition or AI summary generation.
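For example, the six-hour incremental scan can be approximated with a day-granular ESearch query (the E-utilities reldate parameter only accepts whole days), deduplicating against already-stored PMIDs. A hedged sketch:

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def incremental_scan(query: str, known_pmids: set[str]) -> set[str]:
    """Fetch PMIDs that entered PubMed in the last day and drop ones we have."""
    resp = requests.get(
        ESEARCH,
        params={
            "db": "pubmed",
            "term": query,        # e.g. one supplement vertical's search term
            "datetype": "edat",   # Entrez date: when the record entered PubMed
            "reldate": 1,         # finest granularity is whole days
            "retmax": 500,
            "retmode": "json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    new = set(resp.json()["esearchresult"]["idlist"])
    return new - known_pmids
```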
Corrections & Feedback
If you find an error in our data — a misattributed study, an incorrect dose, or a broken citation — we want to know. Our pipeline is automated but not infallible. Contact us with the specific product or supplement page URL and a description of the issue.