How We Work
Editorial Policy
Weight Loss Rankings publishes YMYL (Your Money or Your Life) health content. Every clinical claim is verified against a primary source before publication. Here is exactly how that process works, who runs it, and the role of AI tooling in our workflow.
Who writes Weight Loss Rankings
The site is written and edited by Eli Marsden, founding editor. Eli spent his early career in healthcare investment banking and the past 10+ years building pharmaceutical companies, leading drug development programs through FDA approval and serving as a named inventor on multiple patents for new therapeutics. He is bilingual (English and Spanish) and personally edits both the English and the Spanish corpus on this site.
Eli is not a licensed clinician. He brings deep pharmaceutical industry experience to the editorial role but does not hold a medical degree, nursing license, or pharmacy license. We are explicit about this on every YMYL page so readers know what kind of expertise the content is anchored in.
We are actively seeking a credentialed medical reviewer (MD, PharmD, NP, RN, or RD) to join the editorial team on a freelance / consultant basis. When that role is filled, the byline on every clinical article will switch from “Editorially reviewed (not clinically reviewed)” to “Medically reviewed by [Name, Credentials]” automatically — the infrastructure is already in place. If you are a credentialed clinician interested in this role, see the open posting on our Careers page for details and how to apply.
The 125% verification standard
For YMYL content, “good enough” is not the bar. Every clinical claim is verified at least twice against independent primary sources before publication. We call this the 125% standard — we want to be more accurate than necessary, not less.
Concrete example of what 125% looks like in practice: when we publish that a GLP-1 trial showed X% mean weight loss over Y weeks at Z dose, we cite the published NEJM/JAMA paper with PMID, we cite the FDA prescribing information that quotes the same trial in the label, and we cite the clinicaltrials.gov registration. Three independent primary sources for one number. If any of the three disagrees, we either reconcile the discrepancy in the article body or hold the claim until we can.
How we source clinical claims
The acceptable source hierarchy for clinical claims:
- FDA prescribing information (the official drug label) — the highest-trust source for approved indications, dosing, contraindications, and warnings.
- PubMed-indexed peer-reviewed primary research with PMID — for trial efficacy, mechanism, and safety data not yet on the FDA label.
- Published regulatory filings — FDA warning letters (with the actual letter URL on fda.gov), DOJ press releases, FTC enforcement orders, court records (PACER, CourtListener) for litigation claims.
- ClinicalTrials.gov — for trial design, enrollment, and primary endpoints.
What we do NOT cite for clinical claims: press releases, vendor blog posts, aggregator review sites, “GLP-1 best of” listicles, social media, AI-generated summaries, or our own previously published articles (recursive sourcing is not sourcing).
How we source provider data
Pricing, state coverage, drug formulary, accreditations, and pharmacy partners for every telehealth provider in our dataset are sourced directly from the live provider page on the date of verification:
- The verification URL is stored in the provider record under verification.source_urls
- The verification date is surfaced visibly on every review page as “Last verified [date]”
- The confidence tier (high / medium / low) reflects how independently verifiable the provider's claims are — if a provider does not publish a state list, we explicitly mark it as low confidence rather than pretending we know
- Providers are re-verified on a recurring cadence; bulk pricing changes trigger a re-verification pass on affected articles
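As an illustration of the record shape described above, a provider entry might look like the sketch below. Only verification.source_urls, the “Last verified” date, and the confidence tier are named in this policy; every other field name, the 90-day cadence, and the provider itself are assumptions for the example, not our actual schema.

```python
from datetime import date

# Hypothetical provider record. Only verification.source_urls,
# last_verified, and confidence are named in the policy above;
# all other fields are illustrative assumptions.
provider = {
    "name": "Example Telehealth Co",       # assumed field
    "pricing_usd_per_month": 299,          # assumed field
    "states_covered": ["CA", "NY", "TX"],  # assumed field
    "verification": {
        "source_urls": [                   # named in the policy
            "https://example-telehealth.test/pricing",
        ],
        "last_verified": "2024-01-15",
        # "low" when the provider publishes no independently
        # checkable list, per the policy above
        "confidence": "high",
    },
}

def needs_reverification(record, today, max_age_days=90):
    """Flag records older than an assumed 90-day re-verification cadence."""
    verified = date.fromisoformat(record["verification"]["last_verified"])
    return (today - verified).days > max_age_days
```

A bulk pricing change would simply mark every affected record for a fresh pass, e.g. by checking them with max_age_days=0.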
The role of AI tooling in our workflow
We use AI assistance for drafting, formatting, structured data generation, and verification scaffolding. We do not publish content that has not been read, fact-checked, and approved by a named human editor — every word on every page goes through Eli before it ships.
Specifically, AI tooling is used to:
- Draft article structure — turning a research outline into a first-pass article that the human editor then rewrites and verifies
- Cross-check citations — confirming that cited PMIDs resolve to the claimed paper, that cited FDA URLs return the actual letter, that cited court cases exist in PACER
- Maintain JSON-LD structured data — generating Product, Review, MedicalWebPage, and Person schemas from the provider and author registries
- Surface consistency issues — flagging when a claim in one article contradicts a claim in another
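The PMID cross-check above can be sketched against NCBI's public E-utilities esummary endpoint. This is a minimal illustration of the idea, not our actual pipeline; the title-normalization heuristic is an assumption.

```python
import json
import re
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def fetch_pubmed_title(pmid: str) -> str:
    """Fetch the article title NCBI has on record for a given PMID."""
    url = f"{EUTILS}?db=pubmed&id={pmid}&retmode=json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["result"][pmid]["title"]

def titles_match(claimed: str, fetched: str) -> bool:
    """Case- and punctuation-insensitive comparison, so trivial
    formatting differences do not raise false mismatches."""
    norm = lambda s: re.sub(r"[^a-z0-9]+", " ", s.lower()).strip()
    return norm(claimed) == norm(fetched)
```

A citation passes only when the title the article claims for a PMID matches what NCBI actually returns; any mismatch is routed to the human editor rather than auto-corrected.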
AI tooling is never used to:
- Generate or fabricate clinical claims, trial numbers, pricing data, or state coverage lists
- Write the final published version of any YMYL claim without human verification
- Generate or fabricate citations, PMIDs, FDA letter URLs, or court case numbers
- Auto-translate Spanish content (the /es/ corpus is first-party translated and edited by the same bilingual editor)
This policy follows Google's public guidance: AI-assisted content is acceptable when it is high quality and human-verified, and is treated as scaled content abuse when it is mass-generated without added value or verification.
The 6-dimension scoring rubric
Provider scores are calculated from a transparent 6-dimension rubric: value, effectiveness, user experience, trust & safety, accessibility, and ongoing support. Each dimension is weighted (value 25%, effectiveness 25%, UX 15%, trust 15%, accessibility 10%, support 10%) and the breakdown is visible on every review page. The full methodology is published at /methodology.
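As arithmetic, the overall score is a weighted sum of the six dimension scores. The weights below are the published ones; the 0–10 scale and the example provider are assumptions for illustration only.

```python
# Published dimension weights (sum to 1.0)
WEIGHTS = {
    "value": 0.25,
    "effectiveness": 0.25,
    "user_experience": 0.15,
    "trust_safety": 0.15,
    "accessibility": 0.10,
    "ongoing_support": 0.10,
}

def overall_score(dimension_scores: dict) -> float:
    """Weighted sum of per-dimension scores (assumed 0-10 scale)."""
    assert set(dimension_scores) == set(WEIGHTS), "all six dimensions required"
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())

# Hypothetical provider: strong value and effectiveness, average elsewhere
example = {
    "value": 9.0,
    "effectiveness": 8.0,
    "user_experience": 7.0,
    "trust_safety": 8.0,
    "accessibility": 6.0,
    "ongoing_support": 7.0,
}
```

With these numbers the weighted sum works out to 7.8, which is the figure that would appear in the visible breakdown on the review page.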
Independence and conflicts of interest
Weight Loss Rankings is reader-supported via affiliate commissions on a subset of provider links. Editorial scores and rankings are produced independently of affiliate relationships and are not for sale.
Specific independence guarantees:
- We rank providers by editorial score, not by affiliate payout. The Editor's Pick on the homepage is whichever provider has the highest overall score among the Katalys-approved set, and the score is calculated from the rubric above before the Katalys approval status is even checked
- We disclose when a covered provider has known FDA enforcement history, pending litigation, or BBB complaints — including providers we monetize. See the Direct Meds and Zealthy review pages for examples
- We list providers we do not monetize alongside ones we do, with no visual distinction in the ranking — the affiliate disclosure is at the link level, not the listing level
- When a provider we cover has a direct conflict with the founding editor's industry background (e.g., a portfolio company, a brand he holds patents on), the conflict is disclosed inline on that provider's review page
The full affiliate disclosure is at /disclosure. Errors get corrected per /corrections regardless of whether the affected party is an affiliate partner.
What we will never do
- Publish a clinical claim without a primary source
- Auto-generate content without human review
- Hide or rewrite a published claim that turned out to be wrong (we mark corrections visibly per /corrections)
- Accept payment to influence editorial scores or rankings
- Cite our own articles as primary sources for claims they never independently verified
- Pretend we have credentials we do not have
- Publish AI-translated Spanish content (the /es/ corpus is first-party translated and edited by the same editor)
How to reach us about editorial questions
Editorial questions, source requests, factual disputes, and correction requests: hello@weightlossrankings.org. We respond within one business day.