Now onboarding founding partners

Know exactly who is making your AI smarter

Most AI companies get human feedback from anonymous crowds or opaque managed services. Kelva is different — a transparent marketplace of credentialed doctors, lawyers, and researchers, accessible via API.

Request Enterprise Access Join as a Validator
$14B+
RLHF market size
97%
Validator accuracy
<4hr
Average turnaround
100%
Credentials verified

Built for teams building the future of AI

Medical AI · Legal AI · Foundation Models · AI Safety Labs · Enterprise AI · Research Institutions · Financial AI · Government AI
Why Kelva
Not all human feedback is created equal
The AI industry relies on three models for human feedback. Only one gives you full transparency into who is evaluating your model.
Anonymous Crowds

The volume play

Millions of unverified workers completing tasks for cents. Fast and cheap — but you have no idea who's rating your medical AI.

  • No credential verification
  • No transparency into who reviewed
  • No domain expertise guarantee
  • Low cost per task
  • High volume capacity
Managed Services

The black box

Premium providers with expert annotators — but you can't see who they are, verify their credentials, or access them directly.

  • Some expert matching
  • No transparency into reviewers
  • No self-serve API access
  • No portable validator scores
  • High cost, long contracts
Kelva

The transparent marketplace

Credentialed professionals with verified licenses, visible accuracy scores, and full audit trails — accessible via self-serve API.

  • Every credential independently verified
  • See exactly who reviewed your data
  • Self-serve API — no sales calls needed
  • Portable Kelva Score for validators
  • Pay per task, no lock-in
The Network
Every expert, fully transparent
Unlike other platforms, Kelva shows you exactly who evaluated your AI — their credentials, accuracy history, and domain expertise. No anonymous labor. No black boxes.
🧡

Dr. Sarah Chen

Interventional Cardiologist
Mount Sinai Hospital, NY
97.1%
Accuracy
1,247
Tasks
Kelva Score 94
⚖️

James Whitfield, JD

Corporate & M&A Attorney
Columbia Law, Bar: NY & CA
94.8%
Accuracy
893
Tasks
Kelva Score 91
🔬

Prof. Anika Patel, PhD

Computational Biology
University of Toronto
96.3%
Accuracy
2,104
Tasks
Kelva Score 96
💰

Michael Torres, CFA

Quantitative Analyst
Wharton MBA, Series 7 & 63
93.5%
Accuracy
671
Tasks
Kelva Score 89
Validator profiles shown with permission. All credentials independently verified.
Validator Tiers
A clear hierarchy. Visible to every customer.
Every validator's tier is shown in the audit trail. You always know exactly who reviewed your data and what their qualifications are.
TIER 1
Elite
Senior practitioners with deep specialization and institutional affiliation.
🧡 Board Certified Specialists
🎓 University Faculty
⚖️ Bar-Admitted Senior Attorneys
Access All tasks · highest-stakes reviews
TIER 2
Licensed
Fully credentialed professionals in active practice.
🩸 Practicing Physicians (MD)
⚖️ Licensed Attorneys
💰 CFA Charterholders
Access Standard and complex tasks
TIER 3
Advanced
Late-stage trainees with substantial domain expertise, not yet fully credentialed.
🧠 Medical Residents & Fellows
🔬 PhD Candidates (research domain)
🎓 Final-Year Law Students (3L)
Access Mid-complexity tasks · lower rates
TIER 4
Emerging
Senior students from top institutions, in their final year of study.
🏥 Final-Year Medical Students
🎓 Senior Graduate Researchers
📚 Specialized MS/MA Candidates
Access Foundational tasks only

Enterprise customers control which tiers can review their data. Need only Tier 1 board-certified specialists for clinical AI? Set the minimum credential and Kelva routes accordingly. Every audit trail shows the validator's tier transparently.
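Conceptually, the minimum-credential routing described above is a tier filter. The sketch below is illustrative only, not Kelva's actual implementation; the validator records and lowercase tier names are assumptions:

```python
# Illustrative sketch of minimum-credential routing (not Kelva's actual
# implementation; validator records and tier names are assumptions).
TIER_RANK = {"elite": 1, "licensed": 2, "advanced": 3, "emerging": 4}

def eligible_validators(validators, min_tier):
    """Keep validators whose tier is at least as senior as min_tier."""
    cutoff = TIER_RANK[min_tier]
    return [v for v in validators if TIER_RANK[v["tier"]] <= cutoff]

pool = [
    {"name": "Dr. Sarah Chen", "tier": "elite"},
    {"name": "James Whitfield, JD", "tier": "licensed"},
    {"name": "Final-year 3L", "tier": "advanced"},
]

# A clinical-AI customer requiring Tier 1 only:
print([v["name"] for v in eligible_validators(pool, "elite")])  # -> ['Dr. Sarah Chen']
```

Setting a looser minimum (e.g. `"licensed"`) simply widens the eligible pool while still excluding trainee tiers.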

How It Works
Two sides. One platform.
Kelva is the marketplace where credentialed professionals and AI companies find each other — with full transparency on both sides.
⚕️
For Validators

Get paid for your expertise

Doctors, lawyers, researchers, and journalists — earn $9 to $40 per standard task, and more for in-depth expert reviews, evaluating AI outputs in your field.

01
Verify credentials — submit your medical license, bar ID, or faculty appointment
02
Complete matched tasks — rate AI responses, flag hallucinations, rank accuracy
03
Build your Kelva Score — a portable credential you can put on your LinkedIn
🏢
For AI Companies

Expert feedback at API speed

Submit AI outputs via API, specify the credentials you need, and get transparent evaluations back within hours.

01
Upload via API or dashboard — submit model outputs in batch or real-time
02
Auto-matched to experts — our engine routes to the right credentialed validators
03
See who reviewed — full audit trail with validator profiles, scores, and reasoning
🧡
Cardiology
97.1%
⚖️
Contract Law
94.8%
🧬
Oncology
96.3%
📰
Journalism
93.5%
💊
Pharmacology
95.7%
🏛️
IP Law
92.1%
🧠
Neurology
95.2%
📊
Finance
94.0%
🔬
Research
96.8%
The Moat

The Intelligence Graph

Every task completed adds to a proprietary map of who is accurate at what. Over time, Kelva knows that a specific cardiologist in Chicago is the most reliable reviewer of oncology AI responses.

This graph is the real product. It compounds with every task, gets smarter with every evaluation, and cannot be replicated by competitors starting from zero.

The Intelligence Graph transforms Kelva from a marketplace into infrastructure — the longer you use it, the better your results, the harder it is to leave.

Domains
Expert feedback where stakes are highest
AI companies building in regulated, high-liability verticals can't afford anonymous feedback.
🏥

Medical AI

Board-certified physicians evaluate clinical AI outputs. Full audit trail for FDA compliance and institutional review.

⚖️

Legal AI

Bar-admitted attorneys review AI-generated contracts and legal research. Credentialed validation for malpractice-sensitive products.

📰

Media & Fact-checking

Credentialed journalists flag hallucinations, verify claims, and evaluate AI-generated content against editorial standards.

💰

Financial AI

CFA charterholders review AI-generated investment analysis, risk assessments, and regulatory filings for accuracy.

🎓

Academic & Research

University faculty and PhD researchers validate AI outputs. Institutional partnerships with leading research universities.

🛡️

AI Safety & Red Teaming

Domain experts stress-test AI models for hallucinations, bias, and safety failures with structured adversarial evaluation.

See It In Action
The product, not just the pitch
Click through both sides of the marketplace — what AI companies see and what validators experience.
app.kelva.ai/dashboard
Dashboard — Harvey AI
sk_live_kv_8f2a...x9d1
Tasks this month
4,218
Avg turnaround
2.4hr
Avg confidence
0.94
Active Batches
Batch · Domain · Tasks · Status
batch_8f2a9x · Cardiology · 250 · Complete
batch_3k7mzp · Contract Law · 180 · In Progress
batch_9w4htn · Pharmacology · 500 · In Progress
batch_2p8qvr · AI Safety · 100 · Matching
Results — batch_8f2a9x
● Complete — 250/250
Confidence
0.94
Agreement
0.91
Response B preferred
73%
Evaluations — Task #1
🧡
Dr. Sarah Chen
Interventional Cardiologist · Mount Sinai
94
Preferred Response B — specifies 90-minute door-to-balloon time per ACC/AHA guidelines. Includes ticagrelor loading dose for dual antiplatelet therapy. Correctly notes RV involvement check before nitroglycerin.
🩸
Dr. Marcus Williams
Emergency Medicine · NYU Langone
91
Preferred Response B — from an EM perspective, the 90-minute window is the decisive factor. Response A's "immediate intervention" is vague without specific time targets.
🧠
Dr. Priya Sharma, FACC
Cardiologist · Columbia Medical
96
Preferred Response B — both responses should have mentioned antiplatelet therapy (aspirin + P2Y12 inhibitor). Neither mentioned contraindications assessment for a real clinical scenario.
Your Validator Network
Validator · Institution · Tasks · Accuracy · Score
Dr. Priya Sharma · Columbia Medical · 1,583 · 98.2% · 96
Dr. Sarah Chen · Mount Sinai · 1,247 · 97.1% · 94
Dr. Ravi Mehta · Johns Hopkins · 2,104 · 95.7% · 93
James Whitfield, JD · Columbia Law · 893 · 94.8% · 91
Dr. Marcus Williams · NYU Langone · 892 · 93.5% · 91
app.kelva.ai/validate
Good morning, Dr. Chen
$2,847 this month
Today
$142
Accuracy
97.1%
Streak
12d
Available Tasks
Cardiology$16.00
Evaluate cardiac diagnosis AI responses
🕑 ~5 min · Preference ranking · Board Certified MD
Pharmacology$28.00
Flag drug interaction hallucinations
🕑 ~10 min · Hallucination detection · MD or PharmD
Oncology$85.00
Full treatment plan review — Stage III NSCLC
🕑 ~35 min · Expert review · Oncology specialist
Contract Law$24.00
Review AI-generated NDA clause
🔒 Bar Admitted JD required — locked
Wallet
$4,218.50
Available balance
Recent
💚
Cardiology — Preference
Today, 2:14 PM
+$16.00
💚
Pharmacology — Hallucination
Today, 11:32 AM
+$28.00
🏆
10-Day Streak Bonus
Today, 12:00 AM
+$25.00
💚
Oncology — Expert Review
Yesterday, 4:21 PM
+$85.00
🏆
Quality Bonus — 97%+ Accuracy
Mar 21
+$150.00
Your Kelva Score
94
Kelva Score
★ Elite Validator — Top 3%
Domain · Accuracy · Tasks
Cardiology · 97.1% · 847
Pharmacology · 95.7% · 289
General Medicine · 93.2% · 111
🔗 Share on LinkedIn · Copy Profile Link
For Developers
Self-serve. No sales calls. Just an API key.
Unlike managed services, Kelva gives you direct API access. Plug credentialed human feedback into your training pipeline in minutes.
python
# Submit AI outputs for credentialed evaluation
import kelva

client = kelva.Client("sk_live_...")

batch = client.tasks.create(
    domain="medical",
    task_type="preference_ranking",
    credential_min="board_certified",
    responses=[
        {"model": "gpt-5", "text": "..."},
        {"model": "claude-4", "text": "..."},
    ],
    validators_required=3,
)

# Full transparency — see who reviewed
results = batch.wait()
for r in results.evaluations:
    print(r.validator.name)         # "Dr. Sarah Chen"
    print(r.validator.kelva_score)  # 94
    print(r.confidence)             # 0.96

Self-serve from day one

Get an API key and start submitting tasks in minutes. No sales calls, no contracts, no minimums.

👤

Full reviewer transparency

See exactly who evaluated your data — their name, credentials, accuracy history, and Kelva Score. No anonymous labor.

🎯

Credential filtering

Specify board-certified, bar-admitted, PhD, or custom requirements. Only matching validators see your tasks.

🔒

Audit-ready compliance

Full audit trail for FDA, EU AI Act, and SOC 2. Every evaluation traceable to a verified professional.

Pricing
Marketplace pricing. Fully transparent.
Validators set their own rates by domain and credential level. Kelva takes a 20% platform fee. You see the total cost before you submit — no surprises, no hidden margins.
👤
Validator sets rate
Experts price their own time based on domain complexity and credential level
💲
You see total cost upfront
Review the per-task price before submitting. No commitments until you approve
⚖️
20% goes to Kelva
The rest goes directly to the validator. That's it. No subscriptions, no minimums

Starter

For teams exploring credentialed feedback

20%
platform fee per task
  • Up to 1,000 tasks/month
  • Dashboard access
  • Standard credential matching
  • Email support
  • 48-hour turnaround SLA
Most Popular

Growth

For teams integrating into their pipeline

20%
platform fee — volume discounts available
  • Up to 25,000 tasks/month
  • Full API access
  • Advanced credential filtering
  • Priority support & Slack
  • 4-hour turnaround SLA

Enterprise

For AI labs with custom requirements

Custom
negotiated rate for high volume
  • Unlimited tasks
  • Dedicated validator pool
  • Custom credential requirements
  • Dedicated account manager
  • Custom SLA & NDA
Security & Compliance
Enterprise-grade security. Audit-ready from day one.
Medical and legal AI companies can't afford data leaks. Kelva is built for regulated industries where compliance isn't optional.
🔒

End-to-end encryption

All data encrypted in transit (TLS 1.3) and at rest (AES-256). Your AI model outputs are never exposed to unauthorized parties at any point in the evaluation pipeline.

📋

Full audit trails

Every evaluation is traceable to a verified professional — who reviewed, when, what credentials they hold, what they said. Built for FDA submissions and EU AI Act documentation requirements.

👥

Dedicated validator pools

Enterprise customers can restrict access to pre-approved reviewers only. Your data is never seen by anyone you haven't vetted — full control over who touches your model outputs.

📜

NDA-protected workforce

Every validator signs a binding non-disclosure agreement during onboarding. Storing, screenshotting, or redistributing AI outputs is a contractual violation with legal consequences.

🏛️

Credential verification

Credentials are verified against issuing authorities — state medical boards, bar associations, university registrars. No self-reported expertise. Annual re-verification on all active validators.

🛠️

Infrastructure security

Hosted on SOC 2 certified cloud infrastructure. Role-based access controls, automated vulnerability scanning, incident response procedures, and 99.9% uptime SLA.

🛡️
SOC 2 Type II
In progress
🏥
HIPAA
Designed for compliance
🇪🇺
EU AI Act
Audit-trail ready
🔐
GDPR
Compliant by design
🔒
TLS 1.3 / AES-256
All data encrypted
About
Built by someone who's done this before
Yunus Özdemir
Founder & CEO
Belgium · Building for the US market

Kelva isn't my first time building verification infrastructure

Before Kelva, I built a media fact-checking platform for EU newsrooms — a system that verifies claims, scores source credibility, and flags AI-generated misinformation. That platform is currently in pilot with European news organizations.

Building that product taught me three things: how to verify human credentials at scale, how to build trust scoring systems that get smarter over time, and how to sell verification products to demanding institutional customers. Those are exactly the three capabilities Kelva requires.

The fact-checking platform isn't a dead project — it's proof of concept. The validator network architecture, the trust scoring logic, and the API infrastructure translate directly into the RLHF marketplace. I'm not starting from zero. I'm expanding from a working base.

⚖️
Credential verification at scale Built and shipped a system that verifies journalist credentials, source authenticity, and institutional affiliations across EU newsrooms
📊
Trust scoring infrastructure Designed reputation scoring algorithms that weight accuracy, consistency, and domain expertise — the same architecture behind the Kelva Score
🚀
Enterprise AI sales Sold verification tools to institutional customers who demand accuracy, audit trails, and compliance — exactly Kelva's target buyer

Looking for a technical co-founder

We're looking for an ML infrastructure engineer to join as co-founder. If you've built data pipelines at scale, let's talk.

Get in Touch
FAQ
Common questions
How do you verify validator credentials?
Every validator submits their professional license, board certification, bar admission, or academic appointment during onboarding. We independently verify each credential against the issuing authority — state medical boards, bar associations, university faculty directories. Verification typically takes 1-2 business days. Unverified users cannot access any tasks. We re-verify credentials annually and when licenses are up for renewal.
What is the Kelva Score?
The Kelva Score is a composite rating (0-100) that measures a validator's reliability across five dimensions: task accuracy (40%), inter-rater agreement (25%), reasoning quality (20%), consistency and volume (10%), and response time (5%). It's a portable credential — validators can share it on LinkedIn and professional profiles. For AI companies, the Kelva Score is the trust signal that replaces "we hope our annotators are good" with verifiable proof.
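As a worked example, the weighted composite described above can be computed as a weighted sum. Only the weights come from this FAQ; the sub-scores and the rounding are assumptions for illustration:

```python
# Illustrative weighted composite. Weights are from the FAQ above;
# the sub-score values and rounding behavior are assumptions.
WEIGHTS = {
    "task_accuracy": 0.40,
    "inter_rater_agreement": 0.25,
    "reasoning_quality": 0.20,
    "consistency_volume": 0.10,
    "response_time": 0.05,
}

def kelva_score(sub_scores):
    """Combine 0-100 sub-scores into a single 0-100 composite."""
    return round(sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS), 1)

print(kelva_score({
    "task_accuracy": 97.1,
    "inter_rater_agreement": 95.0,
    "reasoning_quality": 92.0,
    "consistency_volume": 90.0,
    "response_time": 88.0,
}))  # -> 94.4
```

Because accuracy carries 40% of the weight, a validator's task accuracy dominates the composite, which matches the scores shown in the validator profiles above.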
How is Kelva different from other human feedback platforms?
Two fundamental differences. First, transparency — other platforms are black boxes where you never see who reviewed your data. Kelva shows you exactly which credentialed professional evaluated your AI output, their accuracy history, and their reasoning. Second, access model — most competitors require sales calls, contracts, and minimum commitments. Kelva is a self-serve API marketplace. Get an API key and start submitting tasks in minutes, no sales calls needed.
How much does it cost?
Kelva is a marketplace. Validators set their own rates based on their domain expertise and the task complexity. You see the total cost per task before submitting — the validator's payment and Kelva's 20% platform fee combined. No subscriptions, no minimums, no lock-in. A simple preference ranking task might cost $16-20 total, while a deep expert review of a treatment protocol might be $85-100. You control the budget by choosing credential levels and task types.
What task types does Kelva support?
Kelva supports multiple task formats: preference ranking (which AI response is better), hallucination detection (flag factual errors in AI output), accuracy rating (score across multiple dimensions), free-form correction (rewrite the AI response), multi-turn dialogue evaluation, red teaming and adversarial testing, and full expert document review. Tasks range from 5-minute quick evaluations to 45-minute deep reviews, with pricing that scales accordingly.
How fast is turnaround?
Average turnaround is 2-4 hours for standard tasks. The Growth plan includes a 4-hour SLA, and Enterprise customers can negotiate custom turnaround guarantees. Results stream back as individual validators complete their evaluations, so you start seeing data within minutes of submission — you don't have to wait for the entire batch to finish.
Is my data secure?
Yes. All data is encrypted in transit and at rest. Validators operate under NDA and cannot store, screenshot, or redistribute any AI outputs they review. We are pursuing SOC 2 Type II certification and are designed to be HIPAA-compliant for medical AI customers. Enterprise customers can request dedicated validator pools where only pre-approved reviewers see their data. Full audit trails are maintained for regulatory compliance including EU AI Act requirements.
Does Kelva help with EU AI Act compliance?
Yes — this is one of Kelva's strongest use cases. The EU AI Act requires documented human oversight for high-risk AI systems, which includes medical and legal AI. Kelva provides a full audit trail showing exactly which credentialed professional reviewed each AI output, their qualifications, their assessment, and their reasoning. This documentation is designed to satisfy the human oversight requirements that regulators will be looking for.

Stop guessing. Start knowing.

Join the founding cohort of AI companies who demand transparency in human feedback — and the credentialed professionals who provide it.