nabu.science

Evaluating research quality.

The work, not the wrapper.

Right now, there's no reliable way to know whether a paper is sound without reading it yourself. Journal name doesn't tell you. Citation count doesn't tell you. Nabu does: structured, independent assessment of the article itself, producing quality signals that make discovery, evaluation, and funding decisions evidence-based rather than reputation-based.

Or upload a paper for evaluation →

Built on best practice. Made scalable.

Nabu takes the principles that make the best research evaluation work — structured rubrics, blinded review, multi-reviewer adjudication — and makes them systematic, consistent, and scalable using AI-enabled reviewers.

Standardised rubric

Every paper scored against the same explicit framework: four dimensions, twenty components, methodology-specific criteria. The rubric is public. The weights are documented.

Fully blinded

No author, journal, or institution signals. Reviewers see only the work and the publication year.

Multi-reviewer, adjudicated

Three independent reviewers per paper. Score divergence is flagged and resolved by an adjudicator on strength of evidence — not consensus, not averaging.

Transparent reasoning

Every score comes with documented rationale. You can read why a paper scored what it scored. That is not true of any citation metric.

Post-publication evolution

Evaluations update with practitioner feedback, replication signals, and reliability monitoring. A score is a living assessment, not a snapshot.

0.81

Inter-rater reliability (ICC₂)

Human peer review benchmark: 0.34

See methodology →

8 of 9

Retracted papers correctly flagged

Scored in the bottom tier and flagged Red — blind, with no knowledge of retraction status

See methodology →

6 tiers, all used

The rubric has teeth

Scores spread across the full range — 20% of papers score Weak or Poor. This is not grade inflation.

See methodology →

Our methodology is public, structured, and auditable.

Read the full evaluation framework — from rubric design and scoring calibration to validation results. Nabu holds itself to the same transparency standard it applies to research.

Peer review of the methodology is invited and in progress.

Sample evaluation

Safety and Efficacy of the BNT162b2 mRNA Covid-19 Vaccine

Polack FP, et al. · 2020 · Phase 2/3 RCT · Clinical Medicine

Quality: 4.3 / 5.0 · Very Good · No concerns
Contribution 4.6 · Craft 4.3 · Clarity 4.5 · Context 4.3

Impact Potential: 4.8 / 5.0 · High
Traction 5.0 · Translation 4.9 · Transferability 4.5 · Trajectory 4.6

For researchers

Compare articles on equal terms

Every paper evaluated with the same rubric, regardless of journal. No more relying on where it was published to guess whether it’s good.

Read the verdict before the paper

Dimension scores and rationale give you a structured summary of strengths and shortcomings before you commit to a full read.

Build a quality profile

A portable record of your work’s intrinsic quality, built from structured evaluation. Useful for grants, hiring, and promotion — independent of venue prestige.

Check what you’re citing

Import your reference library. See quality scores on the papers you’re building on.

For evaluators

Standardised, defensible assessment

Structured scores with documented rationale replace subjective impressions. Evaluation decisions become auditable and evidence-based.

Feedback, not ranking

Dimension-level scores turn evaluation into a development conversation. You can identify specific methodological strengths and gaps, not just produce a number.

DORA- and CoARA-compliant

Paper-level, methodology-based, venue-independent quality signals. Exactly what 3,000+ signatory institutions committed to.

For the first time, I can point to a structured assessment of my work that doesn’t reduce everything to which journal accepted it. The dimension-level breakdown is genuinely useful for understanding where my methodology is strong and where I need to improve.

Postdoctoral Researcher, Biomedical Sciences

We’ve been DORA signatories for three years, but we had nothing to replace Impact Factor with in practice. Nabu gives us structured, article-level quality signals that make our evaluation panels defensible.

Vice-Dean Research, Faculty of Social Sciences

The standardised rubric across methodology types is what sold me. When I’m screening 200 papers for a systematic review, having a consistent quality signal — especially on Craft — saves weeks of full-text assessment.

Senior Research Fellow, Evidence Synthesis

DORA- and CoARA-compliant by design

3,000+ institutions have signed DORA and CoARA commitments to evaluate research on intrinsic merit. Nabu is the first field-agnostic, auditable, rubric-anchored quality signal that makes those commitments operational.

Early access

Nabu is in early access. Sign up to search and view quality evaluations, import your reference library, upload papers for evaluation, and build your researcher quality profile.

Institutional or evaluator access? Get in touch