Built on BERT models trained on retracted papers (2021–2026): 600 cardiovascular abstracts + 1,200 oncology abstracts + 700 AI/ML & digital-health abstracts + 750 general-medicine abstracts + 4,200 journal-style abstracts (21 journals).
Ads are shown to free users. You can remove ads for this app by spending 10 credits (Account → Remove ads).
Retract is a lightweight screening tool that estimates how strongly an abstract resembles recently retracted biomedical papers. It provides domain-specific models and, optionally, a journal-style classifier.
Inspiration: Retract was inspired by the BMJ methodological study doi:10.1136/bmj-2025-087581, whose authors report that a similar screening model is integrated into the online submission systems of three journals from a major publisher and is being used to screen cancer-related manuscripts.
The screening models are hosted on Hugging Face. In Auto mode, Retract uses a deterministic router to choose the most likely domain (cardiovascular vs oncology vs general medicine; optional AI/ML branch if enabled). If the router is uncertain, it can fall back to a configured default branch. The journal-style model is separate and is used only when enabled on the server.
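As an illustration of the Auto-mode flow, here is a minimal sketch assuming a keyword-based router; the model IDs, keyword lists, threshold, and fallback configuration below are placeholders, since the app's actual router logic and model names are not published:

```python
# Minimal sketch of Auto-mode routing (assumptions: keyword-based router,
# placeholder Hugging Face model IDs -- not the app's real configuration).
from transformers import pipeline

DOMAIN_MODELS = {                      # hypothetical model IDs
    "cardiovascular": "org/retract-cardio",
    "oncology": "org/retract-oncology",
    "general": "org/retract-general-medicine",
}
DOMAIN_KEYWORDS = {                    # assumed routing vocabulary
    "cardiovascular": {"cardiac", "myocardial", "coronary", "heart"},
    "oncology": {"tumor", "cancer", "carcinoma", "chemotherapy"},
}
DEFAULT_BRANCH = "general"             # configured fallback branch

def route(abstract: str) -> str:
    """Deterministically pick a domain by keyword hits; fall back when uncertain."""
    words = set(abstract.lower().split())
    hits = {d: len(words & kw) for d, kw in DOMAIN_KEYWORDS.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else DEFAULT_BRANCH

def screen(abstract: str) -> dict:
    """Route the abstract, then score it with the chosen domain model."""
    branch = route(abstract)
    clf = pipeline("text-classification", model=DOMAIN_MODELS[branch])
    return {"branch": branch, **clf(abstract)[0]}
```

Routing before classification keeps each domain model focused and lets the configured fallback branch absorb abstracts the router cannot place.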
AI explanation: After screening, an LLM can provide a short interpretation of why the abstract may score higher or lower, plus practical clarity improvements. If the similarity score is ≥ 20%, it also proposes a rewrite suggestion (without adding new facts). If the abstract appears outside the selected domain, a scope note may warn that interpretation can be less reliable.
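As a sketch of how the explanation step could assemble its LLM prompt (the 20% threshold comes from the description above; the function name and prompt wording are illustrative, not the app's actual prompt):

```python
# Illustrative prompt assembly for the AI-explanation step.
# THRESHOLD matches the ">= 20%" rule described above; everything else
# (wording, structure) is an assumption.
THRESHOLD = 0.20

def build_prompt(abstract: str, score: float, in_domain: bool) -> str:
    parts = [
        f"Similarity to recently retracted papers: {score:.0%}.",
        "Briefly explain what may push this score higher or lower,",
        "and suggest practical clarity improvements.",
    ]
    if score >= THRESHOLD:
        parts.append("Also propose a rewrite, without adding new facts.")
    if not in_domain:
        parts.append("Scope note: the abstract may be outside the selected "
                     "domain, so interpretation can be less reliable.")
    parts.append("Abstract:\n" + abstract)
    return "\n".join(parts)
```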
Model training: AutoTrain (Text Classification).
Validation metrics:

Model                    Loss   F1      Precision  Recall  AUC    Accuracy
Cardiovascular           0.346  0.862   0.955      0.785   0.920  0.875
Oncology                 0.200  0.954   0.960      0.948   0.984  0.956
General medicine         0.259  0.911   0.885      0.939   0.973  0.910
AI/ML & digital-health   0.364  0.852   0.943      0.777   0.925  0.867
Journal-style            0.071  0.985*  —          —       —      0.985

*Macro-averaged F1; precision, recall, and AUC were not reported for this model.
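For reference, metrics like these are typically computed from held-out validation predictions; a scikit-learn sketch with placeholder labels and scores (assuming a binary task, retracted vs. not retracted):

```python
# Standard computation of the metrics reported above, on placeholder data.
from sklearn.metrics import (accuracy_score, f1_score, log_loss,
                             precision_score, recall_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # placeholder labels
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6]    # placeholder scores
y_pred = [int(p >= 0.5) for p in y_prob]             # assumed 0.5 cutoff

print("loss     ", round(log_loss(y_true, y_prob), 3))
print("F1       ", round(f1_score(y_true, y_pred), 3))
print("precision", round(precision_score(y_true, y_pred), 3))
print("recall   ", round(recall_score(y_true, y_pred), 3))
print("AUC      ", round(roc_auc_score(y_true, y_prob), 3))
print("accuracy ", round(accuracy_score(y_true, y_pred), 3))
# For the multi-class journal-style model, the macro F1 would be
# f1_score(y_true, y_pred, average="macro").
```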
Part of ML in Health Science ©.
When reporting a bug, please include a screenshot and your browser/device details.