The RetractionRisk Scanner helps researchers screen their references for retraction risks before manuscript submission. By identifying citations of retracted or potentially problematic papers, especially within core methodological sections, it helps researchers avoid having their own work retracted in turn. The tool serves a dual purpose: protecting the reputation of individual researchers and upholding the research integrity of the scientific community.
The motivation behind this platform stems from a critical gap identified in our own research. While we found that social media posts can serve as early warning signals for problematic papers[1], we also discovered a concerning limitation: even when retracted papers receive high levels of attention from news and social media, their post-retraction citation rates do not decline significantly compared to papers that did not receive attention[2].
This suggests that despite the tireless efforts of science sleuths in exposing misconduct online, the overall effect of social media on science's "self-correction" remains limited. The reason is practical: while we frequently see online exposés, verifying these criticisms during the actual writing and citation process is difficult. Researchers simply cannot manually cross-reference every cited paper against social media discussions one by one.
Therefore, we developed this platform to enable researchers to screen their entire bibliography for retraction risks in one go. This tool streamlines the verification process, facilitates reliability checks, and helps curb the continued spread of problematic research within the scientific community.
1. Input References: Submit your manuscript’s reference list. We strongly recommend including DOIs to ensure accurate automated extraction.
2. Automated Screening: The scanner cross-references your citations against multiple databases, checking each paper's official retraction status, its PubPeer discussion history, and negative posts about it on X and Bluesky (a code sketch of this screening loop appears after this list).
3. Risk Assessment: Based on these signals, our algorithm assigns a risk level to each paper: Very High, High, Medium, or Low.
4. Review & Verify: Click on any paper in the results list to view specific details and mentions. We recommend prioritizing a review of any references flagged as Medium risk or above to ensure the reliability of your research.
(1) To Scientific Sleuths
We aim to encourage you and ensure your voices are heard. By formally recognizing your efforts in flagging issues, we validate your contributions and motivate you to continue exposing problematic papers.
(2) To Researchers
We help promote greater rigor in citation practices. Our goal is to prevent “secondary retractions”: situations in which your own work is retracted because your methodology relied upon or cited retracted or otherwise problematic papers (a specific cause of retraction documented by Retraction Watch[3]).
(3) To Publishers, Funders, and University Research Integrity Offices
We equip you with a proactive signal-detection system for post-publication review. A "Medium Risk" or above flag serves as a vital alert, filtering through the noise to highlight articles facing significant scrutiny or high-volume negative critique. For publishers, this provides actionable intelligence to prioritize formal investigations or necessary retractions. For funders and research integrity offices, this data helps reveal broader patterns: if papers by a single author consistently trigger risk flags, you can identify and investigate potential systemic misconduct early, protecting both your resources and your institutional reputation.
(4) To the Scientific Community
We strive to curb the spread of mis- and disinformation, uphold research integrity, reduce trial-and-error costs, and prevent the waste of time and funding on research directions misguided by fake science or problematic papers. Ultimately, this defends the credibility of the scientific community and helps rebuild public trust.
(5) To the General Public
We encourage active participation in scientific discussions. High visibility and discussion on social media have been shown to accelerate the retraction of problematic papers[2]. When you see that your engagement in scholarly discussion on social media (even a simple repost!) tangibly benefits science, it fosters a virtuous cycle of increased participation and transparency.
The theoretical framework of this platform is built upon our own research[1,2], which establishes that social media discussions often serve as early warning signals for future paper retractions. This phenomenon has been further corroborated by independent studies from other researchers[4,5].
A key enabler of our technology is the recent introduction of Sentiment Analysis by Altmetric[6,7]. This feature allows for the automated classification of academic posts on X and Bluesky. Specifically, the RetractionRisk Scanner leverages three categories of negative sentiment from this classification to identify potential risks.
The ability to categorize these sentiments has significantly advanced the feasibility of retraction risk detection, serving as the primary driver for the creation of the RetractionRisk Scanner.
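As a rough illustration of how per-post sentiment labels could be aggregated for a single paper, consider the sketch below. The field names ("source", "sentiment") and the three negative category labels are placeholders of our own; Altmetric's actual schema and category names may differ.

```python
from collections import Counter

# Placeholder labels: illustrative stand-ins, not Altmetric's real category names.
NEGATIVE_LABELS = {"criticism", "concern", "dispute"}

def negative_sentiment_counts(mentions: list[dict]) -> Counter:
    """Tally negative-sentiment posts per category for a single paper.

    Each mention is assumed to carry a "source" ("x" or "bluesky") and a
    "sentiment" label; real Altmetric records may be shaped differently.
    """
    return Counter(
        m["sentiment"]
        for m in mentions
        if m.get("source") in {"x", "bluesky"} and m.get("sentiment") in NEGATIVE_LABELS
    )

posts = [
    {"source": "x", "sentiment": "criticism"},
    {"source": "bluesky", "sentiment": "positive"},
    {"source": "x", "sentiment": "concern"},
]
print(negative_sentiment_counts(posts))  # Counter({'criticism': 1, 'concern': 1})
```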
The RetractionRisk Scanner incorporates data from PubPeer, a renowned platform for post-publication peer review[8]. Researchers frequently use PubPeer to expose problems in articles. Because many official investigations begin with community flags on this platform, the presence of PubPeer comments is a critical indicator in our risk assessment model.
Beyond PubPeer, social media hosts the vast majority of public discussion of scientific articles. According to Altmetric data for 2025, there were ~22.7 million total mentions of scientific articles. Of these, ~16.4 million (roughly 72%) originated from social media platforms (including X, Bluesky, Facebook, Reddit, YouTube, blogs, and podcasts). Notably, X and Bluesky combined accounted for 96% of all social media mentions. Consequently, monitoring X and Bluesky is the most effective way to detect retraction risks in real time.
Finally, we utilize Retraction Watch, widely recognized as the most comprehensive database of retracted literature[9]. Following the database's acquisition by Crossref[10], article status data (Retracted, Expression of Concern, Corrected, or Non-retracted) is openly available. A paper's current official status remains a foundational variable in predicting its overall risk level.
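The sketch below shows one plausible way to derive this status variable from the public Crossref REST API, whose `updates:` filter returns the notices (retractions, corrections, expressions of concern) registered against a DOI. The type labels matched here are assumptions drawn from Crossref's public update vocabulary, and this is not necessarily how the scanner itself queries the data.

```python
import requests

def official_status(doi: str) -> str:
    """Classify a paper by the update notices registered against it in the
    public Crossref REST API, which now carries Retraction Watch data."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=10,
    )
    resp.raise_for_status()
    # Each returned item is a notice whose "update-to" points at our DOI.
    types = {
        update.get("type", "")
        for item in resp.json()["message"]["items"]
        for update in item.get("update-to", [])
    }
    # Assumed type labels, based on Crossref's documented update vocabulary.
    if "retraction" in types:
        return "Retracted"
    if "expression_of_concern" in types:
        return "Expression of Concern"
    if types & {"correction", "corrigendum", "erratum"}:
        return "Corrected"
    return "Non-retracted"
```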
The RetractionRisk Scanner employs an internal algorithm that combines data from PubPeer discussions, negative posts on X and Bluesky, and official retraction statuses to calculate a risk score for every paper.
We are mindful of Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure”[11]. If the exact formula were public, it could be weaponized to unfairly attack competitors or manipulate academic evaluations. To prevent this and ensure the metric remains a fair and honest indicator, we keep the specific details of the algorithm confidential.
Please remember: metrics should serve solely as a reference, not as a definitive judgment. We encourage all researchers to use their own judgment. Do not rely on the risk level alone; always review the underlying comments and context to form your own conclusion about a paper’s validity.
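Purely to build intuition about how the three signals could combine into the four levels above, here is a hypothetical weighting with invented weights and thresholds. It is emphatically not the platform's confidential formula.

```python
# Invented weights and thresholds, for illustration only; the
# RetractionRisk Scanner's actual formula is confidential and differs.
STATUS_WEIGHT = {
    "Retracted": 100,
    "Expression of Concern": 60,
    "Corrected": 20,
    "Non-retracted": 0,
}

def illustrative_risk_level(status: str, pubpeer_comments: int, negative_posts: int) -> str:
    """Map the three signals to a risk label using made-up weights."""
    score = (
        STATUS_WEIGHT.get(status, 0)
        + 10 * min(pubpeer_comments, 5)  # cap so one long thread cannot dominate
        + 2 * min(negative_posts, 25)    # cap high-volume social media pile-ons
    )
    if score >= 80:
        return "Very High"
    if score >= 50:
        return "High"
    if score >= 20:
        return "Medium"
    return "Low"

print(illustrative_risk_level("Non-retracted", 2, 10))  # Medium
```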
This platform is part of the Unreliable Science Project, which is supported by the Fundação Calouste Gulbenkian European Media and Information Fund. We gratefully acknowledge Altmetric, Crossref, OpenAlex, Retraction Watch, PubPeer, and Bluesky for providing API support.
Zheng, E. T. (2025). RetractionRisk Scanner. https://www.retractionrisk.com/.
[1] Zheng, E. T., Fu, H. Z., Thelwall, M., & Fang, Z. (2025). Can social media provide early warning of retraction? Evidence from critical tweets identified by human annotation and large language models. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.70028.
[2] Zheng, E. T., Fu, H. Z., Jiang, X., Fang, Z., & Thelwall, M. (2025). Can news and social media attention reduce the influence of problematic research? arXiv preprint. https://doi.org/10.48550/arXiv.2503.18215.
[3] Retraction Watch. Retraction Watch Database User Guide Appendix B: Reasons. https://retractionwatch.com/retraction-watch-database-user-guide/retraction-watch-database-user-guide-appendix-b-reasons/.
[4] Haunschild, R., & Bornmann, L. (2021). Can tweets be used to detect problems early with scientific papers? A case study of three retracted COVID-19/SARS-CoV-2 papers. Scientometrics, 126(6), 5181-5199. https://doi.org/10.1007/s11192-021-03962-7.
[5] Amiri, M., & Sotudeh, H. (2025). Early warnings in tweets: detecting pre-retraction signals and their association with retraction timing through natural language processing and survival analysis. Scientometrics, 130(11), 6425-6453. https://doi.org/10.1007/s11192-025-05477-x.
[6] Altmetric. (2025). Sentiment Analysis in Altmetric. https://help.altmetric.com/support/solutions/articles/6000279392-sentiment-analysis-in-altmetric.
[7] Areia, C., Taylor, M., Garcia, M., & Hernandez, J. (2025). Sentiment analysis of research attention: the Altmetric proof of concept. Frontiers in Research Metrics and Analytics, 10, 1612216. https://doi.org/10.3389/frma.2025.1612216.
[8] PubPeer. https://pubpeer.com/.
[9] Brainard, J. (2018). Rethinking retractions. Science, 362(6413), 390-393. https://doi.org/10.1126/science.362.6413.390.
[10] Rittman, M. (2025). Retraction Watch retractions now in the Crossref API. Crossref Blog. https://doi.org/10.13003/692016.
[11] Goodhart, C. A. (1984). Problems of monetary management: the UK experience. In Monetary theory and practice: The UK experience (pp. 91-121). London: Macmillan Education UK. https://doi.org/10.1007/978-1-349-17295-5_4.