Imposter study participants are an under-recognised threat to the integrity of health research and clinical decision making, academics from the University of Oxford have warned.
Fraudulent participation in research seems to be a growing issue that can skew findings and therefore adversely impact clinical decisions, service design and resource allocation, they said, in an editorial published in the BMJ [link here].
“Imposter participants are more than a nuisance; they are a systemic threat to health research. Their effect is demonstrable and their detection inconsistent,” the authors wrote.
“The research community must acknowledge the problem and dedicate resources to testing and implementing safeguards. These steps are critical to ensure that the data guiding clinical care reflect the real patient voice,” they stressed.
Imposter or fraudulent participants provide deceptive or inaccurate data in order to meet eligibility criteria for health research, and they include both humans and increasingly sophisticated bots that can impersonate human behaviour and responses.
While imposter participants were described as early as 2011, the authors highlight an increasing number of articles investigating their prevalence in recent years. A 2025 scoping review found that 96% of studies describing how to identify imposter participants had been published within the past five years, they noted.
The growth of online recruitment for health research projects in recent years has exacerbated the problem, which affects all types of studies, from surveys to randomised controlled trials, according to Dr Eileen Morrow, a Doctoral Clinical Academic Fellow at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Science (NDORMS), and her colleagues.
The scale of the problem is already significant, the editorial argues, again pointing to the 2025 scoping review which found that 18 of 23 studies specifically looking for imposter participants in their datasets found them.
Moreover, the detected prevalence of imposter participants varied widely across studies, from 3% to 94%, with the upper figure coming from one survey looking at communication during ovarian cancer treatment.
The survey received 576 responses within seven hours, most of them in the early hours of the morning, the authors said, with 94% judged to be fraudulent and the remaining 6% suspicious. Although the survey was closed and relaunched with stricter protocols, fraudulent responses continued to be detected.
Similarly, a separate randomised controlled trial evaluating an alcohol reduction app identified 76% of online enrolments as bots, the group noted.
Tackling the problem
Rigorous measures such as face-to-face eligibility assessments are vital to prevent datasets from being compromised, the authors said. Other detection strategies that have been proposed include checking for implausible home addresses or submissions from multiple formulaic email addresses.
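To illustrate, a detection heuristic of this kind can be scripted in a few lines. The sketch below flags email addresses that follow a formulaic "word plus long digit run" pattern; the specific pattern is an assumption for illustration only, not a rule taken from the editorial or the underlying studies.

```python
import re

# Illustrative heuristic only: flag addresses that look machine-generated,
# i.e. a lowercase word followed by a long run of digits before the "@".
# The exact pattern is an assumption, not a published standard.
FORMULAIC_EMAIL = re.compile(r"^[a-z]+\d{3,}@", re.IGNORECASE)

def flag_formulaic_emails(emails):
    """Return the subset of submitted emails matching the formulaic pattern."""
    return [e for e in emails if FORMULAIC_EMAIL.match(e)]
```

In practice such a check would be one signal among several (alongside submission timing and address plausibility), since legitimate participants can also use numbered addresses.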
Proposals for prevention include identity verification procedures or CAPTCHA tests. The authors noted that after the introduction of CAPTCHA tests, no additional bots were detected in the alcohol reduction app study, although they cautioned that bots remained a threat in other cases.
The reliability of these measures remains relatively untested, however. In addition, the editorial warns that they could affect marginalised communities; people with stigmatised health conditions, for example, may be less willing to submit to identity verification.
Researchers should “routinely” use imposter participant detection and prevention in online research, and should refer to recent reviews summarising published imposter detection and prevention approaches, the editorialists stressed.
“At a minimum, studies should transparently report which safeguards were used and acknowledge their limitations, and journals should encourage consistent and transparent reporting of these safeguards,” they concluded.
Also, “funders and institutions should invest in infrastructure and training to help researchers keep pace with evolving tactics”.