*This post was updated in April 2020 to reflect the postponement of the EU MDR deadline.
If you’re involved in the preparation of Clinical Evaluation Reports (CERs) for medical devices, you’re likely to find yourself doing a lot more literature reviews in the near future. Under the EU MDR, which comes into effect on May 26, 2021, and the accompanying MEDDEV 2.7/1 rev 4 guidance on clinical evaluation, literature reviews play an important role in several areas of the CER, including establishing the state of the art.
In the context of the CER, state of the art “describes what is currently and generally considered standard of care, or best practice, for the medical condition or treatment for which the device is used.”
According to the regulatory writing experts at Criterion Edge, “establishing and describing state of the art is not an isolated task, but is central to the entire clinical evaluation.” Therefore, it’s best practice to dedicate a section of the CER to establishing state of the art, which can then be referenced as needed throughout.
Much of the data that supports the state of the art section of a CER comes from reviewing published literature for relevant information. According to the regulators’ guidance, the state of the art literature review must be conducted systematically, using sound and objective methods.
So, what could go wrong? Our clients tell us that a number of issues can arise in the literature review process, increasing the risk of failing an audit. For example:
Incomplete search coverage
It may be tempting to speed up the screening process with a narrower search, but if potentially relevant articles turn out to be missing from the results, the validity of the entire literature review can be called into question.
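To make the point concrete, here’s a rough sketch of how you might compare the coverage of a narrow search string against a broader one using NCBI’s public PubMed E-utilities API. The search terms (and the device they describe) are purely illustrative placeholders; your own review protocol would define the actual search strategy.

```python
# Illustrative only: comparing hit counts for a narrow vs. a broad PubMed
# search using NCBI's public E-utilities API. The search terms below are
# hypothetical placeholders, not a recommended strategy.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(query: str) -> int:
    """Return the number of PubMed records matching the query."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

narrow = '"drug-eluting stent"[Title] AND restenosis[Title]'
broad = (
    '("drug-eluting stent*"[Title/Abstract] OR "coronary stent*"[Title/Abstract]) '
    'AND (restenosis[MeSH Terms] OR restenosis[Title/Abstract])'
)

print(f"Narrow search: {pubmed_hit_count(narrow)} records")
print(f"Broad search:  {pubmed_hit_count(broad)} records")
```

A large gap between the two counts is exactly the kind of evidence an auditor could use to question whether the narrower strategy missed relevant articles.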
Incomplete audit trail
Despite the availability of literature review software, spreadsheets remain a big part of the review process in many organizations. But, as handy as they are, spreadsheets can’t track the connections between data and source documents, capture timestamps or proof of participation, or provide version control. When the Notified Bodies ask, you’ll need to be able to produce this type of data to verify that the literature review was conducted with sufficient rigor and adherence to a prescribed process. Remember, if it’s not documented, it didn’t happen.
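As a simple illustration of the kind of record-keeping a spreadsheet struggles with, here is a minimal sketch of an append-only screening log. The field names and the JSONL format are our own illustrative choices, not a prescribed schema.

```python
# A minimal sketch of an append-only screening log, assuming each
# include/exclude decision is recorded programmatically rather than
# typed into a spreadsheet. Field names are illustrative only.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("screening_log.jsonl")

def log_decision(record_id: str, reviewer: str, decision: str, reason: str) -> None:
    """Append one screening decision, with a UTC timestamp, to the audit log."""
    entry = {
        "record_id": record_id,   # links the decision back to the source article
        "reviewer": reviewer,     # proof of who participated
        "decision": decision,     # e.g. "include" or "exclude"
        "reason": reason,         # exclusion reason per the review protocol
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("PMID:12345678", "reviewer_a", "exclude", "wrong patient population")
```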
Ad hoc process
The guidance provided in MEDDEV 2.7/1 rev 4 calls for a systematic approach to literature reviews. Unfortunately, many reviews lack the rigorously documented process that the Notified Bodies are looking for. A good rule of thumb is to ask yourself, “If we gave the protocol to another team, would we get the same result?” There’s only one right answer.
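One way to make a protocol genuinely hand-off-able is to capture it as a structured, version-controlled file rather than leaving it in email threads or institutional memory. The sketch below is a hypothetical example; the field names and values are placeholders, not a required format.

```python
# A minimal sketch, assuming the review protocol is stored as a structured,
# version-controlled file. All field values are illustrative placeholders.
import json

protocol = {
    "review_id": "SOTA-2021-001",
    "databases": ["PubMed", "Embase"],
    "search_window": {"from": "2010-01-01", "to": "2020-12-31"},
    "inclusion_criteria": ["adult patients", "published in English"],
    "exclusion_criteria": ["animal studies", "case reports with n < 3"],
}

# Committing this file to version control lets a second team rerun the same
# protocol and, ideally, reach the same result.
with open("review_protocol.json", "w", encoding="utf-8") as f:
    json.dump(protocol, f, indent=2)
```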
Data integrity
In the literature review, small mistakes can be just as disastrous as big ones. Typos, inconsistencies in data entry, transcription errors, or undocumented manual decisions can all derail your literature review as easily as leaving out a key citation. Without proper tools, such errors are practically inevitable.
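Simple automated checks can catch many of these small mistakes before they propagate into the report. The sketch below assumes each extracted record is held as a dictionary with a few hypothetical fields; your own data extraction form would define the real ones.

```python
# A hedged sketch of basic integrity checks on extracted data. The required
# fields and allowed study designs are hypothetical examples.
REQUIRED_FIELDS = ["record_id", "year", "study_design", "sample_size"]
ALLOWED_DESIGNS = {"RCT", "cohort", "case series", "registry"}

def validate_record(record: dict) -> list:
    """Return a list of problems found in one extracted record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing {field}")
    design = record.get("study_design")
    if design and design not in ALLOWED_DESIGNS:
        problems.append(f"unrecognised study design: {design!r}")
    size = record.get("sample_size")
    if size is not None and (not isinstance(size, int) or size <= 0):
        problems.append("sample size must be a positive integer")
    return problems

record = {"record_id": "PMID:12345678", "year": 2019,
          "study_design": "RTC", "sample_size": 120}
print(validate_record(record))  # flags the "RTC" typo before it reaches the report
```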
Efficiency
Manually tackling tedious, highly operational review tasks, such as data collation and report preparation, is not a good use of expensive researcher time, especially when these tasks can be automated with software. As the number of required literature reviews increases, so does the need for efficient ways to manage the workload. Automating repetitive manual tasks not only accelerates the literature review process but also helps reduce costs and produce better-quality data.
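As one example of a collation step that’s easy to automate, the sketch below merges search exports from two databases and removes duplicates by DOI. The file names and column names are illustrative assumptions about how the exports are structured.

```python
# A small sketch of one automatable step: merging CSV exports from two
# databases and removing duplicate records by DOI. File and column names
# are assumptions for illustration.
import csv

def load_records(path: str) -> list:
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def deduplicate(records: list) -> list:
    """Keep the first record seen for each DOI (case-insensitive)."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi", "").strip().lower()
        if key and key in seen:
            continue  # duplicate of a record we already kept
        seen.add(key)
        unique.append(rec)
    return unique

merged = deduplicate(load_records("pubmed_export.csv") + load_records("embase_export.csv"))
print(f"{len(merged)} unique records ready for screening")
```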
Avoiding the common points of failure
Luckily, there are tools and best practices for conducting state of the art literature reviews that can help mitigate the risk of failure.