An Intro to the NICE Diagnostics Assessment Programme from Becky Albrow: what did we learn?

29 Jul 2021

In partnership with the British Society for Antimicrobial Chemotherapy, we recently co-hosted a webinar on diagnostic-driven strategies for antimicrobial resistance in the UK. As part of the webinar, Becky Albrow, Interim Associate Director at the National Institute for Health and Care Excellence (NICE), presented a session on the NICE Diagnostics Assessment Programme.


In case you didn’t know, the role of NICE is to: speed up NHS uptake of interventions that are both clinically and cost effective; encourage more equitable access to healthcare; promote better use of resources; and support the creation of new and innovative technologies.

In this blog we share our key takeaways from Becky’s talk, which we think will be helpful for those developing diagnostic tests. You can also find the link to the full webinar recording at the end of this blog.  

During the talk, we learnt that at the core of any assessment that NICE undertakes is the value proposition. This means considering: How does the test fit with health system priorities? Will it improve health outcomes? Does it have a justifiable price? Is it backed by a well-constructed evidence base?

One question that diagnostic developers often grapple with is: what makes for a “well-constructed evidence base”?

1. Quality Goes Further Than Quantity

Becky explained that when assessing diagnostics, the traditional hierarchy of evidence does not always apply – and NICE does not discriminate by evidence type. Different study types lend themselves to different outcomes, and NICE reviews all studies that are relevant to an assessment.

The evidence base behind a test, even if it comes from only a few studies, must provide NICE with four pieces of information:

  • Health related quality of life: How does the test impact the patient’s quality of life? 
  • Intermediate measures: For example, what is the accuracy of the diagnostic? How long does it take to get a result? What is the impact on clinical decision-making? What about patient behaviour? 
  • Direct health outcomes (effects of the test itself on the patient): Are there side effects to taking the test? 
  • Indirect health outcomes (mapping out what happens to the patient after having a positive or negative test result): What are the consequences of the results on the clinical pathway?

For NICE, indirect health outcomes are key to understanding a test’s clinical value. More data, therefore, aren’t always better: lots of evidence on diagnostic accuracy alone, without evidence supporting the test’s clinical utility, won’t give a full picture of its impact. As Becky put it, “you don’t necessarily need a huge amount of data to have a well constructed evidence base.”

2. Piece the Puzzle with Linked Evidence 

Ideally, the evidence base supporting a test would come from a comparative end-to-end study, which not only captures diagnostic accuracy but also assesses how the test influences clinical decision-making and patient outcomes. Unfortunately, end-to-end studies tend to be expensive and may not be feasible for small and medium-sized enterprises (SMEs) developing diagnostics.

Where end-to-end studies haven’t been done, NICE uses a linked evidence approach: test accuracy data are pieced together with existing studies on, for example, the disease and treatment decisions, to assess the test and understand its impact. Becky advised test developers to consider whether evidence beyond test accuracy, on direct and indirect health outcomes, is available to support the value proposition of the test.

3. Engage Early with Users

One final takeaway from the talk:

Becky spoke about the importance of developers engaging with clinicians (the end-users) early in the diagnostic development process. Doing so can help developers understand the types of clinical advice that might be needed, where to position the test in the care pathway, and whether the evidence is directly applicable to the NHS. Remember, the scope of NICE is to advise on NHS and PSS (Personal Social Services) budgets. Early engagement can also help developers understand the level of certainty their test must provide to influence treatment decisions.

Diagnostics Advisory Committee (DAC)

NICE has a Diagnostics Advisory Committee (DAC), whose 22 standing committee members advise whether a test should be recommended for routine use, whether further research is needed or, in some circumstances, whether the test should not be recommended. For each topic they are joined by around 8 clinical and lay specialists with experience of the test or the condition the test is intended to identify. In the talk, Becky presented two recent infection diagnostic assessments that were not recommended for adoption, to highlight evidence gaps: first, a test to rapidly identify bloodstream bacteria and fungi, and second, a Strep A test.

Take a listen  

We’ve highlighted some of our key takeaways from the talk, but listen for yourself to hear more about the challenges of assessing diagnostics (we have only scratched the surface!), the Diagnostics Advisory Committee, and the types of recommendations NICE gives when assessing tests.

Watch the webinar in full here on BSAC’s Infection Dilemma E-learning platform. Becky’s 20-minute talk starts at the 25-minute mark.
