Podcast #4: The journey from idea to innovation

Grab a cuppa and press play for a 45-minute journey through taking an idea from local practice to the public.

Clinical Scientists are inventive. Many of us have new ideas that improve our practice, patient experience and outcomes. We perform research to ensure that the new idea or invention works, and we disseminate it through publication, but could we be doing more to ensure that a wider audience benefits from our work? How can we maximise our impact?

NHS organisations and universities are increasingly recognising the importance of commercialising innovations to maximise impact, but what does that involve? We spoke with Matt Swift, Nadiya Johal and Tim Searle about their experiences of developing the Altilium App (@AltiliumApp) for cardiac physiologists to use in pacemaker clinics. Listen to find out more about the steps in the process, intellectual property, the costs involved and ideas for how innovation could be better supported in practice.

Please let us know what you think, and share any innovation resources that you think others might find helpful. The best way to do this is through our Twitter account @TheClinSci.

Thanks to Matt, Nadiya and Tim for taking the time to chat, and to @timhusselbury94 for editing the podcast.


The Clinical Scientist

Reproducibility: if you can’t reproduce the result, did it actually happen?

Written by Dr Karina Bunting, Cardiac Physiologist Research Fellow, University of Birmingham

Reproducibility underpins everything we do, whether it is in clinical practice, research or in ensuring you’re weighing out the same amount of flour for another lockdown banana bread bake!

Reproducible measurements are essential for the tests we carry out as clinical scientists, as ultimately the results we report guide the management of patients. To take a topical example, imagine you carry out a coronavirus test on a patient and it comes back negative, but you do another swab a few hours later and it comes back positive, so you take another one and it comes back negative. What do you do? Do you put the patient on a coronavirus ward or on a clean ward? If you choose incorrectly, this could have harmful consequences for the patient or for other patients on the ward.

It is essential that tests are reproducible so that any change observed can be assumed to be due to actual clinical change and not an error in the measurement process. However, in scientific studies reproducibility testing is often confined to a few sentences tucked away at the end of the results section. And in clinical practice, how often do you interrogate how reproducible your department’s measurements are?

So, firstly, what does reproducibility mean, and how is it different from accuracy? I’m sure we’ve all come across the bullseye analogy, but here is a reminder of how reproducibility differs from accuracy.

Figure 1. Bullseye

Bullseye (a) shows a high level of reproducibility because every time the shooter has aimed, the arrows have hit the target within a short range of each other. However, the arrows are far away from the centre of the bullseye meaning that although it’s reproducible, the shooter’s aim is not accurate. This would be the same as measuring a patient’s blood pressure and it consistently coming back as high within a narrow range of readings (153/92 mmHg, 152/91 mmHg, 155/92 mmHg, 154/90 mmHg) but the accurate blood pressure reading is normal at 120/80 mmHg.

In bullseye (b), the arrows have all hit the bullseye close to the centre, suggesting the shooter is fairly accurate; however, the arrows are hitting a different area each time, suggesting the reproducibility is low. This would be the same as getting blood pressure readings ranging from 115/70 mmHg to 128/85 mmHg; although the readings are not too far from the accurate reading of 120/80 mmHg, they are not reproducible, and there will be uncertainty as to whether the patient’s blood pressure is trending lower or higher over time.

In (c), the arrows are hitting the centre of the bullseye and in a similar area each time, suggesting the shooter’s aim is both accurate and reproducible. This would be the same as getting blood pressure readings all within the range of 118/79 mmHg to 122/80 mmHg; the blood pressures have a high level of reproducibility and are very close to the true value.

In (d), the arrows are all over the place and nowhere near the centre of the bullseye, suggesting that the shooter’s aim is both inaccurate and not reproducible. In the blood pressure scenario this would be like getting a series of blood pressures ranging from 90/60 mmHg to 170/95 mmHg; at this point you might question whether the machine had been dropped on the floor!

Then, to add to the confusion, there are the three “R”s: reproducibility, repeatability and reliability. These are often used interchangeably but they mean different things, and they will guide how you carry out any studies and interpret your results.

“[Reproducibility] is explicitly defined as the variation of the same measurement made on a subject, under changing conditions, but in real-life practice also includes changes in measurement, method, observer, time-frame, instrumentation, location and/or environment. Repeatability can be separately considered as the variation in repeat measurements made on the same subject under identical conditions, whereas Reliability describes the magnitude of error between measurements.”

Bunting KV et al, 2019; Journal of the American Society of Echocardiography

For more information on this, consider reading this paper by Bartlett & Frost (2008).

Setting up your own study

When setting up your own study, whether it is for a clinical audit or a research study, it is important to think carefully about some key steps before starting. In the following section I list how I would go about planning a reproducibility study/audit, and I provide an example study (in italics).

As a cardiac clinical scientist, I have used a cardiology example. For your reference, an echocardiogram is an ultrasound scan of the heart to assess the heart’s structure and function. Left ventricular ejection fraction (LVEF) is a routine measurement of pumping function, which measures the change in volume of the ventricle during systole; it is measured by the operator drawing around the endocardial border in diastole and systole (figure 2).

Figure 2. Simpson’s biplane left ventricular ejection fraction measurement
  1. Determine your question/hypothesis

It is important that you have a clear and measurable question/hypothesis so that you can plan your study and, on receiving your results, come to a conclusion.

What is the inter-operator reproducibility between cardiac clinical scientists measuring heart function in patients with heart failure? OR,

There is a high level of reproducibility between cardiac clinical scientists measuring heart function in patients with heart failure.

  2. Think about what you will do with the results

It’s important that you have a good reason for doing the study, that you can clearly state how it will benefit patients/staff, and that you have an idea of what actions you will take based on the results.

If there is low reproducibility between operators, I will plan training sessions to improve reproducibility.

  3. Decide on what measurement you want to assess

It is essential that this is specific and instructions are given clearly so that there is no confusion about the technique used.

Heart function measured by Simpson’s Biplane left ventricular ejection fraction (%)

  4. Decide on your population

A specific patient population isn’t essential, but it will ensure similar demographics, so that any variability seen is less likely to be due to the patient population and more likely to be due to the measurement process.

In patients attending heart failure clinic for an echocardiogram

  5. Are there any patients you want to exclude?

It is best to minimise exclusions to make the study as generalizable to clinical practice as possible. However, if you believe there is a certain patient demographic that will skew the results, then they should be considered for exclusion. For example, in this case patients with significant ventricular ectopy or atrial arrhythmia have been excluded, because their measurements of heart function will vary between cardiac cycles regardless of the operator, potentially under-estimating inter-operator reproducibility.

Patients with significant ventricular ectopy or atrial arrhythmia

  6. How many patients do you want to test?

To determine how many patients you need, a sample size must be calculated. An adequate sample size is important so that you can make true inferences about the wider population. If you don’t have access to statistical software, there are several online calculators which will do this for you, and there are usually some friendly hospital statisticians who will be happy to help. The things you need to consider are: 1) the level of significance you will accept, generally expressed as the p-value; p<0.05 means there is less than a 1 in 20 chance that the observed difference has arisen by chance, in other words a false positive rate of 5%; 2) the power, which is the probability of detecting a difference that truly exists (1 minus the power is the false negative rate you are willing to accept); 3) the expected effect size, which is the difference you expect between the things you are comparing, determined from your experience or from what has been observed in the literature; and finally 4) the standard deviation, which is the anticipated variability in the data.

16 patients, derived from: 1) significance = 0.05; 2) power = 80%; 3) expected effect size = 5% (from experience I think there is usually a 5% difference between operators’ EF measurements); and 4) standard deviation of the measurement = ±5%, the established EF standard deviation.

You can read more about sample size and design of reliability studies in this paper by Walter, Eliasziw & Donner (2010).
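As a sketch, this calculation can be done in code. The function below uses one common normal-approximation formula for comparing two sets of measurements, n = 2(z(1-α/2) + z(1-β))² (σ/δ)², which gives 16 patients with the example inputs above; do check with a statistician that this formula suits your particular design.

```python
import math
from statistics import NormalDist

def sample_size(alpha=0.05, power=0.80, effect_size=5.0, sd=5.0):
    """Approximate sample size for detecting a mean difference between
    two sets of measurements (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired power
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / effect_size) ** 2
    return math.ceil(n)  # always round up to be conservative

print(sample_size())  # significance 0.05, power 80%, effect 5%, SD 5% -> 16
```

Note how demanding higher power (say 90%) increases the required sample size, as a smaller false negative rate needs more data.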

  7. Decide whether you want to assess reproducibility or repeatability

This will significantly affect your experimental design. A repeatability design will mean that all conditions are kept the same between measurements (same echocardiographer, same patient, same machine and same position), and the only thing changing will be the small variation in time between the LVEF measurements. With reproducibility, by contrast, an aspect of the conditions will change to see how that affects the measurement; in this case, a different operator.

Reproducibility: between cardiac clinical scientists within the department

  8. Decide on what aspect of reproducibility you would like to assess

This could be anything which involves changing the environment in which the measurement is taken, to interrogate how this affects its reproducibility, for example: machines, method of measurement, operator, and time.

Between two cardiac clinical scientists (inter-operator reproducibility) taking a single LVEF measurement each on patients with heart failure.

  9. Who are you going to test?

Again, this depends on what you are trying to answer and on your study design. If you want to test the department as a whole, it is best to get a selection of clinical scientists with different levels of experience. However, if you wish to compare two operators, it makes more sense to choose two with similar experience, to give a fair test of the measurement’s reproducibility.

Between two senior cardiac clinical scientists with similar experience in echocardiography

  10. How can you ensure non-biased selection?

Statistical bias is defined as a systematic tendency in the process of data collection which can cause misleading results. To avoid bias in the selection of patients, they should be either randomly or consecutively selected. This is important to ensure that you are performing the test fairly and making it generalizable to everyday clinical practice. In this example it will be easier to measure the Simpson’s biplane LVEF in patients with good quality images. However, if you “cherry pick” the patients who you think have good quality images, not only is this unrepresentative of an everyday echocardiography clinic, you are also likely to achieve better reproducibility results, misleading your interpretation of how reproducible inter-operator assessment of Simpson’s biplane EF is in your department.

Consecutive patients attending heart failure clinic with no pre-exclusions to image quality.

  11. How can you ensure blinding?

It is important that the assessors are unable to see what each other’s measurements are; otherwise they may be influenced by the other’s results causing an unfair test which is highly biased. 

The second operator will be blinded to the first operator’s results and ideally any previous LVEF measurements performed in the patient.  

  12. How can you ensure other influential environmental factors are minimised?

If you are focussing on inter-operator reproducibility, it’s important not to introduce other sources of variation that will confound your results, for example the choice of machine or measurement software.

The same machine will be used throughout the study, and the order of first and second operator will be the same between patients. The study will be carried out within a small time frame (avoiding days between data collection).

  13. How should I record my data?

Prior to beginning the study, create your database template. Ensure that it contains all the data you want to collect for the study, including the units. The categories should be easy to understand, to avoid any ambiguity about what data to put in each column, and each should have its own unique variable name. I would recommend using Excel, as this will mean your data is already in a format ready to analyse and can be transferred easily to statistical software, if available.
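As a sketch, the template itself could even be generated in code so that column names and units are fixed before data collection starts. The column names below are hypothetical, chosen for the example LVEF study; a CSV file opens directly in Excel.

```python
import csv

# Hypothetical variable names for the example LVEF study; adapt them to
# your own measurement and keep the units explicit in the header.
COLUMNS = [
    "patient_id",
    "scan_date",
    "operator",        # operator 1 or 2
    "lvef_percent",    # Simpson's biplane LVEF (%)
    "image_quality",   # e.g. good / adequate / poor
]

# One row per measurement, one column per variable.
with open("reproducibility_template.csv", "w", newline="") as f:
    csv.writer(f).writerow(COLUMNS)
```

Fixing the template up front means both operators record the same variables in the same format, which makes the analysis step far easier.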

When you have your results, how do you go about interpreting them?

Interpreting your data

When you have collected your data, it’s time to interpret the results. To start with, I would recommend plotting a simple graph: for example, a bar chart for categorical data and a scatter plot for continuous data. The graph will give you a good idea of what’s going on before any statistical methods are applied. The choice of statistics will depend on your experimental design and the data collected. The extensive list of statistics you can apply to your data is beyond the scope of this blog, but it is important to think carefully about what best represents your data and to be aware of the advantages and disadvantages of each method.

There are several different terms used to assess how reproducible your measurements are: agreement, association/correlation, variation and bias. These mean different things, and it is important to understand their meaning when describing your results or interpreting another group’s reproducibility study. Association assesses the relationship between the data. Agreement assesses the degree of consensus between measurements. Variability describes the magnitude of the difference between repeated measurements. Bias tells us to what extent there is a real difference between two data points that has not occurred by chance (statistical significance); to add to the confusion, it can also be used to describe the overall magnitude and direction of the difference in the data (systematic and proportional bias).

To show a brief example of how these terms are used to interpret the overall reproducibility of your results, below in figure 3 are four sets of possible results from a similar study set-up to the example study described above (two operators measuring LVEF in a series of different patients).

Figure 3. Reproducibility assessment between two operators (taken from Bunting KV et al. JASE  2019)

The brown diagonal dotted line on each graph represents the line of equality; if all the points are aligned on the line of equality, this would mean that operators 1 and 2 are obtaining the same measurement every time, and so have a high level of agreement and perfect association. The red dotted line represents the line-of-best-fit for the data points; if the data points are all aligned on the line-of-best-fit, this suggests that the measurements taken by operators 1 and 2 are highly associated with each other, but it does not necessarily mean that they agree.

In example A the points are aligned on both the line-of-equality and the line-of-best-fit; therefore the two operators’ measurements have both a high level of agreement and a high level of association. In example B the points are well aligned to the line-of-best-fit, suggesting high association, but it is shifted to the left of the line-of-equality, so there is low agreement. In this case operator 2 is consistently measuring LVEF higher than operator 1, which is called systematic bias. In example C there is a change in the distribution of the points according to the measurement value; as the LVEF measurement gets higher, the points move further away from the line-of-best-fit, suggesting lower association between the operators’ measurements at higher LVEF values. This is called proportional bias. However, the points are not too far from the line-of-equality, so the overall agreement isn’t too low. In contrast, example D’s points are far away from both the line-of-equality and the line-of-best-fit, suggesting poor agreement and association, but no significant bias, as there is no tendency for the points to go in either direction.

There are of course other options for graphs and it will be up to you to decide what best represents your findings; for example in this case if you are looking at the difference between two groups of measurements (operator 1 and 2), a Bland and Altman plot is really useful to visualise the degree of agreement and detect systematic bias (figure 4).

Figure 4. Bland and Altman plots for agreement between two tests or operators (taken from Bunting KV et al. JASE  2019)

The “bias” is the mean difference across all observations; the closer it is to 0, the less systematic bias there is. The closer the points are to the 0 line, the better the agreement. The upper and lower limits of agreement represent the range within which you would expect 95% of values to lie if you were to repeat the test again. Therefore, the narrower the limits of agreement, the lower the variability between repeated tests. The limits of agreement can be calculated simply using the following equation: bias ± 1.96 x standard deviation of the differences (use “+ 1.96” for the upper limit and “- 1.96” for the lower limit).

In Figure 4, examples A-D show the same data as in Figure 3 A-D, represented as Bland and Altman plots. In example A the points are all very close to the 0 line, so there is a high level of agreement and the bias is low. At the same time, the limits of agreement are narrow, suggesting that the results themselves are reproducible; 95% of future measurements between operators 1 and 2 will lie within the limits. Example B has a positive bias value because operator 2 is consistently measuring higher than operator 1, and the points are far from the 0 line, suggesting poor agreement. However, the limits of agreement are narrow, suggesting there will be a similar difference between the operators on future occasions, so the results themselves are consistent and have low variability. In example C the overall bias is low, suggesting there is no systematic bias, but the points are distributed widely both above and below the 0 line, suggesting that on occasions there is poor agreement; this is why the limits of agreement are wide, as there is poor consistency within the reproducibility results. Finally, in example D the points are far from the 0 line, suggesting poor agreement, and very widely distributed, which is why the limits of agreement are very wide.
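As a sketch, the bias and limits of agreement described above can be computed directly from the paired differences. The data below are invented for illustration:

```python
from statistics import mean, stdev

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired
    measurements a and b (e.g. operator 1 and operator 2)."""
    diffs = [y - x for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired LVEF measurements (%)
op1 = [35.0, 42.0, 50.0, 55.0, 61.0]
op2 = [36.0, 40.0, 52.0, 54.0, 62.0]
bias, lower, upper = limits_of_agreement(op1, op2)
print(f"bias = {bias:.2f}%, limits of agreement = ({lower:.2f}, {upper:.2f})")
```

For the actual plot you would chart the mean of each pair against its difference, with horizontal lines at the bias and the two limits.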

In conclusion, from these data: example A shows a high level of reproducibility between operators, and this is consistent between patients; this is a good result and indicates no training is required, although the audit should be repeated in a year or so to ensure the standard remains high. In example B operator 2 is consistently measuring higher than operator 1, which suggests there may be a variation in how the operators have been taught to measure LVEF, so further training would be required to resolve this problem in reproducibility. In example C the data suggest that inter-operator reproducibility is lower when measuring LVEF in patients with either a very low or a very high LVEF, so further training is required for this group of patients. Finally, in example D the inter-operator reproducibility is very poor across all patients, suggesting a lack of understanding of the LVEF measurement, so a significant amount of re-training would be required.

I applied this to echocardiography in the publication listed below, but both the recommendations and the online tool I developed can be applied to any clinical measurement:

Bunting KV, Steeds RP, Slater LT, Rogers JK, Gkoutos GV, Kotecha D. A Practical Guide to Assess the Reproducibility of Echocardiographic Measurements. J Am Soc Echocardiogr. 2019 Dec;32(12):1505-1515. doi: 10.1016/j.echo.2019.08.015.

Tool: http://www.birmingham.ac.uk/echo

Further information on how reproducibility studies should be reported is available:

Kottner J, Audige L, Brorson S, Donner A, Gajewski BJ, Hrobjartsson A, et al. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. J Clin Epidemiol. 2011;64:96-106

Hopefully this has given you a small insight into the importance of assessing reproducibility, and some things to think about when planning and interpreting your next study. Happy testing!

Dr Karina V Bunting (@buntingkarina)

Cardiac Clinical Scientist (University Hospitals Birmingham NHS Trust)

Post-Doc Research Fellow (University of Birmingham)

Looking for motivation to start your STP equivalence journey?

The opportunity to become a registered Clinical Scientist is such an exciting development for many healthcare professionals who aspire to advanced practice and consultant roles. Despite this, some practitioners seem reluctant to pursue registration through the AHCS equivalence route – The Clinical Scientist is on a mission to change that!

I was recently invited to deliver a talk about the STP equivalence process at a national event for cardiac physiologists. The brief was to motivate practitioners to consider the journey and to provide an overview of the process. I was delighted to hear that so many at the event found the talk helpful but unfortunately, the live event was not recorded. Several organisations have been in touch to ask for the slides. The sponsors have kindly agreed that I can freely share this recorded version of the talk which I hope will be more helpful than the unnarrated set of slides. Although the talk was delivered to a cardiology audience, the content is very broad so might be useful for other disciplines.

If this talk inspires you to take the first step to becoming a Clinical Scientist, please visit our webpages on equivalence where you will find a whole host of useful links, a more detailed presentation about the application process, and a list of coaches who are willing to support you FREE of charge.

As always, please get in touch with comments or questions via our Twitter account @TheClinSci!

Settle down with a cup of tea for this recording, since it is ~50 minutes long.

Best wishes,


Are you ready to lead service transformation?

One of the goals of TheClinSci website and Twitter account is to support the sharing of knowledge, skills and experience across the broad range of healthcare science disciplines.

At the end of 2020 I was invited to put together some thoughts on this at the virtual conference delivered by SCST (the professional body for cardiac healthcare scientists). The brief was to inspire others to work differently by sharing examples from other disciplines. SCST have kindly sent me this 15-minute recording of the live talk to share.

Thanks to Gareth McKeeman, Jo Shakespeare and Jack Stancel-Lewis for allowing me to share their stories of transformation as case studies in the talk.

Did I achieve the brief? The talk was very well received at the live event, but inspiration is very personal… check out the video and let me know what you think.

Critical appraisal of research for trainees and novices

This talk was prepared for scientist trainees in cardiology attending the British Society of Echocardiography Conference in 2020. The conference organisers have kindly agreed that it can be shared with others.

I am no expert on this topic but wanted to share one approach that trainees might find helpful. If you have greater expertise or a different approach and you are willing to share with other healthcare scientists, please do get in touch! I’m still learning every day.

Why not take advantage of the fact that this is a recorded session and do the worked example alongside me. Download the research paper and critical appraisal tool highlighted at the start of the talk and use ‘pause’ at the points where you need to read and then ‘play’ when you’re ready to resume. I have used a review relevant to cardiac imaging but the focus is on the process of appraising the paper, so this should be relevant to other disciplines.

Recording a narrated talk is very different from delivering a live session so please forgive the rather stilted start and any verbal stumbles!

I hope that this is useful,


Podcast #3: The challenge of ‘bespoke’ and the freedom of broader working

Press play to hear the third episode in our short series of podcasts about the Higher Specialist Scientific Training (HSST) programme.

In this episode I chat with James Willis (Cardiology) from Royal United Hospitals in Bath and Alison Hadfield (Haematology) who is based in Newcastle. James and Alison can be found on Twitter using @jwillis82 and @AlisonHadfield3.

Listeners who are tuning in to all podcasts might see some common themes emerging from the interviews to date but there are also new insights in each episode. For example, Alison and James joined the programme fairly recently and share how the interview process and the presentation within that has changed. In addition, you will hear about how Covid-19 has impacted on their experience. Find out which element of the HSST programme is described as ‘enlightening’ and the challenges associated with the word ‘bespoke’.

Biomedical Scientists might be particularly interested in this episode as Alison is a Biomedical Scientist who became a Clinical Scientist through the equivalence route before starting HSST. 

Scientists and physiologists in cardiology will be interested to hear how James is using ward rounds and exposure to broader clinical environments to identify ways of innovating and improving the service. It’s great to hear that others are now approaching him for an opinion on how new ideas can be driven forward. 

Thanks to Alison and James for giving up their time to chat with me, and to @timhusselbury94 for editing the podcast.

Happy listening!

Emma Rees

Podcast #2: HSST, “upmarket speed-dating” and the value of an art project!

Press play to hear our second interview with two more scientists enrolled on the HSST programme (40 mins).

In this second episode I chat with Joe Wood (Radiotherapy Physics, The Christie Centre) and Oliver Marmoy (Neurophysiology/Vision Science, Great Ormond Street Hospital). Joe and Oliver share an honest account of their experiences on the HSST programme.

There are new insights in this episode with more detail about the structure of the programme and what it takes to succeed. Find out why an art project is relevant in scientific training, which element is described as “upmarket speed-dating”, and which of our guests wrote a leadership essay in a tent on a motorbike tour!

Sincere thanks to Joe and Oliver for chatting with me. You can find them on Twitter using @joewood86 and @O.Marmoy. Thanks also to @timhusselbury94 for editing the podcast.

Happy listening!

Emma Rees

Our first podcast!

Press play to hear our first interview with two scientists enrolled on the HSST programme (35 mins).

Do you have aspirations of becoming a consultant Clinical Scientist? Are you wondering what it’s really like to do the Higher Specialist Scientific Training (HSST) programme? Let us bring you closer to scientists who have first-hand experience to share. 

In this first episode I chat with George Burghel (genomics) and Jo McCullagh (blood transfusion) about their experiences of the interview, the best and most challenging parts of the programme, and the advice they would give to prospective applicants. George and Jo are both on Twitter. You can find them using @MccullaghJojomc and @BurghelG.

I hope you enjoy this first interview. There are three more in this short series, each with scientists from different specialisms. Many thanks to George and Jo for chatting with me, and to Tim Husselbury for sharing his podcasting and editing experience.

Happy listening!

Emma Rees

To boldly go… into advanced practice

Are we really ready to think and act differently; to face the discomfort of breaking out of our silos?

Clinical Scientists are ideally equipped to lead healthcare transformation but we need courage to rethink elements of our service provision. Forget about the organisational barriers. If we could start from scratch, what would ‘good’ look like?

This is a call to action!

An older population has a lifetime of exposure to risk factors often magnified by social inequality. More people have several chronic conditions which interact to produce symptoms and acute events. Our current system of increasingly specialist services tends to see people as discrete diseases or tests rather than approaching care from a holistic perspective.

Patients complain about the number of appointments, the need to repeat their story again and again, the lack of communication between teams, the lack of focus on self-management and prevention of events. Add to this the unnecessary, repeated tests and the environmental impact of wasted resources and it becomes clear that our current model of care is not sustainable.

It is time to reflect on what we do as Clinical Scientists. How can we improve our services to meet the needs of the person in front of us?

There is no doubt that to offer the best healthcare we need more multi-disciplinary teams. We also need some of our scientists to be working at the very top of their licence in an extended scope of practice.

This is not for the faint-hearted. It is uncomfortable. There are risks.

Extending beyond our safe and comfortable scientific practice means moving into a new world where there is uncertainty and ambiguity. Where information is incomplete, complex and complicated and where we are accountable for the clinical care of patients.

This is the world of advanced clinical practice.

Not an imitation medic!

This is not about becoming a poor imitation of a medic. Nor is it solely about reducing cost and waiting times for tests. Cost and waiting times are important elements of healthcare provision but faster and cheaper is of no use if it leads to inaccuracy and worse outcomes.

This is about nurturing the unique strengths and talents of clinical scientists. It is about supporting the right people to design, deliver and evaluate innovative care.

Image: Dr Nolan Stain, Cardiac Clinical Scientist, preparing to insert an implantable cardiac monitor

Confused about what advanced clinical practice is? You’re not alone

  • Is a Clinical Scientist always an advanced practitioner?
  • Do you have to be on a statutory register to be an advanced practitioner?
  • Are advanced practitioners usually employed at consultant grade?
  • Do you need a particular qualification to have an advanced practice role?

I’m not sure whether there are clear answers to these questions at the moment. I hope that this blog will stimulate some discussion in our community. It would be fantastic to hear more from the National School of Healthcare Science (NSHCS) on the matter since much of what is written is primarily for nurses and allied health professionals.

In my view, we seem to be in a transition between the old and the new. As a result, there is significant variation in the interpretation of the term ‘advanced practice’ across the country. Service pressures in the NHS have resulted in pockets of innovative practice where individuals have extended their clinical scope of practice to improve care. These are brave pioneers who have piloted the safety and efficacy of new models of care without the benefit of a framework or structure to guide them.

There is now an increasing recognition that advanced clinical practice roles are vital in service transformation. To unleash the potential of these roles, to ensure safety, and to standardise practice, Health Education England (HEE) has proposed a multi-professional definition of advanced clinical practice:

Advanced clinical practice is delivered by experienced, registered health and care practitioners. It is a level of practice characterised by a high degree of autonomy and complex decision making. This is underpinned by a master’s level award or equivalent that encompasses the four pillars of clinical practice, leadership and management, education and research, with demonstration of core capabilities and area specific clinical competence.

Advanced clinical practice embodies the ability to manage clinical care in partnership with individuals, families and carers. It includes the analysis and synthesis of complex problems across a range of settings, enabling innovative solutions to enhance people’s experience and improve outcomes.

Health Education England

Does this apply to us?

Advanced practice roles are better established in nursing and the allied health professions, but HEE have said in their advanced practice FAQ that healthcare scientists can be recognised as ‘advanced clinical practitioners’, provided they meet the requirements outlined in the framework (see link below).

It is certainly a good idea to have parity across professions, and the definition does offer some clarity for practitioners, employers, education providers and the public. It might answer some of our questions. The definition shows that the journey to advanced practice is usually at master’s level and encompasses four pillars:

  1. Clinical Practice
  2. Leadership and Management
  3. Education
  4. Research

These elements feature in the Scientist Training Programme (STP). However, it is also evident that significant experience, deep specialist knowledge and broader clinical skills are needed.

Q. Does this mean that the STP alone does not equip a person for the demands of advanced clinical practice? Are additional experience and work-based training required?

The second part of the definition is also important because it defines the ability to manage clinical care across a range of settings. There are many highly specialised Clinical Scientists, expert in their area, who do not manage clinical care.

Q. Does this mean that being a Clinical Scientist with specialised scientific practice is not necessarily the same as being an advanced clinical practitioner?

What do you think?

A doctoral level qualification, which is needed for Consultant Clinical Scientist roles, is not a requirement for becoming an advanced clinical practitioner. Nevertheless, Consultant Clinical Scientists are operating as advanced clinical practitioners in some disciplines.

Perhaps we should view advanced clinical practice roles as existing along a continuum. Some are scientists who are ‘learners’, needing direct supervision as they develop their experience and clinical decision-making skills; others are experienced in the role and working with indirect support; and some are operating at consultant level on a par with medical colleagues.

To gain a more detailed understanding of the expectations of an advanced clinical practitioner, I recommend reading the HEE Multi-Professional Framework which lists core capabilities for advanced practice.

One thing stands out in this framework document: HEE is sending a clear message to employers that only those on a statutory register for their profession should be considered for advanced practice roles (see the flowchart on p.16).

This should serve as real encouragement for anyone in an extended role to join the HCPC register! Take a look at the Route 2 information on our website to find out more.

Is advanced clinical practice always the right route?

Quality improvement and service transformation do not always require advanced clinical practitioners. We must consider what is best in our community, for the people we serve.

One way to approach this would be to start by gathering details about the typical patient journey – the whole patient journey, not just the part that currently involves clinical scientists! Where are the problems, the delays, the complaints, the incidents, the poor outcomes? Map the journey and consider who is currently involved and where.

NHS Improvement has some useful resources on tools that can be used to map the patient journey efficiently.

It is important to view the journey from the patient perspective. What are their concerns and expectations? What do they value most? Patients and the public should be involved at all stages – this is not something to be added on at a later date.

For public engagement to be effective, we need to build trusted relationships. The King’s Fund has good information on this. “Patients as partners: Building collaborative relationships among professionals, patients, carers and communities” is an accessible report which contains some very practical, easy-to-use tips.

Once the current service and patient perspective is well understood, we can think about whether we are currently offering value-based healthcare and what ‘good’ would look like from an individual, organisation and population perspective. In doing this we shouldn’t just consider service metrics. Clinical outcomes, patient reported outcome measures (PROMs) and patient reported experience measures (PREMs) are also important. As scientists we understand the need to measure something to improve it!

Armed with this information, we can think about what steps need to be taken to improve those outcome measures; it informs the structure of a new service.

Naturally, we should only use validated outcome measures and select those which match the concerns and expectations of our patient partners. In an ideal world we would standardise outcome measures across the country, as has happened in national audit programmes, to allow us to compare services. These measures would encompass generic and disease specific outcomes.

The International Consortium for Health Outcomes Measurement (ICHOM) has validated several ‘standard sets’ of outcome measures that can be accessed with a free registration. These include atrial fibrillation, coronary artery disease, heart failure, diabetes, limb abnormalities, breast and lung cancer, stroke and many more.

Respiratory Clinical Scientists improving outcomes through advanced practice:

 Joanna Shakespeare (@shakeyjs) has worked with a multi-disciplinary team to transform the acute non-invasive ventilation (NIV) service at University Hospitals Coventry and Warwickshire NHS Trust. 

In the previous patient journey, ward staff assessed acutely unwell patients and initiated NIV. But high vacancy rates and low confidence were impacting on patient care and outcomes. Acutely unwell patients were not always assessed in a timely manner and there were delays in initiating NIV. The overall mortality for patients requiring acute NIV was higher than the national average.

Respiratory Clinical Scientists are experienced in oxygen assessment, blood gas measurement, domiciliary NIV and mask selection/problem solving skills. These are all relevant to an acute NIV service. Joanna and her team wanted to develop a new model to support and complement existing ward staff. They thought that a better service could be delivered using a multi-disciplinary approach. Scientists had to develop skills in clinical assessment which included auscultation, blood test and x-ray interpretation. They had to recognise when and how to escalate a patient to consultant or intensive care. They had to develop confidence in communicating with ward and medical staff about patient care, which included making recommendations for medical management and initiating therapy.

Scientists took clinical assessment modules either as part of their academic training courses (HSST) or independently at a local university. The team also developed a training and competency package split at three levels according to staff requirements. This is being rolled out across the Trust so that any staff that have involvement in NIV can be trained and assessed in a standardised manner.

In the new service model, a Respiratory Clinical Scientist holds the NIV bleep 8am to 8pm Monday to Friday. When called, the scientist assesses the patient, decides whether to start NIV, sets the patient up and manages them prior to moving them to the respiratory ward.

This service is having a significant impact on outcomes. NIV is established sooner and the mortality for patients using acute NIV has fallen significantly to 15%; the national average is 26% (British Thoracic Society audit, 2019). You can read more about the results of the service in this published abstract.

If you’re interested in reading more about advanced practice roles in Respiratory Science, check out the article from Joanna and colleagues in Breathe.

Preparing for a role as an advanced clinical practitioner

HEE have produced a helpful toolkit of resources relevant to advanced practice. This is worth exploring if you are interested in the role. The education section defines the need for formal education and training at a university alongside work-based learning and clinical supervision.

The work-based learning and clinical supervision is really important in overcoming any resistance to change that might be felt by other professional groups. It builds trust between disciplines and provides reassurance about decision-making. The example below highlights how this was important in establishing the role of Consultant Healthcare Scientist in histopathology.

There is also a need to demonstrate ongoing competence and capability through a portfolio (see the practitioner section of the toolkit). The toolkit suggests that this could include evidence of:

  • Directly observed procedural skills (DOPS)
  • Case-based discussions (CbD) or records of case exams with senior clinicians
  • 360-degree assessments
  • Reflective accounts
  • Evidence of educational sessions and, where appropriate, evaluations
  • Records of audits

Consultant level Advanced Clinical Practice in Laboratory Sciences:

Dr Jo Horne (@hornej13) is a Consultant Healthcare Scientist (HCS) in cellular pathology at University Hospital Southampton.  

Jo has a consultant level qualification in gastrointestinal histopathology. She independently reports specimens, working alongside medical consultants as part of the clinical reporting team. This includes taking full responsibility for specimens and exercising professional judgement when issuing reports and discussing cases with clinical colleagues.

Jo’s route to independently reporting specimens was through a pilot project between the Royal College of Pathologists and the Institute of Biomedical Science (IBMS). Scientists undertook training alongside their existing clinical, scientific, and managerial roles. The training spanned several years and included a competency exam after 1 year and an exam at an equivalent level to FRCPath part II after a further two years. Successful candidates then entered the final stage of training, similar to that undertaken by medical pathologists, which developed independent practice at consultant level. This stage lasted around a year and preceded the Certificate of Completion of Training, which then provided a route into formal posts. The success of the pilot scheme resulted in a formal programme with routes in gastrointestinal pathology, gynaecological pathology and dermatopathology.

There is a national workforce shortage of histopathologists and Jo actively promotes, supports and helps to develop dissection and reporting qualifications for scientists, working with colleagues from relevant national stakeholder organisations. Participation in national External Quality Assurance and CPD schemes is also an important part of the role.

Jo has published an article which explores whether Healthcare Scientists might avert a histopathology crisis by easing the burden on labs.

Sustainability is important

One of the differences between quality improvement and service transformation is that the latter is meant to be irreversible. Of course there is a need to test and refine but we should plan carefully for the long-term. A service cannot operate effectively if it is entirely dependent on a single trusted individual.

When redesigning the service model we should try to see it through the eyes of other professionals who might support the service in the future.

The role of advanced clinical practitioner should be seen as a career option for other scientists in the department. Consider what would motivate individuals to participate in the new model of care. Think about roles just below and above ‘advanced clinical practitioner’. What are the options for work-based learning and formal education? Involve ‘learners’ in clinical audit, service evaluation and research. Build evaluations of clinical effectiveness and scientific leadership into appraisals so that these skills are recognised and valued by the team.

The example below highlights how a multi-disciplinary team at Guy’s and St Thomas’ Hospitals transformed the care of people with heart valve disease. This new service model has been sustained for 10 years.

Cardiac Clinical Scientists and sustainability in advanced clinical practice:

Cardiac Clinical Scientists in echo have a high level of autonomy in their daily practice and independently report test results. This includes identifying heart valve disease and using quantitative and qualitative assessment to judge the severity of disease. Valve disease is progressive and there are clear European guidelines for follow-up intervals and intervention points. These are based on the severity of disease and whether it is causing symptoms.

Those with valve disease are typically monitored by their GP and a Cardiologist. However, the watchful waiting period is long, and there is evidence that some become lost to follow-up, have tests performed at the wrong times or are referred for intervention too late.

In 2008, Taggu et al. showed that a sonographer-led clinic increased the proportion of patients who were managed according to best practice guidelines from 41% to 92%. There was no compromise in safety. The service also cut unnecessary tests and significantly reduced the number of standard outpatient appointments.

A team at Guy’s and St Thomas’ Hospitals believed that the development of a bespoke multidisciplinary valve disease clinic with greater input from Clinical Scientists would increase adherence to guidelines and improve outcomes. Patients referred to the clinic are stratified so that cardiologists see new patients, those with complex disease, and those close to an intervention threshold. The scientist monitors patients with stable moderate or severe native disease and decides on patient follow-up or referral to the cardiologist based on agreed thresholds. A nurse monitors patients after surgery who do not require echocardiography and manages a helpline.
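For readers who like to see a protocol written out explicitly, the stratification above can be sketched as a simple decision rule. This is a hypothetical illustration only – the field names, categories and safety default are my own, and the protocol published with the article remains the authoritative source.

```python
def triage(patient):
    """Illustrative sketch of a valve clinic stratification rule.

    `patient` is a dict of flags; all names here are hypothetical,
    not the published Guy's and St Thomas' protocol.
    """
    # New, complex, or near-threshold patients are seen by the cardiologist.
    if (patient.get("new_referral")
            or patient.get("complex_disease")
            or patient.get("near_intervention_threshold")):
        return "cardiologist"
    # Post-surgical patients who do not need echo are monitored by the nurse.
    if patient.get("post_surgery") and not patient.get("needs_echo"):
        return "nurse"
    # Stable moderate or severe native disease is monitored by the scientist.
    if patient.get("stable") and patient.get("severity") in ("moderate", "severe"):
        return "clinical scientist"
    # Anything unclassified defaults to senior review, mirroring the
    # deliberately low threshold for cardiologist discussion described above.
    return "cardiologist"
```

The point of the sketch is the structure, not the thresholds: an explicit, agreed rule like this is what allows safe delegation and audit of the scientist-led element of the clinic.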

Mark Squirrell, Principal Cardiac Physiologist, has actively encouraged those in advanced roles to register as Clinical Scientists and to achieve Consultant status. There is a clear philosophy of developing the workforce to meet the needs of a modern healthcare service. Scientists have been supported to develop new skills in clinical assessment through attendance at a bespoke master’s module and through work-based training led by cardiologists. There are regular multidisciplinary meetings to discuss results, outcomes and triage. This planned access to clinical supervision is important in ensuring the quality and safety of the service.

This multi-disciplinary valve disease service has been running effectively for 10 years. Descriptive data from the clinic were recently published by Chambers et al. in the BMJ journal Open Heart. There were 4092 visits to the scientist element of the service (43% of the clinic total) and review by a cardiologist was requested in 10%. The service protocol triggers discussion with the cardiologist at a relatively low threshold for safety. The authors, who include Clinical Scientist Dr Brian Campbell (@DrBrianCampbell), have provided links to their protocol and follow-up assessment sheets with the article, in the hope this supports others to develop high quality clinics.

How can we support each other to deliver better healthcare?

There is clearly a need to improve communication across organisations and regions. We should share our experiences, methods and results. We must also build a published evidence-base which demonstrates the quality and effectiveness of our work in these advanced roles.

In this spirit of collaboration, Hayley Langridge (co-founder of The Clinical Scientist) is working on a presentation with a team of colleagues which explores the steps others have taken in developing a new scientist-led service. This will be shared through our website soon!

We have also started to build case studies of advanced clinical practice and scientist-led services on our website. Please get in touch if you have something to share.

OK, that’s enough from me. It’s time to share your thoughts and experiences. I’ve deliberately posed questions in this blog and would very much like to hear what you think. Use the ‘comments’ on this page to link your thoughts to the blog.

Be brave, and have the courage to get uncomfortable – it’s the best way to learn!

Emma (@TheClinSci)

Every journey begins with a step

Hello! I’m glad you found us. Since we are at the start of this journey we should probably introduce ourselves. I’m Emma and alongside me is Hayley. We think this is going to be quite a journey so we’ll be sharing the navigation.

We’re Clinical Scientists with an idea – a big idea. We have no idea if this will work but we’re going to start something and see what happens.

You can read more about us on the website but here is a potted history of how we got to this point. We both work in cardiac physiology, and Clinical Scientists are a relatively new thing in our profession. To be honest, not everyone in our profession is embracing this new role yet, which means that patients and organisations aren’t realising the full benefits of scientific leadership.

Things are changing, but slowly. We want to increase the pace and scale of the change.

As a profession, we know we have a lot to learn and we’re hoping that you will want to share your knowledge and experience with us. But don’t be fooled by our lack of Clinical Scientist pedigree, we have a lot to offer you in return.

We have well-established areas of advanced practice which are highly valued by patients and organisations. Many of us perform tasks previously done by medics; we report complex investigations, lead clinics, make decisions about investigations and management.

We have a growing number of clinical academics and some of us hold senior positions with national influence.

We want to share our knowledge and experience with you, if you’re interested.

Why are we doing this?

Hayley and I met through the healthcare science community on Twitter. Our initial discussions were around inspiring more cardiac physiologists to consider the equivalence route to becoming a Clinical Scientist. We talked about the need to work beyond hospitals and regions to accelerate the change. If you’re looking for more information on this, take a look at our tweet about equivalence as well as the coaching and equivalence sections of our new website.

In my role at Swansea University, I work alongside healthcare scientists from several disciplines and I really value our discussions. I had also previously supported practitioners in audiology, nuclear medicine and respiratory physiology through the equivalence process. This made me wonder if we should be thinking bigger.

The diversity among Clinical Scientists is staggering – there is real strength in that diversity – and yet, there is so much we have in common. Think about the Good Scientific Practice guide which binds us, and the fact that we are all driven by a desire to provide the best possible healthcare for patients.

There are already many organisations, like the Academy for Healthcare Science (AHCS), working to strengthen our collective voice at the highest levels. They’re doing a great job and we should all support them. So why does it feel, to us at least, as though there is a gap? It seems to us that the gap is at a personal level.

This is the reason we set up @TheClinSci on Twitter, the website and this blog.

We want people to feel a personal and direct connection to a community of Clinical Scientists. We want to build a community that is welcoming, open-minded, and supportive. Together, we can make our roles more visible and drive forward innovative practice at scale.

Tell us about your triumphs and your struggles. Found an amazing resource? Shout about it.

Talk about that great course, webinar or conference you attended.

Share that research publication, audit or QI project.

We care about it all and we want to amplify your voice. We’ve started putting useful stuff on our website and hope to add your contributions soon.

Did you see our Twitter thread about critical appraisal? All the resources from the thread are on our website. Hopefully you’ve already spotted the links in this blog. Click on the green word and this should automatically open up a new tab containing the source.

By sharing work at an individual level we can make new connections, find peer support and engage in healthy debate that challenges our assumptions. Perhaps you’ll find new people to collaborate with or an inspirational person to coach.

What are we going to be blogging about?

If you’re still reading, perhaps you think this could work. Maybe you want to know more about our plan for future blogs.

Our aim is to write for Clinical Scientists and other healthcare professionals who might be interested, including students. We’ll try to use simple and informal language (not always easy for scientists). We’ll blog about anything we think is useful, but the priority is to focus on topics that are relevant to many specialisms, e.g. advanced practice, scientific leadership, quality improvement and research.

In the very near future, we hope to have an interview with someone doing the Higher Specialist Scientist Training (HSST) programme, some audit results, and some tips from a senior leader about committee and board experience.

Eventually, we want to dip our toe in the water of podcasts and videos, but let’s take this one step at a time!

How often can you expect a blog? I think we can start with a blog every 2 weeks. Each blog might take you 10 mins to read but will also have links to useful information and resources that allow a deeper dive for those who want it.

How can you get involved?

A community is built on people.

For this to work, we need lots of people to actively contribute to the idea.

Please post comments below and contact us to suggest topics for future posts. It would be lovely if you could tweet your thoughts about the blog and share it with those you know. Fancy doing a bit more? We’re also open to the idea of people volunteering to write a guest blog. In fact, we might ask the healthcare science networks around the country if they can contribute. Watch this space…

That’s all for now folks! You’ve reached the end of our first blog. We hope this has given you a sense of who we are, why we’re doing this, and where we’re heading.

We’ve taken the first step on our journey. Will you join us? It would be great to have your company.

Until the next time,

Emma and Hayley.

The Clinical Scientist

p.s. Time to share your opinion! Be gentle with us. We’re just two people with an idea (no fancy tech, web-designer or budget).