Improving junior doctor medicine prescribing and patient safety: An intervention using personalised, structured, video‐enhanced feedback and deliberate practice

Aims: This research investigated the effectiveness of an intervention for improving the prescribing and patient safety behaviour of Foundation Year doctors. The intervention consisted of simulated clinical encounters with subsequent personalised, structured, video‐enhanced feedback and deliberate practice, undertaken at the start of four‐month sub‐specialty rotations.
Methods: Three prospective, non‐randomised controlled intervention studies were conducted within two secondary care NHS Trusts in England. The primary outcome measure, error rate per prescriber, was calculated using daily prescribing data. Prescribers were grouped to enable a comparison between experimental and control conditions using regression analysis. A break‐even analysis evaluated cost‐effectiveness.
Results: There was no significant difference between the error rates of novice prescribers who received the intervention and those of experienced prescribers. Novice prescribers who did not participate in the intervention had significantly higher error rates (P = .026, 95% confidence interval [CI] Wald 0.093 to 1.436; P = .026, 95% CI 0.031 to 0.397), and patients seen by them experienced significantly higher prescribing error rates (P = .007, 95% CI 0.025 to 0.157). Conversely, patients seen by novice prescribers who received the intervention experienced a significantly lower rate of significant errors than patients seen by experienced prescribers (P = .04, 95% CI −0.068 to −0.001). The break‐even analysis demonstrated cost‐effectiveness for the intervention.
Conclusion: Simulated clinical encounters using personalised, structured, video‐enhanced feedback and deliberate practice improve the prescribing and patient safety behaviour of Foundation Year doctors. The intervention is cost‐effective, with potential to reduce avoidable harm.


| INTRODUCTION
"Unsafe medication practices and medication errors are a leading cause of avoidable harm in health care systems across the world". 1 In response to the rise in the proportion of medicine-related deaths and patient safety risks, the World Health Organization (WHO) announced Medication Without Harm as the third challenge of the World Alliance for Patient Safety in 2017. 2 In England, the burden of adverse drug reactions caused by medication errors is significant, consuming 181 626 bed days, causing 712 deaths and contributing to 1708 deaths in a year. 3 One approach to reducing medication-related errors is to focus on prescribing. Prescribing is associated with medical errors, with high error rates reported across the world [4][5][6][7][8] despite arguments that errors are universally underreported. 9 Prescribing medication is an essential and complex skill 10 for an increasingly wide range of prescribers, including doctors, nurses and other healthcare professionals.
Prescribing involves the initiation, monitoring, continuation and modification of medication therapy, demanding a thorough understanding of clinical pharmacology as well as the judgement and ability to prescribe rationally for the benefit of patients. 11 Against this backdrop, a prescribing error has been defined as "an unintentional significant (1) reduction in the probability of treatment being timely and effective or (2) an increase in the risk of harm when compared with generally accepted practice". [ 12 p. 235] Reducing avoidable harm from prescribing errors made by healthcare professionals should be a priority, particularly among Foundation Year doctors (junior doctors in their first and second years of hospital training), who are more likely to make an error. 6,13 This group were found to make more prescribing errors than experienced colleagues. 6

What is already known about this subject
• The effectiveness of simulation-based interventions for improving prescribing in practice is limited.

What this study adds
• Simulated clinical encounters with personalised, structured, video-enhanced feedback using a deliberate practice approach support Foundation Year doctors to prescribe at a level consistent to experienced healthcare professionals.
• The intervention was implemented with consistent findings across different clinical sub-specialty contexts in medicine (nephrology and renal transplantation) and surgery (general and orthopaedics).
• The intervention is cost effective, and patients who were prescribed medication by Foundation Year doctors who did not receive the intervention experienced significantly higher prescribing error rates.

Previous research has identified multiple contributing causes of prescribing errors, including work environment factors (including problems with instructions), task factors (poor availability of drug information at the time of admission, and support systems not available), individual factors (lack of personal knowledge and experience) and patient-related factors (complexity of symptoms). 6,[15][16][17][18][19][20] This evidence also confirms that the causes of prescribing errors are multifactorial and multilevel, which is consistent with current understanding about human factors and error more generally across many safety-critical industries. 21 Previous systematic reviews evaluating the effectiveness of prescribing interventions have been inconclusive, [22][23][24] probably because many of the studies focused only on a single target for intervention, such as increasing prescribing knowledge. Consequently, the authors of these reviews have repeatedly called for further research to address the multiple factors identified from studies on human error, and to develop more sophisticated and multidimensional interventions that can address these contributing factors. 6,[15][16][17][18][19][20] Closer examination of some of the empirical studies included in these reviews identifies a number of methodological concerns. Only 19% of studies in one of the reviews distinguished between different grades of medical prescribers, 22 so identifying what works for novice prescribers such as Foundation Year doctors remains unclear.
Another review 23 also confirmed that only 13% of studies focused on new prescribers, meaning clear differences in the effectiveness of particular types or combinations of interventions cannot be deciphered for different grades or levels of prescriber experience. Many of these studies investigated the effectiveness of interventions in single prescribing contexts rather than across a range, limiting the transferability of the interventions. These single-site studies typically involved pre- and post-test interventions, 23 despite some reported limitations. 24

| Developing expertise through deliberate practice and feedback
Deliberate practice is an instructional approach for developing expertise in which the goal is to develop mental models in the minds of learners for how they go about the planning, execution, monitoring and analysis of complex tasks such as prescribing. 25,26 There are examples of educational interventions underpinned by deliberate practice with particular effectiveness in terms of improved learning outcomes across a range of academic learning tasks 27 and clinical skills. 28,29 A Best Evidence in Medical Education systematic review of high-fidelity simulations, spanning over three decades of research, identified that learning outcomes for simulations that adopted deliberate practice were mixed. 29 However, all were positive when feedback was also provided as a structured activity. 29 The timing of feedback following performance on clinical simulations underpinned by deliberate practice is known to be important, 30 as is the way in which it is given, that is, structured rather than informal or lacking a framework. 31,32 The way that feedback is delivered, that is, the medium through which it is communicated, is also important for improving potential outcomes. 33 Video feedback has been demonstrated to have a positive performance effect across a wide range of areas: sport, music, communication and rehabilitation. [34][35][36][37] Within healthcare, video feedback has already been used to improve prescribing, medical and surgical outcomes. [38][39][40][41][42][43][44] A study involving novices 45 identified that video-enhanced feedback improved clinical skills performance over and above the effect of receiving feedback on skills development directly from an expert. In another study, involving a pharmacist-led video-stimulated feedback intervention, researchers also observed a reduction in prescribing errors among participants, albeit there was no control group to fully assess the real effect of the intervention. 46

The aim of this research was to investigate the effectiveness of simulated clinical encounters using personalised, structured, video-enhanced feedback and deliberate practice for improving the prescribing and patient safety behaviour of Foundation Year doctors.

| Study design
A consistent approach to participant recruitment was adopted across the three study sites. Participants were invited to participate on a voluntary basis via email. The experimental groups were made up of cohorts rotating through the study sub-specialty sites.

| Study 1
This study had two objectives. First, to assess the value of

| Study 2
This study had one objective: to assess the prescribing behaviour of

| Study 3
This study had two objectives. First, to assess the prescribing behaviour of Foundation Year doctors following participation in the intervention at a third research site. The second objective sought to establish the impact of the participating doctors' prescribing behaviour on patients, as one patient is likely to be prescribed items by multiple prescribers. Prescribing data were, therefore, analysed to compare prescribing error rates experienced by patients depending on whether the individual items prescribed had been written by a prescriber belonging to a particular study group (denoted as patient-level data analysis).
To meet these two objectives, Study 3 repeated the intervention in the same hospital as Study 2 but in the Department of

| Data collection
Medication orders across the three study sites were made on written prescriptions. Data were collected over a 4-month rotation period for all prescribers. Pharmacists responsible for the inpatient wards participating in the studies collected data from inpatient drug charts. Data were collected as part of the pharmacists' daily activity, and prescribing errors were therefore also corrected as part of their usual medicines reconciliation process. All prescribed medicine items reviewed by pharmacists were included in the data corpus. It is feasible, however, that some medicine items prescribed to patients were not reviewed, e.g. items prescribed to patients who were transferred out of the study site outside the pharmacy data collection team's working hours. All doctors working in the study sites over the duration of the research were aware that prescribing was monitored by pharmacists as part of the usual medicines reconciliation process. Data collected were peer reviewed by the lead pharmacist at each study site.
Anonymised patient-level data were shared by the lead pharmacist with the study team for analysis.
A standardised prescribing error data collection form (Appendix B) was designed and piloted prior to Study 1 to ensure consistency across all sites. 12 Discharge data were not included, similar to other research investigating written prescribing errors. 6 The form required the pharmacy team to record the date a prescription was made, ward, patient details (initials and hospital number), prescriber details (initials, occupation and grade) and prescribing error details (drug name, dose and frequency, description of error, what, if any, doses were given, whether the error led to actual negative outcomes for the patient and the potential severity of error). The potential severity of error classification system used in the GMC-funded EQUIP study, 6 which was based on prior research, [48][49][50][51][52] was also used for this research. This classification system uses four severities of error: minor, significant, serious and potentially lethal.

| Data analysis
The independent variable in Study 1 measured group membership, differentiating between four groups: Experimental Group 1, Experimental Group 2, Novice Control Group and Experienced Control Group. The dependent variable was a count variable, the number of errors per prescriber. The data were analysed using a negative binomial regression (to adjust for overdispersion of the data) with the Experienced Control Group as the reference group. A break-even analysis was conducted to establish the cost-effectiveness of the intervention based on Study 1. The break-even analysis is detailed in Appendix C.
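The choice of a negative binomial model over a Poisson model can be motivated by a simple dispersion diagnostic: a Poisson model assumes the variance of the counts equals their mean, so when the variance substantially exceeds the mean (overdispersion) a negative binomial model is the usual remedy. A minimal sketch of this check, using illustrative counts rather than the study's data:

```python
# Dispersion check for per-prescriber error counts.
# A Poisson model assumes variance == mean; when variance >> mean
# (overdispersion), a negative binomial model is preferred.
from statistics import mean, pvariance

# Hypothetical error counts per prescriber for one study group
# (illustrative values only -- not the study data).
error_counts = [0, 1, 1, 2, 3, 5, 8, 13, 2, 0, 7, 4]

m = mean(error_counts)
v = pvariance(error_counts, mu=m)
dispersion_ratio = v / m  # ratio > 1 indicates overdispersion

print(f"mean={m:.2f}, variance={v:.2f}, ratio={dispersion_ratio:.2f}")
if dispersion_ratio > 1.5:  # illustrative threshold, not the study's
    print("Overdispersed: prefer negative binomial over Poisson")
```

In practice the regression itself would be fitted with a statistical package; the sketch above only illustrates the rationale given in parentheses in the text.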
The independent variable in Study 2 consisted of two groups: Experimental Group and Experienced Control Group. The dependent variable in Study 2 was the error rate per prescriber, calculated as the number of errors divided by total number of items prescribed by each individual prescriber. Therefore, these data accounted for the total number of items prescribed by all prescribers.
The independent variable in Study 3 consisted of three groups: Experimental Group, Novice Control Group and Experienced Control Group. The dependent variable in Study 3 was, as in Study 2, the error rate per prescriber, calculated as the number of errors divided by the total number of items prescribed by each individual. Again, these data accounted for the total number of items prescribed by all prescribers.
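The dependent variable in Studies 2 and 3 reduces to a simple ratio per prescriber, and the patient-level analysis in Study 3 applies the same ratio grouped by patient instead of by prescriber. A minimal sketch with hypothetical records (the field layout and identifiers are assumptions for illustration, not the study's actual data schema):

```python
from collections import defaultdict

# Hypothetical prescribing records: (prescriber_id, patient_id, is_error).
# Identifiers and layout are illustrative only.
records = [
    ("dr_a", "pt_1", False),
    ("dr_a", "pt_1", True),
    ("dr_a", "pt_2", False),
    ("dr_b", "pt_1", False),
    ("dr_b", "pt_2", False),
]

def error_rates(records, key_index):
    """Error rate = number of errors / total items, grouped by chosen key."""
    items = defaultdict(int)
    errors = defaultdict(int)
    for rec in records:
        key = rec[key_index]
        items[key] += 1
        errors[key] += rec[2]  # bool counts as 0 or 1
    return {k: errors[k] / items[k] for k in items}

prescriber_rates = error_rates(records, 0)  # per prescriber (Studies 2 and 3)
patient_rates = error_rates(records, 1)     # patient level (Study 3)
print(prescriber_rates)  # dr_a: 1 error in 3 items; dr_b: 0 in 2
print(patient_rates)     # pt_1: 1 error in 3 items; pt_2: 0 in 2
```

Grouping by patient rather than prescriber is what allows the Study 3 comparison of error rates experienced by patients, since one patient's chart typically contains items written by several prescribers.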

| Ethics
The study was undertaken and registered at the two NHS Trusts as part of their patient safety and improving quality clinical effectiveness programmes, and Health Education England working across the East Midlands wider quality improvement and innovation initiative (study reference number LEI0085). The study did not require full NHS ethics approval.

| Participants

Participant numbers and grades for Study 1 are summarised in Table 1. Five participants were excluded from analysis as no prescribing data were attributable to them: 2 FY1 and 2 FY2 from Experimental Group 1 and 1 FY2 from Experimental Group 2 (see Table 1).

| Prescribing data
There was a significant difference in error rates between Experimental Group 1, the intervention group who did not receive video-enhanced feedback, and the Experienced Control group (P = .006). The break-even analysis (Appendix C), calculated from the errors observed in the data, demonstrates that the cost of the intervention is less than the potential cost attributed to the observed errors.

| Participants
Fourteen junior doctors were invited and participated in the simulations and were included in the analysis (see Table 2).

| Prescribing data
There was no significant difference in error rates per prescriber between the Experimental Group, comprising Foundation Year doctors, and the Experienced Control group, comprising experienced prescribers beyond Foundation Year (Table and Figure 2). This finding was observed despite a significantly higher level of prescribing activity in the Experimental Group (P < .001, 95% CI 322.63 to 526.99).
There was no significant difference in error rates per prescriber across the various error severity types.

| Participants
The twelve junior doctors rotating through the Orthopaedic specialty were invited to participate. Seven were able to attend the simulations and constitute the Experimental Group. The remaining five could not participate due to clinical commitments and constitute the Novice Control group (see Table 3).

| DISCUSSION

Previous research has consistently identified that all prescribers make errors irrespective of experience, grade or professional background, but that Foundation Year doctors make more errors than other prescriber groups. 6,12 The findings of this research confirmed the same observation. This is the first study, however, to propose and demonstrate an effective intervention for improving the prescribing behaviour of novice prescribers to the level of experienced prescribers. In addition, this research suggests that educational interventions (in our case personalised, structured, video-enhanced feedback) underpinned by a deliberate practice approach are effective in changing and sustaining prescribing behaviours over at least a four-month period, in comparison to education that merely seeks to provide 'a training experience' or promote a change in knowledge alone.

| Patient-level data
The three-study research design was specifically constructed to address concerns raised in educational, training and patient safety research in relation to a lack of evidence for interventions in realworld settings. 55,56 This research demonstrates that complex interventions designed to improve medical education, patient safety and practice can be operationalised in naturalistic healthcare settings without significant difficulty, whilst adopting a robust, experimental study design. That said, the findings from this research need to be investigated in other clinical contexts using a multicentre randomised study design to critically investigate the reproducibility of the intervention.
The impact of feedback on performance is widely reported across many domains and disciplines. 30 The observed error rates, calculated from individuals' daily prescribing data collected in Studies 2 and 3, are notably greater than the group-level error rates reported in the most cited previous study. 6 There are four possible reasons for this that need to be considered when comparing these results with other research. First, error rate data in this research are based on individuals' prescribing activity, whilst previous research 6 reports group-level prescribing activity. Second, the underpinning methodology for collecting the baseline prescribing data was distinctive in our research: data were collected continuously over a four-month period across our studies, whereas a sampling-day approach is the more conventional method of data collection. 6 Third, the sampling approach taken to collect data in the most cited previous study 6 may also have influenced the rates reported there. Finally, nephrology is reported to be the most complex prescribing clinical sub-specialty, 64 and other research 6 does not report findings at sub-specialty level, reducing the potential for learning.
This study demonstrated the intervention could be considered cost-effective as outlined by the break-even analysis reported in more detail in Appendix C. Making sense of cost-effectiveness in a healthcare professions education context is complicated. 65,66 With respect to this study, the analysis was approached by comparing the cost of the intervention to an average cost for a medication error in the first study. Clearly the costs and benefits will vary across different sub-specialty contexts.
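A break-even analysis of this kind compares the one-off cost of delivering the intervention with the expected cost of the errors it prevents: the intervention pays for itself once the avoided-error cost exceeds the delivery cost. A sketch of the arithmetic with hypothetical figures (the study's actual cost inputs are given in Appendix C and are not reproduced here):

```python
# Break-even sketch with hypothetical figures only; the study's
# actual cost inputs are reported in Appendix C.
intervention_cost = 2000.0        # assumed one-off delivery cost (GBP)
avg_cost_per_error = 100.0        # assumed average cost of one medication error
errors_avoided_per_rotation = 30  # assumed reduction vs. the control group

# Number of avoided errors at which the intervention pays for itself
break_even_errors = intervention_cost / avg_cost_per_error

savings = errors_avoided_per_rotation * avg_cost_per_error
net_benefit = savings - intervention_cost

print(f"break-even at {break_even_errors:.0f} avoided errors")
print(f"net benefit per rotation: {net_benefit:.2f} GBP")
# Cost-effective whenever errors_avoided_per_rotation > break_even_errors
```

As the text notes, both inputs will vary across sub-specialty contexts, so the break-even point is context-specific rather than a fixed property of the intervention.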

| LIMITATIONS
Given the voluntary nature of participation for this research from a sample population of Foundation Year doctors, sample sizes in this study were limited. As a consequence, randomising participants to conditions was not possible; participation also had to be balanced against maintaining clinical cover. Similarly, measuring the impact on practice across subsequent sub-specialty rotations, longitudinally, was not feasible: Foundation Year doctors rotate across hospitals and care settings within a region, making consistent data collection impossible.

| FURTHER RESEARCH
Future research should adopt a granular data collection method similar to the approach used in this research, which enabled observations at sub-specialty level and by error rate and error severity.
This research provides a template for how such an intervention and an associated study may be designed and implemented in the NHS. If this intervention is adopted, future research should investigate whether it creates long-term changes in participant learning, practice and patient safety behaviours. Finally, this research involved study sites with written inpatient prescription charts; the effectiveness of this intervention with electronic prescribing systems therefore requires further investigation. 67

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.