Improving Appropriateness and Quality in Cardiovascular Imaging
A Review of the Evidence

Abstract
High-quality cardiovascular imaging requires a structured process to ensure appropriate patient selection, accurate and reproducible data acquisition, and timely reporting which answers clinical questions and improves patient outcomes. Several guidelines provide frameworks to assess quality. This article reviews interventions to improve quality in cardiovascular imaging, including methods to reduce inappropriate testing, improve accuracy, reduce interobserver variability, and reduce diagnostic and reporting errors.
Introduction
The goal of cardiovascular imaging is to improve patient outcomes. The data provided by imaging studies identify pathology, guide clinical interventions and management, and may provide prognostic value. Achieving this goal requires a structured process to ensure appropriate patient selection, high-quality, reproducible data acquisition, and timely, accurate reporting which answers clinical questions.1 Guidelines provide a framework of quality measures and recommended laboratory practices to achieve these goals.2 Over the past 3 decades, investigators have identified possible methods to improve quality in cardiovascular imaging. These include techniques to choose appropriate patients, certification/accreditation programs, methods to reduce inter- and intraobserver variability, and processes for efficient reporting.
The purpose of this review is to assess the evidence base for these interventions and identify optimal quality improvement techniques. We performed a search of PubMed (January 1, 1970, to September 1, 2015) for English language articles using the search terms quality, appropriateness, certification, accreditation, interobserver variability, and reporting, combined with the terms cardiovascular imaging, echocardiography, cardiovascular magnetic resonance, computed tomography (CT), and single-photon emission computed tomography (SPECT). Selection for inclusion in this review was based on articles (prospective or retrospective) that described the prevalence of quality markers or quality improvement methods relating to appropriateness criteria, certification, interobserver variability, or reporting of cardiovascular imaging.
Improving Appropriateness of Cardiovascular Imaging
Appropriateness Criteria
The growth in cardiovascular imaging investigations over the past 2 decades prompted the American College of Cardiology Foundation, together with other professional societies, to develop appropriateness criteria for a range of cardiovascular imaging modalities.3,4 They represent an ambition to rationalize imaging to those studies which are cost-effective and likely to benefit patient care. The criteria classify indications for imaging as appropriate, inappropriate, or uncertain based on expert opinion.4
Between 7% and 23% of transthoracic echocardiograms (TTE) are deemed to be inappropriate,5–8 and 28% to 30% of stress echocardiograms (SE) may be performed for inappropriate indications,9–12 whereas only 1% to 3% of transesophageal echocardiograms (TEE) are inappropriate.8,13,14 Between 9% and 44% of cardiac CT studies are inappropriate,15–17 and 4% to 46% of SPECT studies have been classified as inappropriate.18–22 No studies related to cardiac magnetic resonance imaging (CMR) were identified.
An inappropriate imaging study is one that is unlikely to provide incremental information; such studies are less likely to be associated with new diagnoses/abnormalities than appropriate studies. Mansour et al14 found inappropriate studies led to major new abnormalities in 9%, 3%, and 13% of patients undergoing TTE, SE, and TEE compared with 39%, 16%, and 37% of patients undergoing appropriate studies, respectively. Whether inappropriate studies are less likely to lead to a change in management compared with appropriate studies remains debatable, with conflicting data.23 Cardiac stress testing in patients with inappropriate indications is unlikely to identify prognostically significant ischemia. Cortigiani et al10 showed that an SE performed for an inappropriate indication was associated with a low likelihood of demonstrating ischemia and a good prognosis. Doukky et al24 showed that an inappropriate SPECT study failed to predict major adverse cardiac events irrespective of the test result.
The impact of published guidelines on referral patterns for imaging has been limited to date. Rahimi et al25 showed that there was no change in the proportion of TTE classified as inappropriate before versus 1 year after publication of appropriateness criteria (13% versus 15%; P=0.58). Furthermore, Willens et al11 found no change in the proportion of inappropriate SE between 2008 and 2011. Fonseca et al26 performed a meta-regression to identify temporal trends in appropriateness criteria in studies published between 2005 and 2014. There was an improvement in the proportion of appropriate TTE and cardiac CT performed, but no change in appropriateness rates for TEE or SE/SPECT. These data suggest that publication of appropriateness criteria has not had a universal effect on improving appropriateness across all imaging modalities.
Part of the complexity of implementing appropriateness criteria may be unfamiliarity with the classification and the time required to review each imaging request against the guidelines. Bhave et al27 developed a web-based appropriateness criteria classification application. The application uses a decision-tree algorithm to determine the correct classification for a patient. The application showed good agreement with expert physician classification (kappa 0.83) and took 55±38 seconds to determine the correct category. Lin et al28 evaluated the utility of a web-based appropriateness criteria tool in patients being evaluated for coronary artery disease. Physicians were required to use the tool as part of the ordering process for imaging. They found the tool required an average of 137 seconds to reach a recommended appropriateness criteria category. Overall, inappropriate tests fell from 22% to 6% (P=0.0001) over the 8-month study period. However, given the extra time required to use the application, the sustainability of long-term use and applicability to daily clinical practice are uncertain.
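The logic of such point-of-order tools can be illustrated with a minimal decision-tree sketch. The indications, rule set, and function below are invented for illustration only; they are not the actual appropriate use criteria or the published applications:

```python
# Hypothetical sketch of a point-of-order appropriateness classifier in the
# spirit of the web-based decision-tree tools described above. The rules and
# indication strings are illustrative, not the real appropriate use criteria.

def classify_tte_request(indication, months_since_last_tte=None):
    """Walk a small decision tree and return an appropriateness category."""
    appropriate = {"new murmur with symptoms", "suspected heart failure"}
    inappropriate = {"routine preoperative screen, low-risk surgery"}

    if indication in appropriate:
        # Repeat testing without a change in clinical status is a common
        # reason an otherwise-appropriate indication is downgraded.
        if months_since_last_tte is not None and months_since_last_tte < 12:
            return "inappropriate (repeat study without change in status)"
        return "appropriate"
    if indication in inappropriate:
        return "inappropriate"
    return "uncertain"  # not covered by the rule set; defer to expert review

print(classify_tte_request("suspected heart failure"))  # appropriate
print(classify_tte_request("suspected heart failure", months_since_last_tte=3))
```

Embedding a check like this in the ordering workflow is what distinguishes these tools from passive publication of criteria: the classification is returned before the test is booked.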
Several investigators have examined the effect of an active intervention on appropriateness criteria classification. Willens et al11 examined the effect of an educational initiative on appropriateness criteria classification. Physicians were invited to a grand rounds lecture which included a presentation on the rationale of appropriateness criteria and common reasons for inappropriate testing. Compared with preintervention, there was no significant change in the proportion of inappropriate SE postintervention (30.6% versus 32.4%). Gibbons et al29 confirmed that an intervention composed of education at a grand rounds lecture and focused presentations to individual departments had no significant effect on the proportion of inappropriate SPECT tests. In comparison, trainee physicians who received a multifaceted educational approach, including lectures and pocket cards with examples of appropriate and inappropriate indications for echocardiograms, coupled with individual feedback, showed a reduction in inappropriate outpatient echocardiogram requests compared with controls (13% versus 34%; P<0.001).30 In a separate study, the same group showed this intervention was also associated with a reduction in inappropriate inpatient echocardiogram requests.31 However, a sustained program of regular education and feedback may be required to maintain the reduction in inappropriate testing.32
Although guidelines for quality in cardiovascular imaging advocate implementation of appropriateness criteria, there are few proven methods to reduce the rate of inappropriate testing. Isolated didactic lectures or teaching sessions have limited value; a multifaceted approach, including individual feedback, is key. Web-based support tools allow relatively accurate classification of appropriateness, and their interactive nature at the point of ordering an imaging test may help reduce inappropriate testing.
Choosing Wisely
The American Board of Internal Medicine created the Choosing Wisely campaign to help patients, in conjunction with their physicians, choose care that is evidence-based, free from harm, not duplicative, and truly necessary. The ultimate aim was to reduce unnecessary and costly care.33 Professional medical societies covering a range of specialties each created a list of 5 procedures/tests which physicians and patients should re-evaluate. Four of the 5 topics chosen by the American College of Cardiology focused on unnecessary cardiac imaging investigations which have no evidence base or encompass unnecessary repeat testing (Table 1). Each individual cardiovascular imaging society (echocardiography, CMR, CT, and nuclear) produced its own list.34 A unique focus of the campaign was involvement of patients in the decision-making process. Reports on imaging tests explaining “when you need them and when you don’t” were created to help patients understand and engage with their physician about what is right for them. This is particularly important because a survey of 600 cardiologists in the United States identified that cardiac testing may be requested for nonclinical reasons, including patient and referring physician expectation.35 At present, there are no data assessing the effectiveness of this campaign.
Choosing Wisely American College of Cardiology Recommendations for Cardiovascular Imaging
Improving Quality of Cardiovascular Imaging
Individual and Laboratory Certification/Accreditation
Certification for echocardiography, CMR, CT, and nuclear cardiology has been established in both Europe and the United States. The purpose of these schemes is to set a standard for knowledge and competency in cardiovascular imaging. The American Society of Echocardiography examination pass rate varied from 57% to 64% during the first 4 years of implementation, and the equivalent European Association of Echocardiography examination had an 86% pass rate in its first year.36,37 The nuclear medicine technology certification board examination pass rate was 76% in 2001.38 No published data are available for CT or CMR. Although these data show the process has discriminatory capacity, it is unclear whether it results in quality improvement. Data on whether candidates who failed the initial examination subsequently passed might indicate an improvement process, but none have been published to date. However, possession of certification does demonstrate a certain level of attainment encompassing knowledge, understanding, and interpretation ability. Heidenreich et al39 investigated whether echocardiography certification led to improved quality by comparing the ability of left ventricular ejection fraction to predict patient outcomes between certified and noncertified readers. Overall, the relationship between left ventricular ejection fraction and 1-year mortality was modest (area under curve interquartile range 0.56–0.64). The relationship was stronger for certified physicians (area under curve 0.60, 95% confidence interval 0.59–0.61) compared with noncertified physicians (area under curve 0.56, 95% confidence interval 0.55–0.57; P<0.0001). The modest correlation may reflect the limited ability of left ventricular ejection fraction to predict mortality or the accuracy of echocardiographic readings of ejection fraction.
Laboratory certification aims to ensure that the correct procedures and processes are in place to deliver high-quality cardiovascular imaging.40 This encompasses staff training and qualifications, imaging acquisition and reporting protocols, and quality assurance. Schemes are available in the United States, Europe, and the United Kingdom. Nagueh et al41 examined the Intersocietal Accreditation Commission laboratory accreditation program for echocardiography between 2011 and 2013. Overall, of the 3260 facilities which requested accreditation, 2020 (62%) required improvement before accreditation could be given. Common reasons for deferral were related to incomplete or inconsistent reports, inadequate continuing medical education for staff, lack of quality improvement summary, incomplete protocols, and incomplete or absent interrogation of aortic stenosis from multiple views. These data show that the majority of facilities applying for accreditation require some improvement. Although the majority of facilities perceive that the process leads to improved quality parameters,42 data are lacking to show whether the facilities use the feedback from the accreditation process to implement changes and subsequently gain accreditation.
Procedure Volumes and Learning Curves
A learning curve for mastering echocardiography techniques has been well described. The feasibility of obtaining adequate images, the duration of the echocardiographic examination, and the accuracy of reporting are linked to experience and procedure volumes. In the setting of an epidemiological study of 6148 patients over a 4-year period, Savage et al43 found the percentage of adequate echocardiograms achieved by sonographers increased from 28% at 5 months to 81% after 2 years of the study. Weyman et al36 examined predictors of success in the American Society of Echocardiography examinations between 1996 and 1999. The examination includes both knowledge and interpretation of echocardiographic video clips. The number of examinations performed/interpreted per week was the greatest predictor of examination score. The length of training and discipline of the examinee also had a small but significant effect on the score achieved.
Furthermore, Ungerleider et al44 extended these findings to TEE, linking experience (volume of operator studies) to interpretation skills and subsequent outcome. In a study of 621 consecutive patients undergoing cardiac surgery, the proportion of patients with residual defects on leaving the operating room was used as a surrogate of quality. The proportion of residual defects fell from 18% during the first 207 studies to 0% in the next 414 studies. A major limitation of this study is that the proportion of residual defects may be related to case complexity or surgical technique rather than imaging interpretation alone. Picano et al45 also noted that this learning curve applies to interpretation of stress imaging to identify ischemia. Reporting 100 studies with an expert reader improved diagnostic accuracy for coronary artery disease from 61±7% to 83±3%.
Certification/accreditation bodies recognize the importance of procedure volumes and learning curves in providing a high-quality service and maintaining skills. The Examination of Special Competence in Adult Echocardiography in the United States requires fellows applying for certification to have performed 150 TTE and reported 300 studies. Furthermore, recertification requires performing/reporting 400 studies per year for 2 of the preceding 3 years. The Society for Cardiovascular Magnetic Resonance and the Certification Board of Cardiovascular Computed Tomography require reporting of a minimum of 200 studies every 24 months and 150 studies over 36 months, respectively.
Where outcome data are difficult to obtain, defined criteria for the attainment of acceptable individual and departmental standards provide a surrogate measure of quality. The systematic evidence to support this assumption is, however, very limited, and further work to define the relationship between individual and laboratory accreditation and clinical outcome is urgently required.
Interobserver Variability
A range of studies have evaluated the effect of methods to improve accuracy and reduce interobserver variability (Table 2)46–55 in cardiovascular imaging studies, predominantly focusing on echocardiography.
Studies Examining the Effect of Educational Interventions on Accuracy/Interobserver Variability of Cardiovascular Imaging
Left and Right Ventricular Systolic Function
Visual estimation of left ventricular ejection fraction (LVEF) remains a common echocardiographic method of describing left ventricular systolic function. The optimal method to reduce inter- and intraobserver variability has been examined in several studies.46,47,52 Three studies incorporated the use of reference cases as a standard, 2 of which used other imaging modalities. Akinboboye et al46 provided 3 readers immediate feedback after reporting echocardiographic visually estimated LVEF in 60 consecutive patients. The LVEF provided by the readers was compared with LVEF calculated from radionuclide imaging. The correlation between reported LVEF and radionuclide LVEF improved progressively with the number of studies reported (correlation coefficient 0.66 for the first 20 patients compared with 0.78 and 0.81 for each subsequent set of patients, P<0.05). Furthermore, prior experience did not influence the learning curve. However, comparison to LVEF from radionuclide imaging is not contemporary practice. Thavendiranathan et al52 examined the effect of CMR as the reference standard on the interobserver variability of visual-derived echocardiographic LVEF. After initial review of 32 echocardiograms, readers reviewed the same series of echocardiograms with corresponding CMR images and CMR-derived LVEF. Interobserver variability (absolute values) improved from ±12% to ±9.7% after the intervention; P=0.03. The misclassification rate compared with CMR reduced from 56% preintervention to 47% postintervention; P<0.001. Johri et al47 used review of echocardiograms with a range of different LVEF as a reference standard. This was coupled with discussion of factors which may affect LVEF, such as regional wall motion abnormalities, arrhythmia, and so on. Interobserver variability (absolute values) improved from ±14% to ±8% postintervention; P=0.007. 
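The interobserver variability figures above are expressed in absolute EF percentage points. A minimal sketch of that calculation, using invented paired readings rather than data from any of the cited studies, might look like this:

```python
# Illustrative calculation of interobserver variability for visually
# estimated LVEF, expressed as the mean absolute difference between two
# readers in EF percentage points. The paired readings are made up.

def interobserver_variability(reader_a, reader_b):
    """Mean absolute difference between paired LVEF estimates (EF points)."""
    if len(reader_a) != len(reader_b):
        raise ValueError("readers must score the same set of studies")
    return sum(abs(a - b) for a, b in zip(reader_a, reader_b)) / len(reader_a)

# Paired visual LVEF estimates (%) for the same 5 echocardiograms
reader_a = [55, 40, 60, 30, 50]
reader_b = [50, 45, 55, 35, 55]
print(interobserver_variability(reader_a, reader_b))  # 5.0
```

A fall in this value after an intervention (for example, from ±12 to ±9.7 EF points, as reported by Thavendiranathan et al52) indicates that readers' estimates have converged.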
Ling et al50 examined the accuracy of echocardiographic assessment of right ventricular systolic function using visual estimation and quantification methods using CMR as a reference standard. The use of quantitative parameters of right ventricular systolic function compared with visual estimation improved accuracy to identify mild right ventricular systolic impairment from 52% to 84% (P<0.001), although quantification had little effect on the accuracy to identify moderate dysfunction. In addition, inter-reader agreement for classification of the subjects as normal or abnormal right ventricular systolic function improved from 0.43 to 0.66.
Two studies did not use reference cases but provided individual feedback identifying the source of measurement/interpretation error and suggestions for improvement, coupled with lectures to review guidelines and discuss common errors.54,55 Fanari et al55 demonstrated that this intervention improved the correlation between echo- and catheter-based LVEF from 85% to 88% (P<0.001), although catheter-based LVEF has inherent limitations as a comparator. In summary, improved reproducibility of visually estimated LVEF can be achieved by teaching interventions, individual feedback, and review of reasons for misclassification/errors, coupled with review of reference images.
Left Ventricular Diastolic Function
Assessment of diastolic function requires assessment of multiple indices and integration of data to classify the degree of impairment. Therefore, quality improvement requires attention to both aspects. Johnson et al48 assessed accuracy of both technical components (accuracy and completeness of data acquisition) and classification of the correct grade of diastolic function (agreement with an expert reader). After baseline assessment, sonographers and physicians took part in a quality improvement exercise. This was composed of 4 parts. A new multistage protocol, including left ventricular inflow, tissue Doppler of mitral annulus in early diastole, flow propagation of left ventricular inflow, and indexed left atrial volume, was introduced. Lectures covering classification of diastolic function, individual feedback, and case reviews were also completed. Correct classification of diastolic function improved from 44% to 76% postintervention; P<0.001. Improving interpretation and classification of diastolic function is complex because of the requirement for integration of multiple parameters. The use of a multistep protocol coupled with individual feedback/case review was the facilitator of quality improvement.
Valve Disease
Classification of the severity of valve stenosis or regurgitation requires a multiparametric approach. Visual estimation of the severity of mitral or aortic regurgitation is not recommended; guidelines recommend a semiquantitative approach integrating several parameters. Dahiya et al51 found that 17 international expert readers had only moderate agreement (kappa 0.5) when grading the severity of aortic regurgitation. A hierarchical algorithm was developed to guide readers’ use of parameters. Use of this algorithm resulted in good agreement between readers (kappa 0.8), and sensitivity for severe aortic regurgitation (using CMR as the gold standard) improved from 60% to 100%. A similar approach, using a consensus algorithm, improved differentiation of severe and nonsevere tricuspid regurgitation from 81% to 92%; P=0.001.53
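The kappa values quoted here and elsewhere in this section are Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The short sketch below computes it from scratch; the two sets of regurgitation grades are invented for illustration:

```python
# Cohen's kappa: observed inter-reader agreement corrected for chance.
# The severity grades below are illustrative, not data from the cited study.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement: product of each rater's marginal category frequencies
    expected = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

r1 = ["mild", "moderate", "severe", "mild", "moderate", "severe"]
r2 = ["mild", "moderate", "moderate", "mild", "mild", "severe"]
print(round(cohens_kappa(r1, r2), 2))  # 0.5 (moderate agreement)
```

By convention, kappa around 0.4 to 0.6 is usually described as moderate agreement and above 0.8 as very good, which is why the rise from 0.5 to 0.8 after the hierarchical algorithm represents a substantial improvement.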
Improved reproducibility can be achieved by a variety of methods, including comparison with and review of reference images, individual feedback, and review of reasons for misclassification. This can happen at the level of the individual operator or the laboratory. Identification of a reference standard is a key component of a quality improvement scheme because it tests the accuracy of the technique against a gold standard rather than interobserver variability alone. Although for certain parameters comparison with another modality, such as CMR, may be adequate (eg, LVEF), for many parameters identification of a comparator is problematic. In heart valve disease, for example, alternative imaging modalities, such as CMR and angiography, also have limitations. A possible solution is to assess the prognostic value of the data generated, but this may be impractical because it may take many years to yield results.
Integration of a continuous quality improvement process into a laboratory may be challenging in a time- and resource-limited environment. An alternative approach is to focus quality improvement programs on specific individuals. This may include targeting those undertaking a low volume of procedures or performing quality improvement exercises at a set interval, for example, 5 to 10 years after achieving certification. Studies to identify the most effective method of implementing a quality improvement process into routine laboratory practice are required.
Reporting
The report from an imaging study needs to answer the referrer’s clinical question, as well as provide a complete, structured, and accurate reflection of the data acquired. Goldberg et al noted that up to a third of the TTE reports they analyzed did not include enough information to determine LVEF after myocardial infarction.56 Reporting may be facilitated by the selection of predetermined phrases/statements from a drop-down menu within a structured template to populate a report. Frommelt et al57 showed that, compared with digital transcription, this approach reduced the time taken to generate a final report on the electronic patient record system or send it to the referring physician from 23.8 to 1.2 hours; P=0.001. Errors may occur when conflicting statements appear within the same report. For example, if normal mitral valve structure is reported in the main text, then mitral valve prolapse should not appear in the conclusion. Chandra et al58 reviewed 96 772 echocardiogram reports and found contradictory statements in 4% of TTE, 3.6% of SE, and 7.1% of TEE reports. Spencer et al59 noted that, on average, 30 finding codes were used per report and that 83% of all reports contained at least one conflicting statement. They also noted that the number of conflicting statements increased as the number of reports finalized per hour increased. The authors developed a tool that identified conflicting statements within reports and highlighted these to the reader in real time. Overall, readers altered their report 48% of the time after the errors were flagged.
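A rules-based consistency check of this kind can be sketched very simply: maintain a table of finding-code pairs that cannot both be true, and flag any pair present in a draft report before sign-off. The pairs, finding strings, and function below are invented examples, not the actual tool's rule set:

```python
# Sketch of a real-time report consistency check in the spirit of the tool
# described by Spencer et al. The conflicting pairs are invented examples.

CONFLICTING_PAIRS = {
    frozenset({"normal mitral valve", "mitral valve prolapse"}),
    frozenset({"normal LV systolic function", "severely reduced LVEF"}),
}

def find_conflicts(finding_codes):
    """Return every conflicting pair present among a report's finding codes."""
    present = set(finding_codes)
    return [tuple(sorted(pair)) for pair in CONFLICTING_PAIRS
            if pair <= present]  # subset test: both findings are in the report

report = ["normal mitral valve", "mild aortic stenosis", "mitral valve prolapse"]
for a, b in find_conflicts(report):
    print(f"Conflict: '{a}' vs '{b}'")  # flagged to the reader before sign-off
```

Because structured reports are built from a finite vocabulary of finding codes, such pairwise rules are cheap to evaluate at every edit, which is what makes real-time flagging feasible.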
Structured reporting provides consistency for recording of imaging data, as well as the delivery of information to the referrer; however, a process to reduce the incorporation of conflicting statements within the structured report is a vital adjunct to their use.
Learning From Diagnostic Error Reporting
Diagnostic errors may be defined as false-positive (an abnormality is reported but not present), false-negative (an abnormality is present but not reported), or discrepant (an abnormality is present, but the actual diagnosis differs from that reported). Errors may arise from a variety of causes, including patient, administrative, procedural, and technical factors, as well as communication and cognitive errors.60 Benavidez et al61 reviewed 147 000 echocardiograms and found errors in 254 (0.17%) studies. They classified a diagnostic error as a diagnosis that was unintentionally delayed, wrong, or missed. Of these, 78% were potentially preventable. The most common reasons for errors were cognitive factors (38%), including under-interpretation of findings; technical factors (28%), including poor acoustic windows and limitations of the modality; and procedural factors (21%), including incomplete examination or poor imaging conditions. Independent predictors of errors included echocardiograms performed at night (odds ratio 2.6, 95% confidence interval 1.3–4.8), weekend studies (odds ratio 1.6, 95% confidence interval 1–2.6), high study complexity (odds ratio 3, 95% confidence interval 1.7–5.4), and rare diagnoses (odds ratio 6.2, 95% confidence interval 3.8–10.1).
The proportion of studies with errors was low; however, the laboratory had had a quality assurance process in place for many years, and this level may not reflect that found in other laboratories. The data highlight the excess of errors occurring outside traditional working hours, that is, on weekends and evenings, with the majority being preventable, cognitive errors. Therefore, targeting extra resources, such as second reviews by another reader either remotely or the next day, may help reduce these errors.
Conclusions
Improvement of quality in cardiovascular imaging is complex and requires a continuous, multifaceted approach (Figure). Engagement of stakeholders, including referring physicians, imagers, and patients, is essential to reduce inappropriate testing. Individual and laboratory accreditation provide an objective assessment of quality and identify areas for improvement. Educational interventions to improve accuracy should include review of reference cases and individual, personalized feedback in addition to didactic training. Integration of information technology tools at the point of referral, at reporting, and for quality assurance may aid decision-making and improve efficiency. There remains a paucity of data to support the design of optimum quality assurance systems, and this requires redress from the cardiac imaging community.
Interventions to improve quality and/or accuracy at each stage of the patient journey through the imaging laboratory. Adapted with permission from Douglas et al.1 Copyright © 2009, Elsevier.
Disclosures
None.
- Received August 10, 2015.
- Accepted November 11, 2015.
- © 2015 American Heart Association, Inc.
References
1. Douglas PS, Chen J, Gillam L, Hendel R, Hundley WG, Masoudi F, Patel MR, Peterson E.
2. Picard MH, Adams D, Bierig SM, Dent JM, Douglas PS, Gillam LD, Keller AM, Malenka DJ, Masoudi FA, McCulloch M, Pellikka PA, Peters PJ, Stainback RF, Strachan GM, Zoghbi WA.
3. Lucas FL, DeLorenzo MA, Siewers AE, Wennberg DE.
4. Patel MR, Spertus JA, Brindis RG, Hendel RC, Douglas PS, Peterson ED, Wolk MJ, Allen JM, Raskin IE.
5. Kirkpatrick JN, Ky B, Rahmouni HW, Chirinos JA, Farmer SA, Fields AV, Ogbara J, Eberman KM, Ferrari VA, Silvestry FE, Keane MG, Opotowsky AR, Sutton MS, Wiegers SE.
6. Ward RP, Mansour IN, Lemieux N, Gera N, Mehta R, Lang RM.
7. Willens HJ, Gómez-Marín O, Heldman A, Chakko S, Postel C, Hasan T, Mohammed F.
8.
9.
10. Cortigiani L, Bigi R, Bovenzi F, Molinaro S, Picano E, Sicari R.
11. Willens HJ, Nelson K, Hendel RC.
12. Bhattacharyya S, Kamperidis V, Chahal N, Shah BN, Roussin I, Li W, Khattar R, Senior R.
13.
14. Mansour IN, Razi RR, Bhave NM, Ward RP.
15.
16.
17.
18. Gibbons RJ, Miller TD, Hodge D, Urban L, Araoz PA, Pellikka P, McCully RB.
19. Mehta R, Ward RP, Chandra S, Agarwal R, Williams KA.
20. Hendel RC, Cerqueira M, Douglas PS, Caruth KC, Allen JM, Jensen NC, Pan W, Brindis R, Wolk M.
21. Gholamrezanezhad A, Shirafkan A, Mirpour S, Rayatnavaz M, Alborzi A, Mogharrabi M, Hassanpour S, Ramezani M.
22. Doukky R, Hayes K, Frogge N, Nazir NT, Collado FM, Williams KA Sr.
23.
24. Doukky R, Hayes K, Frogge N, Balakrishnan G, Dontaraju VS, Rangel MO, Golzar Y, Garcia-Sayan E, Hendel RC.
25.
26.
27. Bhave NM, Mansour IN, Veronesi F, Razi RR, Lang RM, Ward RP.
28. Lin FY, Dunning AM, Narula J, Shaw LJ, Gransar H, Berman DS, Min JK.
29. Gibbons RJ, Askew JW, Hodge D, Kaping B, Carryer DJ, Miller T.
30.
31.
32.
33.
34. ABIM Foundation. Choosing Wisely: An Initiative of the ABIM Foundation. Accessed November 15, 2015. Available from http://www.choosingwisely.org.
35. Lucas FL, Sirovich BE, Gallagher PM, Siewers AE, Wennberg DE.
36. Weyman AE, Butler A, Subhiyah R, Appleton C, Geiser E, Goldstein SA, King ME, Kaul S, Labovitz A, Picard M, Ryan T, Shanewise J.
37. Fox KF, Flachskampf F, Zamorano JL, Badano L, Fraser AG, Pinto FJ.
38. Dawadi BR.
39.
40. 2013 IAC Standards and Guidelines for Accreditation Echocardiography Accreditation. Accessed April 25, 2013. Available from http://www.intersocietal.org/echo/main/echo_standards.htm.
41. Nagueh SF, Farrell MB, Bremer ML, Dunsiger SI, Gorman BL, Tilkemeier PL.
42. Manning WJ, Farrell MB, Bezold LI, Choi JY, Cockroft KM, Gornik HL, Jerome SD, Katanick SL, Heller GV.
43. Savage DD, Garrison RJ, Kannel WB, Anderson SJ, Feinleib M, Castelli WP.
44.
45.
46.
47.
48.
49.
50. Ling LF, Obuchowski NA, Rodriguez L, Popovic Z, Kwon D, Marwick TH.
51. Dahiya A, Bolen M, Grimm RA, Rodriguez LL, Thomas JD, Marwick TH.
52. Thavendiranathan P, Popović ZB, Flamm SD, Dahiya A, Grimm RA, Marwick TH.
53. Grant AD, Thavendiranathan P, Rodriguez LL, Kwon D, Marwick TH.
54.
55. Fanari Z, Choudhry UI, Reddy VK, Eze-Nliam C, Hammami S, Kolm P, Weintraub WS, Marshall ES.
56.
57. Frommelt P, Gorentz J, Deatsman S, Organ D, Frommelt M, Mussatto K.
58.
59.
60. Benavidez OJ, Gauvreau K, Jenkins KJ, Geva T.
61.
Improving Appropriateness and Quality in Cardiovascular Imaging. Sanjeev Bhattacharyya and Guy Lloyd. Circulation: Cardiovascular Imaging. 2015;8:e003988. Originally published December 1, 2015. https://doi.org/10.1161/CIRCIMAGING.115.003988