Open Access · American Journal of Speech-Language Pathology · Review Article · 16 Nov 2022

What Is Clinical Evidence in Speech-Language Pathology? A Scoping Review

    Abstract

    Purpose:

    Two disparate models drive American speech-language pathologists' views of evidence-based practice (EBP): the American Speech-Language-Hearing Association's (2004a, 2004b) and Dollaghan's (2007). These models discuss evidence derived from clinical practice but differ in the terms used, the definitions, and discussions of its role. These concepts, which we unify as clinical evidence, are an important part of EBP but lack consistent terminology and clear definitions in the literature. Our objective was to identify how clinical evidence is described in the field.

    Method:

    We conducted a scoping review to identify terms ascribed to clinical evidence and their descriptions. We searched the peer-reviewed, accessible, speech-language pathology intervention literature from 2005 to 2020. We extracted the terms and descriptions, from which three types of clinical evidence arose. We then used an open-coding framework to categorize positive and negative descriptions of clinical expertise and summarize the role of clinical evidence in decision making.

    Results:

    Seventy-eight articles included a description of clinical evidence. Across publications, a single term was used to describe disparate concepts, and the same concept was given different terms, yet the concepts that authors described clustered into three categories: clinical opinion, clinical expertise, and practice-based evidence, with each described as distinct from research evidence, and separate from the process of clinical decision making. Clinical opinion and clinical expertise were intrinsic to the clinician. Clinical opinion was insufficient and biased, whereas clinical expertise was a positive multidimensional construct. Practice-based evidence was extrinsic to the clinician—the local clinical data that clinicians generated. Good clinical decisions integrated multiple sources of evidence.

    Conclusions:

    These results outline a shared language for SLPs to discuss their clinical evidence with researchers, families, allied professionals, and each other. Clarification of the terminology, associated definitions, and the contributions of clinical evidence to good clinical decision-making informs EBP models in speech-language pathology.

    Supplemental Material:

    https://doi.org/10.23641/asha.21498546

    Recommendations from the preeminent models of evidence-based practice (EBP) in speech-language pathology (American Speech-Language-Hearing Association [ASHA], n.d.-a, 2004a, 2004b; Dollaghan, 2007) suggest that clinicians should identify and critically appraise evidence from research, clinical, and patient sources, and then integrate these to make the best possible treatment decision. However, ASHA's (2004a, 2004b) and Dollaghan's (2007) models differ markedly in their use of language to describe the sources of evidence that are clinical in nature (e.g., “clinical opinion,” “clinical expertise,” “evidence internal to clinical practice”). As a result, authors of peer-reviewed EBP papers in the field define clinical sources of evidence differently and describe the role of clinical sources of evidence inconsistently. Without clarity or consistency in the terminology or process, speech-language pathologists (SLPs) cannot optimally identify, use, or communicate about the evidence they generate through clinical practice as part of EBP.

    In this review, we sought to clarify the sources of evidence that are clinical, which we unify under the term “clinical evidence,” and to identify the role of clinical evidence in evidence-based decision-making processes. To do this, we conducted a scoping review of peer-reviewed publications, relevant and accessible to American SLPs, that included a description of terms related to clinical evidence. Our primary purpose was to establish a shared language for clinicians to discuss their clinical evidence with researchers, allied professionals, families/clients, and each other. Our secondary purpose was to clarify the congruency of our findings on clinical evidence with prominent models of EBP.

    Evidence-Based Medicine to EBP

    In the early 1990s, the term evidence-based medicine (EBM; Guyatt, 1991) was used to unify the terms and procedures that outlined how to systematically gather, appraise, and use the “best possible evidence” to inform medical decisions (Sackett & Rosenberg, 1995, p. 620). Early papers by Sackett (1997), Sackett and Rosenberg (1995), and Sackett et al. (1996) on EBM suggested that the “best possible evidence” was a practitioner's integration of high-quality medical research with high-quality clinical expertise. By the 1990s, high-quality medical research was well defined according to measurable, gold standard metrics of randomized controlled trial designs (Sackett, 1997; Sackett & Rosenberg, 1995). However, high-quality “individual clinical expertise” was not defined according to the design or outcomes of clinical practice but according to the characteristics of the practitioner themself, including their skills, (tacit) knowledge, and decision-making abilities (Sackett et al., 1996, p. 71). Practitioner characteristics are dynamic, varying within and across practitioners over time, making “individual clinical expertise” very difficult to define, measure, or appraise. The ambiguity of “individual clinical expertise” was further complicated by Sackett et al.'s (1996, p. 71) introduction of other evidence terms that were described as clinical (e.g., external clinical evidence, which Sackett et al. used to refer to “clinically relevant” intervention research). Confusion about the precise nature and role of clinical expertise, as well as other terms of clinical evidence, permeated the translation of EBM to EBP for SLPs.

    In the mid-2000s, three influential publications on EBP were disseminated to SLPs in the United States. Two of these sources were from the professional organization responsible for supporting and empowering SLPs in the United States, ASHA. ASHA synthesized Sackett's framework into a technical report (ASHA, 2004a) and a joint coordinated committee report (ASHA, 2004b). These reports promoted the idea of EBP within the professional community and set goals for the organization. The third source was the book, The Handbook for Evidence-Based Practice in Communication Disorders, in which Dollaghan (2007) proposed an E3BP framework and heavily focused on critical appraisal of evidence.

    ASHA's joint coordinated committee report (2004b) largely adopted Sackett's evidence-based decision-making model, which they represented as an equilateral triangle with “current best evidence,” “clinical expertise,” and “client values” as the vertices (see Figure 1, left). They defined the goal of EBP as “the integration of (a) clinical expertise, (b) best current evidence, and (c) client values to provide high-quality services” (ASHA, 2004b, p. 1). Like Sackett, ASHA defined “clinical expertise” according to the general characteristics of a clinician but did not clarify how to measure/appraise this source of evidence and did not define how “clinical expertise” should be integrated with other sources of evidence to make good clinical decisions. The phrase “best current evidence” stands without either a “clinical” or “research” modifier, which is also observed in the triangle graphic. However, throughout the 2004b report, the term “evidence” is described implicitly or explicitly as meaning research-based evidence, such as in the statement, “the integration of clinical expertise, the best current research evidence, and individual client values” (p. 2). ASHA (2004a) presents a similarly research-focused characterization of “evidence,” stating that, “It is extremely rare for a single study to provide the definitive answer to a scientific or clinical question, but a body of evidence comprising high quality investigations can be synthesized to approach a definitive answer even when, as is likely, results vary across study” (para. 8). The focus of “evidence” here is on amassing research evidence that converges to answer a clinical question and implies that research evidence will always clarify clinical uncertainty (Dodd, 2007). Overall, ASHA's (2004a, 2004b) conceptualization of “evidence” specifically refers to research.

    Figure 1. American Speech-Language-Hearing Association (ASHA) evidence-based practice (EBP) models (note: the diagrams are reprinted with permission).

    The E3BP model (Dollaghan, 2007) conceptualized “evidence” differently. In the introduction, Dollaghan (2007) proposes that there are three separate types of evidence, which should be integrated: “(1) best available external evidence from systematic research, (2) best available evidence internal to clinical practice, and (3) best available evidence concerning the preferences of a fully informed patient” (p. 2). In her conceptualization of E3BP, Dollaghan removed clinical expertise from the model itself (running contrary to Sackett et al., 1996, and ASHA, 2004a, 2004b), because “clinical expertise is not a separate piece of the E3BP puzzle but rather the glue by which the best available evidence of all three kinds is integrated in providing optimal care” (p. 3). Critically, Dollaghan identifies “clinical expertise” as partially encompassing or synthesizing research evidence, as well as family or patient values. As an expansion of Sackett's “individual clinical expertise,” Dollaghan's view suggests that the totality of a clinician's knowledge (including the research evidence they know), decisions, skills, and abilities is “clinical expertise.” However, in Dollaghan's EBP model, clinical expertise is not its own source of evidence, which is a substantial departure from Sackett's and ASHA's models. This discrepancy in the role of clinical expertise fundamentally means that the E3BP model is incongruent with the ASHA (2004a, 2004b) triangle. Subsequent work that uses the term “clinical expertise” rarely clarifies its meaning or which model (if either) is referenced.

    Another incongruency relates to the second component of the E3BP model. Dollaghan's (2007) description of the new term “evidence internal to clinical practice,” or E2, explicitly states that, “E2 is not a synonym for routine measures of patient performance” (p. 115). Dollaghan's E2 is not the data generated during standard clinical evaluation and treatment. Instead, she recommends appraising E2 using single-subject research design metrics. Dollaghan's model suggests that when clinicians are uncertain about a treatment decision, they should examine a clinical intervention using single-subject design procedures such as blinding, comparisons of baseline versus treatment phases, and calculation of Cohen's d to evaluate the magnitude of treatment effects. This experimental view of clinical evidence differs from others who argue for the utility of routine data collection as a deciding factor in evidence-based decision making (e.g., Olswang & Bain, 1994).
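    To make the arithmetic behind this recommendation concrete, the sketch below computes Cohen's d from baseline-phase and treatment-phase session scores using a pooled standard deviation. This is a minimal illustration with hypothetical data and a hypothetical function name, not a procedure drawn from Dollaghan (2007) or from the corpus.

```python
from statistics import mean, stdev

def cohens_d(baseline, treatment):
    """Standardized mean difference between two phases, using a pooled SD."""
    n1, n2 = len(baseline), len(treatment)
    s1, s2 = stdev(baseline), stdev(treatment)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(baseline)) / pooled_sd

# Hypothetical percent-correct scores across sessions.
baseline_scores = [20, 25, 22, 24]
treatment_scores = [35, 42, 48, 50, 55]
print(round(cohens_d(baseline_scores, treatment_scores), 2))  # ~3.87
```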

    Recent Reworkings of Clinical Evidence in Speech-Language Pathology

    EBP within speech-language pathology has evolved since the publication of the ASHA (2004a, 2004b) and Dollaghan (2007) models. The EBP models were reworked relatively recently on ASHA's EBP website (ASHA, n.d.-a), launched between 2019 and 2020, and this reworking highlights the lack of clarity or consensus about the terms associated with clinical evidence. The most striking evolution is that the E3BP model (Dollaghan, 2007) and the ASHA (2004a, 2004b) model have become intertwined in many of ASHA's subsequent nonrefereed resources—despite the two models being substantially different. This entanglement can be seen in the EBP triangle, which was updated from the traditional three points: “current best evidence,” “clinical expertise,” and “client values” (see Figure 1, left) to a new triangle that includes “client perspectives,” “clinical expertise,” and “evidence” (external and internal; see Figure 1, right). Dollaghan's (2007) evidence internal to clinical practice was a distinct source of evidence in her E3BP model. This source was absent from the original ASHA (2004a, 2004b) model, but it now appears grouped with research evidence in the revised ASHA EBP triangle (ASHA, n.d.-a).

    In addition, within the 2019–2020 time frame, a post from the ASHA Journals Academy by Higginbotham and Satchidanand (2019) proposed a diamond EBP model that separated out “clinical expertise & opinion,” “external scientific evidence,” “client–patient–caregiver perspectives,” and “internal evidence.” Unlike Dollaghan's (2007) model, Higginbotham and Satchidanand (2019) retained clinical expertise as part of the model, adding in opinion. Additionally, their model suggests “internal evidence” arises from the evaluation of client data and stresses the importance of collecting client data as part of the ongoing therapeutic process, which is a departure from Dollaghan's (2007) definition of this term. This reconceptualization of “internal evidence” coincided with ASHA's 2019–2020 website revisions that now describe internal evidence as “the data that you systematically collect directly from your clients to ensure that they're making progress. This data may include subjective observations of your client as well as objective performance data compiled across time” (ASHA, n.d.-b, para 2).

    While ASHA's EBP model revisions attempt to clarify the components of clinical evidence, many questions are left unanswered: Should Dollaghan's (2007) view that “clinical expertise is not a separate piece of evidence in the E3BP puzzle” be upheld or revised? How is Higginbotham and Satchidanand's (2019) conceptualization of “clinical expertise” separate from or similar to “clinical opinion”? How does “clinical expertise” differ from “internal evidence”? Which description of “internal evidence” should we accept, Dollaghan's (2007) or Higginbotham and Satchidanand's (2019)? Most importantly: How should clinicians weigh and integrate these different types of clinical evidence into evidence-based decision-making?

    Statement of Need and Objectives

    Without clearly or consistently defining the various sources of clinical evidence, these constructs cannot be meaningfully discussed or used by clinicians seeking to implement EBP. Without a shared language for clinicians to discuss clinical evidence, it is subject to being automatically accepted or dismissed according to an individual's own biases, the very thing that EBP was designed to guard against (Sackett & Rosenberg, 1995). Clear understanding of clinical evidence is especially important for practicing SLPs who commonly find that their search and appraisal of external research evidence is incongruent with their client's diagnosis (Roberts et al., 2020), service delivery model (Justice et al., 2008), or cultural-linguistic and socioeconomic backgrounds (Fannin, 2017; Seymour, 2004). Even when research does match and inform intervention decisions based on a client's diagnosis, service delivery model, or background, the intervention methods are too often insufficiently detailed to allow clinicians to translate research to practice (Dodd, 2007; Ludemann et al., 2017; McCurtin & Roddham, 2012). In these cases, understanding the different types of clinical evidence and its role in making a good clinical decision is essential.

    To clarify and establish the language that clinicians may use to discuss clinical evidence as part of EBP in speech-language pathology, our objectives were to delineate the language used by authors to describe clinical evidence and its role in good clinical decision-making processes. To accomplish these objectives, we conducted a scoping review of the peer-reviewed EBP intervention literature that was relevant and accessible to American SLPs.

    Method

    Design

    A scoping review methodology (Arksey & O'Malley, 2005, p. 22) includes five steps: (1) developing broad research questions, (2) conducting a search of the relevant literature, (3) applying inclusion and exclusion criteria, (4) representing the data visually, and (5) summarizing the data in a meaningful way. Scoping reviews, compared to systematic reviews, are recommended when attempting to broadly identify concepts and definitions of concepts, particularly in new or emerging bodies of literature (Munn et al., 2018).

    Step 1: Research Questions

    Q1: What terms and descriptions of clinical evidence are presented in the intervention literature?

    Q2: What are the types of clinical evidence that are discussed in the intervention literature?

    Q3: What are the attributes of clinical expertise that are discussed as positive or negative in the intervention literature?

    Q4: What is the role of clinical evidence in making good clinical decisions, as described within the intervention literature?

    Step 2: Search Strategy

    We aimed to represent the language used by authors who described clinical evidence in the peer-reviewed speech-language pathology intervention literature. We also wanted to represent descriptions of clinical evidence contained in publications relevant and accessible to practicing SLPs in the United States. Our inclusion criteria were publication of an intervention article in a peer-reviewed speech-language pathology journal between 2005 and 2020 and the presence of a description of clinical evidence. We operationalized a “description of clinical evidence” as any language that explicitly stated what the keyword is, includes, or is characterized by; we detail this process further under Step 4. Online article searches were conducted through ASHAWire (an electronic database of ASHA Journal publications) and Google Scholar (an academic search engine). ASHAWire was selected for its relevance to American SLPs and its accessibility for ASHA members. Google Scholar was selected because this search engine is what many practicing SLPs report using to access research evidence (Muttiah et al., 2011; Thome et al., 2020). Therefore, we adjusted this step of the Arksey and O'Malley (2005) framework to include Google Scholar as part of our search strategy. While a limitation of this approach is that those seeking to verify our search procedures may find slightly different results (Rovira et al., 2019), Google Scholar has been increasingly used in the search strategies of published scoping reviews (Daudt et al., 2013), has been identified as a positive supplement to traditional database searches (Haddaway et al., 2015), and aligns with the traditional goal of scoping reviews, “to map rapidly the key concepts underpinning a research area and the main sources and types of evidence available” (Mays et al., 2001, p. 189). Our search strategy yielded many articles (i.e., 972), suggesting that our search captured most of the relevant and accessible articles containing a description of clinical evidence in the peer-reviewed speech-language pathology literature.

    The article searches were completed in January and February of 2021. For Google Scholar, eight searches were completed starting with the search line, “SLP” AND “Intervention,” followed by one of the exact phrases: “clinical opinion,” “clinical expertise,” “clinical science,” “clinical evidence,” “internal evidence,” “practice-based evidence,” “practice-based research,” and “science-based practice.” We adapted a method of narrowing search results found in Graham et al. (2006) to include only the first 100 results (10 Google search pages) per search string. This process is further detailed under Step 4. The final search strategy was also refined following several initial pilots of the strategy, which revealed that (a) “speech-language pathologist” instead of “SLP” resulted in fewer publications, (b) the term “evidence-based practice” yielded far too many irrelevant publications that did not discuss clinical evidence, and (c) for each search string, the most relevant results were contained in the first four Google search pages, representing the first 30–40 search results. This search strategy was selected to balance “the laborious nature of study identification and the need for comprehensiveness on the one hand, with the need to complete a scoping study in a timely fashion, on the other” (Daudt et al., 2013, p. 5).
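    For readers who wish to approximate our Google Scholar procedure, the eight query strings can be reconstructed as sketched below. The snippet is an illustrative reconstruction of the string assembly only; the searches themselves were run through the Google Scholar interface, not by script.

```python
# Exact phrases appended to the base search line "SLP" AND "Intervention".
PHRASES = [
    "clinical opinion", "clinical expertise", "clinical science",
    "clinical evidence", "internal evidence", "practice-based evidence",
    "practice-based research", "science-based practice",
]

MAX_RESULTS_PER_QUERY = 100  # only the first 100 results (10 pages) were screened

queries = [f'"SLP" AND "Intervention" AND "{phrase}"' for phrase in PHRASES]
for query in queries:
    print(query)
```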

    The search of ASHAWire included the following journals: (a) American Journal of Speech-Language Pathology, (b) Journal of Speech and Hearing Disorders, (c) Journal of Speech, Language, and Hearing Research, (d) Language, Speech, and Hearing Services in Schools, (e) Contemporary Issues in Communication Science and Disorders, (f) Journal of Speech and Hearing Research, and (g) all Perspectives journals published in 2017 or later (at which time the Perspectives journals were considered peer reviewed; Beverly et al., n.d.). The ASHAWire search used the same search phrases as the Google Scholar searches but combined them into a single query. The term “SLP” was also omitted from the search strings because of the field-specific nature of the database. Terms included “intervention” AND (“clinical opinion” OR “clinical expertise” OR “practice-based evidence” OR “practice-based research” OR “science-based practice” OR “clinical science”). We piloted these search strings with the word “therapy” rather than “intervention” in the ASHA journals, which resulted in 130 results that were all duplicates or met exclusion criteria, except for one article that contained a definition that was already referenced within the corpus. Therefore, we did not pursue additional synonymous search terms.

    Step 3: Study Selection

    Article titles and citations were first screened for the following exclusion criteria: publication prior to 2005 or after 2020, publication in a book or thesis/dissertation, publication in a non–peer-reviewed source (which included the Perspectives journals before 2017; Beverly et al., n.d.), written in a language other than English, or related to a field other than speech-language pathology. Articles were also excluded if they studied the application of EBP only related to assessment without mention of intervention.

    Articles that were not excluded by title and citation screening were then appraised at the abstract level against the inclusion criteria. A search was conducted for keywords commonly associated with aspects of clinical evidence: “opinion,” “expert/ise,” “clinic/al/ician,” “practitioner,” “practice,” “knowledge,” “science,” “evidence,” and “internal.” Articles were included for full-text appraisal if they contained one or more of the above terms within the abstract. Articles that met title screening but did not contain an abstract were automatically included for full-text appraisal.
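    A minimal sketch of this abstract-level keyword screen appears below. The stems and the pass-through rule for missing abstracts follow the procedure described above; the function name and example abstracts are hypothetical.

```python
import re
from typing import Optional

# Stems covering the keyword variants above (e.g., "expert" matches
# "expert" and "expertise"; "clinic" matches "clinic," "clinical," "clinician").
KEYWORDS = re.compile(
    r"\b(opinion|expert\w*|clinic\w*|practitioner|practice|knowledge"
    r"|science|evidence|internal)\b",
    re.IGNORECASE,
)

def passes_abstract_screen(abstract: Optional[str]) -> bool:
    """Advance articles with no abstract automatically; otherwise
    require at least one keyword hit to trigger full-text appraisal."""
    if abstract is None:
        return True
    return KEYWORDS.search(abstract) is not None

print(passes_abstract_screen("We examine clinical expertise in EBP."))  # True
print(passes_abstract_screen("Hearing aid gain settings in adults."))   # False
```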

    During full-text appraisal, articles were included in the final corpus if they contained a description of clinical evidence within the full text. To focus on articles that defined clinical evidence for intervention, articles were excluded if the full text focused exclusively on educational methods for teaching EBP, were systematic or meta-analytic reviews of research evidence, focused exclusively on patient/family preferences, or were restricted to assessment/diagnostic concepts in speech-language pathology. Finally, we conducted hand searches of the reference lists contained in articles that were retained in the final corpus by using the same keyword search process described above. The complete search strategy is represented in Figure 2 (adapted from Moher et al., 2009), which may be found in the results.

    Figure 2. Search strategy.

    Step 4: Data Mapping

    We collected and mapped the data from the corpus in four stages. In the first stage, we used the keyword search strategy described above to find and extract the terms and language that authors used to describe clinical evidence. In the second stage, we searched for definitions of the types of clinical evidence being described. In the third stage, we topic coded (Saldaña, 2009) the definition data for contextualized, positive, and negative descriptions of clinical expertise. In the fourth stage, we coded the definition data to summarize how authors reported that clinical evidence should be used to make good clinical decisions. We used NVivo (QSR International Pty Ltd., 2020) to code the data in the third and fourth stages.

    Data Mapping I: Locating Definitions and Terms of Clinical Evidence

    As described in Step 3, we used the keyword search strategy described above to locate any descriptions of clinical evidence within the included publications. When a keyword was found, the two first authors read the paragraph containing the keyword to determine if a description of clinical evidence was present. If a description of clinical evidence was found, we extracted both the verbatim language of the description (typically one to five sentences found proximal to the keyword) and the key clinical evidence term/s that were referenced in the description. For example, a search for the keyword “internal” might have led to the term “internal evidence,” which authors may have explicitly defined as evidence from clinical practice experiences. This would be considered a term related to clinical evidence. If the authors used a keyword (e.g., “internal”) in a way that was not related to clinical evidence (e.g., “internal medicine” or “ratings were internally consistent”), those were not considered a description of terms. Many articles contained more than one term and multiple descriptions of terms. All term-description combinations were compiled into a database by author and year of publication. These comprised the raw data of the study.
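    To illustrate the structure of these raw data, each term-description combination can be represented as a record keyed by author and year. The schema below is a hypothetical sketch; the example entry reuses a description reported later in our results (Donaldson & Stahmer, 2014, p. 271).

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    """One term-description combination extracted from the corpus."""
    author: str
    year: int
    term: str          # the clinical evidence term referenced
    description: str   # verbatim language, typically 1-5 sentences

corpus_record = Extraction(
    author="Donaldson & Stahmer",
    year=2014,
    term="internal evidence",
    description="systematic and repeated data collection",
)
```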

    Data Mapping II: Determining the Types of Clinical Evidence

    To determine the types of clinical evidence described in the literature, we summarized the raw definition data according to the source of clinical evidence, or how the evidence was generated. The two first authors independently read through the definitions of clinical evidence and applied topic coding procedures to generate a list of categories that reflected how the clinical evidence was generated. They then compared their frameworks to finalize a set of four categories that described the context in which the clinical evidence was generated, the role of the person or persons generating the clinical evidence, and the procedures that were used to generate the clinical evidence. At this level of analysis, we evaluated the interrater reliability of these methods to classify the types of clinical evidence. A total of 38.5% of definitions/descriptions (30/78) were coded for interrater reliability at 96.7% agreement, which was interpreted as strong. Discrepancies were resolved through discussion and consensus by the first three authors.
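    The reliability figure reported here is simple percent agreement across the double-coded definitions. A minimal sketch of the arithmetic, with hypothetical codes, is shown below; 29 matches out of 30 reproduces the 96.7% we report.

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of paired codes on which two coders agree."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical: 30 double-coded definitions with one disagreement.
codes_a = ["clinical expertise"] * 29 + ["clinical opinion"]
codes_b = ["clinical expertise"] * 30
print(round(percent_agreement(codes_a, codes_b), 1))  # 96.7
```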

    Data Mapping III: Positive and Negative Attributes of Clinical Expertise

    To answer Research Question 3, we hand coded the extracted data for the (positively or negatively) polarized descriptions of clinical expertise. In context, these were often phrased as recommendations to clinicians. For example, the following quote from Ebbels (2017, p. 221) is phrased as a recommendation (e.g., “clinicians need to…”) and contains several words/phrases (e.g., just anecdotes, flawed, mistakes) that signal negative polarity: “Thus, clinicians need to recognize that clinical practice which relies on just anecdotes and experience could be flawed and lead to clinical experience which consists of ‘making the same mistakes with increasing confidence’”. In the next example, the word important signals positive polarity: “Evidence from real world clinical practice can add important data to the E3BP knowledge base” (Chan et al., 2013, p. 335). Both excerpts were included in the corpus.

    The process of mapping the data aligned with grounded theory methodology (Chun Tie et al., 2019). We used an open-coding framework (Williams & Moser, 2019) to code all the data for statements that described positive and negative aspects of clinical expertise or what clinicians should or should not do. The first three authors completed three rounds of coding using the constant comparative method (Kolb, 2012). They independently coded the data, met to develop a common codebook, used the codebook to recode the data, and then met to compare coding. In the first round, the authors independently developed topic codes (Saldaña, 2009) for each qualifying statement. A given statement could be assigned multiple codes, as commonly occurred with lists of positive or negative descriptors of clinical evidence. After each round of coding, the first three authors met to discuss the data and develop a shared codebook. This process repeated after the coders generated the second round of codes. After the third round of coding, the frameworks converged for both codebooks. During the third meeting, authors came to consensus for all codes. Finally, axial coding was conducted by grouping conceptually similar codes into broader categories (Williams & Moser, 2019) that described the positive and negative aspects of clinical expertise. Thus, the final conceptual hierarchy was categories that contained codes.

    Data Mapping IV: The Role of Clinical Evidence in Decision Making

    To answer Research Question 4, we used the same methodology as in Data Mapping III to code for descriptions of the role of clinical evidence in relation to other types of evidence when making a clinical decision. We operationalized descriptions of the “role of clinical evidence” as its function or recommended/expected use in evidence-based decision making. Function statements were descriptive; they described how clinicians commonly use one or more aspects of clinical evidence in decision-making processes. Statements of recommended/expected use, on the other hand, were prescriptive and described how clinicians should use one or more aspects of clinical evidence to make evidence-based decisions. For example, if an author stated that clinicians should prioritize research evidence over clinical evidence, that would be considered a statement of recommended/expected use, as it describes how the author believes clinical evidence should be used in decision making. As in Data Mapping III, codes that were conceptually similar were grouped into broader categories (Williams & Moser, 2019) that described how clinical evidence is used to make good clinical decisions.

    Results

    A total of 972 articles were identified, including 259 articles identified through ASHAWire and 713 articles identified through Google Scholar. After removing duplicates, the application of inclusion and exclusion criteria returned 78 published works, which were reviewed in full (see Figure 2 and Supplemental Material S1). Of note, the exclusion criteria barred from the corpus some influential works that were not peer-reviewed articles (e.g., ASHA, n.d.-a, 2004a; Dollaghan, 2007) or were from other fields (e.g., Ericsson & Lehmann, 1996).

    Question 1: Terms and Definitions of Clinical Evidence

    Terms

    Across the 78 articles, 98 terms were used to describe aspects of clinical evidence (see Supplemental Material S2). Approximately one third of articles (23/78) used more than one term. Some terms were used across authors (e.g., “clinical expertise”), whereas others were unique to one publication (e.g., “craft-based knowledge,” Justice, 2010, or “indirect evidence,” Dijkers et al., 2012). Descriptions of clinical evidence were often presented as recommendations for ideal clinical practice patterns or descriptions of the sort of clinical evidence that should be used to inform EBP, rather than thorough investigations of these concepts. For example, Fey (2006) described the clinical importance of clinician self-evaluation and integration of clinician experience but did not explicitly define what either term meant.

    Descriptions of Clinical Evidence

    The meaning of individual terms was not consistent across articles. Authors used similar terms to represent dissimilar concepts. For example, authors in a series of interdisciplinary publications describing the SCIRehab Project (Brougham et al., 2011; Gassaway et al., 2009; Gordan et al., 2009; Horn et al., 2015; Whiteneck & Gassaway, 2012, 2013; Whiteneck et al., 2009) described “practice-based evidence” as a “research methodology” by which aspects of the scientific method were used to collect retrospective data from practice contexts—not as a method that is used to collect data in session by practitioners. In contrast, publications in speech-language pathology journals (n = 7) described “practice-based evidence” as procedures by which SLPs generate clinical hypotheses and systematically collect data during treatment to test hypotheses (Baker & McLeod, 2011; Crooke & Olswang, 2015; Donaldson & Stahmer, 2014; Riedeman & Turkstra, 2018; Smith, 2018; Swift et al., 2017). Of these seven articles, four suggested that “practice-based evidence” also encompassed application of the scientific method, knowledge integration, clinician skill, decision-making, and/or internal verification/validation of clinical data.

    Authors also described different terms analogously or nested terms in varying ways within an EBP hierarchy. For instance, Baker and McLeod (2011) described clinical expertise as the integration of research, clinical, and patient/family evidence. Others suggested this term encompassed various aspects of clinician skills, decision-making, experiences, attitudes, and opinions (e.g., Kamhi, 2006; Thome et al., 2020). Iacono and Cameron (2009) described “clinical opinion” as a type of “internal evidence” (p. 237), yet Donaldson and Stahmer (2014) presented “internal evidence” as analogous to “practice-based evidence,” which was subsequently defined as “systematic and repeated data collection” (p. 271). While these various terms are implicitly linked, the relationships or differences between these clinical evidence terms were not explicitly stated for greater than 80% of terms used in the corpus (n = 80/98 terms).

    Question 2: Types of Clinical Evidence

    In Research Question 2, we sought to evaluate the types of clinical evidence that were described in the literature. Three broad types of clinical evidence arose from the analysis, although research evidence was at times described as clinical evidence.1 To describe these types using consistent terminology, we selected the most-used terms to represent each of these types: clinical opinion, clinical expertise, and practice-based evidence.

    Clinical opinion (n = 39) described an intrinsic construct—the dynamic, implicit viewpoints of researchers who stated or implied that a given clinician (frequently themselves) was an expert. Self-proclaimed expert opinion was often accompanied by a limited description of the expert's qualifications (e.g., number of years in practice) but did not otherwise describe how those qualifications led to expertise. Some authors described a negative perception of clinical opinion, describing it as implicit or biased (e.g., Cardin & Hudson, 2018; Goldstein et al., 2007; Justice, 2010; McLeod & Baker, 2014; Muñoz, 2017; Selin et al., 2019). Clinical opinion also included attitudinal constructs like a clinician's personal values, which authors typically described as positively impacting clinical outcomes (e.g., Roulstone, 2011).

    Clinical expertise (n = 54) also represented a construct intrinsic to the clinician, in which they dynamically integrate multiple sources of knowledge and gain technical skills to select appropriate measures, engage in consistent practice, and collect data. While clinical opinion was often described as lacking rigor, such as having preferences without data, descriptions of clinical expertise often discussed clinicians as being reflective and self-aware of their own knowledge. While clinical opinion was related to clinicians' biases, personal values, and opinions, clinical expertise described the mixing of a clinician's knowledge, prior clinical experiences, choice to practice systematically, use of demonstrable technical skills in intervention, and means of measuring intervention outcomes.

    Practice-based evidence (n = 28) described static clinician-generated client data that are interpreted to test a clinical hypothesis or answer a clinical question. Unlike the prior two constructs, practice-based evidence was described as extrinsic to the clinician: the product that is generated from a clinician's systematic measurement, aggregation, and interpretation of data. While clinical expertise may include skilled data collection, practice-based evidence is the data itself: clinical evidence derived from clinical practice. This is unlike practice-based research, which is research evidence and thus generated through the steps of the scientific method and externally validated. Though practice-based evidence is generated systematically, it is clinician generated and does not follow all steps of the scientific method.

    Question 3: Positive and Negative Aspects of Clinical Expertise

    We sought to determine how authors described aspects of clinical expertise positively and negatively. Sixty-eight of 78 articles included a positive/negative description of clinical expertise. Six categories describing aspects of clinicians were identified (see Supplemental Material S3): (a) interpersonal skills and attributes, (b) technical clinical skills, (c) experience, (d) means of measuring intervention outcomes, (e) tacit knowledge/bias, and (f) systematicity. Operational definitions and counts for the codes and categories may be found in Supplemental Material S3.

    Interpersonal Skills and Attributes

    All 18 publications that described interpersonal skills did so by noting the value of positive interpersonal skills and attributes. Most of the articles described the desirability of certain personality traits, such as empathy or compassion. Some described expert clinicians as those who have positive communication skills and the ability to work successfully as a member of professional teams (n = 5).

    Technical Clinical Skills

    All 22 articles discussing this category positively described specific clinical or procedural skills that made clinicians effective, such as clinicians' ability to work within different practice contexts and the fidelity of intervention (n = 12).

    Experience

    Most of the 23 articles described experiences that were important for expertise development, including intentional mentorship and clinical experiences (n = 13), educational history and clinical training (n = 23), and the growth in clinical proficiency that results from intentional practice experiences over time (n = 7). Far fewer articles (n = 4) described experience in a negative light. These negative descriptions suggested that accumulated clinical experiences were insufficient for expertise development.

    Measuring Intervention Outcomes

    Articles that referenced measuring outcomes (n = 28) always positively characterized the importance of collecting data on the outcomes of intervention. This category was frequently related to the terms “practice-based evidence” and “practice-based research.” A subset of publications (n = 11) recommended using a research methodology, such as single-case experimental design, to measure the outcomes of intervention.

    Tacit Knowledge and Behavior

    This category was very polarized. Nineteen articles reported positive, negative, or mixed views of tacit knowledge and behaviors, which were sometimes described as personal or clinical biases. This construct frequently referenced the insights, intuitions, and impulses of clinicians. Approximately half of the articles in this category (10/19) described such biases or tacit knowledge as something valuable or positive—often as an attribute that allowed expert clinicians to respond quickly and fluidly within sessions or to individualize treatment for their clients. The other half of articles (11/19) described tacit knowledge negatively. Of these, five suggested tacit knowledge is biased, unvalidated, or unreliable. Six characterized tacit knowledge as a problematic foil to EBP because it leads to habitual or uncritiqued practice.

    Systematicity

    The largest proportion of articles (33/68) described the value of clinical practices that are structured, reflective, or deliberate in nature. All these articles described systematic practice positively. Nearly half of the 33 articles for this category (n = 16) described aspects of organized thinking or stepwise problem-solving that were important for systematic practice. Twelve articles described the importance of explicit, organized knowledge that developed from repeated clinical experiences or testing clinical hypotheses. Seven articles suggested that self-reflection processes support the development of systematic practice. Six articles suggested that documenting intervention methods is important for systematically determining the reason for a particular outcome. Few articles (4/68) described how generating clinical questions or hypotheses supports the organization of explicit clinical knowledge.

    Question 4: The Role of Clinical Evidence in Making Good Clinical Decisions

    We found that 53 of 78 articles included a description of the role of clinical evidence in making a good clinical decision. Overall, the role of clinical evidence in good clinical decision-making was described in relation to five other categorical constructs. These categories (summarized in Supplemental Material S4) included (a) integrating multiple sources of information, (b) aligning with professional culture, (c) operating under consensus recommendations, (d) prioritizing client and family values, and (e) prioritizing research evidence.

    Integrating Multiple Sources of Information

    All 28 member articles for this construct described the importance of clinicians weighing and integrating multiple sources of information to make good clinical decisions. Typically, authors who described the importance of integrating multiple sources of information did so by listing the types of information clinicians should consider. These lists included the types of evidence in the ASHA (2004a, 2004b) EBP model (research evidence, clinical expertise, and patient/family values), or terms like those described in the results to Question 2. However, other sources of information in decision making were also included in these lists, such as the clinician's theoretical perspective (e.g., Fey, 2006), clinical opinions (e.g., Kamhi, 2006), context of service delivery (e.g., Swift et al., 2017), applicability given local resources (e.g., Gillam & Gillam, 2006), local policies (e.g., McCauley et al., 2009), the clinician's own analysis and problem-solving (e.g., Goldstein et al., 2007), professional education (e.g., Iacono & Cameron, 2009), and practice-based evidence (e.g., Dodd, 2007; Kamhi, 2011). Overall, the articles asserted that clinicians should draw from a broader base of factors to make decisions, which differs from the traditional three sources of the ASHA (2004a, 2004b) EBP triangle. Authors do not appear to be limiting their recommendations to the former (ASHA, 2004a, 2004b) or current ASHA models (n.d.-a), or the Dollaghan (2007) model of EBP when describing the types of evidence involved in EBP.

    Aligning With Professional Culture

    The 13 articles with membership in this category described how clinicians should integrate advice from local clinical expertise or broader expert communities when making clinical decisions. Five articles described how clinicians should access the clinical expertise of other SLPs in their local practice context. Four recommended that clinicians engage in communities of practice or community–academic partnerships to create new knowledge. Three articles described the importance of seeking other allied professionals' clinical expertise as part of interprofessional practices.

    Operating Under Consensus Recommendations

    Seven articles identified the importance of aligning one's clinical practices with practice recommendations or guiding statements from professional organizations (e.g., ASHA Practice Policies).

    Prioritizing Client and Family Values

    Ten articles discussed the importance of integrating client and family values into decision making. This included recommendations for individualizing intervention approaches and using family-centered intervention approaches.

    Prioritizing Research Evidence

    Most articles (38/58) described the importance of clinicians knowing, translating, and applying research evidence to make good decisions. Authors often implied, or in some cases explicitly stated, that research evidence should be weighted more heavily in decision making than clinical evidence because of its reliability, validity, and processes of external verification by peer review. For instance, 11 of 38 articles specifically stated that research evidence was the most important part of the EBP triad, and four suggested that when research evidence and clinical evidence are at odds, research evidence should be prioritized. In the articles that discussed prioritizing research evidence, authors suggested that sources of clinical evidence should only be used when research evidence was lacking (n = 8). Many of the articles (21/38) suggested that expert clinicians ought to be familiar with current literature results and theory. Authors often (12/38) indicated that good clinical decisions hinge on clinicians knowing how and when to translate research results or theory into practice.

    Summary of Results

    Across the 78 articles reviewed, authors used 98 different terms to describe aspects of clinical evidence. These terms were not used consistently, and they held different meanings for different authors and articles. Three types of clinical evidence were discussed in the literature: clinical opinion (self-proclaimed skill, belief, or personal bias), clinical expertise (involving demonstrable skill and explicit knowledge), and practice-based evidence (systematic data or information collection from application of an intervention). Authors often discussed positive and negative aspects of clinical expertise as enmeshed with descriptions of the clinician themselves, whereas practice-based evidence was described as extrinsic to the clinician: the product that was generated from systematically measuring patient outcomes and reflecting on them over time.

    Negative descriptions of clinicians were associated with clinical opinion (self-proclaimed skill/bias), but positive descriptions were connected to clinical expertise (demonstrable knowledge/skill). Descriptions of clinical opinion referenced a practitioner's views based on a single source of information that lacked rigor, such as an attitude or years of experience. Conversely, descriptions of clinical expertise referenced multiple, positive characteristics of clinicians who are skilled, are experienced, engage in practice systematically, measure the outcomes of intervention, and who are strong communicators with positive interpersonal traits.

    Good clinical decisions were characterized as those that integrated multiple sources of information, capitalized on the expertise of others, aligned with professional consensus statements and professional culture, and prioritized research evidence, as well as client/family values. Frequently, authors suggested that integrating many sources of information led to good decision making, often recommending more than the traditional three sources of evidence referenced by ASHA (n.d.-a, 2004a, 2004b) or Dollaghan (2007). Poor clinical decisions were described as based on just one or few sources of evidence. While different types of clinical evidence took on different roles within each of the five categories that marked good clinical decisions, positive attributes of clinical expertise were described in all categories of good clinical decision making.

    Discussion

    Our results clarify the language used by authors in the field to describe clinical evidence and provide initial insight into clinical evidence as a distinct and important part of the EBP models of speech-language pathology. Based on our findings, we suggest concepts, terms, and processes that SLPs may use to begin discussions of clinical evidence with others, and we identify areas of future research exploring the place and role of clinical evidence in EBP models.

    Clinical Evidence Is Evidence

    While early models of EBP used “evidence” to refer implicitly or explicitly to external research evidence, we found that authors described clinical evidence in the literature as a distinct source of evidence, one that was separate from research evidence. While the three types of clinical evidence identified (clinical opinion, clinical expertise, and practice-based evidence) have been historically enmeshed in the literature, we found that they are separable and definable constructs. By explicitly identifying the components that differentiate these three types of evidence from each other and from other sources of evidence, clinicians may identify, collect, appraise, and discuss their clinical evidence to determine its value relative to other forms of available evidence.

    Clinical Evidence Can Be Appraised and Improved

    Clinical Expertise

    Our results support the importance of differentiating between clinical opinion as a poor-quality, unidimensional source of clinical evidence, and clinical expertise as a high-quality, multidimensional source of clinical evidence that is intrinsic to the clinician. Clinical expertise was described as stemming from observable/demonstrable categories of clinical skills, knowledge, and practices. This finding does not support the merger of clinical expertise with clinical opinion in the Higginbotham and Satchidanand (2019) diamond EBP model but suggests that clinical expertise should be appraised and prioritized over clinical opinion as a distinct source of evidence. As an explicit construct, clinical expertise can be appraised and intentionally improved, whereas clinical opinion represents a clinical belief or attitude that is not regularly updated through reflection and intentional practice. Clinicians may reflect on their skills, attributes, communication, experience, systematicity, and measurement practices to self-assess their expertise broadly or to generate new insights and evidence about a specific population or practice. Clinicians may use this information to explicitly discuss their clinical expertise for a particular case with other clinicians, professionals, researchers, and clients/families. Furthermore, the features that characterize clinical expertise are tangible outcomes that may be explicitly taught in preservice training programs and continuing education offerings.

    Practice-Based Evidence

    Authors within the corpus described practice-based evidence to include reviewing and identifying patterns in local data (e.g., King et al., 2007; Wheeler-Hegland et al., 2009), comparing local data to published research outcomes or confidence intervals (Gillam & Gillam, 2006), and validating local data/interpretations (e.g., Cardin & Hudson, 2018; Cirrin & Gillam, 2008; Douglas et al., 2019; Fey, 2006; Kamhi, 2011; McCurtin et al., 2019). We found broad author support for the regular, systematic collection of data from clients, and many authors communicated the importance of integrating this source of evidence to make good clinical decisions. Practice-based evidence represents data extrinsic to the clinician, distinguishing it from intrinsic clinical expertise. This source is also different from research evidence because the scientific method is not strictly followed, and the design/goals of regular data collection necessarily differ from the design/goals of data collected for research. Nevertheless, systematic data-informed clinical decision making is a recommended clinical practice that predates even the early ASHA statements on EBP (e.g., Olswang & Bain, 1994) and aligns with early descriptions of evidence-based medicine that reference clinical data (Sackett & Rosenberg, 1995). Clinicians may find it useful to collect, appraise, and discuss their de-identified practice-based evidence within communities of practice with other SLPs and allied professionals. As part of community–academic partnerships, clinicians may present data summaries to stakeholders to justify scale-up research on a method/intervention of consequence or to evaluate the implementation of a method/intervention in the community. Preservice training programs and continuing education offerings should (a) teach how to map different data collection methods to client goals/objectives and intervention methods, (b) provide practice opportunities to check/appraise different forms of practice-based evidence, and (c) teach descriptive, quantitative, and qualitative methods of aggregating and analyzing data collected from multiple clients.

    Although authors described practice-based evidence clearly and positively, there was little consensus in the literature as to its place in the EBP model. Future research should consider (a) where practice-based evidence fits within current EBP models, (b) how one should appraise practice-based evidence, and (c) how one should integrate information from practice-based evidence into clinical decisions.

    Good Clinical Decision Making Is Multidimensional and Distinct From Clinical Expertise

    Our results indicate that authors referenced multiple sources of evidence to appraise and integrate when making good clinical decisions, many more than the traditional three sources of evidence referenced in the EBP models of speech-language pathology (ASHA, 2004a, 2004b; Dollaghan, 2007). Decisions based on one or few evidence sources were described negatively by authors and aligned with decisions based on clinical opinion. Clinicians should avoid making decisions based on any single stream of evidence (e.g., only personal beliefs, only research evidence, only practice-based evidence). High-quality clinical evidence (i.e., clinical expertise and practice-based evidence) was included among the multiple sources of evidence to consider in the construction of a good evidence-based decision.

    We found that authors described clinical expertise as important to, but distinct from, the process of clinical decision making. While EBP models have historically collapsed clinical expertise and clinical decision making into one construct (e.g., Dollaghan, 2007) or described the process of clinical decision making as an indistinct integration stage (ASHA, 2004a, 2004b), differentiating these two constructs is important. Clinical expertise is defined by the collective skills, knowledge, attributes, experiences, and practice patterns that accumulate over time as intrinsic characteristics of effective clinicians and gradually improve through reflection, practice, appraisal, and discussion. In contrast, clinical decision making is a process by which clinicians iteratively seek, appraise, weigh, and assemble multiple sources of evidence that converge toward one, or sometimes, several paths of action. Without acknowledging clinical expertise as distinct from decision-making processes, clinicians cannot generate, appraise, or discuss it as a source of evidence. If clinical expertise is not explicit and distinct, it cannot be integrated with the multiple other sources of evidence that characterize good evidence-based decisions. Therefore, clinical expertise is not the quality of the clinical decision or the act of deciding; it is a form of evidence to be integrated before clinical decision making may begin.

    Clinical experts may collect and appraise practice-based evidence as another source of evidence to integrate when making decisions. Practice-based evidence may be collected to monitor the outcomes of an evidence-based decision, to test hypotheses, or to serve as the foundation of a discussion with others. When the integration of multiple sources of evidence points to more than one possible decision, referencing clinical expertise and collecting practice-based evidence can clarify uncertainty. If uncertainty persists, practice-based evidence can be collected to test/monitor the effectiveness of the decision.

    By adopting the shared language of clinical evidence and clinical decision-making presented in this review article, SLPs may better discuss their clinical evidence with researchers, families, allied professionals, and each other. Those responsible for revising and disseminating EBP models should consider how our findings inform future model development. Future research should consider testing the acceptability and implementation of these findings by practicing SLPs.

    Strengths and Limitations of This Scoping Review

    This review represents the first attempt to clarify and summarize the published language describing clinical evidence, making this work highly relevant to clinicians and researchers interested in EBP. Our methods were developed and conducted by a diverse team comprising researchers with expertise in scoping reviews and clinicians working in academic and community contexts. We made every attempt to present our methods with transparency and conduct the review rigorously, with iterative checks in place to reduce bias. However, fully controlling for reviewer bias is impossible, and our findings must be interpreted with consideration for such biases.

    This work emphasized models of EBP developed by American organizations for American SLPs. Our search of the ASHA journals aligned with this focus, but our results may not be generalizable to the EBP practices of non-American SLPs. The use of Google Scholar benefited the breadth of our search and identified articles not found in our database search. However, the use of Google Scholar limits the precise replicability of our results because of the search engine's ranking algorithm. Because we did not search every known database, our search strategy may have overlooked some articles that would have met inclusion criteria. While we piloted synonyms for search terms (e.g., “therapy”), we did not use all possible search terms (e.g., “implementation”), which may have also limited the number of articles we found. We fulfilled Items 1–4 and 6–22 of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist but did not meet Item 5, as the search protocol was not registered at the start of this project. Finally, this study evaluated the way clinical evidence is conceptualized and discussed by researchers. While this is a limitation inherent to a scoping review of the literature, these views may not be indicative of clinicians' perspectives—an irony that is not lost on the authors.

    Conclusions

    Clinicians may use the language of clinical evidence described in this review article to discuss the quality of their clinical evidence with other clinicians, researchers, clients/families, and allied health professionals. Researchers may expand on this work by exploring and writing about EBP using the vocabulary defined in this review to ensure consistency across researchers and publications. The descriptions of clinical evidence proposed herein should be useful in providing a unifying vocabulary to generate “collaborative, critical discourse” (Osborne, 2010, p. 463) that supports the knowledge, skill, and decision making of clinicians. Unifying the language of clinical evidence should improve our ability to investigate and apply EBP models and to engage in conversations about clinical evidence from a place of shared understanding. It is essential that researchers and clinicians engage in meaningful conversations about clinical evidence that are founded on mutual respect and appreciation. Critically, we hope that this work leads clinicians and researchers to discuss, appraise, and refine sources of clinical evidence and the processes for making good clinical decisions, together.

    Data Availability Statement

    The raw data summary tables supporting our results are included in this submission as supplemental materials.

    Acknowledgments

    This work was supported by Midwestern University. The authors would like to acknowledge the importance of each of the authors' preservice and postservice clinical and research training, the open lines of communication between academic and clinical faculty, and the informal and formal community-based, clinical–academic partnerships that contributed to the development of this review article. Such experiences represent the translational and community ideals of the TAG lab, and were central to promoting communication and perspective taking between clinicians and researchers to ensure a high-quality, representative review.

    An * is used to denote an article that met inclusion criteria within the corpus.

    References

    Footnote

    1. The research evidence described as clinical evidence was often termed practice-based research (n = 29). Practice-based research designs were typically aligned with clinical research trials or research conducted in clinical settings, including retrospective studies and feasibility designs. While this is certainly essential and valuable work, these designs can already be evaluated based on research evidence metrics and were thus outside the scope of this review article.

    Author Notes

    Disclosure: The authors have declared that no competing financial or nonfinancial interests existed at the time of publication.

    Correspondence to Schea Fissel Brannick:

    Schea Fissel Brannick is an Associate Professor at Midwestern University. George W. Wolford was under contract as an Assistant Professor at Appalachian State University while revising this review article. Laura L. Wolford was under contract as an Assistant Professor in the Department of Communications Sciences and Disorders, MGH Institute of Health Professions, Boston, MA while revising this article. Kayleigh Effron is currently a practicing speech-language pathologist working in the greater Phoenix, AZ, area. Schea Fissel Brannick and George W. Wolford equally share the first author position, as each contributed essential and substantive ideas, content, and writing efforts to the development and submission of this article.

    Editor-in-Chief: Erinn H. Finke

    Editor: Holly L. Storkel
