SMARTer Approach to Personalizing Intervention for Children With Autism Spectrum Disorder

Purpose: This review article introduces research methods for personalization of intervention. Our goals are to review evidence-based practices for improving social communication impairment in children with autism spectrum disorder generally and then to describe how these practices can be systematized in ways that personalize intervention, especially for children who respond slowly to an initial evidence-based practice. Method: The narrative reflects on the current status of modular and targeted interventions on social communication outcomes in the field of autism research. Questions are introduced regarding personalization of interventions that can be addressed through research methods. These research methods include adaptive treatment designs and the Sequential Multiple Assignment Randomized Trial. Examples of empirical studies using these research designs are presented to answer questions of personalization. Conclusion: Bridging the gap between research studies and clinical practice can be advanced by research that attempts to answer questions pertinent to the broad heterogeneity in children with autism spectrum disorder, their response to interventions, and the fact that a single intervention is not effective for all children.

Autism spectrum disorder (ASD) represents a widely heterogeneous group of disorders that presents a challenge to diagnosis and treatment. This is particularly true for language acquisition in children with ASD. Delays in expressive and receptive language become apparent in children with ASD between 18 and 36 months (Mitchell et al., 2006). Whereas about half of all preverbal children with ASD become verbally fluent by the time they enter kindergarten, the other half have significant delays, with about 30% remaining "minimally verbal" (fewer than 20 functional, flexible words, per consensus terminology of the National Institutes of Health workshop; National Institute on Deafness and Other Communication Disorders, 2010; Tager-Flusberg & Kasari, 2013), despite children's access to recommended interventions. In addition, associated features of ASD (e.g., challenging behaviors, cognitive ability) further complicate symptom presentation and response to treatment (Lecavalier et al., 2017; Lerner & White, 2015). The issue is how we can improve outcomes while recognizing the vast heterogeneity among children with ASD. Speech and language therapists are faced with a linguistically, culturally, cognitively, and behaviorally diverse group of children with ASD and must address this heterogeneity in both the selection of interventions and the measurement of response to interventions. Because it is clear that a single intervention cannot be effective for all children with ASD (Kasari & Smith, 2013), therapists must have many different tools in their toolbox and know whether and when to apply these different tools to maximize outcome. Indeed, most therapists use eclectic approaches when working with children, but there are few guidelines to help the therapist select tools that are known to be effective, given the symptom presentation of a particular child.
Research can address issues regarding which intervention approach to use for a given child, whether and when to change an intervention, and how intervention should be augmented based on child response. These goals, however, rely on an understanding of both a variety of intervention approaches and the methodology required to make these complicated treatment decisions. Adaptive interventions, and more specifically Sequential Multiple Assignment Randomized Trials (SMARTs), allow researchers to address the heterogeneous intervention needs of individuals on the autism spectrum (Collins, Murphy, & Bierman, 2004). Through flexible, yet systematically applied, combinations of treatment components (e.g., dose, length of treatment, treatment augmentation) allowed by SMART designs, speech and communication researchers can begin to answer many essential questions regarding personalization of treatment.
Therefore, our goal in the sections below is to describe the types of issues that therapists grapple with in providing appropriate treatment to children with ASD on their caseload and the ways in which research can begin to provide guidelines for these decisions. We first describe the evidence behind a focus on social communication behaviors as critical targets of early intervention for children with ASD and their importance to later cognitive and language outcomes.
We provide examples of existing intervention approaches that target elements of social communication (i.e., behavioral and naturalistic developmental behavioral intervention [NDBI] approaches) and evidence for this work from intervention studies from our lab and others. We then highlight newer methodological approaches aimed at personalizing interventions that hold promise for improving child language outcomes.

Social Communication Skills to Target With Intervention
Given that social communication skills represent a core deficit in the behavioral repertoire of children with ASD, these skills should be important early intervention targets (Kasari, Gulsrud, & Jeste, 2016; Mundy, 2016). Social communication skills include nonverbal gestures and language used to share experiences with others (joint attention skills) and request help from others (requesting skills). Most impaired are joint attention gestures used to initiate interactions with others, such as showing a toy to a parent, pointing to indicate something of interest (e.g., a plane flying overhead), and alternating looks between the parent and an object with shared positive affect. Children with ASD also use less commenting language, that is, language indicating they are establishing a joint attention focus. While requesting gestures can be delayed, they often are used more frequently by children with ASD than joint attention gestures, which are typically very delayed, and even absent for some children (Mundy, Sigman, Ungerer, & Sherman, 1986; Mundy, Sigman, Ungerer, & Sherman, 1987).
Other social communication behaviors are also impaired in children with ASD and include joint engagement and play skills. Joint engagement refers to the duration of time a child is truly connected with another person. During joint engagement, the child is recognizing the other by both responding and initiating communicative behaviors. Longer periods of joint engagement provide the child with more opportunities to communicate. Working toward joint engagement is important because too often children with ASD are object engaged to the exclusion of others (Adamson, Bakeman, Deckner, & Romski, 2009; Kasari, Gulsrud, Paparella, Hellemann, & Berry, 2015). Finally, play is often restricted in children with ASD, with the greatest challenge in demonstrating flexible, creative play skills. Many young children can learn to play functionally with toys but struggle with creative play at symbolic levels, such as pretending that dolls have life or substituting one object as if it is something else (e.g., a block as a hat; Ungerer & Sigman, 1981). Play in particular is important as it serves as a context (the topic) for the child to learn other skills, such as communication skills.
All of these targets are vital for later cognitive and language outcomes. Several studies demonstrate that children who have more joint attention skills also have better expressive language skills (Mundy et al., 1987; Toth, Munson, Meltzoff, & Dawson, 2006; Wetherby, Watt, Morgan, & Shumway, 2007; Yoder, Stone, Walden, & Malesa, 2009). In longitudinal studies of children with ASD, children's early joint attention skills predicted their later expressive language skills (Charman et al., 2003; Loveland & Landry, 1986; Mundy, Sigman, & Kasari, 1990). Children who play at higher symbolic levels also demonstrate better cognitive and language skills (Kasari, Gulsrud, Freeman, Paparella, & Hellemann, 2012; Sigman & Ruskin, 1999). Therefore, it is essential that these early precursors to social communication be active targets of intervention for clinicians working with children with ASD.
Behavioral Interventions

Traditional behavioral intervention programs, such as those built on discrete trial training (DTT), emphasize highly structured, adult-directed instruction; in these programs, social communication targets such as joint attention gestures and play are less often emphasized as important targets of intervention.
The evidence for DTT is that children with ASD can make significant strides (reportedly about 30%), but approximately 20% make no developmental gain, and another 50% make only moderate gains, not reaching normative standards despite many hours per week of individual instruction over several years (Eldevik et al., 2010). When intervention programs are based on DTT, the largest gains are in cognitive domains (e.g., IQ), but the smallest gains are in social communication and spoken language (Smith, Groen, & Wynn, 2000). The skills that children gain through intervention with DTT are, instead, related to the intervention targets specific to DTT (e.g., labeling, preacademic tasks), areas that are often domains included in cognitive assessments. Therefore, although DTT may effectively target specific skills, it is not a comprehensive intervention that can address all core deficits of individuals with ASD. Augmentation with other targeted interventions is necessary for a child to make gains in several domains of impairment.

Naturalistic Developmental Behavioral Interventions
Other early intervention approaches apply NDBI approaches and emphasize changes in social communication and language skills instead of IQ (Schreibman et al., 2015). Interventions grounded in these combined behavioral and developmental models can improve language outcomes in preschoolers with ASD. There are many potential social communication treatment targets, and many studies have examined the social communication abilities of children with ASD, especially joint attention (e.g., Dawson et al., 2004; Mundy et al., 1990). Joint attention constitutes many different skills, and these skills, as noted above, are evident early in children's development, mostly appearing in the second year of life. Increasingly, intervention studies have also taught joint attention skills to children with ASD. Many studies, particularly single-case design (SCD) studies, focus on only one joint attention skill (e.g., a point to share). In these types of research designs, researchers can carefully track changes in this single skill as they intervene. Joint attention, however, includes multiple skills, and children use all of these separately or together. For example, they may point to share attention to something, comment on the event, alternate their gaze back and forth between the event and the person, and share affect. It is the flexible and integrated use of these skills subsumed under joint attention that ultimately allows for a child to fluidly engage with others.
While NDBI models are more likely to teach gesture use prior to or in addition to spoken language, the variability in NDBI models is notable. Despite intervening on social communication targets, many do not measure social communication outcomes specifically (e.g., joint attention gestures). Because we are concerned with social communication development specifically in this review article, we describe below an NDBI focused exclusively on the social communication impairment in young children with ASD. This NDBI model is a tightly connected social communication module that focuses on multiple skills within social communication, including joint attention and requesting gestures, play skills at the child's developmental level, and joint engagement between adults, peers, and the child. Outcomes of social communication skill are measured and reported.
Joint Attention, Symbolic Play, Engagement, Regulation (JASPER) intervention. JASPER emphasizes the development of social communication, specifically joint attention and joint engagement, and has yielded significant gains in these targets. The intervention itself can be delivered by therapists, parents, and teachers. JASPER studies have also reported greater gains in language and cognition over control interventions in both the short and long term, at 1 month, 1 year, and 5 years postintervention (Kasari, Paparella, Freeman, & Jahromi, 2008; Shire et al., 2017).
The goals of JASPER are to improve the joint engagement between a social partner and the child with ASD through specific strategies that capitalize on the child's play with objects. Play skills are directly targeted and taught following a developmental progression of play skills. While developing play routines aimed at increasing joint engagement, other communication skills are taught, including requesting and joint attention skills (language, eye contact, and gestures, i.e., point, show, and give with the intent to share), spoken or augmented language (via tablet applications), and behavior and emotional regulation. By developing play routines and increasing joint engagement, the child is present and attending to the activity with another social partner. Hence, the more children are engaged in an activity with another individual, the more opportunities they have to learn from the activity by sharing language or gestures (i.e., demonstrating joint attention skills) with each other.
The first study of JASPER tested the components of teaching joint attention or play skills separately and examined the effect of content (i.e., joint attention or play) on standardized cognitive and language tests. We learned several things from this randomized controlled trial of 3- to 4-year-old children with ASD (Kasari, Freeman, & Paparella, 2006; Kasari et al., 2008). First, when the content was taught (joint attention or play), children gained significantly more of those skills postintervention. Second, joint engagement improved for children receiving both experimental interventions (joint attention or play) compared with the contrast intervention, which was an ABA discrete trial intervention that did not address joint attention or play. Third, both experimental interventions resulted in greater language gains on standardized tests than did the contrast intervention. Fourth, children who had the least amount of language to begin with (fewer than five words) gained the most language if they received the joint attention intervention. Thus, these data are consistent with longitudinal studies finding that joint attention skills predict later language abilities. Finally, the model was mediated through therapists, and parents were not involved in the intervention. However, joint engagement improved with both experimental interventions and generalized to parents during a parent-child play interaction. A main takeaway from this study was that teaching content on joint attention and play skills was associated with later social communication and language outcomes and, therefore, should be included as intervention targets in early intervention programs.
We followed this initial study by combining the two content areas into one intervention called JASPER, reflecting important targets of intervention. We tested JASPER by teaching parents the intervention to use with their toddlers in order to address whether children with little language would benefit from JASPER (Kasari, Gulsrud, Wong, Kwon, & Locke, 2010). We found significant increases in joint engagement, responding to joint attention, and functional play skills (Kasari et al., 2010) in the experimental group compared with the wait-list control group, and these skills were maintained over a year of follow-up. A number of other randomized controlled JASPER studies followed, each addressing different questions. One question was to what extent parents needed direct hands-on coaching to improve child outcomes or if they would benefit as much from receiving parent education on how to intervene on social communication with their child, but without their child present. In two different studies, we found that hands-on coaching of parents was critical to positive outcomes in joint attention, play, and engagement in their children. One study focused on low-income preschoolers at home (Kasari, Lawton, et al., 2014) and the other, on toddlers in the clinic setting. We have also tested JASPER with preschoolers with intellectual disability and ASD in schools (Goods, Ishijima, Chang, & Kasari, 2013) and minimally verbal school-aged children (5-8 years of age) in the clinic (Kasari, Kaiser, et al., 2014). Both of the aforementioned studies were mediated through therapists, although parents also received intervention instruction with the minimally verbal school-aged group (Shire et al., 2015). Finally, JASPER has been transferred to teaching assistants and teachers in three studies (Chang, Shire, Shih, Gelfand, & Kasari, 2016; Lawton & Kasari, 2012; Shire et al., 2017). These studies have yielded very consistent results.
Joint engagement has improved significantly across all of the studies compared with control/contrast conditions. Changes in target skills can be made in as few as 20-30 sessions over 6 to 12 weeks. Improvements in joint attention (responding and initiating), play skills (functional and symbolic), and joint engagement have been noted, with differences among studies related to child age and initial abilities. Importantly, therapists, teachers, and parents can all reach adequate levels of fidelity in order to implement JASPER across multiple settings and routines.

Heterogeneity in Intervention Response
There is considerable evidence that joint attention and play skills can be taught to children with ASD, that many different people can implement the intervention, that changes can be made in short windows of time, and that improvements can have lasting effects on later language and cognitive abilities. However, one of the biggest challenges for speech and communication professionals implementing interventions with children on the autism spectrum is identifying an intervention approach that best suits that child's needs given his or her heterogeneous presentation of autism symptoms and associated features. This heterogeneity is also seen in response to intervention. Some children make rapid change, whereas others progress slowly or make no gains. What accounts for this heterogeneity, and what elements of the intervention might be associated with this variability in response? In order to develop a SMART research design, clinicians and researchers must have some knowledge about the impact of heterogeneity on treatment. This knowledge can be derived from clinical intuition or prior research findings. Below, we discuss the child characteristics that we have found to impact social communication treatment response.

Child Characteristics That Predict Treatment Outcomes
In choosing an intervention for a particular child, therapists consider several factors. Most therapists would consider how appropriate an intervention would be for a particular child based on areas of need (e.g., joint attention, symbolic play), age, and ability level of the child prior to intervention. These child factors might predict child success in the intervention.

Age of the Child
Studies find that younger children make greater progress in intervention than older children, although the reasons for this are not completely clear (e.g., Rogers et al., 2012). Therapists do consider the age of the child when selecting an intervention, particularly when the target is based on development. For example, one cannot expect that children will improve on symbolic play skills when they are not yet showing functional play skills. Indeed, short-term studies (e.g., 3 months) of toddlers have yielded improvements in functional play skills, whereas for preschoolers, symbolic play has improved (Kasari et al., 2010; Kasari, Lawton, et al., 2014). In choosing an intervention target, it is important to assess the child's developmental readiness to learn a particular skill. For example, it is likely the child needs to learn first how to play at increasingly more complex functional play levels before being taught symbolic skills.

Initial Social Communication and Language Abilities of Children
Having some skills prior to beginning intervention seems important to children's progress in intervention. Greater skill in initiation of joint attention allows children to continue to develop their social communication skills. Similarly, having more language helps children to continue to progress in their language. At least one study found that having even one word (expressive language) prior to intervention resulted in better later language outcomes than having no words (Schreibman & Stahmer, 2014).
Both the age of the child and initial abilities have been associated with intervention outcomes. These factors are often considered in predicting treatment success. In addition to predictors of success for children with ASD, some factors may be associated with better or worse progress in a particular intervention. These factors, such as age or initial ability, therefore, can also be used to identify subgroups of children for whom an intervention works best. When considered as a means to subgroup children, the factors are often referred to as moderators of the intervention. Finding these moderators of intervention (e.g., age, initial ability) is critical because interventions may then be adapted to better suit the needs of these particular subgroups. An understanding of critical intervention moderators is essential in order to design a successful SMART. Systematic adaptation of intervention using a SMART design is possible through an understanding that certain subgroups of children do not adequately respond to intervention. The SMART, then, allows researchers to test intervention modification that may address the unique needs of these subgroups. One method of intervention modification is to use multiple, social communication interventions that target specific skills that may benefit a child. These intervention "modules" (i.e., intervention components that target specific, yet distinct skills) may then be combined to optimize treatment for a child.
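The two-stage logic described above, randomize to a first-line module, check early response, then adapt for slow responders, can be illustrated with a minimal sketch. Everything below is hypothetical: the two-stage structure, the use of JASPER and DTT as first-line options, and the two augmentation tactics are placeholders for illustration, not a published protocol.

```python
import random

def first_stage_treatment():
    # Stage 1: randomize each child to one of two first-line modules
    # (module names are illustrative, not a specific trial's arms)
    return random.choice(["JASPER", "DTT"])

def second_stage_treatment(first_treatment, early_responder):
    # Stage 2: early responders continue their first-line module;
    # slow responders are re-randomized to an augmentation tactic
    # (both tactics here are hypothetical examples)
    if early_responder:
        return first_treatment
    return random.choice([
        first_treatment + " + increased dose",
        first_treatment + " + added module",
    ])

# A child's path through the two stages: randomized at entry,
# assessed for early response, then assigned a stage 2 tactic.
stage1 = first_stage_treatment()
stage2 = second_stage_treatment(stage1, early_responder=False)
```

Comparing outcomes across the treatment sequences that this logic generates is what allows a SMART to identify which adaptation works best for which subgroup of slow responders.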

Combining Modular Interventions to Optimize Treatment Gains
One promising intervention modification is to replace the existing modular intervention, or add an additional one, to differentially target the same or associated skills. The combination of modular interventions may then create an optimal treatment sequence for a child. There is evidence in other areas of childhood disorders that flexibly applying modules (or combinations of strategies) from different interventions can result in improved outcomes over a singular intervention approach (Weisz et al., 2012). For example, children who are anxious and receiving cognitive behavioral therapy for anxiety disorder may, on average, improve by reducing anxiety over the course of the intervention. However, for some children to make significant progress, other modules may be needed, including ones addressing behavior problems or depression. Thus, flexibly applying modules based on individual needs can significantly improve outcomes. Unfortunately, we have few examples of combined modular approaches in interventions for children with ASD.
In the case of autism treatment, interventions improve communication for many children on average, but we have limited information on what to do when children respond slowly to their first-line treatment and when a change in intervention should be made. In ABA models such as DTT, a common recommendation for children who do not meet their goals is to increase dose or intensity of intervention (i.e., number of intervention hours; Lovaas, 1993). Studies in which participants receive high-intensity ABA intervention have reported more favorable outcomes than studies in which participants have received lower intensity ABA intervention (Eldevik et al., 2010). In some studies, higher intensity of intervention (i.e., greater hours per week) has correlated with more rapid skill acquisition (Linstead et al., 2017), but there are exceptions (Smith et al., 2000). Apart from one early study (Lovaas, 1987), there have not been experimental tests of whether increasing intensity is a successful strategy for improving communication and language outcomes, especially when children are responding slowly to intervention.
An alternative recommendation in ABA might be to switch to different ABA instructional formats (e.g., embedding teaching trials in activities during the daily routine). However, little information is available on the efficacy of this approach, and recommendations seldom include more recently developed NDBIs such as JASPER. In an early study by Kasari et al. (2006), when DTT was used to prime the targets of JASPER, children showed large language gains (Cohen's d = 0.63-1.2). Thus, the combination of DTT and JASPER may be most potent for improving essential core social communication impairments in young children with ASD. However, this is not typically a first-line treatment approach due to the increased training costs and need for flexible expert clinicians who can systematically combine approaches. Moreover, all children in the study received the same combination of treatments; thus, it is unknown if all children needed both priming and JASPER or whether some children's delays could be improved by only one of the methods. It is possible that the more intensive combined approach may better serve slowly responding children, thus suggesting an increase in resources only when indicated by slow response.

Manipulating the Modules: Additional Options for Treatment Modification
The optimal combination of modular interventions offers clinicians incredible flexibility in managing the treatment of children on the spectrum. However, other elements of intervention can be modified in addition to the treatment approach; these may also affect a child's treatment response and should be included in a clinician's toolbox. These could include the dose of intervention, who delivers the intervention, and where the intervention takes place. Unfortunately, we know little about how these potential intervention modifications impact treatment response for groups of children. The sections below address questions regarding dose and method of delivery and how inclusion of these variables may be important elements of a SMART research design.

Treatment Dose
The marked delay in language acquisition despite quality intervention for some children with ASD has led many clinicians and parents to question if their child is receiving an adequate dose to produce therapeutic benefit. The definition of dose in intervention research, however, is highly variable. Generally, definitions of dose fall under broad categories such as dose frequency (e.g., number of hours of intervention during a specific time; Lovaas, 1987; and the number of teaching episodes per session; Julien & Reichle, 2016; Warren, Fey, & Yoder, 2007) or diversity of services received (e.g., mental health services, medical evaluation and assessment, speech therapy; Shattuck, Wagner, Narendorf, Sterzing, & Hensley, 2011), among others (Warren et al., 2007).
Although lack of consensus in the definition of dose can make summaries (i.e., meta-analyses and systematic reviews) of findings across studies difficult to synthesize (Hudry & Dimov, 2017), arrival at a singular, clear definition of dose is not as important as identification of appropriate dosing specifications for particular children. In the context of the real world, it is essential that a clinician's practice is responsive to the specific needs of a child. Critical questions that concern parents and researchers alike are "Is more truly better?" and "What is more?" Some potential dosing questions for therapists concern how long a session should last (e.g., 30 min, 60 min), how often sessions should occur (one to five times per week), or how the dose is distributed across children (individual or group therapy). If more intense therapy is needed, can dose be increased by teaching others, such as teachers and parents, to deliver the intervention?

Methods of Intervention Delivery
There are a number of potential methods of intervention delivery including location of service (e.g., home, school), service provider (e.g., clinician, parent, peer), mode (e.g., Internet, computer, or device assisted), and format of service (e.g., individual, group, group composition).
Despite an understanding that increasing the number of contexts (e.g., service delivery agents, settings) over which a service is provided often increases generalizability of a skill for a child (e.g., adding parent training; Rao, Beidel, & Murray, 2008; Schreibman et al., 2015), few interventions have been tested across several different methods of delivery. Too often, there is a gap between research and practice such that research is conducted in a highly controlled environment, and we cannot answer questions regarding the potential effectiveness of an intervention should it be delivered in the community, by community service providers.
When considering different methods of intervention delivery to improve generalization, researchers must evaluate the appropriateness of the match between method of intervention delivery and the service/intervention to be delivered. Some studies use the researcher or therapist as the person who delivers the intervention in context but may also train others to deliver the intervention. Social skills intervention programs, for example, have used several different service delivery agents, including adults (i.e., direct instruction by adult researcher or therapist), peers (i.e., training peers how to engage with a child with ASD), and combination approaches (i.e., instruction by an adult in a group format; Reichow & Volkmar, 2010). Although variation in agent is important, exploration of efficacy in context is also essential. The most naturalistic context for development of social skills for elementary-age students is likely the school playground, where children with ASD are most likely to be found isolated or unengaged (Kasari, Locke, Gulsrud, & Rotheram-Fuller, 2011). For this reason, researchers have extended our understanding of social skill program efficacy by testing adult-mediated (e.g., Kasari, Rotheram-Fuller, Locke, & Gulsrud, 2012; Kasari, Dean, et al., 2016; Kretzmann, Shih, & Kasari, 2015) and peer-mediated interventions in the school setting in order to evaluate efficacy of social skill programs in the most natural context.
Similarly, there are now more examples of teachers or paraprofessionals teaching specific interventions to be delivered in schools for preschool-aged children with ASD (Lawton & Kasari, 2012; Shire et al., 2017; Stahmer et al., 2015). In the preschool setting, teachers deliver the intervention with the researcher or therapist coaching or training the teacher in a specific intervention.

How Can Research Help Clinicians Personalize Therapy?
With the number of therapeutic tools available to clinicians who serve children with unique needs in speech and communication, clinicians naturally wonder how to make important clinical decisions regarding approach to, and changes in, a child's treatment plan. In order to answer this question, clinicians need access to appropriate and meaningful measures and knowledge of benchmarks for response in order to monitor a child's progress.
The first issue is how to determine slow or no response. The second is what to change in the treatment to boost the child's response.

Defining Response to Intervention Measures
Evaluation of children's outcomes not only at the completion of intervention but also over the course of intervention is critical to better understand how a child is responding to a given treatment. Ongoing collection of children's data allows for repeated checks throughout treatment, not only to monitor progress but also to adapt the intervention as necessary for those who are making limited progress. However, it is essential that measures used to predict and monitor progress are specific to the intervention being tested, rather than general predictors of global improvement (Yoder & Compton, 2004). Currently available research offers little direction for clinicians regarding measures for progress monitoring in daily practice given a specific intervention approach. As a result, clinicians often have limited guidance in making data-based decisions about individualization and adaptation of evidence-based interventions. Due to a lack of objective, reliable markers of treatment response, most interventionists rely on their own expert clinical judgment and/or the consensus judgment of those around them to determine when treatment should be augmented (Steidtmann et al., 2013). However, relying on expert clinical judgment can hamper dissemination and replication, especially for less-expert clinicians and less-supported contexts, and when clinical decision rules and evaluation of children's outcomes vary across children.
Most research measurements in ASD are not designed for making quick decisions regarding response, as they involve tedious video coding or lengthy assessments, making rapid evaluation unlikely. In addition, intervention research commonly reports measurements such as effect sizes that are often based on one measure and may be of little help to clinicians who need to identify the treatment plan likely to be most effective for a particular participant who has several characteristics that may influence his or her treatment response. One method used in research studies that can also be used by clinicians to provide a rapid assessment of response is the Clinical Global Impression (CGI; Guy, 1976). The CGI is a global impression of the child's progress in a targeted outcome during intervention sessions. The emphasis is on the global impression: the child's behavior across the majority of the intervention session, rather than at one specific moment. The rating is not driven by data or assessments but is a clinical rating based on the clinician's expertise and experience with the targeted outcome. The CGI can be easily adapted to include any targeted child outcome, such as social communication, and multiple CGIs can be readily collected for multiple outcomes during one intervention session. Using the CGI, clinicians apply a 7-point rating scale indicating the severity of the child's social communication challenges at each time point and a 7-point rating of improvement from entry to the next assessment time point, where high scores indicate greater severity or declining progress. CGIs are a practical, quantifiable measure used reliably in day-to-day clinical practice to facilitate treatment augmentations (Busner & Targum, 2007) and have successfully been used to measure child severity and improvement in other studies of ASD (e.g., assessing social communication of minimally verbal children with ASD; Chez et al., 2002; Fankhauser, Karumanchi, German, Yates, & Karumanchi, 1992).
Whether using a CGI or another method, the key issue is that measuring response must be fairly easy, fast, and reliable, so that changes, if needed, can be made rapidly but systematically in the child's treatment.

Making Data-Based Decisions
Existing statistical methods used to identify predictors of treatment (e.g., regression) are informative, yet flawed, for application in data-based decision making. Research describing predictors of response to treatment abounds (e.g., Fisher, 2017; Fletcher, McAuliffe, Lansford, Sinex, & Liss, 2017; Justice, Jiang, Logan, & Schmitt, 2017), yet none of these studies can provide meaningful information regarding adequate and inadequate responses to treatment because of the limitations of standard regression-based methodology. Although traditional regression-based statistical approaches can provide information about the relative influence of one predictor or another (e.g., cognitive ability, age), these methods cannot provide informative cutoffs regarding child progress. Data-based cutoffs would be helpful, however, as a score above the cutoff would indicate, based on prior research, that continuation of the intervention would be beneficial. In contrast, a score below the cutoff would indicate that a change in intervention may be more beneficial for this child.
An alternative statistical method to consider is the Classification and Regression Tree (CART; Breiman, Friedman, Olshen, & Stone, 1984), a type of data mining method that allows a researcher to identify informative cutoffs that can serve as indicators of treatment response. CART is a machine learning method that repeatedly splits a sample into more similar groups within each variable of interest (e.g., age, cognitive level, gender), allowing researchers to identify potential cutoffs for each variable. Hence, CART generates a final set of predictor variables and potential cutoff values within those variables that can inform decision making about treatment responder status. A researcher and practitioner can then look more closely at each defined cutoff for a specific predictor generated by CART and create subgroups of children to make data-based decisions about the individualization and adaptation of evidence-based social communication interventions. This statistical method can objectively help define patterns of relationships that may be meaningful in clustering children based on their responses to an intervention. In the context of the JASPER (Joint Attention, Symbolic Play, Engagement, and Regulation) intervention, children who received the intervention may have their targeted outcome measures (i.e., social communication, cognitive, and engagement) collected at baseline, midpoint, and exit. CART can be used to define potential cutoff values in baseline cognitive measures or in change in engagement from baseline to midpoint to create subgroups of children with similar social communication profiles at exit. These cutoffs can then help inform clinicians about the progress of their children's social communication prior to treatment exit. Another advantage of using CART is its ability to rank the importance of the predictors/variables in relation to the targeted outcome.
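To make the idea concrete, the split-finding step at the heart of CART can be sketched in a few lines of Python. This is a simplified, single-predictor illustration with hypothetical data, not the procedure used in the studies discussed: it searches for the cutoff that makes the two resulting outcome groups most internally similar (lowest summed within-group variance).

```python
# Minimal sketch of CART-style cutoff finding (hypothetical data and
# function names): for one predictor, choose the split that minimizes
# the summed within-group variance of the outcome.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_cutoff(predictor, outcome):
    """Return (cutoff, impurity) for the split 'predictor <= cutoff'
    that makes the two outcome groups most internally similar."""
    pairs = sorted(zip(predictor, outcome))
    best = (None, float("inf"))
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        impurity = variance(left) + variance(right)
        if impurity < best[1]:
            # Report the midpoint between adjacent predictor values.
            cutoff = (pairs[i - 1][0] + pairs[i][0]) / 2
            best = (cutoff, impurity)
    return best

# Hypothetical example: baseline cognitive age equivalents (months)
# and exit social communication scores for eight children.
cognitive = [18, 20, 22, 24, 30, 32, 34, 36]
exit_scores = [3, 4, 3, 4, 9, 10, 9, 11]
cutoff, _ = best_cutoff(cognitive, exit_scores)
print(cutoff)  # 27.0: children above and below ~27 months form distinct subgroups
```

A full CART implementation applies this search recursively across all candidate predictors, which is how it produces both the tree of cutoffs and the ranking of variable importance described above.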
Using these data, clinicians can begin to select the variables associated with treatment response and develop and evaluate adaptive pathways of treatment and operationalize sequential clinical decisions.
As an example, Shih, Patterson, and Kasari (2016) examined the results of a peer engagement intervention during recess at school for children with ASD. They aimed to understand whether response to treatment could be determined early (at midpoint) in a way that would predict the final outcome at the end of treatment. If so, then a change at midpoint for children who were responding slowly might help them achieve a better outcome at the end of treatment. They employed CART to examine potential peer engagement (at preintervention and midintervention) cutoffs to determine treatment response at postintervention. The tree resulting from CART identified four distinct subgroups. From this study, a benchmark cutoff at midpoint was determined as indicating good outcomes at the end of treatment. This benchmark (14% improvement over baseline) was associated with positive outcomes at the end of treatment and could be used to inform the construction of a future adaptive intervention: children who respond slowly (i.e., show less than 14% improvement) might need a different or augmented intervention at midpoint to improve response to intervention by exit. These types of methods can thus help in determining a benchmark for improvement within a window of time if this information is not already known.
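As a hedged illustration (the function and variable names below are ours, not from the study), the 14% midpoint benchmark translates into a simple decision rule:

```python
# Illustrative decision rule based on the 14% midpoint benchmark from
# Shih, Patterson, and Kasari (2016); names are hypothetical.

def midpoint_status(baseline, midpoint, benchmark=0.14):
    """Classify response using percent improvement over baseline."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    improvement = (midpoint - baseline) / baseline
    return "responder" if improvement >= benchmark else "slow responder"

print(midpoint_status(baseline=50, midpoint=60))  # 20% gain -> responder
print(midpoint_status(baseline=50, midpoint=53))  # 6% gain -> slow responder
```

A rule this simple can be applied session-to-session by a clinician, which is precisely the kind of fast, systematic check the preceding sections call for.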

Evaluating Data-Based Decisions Using Adaptive Research Designs
So far, we have described many of the necessary components that clinicians need in order to personalize intervention for a child. Clinicians already have many of these tools, including a number of intervention approaches that provide targeted intervention for specific skills. Others, such as meaningful outcome measures that can assess change in each of these targeted interventions and established cut points on specific variables that indicate adequate response to treatment, can be refined or identified. The question that remains unanswered is how to combine these interventions, in what sequence, with what dose and duration, in an optimal order to meet the specific needs of a child. Fortunately, existing research designs, such as the Sequential Multiple Assignment Randomized Trial (SMART), are able to address many of these questions.

Adaptive Treatment Designs
Adaptive interventions are unique in that they involve assigning children to different sequences of treatments at specific time points based on the child's evolving status (i.e., response vs. slow response). Different adaptive interventions may be built within a SMART to evaluate their relative effectiveness against one another. Although a standard randomized clinical trial (RCT) is a well-controlled and systematic approach for comparing interventions head-to-head, these methods do not permit investigators to open the "black box" to understand which, how, or why the interventions that make up an effective adaptive intervention work with or against each other. Moreover, a standard RCT would not allow for the investigation of baseline moderators (e.g., number of social communicative utterances [SCUs] at baseline) and time-varying moderators (e.g., change in joint attention skills from baseline) of early response.
As an alternative to RCTs, some researchers recommend single-case designs (SCDs), where each participant acts as his or her own control, because these designs can provide efficient preliminary efficacy testing of an intervention component (Smith et al., 2007). The typical designs (i.e., reversal, multiple baseline), however, cannot determine the time or context in which a certain intervention option is most efficacious. SCDs do not clearly articulate decision points for delivery of a component or examine moderators of observed effects (Klasnja et al., 2015).
In general, SMART designs often consist of two or more phases: participants are randomized to an initial treatment during the first phase, followed after a time by an assessment of treatment response (e.g., a CGI improvement rating based on children's change in the targeted outcome; in the JASPER model, social communication from baseline to early response). Depending on the response, a participant's treatment plan may be augmented, or the participant may be randomized to the next phase's treatment options. The adaptive interventions embedded in a SMART are designed before the study begins. Thus, there are several critical decisions that researchers must make in building an adaptive intervention that can meet the needs of the population the researcher hopes to study, namely, (a) intervention sequence, (b) intervention dose, and (c) decision rules regarding when to change the treatment approach. Designing the SMART can be one of the most difficult steps in designing the research study, given the flexibility of the design and the many available options for manipulating critical variables, including intervention approach, dose, and intensity. It is critical that researchers understand there is no one correct answer when designing a SMART. Decisions regarding intervention sequence, dose, and decision rules are guided centrally by the research questions of the investigator and are outlined below.
Intervention sequence. One of the first questions that a researcher must address is the desired sequence of intervention. When designing an adaptive intervention, the researcher considers in what order the interventions should be delivered and in what combination. These decisions should be guided by the research questions that are posed. For example, a study using a SMART design by Kasari, Kaiser, et al. (2014) aimed to study two potential components for greater personalization of intervention, namely, dose of treatment (i.e., hours per week) and augmentation of a naturalistic developmental behavioral intervention (NDBI; i.e., JASPER plus a language intervention known as enhanced milieu teaching that prompts spoken language) with a speech-generating device (SGD; a tablet with speech-generating software). Because relatively limited information exists regarding the effectiveness of SGDs as a method of service delivery/augmentation, the study aimed to determine the role of SGDs: whether the device was most beneficial when given at the start of intervention or only later, when slow response was detected. The question of how best to augment intervention for slow responders also included dose: whether increasing intervention to three times per week rather than two resulted in better outcomes. In order to answer these questions, the intervention design was split into two phases. In the first phase, minimally verbal children with ASD, ages 5 to 8 years, were randomized to receive the NDBI with or without the SGD. After 3 months (24 sessions), researchers identified slow responders as those not increasing on the outcome measure by at least 25%. Slow responders who had received the intervention without the SGD were then randomized to receive increased intensity of intervention (three times per week) or to augment with an SGD. Slow responders who started in the intervention with the SGD received increased intensity of the same intervention.
Therefore, the sequence of the initial phase of intervention could help us understand whether starting with the behavioral intervention alone or with the SGD-augmented intervention provides greater overall response. The second question allowed us to understand how best to augment the intervention for children who responded slowly to the intervention alone, the condition that began with fewer intervention components. This is one specific example to illustrate the potential flexibility allowed by the SMART design. A researcher may instead be interested in understanding questions of dose. For example, a researcher may be interested in dissecting the question of "Is more better?" for minimally verbal children with ASD. One may choose to start by randomizing children to intervention once per week versus three times per week. In the second phase, for nonresponders, the dose may be increased. Or the researcher may be interested in how long the intervention should be implemented before measuring response: should we wait 3 months, 6 months, or longer?
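The two-phase structure of the Kasari, Kaiser, et al. (2014) design described above can be sketched schematically. This is an illustrative skeleton under our own naming assumptions, not trial code; the `slow_responder` callback stands in for the 25%-improvement assessment performed after 24 sessions:

```python
# Schematic of the two-phase SMART described above (illustrative only).
# Phase 1 randomizes children to the NDBI with or without an SGD; after
# 24 sessions, response is assessed. Slow responders who began without
# the SGD are re-randomized to increased intensity or SGD augmentation;
# slow responders who began with the SGD receive increased intensity.
import random

def assign_smart(child_id, slow_responder, rng):
    """Return the (phase 1, phase 2) treatments for one child.

    `slow_responder(child_id, treatment)` is a stand-in for the
    early-response assessment (e.g., <25% improvement over baseline).
    """
    first = rng.choice(["NDBI", "NDBI+SGD"])
    if not slow_responder(child_id, first):
        second = first                       # responders continue as-is
    elif first == "NDBI":
        second = rng.choice(["NDBI x3/week", "NDBI+SGD"])
    else:
        second = "NDBI+SGD x3/week"          # SGD starters get more intensity
    return first, second

first, second = assign_smart(1, lambda c, t: True, random.Random(0))
print(first, "->", second)
```

Writing the design out this way makes the embedded adaptive interventions explicit: each path through the function corresponds to one decision rule that the trial evaluates.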
Tailoring variables. Unfortunately, there is often insufficient empirical evidence or theoretical background to confidently select appropriate tailoring variables (i.e., decision rules), such as how long initial treatment should last before clinicians augment the child's treatment plan. This is where clinicians must rely on previous research, in addition to clinical judgment, to make informed hypotheses about decision rules and outcomes. CART methods, as described above, can help researchers to objectively select tailoring variables and cutoffs for these variables based on prior research. Kasari, Kaiser, et al. (2014) were faced with this exact dilemma. Because researchers were, at the time, unclear about cutoffs for early response and how best to assess early response in minimally verbal children with ASD, several objective assessments were used when determining tailoring variables during study design. Ultimately, the tailoring variable used to determine treatment response at the end of Phase 1 was based on a battery of 14 measures derived from two sources, a natural language sample and therapist-child sessions, to capture both structured and unstructured language use. This combination of different contexts was selected to ensure that the child was able to generalize his or her new skills across contexts. An increase of 25% over baseline was chosen to indicate adequate progress across 24 sessions, the first phase of the SMART design. This is just one example of how SMART designs may be implemented; however, using time-consuming assessments to determine initial response is not ideal (as discussed above). One would likely want to use a quicker measure as part of the intervention in order to make timely changes in the treatment plan (such as the CGI). In the aforementioned study, several measures were selected because the literature offered little guidance as to what would constitute a meaningful indicator of slow response.
Defining outcome variables. Careful consideration must also be given to decisions regarding the proximal and distal measures that would be used to determine treatment success. When deciding on outcome or end point measures, a researcher must both evaluate prior literature and use clinical judgment to answer the question: "Does this assessment appropriately measure the outcome that we are trying to move with this intervention?" In the study by Kasari, Kaiser, et al. (2014; see also Almirall et al., 2016), the number of SCUs was used as the primary outcome. The SCUs were the total number of spontaneous communicative utterances, including spoken language and communicative gestures. This outcome was chosen because it represented the goal of the intervention: to increase opportunities for children to learn communication that included gestures to request and share with others and to increase spoken language, especially language used to comment. The outcome was also based on child initiations, rather than only responses, on which many interventions have relied.
The authors found that the adaptive interventions leading to the greatest number of SCUs at postintervention were those that began with the intervention plus the SGD and increased the dose of the intervention and SGD for children who were slow responders. Thus, adding the SGD later to the intervention for slow responders was not as successful as starting with the SGD from the beginning and increasing dose for slow responders.
One aspect of the intervention reported by Kasari, Kaiser, et al. (2014) is that, during the second phase, parents of children in both initial arms of the study received parent training as well. As noted above, the focus was on slow responders. However, the researchers could also have tested interventions for children who were fast responders. For example, adding parent training for fast responders may further boost their response to the intervention while ensuring greater potential for generalization across contexts (clinic and home).
SMART designs are utilized in intervention research for building empirically supported personalized intervention by evaluating the targeted outcomes attributed to each clinical decision rule (Lei, Nahum-Shani, Lynch, Oslin, & Murphy, 2012; Nahum-Shani et al., 2012). Because, as clinicians and researchers, we often do not have firm clinical guidelines to determine treatment response and optimal sequencing of intervention, SMART designs are helpful in establishing an evidence base for these decisions and offer considerable flexibility to researchers. Yet it is imperative that researchers think critically about their research question, how best to address questions of tailoring variables, outcome measures, and intervention sequence, and whether a SMART is necessary.
In addition, "powering" for SMART designs is dependent on the primary research question. A common misconception is that SMARTs require larger sample sizes compared with traditional randomized controlled trials because a large number of participants must end up in each of the final subgroups of a SMART design. This belief stems from a misunderstanding of how the data are commonly analyzed. For example, it is common to think that the data arising from a SMART are analyzed as a comparison of all the subgroups, and such an analysis may, indeed, need large sample sizes. However, power calculation for any design, including SMART designs, is dependent on the primary analysis (i.e., primary hypothesis), which may not be the same as the comparison of all the subgroups. For example, in Kasari, Kaiser, et al. (2014), the SMART design was powered based on the primary research question: whether children who received JASPER augmented with an SGD produced more SCUs compared with children who received the JASPER intervention only. This research question is equivalent to a two-arm randomized controlled trial's primary research question. Consequently, the power/sample size calculation for this SMART design was equivalent to the power/sample size calculation for a two-arm randomized controlled trial. However, if the primary research question were tailored to the slow responders, then the power calculation would need to incorporate an estimate of the responder rate. For example, if the power/sample size calculation indicates that a total of 60 slow responders is needed to detect a meaningful difference, and previous studies suggest an early responder rate of 40%, then the study would require a total sample of 100 participants in order to have at least 60 slow responders.
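The enrollment arithmetic in the responder-rate example can be made explicit. This is a back-of-the-envelope calculation of total enrollment, not a full power analysis; the function name is ours:

```python
# If the primary analysis targets slow responders, total enrollment must
# be inflated by the expected slow-responder rate. With a 40% early
# responder rate, 60% of children are slow responders, so reaching 60
# slow responders requires 60 / 0.6 = 100 children enrolled.
import math

def total_enrollment(n_slow_needed, early_responder_rate):
    """Total N needed so that the expected number of slow responders
    reaches n_slow_needed, given the early responder rate."""
    slow_rate = 1.0 - early_responder_rate
    return math.ceil(n_slow_needed / slow_rate)

print(total_enrollment(60, 0.40))  # 100
```

In practice, a researcher would also pad this figure for attrition and for uncertainty in the responder-rate estimate, since both shrink the realized number of slow responders.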

Summary
Heterogeneity in response to current interventions is well documented for children with ASD. A single intervention will not be effective for all children, and in fact, eclectic interventions are generally indicated. Eclectic, however, does not mean random. Interventions must be informed by solid research evidence in addition to clinical experience and judgment. What we currently know is that a focus on social communication is important for later cognitive and language outcomes, but other elements of interventions vary depending on child characteristics. Research can help clinicians systematize response to interventions by establishing benchmarks. Having systematic guidelines for when to augment or minimize the treatment protocol based on child response allows the clinician to replicate effective practices across multiple children and moves us closer to effective, systematic personalization of interventions.