Copyright by Shea McNeill Pilgrim 2010

The Dissertation Committee for Shea McNeill Pilgrim certifies that this is the approved version of the following dissertation:

Child and Parent Experiences of Neuropsychological Assessment as a Function of Child-Centered Feedback

Committee:
Deborah Tharinger, Co-Supervisor
Melissa Bunner, Co-Supervisor
Timothy Keith
Stephanie Cawthon
Alissa Sherry
Nancy Nussbaum

Child and Parent Experiences of Neuropsychological Assessment as a Function of Child-Centered Feedback

by Shea McNeill Pilgrim, B.A.

Dissertation Presented to the Faculty of the Graduate School of The University of Texas at Austin in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

The University of Texas at Austin
August 2010

DEDICATION

I dedicate this document to my three amazing daughters. You are my inspiration, my strength, my humility. You make me want to reach for the stars; then you make sure that I always keep life in perspective. My greatest wish for each of you is that you will someday find your own passion… and follow it.

ACKNOWLEDGEMENTS

There are a number of people I wish to acknowledge for their help in conceptualizing and completing this project. First and foremost, Lauren Gentry – from listening to my rantings, proofing my documents, writing fables, and helping to clarify my ideas, the list goes on and on – your support has been unparalleled. Kiara Alverez was also very helpful with writing fables. I wish to acknowledge Deborah Tharinger for her eternal patience, understanding, insight, and guidance; this project would never have come to fruition without her. Melissa Bunner was excited about and supportive of a crazy idea from someone she’d never met; she and Nancy Nussbaum were instrumental in facilitating this project through the clinic. I also wish to thank Tim Keith, Stephanie Cawthon, and Alissa Sherry for their helpful edits and suggestions. Finally, to my family, who has put up with a lot when it comes to me and college: you’ve always been there for me with patience, encouragement, meals, and childcare. I promise you, once and for all, that I’m finally done with school.

Child and Parent Experiences of Neuropsychological Assessment as a Function of Child-Centered Feedback

Shea McNeill Pilgrim, Ph.D.
The University of Texas at Austin, 2010

Supervisors: Deborah Tharinger and Melissa Bunner

Research has paid little attention to clients’ experience of the psychological assessment process, particularly in regard to the experiences of children and their parents. Advocates of collaborative assessment have long espoused the therapeutic benefits of providing feedback that can help clients better understand themselves and improve their lives (Finn & Tonsager, 1992; Fischer, 1970, 1985/1994). Finn, Tharinger, and colleagues (2007; 2009) have extended a semi-structured form of collaborative assessment, Therapeutic Assessment (TA), to work with children. One important aspect of their method, drawn from Fischer’s (1985/1994) example, is the creation of individualized fables that incorporate assessment findings into a child-friendly format. The fables are then shared with the child and parents as assessment feedback.
This study evaluated whether receiving this type of individualized, developmentally appropriate feedback would affect how children and their parents report experiencing the assessment process. The assessment process, with the exception of child feedback, was standard for the setting. Participants were 32 children who underwent a neuropsychological evaluation at a private outpatient clinic, along with their parents. Multivariate and univariate statistics were used to test differences between two groups: an experimental group that received individualized fables as child-focused feedback and a control group. Children in the experimental group reported a greater sense of learning about themselves, a more positive relationship with their assessor, a greater sense of collaboration with the assessment process, and a sense that their parents learned more about them because of the assessment than did children in the control group. Parents in the experimental group reported a more positive relationship between their child and the assessor, a greater sense of collaboration with the assessment process, and higher satisfaction with clinic services compared to the control group. Limitations to the study, implications for assessment practice with children, and future directions for research are discussed.

Table of Contents

Chapter I: Introduction
Chapter II: Review of the Literature
    Overview of Assessment
        Overview of Neuropsychological Assessment
        Traditional Assessment Models
        Traditional Assessment with Children
    Breaking Tradition
        Development of a Collaborative Approach to Assessment
        Collaborative/Therapeutic Assessment
        Underpinnings of Collaborative/Therapeutic Assessment
        Benefits of Collaborative/Therapeutic Assessment
    Feedback
        Assessment Feedback in an Historical Context
        Resistance to Providing Feedback
        The Importance of Feedback
        Isolating Feedback
    Feedback With Children
        Children Referred for Neuropsychological Assessment
        An Underutilized Opportunity
        Storytelling and Children
        Individualized Fables as a Mechanism for Growth
        Creating the Individualized Fable
    Contribution of this Study
    Research Hypotheses and Rationales
Chapter III: Method
    Participants
    Measures
        The Parent Experience of Assessment Survey (PEAS)
        The Child Experience of Assessment Survey (CEAS)
        Parents’ Positive and Negative Emotions (PPNE-C)
        Children’s Positive and Negative Emotions (CPNE-S)
        The Client Satisfaction Questionnaire (CSQ-8)
    Procedure
        Approval by the Human Subjects Committee
        Recruitment and Protection of Participants
        Assessment
        Parent Feedback Meeting
        Scheduling of Child Feedback Session
        Creation of Individualized Fables
        Experimental Versus Control Group
        Assignment to Treatment Group
        Child Feedback Session
        Treatment Integrity
    Data Collection
        Pre-assessment measures
        Post-assessment measures
Chapter IV: Results
    Overview
    Descriptive Statistics
    Preliminary Analyses
        A Priori Analysis of Power
        Missing Data, Outliers, and Assumptions
        Equality of Groups
        Correlations of Dependent Variables
    Main Analyses
        Hypothesis 1
        Hypothesis 2
        Hypothesis 3
        Hypothesis 4
        Hypothesis 5
        Hypothesis 6
        Hypothesis 7
Chapter V: Discussion
    Method
    Overview of Results
    Implications
    Limitations
    Future Directions
Appendices
    Appendix A: Parent Experience of Assessment Survey (PEAS)
    Appendix B: Child Experience of Assessment Survey (CEAS)
    Appendix C: PPNE – C
    Appendix D: CPNE – S
    Appendix E: Client Satisfaction Questionnaire (CSQ-8)
    Appendix F: Parent Consent Form
    Appendix G: Child Assent Form
    Appendix H: Creation of Child Fables
    Appendix I: Checklist for Feedback Sessions
    Appendix J: Checklist Compilations of Results
    Appendix K: Sample Fables
    Appendix L: Results by Item – CEAS
    Appendix M: Results by Item – PEAS
References
Vita

CHAPTER I

Introduction

Man first of all exists, encounters himself, surges up in the world – and defines himself afterwards.
Jean-Paul Sartre, 1946

In the human quest to achieve self-understanding, psychological assessment can be of great personal value. The decision to pursue a psychological assessment or evaluation may be based upon a myriad of circumstances.
For instance, a change in mood, a traumatic experience, difficulties at school or work, or a desire to clarify strengths and weaknesses for a career change are a sampling of reasons one might seek out an evaluation. Psychological assessment, however, has historically been regarded as the work of an “objective observer” examining a “passive organism” (Berg, 1985). It is unlikely that human experience and performance could ever be adequately reduced to that of a passive organism, as all human interaction invariably results in more than can be quantified by objective data. Further, if the data gleaned from a psychological evaluation are collected, analyzed, and interpreted solely by an objective assessor, then only one perspective of this complex human interaction is being captured. The unique relationship of assessor and client is rife with such complexity. If the client is a child, yet another layer of interaction must be considered – that which occurs between assessor and parent. The assessment experiences of these clients, both children and parents, are ripe for exploration.

Regardless of one’s developmental stage or cognitive ability, humans naturally interact with, internalize, respond to, and change because of stimuli from their physical and interpersonal environment. We “shape, as well as are shaped by” our world (Fischer, 1985, p. 46). The environment of a neuropsychological evaluation is no exception. While the instruments and procedures used are objective and standardized, there still exist two human beings involved in a unique interaction. Little attention has been paid to the consumer’s experience of neuropsychological assessment (Bennett-Levy et al., 1994); however, preliminary research suggests it to be a challenging and frustrating experience for adults (Westervelt et al., 2007). There has been even less research attention paid to what children and parents may experience over the course of a child’s assessment. The purpose of the present study is to provide an empirical examination of these experiences; specifically, this investigation explores children’s affective reactions to the assessment experience, impressions of rapport with their assessor, feelings of having learned new things about themselves, and affective impressions of their own challenges and future. Additionally, the study is designed to address these same experiences from the point of view of the children’s parents.

Research in the field of personality assessment is accumulating support for the therapeutic benefits of a collaborative approach to assessment (Finn, 2007; Finn & Tonsager, 1997; Tharinger, Finn, Gentry, et al., in press; Tharinger, Finn, Wilkinson, & Schaber, 2007). One tenet of this collaborative/therapeutic approach is that thoughtful, individualized feedback can help adult and child clients better understand themselves and improve their lives (Finn, 2007; Fischer, 1985). Giving neuropsychology clients feedback based upon more collaborative and humanistic principles can be an intervention in its own right (Gass & Brown, 1992), and may increase consumer satisfaction with the assessment process. In the case of child assessment, consumer satisfaction implicates not only the child’s, but also the parents’ experience and satisfaction with services rendered.
Additionally, collaborative feedback with children and their parents may pave the way for a more personal investment in the follow-through of realistic interventions and recommendations, a critical step toward improving quality-of-life for neuropsychological patients. Finn’s (2007) model, Therapeutic Assessment (TA), is a semi-structured, six-step approach characterized by engaging the client with transparency, collaboration, respect, and assessment feedback personalized to the client’s life. Therapeutic Assessment has been successfully modified for evaluating children in a collaborative and humanistic manner, the result of which is essentially a family systems intervention (Tharinger et al., 2007). Case studies evaluating TA with children have resulted in benefits for the child, including reduced symptomatology, greater understanding for self and parents, good rapport with the assessor, and increased feelings of being appreciated and trusted. Positive effects have also been found regarding parents’ experience and satisfaction with Therapeutic Assessment. Parents have reported increased empathic understanding of their child, increased positive and decreased negative affect regarding their child’s challenges and future, and decreased resistance to exploring the systemic influences on the child’s difficulties (Tharinger et al., 2007; Tharinger, Finn, Gentry, et al., in press). One important aspect of the TA method with children, drawn from Fischer’s (1985) work, is 4 the creation of individualized therapeutic fables that are shared with children and their parents as a developmentally accessible form of assessment feedback. This promising collaborative and therapeutic technique has yet to be studied in isolation, an important step for establishing the empirical efficacy of any psychological treatment. The unique setting of a child neuropsychological clinic provides an opportunity to study this collaborative technique in relative isolation. With this in mind, the purpose of the proposed research study is to determine whether assessment results translated into individualized therapeutic fables have an influence on children’s and parents’ experiences of the child’s neuropsychological assessment. Specifically, child experiences of assessment were evaluated by perceptions of: 1) new information learned about their difficulties; 2) the quality of their relationship with the assessor; 3) positive and negative affective reactions to the assessment; and 4) positive and negative feelings about their challenges and future. Parent experiences of their child’s assessment were evaluated by perceptions of: 1) new information learned about their child’s difficulties; 2) the quality of their child’s relationship with the assessor; 3) positive and negative affective reactions to their child’s assessment; 4) positive and negative feelings about their child’s challenges and future; and 5) satisfaction with the assessment services rendered. Interpretations of the treatment effect were based on the differential perceptions of children and their parents after receiving individualized assessment feedback in the form of a therapeutic fable, compared with children and their parents in a control group. There are two major goals of this study. The first is to ascertain whether the provision of child assessment feedback at an 5 individualized, developmentally appropriate level positively impacts the child. 
The second major goal is to assess whether this feedback can positively impact parental perceptions as well, with the hope that parents will develop a deeper, and perhaps more empathic, understanding of their child’s neuropsychological difficulties. A secondary goal is to determine if the provision of child feedback in this format affects parents’ ratings of satisfaction with their child’s neuropsychological assessment.

CHAPTER II

Review of the Literature

The proposed research study is an examination of the efficacy of a single and specific component of neuropsychological assessment of the child, that of the communication of results, commonly referred to as feedback. Feedback, as will be outlined in the following sections, has not been as extensively studied as other components of psychological testing or assessment. As such, the first aim of this review is to provide a glimpse into the historical development of psychological assessment, which encompasses neuropsychological assessment, and how these origins have influenced the delivery of assessment feedback. Next, a contrast is drawn between traditional psychological assessment and collaborative/therapeutic assessment, how the two methods approach the role of feedback, and how they each address the unique practice of assessing children. Various documented benefits experienced by clients receiving feedback after both traditional and collaborative/therapeutic assessments are provided, as well as possible theoretical rationales that may be implicated in these benefits. Finally, a collaborative approach to providing assessment results to children through individualized fables is described.

Overview of Assessment

Psychological testing as it exists today is a “contemporary representation of a process that is as long as the history of humankind – the effort to identify the nature of individual differences and to account for both the similarities and uniqueness of each human’s experience” (Beutler & Rosner, 1995, p. 3). By objectively analyzing written and verbal samples of thought and behavior in standardized conditions, psychological testing seeks to draw conclusions and make generalizations about an individual’s traits. Testing is an invaluable tool for the clinical practitioner as it provides diagnostic information such as performance profiles, standard scores, and age and grade equivalents (Bracken, 2006). Psychological assessment, on the other hand, subsumes psychological testing. Assessment ideally includes the incorporation of behavioral data drawn from outside of the standardized testing situation, such as observations in home or school settings and accounts from significant others. By incorporating standardized test results into information gathered about the client’s development, schooling, and life functioning, psychological assessment provides a broader picture of the whole person.

The typical progression of a psychological assessment encompasses four distinct processes: identification and clarification of the problem to be addressed; selection and implementation of methods for extracting the information needed; integration of sources of information collected from within and outside of testing; and the reporting of opinions and recommendations (Beutler & Rosner, 1995). While all four of these processes are essential to effective psychological assessment, the literature review will concentrate on the fourth stage.
The interpretation and reporting of opinions and recommendations, or assessment feedback, is the focus of the proposed research study. Overview of Neuropsychological Assessment Clinical neuropsychology is an applied science that emerged from two distinct but related traditions: the psychological laboratory and the practice of psychometrics. True to these influences, neuropsychological assessments strive to isolate a particular ability or 8 skill from other abilities or skills (Westervelt, Brown, Tremont, Javorsky, & Stern, 2007) in a laboratory-type setting. The major goal of a neuropsychological assessment is to categorize how the subject behaviorally manifests the symptoms of brain dysfunction in order to identify a diagnosis (Lezak, Howieson, & Loring, 2004). Neurological dysfunctions generally cause cognitive and other neuropsychiatric impairments such as learning disabilities, attention-deficit disorders, dementia, or autistic spectrum disorders. The expressions of these impairments can be measured with a neuropsychological evaluation, using various analytic instruments and test batteries to categorize strengths and weaknesses as an aid in diagnosis, management, and long-term care planning for patients. Neuropsychological assessments have been described as being advantageous to use for treatment planning because of their objectivity, portability, and relevance to the functionality of the brain (Therapeutics and Technology Assessment Subcommittee of the American Academy of Neurology, 1996). Another important feature of neuropsychological assessment is the attempt to solicit optimal performance from the client and to be able to compare that performance with applicable norms. To this end, an attempt is made to keep a “sterile” environment – distractions are kept to a minimum, tasks are initiated and concluded by the evaluator, and the patient contributes minimally to the structure and processes of the testing situation (Crosson, 2000). In spite of, and perhaps because of, this necessary objective to assess patients at their best and within the most standardized conditions possible, the process of undergoing a neuropsychological evaluation is naturally anxiety-provoking. The patient enters an unknown situation at a clear disadvantage – they do not know what to expect, they are 9 generally not given information about what each test is measuring, and they are not given information during testing about whether or not they are responding correctly. Neuropsychological testing has been found to be time-consuming, challenging, and frustrating for patients (Westervelt et al., 2007). Moreover, the results are often misunderstood (Donofrio, Piatt, Whelihan, & DiCarlo, 1999). These are factors that could potentially impact patient compliance as well as patient perception of the value of a neuropsychological evaluation (Westervelt et al.). As such, a major challenge that the field of clinical neuropsychology is currently facing is to develop a process for competently assessing brain dysfunction that is also responsive to the needs of these clients (Brenner, 2003; Gorske, 2007). It has been suggested that, in the wake of such sophisticated computer imaging techniques of the brain, the survival of neuropsychology as a discipline may actually depend on the development of methods that meet clients’ needs as opposed to merely diagnosing their cognitive deficits (Gorske, 2007). 
As human beings have a natural desire to learn about themselves, discussing assessment results with clients is a way to meet clients’ needs. Indeed, providing results in a feedback session after an assessment is one process that has been shown to positively impact patient perceptions of the utility of neuropsychological assessment (Westervelt et al., 2007). Traditional Assessment Models Finn and Tonsager (1997) labeled the assessment model historically followed by psychologists and neuropsychologists as the “information gathering” model, the goal of which is to categorically describe and diagnose patients in order to make decisions and communicate about them between professionals. Also called the “medical model” 10 (Mutchnick & Handler, 2002), this traditional approach has historical ties to the mass group standardized testing and classification of World War I soldiers. At that time, nearly two million men were sorted and assigned to appropriate duties based upon the Army Alpha and Beta tests (Gallagher, 2003). The traditional assessment approach, from these military origins, is characterized by a unidirectional flow of information in which the client, often referred to as “patient” or “subject” (Mutchnick & Handler), has little input in the processes of the evaluation. Moreover, depending on the particular evaluation circumstances, the client may or may not understand the reasons for assessment and may or may not be given the results. In this model, the necessary standardized protocols are followed, data comparisons are made with appropriate norms, and relevant diagnostic categories are revealed. The “expert” assessor evaluates and diagnoses the patient and then applies a treatment, or recommends a treatment to a referral source, in order to “repair the problem” (Mutchnick & Handler). The use of this traditional model within psychology and its fit within medicine instinctively draws a necessary comparison between the two fields and raises important questions about the nature of the role of medical doctor versus psychological assessor. Mutchnick and Handler (2002) compare the traditional model’s assessor-assessee relationship to that of the hierarchical nature of the doctor-patient relationship in medicine. Perhaps because it is easier to disagree with a personality construct than with something immutable such as blood type, Appelbaum (1970) noted that the psychological report writing, historically tailored to other professionals, is not merely a scientific and technical process as it is in a medical setting. Rather, psychological reports are “political, 11 diplomatic, strategic persuasions functioning in a complex, sociopsychological context” (p. 349). To this end, the psychological assessor must play many roles: sociologist, politician, diplomat, salesman, artist, and psychologist (Appelbaum), in order to make their case to the desired audience. Beutler (1995) extends the role of assessors even further, describing the necessary responsibility shifts of “the psychological expert” during the course of an assessment – the clinician must be a consultant, in order to establish the nature of the referral concerns, a psychological technician, whose behavior is controlled by standardization guidelines, and a measurement expert in order to accurately interpret and draw conclusions from technical data. 
With such responsibility, it is especially important for the assessor to be attuned to the nature of their relationship with the client, for “when the clinician takes on the role of assessor, there is a shift in the balance of power; clients often feel powerless in the face of psychological assessment, especially since the focus of assessment is on personal elements of the self” (Lillie, 2007, p. 153). Traditional Assessment with Children Psychological assessment is a distinctly different process when working with children than it is when working with adults. While psychological evaluation, and especially neuropsychological evaluation, can be stressful for adults (Westervelt et al., 2007), testing can be particularly anxiety-provoking for some children (Handler, 2007). In the mind of a child, the nature of the evaluation procedure is likely equated to a trip to the doctor’s office, while the structure and content of the sessions may be reminiscent of standardized testing situations in the classroom. Handler rightly questions what children referred for evaluation think when they are brought in to see a “doctor” that is going to 12 help them with their ever-present problems, only to be asked a bunch of “puzzling, irrelevant” questions. Consequently, an especially essential component when working with children is to clearly explain the process in language that they can understand (Colley, 1973). Additionally, many children are naturally curious about their responses and often ask whether their answers are right or wrong; examiners, however, are not allowed to respond to these requests on most tests. Some children may understand and accept these unnatural interactions with an examiner, while others may become confused or frustrated. “Failure experiences” are also an integral part of many psychodiagnostic tests – the examiner can only move on to the next subtest after the examinee has made a certain number of consecutive errors. Handler (2007) posits that these repeated built-in error exercises may “engender feelings of frustration or abandonment, along with a great deal of apprehension and stress” (p. 56). While this type of experience may well be handled adequately by many children, the population of clients referred for psychoeducational or neuropsychological evaluations are often under stress already, experiencing learning or emotional difficulties in their homes and schools. Handler (1998) suggests that built-in failure experiences, especially if they are repeated and not adequately processed with children, likely impede optimal performance because children become distressed and demoralized. Although qualified examiners may make “heroic efforts” to establish rapport, offer support, and encourage the child, they must nonetheless adhere to standardization with a minimum of guidance or explanation (Handler, 2007). 13 Interestingly, however, some clinicians see psychodiagnostic testing as having the opposite effect with children and consider these tests as having success exercises built into them. From this point of view, when children begin to fail a subtest, they get the reward of moving on to the next test (Lyon, 1995). Lyon further suggests that, while the nature of the assessment process and tasks involved may seem daunting, “most children enjoy the games and puzzles” (p. 2) and become fairly relaxed during the process. Regardless of the perspective taken, the reality is that each child approaches the process individually, and thus experiences it differently. 
Moreover, many other factors such as individual examiner characteristics, the nature of the referral concern, previous assessment experience, and communication about the process between parent and child will impact their experience. Consequently, Handler’s (2007) point regarding the importance of processing the experience with the child is well-taken and is critical to discovering what the child is actually experiencing. Assessors can then make efforts, if necessary, to enhance the child’s experience, improve the validity of their test scores, and thus enhance the ultimate effectiveness of the assessment. Moreover, “processing the experience” and emphasizing a relational attitude, as opposed to applying a set of techniques, are basic tenets of a collaborative/therapeutic approach to assessment with children and adults (Finn, 2007; Purves, 2002).

Breaking Tradition

Development of a Collaborative Approach to Assessment

As large group testing became less central to psychological assessment in the 1960s and 1970s, many humanistic psychologists and scholars began calling for a re-evaluation of the purposes and practices of psychological assessment (Fischer, 1985). While some clinicians, disenchanted by the medical model, rejected the practice of assessment altogether, others independently began to adjust the ways in which they interacted with adults (Fischer, 1985; Mosak & Gushurst, 1972; Richman, 1967; Schectman, 1979) and children (Colley, 1973; Fischer, 1970, 1985). During testing and feedback sessions, these clinicians incidentally began to notice that their clients seemed to benefit when more humanistic and interpersonal techniques were used. For example, Mosak and Gushurst discovered that simply the act of giving clients their assessment results signified that the clinician had “a genuine belief in the patient’s strength” and in their ability to handle undesirable information (p. 542), which then became noticeable in the client’s mood and in their engagement in treatment. Richman noted similar improvements in difficult inpatient populations, concluding that a “skillful sharing of test results with the patient is often beneficial, especially for the very disturbed, when conducted by a psychologist trained in both testing and psychotherapy” (p. 63). A collaborative approach to assessment contrasts a “static” orientation with a “process” orientation, the argument being that the language in which assessment results are presented to clients affects their perception of themselves and of the evaluation (Schectman). It is theorized, then, that there is potential for a different and more hopeful experience if test findings are presented to the client from an experiential, or internal, point of view rather than from a descriptive, or external, point of view.

With children, Colley (1973) emphasizes the necessity of describing the purpose of the assessment to children in developmentally appropriate language, and then explaining to the child what is not wrong; he or she may have internalized from adults such labels as “lazy” or “dumb.” Fischer (1985) also emphasizes the importance of addressing children. This author describes a successfully collaborative assessment of a young boy inspired by the therapeutic drawing work of Winnicott (1971).
Fischer observed the child at home in the context of his family environment, processed portions of the testing that were difficult for the child at the time that they occurred, and met him at his developmental level when giving assessment feedback in the form of a fictional story (Fischer, 1985). Fischer developed her experiential work with adult and child clients into a collaborative approach to assessment conceived of as an “understanding of a particular person’s situation as he or she lives it and of personally viable options within it” (1985, p. 47). Outlined in her ground-breaking work, Individualizing Psychological Assessment, this “human-science” model follows six guiding principles: a) primary emphasis is on life events, not test scores and theoretical constructs; b) events that occur in session are processed as they occur, and are considered as important data; c) collaboration occurs with the client; d) clients are expected to react to the testing in different ways, and these reactions are processed with the client; e) assessors consider all data as behavior in context; and f) the goal is to practice “authentically,” which means to strive for what is possible while acknowledging the necessary limits and requirements of clinical reality.

Interestingly, paralleling the field of psychology, the medical research literature also reflected a shift in the medical model of diagnosis and in the traditional attitudes of the doctor-patient relationship. Approaches such as shared decision-making (Quill & Brody, 1996) and patient-centered medicine (Laine & Davidoff, 1996) both emphasize the importance of medical providers collaborating with patients while projecting an empathic, attentive attitude in order to facilitate disclosure of concerns, increase participation in treatment planning, and improve compliance with recommendations (Pegg et al., 2005). Indeed, engaging clients in a relationship and giving them information tailored to their personality and life shows respect for their autonomy, consideration for their capabilities, and value for their input as an essential contributor to their treatment (Clair & Prendergast, 1994; Pegg et al., 2005). Thus, a more collaborative approach to client-provider interaction, influenced by a common desire to make the experience more “humane, respectful, and understandable” to consumers (Finn, 2007, p. 5), simultaneously emerged in both the medical and psychological fields.

Collaborative/Therapeutic Assessment

According to Finn (1996), the traditional approach to psychological assessment “ignores the interpersonal context of clients’ test responses and promotes a mechanistic, de-humanized approach to psychological testing” that is possibly “harmful to clients and at best benefits them only indirectly” (p. 83). Moreover, the traditional assessment approach has been described as imposing stress, anxiety, anger, and confusion on clients (Handler, 2007). In contrast with this traditional “information gathering” model of assessment, Finn (2007) describes “therapeutic assessment” as an attitude, where:

the goal of the assessor is more than collecting information that will be useful in understanding and treating the patient. In therapeutic assessment, in addition, assessors hope to make the assessment experience a positive one and to help create positive changes in patients and in those individuals who have a stake in their lives. (p. 4)
The process itself is intended to be transformative (Handler, 2007) and to leave clients with positive changes in their lives after the evaluation is complete (Tharinger et al., 2007). Collaborative/therapeutic assessment can be conceived of as an attitude about the relationship with the client, as opposed to a set of techniques (Purves, 2002), and is likely practiced by many gifted clinicians without their ever even knowing it (Finn, 2007). In addition, whereas traditional assessment’s intent is to help clients indirectly, therapeutic assessment’s intent is to help directly (Finn, 2007). This idea that psychological assessment can be a therapeutic intervention is a “major paradigm shift in how assessment is typically viewed” (Finn & Tonsager, 1992, p. 286).

Taking collaborative/therapeutic procedures even one step further, Finn (1996, 2007) developed Therapeutic Assessment (TA), a semi-structured, six-step model for engaging clients in collaborative/therapeutic psychological assessment. The six steps of a Therapeutic Assessment, according to Finn, are each important in their own right, and include: 1) the initial session, in which assessment questions are solicited directly from clients; 2) standardized testing sessions; 3) intervention sessions; 4) summary/discussion sessions; 5) individualized written feedback for clients and standardized reports for referral sources; and 6) follow-up sessions. While a general flow chart guiding the provision of TA is clearly outlined by Finn, similar to Fischer (1985), Finn acknowledges the reality of client, clinic, and clinician variables that may preclude following all six steps and encourages clinicians to adapt the model to their particular needs and circumstances.

Tharinger, Finn, and colleagues (2007) have adapted and extended the principles of Therapeutic Assessment to be used with children and their families. Promising results from case study evaluations have shown that families involved with this type of assessment experience what amounts to a brief family systems intervention. Thus, the explicit goal of TA, to leave clients positively changed, appears attainable when TA is used with children. Additionally, the goals of TA with children, to help parents develop more understanding and empathy for their child’s difficulties and to recognize the systemic potential for change, are within reach using this approach (Tharinger et al., 2007; Tharinger, Finn, Gentry, et al., in press).

Theoretical Underpinnings of Collaborative/Therapeutic Assessment

Three characteristics of personal motivation are thought to account for the therapeutic value of a collaborative/therapeutic assessment: a) self-verification; b) self-enhancement; and c) self-efficacy/self-discovery (Finn & Tonsager, 1997). Self-verification theory addresses a person’s aspiration to have his or her self-concept and reality affirmed by others, whether that self-concept is positive or negative (Swann, 1990). TA specifically attends to self-verification through a deliberate organization of feedback findings, always first revealing testing results that are consistent with, and thus reaffirming, the client’s self-views (Finn, 2007; Finn & Tonsager). In the course of a child’s assessment, collaboration with the parents is an essential step toward “circumventing the defensive aspects of the self-verification process” so that the child may be seen in a new light (Tharinger, Finn, Hersh, et al., 2008, p. 603).
Attempting feedback without attending to self-verification tendencies may result in the parents’ rejection of the assessment findings.

Self-enhancement can be conceived of as the desire to be appreciated and loved by others and to feel good about oneself (Finn & Tonsager, 1997). Evidence for this construct is likely linked to the collaborative, interpersonal nature of the process, in which assessors strive to be respectful, open, and humane with their clients (Finn, 2007). Attending to self-enhancement may result in greater rapport between client and assessor, which has been linked to therapeutic alliance. Therapeutic alliance, in turn, has been linked to positive future therapeutic outcomes (Ackerman, Hilsenroth, Baity, & Blagys, 2000).

The third variable posited by Finn and Tonsager (1997) is self-efficacy/self-discovery, which refers to the need to feel in control of one’s environment as well as to garner self-knowledge and achieve personal growth. The collaborative nature of therapeutic assessments also influences this construct by engaging the client in all phases of the process – from start to finish – including soliciting clients’ assessment questions and asking clients to provide their input as assessment results are interpreted. Finn asserts that by acknowledging and attending to these constructs during the course of an assessment, clients will become positively changed and experience more beneficial effects than they would from an assessment conducted following the “information gathering” model.

Benefits of Collaborative/Therapeutic Assessment

Qualitative case studies have described the beneficial effects of engaging these three personal motivations in adults (Finn, 1996, 2007; Fischer, 1985) and children (Fischer; Hamilton et al., in press; Mutchnick & Handler, 2002; Pollak, 1988; Purves, 2002; Tharinger et al., 2007) throughout the course of psychological assessment. For instance, therapeutic assessments have been found to provide clients with greater self-knowledge, self-understanding, self-esteem, and gains in therapy (Finn, 1996, 2007; Handler, 2007). Various benefits to future therapeutic endeavors have also been postulated, including (a) creating a therapeutic alliance that persists into the initial stages of psychotherapy, (b) setting goals for psychotherapy, and (c) refocusing stagnant treatment (Finn, 2007; Finn & Tonsager, 1997).

Still, very few studies have attempted to empirically compare therapeutic assessment approaches with those of a traditional, information-gathering assessment model. However, Ackerman and colleagues (2000) did so and found that a therapeutic approach to assessment with adult clients can serve to decrease early drop-out rates for the assessment clients in subsequent individual therapy. These authors also found that the therapeutic alliance that was developed during the assessment phase of treatment was related to alliance early in therapy (Ackerman et al.). Therapeutic alliance, a measure of the personal connection and rapport experienced between client and clinician, is enhanced through collaboration and other humanistic principles (Finn, 2007; Ackerman et al.). Hilsenroth, Peters, and Ackerman (2004) replicated the findings of Ackerman and colleagues, similarly finding that the therapeutic alliance developed during a collaborative/therapeutic assessment had a significant positive relationship to the alliance developed with the therapist throughout subsequent formal psychotherapy.
When children and parents are engaged in a collaborative/therapeutic assessment, parents have been positively impacted through cognition, interpersonal variables, systemic functions, and affect changes (Tharinger et al., 2007). Case study reports find that parents perceive: reduced symptomatology expression in their child; increased hope for the future of their child and the functioning of their family; having learned new things about, and having an increased understanding of, their child; decreased negative affect for their child, including less guilt, less shame, and more patience; and high satisfaction with the assessment process (Hamilton et al., in press; Tharinger et al., 2007; Tharinger, Finn, Gentry, et al., in press). Children who have experienced a Therapeutic Assessment have reported strong rapport with the assessor, more understanding for self and parents, and self-enhancement feelings including being appreciated and trusted (Hamilton et al., in press; Tharinger et al., 2007).

What has yet to be determined through research, however, is the unique contribution that each of the different aspects of a collaborative/therapeutic approach to assessment makes to the change process. The provision of feedback is one definitive feature of these approaches, as sharing and exploring assessment results with clients is one of the three major goals of therapeutic assessors (Finn & Tonsager, 1997), and is an integral component of the TA process with children and adults. For clinicians practicing collaborative/therapeutic assessment, clients are intimately involved in the feedback process, to the point that the phrase “providing feedback” does not capture the event. Finn (2007) uses the term “summary/discussion session” to refer to feedback, highlighting the importance of the client being involved in substantiating test findings and interpreting the results in the context of their life problems. Moreover, collaborative/therapeutic assessors are ever-cognizant that test scores are not able to define a person, thus the need for validation from the client (Finn, 2007; Fischer, 1985). Fischer (1970) argues that it is the client who is in the best position to confirm or clarify the evaluator’s impressions, and that the process of providing feedback should be a continuation of a dialogue with the client rather than a pronouncement of answers from the evaluator. Indeed, “… if clients get a chance to ‘co-edit’ and ‘revise’ their existing stories with the guidance of the assessor, they are more likely to accept and remember those stories than if the assessor tries to impose a revised understanding at the end of the assessment” (Tharinger, Finn, Hersh, et al., 2008, p. 603). A question remains, however, about whether clients can benefit from a collaborative feedback session without having undergone an assessment that adheres to the tenets of TA. Though perhaps not intended to address this particular question, feedback has been studied, in relative isolation, by several researchers.

Feedback

Assessment Feedback in an Historical Context

In the traditional information-gathering, or medical, model of assessment, focal attention is paid to collecting pertinent data, adhering to standardized protocol, and assigning accurate diagnoses (Mutchnick & Handler, 2002). Historically, then, reporting the results of the assessment to the client has been given much less consideration. Early accounts of the feedback process even cautioned against revealing much information to patients.
Klopfer, Ainsworth, Klopfer, and Holt set this stage in 1954, asserting that feedback should be superficial, focusing only on the positive aspects of findings so that the patient can leave the examination feeling “pleased and satisfied” (as cited in Berg, 1985). Berg declares that “the paucity of attention paid to the role of assessment feedback is striking” (p. 52) and is likely due to the origins of diagnostic psychological testing. In this traditional view, which originated in medicine and in measurement theory as applied to psychiatric problems, the expert pronounces the diagnosis to the patient, or more often to the referral source, after the examination is completed. Unfortunately, however, in traditional psychological practice, just as in medicine, diagnosis and treatment are regarded as separate processes; assessment feedback gets caught in the middle as it does not easily fit into either category. Moreover, since patients often do not initiate assessments themselves, examiners may feel that their primary role is to serve the referral source and only indirectly, the patient. In fact, ethical concerns regarding patients’ rights to receive their test results did not gain support until the 1970s, and inclusion in the American Psychological Association’s Ethical Principles of Psychologists did not occur until 1981 (Berg, 1985). Although it is now ethically mandated (APA, 2002), there is still a significant lack of research in this area, to such an extent that giving feedback to clients about their assessment performance may be the most neglected aspect of psychological assessment (Gass & Brown, 1992; Pope, 1992).

The current state of clinicians’ attitudes toward direct client feedback is clearly more encouraging. A shift in the views of the purpose of assessment is occurring, with many clinicians acknowledging that assessment is not necessarily only diagnostic, but can be therapeutic as well (Allen, Montgomery, Tubman, Frazier, & Escovar, 2003). Still, very little empirical research addresses assessment feedback practices. One study by Donofrio and colleagues (1990) did reveal that two-thirds of adult consumers in a neuropsychology outpatient clinic received an interpretation of their testing results. However, of those who did receive results, a large percentage had trouble understanding and remembering the information (Donofrio et al.). This is concerning considering the fact that “the importance of assessment often lies in the utility of its interpretations and recommendations” (Smith, Wiggins, & Gorske, 2007). Indeed, Gass and Brown (1992) assert that, after having undergone a neuropsychological examination, “patients… commonly remark that they were never informed of their results” (p. 272). APA now mandates giving test feedback in most cases, and recent research indicates that 71% of surveyed personality and neuropsychological assessment psychologists frequently give in-person feedback to clients; however, information about clients’ ages was not collected (Smith et al., 2007). The frequency of feedback provision directly to children is likely lower, although research has yet to address the issue.

Resistance to Providing Feedback

Pope (2002) posits that when the fundamental aspects of feedback are carefully considered, the process can “provide a context of clear communication within which the purpose of an assessment can be achieved;” when neglected, however, it tends to “limit,
There are a multitude of factors that may explain why, historically, assessment professionals may have neglected giving clients their results, even regarding feedback as a “minefield that must be approached with wary caution” (Berg, 1985, p. 55). One contributing factor may be that assessors were often not adequately trained in the task (Butcher, 1992; Pollak, 1998). Standardized assessment is straightforward, manualized, thoroughly taught in training programs, and becomes effectively mastered with practice; the feedback meeting, however, can be a delicate process that requires sensitivity and interpersonal, even therapeutic, skills (Pollak, 1988). Perhaps a dearth of graduate and professional training contributes to a lack of confidence for some clinicians in their ability to translate a psychological conception of a client’s performance into language that is accessible (Lillie, 2007; Pollak; Smith et al., 2007). Theoretically, feedback challenges clinicians to “build a bridge” between the testing instrument and its clinical utility – and clinical utility can mean anything from pathological diagnoses to “empathic overtures that can bolster self-understanding and self-acceptance” (Quirk, Strosahl, Kreilkamp, & Erdberg, 1995). However, among highly trained specialists in theoretical and psychodiagnostic constructs, the ability to meaningfully communicate these constructs to lay persons may be a skill that is taken for granted (Appelbaum, 1970). Another possible explanation for feedback resistance is a fear of upsetting or harming clients. While clinicians likely no longer actively avoid addressing difficult topics in feedback simply due to the possibility that they may not leave their testing clients feeling “pleased and satisfied” (Klopfer et al., 1954, as cited in Berg, 1985), 26 assessors may still fear upsetting or causing anxiety in their clients by disclosing negative information (Butcher, 1992; Pollak, 1988). Historically, it has been thought that medical and psychiatric patients should be protected from information that may cause distress, harm, or denial (Kitamura, 2005). Additionally, consistent with many current concerns about psychological and medical practice, the climate of managed care creates a situation in which “the bureaucratic allocation of time” allows little opportunity to engage in an assessment feedback session in order to address the clients’ questions and concerns (Pope, 2002). In fact, Pollak posits that a clinician with a heavy testing load may view feedback as an imposition after testing and report writing have been completed. Clinicians may have to donate their time in order to provide adequate feedback to their clients. An additional possibility is that clinicians may believe that feedback will not be useful to the recipient. Inpatient-focused neuropsychological research finds that patients with a traumatic brain injury frequently have diminished cognitive capacity; as such, hospital personnel “often give patients minimal treatment-relevant information on the assumption that it will not be comprehended and thus be of little value to them” (Pegg et al., 2005). Another impediment to the practice, being unsure of who their consumer is, whether patient or referring professional (Butcher, 1992), may cause confusion or a diffusion of responsibility which may influence whether or not, and from whom, a client receives feedback after their testing. 
Furthermore, primarily diagnostic psychologists may shy away from more than a superficial commitment to the feedback process, seeing it as 27 more of a therapeutic role, involving not only observations and interpretations but also an attempt to influence the attitudes and behavior of the client (Pollak, 1988). In conclusion, psychologists and psychological training programs have traditionally not given the assessment feedback process considerable attention. This is unfortunate, considering that “the value of even the most comprehensive and expertly conducted psychological assessment is significantly diminished if findings are misunderstood and recommendations go unheeded” (Pollak, 1988; p. 145). The Importance of Feedback Furthering the inherent ethical obligation to share assessment information with clients is research support for the personal benefits and clinical importance of collaboratively providing feedback to adults (Allen et al., 2003; Brenner, 2003; Finn & Bunner, 1993; Finn & Tonsager, 1992; Goodyear, 1990; Knoff, 1982; Malla et al., 1997; Pegg et al., 2005; Newman & Greenway, 1997) and children (Finn, 2007; Fischer, 1985; Hamilton et al., in press; Purves, 2002; Tharinger et al., 2007; Tharinger, Finn, Gentry, et al., in press; Tharinger, Finn, Hersh, et al., 2008). Studies have shown that receiving feedback about their test performance increases consumers’ compliance with recommendations (Brenner, 2003), predicts better health outcomes (Pegg et al., 2005), and is useful to parents for educational planning (Human & Teglasi,1993; Knoff, 1982). Personal benefits are also implicated, with adult and child clients alike who receive feedback reporting increased self-esteem, increased feelings of hope, enhanced self- awareness and self-understanding, greater motivation to seek mental health services, decreased symptomatology, and lessening feelings of isolation (Finn & Butcher, 1991; 28 Finn & Tonsager, 1992; Finn & Tonsager, 1997; Hamilton et al., in press; Newman & Greenway, 1997; Tharinger et al., 2007; Tharinger, Finn, Gentry, et al., in press; Tharinger, Finn, Hersh, et al., 2008). The delivery of personalized test results, then, whether at the end of a traditional assessment or as part of a collaborative/therapeutic assessment, is clearly beneficial to clients. However, although the theoretical mechanisms by which positive change occurs have not been empirically verified, there are several constructs that appear to be implicated. As introduced earlier, Finn and Tonsager (1997) postulated that self- verification, self-enhancement, and self-efficacy/self-discovery may be enhanced through Therapeutic Assessment. It is important to note, however, that many studies cited as resulting in positive change (e.g. Finn & Tonsager, 1992; Newman & Greenway, 1997) did not precisely follow Finn’s (2007) semi-structured model of Therapeutic Assessment. In particular, these two studies employed two distinct collaborative/therapeutic techniques: soliciting assessment questions during the initial interview and delivering individualized feedback. Finn and Tonsager (1992) administered the MMPI-2 to a group of college students and then gave them individualized feedback about their results. Results showed that those students receiving feedback experienced significant decreases in symptomatology, increases in self-esteem, and greater hopefulness for the future compared with controls who only received examiner attention (Finn & Tonsager). 
Newman and Greenway (1997) then replicated and improved upon Finn and Tonsager by administering the MMPI-2 to both experimental and control groups. The experimental 29 group who received test feedback reported increased self-esteem and decreased symptomatic distress. These results seem to imply that beneficial changes related to self-enhancement and self-discovery can be measured even without the full semi-structured TA procedure, although that is not to say anything of the magnitude of change. Moreover, it is difficult, if not impossible, to remove collaboration from an assessment that is conducted by a clinician who naturally practices with a relational and interpersonal style. Still, for research purposes, it is important to attempt to study potentially beneficial treatments broken down into their component parts. Isolating Feedback Several researchers have attempted to isolate feedback as a variable in creating positive change. Although research to date has not attempted to study feedback in isolation with child clients, the following studies with adult clients may provide a glimpse into differentiating the effects of a collaborative/therapeutic assessment that includes feedback from those changes resulting solely from a traditional assessment with the addition of feedback. Allen and colleagues (2003) empirically evaluated positive outcomes experienced by clients receiving test feedback after undergoing a psychological assessment that was not characterized as collaborative or therapeutic. These authors specifically evaluated two processes theorized to enhance therapeutic outcomes: rapport-building between examiner and client and self-enhancement. Eighty-three adults were given a personality measure, with the experimental group receiving feedback on results and the control group 30 receiving general information about the test. Controlling for examiner attention, results showed that giving clients personal assessment information positively affected their feelings of rapport with the assessment examiner and their sense of self-regard, self- competence, self-understanding, and positive evaluations of the experience (Allen et al.). As such, solely receiving feedback seems to provide benefits similar to those postulated as coming from collaborative/therapeutic assessment, such as rapport-building, a precursor to therapeutic alliance, self-discovery, and self-enhancement. Importantly, Allen and colleagues concluded, as did Finn and Tonsager (1992) and Newman and Greenway (1997), that the positive effects experienced by clients after receiving assessment feedback were not a function of examiner attention. In another empirical study within an inpatient hospital setting, 28 military personnel with moderate to severe traumatic brain injury were given individualized information about their injuries, abilities, and rehabilitation progress (Pegg et al., 2005). These authors found that receiving personalized information benefited patients more than just receiving general information. This personalized feedback increased patients’ overall functional independence, efforts in physical therapy, and satisfaction with rehabilitation treatment (Pegg et al.). Additionally, neuropsychological assessment patients suffering from psychotic disorders have experienced beneficial gains in rehabilitation (Malla et al., 1997) and inpatient clinical populations have reported greater satisfaction with assessment (Finn & Bunner, 1993) after receiving detailed verbal results in a sensitive manner. 
Thus, additional evidence for positive change after feedback in isolation can be tied to the theoretical constructs implicated in Therapeutic Assessment. 31 Another theoretical rationale for the benefits from feedback may be accounted for by looking at studies involving adults of limited cognitive capacity. The mechanism of “compensatory control” has been cited as a plausible explanation for positive results seen in impaired neuropsychological patients who receive individualized feedback (Baltes & Baltes, 1986). For, perhaps because of the emotional and personal meaning of individualized information, even if this information was not fully understood, patients experienced an increase in “cognitive control” over their circumstances, resulting in increased involvement in their treatment and better health outcomes (Baltes & Baltes; Pegg et al., 2005). Thus, positive outcomes have been documented from a variety of orientations, including both collaborative and traditional assessments, after clients receive feedback. In fact, an extensive review of psychological test interpretation literature concluded that clients who receive feedback experience greater gains than controls, regardless of variations in outcome criteria (Goodyear, 1990). Feedback With Children The process of child-directed feedback has been given even less training and research attention than that of adults (Berg, 1985; Tharinger, Finn, Hersh, et al., 2008) and APA (2002) guidelines for feedback requirements leave it open to interpretation. Ethical standard 9.10, Explaining Assessment Results states: Regardless of whether the scoring and interpretation are done by psychologists, by employees or assistants, or by automated or other outside service, 32 psychologists take reasonable steps to ensure that explanations of results are given to the individual or designated representative. (p. 14) Understandably, the child’s parent(s) (or legal guardians) are considered the designated representatives of a child’s psychological evaluation, as, legally, they are the clients who give consent and have legal access to the results. An explanation of the findings, then, is only ethically obligated to the parent(s). This is a regrettable circumstance that therapeutic assessors would consider an important missed opportunity (Tharinger, Finn, Wilkinson, et al., 2008). Nonetheless, the provision of feedback to children is left to the discretion of individual practitioners. It seems the same rationale proposed by Pegg and colleagues (2005) for neglecting to give feedback to TBI patients may be being applied to children – that is that only minimal information needs to be shared with children given their limited capacities for comprehending and utilizing it. While some clinicians rarely if ever engage children in the feedback process, it is likely that most make the decision on a case-by-case basis. Colley (1973), for instance, asserts that most children with mental ages of six and above can benefit from test data interpretation, even if it is on a very basic level. Others, however, are less than optimistic about the utility of providing children with assessment feedback. For instance, Baron’s (2004) handbook for child neuropsychological testing states that if the assessment feedback session involves issues too serious or threatening for the child to be able to integrate or make use of because of emotional or cognitive immaturity, then the parents or neuropsychologists might want to exclude the child from participation. 
These guidelines go on to state that the sharing of results with the child can potentially be a 33 valuable experience, but may not be possible due to the child’s age, schedule, or medical condition. Baron does not explicate, however, what the age or situational issues might be that would preclude the child from participating in feedback. The text goes on to warn that, while some children do favor direct feedback, others “may reject the evaluation process and its final interpretation” and refuse to show up for an additional session (p. 75). While this would certainly be an obstacle to providing feedback, the very tenets of a collaborative/therapeutic approach would address and, ideally, overcome, a client’s negative feelings about the assessment; at the same time, an assessor committed to practicing collaboratively would argue that results achieved from a client who is rejecting of the evaluation procedures are possibly inaccurate (Handler, 2007) or, at the very least, ineffectual for developing useful interventions (Fischer, 1985). Thus, clients who appear uncooperative may just be victims of a hierarchical and inaccessible information- gathering model of assessment. Children Referred for Neuropsychological Assessment Clinical research has yet to investigate the practice of children’s participation in feedback after a neuropsychological assessment. Baron (2004) advises that some children who undergo neuropsychological testing may be unable to benefit from an interpretation of results, while others may find the experience valuable. This advice seems to be based upon the cognitive and emotional maturity of the child in question, as well as the seriousness of the problems the child is experiencing. If this is the case, then the decision to include a child in a feedback session would be a unique one made for each child. The impetus for seeking a neuropsychological assessment of a child may be for a variety of 34 reasons. In the case of organic or neurological dysfunction or injury, such as a seizure disorder, childhood cancer, or traumatic brain injury, an assessment may be requested to characterize baseline or post-operative functioning. More commonly, however, an assessment is instigated because of behavioral or emotional concerns, learning difficulties, or attentional problems (Baron, 2004). Therefore, it is reasonable to assume that children referred for a neuropsychological evaluation are having problems in their daily lives. When a child is having problems at school or at home and the root of these problems is not understood, the child may be mislabeled by teachers or parents as bad, disruptive, unintelligent, or lazy. Obtaining a neuropsychological profile of a child’s cognitive, attentional, memory, and academic functioning can prove invaluable to parents and teachers for planning interventions, as well as for establishing an empathic understanding of the child’s problems. For the child, the results of such an evaluation are likely to have an impact, be it direct or indirect, on their life. Perhaps the question of child feedback should not be whether or not they receive it; perhaps the variables should be how much information is shared, and in what format. Providing an explanation of the results to children and their parents in an appropriate format is a necessary step toward improving outcomes for children who undergo a neuropsychological evaluation. 
The key phrase here is “in an appropriate format.” Baron’s warnings about providing feedback to children who are unable to handle the information is well-taken. The seriousness of the child’s condition, the threatening nature of findings, and the maturity level of the child are all essential considerations to the provision of test results. Proponents of 35 collaborative/therapeutic assessment would argue, however, that it is not a matter of whether or not children should receive feedback after a neuropsychological assessment; rather, the issue is how to give the child beneficial information that is individualized to their condition, be it serious, threatening, or not, and also developmentally appropriate for their unique cognitive and emotional maturity. After a neuropsychological assessment, the necessary information about their impairments, cognition, and emotionality has already been summarized and interpreted for the child’s parent(s). The information is all there; what is needed next is a re-interpretation of the information for the benefit and understanding of the child. An Underutilized Opportunity For many assessment practitioners who work collaboratively with children (Finn, 2007; Fischer, 1985; Hamilton et al., in press; Handler, 2007; Purves, 2002; Tharinger, Finn, Gentry, et al., in press; Tharinger, Finn, Hersh, et al., 2008; Tharinger, Finn, Wilkinson, et al., 2008) it would be considered a rare exception not to provide the child with individualized information about his or her assessment performance. To these professionals, who are committed to engaging children in a process that is “respectful and responsive” to their experience, sharing the results in a child-friendly manner is an essential component (Tharinger, Finn, Wilkinson, et al.). Importantly, this child-friendly focus is not limited just to providing feedback, as a crucial component of Therapeutic Assessment with children is to have the parents, with guidance from the clinician if needed, discuss the purpose of the assessment with their child. Finn (2007) explains the importance of this step, which has relevance to the goals of the feedback session with the 36 child and also sets the stage for what the parents can come to expect of the forthcoming evaluation. Finn states: In general, by asking parents to discuss the assessment with their child, I am sending the message that “nothing is too bad to talk about.” Also, I am modeling respect and empathy for the child, as if to say, “It may be scary and confusing for your child to come for an assessment without being given some explanation first. Your child has a right to be told something about why she is being asked to do the testing.” (Finn, p. 198) Indeed, helping children understand the purpose and process of an assessment, inviting their participation in determining the goals of the assessment, and providing an explanation of the results of the assessment, are goals of psychologists who strive to work collaboratively and therapeutically with children (Tharinger, Finn, Wilkinson, et al., 2008). Providing assessment results to children in a way that they can understand, appreciate, and relate to is not necessarily an easy or straightforward process. In fact, Tharinger, Finn, Wilkinson, and colleagues (2008) state that providing feedback to children may be even more difficult than it is to provide feedback to parents and teachers. 
Additionally, assessors who have attempted to deliver a summary of strengths and weaknesses to a child or young adolescent have likely encountered blank stares and a sense of overwhelm, as a child cannot be expected to comprehend their functioning and that of their family in black-and-white terms. Assessors may then, understandably, feel “ineffective and vulnerable in their relationship with the child,” and decide to focus their 37 efforts on the parents instead (Tharinger, Finn, Wilkinson, et al., p. 2). Many clinicians (Finn, 2007; Purves, 2002; Tharinger, Finn, Hersch, et al., 2008; Tharinger, Finn, Wilkinson, et al.) have successfully overcome these challenges. Drawn from Fischer’s (1985) work as well as that of Mutchnick and Handler (2002), these practitioners routinely create individualized fictional stories or fables that incorporate assessment findings into a child-friendly format. The fables are then shared with the child, and often the child’s parents, as assessment feedback. Storytelling and Children Children become introduced to the phenomenon of storytelling at an early age. Beginning in infancy, caregivers sing lullabies, tell nursery rhymes, read books, and relate personal stories over and over to their children. Stories can be used to soothe and comfort, introduce a milestone, teach new skills, or simply spend time together. From Goodnight Moon (Brown, 1947) and Hop on Pop, (Seuss, 1963) to Judy Blume and Star Wars chapter books, the stories that are read and the books that are chosen are instinctively developmentally appropriate for the child. After all, a child is not likely to attend to a story that does not meet them at a level that they can comprehend and appreciate. Fables are a representation of storytelling in its earliest form. They are defined as short stories with the feature of anthropomorphism, or the attribution of human qualities to animals, plants, and forces of nature. Fables are intended to teach a lesson, offer advice, and ultimately, reinforce what makes us human (Hayashi, 2001). Used with children by parents, teachers, and elders for centuries, fables also have a short history of 38 use as a therapeutic and intervention technique (Bhattacharyya, 1997; Holmes, 1993; Mutchnick & Handler, 2002). Story-telling is naturally incorporated into therapy, as “every psychotherapeutic case-history is, in a sense, a ‘fiction’ – a story wrought collaboratively by patient and therapist from the raw materials of memory, history, dreams, transferential relationship and theoretical perspective” (Holmes, 1993, p. 127). While some may argue that such a practice seems out of place in the age of empirical validation, many can recognize its fit and utility. Indeed, the missing link “may lie in the common quest in the study of myths and the investigation of the self. After all, a reconstruction of what has gone on, and how that affects what is now and what will be,” is the primary goal of both science and storytelling (Bhattacharyya, p. 1). It is this “reconstruction” of a child’s assessment experience and the information gleaned from that assessment that forms the basis of an individualized therapeutic fable written for a child as feedback. Individualized Fables as a Mechanism for Growth A by-product of collaborative/therapeutic assessment which developed from humanistic principals, individualized fables may similarly be conceptualized as an extension of interpersonal theory and play therapy principals brought into the feedback session. 
Many of the tenets of a “child-centered philosophy” of play therapy are recognizable as the grounds for creating individualized fables: children are unique and worthy of respect; they have an innate tendency toward growth and self-discovery; they have the capacity to overcome their obstacles; they are capable of self-direction and of meeting their challenges creatively; and children must me met at their level – they are not 39 miniature adults” (Landrath, 2002, p. 54). From this, it may be reasonable to hypothesize that some of the theoretical constructs implicated in play therapy, in addition to those from collaborative/therapeutic assessment, may come into play for the child who is presented with child-centered feedback in the form of an individualized fable. These constructs come from Rogers’ (1951) child-centered theory of personality structure, and involve attending to 1) the person; 2) the self; and 3) the phenomenal field for the optimal development of the child. The Person. The person is the child and everything he or she is – thoughts, feelings, and behaviors. This child as person continually strives to exist in an ever- changing world by becoming a more positively functioning person, always moving toward self-understanding, self-discovery, and self-improvement (Landrath, 2002; Rogers, 1951). The simplified and entertaining format of a fable presents a more digestible form of new information for the child to be able undertake this journey toward self-understanding and improvement. Children are used to reading fictional stories, and these stories are interesting to them especially if they can identify, in some way, with the characters. Fables written for children as feedback include information that the child, him or herself, has directly shared with the assessor. That way, the child automatically identifies with the main character in the fable at some level and thus becomes engaged. Once a child is engaged in the story, the door to introduce new or more difficult information – and information that was obtained more indirectly through the assessment – is opened somewhat, the degree of which is uniquely determined. The child can read about a character that is just a little bit like them, who then has to deal with problems 40 (e.g. homework, paying attention, anxiety, a parents’ divorce) that are kind of like their own problems. The child can discover ways that this other character might handle these problems. This familiar but fictional characterization of someone else’s problems ideally gives the child a vehicle to “try out new conceptions of the self without being overwhelmed” (Tharinger, Finn, Wilkinson, et al., p. 612). Consequently, the process allows the child to take in new information about themselves and their difficulties that otherwise may not have been accepted or understood. A well-constructed fable can also help to normalize the problems experienced by the child when they realize that others have to deal with problems similar to their own. In addition to addressing misconceptions about the novelty of their life situation, fables can help to diminish children’s shame and negative self-conceptions, in that the stories help reframe the child, for instance, from “bad” to sad or from “dumb” to dyslexic (Tharinger, Finn, Wilkinson, et al., 2008). Thus, the child is on the road to a more complete understanding of him or herself. 
The child is then likely to experience decreased negative feelings of shame, embarrassment, or inadequacy and increased feelings of hope and self- esteem through this experience of self-discovery. The Self. The presentation and individualized nature of a feedback fable attends to the concept of self. Consistent with the Therapeutic Assessment model, an individualized fable written as feedback is introduced to the child as “A story that was written especially for you.” Creating an individualized fable especially for a child, and then presenting the story as such, addresses the developing child’s need to feel that they are respected and liked by others. Also a fundamental aspect of play therapy, in the case of assessment, the 41 child is shown that they are respected and appreciated by the assessor so much so that the assessor wrote a story just for them. Human beings in general, and children especially, have a desire to be recognized as valuable and appreciated by others (Landrath, 2002). These feelings of being respected, valued, and liked by others facilitate a sense of belonging for the child, and their existence is a necessary component of a healthy development of the self (Kohut, 1977). Moreover, this need for positive regard received from others is reciprocal, in that, as the regard is recognized as being applied to the child, the child then applies the positive regard to him or herself (Rogers, 1951). As the child begins to hear their special story and recognizes him or herself and their life situation within it, these needs for feeling accepted and valued are furthered fulfilled, thereby increasing the child’s positive regard for him or herself. Using individualized fables as feedback can also help children feel as if their voice is really being heard and that their feelings are being acknowledged, thus providing them with “an intense experience of positive accurate mirroring.” (Tharinger, Finn, Wilkinson, et al., 2008, p. 612). “Positive accurate mirroring,” from self-psychology, has been considered a developmental necessity that enables the recipient to function with a healthy degree of self-regard and self-esteem as an adult (Kohut, 1977). Ideally received throughout childhood from the child’s parents, positive accurate mirroring has been shown to increase in adults after receiving test feedback (Allen et al., 2003; Finn & Tonsager, 1992; Newman & Greenway, 1997) and is likely implicated in increases of self-understanding, self-esteem, and self-enhancement measured in these studies. Children who are exposed to a fable that reflects their experiences and feelings and that 42 seems to portray their functioning and family in an understandable way often experience a sense of finally being really understood for the first time; thus, their thoughts and feelings are being accurately mirrored by the assessor. As such, positive accurate mirroring is a way that individualized fables may influence children’s positive regard for their assessor and positive feelings about their assessment experience, in addition to the development of positive feelings about themselves and their future. The Phenomenal Field. The phenomenal field of the child is their own personal reality – everything that the child experiences be it internal or external. It is the child’s perception of reality that must be truly understood if the child is to be fully understood. 
In order to engage parents and ultimately instigate the family systems changes that may be necessary to help a child overcome their problems, the behavior of a child must be investigated through the eyes of that child (Landrath, 2002). As an essential component to collaborative and therapeutic assessment, parents are always invited and encouraged to be present for the presentation of their child’s individualized feedback fable. During a child feedback meeting, parents witness the child’s reactions to and comprehension of the problems faced by the fictional character. This process creates an opportunity for the parent to better understand their child and to see their child’s problems from their child’s point of view. Fables therefore enhance the parents’ ability to empathize with the child’s problems, as well as learn more about how their child functions in his or her unique world. This may then provide a way for parents to construct, or accept, a new story about the difficulties their child is having, how those difficulties might be affected by the systemic functioning of the family, and how the family may be able to intervene. 43 Additionally, the story is a “transitional object” that can be kept, illustrated, and referred to by the child and the parent(s) for reminders after the assessment is over (Tharinger, Finn, Wilkinson, et al., 2008). In this way, the parents are provided with a tool to use in the future to discuss with their child his or her problems and work through potential solutions. This process of engaging parents with their children in collaborative, individualized feedback shows promise for treatment efficacy for many families (Farmer & Brazeal, 1998). Creating the Individualized Fable Integrating a child’s psychological assessment findings into a fictional but relevant story essentially amounts to a melding of art with science. However, although this may seem like an intimidating undertaking, creating fables is very feasible and they “flow quite easily and are enjoyable to construct” once a clinician has the motivation to attempt the task. It is suggested that assessors “free themselves from the ways of formal professional writing” and utilize their imaginations (Tharinger, Finn, Wilkinson, et al., 2008, 612). A basic familiarity with children’s literature, an ability to write for a younger audience, and personal information gleaned from projectives or other assessment sources, such as pets, favorite characters, wishes, etc., are also important considerations for a story that the child can connect to. The theme of an individualized fable comes from a keen interest of the child – a favorite cartoon, book, or animal for example – with the main characters being fictional representations of the child and his or her family, teachers, or friends living in the setting consistent with the chosen theme. The problems introduced to the characters come from 44 clinical impressions of the real challenges that the child is facing, and can be physiological, social, educational, systemic, or a combination of all of these. Potential solutions offered must represent actual and viable possibilities from the child’s current life situation. Not only must the child feel represented in the story, but the story will ideally also resonate with caregivers and increase their empathy, sense of empowerment, and motivation to change in order to help their child (Tharinger, Finn, Wilkinson, et al., 2008). 
(see Appendix H) Contribution of this Study Finn (2007), the founder of Therapeutic Assessment, encourages psychologists to take what they can use from his TA model, in case, for various reasons, it may not be practical to implement all of the steps in certain settings. In addition, Newman and Greenway (1997) call for future research in the efficacy of therapeutic assessment to parse out the various aspects of the process in order to determine what possible influence each component might have on beneficial effects experienced by clients. To this end, this goal of this research study is to compare outcomes from two groups of children undergoing a neuropsychological evaluation, in which the groups theoretically differ on only one variable – that of whether or not the child receives individualized feedback after the assessment. This research will contribute to the literature base on the utility of assessment feedback to children by being the only known empirical study to attempt to isolate a single therapeutic assessment technique and evaluate its efficacy for child outcomes. Moreover, in a neuropsychological context in which parents are generally the assessment 45 information gatekeepers, this study also breaks new ground by actively engaging the child in the assessment feedback process and addressing him or her at an appropriate developmental level. The hope is that by altering the child’s neuropsychological evaluation process in a minor way, their experience of assessment is enhanced while the process remains valid and standardized. Research Hypotheses Hypothesis 1 A. Children who receive assessment feedback in the form of individualized fables will report that they learned new things about themselves through the assessment at a greater level than will children who do not receive this feedback. B. Children who receive assessment feedback in the form of individualized fables will report a more positive relationship with the assessor than will children who do not receive this feedback. C. Children who receive assessment feedback in the form of individualized fables will report more positive feelings about the assessment than will children who do not receive this feedback. Rationale Although children’s experiences of assessment as a function of feedback presented in the form of individualized fables has yet to be empirically investigated in isolation, the use of stories and fables has been reported to be a way to meet children at their level (Fischer, 1985), and help them absorb complicated information in a non- 46 threatening way (Mutchnick & Handler, 2002; Tharinger et al., 2007). Qualitative case study analyses show that children perceive having learned new things, a strong rapport with the assessor, reduced symptomatology, and decreased negative feelings about the assessment after individualized fables are presented as part of a Therapeutic Assessment (Tharinger et al., 2007; Tharinger, Finn, Wilkinson, et al., in press) In addition, adult neuropsychological patients with limited cognitive capacity who receive personalized feedback without a therapeutic assessment display greater cognitive control over their recovery (Baltes & Baltes, 1986), suggesting that learning new things about one’s situation impacts self-efficacy/self-discovery, as well as competency. 
In addition to considering findings that adult clients who receive assessment feedback without a therapeutic assessment experience increased rapport-building with the assessor and increased self-discovery (Allen et al., 2003; Malla et al., 1997; Pegg et al., 2005), it is posited that children receiving individualized, therapeutic feedback after a non-therapeutic assessment will report an experience of having learned new things, having a positive relationship with their assessor, and fewer negative feelings about the assessment. Moreover, studies with adult clients receiving feedback suggest that the resulting benefits are not a function of examiner attention (Allen et al., 2003; Finn & Tonsager, 1992; Newman & Greenway, 1997; Pegg et al., 2005), therefore, the positive changes expected for children in the experimental individualized feedback group are hypothesized to be greater than in children in the control group. 47 Hypothesis 2 A. Parents of children who receive assessment feedback in the form individualized fables will report that they learned new things about their child through the assessment at a greater level than will parents of children who do not receive this feedback. B. Parents of children who receive assessment feedback in the form of individualized fables will report that their child has a more positive relationship with the assessor than will parents of children who do not receive this feedback. C. Parents of children who receive assessment feedback in the form of individualized fables will report fewer negative feelings about the assessment than will parents of children who do not receive this feedback. Rationale Although the assessment experiences of parents after their children receive individualized feedback in the form of fables has yet to be assessed in isolation, parents with children engaged in a Therapeutic Assessment that includes individualized fables as feedback perceive having learned new things about their child, having the perception that the assessor cared about and respected their child, and positive feelings about the assessment process (Hamilton et al., in press; Tharinger et al, 2007). Therefore, it is hypothesized that these beneficial effects will also be measured in parents who experience individualized feedback with their child in the context of a non-therapeutic assessment. Moreover, studies with adult clients receiving feedback suggest that the 48 resulting benefits are not a function of examiner attention (Allen et al., 2003; Finn & Tonsager, 1992; Newman & Greenway, 1997; Pegg et al., 2005), therefore, the positive changes expected for parents whose children are in the experimental individualized feedback group are hypothesized to be greater than in parents whose children are in the control group. Hypothesis 3 A. Children who receive assessment feedback in the form of individualized fables will report an increase in positive emotions related to their challenges and future at post-assessment when compared with pre- assessment values. B. It is further hypothesized that the magnitude of this change in positive emotions between pre- and post-assessment for children receiving assessment feedback in the form of individualized fables will be greater than that of children who do not receive this feedback. 
Rationale Although children’s positive feelings about their challenges and future have yet to be empirically investigated as a function of individualized feedback in isolation, the use of stories and fables has been reported to be a way to meet children at their level (Fischer, 1985), and help them absorb complicated information in a non-threatening way (Mutchnick & Handler, 2002; Tharinger et al., 2007), thus providing an increased opportunity for children to experience positive affect. Qualitative case study analyses of TA with children that includes the use of therapeutic fables show that children report 49 increased positive feelings such as self-esteem, trust, and hope for the future (Tharinger et al., 2007; Tharinger, Finn, Wilkinson, et al., in press). Moreover, using fables can help children “feel validated and understood” resulting in increases in positive affect (Tharinger, Finn, Wilkinson, et al.). Additionally, studies with adult clients receiving feedback suggest that the benefits that result are not a function of examiner attention (Allen et al., 2003; Finn & Tonsager, 1992; Newman & Greenway, 1997; Pegg et al., 2005), therefore, the increase in positive emotions expected for children in the experimental individualized feedback group is hypothesized to be greater in magnitude than for children in the control group. Hypothesis 4 A. Children who receive assessment feedback in the form of individualized fables will report a decrease in negative emotions at post-assessment when compared with pre-assessment values. B. It is further hypothesized that the magnitude of this change in negative emotions between pre- and post-assessment for children receiving assessment feedback in the form of individualized fables will be greater than that of children who do not receive this feedback. Rationale Although children’s negative feelings about their challenges and future have yet to be empirically investigated as a function of individualized feedback in isolation, qualitative case study analyses of TA with children that includes the use of therapeutic fables show that children report decreased negative affect; for example, fables can 50 diminish children’s shame and negative self-conceptions (Tharinger, Finn, Wilkinson, et al., in press). Additionally, studies with adult clients receiving feedback suggest that the benefits that result are not a function of examiner attention (Allen et al., 2003; Finn & Tonsager, 1992; Newman & Greenway, 1997; Pegg et al., 2005), therefore, the decrease in negative emotions expected for children in the experimental individualized feedback group is hypothesized to be greater in magnitude than for children who are in the control group. Hypothesis 5 A. Parents of children who receive assessment feedback in the form of an individualized fable will report an increase in positive emotions related to their child’s challenges and future at post-assessment when compared with pre-assessment values. B. It is further hypothesized that the magnitude of this change in positive emotions between pre- and post-assessment for the parents of children receiving assessment feedback in the form of an individualized fable will be greater than that of the parents of children who do not receive this feedback. 
Rationale Although parents’ positive feelings about their child’s challenges and future have yet to be empirically investigated as a function of individualized feedback in isolation, case studies of TA that includes therapeutic fables as child feedback show that parents become more patient, empathic, compassionate, and hopeful towards their child’s challenges and future after the assessment (Hamilton et al., in press). Moreover, studies 51 with adults receiving feedback without participating in a therapeutic assessment show increased self-regard and self-competence (Allen at al., 2003), and it is hypothesized that these self-enhancement processes will translate into parental increase in positive affect for their child. Additionally, studies with adult clients receiving feedback suggest that the benefits that result are not a function of examiner attention (Allen et al., 2003; Finn & Tonsager, 1992; Newman & Greenway, 1997; Pegg et al., 2005), therefore, the increase in positive emotions expected for parents to have for their children in the experimental individualized feedback group is hypothesized to be at a greater magnitude than for parents of children who are in the control group. Hypothesis 6 A. Parents of children who receive assessment feedback in the form of individualized fables will report a decrease in negative emotions related to their child’s challenges and future at post-assessment when compared with pre-assessment values. B. It is further hypothesized that the magnitude of this change in negative emotions between pre- and post-assessment for the parents of children receiving assessment feedback in the form of an individualized fable will be greater than that of the parents of children who do not receive individualized assessment feedback. Rationale Although parents’ negative feelings about their child’s challenges and future have yet to be empirically investigated as a function of individualized feedback in isolation, 52 case studies of TA that includes therapeutic fables as child feedback show that parents report decreased negative affect, including feeling less frustrated, less guilty, less anxious, and less like they want to give up (Hamilton et al., in press). Moreover, TA case studies show that parents’ positive feelings are significantly increased, and that the reduction of their negative feelings about their child is even more significant (Tharinger et al., 2007). Additionally, studies with adult clients receiving feedback that is not a part of a therapeutic assessment suggest that the benefits that result are not a function of examiner attention (Allen et al., 2003; Finn & Tonsager, 1992; Newman & Greenway, 1997; Pegg et al., 2005), therefore, the increase in positive emotions expected for parents to have for their children in the experimental individualized feedback group is hypothesized to be greater in magnitude than for parents of children who are in the control group. Hypothesis 7 Parents of children who receive assessment feedback in the form of an individualized fable will report a higher level of satisfaction with services than parents of children who do not receive this feedback. Rationale Although parents’ perception of satisfaction with services for their child’s neuropsychological assessment has yet to be empirically investigated as a function of individualized feedback in isolation, parents participating in TA that includes individualized fables report high satisfaction with services (Hamilton et al., in press; Tharinger et al., 2007). 
Moreover, clients in adult inpatient settings who were given 53 detailed feedback information reported greater satisfaction with assessment (Finn & Bunner, 1993) and greater satisfaction with rehabilitation treatment (Pegg et al., 2005). 54 CHAPTER III Method Participants The participants in this study are a fairly homogeneous sample of 32 children and 32 parents (one parent per each participating child). Originally, 46 children who underwent a neuropsychological assessment at the participating clinic between December 2008 and May 2009 and their parent volunteered to participate in the study. Of these original 46, 32 child-parent pairs fulfilled the study requirements by returning to participate in the child feedback session and completing all research measures. The children who completed the study were 23 males and 9 females between 6 and 13 years of age. The mean age of child participants was 9.0 years (SD = 1.79). The majority of children in the sample were Caucasian (88%), followed by Other/Unknown (6%), Hispanic (3%), and Asian/Pacific Islander (3%). Parent participants included mothers (81%), fathers (15%), and one guardian/aunt. The majority of participating families were of middle to upper socioeconomic status as determined by parental level of educational attainment. Parent respondents had completed an average of 16.7 years of schooling (SD = 1.53). Moreover, 90% of parents had obtained at least an undergraduate degree, and 49% of parents completed a graduate degree. The children involved were referred for an evaluation to one of two participating neuropsychologists at a private outpatient neuropsychological clinic in central Texas. As is typical in outpatient neuropsychological assessment settings, referral concerns for the 55 participant children included inattention and hyperactivity, academic difficulties, and social/emotional issues. Referring parties were generally the child’s parents and the child’s physician. Child participants with a Full Scale IQ of less than 70 were excluded from the study, although no children who were involved met this exclusionary criterion. As a result of their evaluation, a majority of the children involved (63%) in this study received a diagnosis of an attention-deficit disorder (ADHD – Combined Type, ADHD – Primarily Inattentive Type, or ADHD – Not Otherwise Specified). Attention- Deficit/Hyperactivity Disorder (ADHD) is the most commonly diagnosed psychiatric disorder in children. The current estimate of the prevalence of ADHD is 3-7% of school- aged children (American Psychiatric Association, 2000). Children with attention-deficit disorders commonly have problems at school and at home with various symptoms of inattention, hyperactivity, and impulsivity. The second most common diagnosis received was Central Auditory Processing Disorder (CAPD), followed by dysgraphia. CAPD is a neurologic impairment involving varied deficits in understanding auditory information and language. Dysgraphia is basically poor handwriting skills (formation of letters and words) and is commonly diagnosed in children with ADHD. Measures The Parent Experience of Assessment Survey (PEAS) The PEAS is a 64-item paper and pencil questionnaire, the purpose of which is to capture the experience of a child’s psychological assessment from a parental point of view. 
Recently developed by a university-based therapeutic child assessment clinic (Finn 56 & Tharinger), this measure follows a 5-point Likert scale format (Likert, 1932) with response options from “Strongly Disagree” to “Strongly Agree” available. Six subscales for the PEAS were determined through factor analysis after an 80-item pool was sorted by nine independent judges, experts in assessment. The subscales of the PEAS serve as independent measures of various interpersonal experiences with psychological assessment as there is no total, or global, score to be calculated. For the purposes of the current study, the term “Assessor” in this measure was verbally explained to each parent completing the form to mean ‘the assessment team,” comprised of both the neuropsychologist and the child’s assessor. The six subscales are: Learned New Things, Positive Assessor-Child Relationship, Negative Feelings About the Assessment, Positive Assessor-Parent Relationship, Collaboration/Informed Consent, and Family Involvement in Child’s Problems. Data from this study and previous studies were used to calculate a measure of internal consistency for this instrument. Intercorrelations of the 64 item scores yielded a Chronbach’s alpha coefficient of α = .90. Learned New Things. The Learned New Things subscale measures the extent to which parents feel that their understanding of their child and the child’s difficulties has changed as a result of the assessment. Items such as “I understand my child so much better now” and “I am more aware of my child’s strengths” are included. Positive Assessor-Child Relationship. The Positive Assessor-Child Relationship subscale includes ten items such as “My child felt comfortable with the assessor” and “The assessor worked well with my child.” 57 Negative Feelings About the Assessment. Some sample items from the Negative Feelings About the Assessment subscale include “The assessment made me feel guilty,” “I was anxious throughout the assessment,” and “At the end of the assessment, I was left feeling angry.” Positive Assessor-Parent Relationship. Items included within this subscale are “I felt close to the assessor,” “I felt the assessor respected me,” “I felt the assessor looked down on me,” and “I trusted the assessor.” Collaboration/Informed Consent. This subscale included items such as “The assessor helped me explain the assessment to my child,” “I felt like part of a team working to help my child,” “I understood the goals of the assessment,” and “The assessor asked me if the assessment findings seemed right to me.” Family Involvement in Child’s Problems. Some sample items from this scale are “My family has little to do with why my child has problems,” “My child is the only person in our family who needs to change,” and “I now see how my family’s problems affect my child.” (see Appendix A) The Child Experience of Assessment Survey (CEAS) The CEAS is a 30-item exploratory instrument designed to capture the experience of a psychological assessment from the child’s point of view. This measure follows a 5- point Likert scale format (Likert, 1932) with response options modified to be more child- friendly: “Really, Really Not True,” “Not True,” “Kind of True,” “True,” and “Really, Really True.” Five subscales for the CEAS were conceptualized theoretically after a 50- item pool was sorted by nine independent judges, experts in assessment. 
The subscales of 58 the CEAS serve as independent measures of various interpersonal experiences with psychological assessment as there is no total, or global, score to be calculated. Prior to each administration of this measure, the name of the child’s assessor was handwritten in each (The assessor) blank, so that the child might better understand the meaning of the statements. The five subscales are: Learned New Things, Feelings About the Assessment, Child-Assessor Relationship, Perception of Parent Understanding, and Collaboration. Data from this study were used to calculate a measure of internal consistency for this instrument. Intercorrelations of the 30 item scores yielded a Chronbach’s alpha coefficient of α = .88. Learned New Things. The Learned New Things subscale attempts to measure the extent to which children feel that their self-understanding has increased as a result of the assessment. Items such as “Now I better understand my problems,” “I learned that I am good at some things I didn't know about,” “Now I know more about why some things are harder for me,” and “I will think about myself differently now” are included. Feelings About the Assessment. The Feelings About the Assessment subscale attempts to measure the child’s affective reactions to the assessment process itself, as well how the child feels about him or herself after participating in the assessment. The subscale includes items such as “I felt the assessment was fun,” “I'm glad I did the assessment,” “I felt the assessment was boring,” “I am proud of myself for doing the assessment,” and “I felt the assessment was a waste of time.” Assessor-Child Relationship. The Assessor-Child Relationship scale attempts to capture a sense of rapport developed between the examiner and the child that may also be 59 related to therapeutic alliance. Some sample items from the Assessor-Child Relationship subscale include “(The assessor) seemed to care about me,” “I liked (the assessor),” (The assessor) and I had fun together,” “(The assessor) was mean to me,” and I looked forward to coming to see (the assessor).” Perception of Parent Understanding. The Perception of Parent Understanding subscale attempts to ascertain whether the child feels as if their parents learned more about them, understand them better, and have more empathy for their problems after the assessment. Items such as “I think my parents will understand me better now,” “Maybe my parents will go easier on me now,” and “Maybe, after this assessment, my parents will realize its not all my fault.” are included. Collaboration. Finally, the Collaboration subscale included items related to the extent to which the child felt informed and included during the various stages of the assessment, including during the feedback session. Some sample items are: “(The assessor) explained why each test was important;” “(The assessor) helped me understand things about myself from the tests;” “(The assessor) asked me what it was like to take the tests” and “(The assessor) helped me understand why we were doing the tests.” (see Appendix B) Parents’ Positive and Negative Emotions About Their Child (PPNE-C) The PPNE-C is a 20-item pilot self-report questionnaire designed to investigate potential change in parental empathy and hopefulness about their child’s challenges and future over the course of a psychological assessment. The PPNE-C has been used in several case study examinations of parental affect change after their child’s Therapeutic 60 Assessment. 
The scale was adapted to be used by parents and children from the Positive and Negative Affect Schedule – Expanded Form (Watson & Clark, 1994). The PANAS- X is a 60-item questionnaire that measures the larger scales of Positive Affect and Negative Affect and also eleven specific affect states such as fear, sadness, and guilt. The scales’ internal consistency coefficients range from 0.72 to 0.93, maintain adequate construct validity, and present a significant inter-rater reliability between subjects and their peers (Watson & Clark). The PPNE-C questionnaire reads, “Today as I think about my child’s challenges and future I feel…” and lists positive emotions (e.g. “patient,” “sympathetic,” “determined,” “hopeful”) and negative emotions (e.g. “frustrated,” “like I want to give up,” “at my wits’ end,” “anxious”). The parent is to rate each emotion using a 5-point Likert scale format (Likert, 1932) with response options from “Strongly Disagree” to “Strongly Agree.” The mean scores are totaled for each of the two subscales, Positive Emotion and Negative Emotion. (see Appendix C) Children’s Positive and Negative Emotions About Themselves (CPNE-S) The CPNE-S is a 18-item pilot instrument created for this study to investigate potential change in children’s feelings about their challenges and future after the course of a psychological assessment. Similar to the PPNE-C, the questionnaire reads, “Today as I think about my challenges and future I feel…” and lists positive emotions (e.g. “good,” “proud,” “like I can handle it”) and negative emotions (e.g. “bad,” “alone,” “like I want to give up”). Similar to the CEAS, this measure follows a 5-point Likert scale format (Likert, 1932) with response options modified to be more child-friendly: “Really, Really 61 Not True,” “Not True,” “Kind of True,” “True,” and “Really, Really True.” The mean scores are totaled for the two subscales, Positive Emotion and Negative Emotion. (see Appendix D) The Client Satisfaction Questionnaire (CSQ-8) The CSQ-8 (Larsen, et al., 1979) is the most widely used measure for general client satisfaction. Although originally normed for adult clients, it has more recently been used in parent satisfaction studies. The CSQ-8 is a single factor scale with a reported internal consistency (Chronbach α) of .93 - .96. This measure was included within the current study in order to ascertain whether receiving individualized feedback for children affects overall parent satisfaction of the services they received from the clinic. (see Appendix E) Procedure Approval by the Human Subjects Committee Prior to beginning the study, permission was obtained from the Institutional Review Board for the Protection of Human Subjects of the University of Texas at Austin (IRB Approval Protocol # 2008-10-0009). This study was also approved by the appropriate officials of the private neuropsychological clinic. In addition, the Code of Ethics for research with human subjects of the American Psychological Association (APA, 2002) was adhered to. Recruitment and Protection of Participants Upon initiation of a contract for services between clients and the participating neuropsychological clinic, the parents of clients between the ages of 6 and 13 years were 62 approached by the neuropsychologist and the principal investigator with an offer to participate in this study. Willing participants were given a detailed letter describing the study, their participation, and their rights as a research participant. 
It was explained to the parents that they would return to the clinic with their child for the child feedback session, and that their child would receive a personalized story about their assessment to keep. The nature of the differences between the control and experimental groups was not explained to the participants. Clients were informed that their participation in this study was entirely voluntary and that the decision of whether or not to agree would have no bearing on their child's evaluation, or on their present or future relationship with the participating clinic. In addition, participants were informed of their right to discontinue the study at any time, for any reason, without consequence. Informed consent from parents, with consent for audio recording (Appendix F), and assent from children (Appendix G) were obtained.

Assessment

Neuropsychological evaluations for each participant were conducted per the standard practice of the participating clinic. A portion of each assessment was conducted by a licensed neuropsychologist, including the clinical interview and the clinical motor exam. The remainder of each assessment battery was administered by psychometrists employed by the clinic or by practicum students from a doctoral psychology program. A total of seven assessors participated in the study. Testing sessions were, with two exceptions, completed in one day between 8:30 a.m. and 4:00 p.m., with a one-hour break for lunch. Assessments for the two exceptions were each completed in two half-day sessions. All evaluations were supervised by licensed psychologists and took place at the participating neuropsychological clinic.

Comprehensive neuropsychological evaluations are fairly standardized at this clinic, with some variation based upon referral concerns. The evaluations for the participants in the study included an assessment of cognitive ability (Wechsler Intelligence Scale for Children, 4th Edition (WISC-IV) or Kaufman Assessment Battery for Children, 2nd Edition (KABC-II)), an assessment of academic achievement (Woodcock-Johnson Tests of Achievement, 3rd Edition (WJ III) or Kaufman Test of Educational Achievement, 2nd Edition (KTEA-II)), an assessment of attentional capabilities (Conners' Continuous Performance Test or Gordon's Diagnostic System), an assessment of memory functioning (California Verbal Learning Test (CVLT-C or CVLT-2); TOMAL-2), a test for auditory processing disorders (SCAN-A or SCAN-C), an assessment of motor coordination and visual processing (Beery Visual-Motor Integration (VMI) or Rey-Osterrieth Complex Figure: Copy, Immediate, and Delayed Recall (Rey-O)), assessments of sensory and motor functioning (lateral dominance; grip strength; Grooved Pegboard; Reitan-Klove Sensory-Perceptual Examination; clinical motor exam), and an assessment of social and emotional functioning (Kinetic Family Drawing and/or Haak Sentence Completion task; Achenbach Child Behavior Checklist: Parent and Teacher Reports).

Parent feedback meeting

Upon completion of the evaluation, the parent(s) returned within one to three weeks for an assessment feedback session with the licensed neuropsychologist, as per the standard practice of this clinic. In this setting, it is uncommon for children to receive assessment feedback directly from the neuropsychologist. Although the neuropsychologist may offer it as an option, no participating parent(s) and child accepted this option.
Scheduling of the child feedback session

After participating parent(s) completed the standard assessment feedback session with the licensed neuropsychologist, they were contacted and scheduled for their child feedback session.

Creation of individualized fables

Of the seven assessors who agreed to participate in the study by conducting child feedback sessions, two assessors also volunteered to help write fables for the children they tested. These two assessors participated in a training session conducted by the principal investigator and were given instructions and literature to review pertaining to therapeutic assessment and fable writing (see Appendix H). Fables written by these volunteer assessors were reviewed and revised by the principal investigator and by a graduate student volunteer from the Therapeutic Assessment Project. The remaining fables for the study were written by two graduate student volunteers from the Therapeutic Assessment Project who were experienced in writing therapeutic fables, and by the principal investigator. Final approval of all fables was given by Deborah Tharinger, Ph.D., licensed psychologist and director of the Therapeutic Assessment Project.

Experimental group versus control group

The structure and process of the experimental and control group child feedback sessions were identical. The difference between the groups was the timing of the research measures. For the experimental group, research measures were completed immediately after the feedback sessions. For the control group, research measures were completed immediately before the feedback sessions.

Assignment to treatment group

Most participants (exceptions are noted below) were randomly assigned to either an experimental group or a control group for this study. The neuropsychologists, the assessors, and the fable writers were not aware of the treatment group assignment until after each assessment was completed and each fable was written. A coin toss was used for the random assignments (heads = experimental group; tails = control group). On one occasion, after a coin toss had determined a child to be in the experimental group, the assessor scheduled to conduct the child feedback session was detained elsewhere. As the child and parent had already arrived for their scheduled session, the principal investigator made the decision to give the post-assessment research measures first (thus transferring this case into the control group) and then to present the fable to the child and parent without the assessor. On three other occasions involving five subjects, it was not possible to schedule the child feedback sessions at a time or place that the assigned assessor could attend. For each of these cases, a decision was made to assign them to the control group, collect the research measures at the beginning of the child feedback session, and present the fable to the child without the assigned assessor present.

Child Feedback Session

During each child feedback session, the assessor met with the child and parent(s) to share the individualized fable as assessment feedback. The assessors began each meeting by thanking the child and parent(s) for their participation in the study. They then introduced the fable to the child as a story written especially for them, based upon the day they previously spent at the clinic and on what was learned from the testing. Assessment results and scores were not explicitly discussed unless they were directly requested by the parent or child.
Each child was asked to choose who they would like to have read the story. After the fable was read by the person chosen by the child, the assessor solicited from the child or parent(s) any changes desired for the fable. The fable was given to each child to keep, the participants were again thanked for their efforts and participation, and goodbyes were said. Each session lasted between 15 and 45 minutes. The sessions were audiotaped when permission to audiotape was obtained from parents. The assessors were asked to fill out a feedback meeting checklist after each session. (see Appendix I)

Treatment integrity

In order to ensure that the child assessment feedback sessions were competently conducted, all sessions for which audiotape consent was obtained were audiotaped. Three experimental group feedback sessions and three control group feedback sessions were randomly chosen for this integrity check. The principal investigator reviewed the audiotapes for the following criteria: 1) greeting participants; 2) thanking participants for their participation; 3) introduction of the fable; 4) asking the child to choose a reader; 5) soliciting changes to the fable after it has been read; and 6) thanking the participants again for their participation. Results indicated that the assessors competently conducted the feedback sessions.

Data Collection

All quantitative data were collected by the principal investigator. Confidentiality of assessment and research records was considered the highest priority. No identifying information was included on research measures; rather, participants were assigned a numerical code. A summary of data collection is included as Table 1.

Pre-assessment measures. On the morning of the scheduled assessment session, each child participant completed the CPNE-S. Each participating parent completed a brief demographic form and the PPNE-C.

Post-assessment measures. Immediately before or immediately after the child assessment feedback session (depending on treatment group), each child participant completed the CPNE-S and CEAS; each participating parent completed the PPNE-C, PEAS, and the CSQ-8.

Table 1
Summary of Data Collection

Pre-Assessment Measures      Data Collected
CPNE-S                       Children's Positive and Negative Emotions About Themselves
PPNE-C                       Parents' Positive and Negative Emotions About Their Child
Demographic form             Information regarding parent gender and educational attainment

Post-Assessment Measures     Data Collected
CPNE-S                       Children's Positive and Negative Emotions About Themselves
PPNE-C                       Parents' Positive and Negative Emotions About Their Child
CEAS                         Child Experience of Assessment Survey
PEAS                         Parent Experience of Assessment Survey
CSQ-8                        Client Satisfaction Questionnaire

CHAPTER IV

Results

Overview

Complete data were collected from 32 children and 32 parents. For the purposes of this study, the child and parent data were analyzed separately as distinct and unrelated groups. The child data include five subscale scores from the CEAS: Learned New Things, Child-Assessor Relationship, Feelings About the Assessment, Perception of Parent Understanding, and Collaboration. The child data also include pre- and post-assessment measures of positive and negative affect from the CPNE-S. The parent data include six subscale scores from the PEAS: Learned New Things, Positive Assessor-Child Relationship, Negative Feelings About the Assessment, Positive Assessor-Parent Relationship, Collaboration/Informed Consent, and Family Involvement in Child's Problems.
The parent data also include pre- and post-assessment measures of positive and negative affect from the PPNE-C, as well as an overall mean score for the CSQ-8.

The following sections detail the results from the study. First, descriptive statistics describe the sample and the general trends of the data. Main analyses follow, with results given for each proposed hypothesis. The originally proposed hypotheses analyzed three subscales from the CEAS (Hypothesis 1) and three subscales from the PEAS (Hypothesis 2). As an exploratory procedure, independent analyses were also conducted for the two subscales from the CEAS and the three subscales from the PEAS that were not included in the originally proposed hypotheses.

Descriptive Statistics

Descriptive statistics for the child measures are presented in Table 2 and statistics for the parent measures are presented in Table 3. These include means and standard deviations by group and across the total sample for each of the dependent variables. As previously outlined, the CEAS, PEAS, CPNE-S, and PPNE-C instruments follow a 5-point Likert scale (scale midpoint = 3.0). The CSQ-8 uses a 4-point Likert scale (scale midpoint = 2.5).

Table 2
Descriptive Statistics for the Child Dependent Variables

                                 Control (n = 17)    Experimental (n = 15)    Total sample (N = 32)
Measure/Subscale                 M       SD          M       SD               M       SD
CEAS
  Learned New Things             3.05    .70         3.77    .92              3.39    .88
  Child-Assessor Relationship    3.91    .50         4.40    .70              4.14    .64
  Feelings About Assessment      3.43    .85         3.77    1.02             3.59    .93
  Collaboration                  3.23    .61         4.00    .82              3.59    .81
  Parent Understanding           3.30    .59         3.99    .72              3.63    .73
CPNE-S
  Pre- Positive Affect           3.79    .72         3.68    .74              3.74    .72
  Pre- Negative Affect           1.62    .61         1.72    .92              1.66    .76
  Post- Positive Affect          3.94    .89         3.98    .41              3.96    .87
  Post- Negative Affect          1.51    .43         1.63    .83              1.57    .64

Table 3
Descriptive Statistics for the Parent Dependent Variables

                                   Control (n = 17)    Experimental (n = 15)    Total sample (N = 32)
Measure/Subscale                   M       SD          M       SD               M       SD
PEAS
  Learned New Things               3.72    .49         4.01    .43              3.85    .48
  Assessor-Child Relationship      3.72    .55         4.19    .37              3.94    .52
  Negative Feelings                1.75    .43         1.53    .40              1.65    .42
  Assessor-Parent Relationship     4.04    .50         4.29    .43              4.16    .48
  Collaboration/Informed Consent   3.80    .38         4.24    .33              4.01    .42
  Family Involvement               2.95    .56         2.95    .68              2.95    .61
PPNE-C
  Pre- Positive Affect             3.94    .34         4.07    .34              4.00    .34
  Pre- Negative Affect             2.20    .76         2.39    .64              2.28    .71
  Post- Positive Affect            4.18    .43         4.26    .40              4.22    .41
  Post- Negative Affect            2.12    .67         2.09    .62              2.10    .64
CSQ-8                              3.63    .31         3.87    .14              3.74    .27

Preliminary Analyses

A priori analysis of power

An a priori power analysis was run using G*POWER 3.0 (Faul, Erdfelder, Lang, & Buchner, 2007) to determine the sample size necessary to detect significant effects if they exist. For Hypotheses 1 and 2, a search of the available literature exploring similar constructs turned up few reports of effect sizes, the most relevant being Allen and colleagues (2003). These authors reported eta squared values of η² = 0.66 for variables measuring rapport with the examiner and η² = 0.50 for measures of self-enhancement. These would be considered very large effects using common eta squared guidelines of small = .01, medium = .10, and large = .25 (Keith, 2006). G*POWER 3.0 uses the Pillai-Bartlett V criterion as a multivariate test statistic, with f² as the corresponding effect size statistic. Cohen (1988) defines conventional values of effect size f² as small = .02, medium = .15, and large = .35.
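To make the logic of this kind of a priori calculation concrete, the sketch below approximates, in Python, the computation G*POWER performs for a two-group global-effects MANOVA with three dependent variables. It is only an approximation under stated assumptions (the noncentrality parameter is taken as λ = f² x N, and the multivariate criterion is expressed in the exact F form it takes with two groups); the function name and inputs are illustrative, the sample size of 36 evaluated at the end anticipates the figure reported in the next paragraph, and the authoritative values remain those produced by G*POWER itself rather than by this sketch.

```python
from scipy.stats import f as f_dist, ncf

def approx_power_two_group_manova(n_total: int, n_dvs: int = 3,
                                  f2: float = 0.35, alpha: float = 0.05) -> float:
    """Approximate power of the global test in a two-group MANOVA.

    With two groups the Pillai/Hotelling criterion is an exact F with
    df1 = number of DVs and df2 = N - number of DVs - 1; the noncentrality
    parameter is approximated here as lambda = f^2 * N.
    """
    df1 = n_dvs
    df2 = n_total - n_dvs - 1
    noncentrality = f2 * n_total
    f_crit = f_dist.ppf(1 - alpha, df1, df2)        # critical value under the null
    return ncf.sf(f_crit, df1, df2, noncentrality)  # P(reject | alternative), i.e., power

# Power at the sample size reported below (N = 36, large effect f^2 = .35, alpha = .05).
print(round(approx_power_two_group_manova(36), 2))  # expected to land near the .80 target
```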
As such, an a priori power analysis for the global effects MANOVA for Hypotheses 1 and 2 was conducted using the theoretically expected large effect size of .35. A sample size of 36 was determined adequate given a significance level (alpha) of α = .05 and 80% power.

Missing data, outliers, and assumptions

All data analyses except the power analysis were run using PASW 17.0 (SPSS, 2009). Before conducting the main data analyses, the child and parent continuous, ungrouped dependent variables were examined to determine their suitability for use in univariate and multivariate analyses. There were no missing data points among any of the dependent variables. Tukey's boxplots were used to visually analyze the distribution range of each dependent variable in order to detect outliers, that is, data points with extreme or atypical values relative to the rest of the distribution. Outliers were detected among several dependent variables; however, the detected outliers were not from the same or related sources (respondents), nor were they consistent across particular measures. The outliers were also not found to be related to scoring or data-entry errors. Although the safest course of action for handling outliers in large datasets is generally deletion, a more realistic approach with smaller samples is to determine whether or not the data in question represent the target sample (Meyers, Gamst, & Guarino, 2006). In the current study it seems safe to assume that the data points in question likely represent the target sample, and, especially given the relatively small sample size of this study, the decision to delete all outliers was rejected. Moreover, univariate analyses run separately on each dependent variable with all outliers deleted did not change the statistical significance of the findings. For one proposed hypothesis, however, a single outlier affected the statistical significance of the overall MANOVA. An exception to the decision not to delete outliers was made for this single case, as outlined below under the heading Main Analyses: Hypothesis 2.

Assumptions for normality were met, as each dependent variable displayed acceptable skewness and kurtosis. Slight deviations from ideal kurtosis were found and determined to be due to the outliers discussed above. Assumptions for linearity and homoscedasticity for each dependent variable were also met. Additionally, a preliminary analysis of homogeneity of error variance using Levene's test for the subscales of the CEAS and PEAS was not significant (p > .05), indicating that the control and experimental groups did not differ significantly in error variance on any dependent variable.

Equality of groups

Random assignment to groups resulted in 17 children and 17 parents in the control group and 15 children and 15 parents in the experimental group. Pearson's chi-square tests were conducted to analyze pre-treatment equality of groups on the variables child age, child gender, and assigned assessor with a significance level (alpha) of α = .05. These tests were not significant; the experimental and control groups did not significantly differ on child age, χ²(7, N = 32) = 9.36, p > .05, child gender, χ²(1, N = 32) = 0.38, p > .05, or assessor, χ²(6, N = 32) = 4.23, p > .05. In addition, four independent-samples t-tests were conducted to determine pre-treatment equality of groups on the pre-assessment dependent variables of the CPNE-S and PPNE-C.
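The screening steps described above (Tukey's boxplots for outliers, Levene's test for homogeneity of error variance, and chi-square tests of pre-treatment group equivalence) were run in PASW/SPSS. Purely as an illustration, and before the t-test results reported next, the following sketch shows how equivalent checks could be carried out in Python with scipy; the arrays collab, group, and gender are invented stand-ins for the study's variables, not the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical stand-ins: one dependent variable and group labels (17 control, 15 experimental).
collab = rng.normal(3.6, 0.8, size=32)      # e.g., CEAS Collaboration scores
group = np.array([0] * 17 + [1] * 15)       # 0 = control, 1 = experimental

# 1) Tukey's fences: flag values more than 1.5 * IQR beyond the quartiles.
q1, q3 = np.percentile(collab, [25, 75])
iqr = q3 - q1
outliers = collab[(collab < q1 - 1.5 * iqr) | (collab > q3 + 1.5 * iqr)]

# 2) Levene's test for homogeneity of variance across the two groups.
lev_stat, lev_p = stats.levene(collab[group == 0], collab[group == 1])

# 3) Chi-square test of independence, e.g., treatment group by child gender.
gender = rng.integers(0, 2, size=32)
observed = np.array([[np.sum((group == g) & (gender == s)) for s in (0, 1)]
                     for g in (0, 1)])
chi2, chi_p, dof, expected = stats.chi2_contingency(observed)

print(len(outliers), round(lev_p, 2), round(chi_p, 2))
```

An independent-samples t-test of the pre-assessment affect scores would follow the same pattern using stats.ttest_ind.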
Data from the CPNE-S and PPNE-C were collected from each child and parent before the child's assessment and again afterward for use in a repeated-measures analysis. T-tests of pre-assessment means for these measures determined that the control and experimental groups were not significantly different on pre-assessment positive and negative affect, as shown in Table 4.

Table 4
Pre-Assessment Equality of Treatment Groups for the CPNE-S and PPNE-C

Variable                          t(30)     Sig.
CPNE-S Pre- Positive Affect         .35      .73
CPNE-S Pre- Negative Affect        -.37      .72
PPNE-C Pre- Positive Affect       -1.05      .30
PPNE-C Pre- Negative Affect        -.75      .46

Correlations of dependent variables

Pairwise correlations among the subscales of the CEAS, and among the subscales of the PEAS, were computed as a preliminary analysis for the MANOVA computations. By creating artificial dependent variables that are linear combinations of the measured variables and can be analyzed simultaneously, MANOVA reduces the chance of Type I error that occurs from multiple, independent statistical tests (French, Poulsen, & Yu, n.d.). The use of MANOVA is advantageous over multiple, independent one-way analysis of variance (ANOVA) tests when variables are predicted to correlate moderately (Weinfurt, 1995). However, suggested acceptable correlations between dependent variables run simultaneously via MANOVA vary by source: from .3 to .5 according to Tabachnick and Fidell (1996), from approximately .21 to .36 according to Weinfurt (1995), and from "correlated" to "independent" according to Statistics Solutions (2009). Nevertheless, these authors and others (Meyers et al., 2006; Foster, Barkus, & Yavorsky, 2006) make the case that even variables with low or high correlations are viable candidates for inclusion in MANOVA designs if the variables theoretically belong together and are not identical. Foster and colleagues (2006) suggest that subscales from the same measure are suitable to be analyzed together in a MANOVA design because they are theoretically related and because doing so reduces the Type I error that would accrue from multiple tests across the same measure. The correlations calculated among the subscales of the CEAS and of the PEAS were therefore determined to be acceptable and sufficient for inclusion in a MANOVA design. Table 5 contains the correlation matrix for the five subscales of the CEAS and Table 6 contains the correlation matrix for the six subscales of the PEAS.

Table 5
Pearson's r Correlation Coefficients for the CEAS Subscales

                                  1.      2.       3.       4.       5.
1. Learned New Things             1       .77**    .78**    .70**    .80**
2. Child-Assessor Relationship            1        .82**    .58**    .68**
3. Feelings About Assessment                       1        .65**    .70**
4. Collaboration                                            1        .69**
5. Parent Understanding                                               1

** Correlation is significant at the 0.01 level (2-tailed).

Table 6
Pearson's r Correlation Coefficients for the PEAS Subscales

                                   1.     2.      3.      4.       5.       6.
1. Learned New Things              1      .42*    -.27    .40*     .59**    .12
2. Assessor-Child Relationship            1       -.31    .71**    .79**    -.16
3. Negative Feelings                              1       -.45**   -.48**   .38*
4. Assessor-Parent Relationship                           1        .85**    -.22
5. Collaboration/Informed Consent                                  1        -.06
6. Family Involvement                                                        1

* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).

Main Analyses

Hypothesis 1

A. Children who receive assessment feedback in the form of individualized fables will report that they learned new things about themselves through the assessment at a greater level than will children who do not receive this feedback.
B. Children who receive assessment feedback in the form of individualized fables will report a more positive relationship with the assessor than will children who do not receive this feedback.

C. Children who receive assessment feedback in the form of individualized fables will report more positive feelings about the assessment compared to children who do not receive this feedback.

Analysis

Hypothesis 1 was tested using a Hotelling's T², or two-group multivariate analysis of variance (MANOVA), with three subscales of the CEAS (Learned New Things, Child-Assessor Relationship, and Feelings About Assessment) as the composite dependent variable. Treatment group (control vs. experimental) was the independent variable and the alpha level was set at α = .05. Using the Hotelling's T² criterion, the composite dependent variable was significantly affected by treatment group, F(3, 28) = 3.99, p = .017, partial η² = .30. Follow-up univariate ANOVAs were conducted on each dependent variable in order to determine the source of the significant global effect. Univariate between-subjects tests on the dependent variables showed a significant effect by group for the variable Learned New Things, F(1, 30) = 6.25, p = .018, partial η² = .17. The variable Child-Assessor Relationship was also significantly affected by group, F(1, 30) = 5.26, p = .029, partial η² = .15. The univariate effect on Feelings About Assessment was not significant, F(1, 30) = 1.03, p = .32, partial η² = .033.

Exploratory Analyses

Hypothesis 1 analyzed the effects by treatment group for three subscales of the CEAS. Because data were collected for all five of the CEAS subscales, an exploration of the effects for the remaining two subscales, Collaboration and Parent Understanding, is warranted. As such, separate univariate tests (independent-samples t-tests) were conducted on the dependent variables Collaboration and Parent Understanding, with treatment group as the independent variable. Collaboration was significantly affected by group, t(30) = -3.04, p = .005. Parent Understanding was also significantly affected by group, t(30) = -2.96, p = .006.

Data collected from the CEAS support the conclusion that children in the experimental group reported that they learned more about themselves, had a more positive relationship with their assessor, perceived that their parents learned more about them, and perceived greater collaboration with the assessment process than did children in the control group. There was no effect by group on the children's reported feelings about their assessment.

Hypothesis 2

A. Parents whose children receive assessment feedback in the form of individualized fables will report that they learned new things about their child at a greater level than will parents whose children do not receive this feedback.

B. Parents whose children receive assessment feedback in the form of individualized fables will report that their child had a more positive relationship with the assessor than will parents whose children do not receive this feedback.

C. Parents whose children receive assessment feedback in the form of individualized fables will report that they had fewer negative feelings about the assessment than will parents whose children do not receive this feedback.
Analysis

Hypothesis 2 was tested using a Hotelling's T², or two-group multivariate analysis of variance (MANOVA), with three subscales of the PEAS (Learned New Things, Assessor-Child Relationship, and Negative Feelings About Assessment) as the composite dependent variable. Treatment group (control vs. experimental) was the independent variable and the alpha level was set at α = .05. Using the Hotelling's T² criterion, the composite dependent variable was not found to be significantly affected by group, F(3, 28) = 2.91, p = .052. However, preliminary analyses of the data distribution for these three dependent variables found one extreme outlier on the variable Positive Assessor-Child Relationship, as shown in Figure 1.

Figure 1. Boxplot of the data distribution for Positive Assessor-Child Relationship.

This outlier was considered to be unusual, even in comparison to other identified outliers. The parent for this case (case number 001, control group) relayed to the principal investigator a specific conflict between the child and assessor that was not typical of this setting, nor was it replicated in any of the subsequent 12 cases in which this assessor was involved. As such, a case could be made for the deletion of this single outlier on the dependent variable Positive Assessor-Child Relationship because it does not appear to represent the target sample. With this case deleted, the MANOVA was re-run and, using the Hotelling's T² criterion, the composite dependent variable was found to be significantly affected by group, F(3, 27) = 3.19, p = .040. Follow-up univariate ANOVAs were subsequently conducted on each dependent variable in order to determine the source of the significant global effect. Univariate tests on the variables independently showed a significant effect by group for the variable Assessor-Child Relationship, F(1, 30) = 8.89, p = .006, partial η² = .22. This result held when the outlier was excluded as well, F(1, 29) = 8.40, p = .007, partial η² = .24. The variable Learned New Things was not significantly affected by group, F(1, 30) = 2.28, p = .142, partial η² = .073. The univariate effect by group on Negative Feelings About Assessment was also not significant, F(1, 30) = 2.43, p = .129, partial η² = .077.

Exploratory Analyses

Hypothesis 2 analyzed the effects by treatment group for three subscales of the PEAS. Because data were collected for all six of the PEAS subscales, an exploration of the effects for the remaining three subscales, Positive Assessor-Parent Relationship, Collaboration/Informed Consent, and Family Involvement in Child's Problems, seems a reasonable next step. As such, separate univariate tests (independent-samples t-tests) were conducted on the dependent variables Positive Assessor-Parent Relationship, Collaboration/Informed Consent, and Family Involvement in Child's Problems, with treatment group as the independent variable. Collaboration/Informed Consent was significantly affected by group, t(30) = -3.45, p = .002, d = 1.05. Neither Positive Assessor-Parent Relationship, t(30) = -1.47, p = .151, d = .52, nor Family Involvement in Child's Problems, t(30) = .005, p = .996, d = 0, was significantly affected by group.
Data collected from the PEAS support the conclusion that parents in the experimental group reported that their child had a more positive relationship with their assessor and perceived greater collaboration with the assessment process than did parents in the control group. Parents did not differ by group on their perceptions of having learned new things about their child, their feelings about the assessment, the assessor-parent relationship, or their family's involvement in their child's problems.

Hypothesis 3

A. Children who receive assessment feedback in the form of individualized fables will report an increase in positive emotions related to their challenges and future at post-assessment when compared with pre-assessment values.

B. It is further hypothesized that the magnitude of this change in positive emotions between pre- and post-assessment for children receiving assessment feedback in the form of individualized fables will be greater than that of children who do not receive this feedback.

Analysis

Hypothesis 3 was analyzed using a 2 (Group: control vs. experimental) x 2 (Time: pre-assessment vs. post-assessment) repeated-measures analysis of variance (ANOVA). The Group x Time interaction effect was not significant, F(1, 30) = .27, p = .61. The effect for Time was also nonsignificant across the sample, F(1, 30) = 2.42, p = .13. These results indicate that children's positive affect was not statistically different across the sample over time, nor was it different by group over time.

Hypothesis 4

A. Children who receive assessment feedback in the form of individualized fables will report a decrease in negative affect, as evidenced by improved affect scores, related to their challenges and future at post-assessment when compared with pre-assessment values.

B. It is further hypothesized that the magnitude of this change in negative affect between pre- and post-assessment for children receiving assessment feedback in the form of individualized fables will be greater than that of children who do not receive this feedback.

Analysis

Hypothesis 4 was analyzed using a 2 (Group: control vs. experimental) x 2 (Time: pre-assessment vs. post-assessment) repeated-measures analysis of variance (ANOVA). The Group x Time interaction effect was not significant, F(1, 30) = .002, p = .97. The effect for Time was also nonsignificant across the sample, F(1, 30) = .48, p = .49. These results indicate that children's negative affect was not statistically different across the sample over time, nor was it different by group over time.

Hypothesis 5

A. Parents whose children receive assessment feedback in the form of individualized fables will report an increase in positive emotions related to their child's challenges and future at post-assessment when compared with pre-assessment values.

B. It is further hypothesized that the magnitude of this change in positive emotions between pre- and post-assessment for parents of children receiving assessment feedback in the form of individualized fables will be greater than that for parents of children who do not receive this feedback.

Analysis

Hypothesis 5 was analyzed using a 2 (Group: control vs. experimental) x 2 (Time: pre-assessment vs. post-assessment) repeated-measures analysis of variance (ANOVA). The Group x Time interaction effect was not significant, F(1, 30) = .18, p = .68.
Although there was a significant effect for Time across the sample, F(1, 30) = 13.87, p = .001, the magnitude of change in affect between pre-assessment and post-assessment was not statistically different between groups. These results indicate that parents' positive affect was significantly higher after their child's assessment than it was before their child's assessment, regardless of whether they were in the control or experimental group.

Hypothesis 6

A. Parents of children who receive assessment feedback in the form of individualized fables will report a decrease in negative emotions, as evidenced by improved affect scores, related to their child's challenges and future at post-assessment when compared with pre-assessment values.

B. It is further hypothesized that the magnitude of this change in negative affect between pre- and post-assessment for parents of children receiving assessment feedback in the form of individualized fables will be greater than that for parents of children who do not receive this feedback.

Analysis

Hypothesis 6 was analyzed using a 2 (Group: control vs. experimental) x 2 (Time: pre-assessment vs. post-assessment) repeated-measures analysis of variance (ANOVA). The Group x Time interaction effect was not significant, F(1, 30) = 1.34, p = .26. The effect for Time was also nonsignificant across the sample, F(1, 30) = 3.97, p = .055. These results indicate that parents' negative affect was not statistically different across the sample over time, nor was it different by group over time.

Hypothesis 7

Parents of children who receive assessment feedback in the form of an individualized fable will report a higher level of satisfaction with services than parents of children who do not receive this feedback.

Analysis

An independent-samples t-test was conducted to test for an overall treatment satisfaction mean difference between the control and experimental groups using an α-level of .05. Overall client satisfaction was found to be significantly higher in the experimental group than in the control group, t(30) = -2.78, p = .009, d = .89.
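The two-group MANOVAs (Hotelling's T²), follow-up univariate comparisons, and Cohen's d effect sizes reported above were computed in PASW/SPSS. As a transparency aid, the sketch below shows how the same quantities can be obtained from first principles in Python; the simulated control and experimental matrices are hypothetical stand-ins for three CEAS subscales, so this illustrates the general technique rather than reproducing the study's output.

```python
import numpy as np
from scipy.stats import f as f_dist, ttest_ind

def hotelling_t2(x1: np.ndarray, x2: np.ndarray):
    """Two-sample Hotelling's T^2 with its exact F conversion.

    x1, x2: (n x p) matrices of scores on p dependent variables for each group.
    """
    n1, p = x1.shape
    n2 = x2.shape[0]
    diff = x1.mean(axis=0) - x2.mean(axis=0)
    # Pooled covariance matrix across the two groups.
    s_pooled = ((n1 - 1) * np.cov(x1, rowvar=False) +
                (n2 - 1) * np.cov(x2, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    # Convert T^2 to an F statistic with df = (p, n1 + n2 - p - 1).
    df1, df2 = p, n1 + n2 - p - 1
    f_stat = df2 / (p * (n1 + n2 - 2)) * t2
    return f_stat, df1, df2, f_dist.sf(f_stat, df1, df2)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    return (a.mean() - b.mean()) / sp

# Hypothetical data: 17 control and 15 experimental children on three subscales.
rng = np.random.default_rng(2)
control = rng.normal([3.0, 3.9, 3.4], 0.7, size=(17, 3))
experimental = rng.normal([3.8, 4.4, 3.8], 0.7, size=(15, 3))

f_stat, df1, df2, p_val = hotelling_t2(control, experimental)   # global multivariate test
t_stat, t_p = ttest_ind(control[:, 0], experimental[:, 0])      # follow-up on one subscale
print(round(f_stat, 2), (df1, df2), round(p_val, 3), round(t_p, 3),
      round(cohens_d(control[:, 0], experimental[:, 0]), 2))
```

With two groups, the univariate follow-ups are equivalent to independent-samples t-tests (F = t²), which is why t statistics could be recovered from the SPSS-provided F values in the summary tables that follow.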
Table 7
Summary of Significance Testing for Child Variables

                                 Control (n = 17)   Experimental (n = 15)   Significance Testing
Measure/Subscale                 M       SD         M       SD              t          p        d
CEAS
  Learned New Things             3.05    .70        3.77    .92             -2.50 a    .018 *   .82 b
  Child-Assessor Relationship    3.91    .50        4.40    .70             -2.29 a    .029 *   .77 b
  Feelings About Assessment      3.43    .85        3.77    1.02            -1.02 a    .32      .37
  Collaboration                  3.23    .61        4.00    .82             -3.04      .005 *   .95
  Parent Understanding           3.30    .59        3.99    .72             -2.96      .006 *   .95
CPNE-S                           Mpost - Mpre       Mpost - Mpre
  Positive Affect                0.15               0.30                     .52       .61      .19
  Negative Affect                -0.11              -0.09                    .045      .97      .03

* Significant at α = .05
a For consistency, t statistics were computed from SPSS-provided F values.
b For consistency, Cohen's d effect sizes are reported for all subscales.

Table 8
Summary of Significance Testing for Parent Variables

                                  Control (n = 17)   Experimental (n = 15)   Significance Testing
Measure/Subscale                  M       SD         M       SD              t          p        d
PEAS
  Learned New Things              3.72    .49        4.01    .43             -1.79 a    .084     .60
  Assessor-Child Relationship     3.72    .55        4.19    .37             -2.80 a    .009 *   .90 b
  Negative Feelings               1.75    .43        1.53    .40              1.54 a    .13      .53
  Assessor-Parent Relationship    4.04    .50        4.29    .43             -1.47      .15      .52
  Collaboration/Informed Consent  3.80    .38        4.24    .33             -3.45      .002 *   1.05 b
  Family Involvement              2.95    .56        2.95    .68              .005      .97      0
PPNE-C                            Mpost - Mpre       Mpost - Mpre
  Positive Affect                 .24                .19                      .42 a     .68      .13
  Negative Affect                 -.08               -.30                     1.16 a    .26      .32
CSQ-8                             3.63    .31        3.87    .14             -2.78      .009 *   .89

* Significant at α = .05
a For consistency, t statistics were computed from SPSS-provided F values.
b For consistency, Cohen's d effect sizes are reported for all subscales.

CHAPTER V

Discussion

The purpose of this research study was to investigate whether providing children with individualized feedback affects how they and their parents report experiencing a neuropsychological assessment. Research has not addressed this topic until now. In the field of collaborative/therapeutic assessment, providing relevant feedback to all clients, whether children or adults, is considered not only an ethical obligation but also an avenue toward improving clients' lives (Finn, 2007; Fischer, 1985). With this in mind, the current study sought to answer the broader question of whether children who undergo a neuropsychological assessment will benefit in their lives from receiving developmentally sensitive and creative feedback about their performance. Specific to the current study, data collected from a control and an experimental group were analyzed to test proposed and exploratory hypotheses comparing variables that may impact children's and parents' experience of the assessment: new information learned about the child; relationship with the assessor; affect regarding the assessment; children's perceptions that their parents learned new information about them; parents' perceptions that their family affects their child's functioning; affect regarding the child's challenges and future; and parental satisfaction with services. Although comparisons of the control and experimental groups' experiences of the assessment yielded mixed results, all significant differences that were found favored the experimental group, as hypothesized. A summary of the results follows, including discussion of the statistical versus clinical, or practical, significance of the findings. Lastly, limitations of the study and implications for practice and future research are considered.

Method

The sample in this study was a convenience sample of children who had been referred for a neuropsychological assessment to a private outpatient clinic.
Of the 47 sets of parents with children in the target age range who were informed of the study, 46 agreed to participate. This level of cooperation suggests that the idea of providing individualized feedback to children after an assessment is appealing to most parents. Of the original 46 who began the study, 32 returned to complete the study and participate in the child feedback sessions. There were no discernible differences between participants who completed the study and participants who dropped out. That is, the participants who did not complete the study did not differ from those who did in regard to child age, ethnicity, parental education, diagnosis, neuropsychologist, or assessor. Qualitatively, the main reasons parents gave for declining to return for the child feedback session were scheduling and time issues. This trend is also evident in the timing of the cases, in that a majority of the participants who dropped out were assessed near holiday (winter and summer) breaks, which often represent busier times for school-aged families. One mother did state that, as a result of the assessment, she was unsure about wanting her child to receive feedback about the results. Although data were not collected regarding household size, marital status of the parent, and parental work schedule, these may have been factors involved in study completion.

The final sample size of 32 is smaller than that suggested by the a priori power analysis, yet the originally estimated sample size of 36 was calculated using the only effect size found in the existing literature that approached relevance for this study (Allen et al., 2003). The final sample proved adequate for discerning significant differences between groups in this population on a number of variables, suggesting that a sample size of 32 provided adequate power for this study.

The clientele generally referred to this clinic, and consequently the sample for this study, are fairly homogeneous with respect to parental educational attainment (college and beyond), ethnicity (Caucasian), and referral concerns (attentional and academic problems for the child). A homogeneous sample such as this is beneficial with respect to finding differences between groups, as there is decreased within-group variance. However, the trade-off is poorer generalizability to more diverse populations.

The setting for this study was beneficial to the study's design in the sense that the child neuropsychological assessment "practice as usual" that this study attempted to contrast with occurs in the participating clinic on a daily basis. The addition of a single component of collaborative/therapeutic assessment to each participant's standardized assessment did not seem disruptive to the clinic's daily operations.

Overview of Results

Analyses of the Child Experience of Assessment Survey (CEAS) yielded positive results for the experimental group as compared to the control group on two of the three subscales originally included in the proposal, and on both subscales included in the exploratory analyses. Analyses of the Parent Experience of Assessment Survey (PEAS) yielded positive results for the experimental group compared to the control group on one of the three originally proposed hypotheses, and on one of the three subscales included in the exploratory analyses.
Analyses of the CPNE-S and PPNE-C affect scales did not discern differences between groups, although the Positive Affect scale for the parents significantly increased over time for the sample as a whole. Finally, analysis of the CSQ-8 showed greater satisfaction with the assessment process as a whole among the experimental group of parents compared with the control group of parents.

Children who received fables as individualized assessment feedback reported having a greater sense of new self-understanding compared to children in the control group, as measured by the child dependent variable Learned New Things from the CEAS. As fables are designed to present new information to the child in an understandable format, this finding was anticipated. Still, these results suggest that the fables created met the criterion of meeting the children at their level, to the extent that the children were able to absorb new information about themselves and to report that they were aware they had done so. This finding supports previous reports that fables help children incorporate new and complicated information in a non-threatening way (Mutchnick & Handler, 2002; Tharinger et al., 2007). If one of the goals of psychological assessment is for clients to gain greater self-understanding of their strengths and weaknesses and of their ability to solve their problems, as it is in Therapeutic Assessment (Finn, 2007), then providing individualized feedback in the form of a fable seems to have met this goal for the children in this study.

Parents in the experimental group were not statistically different from those in the control group on the variable Learned New Things. However, a medium effect size (partial η² = .073) suggests that the groups may nonetheless differ. Although a larger sample size may have been necessary to detect statistically significant differences for this variable, this medium effect size does suggest clinical significance.

Children in the experimental group reported significantly higher levels of rapport with their assessor, as measured by the variable Child-Assessor Relationship, than did those in the control group. Additionally, parents in the experimental group reported a more positive Assessor-Child Relationship than did those in the control group. These results imply that children who received individualized feedback felt a closer bond with the assessor, felt that they were regarded more highly by their assessor, and held more positive regard for their assessor, and that their parents also perceived these differences. These variables are theoretically related to the construct of therapeutic alliance, which has implications for early engagement in therapy, positive therapeutic outcomes, and adherence to recommendations for treatment (Ackerman et al., 2000; Finn, 2007; Finn & Tonsager, 1997). As such, a more positively perceived relationship between client and assessor may lead to beneficial future outcomes for the client. It is important to note that it is possible that these findings are related to the additional time that children in the experimental group spent with their assessor, as opposed to being a product of the actual assessment feedback and fable that the child received. A limitation of this study is that the time spent together by the child, parent, and assessor was not equalized across the two groups.
It may therefore not be possible to determine whether the significant findings for the variable Child-Assessor Relationship from the CEAS and for the variable Assessor-Child Relationship from the PEAS were simply the result of the child and parent receiving more attention from the assessor. Still, although research has been limited, there is evidence that the beneficial effects experienced by adult clients after feedback are not a function of examiner attention (Finn & Tonsager, 1992; Newman & Greenway, 1997).

Children in the experimental group did not report feelings about the assessment process that were significantly different from those of children in the control group, as measured by the Feelings About Assessment subscale of the CEAS. These findings are contrary to what was predicted in the proposed hypotheses for this study, as the assessment process was expected to seem more meaningful and fun (and less boring) to children after they received a personalized story about their experience. Results across the sample for this subscale indicate that children undergoing neuropsychological assessment under both conditions report neutral to fairly positive feelings (M = 3.59, SD = .93 on a 5-point scale) about their assessment. Although significant group differences were not found for this subscale of the CEAS, a closer look at the individual items that make up this scale did reveal a more positive trend on some items for the experimental group. There were no discernible differences between the groups for the items "I felt the assessment was boring," "I felt the assessment was fun," and "I'm proud of myself for doing the assessment." However, for the item "I felt the assessment was a waste of time," 81% of the experimental group responded with either "Not true" or "Really, really not true," while 49% of the control group endorsed one of these responses. Similarly, for the item "I'm glad I did the assessment," 80% of the experimental group responded with either "True" or "Really, really true," while 62% of the control group endorsed one of these responses (see Table 9). Thus, while the subscale did not show statistically significant differences between groups, the finding that children in the experimental group responded more positively to being glad that they participated in the assessment may be clinically significant. Additionally, it may be clinically meaningful that children in the experimental group were less likely than children in the control group to feel that the assessment was a waste of time. These findings suggest that children who receive individualized feedback may think of an assessment as being more worthwhile than those who do not receive feedback, which in turn could affect their engagement in and attitude toward future assessment experiences.

Table 9
Items from the CEAS Subscale: Feelings About the Assessment

Item                                             % of children responding "Not true" or "Really, really not true"
"I felt the assessment was a waste of time."     Control: 49%     Experimental: 81%

Item                                             % of children responding "True" or "Really, really true"
"I'm glad I did the assessment."                 Control: 62%     Experimental: 80%

While looking at these individual items may suggest that the groups have some differential feelings about some aspects of the assessment, it is important to note that these are merely descriptive trends and do not imply real differences between the groups.
Means and standard deviations for items on the CEAS are included as Appendix L.

Parents in the experimental group did not report negative feelings about the assessment process that were significantly different from those of parents in the control group, as measured by the Negative Feelings About the Assessment subscale of the PEAS. Statistical results across the sample indicate that there was no difference between groups on this subscale, and that neither group experienced strong negative feelings about the assessment (M = 1.65, SD = .42 across the sample on a 5-point scale, where "1" equates to strongly disagreeing with a negative statement and "5" equates to strongly agreeing with a negative statement). A closer look at individual items on this scale reveals that a large percentage of parents in both groups disagreed with all of these negatively worded items. In fact, over 85% of all parents responded with either "Strongly disagree" or "Disagree" to eight out of the ten items that make up the Negative Feelings subscale. This indicates that, in general, parents did not have strong negative affective reactions to the assessment process, whether they were in the control or experimental group. Of the ten items on this scale, the two items in which there was some spread among the response options were "I was anxious throughout the assessment" and "The assessment was overwhelming." On the first of these items, "I was anxious throughout the assessment," the spread was similar for both the control and the experimental groups. On the other item that contained some response spread, "The assessment was overwhelming," there was a noticeable difference between groups. For this item, 87% of parents in the experimental group responded with either "Strongly disagree" or "Disagree" and the remaining 13% responded with "Neutral." In contrast, parents in the control group responded to the item fairly evenly across response options: 23% responded "Strongly disagree," 18% responded "Disagree," 23% responded "Neutral," 18% responded "Agree," and 18% responded "Strongly agree." Thus, it does appear as if parents whose children received individualized feedback felt less overwhelmed by the assessment process than did parents in the control group, and this may be of clinical importance. Parents who feel less overwhelmed by the experience of an assessment may be able to be more engaged in the process and more understanding and accepting of the results, especially if the results are difficult or unexpected. Still, although looking at items individually may suggest that the groups have some differential feelings about some aspects of the assessment, these are merely descriptive trends and do not imply statistical differences between the groups. Means and standard deviations for items on the PEAS are included as Appendix M.

Children in the experimental group of this study reported a greater sense of collaboration in the assessment process than did those in the control group, as measured by the Collaboration subscale of the CEAS. In addition, parents in the experimental group reported a greater sense of collaboration in the assessment process than did those in the control group, as measured by the Collaboration/Informed Consent subscale of the PEAS.
Neither of these scales was originally predicted to show differences between the treatment groups in the proposed hypotheses of this study, the thought being that collaboration in assessment encompasses the entire assessment process, and that the addition of a child feedback session would not have an effect on the general sense of a collaborative experience. Collaboration is the cornerstone of a collaborative/therapeutic approach to providing psychological assessment. Within Finn's (2007) semi-structured model of Therapeutic Assessment, collaboration with the client begins with the initial meeting and generation of assessment questions and continues throughout every step, from deciding the order of the assessment instruments to checking the "fit" of results with the clients' perceptions of themselves. Although this study did not employ any specific collaborative techniques, with the exception of those that occur in the process of the child feedback session, results indicate that the experimental group participants perceived the entire assessment process as more collaborative than did the control group participants.

A close look at the Collaboration subscale of the CEAS reveals that two of its six items address the idea of collaboration occurring during the feedback session: "(The assessor) helped me understand things about myself from the tests" and "(The assessor) helped me understand the results of the testing." The remaining items seem to address feelings of collaboration experienced during the actual assessment. Similarly, the Collaboration/Informed Consent subscale of the PEAS has three of ten items that could seemingly be affected by the child feedback session specifically: "The assessor helped me explain the assessment to my child;" "I felt like part of a team working to help my child;" and "The assessor asked me if the assessment findings seemed right to me." It is important to note, however, that these suppositions are based on subjective face validity and not on empirically or theoretically derived hypotheses. The CEAS is an exploratory measure that has never been used previously; the PEAS is also fairly exploratory, and its use has been limited to a small number of collaborative/therapeutic assessments. The reliability and validity of both measures, including the Collaboration subscale from the CEAS and the Collaboration/Informed Consent subscale from the PEAS, will need to be established in future research.

The Collaboration subscales for both the children and parents in this study were significantly higher in the experimental group and had very large effect sizes (Child: d = .95; Parent: d = 1.05) according to established criteria (Cohen, 1988). This suggests that participation in the individualized child feedback session may have led children and parents to feel the entire assessment process was more collaborative even if the assessment itself was not more collaborative. This interesting finding is ripe for future exploration.

Finally, the parent variables Positive Assessor-Parent Relationship and Family Involvement in Child's Problems were found to be statistically indistinguishable between the control and experimental groups. These two subscales from the PEAS were not expected to be affected by the intervention in this study. As the child feedback sessions were child-focused and not parent-focused, the direct relationship between assessor and parent was not addressed by the intervention.
Nor is this relationship addressed at the participating clinic during traditional neuropsychological assessments. Similarly, traditional child neuropsychological assessments by nature focus on the problems of the child, as opposed to Therapeutic Assessments, which purposefully bring into focus the systemic influences on the child's behavior and the role of the system in finding and implementing solutions. Thus, the Family Involvement subscale was not expected to be different between groups in this study, and it was not, confirming the anticipated outcome.

Analyses of the Children's Positive and Negative Emotions About Themselves measure (CPNE-S) did not reveal significant differences between the groups or across time for the sample as a whole. A closer look at the means of the subscales indicates slight improvement for both positive and negative emotions from pre-test to post-test for both groups, although this change is not statistically evident. The sample as a whole presented at pre-test as neutral to fairly positive (M = 3.74, SD = .72 on a 5-point scale) and not very negative (M = 1.66, SD = .76 on a 5-point scale), so perhaps the range of the responses from these participants was not sufficient to capture change. Moreover, the age range of children in the sample covers a large developmental span. It may be that this measure is not suitable for younger children, who may not have the cognitive capacity to reflect on their emotions as pertaining to their challenges and future. The younger children in the sample may be considered to be either in the pre-operational stage or the concrete operations stage of cognitive development (Piaget, 1977). As the transition between these stages may occur between ages 7 and 11, many children in this study may not yet have developed the cognitive capacities that come with the concrete operations stage. Children in the pre-operational stage are present-oriented and have difficulty with the concepts of time and future (Piaget, 1977). Thus, many children in this sample may have responded to the questions presented on the CPNE-S with their feelings about themselves right now, as opposed to their feelings about their future. Qualitatively, several parents who attended child feedback sessions near the end of the data collection stage of the study, which coincided with the end of the school year, commented that their children may have been responding to the feelings items on the CPNE-S based on their excitement about school being out.

Analyses of the Parents' Positive and Negative Emotions About Their Child measure (PPNE-C) also did not reveal significant differences between the control and experimental groups. However, there was a significant increase in positive emotions among parents in the sample as a whole between pre-test and post-test. While this finding was not related to the current study's intervention, it is encouraging to find that parents feel more hopeful, compassionate, patient, and encouraged about their child's challenges and future after a traditional neuropsychological assessment than they did before the assessment. Parents' negative emotions did not change by group or over time as a result of the intervention or as a result of the standard assessment. Similar to the child measure, parents' negative affect began fairly low at pre-test (M = 2.28, SD = .71 on a 5-point scale), and thus may not have had much room for movement.
Results from the administration of the CSQ-8 indicate that parents in the experimental group were more satisfied with their clinic experience in general than were parents in the control group. A closer look at the items of the CSQ-8 revealed that parents whose children received individualized assessment feedback were most clearly differentiated from the control group by their more positive responses to two questions: “Did you get the kind of service you wanted?” and “How satisfied are you with the amount of help you have received?” Regardless of treatment group, however, overall satisfaction with clinic services was high across the entire sample (M = 3.74, SD = .27 on a 4-point scale).

In summary, the purpose of this research study was to investigate whether providing children with individualized feedback affects how they and their parents report experiencing a neuropsychological assessment. Results indicate that providing this feedback to children increases their self-understanding, creates a more positive relationship between child and assessor, creates a sense in children that the assessment process was more collaborative, and increases children’s perceptions of their parents’ understanding of their problems. From the parents’ point of view, results suggest that providing individualized feedback to children increases the parents’ sense of a positive relationship between the child and assessor, may create a sense in the parents that the assessment process was more collaborative, and may increase parental satisfaction with the assessment process. Results from the current study do not support the proposed hypothesis regarding parents’ perception of learning new information about their child, nor do they support the hypotheses related to children’s and parents’ affective reactions to the assessment or to the child’s challenges.

Implications

This research study contributes to the literature on the utility of providing assessment feedback to children and their parents through individualized stories, as it is the only known empirical study to attempt to isolate a single therapeutic assessment technique and evaluate its efficacy for child outcomes. This is a necessary step in establishing efficacy for any intervention, and it will contribute to the study of Therapeutic Assessment by beginning to distinguish which aspects of the assessment process contribute to the beneficial effects reported by clients. Moreover, in a neuropsychological context in which parents are generally the gatekeepers of assessment information, this study also breaks new ground by actively engaging the child in the assessment feedback process and by addressing him or her at an appropriate and accessible developmental level. If one of the goals of an assessment, be it psychoeducational, socio-emotional-personality, or neuropsychological, is to afford children respect and to assume that they have the capacity to integrate new information about themselves, then providing individualized fables as assessment feedback may well be a way to accomplish this goal.

Another consideration, less quantifiable but important nonetheless, is that the children and parents overwhelmingly seemed to like receiving individualized fables as assessment feedback. To establish a consistent structure for feedback, the assessors were asked to complete a feedback checklist after each session, regardless of control or experimental group.
A compilation of this information is included as Appendix J. The assessors reported that 90% of the children reacted to the fable with either “Liked it” or “Loved it.” Evidence given for this included that the children said so, that they reported identifying with the main characters, and that they smiled, laughed, were attentive, and were receptive to the story. Two of the children were reported to have been “Indifferent” to their feedback fable. One of these children told the assessor that he did not think he needed the fable because his parents had already explained the results of the assessment to him. The other child stated that the fable “was fine,” but that he did not want to take it home. Assessors also reported that almost all parents either liked or loved the fable, with evidence such as: “said it was great;” “said they loved it;” and “said that it sounded like their child.” Many parents also commented on the fables’ simple form of presentation, stating that it helped them better understand the assessment results. Two of the parents were reported by the assessors to have been “indifferent” to the fable. (These were not the parents of the two children who responded indifferently to their fables.) One parent reportedly displayed little affect during the child feedback session. The other parent openly disagreed with some of the findings from the assessment.

Qualitatively, both the assessors and the principal investigator noticed that many of the parents appreciated the child feedback sessions as an opportunity to further discuss and understand the findings originally presented to them during the parent feedback sessions. One mother stated that she would reference the fable in the future to help her implement recommendations. Another mother said that she had been overwhelmed by the information presented during parent feedback; she brought her child’s report with her and asked for help in understanding it and in knowing what to do next. Another set of parents used the child feedback session as an opportunity to discuss school and counseling options and to gain a more thorough understanding of their child’s diagnosis. Thus, the child feedback sessions seem to have met different needs for different families.

Another implication for practice that came to light in this study concerns the assessors’ impressions of the utility of the feedback sessions. When asked whether they thought that spending the extra time to conduct the child feedback sessions was worth it, every assessor involved agreed that it was. Comments from the assessors to this effect included: “I could see (child) really liked learning about himself and he looked thoughtful when he heard the end of the story. He seemed to be relaxed;” “(I) enjoyed giving feedback to the child directly;” “(The child and parent) were very receptive and seemed so thrilled to have had the extra info;” and “The child identified with the characters and seemed to like hearing the solution the character’s helpers found. I think both mom and child feel hopeful about the future.” The fact that the assessors found the sessions helpful and enjoyed giving feedback has implications for practice: they may be more likely to invest the extra effort needed to fully understand a child’s difficulties and to explain the results to clients. This effort can enhance their development as clinicians and may lead to greater client satisfaction in the future.
Children in this study who received individualized feedback reported learning new things about themselves, believed that their parents learned more about them, and experienced the process as more collaborative. Children who gain self-understanding and feel respected as individuals are well on their way toward healthy development of the self (Kohut, 1977). From a practical standpoint, however, providing feedback directly to children is not required by the guidelines that govern the professional practice of psychology (APA, 2002). As legal guardians, parents are considered to be the clients who receive the assessment results, and results of this study suggest that they are generally satisfied with traditional child neuropsychological assessment without the provision of child feedback. Still, parents whose children received this feedback were significantly more satisfied with the services provided by the clinic, and they also reported a greater sense that the assessment was a collaborative process. Whether these statistical differences translate into clinical differences in practice remains to be investigated in future research.

In a broader sense, there are clinical implications for creating a more satisfying assessment experience for parents and a more enjoyable environment for the children undergoing neuropsychological assessment. The reality for these children is that they will likely undergo multiple assessments in their lifetimes and will often be referred to outside counseling and therapy services to help address their problems. Establishing a comfortable, respectful, and even enjoyable assessment environment may well enhance their willingness to participate fully and meaningfully in future assessments and interventions.

Limitations

There were numerous limitations inherent in this study. These limitations are fundamental to interpreting its implications, to considering future directions in the study of collaborative/therapeutic assessment and neuropsychological assessment, and to the practice of engaging children in their assessment feedback. First and foremost, there were limitations with this study’s sample and design. As previously mentioned, the research participants, drawn from a convenience sample of the participating clinic’s clients, were homogeneous with respect to ethnicity, parental education, referral concern, and diagnoses. Therefore, conclusions drawn from the results may not be generalizable to other populations and settings. Additionally, as there was no research base to build upon in the area of empirically evaluated child-centered feedback, the instruments used to measure the child variables in question (CEAS and CPNE-S) were both pilot instruments created specifically for this study. As such, these instruments have no established estimates of reliability and validity. Furthermore, with the exception of the CSQ-8, the parent instruments used (PEAS and PPNE-C) have been relatively untested and have not been used at all in the context of neuropsychological evaluations. The PEAS and PPNE-C were created for, and have been used in, a limited number of case study research projects evaluating Therapeutic Assessment. Any statistical results obtained from analyses of these instruments in the context of a neuropsychological assessment must therefore be interpreted with caution.
It is hoped that future research will establish acceptable reliability and validity for all of these measures across a variety of assessment disciplines.

Some of the methods used in this study may also have impeded the collection of the most informative data. For instance, assignment of participants to treatment groups was not fully random. As addressed in the methods section, some participants were assigned to a treatment group out of scheduling necessity. In addition, the structure and scheduling practices of the participating clinic precluded random assignment of assessors to the research cases in which they participated; that is, in an effort not to disrupt the typical functioning of the clinic, the research design was subject to the assessors’ regular work schedules.

This study was also limited with respect to time control. The amount of time that passed between each parent feedback session and the corresponding child feedback session was not standardized; it ranged from one hour to several weeks, and this factor may have affected results. However, there was no discernible difference between the groups in the length of time between parent and child feedback sessions. Qualitatively, many parents commented on the difficulty of absorbing the amount of new information gleaned from their child’s neuropsychological assessment. There are two sides to this issue: holding the parent and child feedback sessions close together may benefit parents because the results of the assessment are still fresh in their minds, whereas allowing some time between the sessions may give parents a chance to begin absorbing and integrating the newly acquired information. Ideally, future studies will control the time that passes between parent and child feedback sessions. An additional time-related limitation concerned the amount of time spent with the assessor. As discussed earlier, children and parents in the experimental group had spent more time with their assessor before completing the research measures than had those in the control group. Although previous research with adult clients has shown that time spent with the assessor was not a factor in the benefits obtained from feedback (Finn & Tonsager, 1992; Newman & Greenway, 1997), this variable should ideally be controlled in future research evaluating child-focused feedback.

Another limitation in the research design involved the way in which the study was presented to parents who were invited to participate. All participants were informed about the nature of the study and about the presentation of child feedback in the form of a story at the conclusion of the assessment. The control group was therefore not purely an “assessment-as-usual” group. Parents and children in both groups knew that they would participate in a child-focused feedback session and that they would receive a personalized story at the end of the assessment. Having this information up front automatically changes the nature of a traditional assessment in this setting and thus may have reduced the differences between the treatment groups. Additionally, the final sample size may have prevented some of the PEAS variables from reaching statistical significance. This was especially true for the measurement of parents’ Learned New Things about their child: with a p-value of .08 and a medium effect size (d = .60), additional participants may well have increased the power enough to detect a significant difference between the groups.
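To make this power limitation concrete, the sketch below estimates the approximate power of an independent-samples comparison at the observed medium effect size, along with the sample size that would be needed to reach the conventional 80% power level. It is a rough illustration, assuming the Python statsmodels package and equal groups of about 16 participants each; it is not a formal power analysis of this study.

```python
# Rough post-hoc power illustration for an independent-samples comparison.
# Assumes the statsmodels package; 16 participants per group approximates
# this study's two-group design but is not an exact figure.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Approximate power to detect d = .60 with 16 per group (alpha = .05, two-tailed)
power = analysis.power(effect_size=0.60, nobs1=16, alpha=0.05, ratio=1.0)
print(f"Approximate power with 16 per group: {power:.2f}")  # roughly 0.36

# Approximate sample size per group needed for 80% power at the same effect size
n_needed = analysis.solve_power(effect_size=0.60, power=0.80, alpha=0.05, ratio=1.0)
print(f"Approximate n per group for 80% power: {n_needed:.0f}")  # roughly 45
```

Under these assumptions, a true medium-sized effect would be detected well under half the time, which is consistent with the interpretation that the marginal result for this scale may reflect limited power rather than the absence of an effect.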
Finally, the principal investigator found it more difficult than anticipated to recruit assessors to write fables and to teach them how to do so well. Only two participating assessors agreed to attempt to write fables, and, by final count, only four of the 32 fables delivered were co-written by these assessors. The remaining fables were written by the principal investigator and by two research volunteers. The strength of this intervention may have been enhanced if each fable had been written by the person who worked directly with the child.

Future Directions

The purpose of this research study was to investigate whether providing children with individualized feedback through a fable enhances how they and their parents experience a neuropsychological assessment. Hopefully, this study is but a small slice of what is to come. Future research will ideally replicate the findings of this study using a larger and more diverse sample. Further, future studies may be strengthened by comparing three treatment groups: one group receiving no feedback, one group receiving more traditional feedback involving a simple summary of the results, and a third group receiving feedback in the form of an individualized fable. In addition to addressing the limitations of this study mentioned earlier, future research should more fully address the format of and information included in feedback fables, connect these findings with more specific information about how the fable was received, and relate these findings to the diagnoses and ability levels of the children. Moreover, in order to fully understand the impact of children receiving feedback, follow-up data must be collected that include information on adherence to recommendations and impressions from parents and teachers about how the children involved are progressing. There is also an obvious void in the literature concerning how often children are given feedback and in what contexts. In addition, there is much room for exploring the format of children’s fables. An interesting study could involve writing the fable with the child’s direct input, or perhaps using a choose-your-own-adventure fable; in this way the child becomes an active participant in re-creating his or her own story. Optimistically, the voices of children, especially those with difficulties that will follow them throughout their lives, will become voices that all clinicians believe deserve to be heard. Giving these children a voice, treating them with respect, and engaging their own problem-solving abilities is the hope for the future of research in this area.

Appendix A

Parent Experience of Assessment Survey (PEAS)

Respondent’s Relationship to Child    Research ID Number    Date

This questionnaire deals with your thoughts and feelings about your child’s psychological assessment. Please read each statement carefully. Once you decide how much you agree or disagree with a statement, circle the number that best matches how the statement applies to you. Be as honest and as accurate as possible. Use the following scale to rate each statement: Strongly Disagree Disagree Neutral Agree Strongly Agree 1. The assessor worked well with my child. 1 2 3 4 5 2.
I learned new ways of interacting with my child. 1 2 3 4 5 3. The assessor was genuinely interested in helping us. 1 2 3 4 5 4. I had a say in what the assessment focused on. 1 2 3 4 5 5. My child did not like the assessor. 1 2 3 4 5 6. The assessment process was very confusing. 1 2 3 4 5 7. I now see that our family will need to change to help my child. 1 2 3 4 5 8. I am more aware of my child’s strengths. 1 2 3 4 5 9. The assessment made me feel guilty. 1 2 3 4 5 10. I liked the assessor. 1 2 3 4 5 11. The assessor helped me explain the assessment to my child. 1 2 3 4 5 12. Our family has little to do with why my child has problems. 1 2 3 4 5 13. Now I know more about why my child acts the way he/she does. 1 2 3 4 5 14. My child never really warmed up to the assessor. 1 2 3 4 5 15. The assessor liked me. 1 2 3 4 5 16. I was informed about each step of the assessment. 1 2 3 4 5 17. I am uncomfortable with how much the assessment revealed. 1 2 3 4 5 18. I didn’t learn anything new about my child from the assessment. 1 2 3 4 5 111 Use the following scale to rate each statement: Strongly Disagree Disagree Neutral Agree Strongly Agree 19. I felt close to the assessor. 1 2 3 4 5 20. I never really understood the point of the assessment. 1 2 3 4 5 21. Many of my child’s difficulties have to do with our family. 1 2 3 4 5 22. I learned a tremendous amount about my child from this assessment. 1 2 3 4 5 23. The assessment made me feel ashamed. 1 2 3 4 5 24. I felt like part of a team working to help my child. 1 2 3 4 5 25. The assessment revealed how family members play a role in my child’s problems. 1 2 3 4 5 26. I felt the assessor respected me. 1 2 3 4 5 27. Now I am more confused about how to handle my child. 1 2 3 4 5 28. I helped make sense of the test results. 1 2 3 4 5 29. The assessor never really understood my child. 1 2 3 4 5 30. I don’t believe our family makes my child’s problems worse. 1 2 3 4 5 31. I felt blamed for my child’s problems. 1 2 3 4 5 32. Now I know specific things I can do to help my child. 1 2 3 4 5 33. I understood the goals of the assessment. 1 2 3 4 5 34. I felt the assessor was cold towards me. 1 2 3 4 5 35. My child looked forward to meeting with the assessor. 1 2 3 4 5 36. My child is the only person in our family who needs to change. 1 2 3 4 5 37. I wish I had learned more concrete ways to help my child day to day. 1 2 3 4 5 38. The assessor asked me if the assessment findings seemed right to me. 1 2 3 4 5 39. The assessment was a humiliating experience. 1 2 3 4 5 40. The assessment completely changed the way I view my child. 1 2 3 4 5 41. I felt the assessor looked down on me. 1 2 3 4 5 42. My child felt comfortable with the assessor. 1 2 3 4 5 112 Use the following scale to rate each statement: Strongly Disagree Disagree Neutral Agree Strongly Agree 43. My child is worse with our family than with other people. 1 2 3 4 5 44. I trusted the assessor. 1 2 3 4 5 45. The assessor got my child to work really hard. 1 2 3 4 5 46. I am better able to communicate with my child. 1 2 3 4 5 47. No one ever told me what would happen during the assessment. 1 2 3 4 5 48. I now see how our family’s problems affect my child. 1 2 3 4 5 49. My child and the assessor really connected well. 1 2 3 4 5 50. The assessment made me feel like a bad parent. 1 2 3 4 5 51. Now I know what to expect from my child. 1 2 3 4 5 52. I felt judged by the assessor. 1 2 3 4 5 53. My child’s problems are partly caused by other struggles in our family. 1 2 3 4 5 54. 
The assessment has helped me have more patience with my child. 1 2 3 4 5 55. I felt that my opinion was valued. 1 2 3 4 5 56. The assessment was overwhelming. 1 2 3 4 5 57. My child dreaded almost every meeting with the assessor. 1 2 3 4 5 58. I have lots of new ideas about how to parent my child. 1 2 3 4 5 59. My child struggles more when people in our family aren’t getting along. 1 2 3 4 5 60. At the end of the assessment, I was left feeling angry. 1 2 3 4 5 61. The assessor seemed to like my child. 1 2 3 4 5 62. I was anxious throughout the assessment. 1 2 3 4 5 63. The assessor really listened to me. 1 2 3 4 5 64. I understand my child so much better now. 1 2 3 4 5 113 Appendix B Child Experience of Assessment Survey (CEAS) Research ID # Date This questionnaire deals with your thoughts and feelings about your assessment. Please read each statement carefully. Once you decide how much you agree or disagree with a statement, circle the number that best matches how the statement applies to you. Be as honest and as accurate as possible. Please do not skip any item and check only one box for each statement. Use the following scale to rate each statement: Really, really NOT true NOT true Kind of TRUE TRUE Really, really TRUE _________ helped me understand why we were doing the tests. 1 2 3 4 5 _________ liked me. 1 2 3 4 5 I think my parents learned a lot about me because of the assessment. 1 2 3 4 5 I learned that with help, I can handle many of my problems. 1 2 3 4 5 I felt the assessment was boring. 1 2 3 4 5 I will think about myself differently now. 1 2 3 4 5 _________ and I had fun together. 1 2 3 4 5 I think my parents will understand me better now. 1 2 3 4 5 Now I know more about why some things are harder for me. 1 2 3 4 5 _________ asked me what it was like to take the tests. 1 2 3 4 5 I am proud of myself for doing the assessment. 1 2 3 4 5 _________ helped me understand things about myself from the tests. 1 2 3 4 5 My parents will never understand me. 1 2 3 4 5 114 Use the following scale to rate each statement: Really, really NOT true NOT true Kind of TRUE TRUE Really, really TRUE _________ was mean to me. 1 2 3 4 5 I know more now about where my problems came from. 1 2 3 4 5 I felt the assessment was a waste of time. 1 2 3 4 5 _________ wanted to know what I thought about my life. 1 2 3 4 5 Maybe, after this assessment, my parents will realize its not all my fault. 1 2 3 4 5 I liked _________. 1 2 3 4 5 I felt the assessment was fun. 1 2 3 4 5 I still don't think my parents get it. 1 2 3 4 5 I looked forward to coming to see _________. 1 2 3 4 5 Now I better understand my problems. 1 2 3 4 5 _________ explained why each test was important. 1 2 3 4 5 I felt the assessment was helpful. 1 2 3 4 5 Maybe my parents will go easier on me now. 1 2 3 4 5 I learned that I am good at some things I didn't know about. 1 2 3 4 5 _________ helped me understand the results of the testing. 1 2 3 4 5 I'm glad I did the assessment. 1 2 3 4 5 _________ seemed to care about me. 1 2 3 4 5 115 Appendix C Case ID #: _________ Date: _______ Assessor: _____________________ PPNE – C Please answer the following questions, using the 5-point scale provided. Think about how you’re feeling RIGHT NOW. Strongly Strongly Disagree Disagree Neutral Agree Agree 1 2 3 4 5 Today as I think about my child’s challenges and future I feel….. 1. patient 1 2 3 4 5 2. scared 1 2 3 4 5 3. sympathetic 1 2 3 4 5 4. frustrated 1 2 3 4 5 5. compassionate 1 2 3 4 5 6. like I want to give up 1 2 3 4 5 7. encouraged 1 2 3 4 5 8. 
overwhelmed 1 2 3 4 5 9. at my wits end 1 2 3 4 5 10. determined 1 2 3 4 5 11. stuck 1 2 3 4 5 12. hopeful 1 2 3 4 5 13. anxious 1 2 3 4 5 14. positive 1 2 3 4 5 15. tired 1 2 3 4 5 16. that I have support 1 2 3 4 5 17. alone 1 2 3 4 5 18. pretty good 1 2 3 4 5 116 Appendix D Research ID #: _________ CPNE-S Date: _________ Please answer the following questions, using the 5-point scale provided. Think about how you’re feeling RIGHT NOW. Today as I think about my challenges and my future I feel ….. Really, Really NOT NOT Kind of Really, Really True True True TRUE TRUE 1. Afraid 1 2 3 4 5 2. Good 1 2 3 4 5 3. Sad 1 2 3 4 5 4. Bad 1 2 3 4 5 5. Happy 1 2 3 4 5 6. Hopeful 1 2 3 4 5 7. Strong 1 2 3 4 5 8. Frightened 1 2 3 4 5 9. Angry 1 2 3 4 5 10. Lonely 1 2 3 4 5 11. Cheerful 1 2 3 4 5 12. Excited 1 2 3 4 5 13. Proud 1 2 3 4 5 14. Alone 1 2 3 4 5 15. Like I can handle it 1 2 3 4 5 16. Like I want to give up 1 2 3 4 5 17. Like I will try my hardest 1 2 3 4 5 18. Like everything will be OK 1 2 3 4 5 117 Appendix E Client Satisfaction Questionnaire (CSQ-8) Please help us improve our services by answering some questions about the neuropsychological assessment your child received. We are interested in your honest opinion, whether it is positive or negative. Please answer all of the questions. Thank you very much, we really appreciate your help. 1. How would you rate the quality of service you received? 1 2 3 4 Poor Fair Good Excellent 2. Did you get the kind of service you wanted? 1 2 3 4 Yes, definitely Yes, generally No, not really No, definitely not 3. To what extent has our clinic met your needs? 1 2 3 4 None of my needs have been met Only a few of my needs have been met Most of my needs have been met Almost all of my needs have been met 4. If a friend were in need of similar help, would you recommend our clinic to him or her? 1 2 3 4 Yes, definitely Yes, generally No, not really No, definitely not 5. How satisfied are you with the amount of help you have received? 1 2 3 4 Very satisfied Mostly satisfied Indifferent or mildly dissatisfied Quite dissatisfied 6. Have the services you received helped you to deal more effectively with your child’s problems? 1 2 3 4 No, they seemed to make things worse No, they really didn’t help Yes, they helped somewhat Yes, they helped a great deal 7. In an overall, general sense, how satisfied are you with the service you have received? 1 2 3 4 Quite dissatisfied Indifferent of mildly dissatisfied Mostly satisfied Very satisfied 8. If you were to seek help again, would you come back to our clinic? 1 2 3 4 Yes, definitely Yes, generally No, not really No, definitely not 118 Appendix F PARENT CONSENT FORM PARENT AND CHILD EXPERIENCES OF NEUROPSYCHOLOGICAL ASSESSMENT AS A FUNCTION OF INDIVIDUALIZED FEEDBACK You and your child, ___________________, are invited to participate in a study. This form provides you with information about the study. The person in charge of this research will also describe this study to you and answer all of your questions. Please read the information below and ask any questions you might have before deciding whether or not to take part. Your participation is entirely voluntary. You can refuse to participate without penalty or loss of benefits to which you are otherwise entitled. You can stop your participation, or that of your child, at any time and your refusal will not impact your or your child’s current or future relationships with The University of Texas at Austin or Austin Neuropsychology Clinic. 
To do so, simply tell the researcher, the assessor, or the neuropsychologist you wish to stop participation. The researcher will provide you with a copy of this consent for your records. The purpose of this study is to examine certain aspects of children’s and their parents’ experiences after the child has undergone a neuropsychological assessment. The study is designed to supplement the standard neuropsychological assessment procedure with the addition of with a separate feedback session between the child, the child’s parent(s), and the assessor. This separate feedback session will be child-focused, and the information presented will be tailored to his or her developmental level. The research study will be conducted by Shea Pilgrim, Doctoral Candidate in School Psychology [512-587-2466], spilgrim@mail.utexas.edu] under the supervision of Deborah Tharinger, Ph.D., Associate Professor, University of Texas at Austin, Licensed Psychologist [512-471-4407, dtharinger@mail.utexas.edu]; and Melissa Bunner, Ph.D., Licensed Psychologist [512-637-5841; mbunner@austin.rr.com], Austin Neuropsychology Clinic. Your participating child will undergo a neuropsychological examination per the standard practice of this clinic. The standard practice generally involves a full day of testing with a psychometrician (referred to as assessor in this study), though occasionally the testing battery is broken up over two days. Upon completion of the testing session, you and the neuropsychologist meet for the standard feedback session. This is typically scheduled several days to one week post-testing. One additional component to this standard practice is the focus of this research study: a child feedback session. After the standard feedback session between you and neuropsychologist, you and your child will be asked to return (several days up to one week later) for a child-focused feedback session with the assessor. This session will 119 include the parent as an active participant. The session will take approximately 30 minutes. Current research indicates that children are typically not provided with feedback about their assessment performance, and if they are, it is not presented to them in a form that is understandable. Case studies show promising benefits to children and families after child feedback is provided in a story format that is at a developmentally appropriate level and individualized to their life. Additional requirements for your participation: 1) You will be asked to complete a short demographic form; 2) you will complete one pre-assessment questionnaire regarding feelings about your child and their future; 3) you will complete three post-feedback questionnaires regarding a) feelings about your child and their future; b) your experience of the assessment; and c) your satisfaction with services. Your child will be asked to complete: 1) one pre-assessment questionnaire regarding feelings about themselves and their future; 3) two post-feedback questionnaires regarding a) feelings about themselves and their future; b) their experience of the assessment. After the post-feedback questionnaires have been completed, you will also be asked to participate in a short oral interview with the Principal Investigator. Total estimated time to participate in this study is about 1 hour of time that is supplemental to the standard neuropsychological assessment. 
The child feedback session will take approximately 30 minutes, the pre-assessment measures approximately 10 minutes, and the post assessment measures approximately 20 minutes. Risks of being in the study This assessment intervention may involve risks that are currently unforeseeable. If you wish to discuss the information above or any other risks your child may experience, you may ask questions now or call the Principal Investigator listed on the front page of this form. Benefits of being in the study In clinical practice using similar techniques, children have shown improved self-esteem, self-understanding, mood, and hope; parents have shown increased understanding of their child’s challenges, empathy for their child, and hope for the future. Compensation: • You will not receive any compensation for participation in this study. Confidentiality and Privacy Protections: • The data resulting from your participation may be made available to other researchers in the future for research purposes not detailed within this consent form. The data will contain no identifying information that could associate you with it, or with your participation in any study. 120 The records of this study will be stored securely and kept confidential. Authorized persons from The University of Texas at Austin, members of the Institutional Review Board, and Austin Neuropsychology Clinic have the legal right to review research records and will protect the confidentiality of those records to the extent permitted by law. By law, exceptions to confidentiality occur when information is acquired about child abuse, neglect, or other form of endangerment. As part of parental consent and child assent, all will be informed of the limits to confidentiality and the mandate that the examiner report such findings to Child and Family Protective Services. All publications will exclude any information that will make it possible to identify you or your child as a subject. Throughout the study, the researchers will notify you of new information that may become available and that might affect your decision to remain in the study. Contacts and Questions: If you have any questions about the study please ask now. If you have questions later, want additional information, or wish to withdraw your child’s participation call the researchers conducting the study. Their names, phone numbers, and e-mail addresses are at the top of this page. If you have questions about your child’s rights as a research participant, complaints, concerns, or questions about the research please contact Jody Jensen, Ph.D., Chair, The University of Texas at Austin Institutional Review Board for the Protection of Human Subjects at (512) 232-2685 or the Office of Research Support at (512) 471-8871.or email: orsc@uts.cc.utexas.edu. You may keep the copy of this consent form. You are making a decision about participating and allowing _________________ to participate in this study. Your signature below indicates that you have read the material above and have agreed to participate and allow your child to participate in this study. If you later decide that you wish to withdraw your permission for you or _______________ to participate in the study, simply tell me. You may discontinue participation at any time. 
______________________________ Printed Name of (son/daughter) _________________________________ _________________ Signature of Parent(s) or Legal Guardian Date ________________________________ _________________ Signature of Child Date _________________________________ ________________ Signature of Investigator Date 121 Additional Consent for Use of Audio Recording 1. In order to ensure that the clinician is conducting the feedback session correctly, it will be necessary to audio record the session. No personally identifying information will be visible on tapes and they will be kept in a locked file cabinet in a locked office. Tapes will be heard for research purposes only by the investigator and her associates. Tapes will be erased after the research study is completed. Please sign below if you are willing to allow us to record audio in your child’s feedback session. I hereby give permission for audiotape recording to be used during my child’s feedback session. _________________________________ _________________ Signature of Parent(s) or Legal Guardian Date 2. In order to ensure that the clinician is conducting the parent interview correctly, it will be necessary to audio record the session. No personally identifying information will be visible on tapes and they will be kept in a locked file cabinet in a locked office. Tapes will be heard for research purposes only by the investigator and her associates. Tapes will be erased after the research study is completed. Please sign below if you are willing to allow us to record audio in your parent interview session. I hereby give permission for audiotape recording to be used during the parent interview. _________________________________ _________________ Signature of Parent(s) or Legal Guardian Date 3. We may wish to present some of the transcribed narrative portions from your child’s audiotapes in presentations and publications. In doing so, we will not provide identifying information about you or your child. Please sign below if you are willing to allow us to do so with portions of the transcribed audiotapes of your child’s session. I hereby give permission for transcribed portions of the audiotapes made for this research study to be used for publication of transcribed narratives. _________________________________ _________________ Signature of Parent(s) or Legal Guardian Date 122 Appendix G CHILD ASSENT FORM PARENT AND CHILD EXPERIENCES OF NEUROPSYCHOLOGICAL ASSESSMENT AS A FUNCTION OF INDIVIDUALIZED FEEDBACK I understand that I am being asked to participate in a research study. I agree to return for a short meeting with my assessor after the testing day. I agree to complete some questionnaires about my feelings and about what the testing was like for me. These questionnaires have been explained to my parent or guardian and he or she has given permission for me to participate. I may decide at any time that I do not wish to participate and that it will be stopped if I say so. When I sign my name to this page I am indicating that I read this page and that I am agreeing to participate. 
_______________________________________ _______________________ Your Signature Date _______________________________________ Please Print your Name 123 Appendix H Guidelines for Constructing Fables (condensed from Tharinger, Finn, Wilkinson, et al., 2008) From our collective clinical practice experience and the knowledge we gained using fables in a recent research study on the efficacy of Therapeutic Assessment with children (Tharinger, Finn, Wilkinson & Schaber, 2007), we have derived guides for constructing therapeutic fables. The following concrete suggestions and examples should be helpful to the assessor who is starting to construct therapeutic fables. Create the individualized storyboard. The introduction of a fable typically incorporates elements of the child’s and family’s development and culture. The goal is to bring the story alive with details the child will recognize and be drawn to, thus enlisting the child’s attention and imagination—and to create a somewhat veiled connection with the child’s everyday reality. The child is the main character in the fable, often represented as an animal or a mythical creature that the child has identified as being their favorite or one he or she wishes to be. Important family members are included as additional characters (for example, the child unicorn lives in the forest with the mother and father unicorn and her big brother unicorn). The main character is usually given a name similar to the child’s name (e.g., Marsha become Martha), a name that fits the animal depicted (e.g., Kobe the Koala Bear, Priscilla the Penguin), or a name for which the child has indicated a fondness (e.g., Emma may have told a story to a Thematic Apperception Test [TAT] card about a young girl named Olympia who was kind and courageous, so the unicorn-- Emma’s favorite animal--in her story is named Olympia). The setting for the fable is congruent with where the main character would live in reality or fantasy, such as animals in the forest or jungle, performing monkeys in the circus, or a pet dog with a family of humans. Cultural characteristics are integrated in the choice of characters and setting. For example, in a fable developed for a Native American boy, the main character was an eagle—a totem of his tribe—and another character was a horse—the boy’s personal totem. In addition, the assessor typically is included in the fable and represented as a figure of wisdom and kindness, for example, as a wise owl, sage, or respected tree in the forest. The parent characters usually have sought assistance from the wise character (as occurred during the assessment). Introduce the challenge. Following the introduction that sets up the scenario, the child’s character is typically confronted with a challenge or conflict that is quite similar to one in the child’s past or recent experience, and that in real life has been somewhat overwhelming. For example, in the fable Kobe the Koala Bear, Kobe experiences an unexpected separation and divorce in his koala family and feels angry and sad by the sudden demand to go back and forth between two trees instead of living in one. The focus of the challenge is based on one of the presenting concerns for the assessment, on the obtained findings, and on the level of change the family appears to be ready for at the time. The goal of a fable is to model a successful step or steps toward constructive change. The steps usually are suggested by the wise character in the story but are carried out by the parental characters. 
Maximize effectiveness through awareness and collaboration. As previously mentioned, prior to completing the fable for the child, the assessor typically has provided feedback to the child’s parents and has noted shifts the parents have made or are ready to make that indicate new support and understanding of their child. The assessor’s judgment of how able parents are to make such shifts is central in how big or little a challenge and step toward a solution is represented in the fable. The assessor also usually has a sense of how ready the child is to accept 124 a new response from his or her parents. The parents’ new understanding and renewed energy for their child and the child’s new openness is subsequently conveyed in the fable. The response from the parent characters helps the child to see that he or she is not alone in the change process. For example, during the parent feedback session the week before, Kobe’s parents had agreed that the communication between their homes was not very good and had committed to put new methods in place. Thus in the fable, the Koala parents work with Kobe to set up a communication system between the two tree homes so Kobe can check in with and talk to the parent he is not with at the time. This part of the fable was important because the assessment revealed that a major contributor to this child’s being mad and sad was his despair at missing the other parent. In the fable the koala family also agreed to work hard to ensure that the system would always be in good repair. In some cases, the parents are invited to assist in writing the fable itself, thus enhancing the collaborative nature of the assessment. For example, “Kobe’s” parents, upon reading the first draft of the story, had great ideas of what kind of system they would set up—and these details were incorporated into the story before it was presented to “Kobe.” Thus, a fable can serve as an intervention for the parents as well, as it invites their participation and tries to incorporate key change mechanisms for the whole family. Parent input also maximizes acceptance of the story by the child as it helps the child not feel alone in the solution to his or her challenges. Stay within the constraints of the real context and possibilities. It is important to emphasize that if parents are not capable of or willing to implement certain solutions, they should not be incorporated into a fable for the child. In cases where the next steps are unclear by the time the child receives feedback, the fable should indicate that the next steps and solutions are to be worked out, and (if it is true) that the parents have committed to work toward change. In an instance where the parents reject the suggestions coming from the assessment, the resulting fable likely will not reflect any changes in the family, but may indicate that the central character will learn new ways to handle challenges by drawing on new internal resources or by seeking support outside the family. This more autonomous solution must carefully take into account the developmental features of the child and be careful not to over-estimate what the character is the story should be taking on (and thus the child). Finally, the outcome of the assessment of a child, and thus the story, may not centrally involve the parents. The child may have a newly discovered learning disability and most of the changes will be occurring at school (although the parents certainly have a role in integrating this new awareness and responding accordingly). 
Whatever the changes may be, they can be represented metaphorically in the story. By the end of the assessment, the assessor should have some idea of what shifts are hoped for in the weeks and months to come. With a little creativity, just about any situation can be incorporated. For example, a foster home can be portrayed as a safe haven on a frightening journey, and a new medication for inattention can be written as a magic potion that helps the child focus. What is important is to write the fable in a way that reflects the child’s reality, engenders hope and provides direction about next steps. 125 Appendix I Checklist for Child Feedback Sessions Assessor: Date: Child’s Research Code: Who was present? □ child □ mother □ father □ other: __________ 1. Thank child and parent(s) for agreeing to participate and for meeting with you for an additional session (make it personal to this child and parents). Direct this feedback to the child: 2. Orient the child to the assessment session: Say something to the child about what the assessment session was like for you with him or her, what you remember from the session, what you enjoyed about the child, etc. Ask the child what they remember about it; maybe bring up an incident to remind the child about the session (especially for the younger ones). For example: “Remember when you were here before and we did all of those activities… remember when you picked out that green bracelet/ball/Toy Story sticker, etc… remember when you were here before and you met Dr. ____ and then we ran around the fountain…” 3. Introduce the fable and emphasize how special it is: Say something like: “We wrote this story just for you.” 4. Invite the child to choose who will read the fable: Say something like: “You get to choose who you would like to read the story: I can read it, or you can or your mom or dad.” Note whom the child chooses: □ assessor □ child □ mother □ father 5. Note how the child and parents react during the reading of the fable and any discussion that takes place. 126 6. After the fable is read, invite the child to modify it if he or she wishes: “Is there anything you want to change about the story?” Also ask (especially if you can’t tell by the child’s reaction) “Did you like the ending of the story?” “What did you like about the story?” (Encourage discussion of the story across child and parents). 7. Give the fable to the child and tell them that they get to take it home and read it whenever they want. (If changes were requested, you can tell them that we will make the changes and mail the story to the child’s home.) 8. Thank them again for their time. Please give your general impression of how the CHILD reacted to the fable: □ loved it □ liked it □ indifferent □ didn’t like it □ hated it What made you think so (evidence)? Please give your general impression of how the PARENT(S) reacted to the fable: □ loved it □ liked it □ indifferent □ didn’t like it □ hated it What made you think so (evidence)? Please record your time involvement for the research portion of this case: Preparation time: Feedback session time: In your opinion, was the extra time worth it? □ yes □ no Why or why not? 127 Appendix J Checklist for Child Feedback Sessions – Compilation of Results NOTE: these were completed by the assessor after each feedback session (see original document “Checklist for ANC Child Feedback Sessions”); these responses are not related to quantitative data collection or to the child’s treatment group Who was present? 1. child + mother 12 2. child + father 2 3. 
child + both parents 2 4. child + 1 parent + sibling(s) 1 5. child + both parents + sibling(s) 3 6. child only 1 Who did child choose to read the fable? 1. assessor 8 2. child (self) 8 3. mother 6 4. father 1 5. sister 1 Note how the child and parents react during the reading of the fable and any discussion that takes place: • “Mom was laughing at humorous sections and (child) was smiling throughout the story” • “Smiled periodically throughout the story; appeared a little sad when reading about his big brother being sad sometimes; smiled and laughed when read about his little brother hugging him” • “Child was volunteering his own recommendations and the results seemed to resonate with him. He confirmed frustration with ‘slower’ classmates and liked how we talked about all his thoughts moving faster than his hand.” • “Lots of nodding and smiling” • “Child was excited to read the story herself. She asked if she could illustrate and color her storybook. They all laughed when the story referenced her brother teasing her. Her brother even playfully poked at her.” • “(Child) sat on mom’s lap as she read it to him. (Child’s) mom thought character name was a mistake but understood when told the story was about a boy just like (child), not actually (child). Both laughed and smiled while the story was read.” • “(Child) interrupted several times to correct parts of the story (related to the Bionicles)” 128 • “Child sat in mom’s lap. She smiled a lot and clapped her hands a few times as the story was read. Mom paused for child’s comments.” • “Smiled frequently throughout the reading. Verbalized out loud about what fit and what didn’t fit so much (e.g. child said he didn’t like the name (of story character).” • “Child followed along while mom read; at end, mom asked child ‘who does that sound like?’” • “(child) smiling; father chuckles and smiles” • “laughed and nodded at some points” • “child went back and re-read sections about test results; asked for clarification about no ADHD diagnosis” • “Mom smiling and laughing; child very focused, smirking.” Child’s/parent’s response to solicitation of changes to the fable: None: 16 Other: • “said the ending is simplistic; the things suggested as a result of the assessment aren’t happening yet.” • “Mom asked for edits to the math part; didn’t agree with results” Assessor’s general impression of how CHILD reacted to fable, and evidence to support supposition: 1. Loved it 6 2. Liked it 13 3. Indifferent 2 4. Didn’t like it 0 5. Hated it 0 “Loved it” evidence: • “smiling, making eye contact with assessor and family members when things about him were read; some trouble with hard parts – leaned on mom and sucked thumb – but by the end of story he was fine again.” • “seemed to enjoy references to HSM and AI, smiled and laughed at these parts” • “Definitely identified with the main character – comments such as ‘Like me!’ When asked what she thought of the story she said ‘It’s perfect!’” • “Gave it a thumbs up and smiled and giggled.” • “She pointed out all of her favorite parts, lots of big smiles, lots of laughs.” • “(Child) said ‘I loved it. It sounded just like me’” “Liked it” evidence: • “smiled – said ‘that was funny’ at the end” • “He told me! 
He was really receptive and had ideas for how to help himself.” 129 • “He is very quiet, but he smirked throughout the reading of it and said that there wasn’t anything that was hard to hear.” • “smiled frequently/wanted to read it herself/wanted to color it” • “smiling; said he liked it when he was asked” • “very attentive; read to himself as story was read; said he liked the ending ok; however, said Bionicles aren’t born and don’t have parents or schools; when he brought that up during the story I said ‘let’s pretend that they do;’ he seemed ok with that.” • “said he liked it; wouldn’t change anything; a little indifferent at first, but smiled and laughed as story was read.” • “said ‘it sounds like me’” • “smiling, nodding, laughing while reading it” • “he smiled at certain parts (mind reading parts)” • “smiled – asked if we intentionally made character names sound like his and his brother’s name” • “stated she liked it and continued to talk about ‘missions’” • “said he (child) liked it and that it kinda sounded like him” “Indifferent” evidence: • “Said he didn’t need the story – that he understood the results when his parents had talked to him about it. I get the sense that he thought he was too old for the fable.” • “Said ‘it was fine.’ Child did not want to take the story home. Told mom, ‘you can take it if you want.’” Assessor’s general impression of how PARENT(S) reacted to fable, and evidence to support supposition: 1. Loved it 8 2. Liked it 10 3. Indifferent 2 4. Didn’t like it 0 5. Hated it 0 “Loved it” evidence: • “smiling, making eye contact with (child) at appropriate times, said they enjoyed it, thanked me for doing it” • “mom smiled and laughed throughout presentation. Commented at end that it sounded a lot like (child)” • “They told me!” • “when asked what they thought of the story they both said they really liked it; (mom) said it would help to have the story.” • “Said it was great and smiled as read it” 130 • “Lots of big smiles; said it was great” • “So involved and asking questions; discussing impact and effort into story.” “Liked it” evidence: • “said it was a good story; read the story with a lot of interest and enthusiasm” • “(Mom) said it was helpful but that she was overwhelmed a bit with how to implement” • “Mom said she thought story would be helpful. Told child she’d read it to him whenever he wanted her to.” • “Liked it a lot; smiling, listened to child’s comments during the story, said it was really good.” • “Mom stated that she wanted to keep the fable for herself to especially reference the recommendations.” • “said it was cute and sounded like him; smiled and laughed as story was read.” • “smiled frequently” • “stated that he (Dad) liked it and smiled” • “said it (the fable) was great” • “(mom) said, ‘I really liked it!’” “Indifferent” evidence: • “little affect displayed except at start of story; at start, seemed to enjoy it.” • “smiled at certain parts; strongly disagreed with math problems” Time involvement for assessor for the research portion of this case: Preparation time: 45, 3 hrs (wrote fable), 5, 15, 3 hrs (wrote fable), 10, 0, 15, 10, Feedback session time: 15, 15, 30, 60, 35, 30, 15, 15, 15, 10, 15, 20 In (assessor’s) opinion, was the extra time worth it? Yes 18 No 0 Why or why not? (was the extra time worth it) • “I could see (child) really liked learning about himself and he looked thoughtful when he heard the end of the story. 
He seemed to be relaxed (also had thumb in his mouth).” • “(I) enjoyed giving feedback to the child directly; this particular child was very curious about his scores on all of the tests, so providing information in a fable was valuable for him” • “Sure, it went over well and seemed helpful.” • “They were very receptive and seemed so thrilled to have had the extra info. They noted feeling a bit overwhelmed with the recommendations but we discussed taking it one thing at a time.” 131 • “Nice to be able to give feedback to the child. In addition, it is nice to see/observe family dynamics that occur even in this short time together.” • “(Child) seemed to enjoy hearing the fable and identified with the character. Mom said she thought the story would be helpful.” • “Parents – mom especially – said the story helped them understand some things better.” • “Child identified with characters and seemed to like hearing the solution the character’s helpers found. I think both mom and child hopeful about the future.” • “enjoyed being able to check back in and offer the child an opportunity to ask questions – validates all of the work that they did during the evaluation.” • “They were really! into the story.” • “(Child) said the fable made it easier for her to understand the results of the assessment.” • “Child had his own questions about whether or not he had ADHD” 132 Appendix K Sample Fables (reformatted for this document) SAMPLE 1 DEV GARR, AVATAR IN TRAINING Once upon a time there was an 9-year-old boy named Dev Garr. Dev lived with his mom, his dad, and his 14-year-old sister, Katara, in the Universe of Austin, otherwise known as (at least to Dev) the Universe of the Avatar. Dev really loved his family, and he loved building with legos, playing video games, reading, going swimming, and riding his bike. But, more than anything else in the world, Dev loved practicing his martial arts so that he could one day save the world. You see, Dev liked to believe that his real name was Aang and that secretly he was the next Avatar who would someday save the world! Since Dev liked to believe that he was a real Avatar, he knew that real Avatars had to learn a lot of things if they were going to save the world. So Dev tried really hard to do everything he was supposed to do. The most important thing Avatars had to learn was to become a master of ALL FOUR ELEMENTS: The element that helps you get along with others. The element that shows your strength to succeed. The element that takes over when you get mad. The element that helps keep you calm and peaceful. Well, for a 9-year old like Dev, it was actually pretty hard to become master of ALL FOUR elements. That’s a lot to do when you also had to do normal stuff like go to third grade, clean up your room, and do your homework. What made it even harder for Dev was that, on top of everything else, Dev had A LOT harder time paying attention and sitting still than other kids his age. It wasn’t his fault… he had ADHD and his brain just worked differently than most other kids’ brains. But it was really hard for Dev all the same. Well, Dev went to Avatar Park Academy so he could practice becoming master of Water, Earth, Fire and Air. Even though he really wanted to do well in school and master the elements, it was getting harder and harder to do what his 3rd grade teacher, Miss Zulo, asked him to do. 
In fact, Miss Zulo usually told Dev:

“You must master and pay attention!”
“You must master and listen in class!”

Well, Dev was trying to pay attention and listen, so how do you think that made Dev feel? That made Dev MAD! Sometimes, it even hurt his feelings. And sometimes Dev would act MAD even though inside he was really just SAD.

Do you know what else sometimes hurt Dev’s feelings? Well, it seemed like Dev always got in trouble in class for not listening or paying attention. And since the other kids always saw it, Dev thought that the other kids thought he was dumb. That made it harder for Dev to master Water and get along with them. And it made Dev kind of SAD, and even a little MAD… because Dev knew that he was certainly NOT dumb!

When Dev got these mad and sad feelings, he tried to calm himself down by practicing his mastery of the element Air. Airbending helped Dev calm down after he saw fire. And, even though he wasn’t great at it yet, he tried hard and he was getting better and better at calming himself down!

Still, his parents were kind of worried about Dev’s trouble mastering the elements. They were also worried about his trouble in school with paying attention and listening. So they decided to take him to the special monks that help young kids become the best airbenders, waterbenders, firebenders, and earthbenders they can be. The special monks gave Dev a lot of tests, then they told Dev’s parents all about how he did…

The monks found out for sure that Dev was certainly NOT dumb! They said he was really good at working puzzles and building things and that he had a great memory! Then they said that if Dev saw pictures or diagrams of something, then his memory was AWESOME! They told Dev’s parents that practicing something over and over again and showing Dev pictures and diagrams would be a great way for him to learn to master all of the elements.

The tests also showed something new – that Dev had a lot of trouble understanding what people said to him, especially if he was in a noisy room. So… it wasn’t Dev’s fault AT ALL when his teacher said he wasn’t listening in class! His brain actually had a lot of trouble understanding what he heard. Dev felt better about that, and the monks told his parents about a doctor Dev should go see in order to get help.

Another thing the monks said was that Dev had a lot of trouble with writing. They said he should get extra help in school for his writing. Also, sometimes Miss Zulo could ask Dev to answer questions out loud instead of writing the answers down.

The monks also talked about Dev’s attention problems that his parents already knew about. The monks said that, because of Dev’s ADHD, it was really EXTRA hard for him to master the elements.

Water was hard because other kids didn’t understand what it was like to have ADHD, and that made it harder to get along with them.
Earth was hard because Dev’s strengths were usually hidden behind his attention, listening, and writing problems.
Fire was hard because kids with ADHD have SO MUCH stuff going on in their brains; when things get to be TOO MUCH the fire tries to take over.
And Air was the hardest of all for Dev even though it was the one he was practicing the most.

Having ADHD meant that Dev and his brain moved FAST and had TOO much going on around them to try and keep up with most of the time. Sometimes being calm and peaceful just seemed impossible!

Well, the monks told Dev’s parents about a special doctor that might be able to help Dev master the four elements.
This special doctor was special because he was really good at teaching kids with ADHD, just like Dev.

The most important thing that Dev’s parents learned after he took all of those tests with the monks was that Dev was a special and sensitive boy who had A LOT of talents and gifts. They didn’t know for sure if he was the real Avatar, but they did know that he was their wonderful son who was trying really hard to master the elements. They realized how hard it was for Dev to MASTER ALL FOUR of the elements at his age, especially with his ADHD. They realized that someday he would – but that it would take TIME.

Dev’s mom and dad knew that Dev was trying and practicing his skills every day, and Dev knew it too. He knew he would do his best, and he knew that his parents and teachers, and even the special doctors, would all help him to become the best airbending, waterbending, earthbending, and firebending Avatar he could be.

SAMPLE 2
Rosie the Remarkable Rabbit

Once upon a time there was a Remarkable Rabbit named Rosie. Rosie was a JOYFUL bunny who was eight years old and in the third grade. She had many bunny friends at school and she also had a little bunny sister to explore the grasses and gardens with around her home. Rosie was a good little bunny, and Mama Rabbit thought that she was REMARKABLE. Rosie was remarkable in many, many, many ways:

She was loving and kind.
She loved to go ice skating!
She could play lacrosse, even boy-bunny lacrosse!
She LOVED singing and dancing.
And Rosie was a great prankster! She loved to play pranks on Papa Rabbit.

Sometimes, when Rosie wasn’t quite as remarkable as she should be, Mama Rabbit got out the Sad Spoon. Rosie didn’t like the Sad Spoon; it made her SAD, and when she got sad she crawled under her bed of grass and hugged her pillow REAL TIGHT.

When Rosie was being remarkable, as she usually was, she was a busy little bunny. And she was REALLY busy at school. She had a lot of helpers at school, and lots of new things to learn and remember. That was good because, just like a lot of remarkable rabbits, Rosie did her best when she kept really busy.

Rosie was really busy trying hard at school. In the third grade though, at Hoppity Episcopal School, things started getting a lot harder. Especially things like spelling.

C - U - C - U - M - B - E - R
C - E - L - E - R - Y
V - E - G - E - T - A - B - L - E - S

Those are big words for a little bunny. Rosie got a lot of her big spelling words right, but when she was thinking about other things, like writing a story about vegetables, then she would sometimes get the same words wrong! Also, third-grade bunnies had to add and subtract lots of carrots, radishes, and other crops that their teacher grew. She did extra work to do well in math, but it was hard.

Rosie worked hard at spelling and math, but her parents and her teachers thought she might need some extra help. So, one day, Rosie’s parents said to her: “Rosie, we are going to take you to visit Miss Bunner-Bunny. Miss Bunner-Bunny works at the Cabbage and Carrot Clinic, and she is an expert at helping rabbits do their best at school.”

Rosie thought that sounded ok, so she missed a whole day of school to go to the Cabbage and Carrot Clinic. While she was there, she worked with Miss Bunner-Bunny’s assistant, Miss Bonnie, for a long time on lots of different activities. Miss Bonnie thought Rosie did a great job, and so did Rosie’s parents. Mama and Papa Bunny were really proud of Rosie.
When it was all over, Miss Bunner-Bunny told Rosie’s parents that they were right! She said: “You are right! Rosie truly is a remarkable little bunny.” Miss Bunner-Bunny said Rosie was doing a great job at all of the things that rabbits her age are supposed to be able to do.

Miss Bunner-Bunny thought that, because Rosie was such a busy little bunny, maybe she had trouble paying attention to all of the little, BUT IMPORTANT, things that she had to pay attention to when she was writing a story or working on her math. Miss Bunner-Bunny said that this was pretty normal for a lot of busy bunnies.

After seeing Miss Bunner-Bunny, Rosie, Mama Bunny, Papa Bunny, and Rosie’s teachers began to work together to help Rosie pay more attention to the little but IMPORTANT things in her school work. They taught Rosie to slow down and check her work more often. They let her play computer math games! And they thought that Rosie might even get an extra helper at school to make sure she didn’t miss out on anything when her brain was being too busy to pay attention.

Rosie thought these were good ideas. She liked being busy, and she was glad that she would get to keep doing all of the things she loved. She knew it would be hard work. She knew that slowing down and checking for mistakes might be frustrating and boring. But she also knew that she would try her best. Everyone agreed that Rosie was good at being busy, and with help, she’d get even better at it.

Rosie truly was a REMARKABLE Rabbit! And Rosie knew it too! And that made her feel pretty cool.

SAMPLE 3
Diary of an 11-year-old Professional Football Team Manager (aka Jaden’s Writing Journal)

Tuesday, April 7
First off, let’s be clear that this is NOT a DIARY, it is a journal. Yes, I can see that it does say DIARY on the front cover, but that’s my mom’s fault so let’s just leave it at that. This is all her idea anyway. BUT, it’s getting me out of one chore a week so I don’t really mind. She thinks writing in this JOURNAL will help me with my handwriting and she also thinks I’m going to write about my feelings or something. But I figure that she’ll never read it so I can write about whatever I want.

Wednesday, April 8
Went to school. Boring. Played video games at Jake’s house after school. Awesome. What else? Oh yeah, I am a professional football team manager now. Boy, is it great!!! And, since I’m only 11 years old, I am also famous because I’m the youngest professional football team manager ever. And my team, the Mustangs, has won more games than any team in the history of football! All my friends are jealous of me, but it’s ok because I share all of my money with them.

Thursday, April 9
Already the third day. This isn’t so bad. Let me just say that, even though it’s not easy being an 11-year-old professional football team manager, I have the BEST JOB IN THE WORLD! Never mind that all of the players on my team are 6000 pounds bigger than me and could squash me like a bug. That’s just minor details. I’ve shaped my group of misfits into the winningest team in football history! Mustangs… Strong and Proud!!! AND did I mention I’m SOOO RICH!!! AND I HAVE AN AWESOME CAR!

Friday, April 10
Hummm… I guess I don’t really have to lie about stuff because I’m the only one reading this thing. OK, so, I know that I’m not actually an 11-year-old professional football team manager, and I’m certainly NOT rich yet. Not even a little. And I don’t have a car. Not yet. At least, not a real one. But it doesn’t hurt to dream, right???
Certainly that’s enough writing for one day. A confession should probably count as double.

Saturday, April 11
Who am I talking to when I ask questions in here??? Hello? Hi Jaden. Hi me? Yes? Hi. Hi back at you (me). How am I? I am fine. AARRGGHH! Five days of journal entries and I’m already going mad! I have to sit here longer or else my mom won’t count this as a whole entry. It’s a good thing I write so slowly or else I would have to think of pages and pages of something to write about because my mom makes sure I sit here writing for a whole hour every day. At least it seems like an hour. Maybe it’s thirty minutes.

Sunday, April 12
This is the WORST part about having to do this journal – spending thirty minutes of my Sunday – MY SUNDAY! – writing. Reminds me of school, and school is my biggest problem. It’s my biggest problem, but it’s not THAT bad really, just hard sometimes… especially grammar stuff and of course, writing. Plus, my teacher is SUPER grumpy. I wish she could just be nicer. It’s a good thing that math isn’t that hard. If math was hard too then school would be an even BIGGER problem. In any case, school is certainly the LAST thing I want to think about on a Sunday. What the…? Mom just told me I’m not going to school tomorrow. My excitement was squashed when she said I’ll be taking school-tests somewhere all day instead.

Monday, April 13
So I did go to this weird place and took school-like tests all day, just like mom warned me. It wasn’t any worse than school except just more boring. And I’m exhausted! Maybe mom will let me off the hook for journal writing today. She said yes.

Tuesday, April 14
Yikes! These things are almost getting to be about my FEELINGS! I’d better keep this under control or else my mom will turn out to be right about that. Went to school. Had a lot of make-up work to do because of yesterday. You’d think that if I have to miss school to take school-like tests then I wouldn’t have to do make-up work for missing school... Played football after school. Came home. Mom said she’ll find out about my test results tomorrow. Not sure if I’m nervous about that or not. Yikes! There are those feelings again. Time to eat dinner.

Wednesday, April 15
Got home and Mom told me about the results from all the boring tests that I took. Said that the doctor said that I’m really good at math (Already knew that!) and that I’m doing good in everything else except for writing (Could have told ya that too!). She said that my writing trouble is called dysgraphia. She said that a lot of kids who have dysgraphia do better when they get to type on a computer instead of having to write. (WOO HOO, I’ve been asking for that forever!). So, of course, the first thing I asked her was if I have to keep writing in this stupid journal. AW MAN, talk about crushing the spirit when she said yes!!! Turns out, the DOCTOR there at the test place thinks that this is a good idea too!!! There’s NO MERCY!

Thursday, April 16
So school was cooler today – my mom met with my grumpy teacher yesterday and now I have a keyboard I get to use for my writing assignments! PLUS… the teacher is going to start giving me the lecture notes already printed out! Talk about a lifesaver! My mom also got me a computer program that teaches me how to type faster. It’s not easy, but it’s actually kind of fun so I’m sure I’ll get the hang of it. Too bad I can’t type up this journal. I guess that the testing doctor thinks that practicing writing like this will be helpful.
I guess I can’t complain too much… I’ll bet that by the time my handwriting gets better I’ll be rich and famous! And I’ll be the manager of the winningest professional football team ever! And when I sign all of the autographs that the kids ask me for they’ll be able to read what I write! And that will make all of this JOURNAL writing, NOT DIARY WRITING, worth it.

Appendix L
Means and Standard Deviations of Items by Subscale for the CEAS

Values in each row are M, SD for the Control group (n = 17), the Experimental group (n = 15), and the Total sample (N = 32).

Learned New Things | 3.05, .70 | 3.77, .92 | 3.39, .88
  Now I better understand my problems. | 2.65, 1.12 | 3.73, 1.28 | 3.16, 1.30
  I know more now about where my problems came from. | 2.71, .92 | 3.47, 1.46 | 3.06, 1.24
  Now I know more about why some things are harder for me. | 3.12, 1.17 | 4.07, .88 | 3.56, 1.13
  I learned that with help, I can handle many of my problems. | 3.47, 1.13 | 4.07, .96 | 3.75, 1.08
  I learned that I am good at some things I didn't know about. | 3.59, 1.18 | 3.80, 1.32 | 3.69, 1.23
  I will think about myself differently now. | 2.76, 1.09 | 3.47, 1.19 | 3.09, 1.17
Child-Assessor Relationship | 3.91, .50 | 4.40, .70 | 4.14, .64
  _________ and I had fun together. | 3.76, .97 | 4.00, 1.51 | 3.88, 1.24
  _________ liked me. | 4.06, .75 | 4.47, .74 | 4.25, .76
  _________ seemed to care about me. | 3.94, .66 | 4.47, .74 | 4.19, .74
  _________ was mean to me. | 1.35, .61 | 1.00, .00 | 1.19, .47
  I liked _________. | 3.94, .75 | 4.40, .73 | 4.16, .77
  I looked forward to coming to see _________. | 3.12, .99 | 4.07, 1.10 | 3.56, 1.13
Feelings About Assessment | 3.43, .85 | 3.77, 1.02 | 3.59, .93
  I am proud of myself for doing the assessment. | 3.82, 1.13 | 3.93, 1.22 | 3.88, 1.16
  I'm glad I did the assessment. | 3.59, 1.23 | 4.00, 1.36 | 3.78, 1.29
  I felt the assessment was helpful. | 3.53, 1.28 | 4.00, 1.07 | 3.75, 1.19
  I felt the assessment was a waste of time. | 2.47, 1.18 | 1.73, 1.28 | 2.12, 1.26
  I felt the assessment was fun. | 3.24, 1.15 | 3.47, 1.41 | 3.34, 1.26
  I felt the assessment was boring. | 3.12, 1.27 | 3.07, 1.28 | 3.09, 1.25
Collaboration | 3.23, .61 | 4.00, .82 | 3.59, .81
  _________ explained why each test was important. | 3.35, 1.06 | 3.73, 1.03 | 3.53, 1.05
  _________ helped me understand why we were doing the tests. | 3.82, .81 | 4.07, .70 | 3.94, .76
  _________ helped me understand the results of the testing. | 2.76, .83 | 4.13, 1.06 | 3.41, 1.16
  _________ helped me understand things about myself from the tests. | 3.41, .94 | 4.20, .78 | 3.78, .94
  _________ wanted to know what I thought about my life. | 3.00, 1.00 | 3.93, 1.03 | 3.44, 1.11
  _________ asked me what it was like to take the tests. | 3.00, 1.32 | 3.93, 1.10 | 3.44, 1.29
Parent Understanding | 3.30, .59 | 3.99, .72 | 3.63, .73
  Maybe, after this assessment, my parents will realize its not all my fault. | 2.41, 1.06 | 3.80, 1.21 | 3.06, 1.34
  I think my parents learned a lot about me because of the assessment. | 3.24, 1.15 | 3.93, .88 | 3.56, 1.08
  I think my parents will understand me better now. | 3.35, .86 | 4.07, .88 | 3.69, .93
  I still don't think my parents get it. | 2.24, 1.09 | 1.80, 1.15 | 2.03, 1.12
  Maybe my parents will go easier on me now. | 2.71, 1.21 | 3.53, 1.30 | 3.09, 1.30
  My parents will never understand me. | 1.65, 1.06 | 1.60, 1.12 | 1.62, 1.07

Appendix M
Means and Standard Deviations of Items by Subscale for the PEAS

Values in each row are M, SD for the Control group (n = 17), the Experimental group (n = 15), and the Total sample (N = 32).

Learned New Things | 3.72, .49 | 4.01, .43 | 3.85, .48
  2. I learned new ways of interacting with my child. | 3.12, 1.05 | 3.73, .80 | 3.41, .98
  8. I am more aware of my child’s strengths. | 4.24, .44 | 4.40, .63 | 4.31, .54
  13. Now I know more about why my child acts the way he/she does. | 4.24, .56 | 4.40, .51 | 4.31, .54
  18. I didn’t learn anything new about my child from the assessment. (F) | 1.47, .51 | 1.40, .63 | 1.44, .56
  22. I learned a tremendous amount about my child from this assessment. | 4.06, .83 | 4.13, .64 | 4.09, .73
  27. Now I am more confused about how to handle my child. (F) | 1.82, 1.02 | 1.27, .46 | 1.56, .84
  32. Now I know specific things I can do to help my child. | 3.82, .81 | 4.53, .52 | 4.16, .77
  37. I wish I had learned more concrete ways to help my child day to day. (F) | 3.06, 1.03 | 2.47, 1.19 | 2.78, 1.13
  40. The assessment completely changed the way I view my child. | 2.94, 1.02 | 2.80, 1.08 | 2.87, 1.04
  46. I am better able to communicate with my child. | 3.29, .77 | 3.87, .64 | 3.56, .76
  51. Now I know what to expect from my child. | 3.76, .66 | 4.00, .54 | 3.88, .61
  54. The assessment has helped me have more patience with my child. | 4.06, .97 | 4.07, 1.03 | 4.06, .98
  58. I have lots of new ideas about how to parent my child. | 3.06, .90 | 3.53, .92 | 3.28, .92
  64. I understand my child so much better now. | 3.88, .78 | 3.87, .74 | 3.88, .75
Assessor-Child Relationship | 3.72, .55 | 4.19, .37 | 3.94, .52
  1. The assessor worked well with my child. | 4.18, 1.02 | 4.53, .52 | 4.34, .83
  5. My child did not like the assessor. (F) | 1.88, .99 | 1.27, .46 | 1.59, .84
  14. My child never really warmed up to the assessor. (F) | 2.12, .99 | 1.73, .80 | 1.94, .91
  29. The assessor never really understood my child. (F) | 1.94, .83 | 1.27, .46 | 1.62, .75
  35. My child looked forward to meeting with the assessor. | 2.88, .70 | 3.13, .83 | 3.00, .76
  42. My child felt comfortable with the assessor. | 3.65, .70 | 4.33, .62 | 3.97, .74
  45. The assessor got my child to work really hard. | 3.82, .73 | 3.87, .83 | 3.84, .77
  49. My child and the assessor really connected well. | 3.00, .71 | 3.73, .59 | 3.34, .75
  57. My child dreaded almost every meeting with the assessor. (F) | 2.18, 1.08 | 1.87, 1.24 | 2.03, 1.15
  61. The assessor seemed to like my child. | 3.76, .97 | 4.40, .63 | 4.06, .88
Negative Feelings | 1.75, .43 | 1.53, .40 | 1.65, .42
  6. The assessment process was very confusing. | 1.82, .73 | 1.47, .52 | 1.66, .65
  9. The assessment made me feel guilty. | 1.82, .95 | 1.67, .90 | 1.75, .92
  17. I am uncomfortable with how much the assessment revealed. | 1.35, .49 | 1.87, 1.25 | 1.59, .95
  23. The assessment made me feel ashamed. | 1.35, .49 | 1.40, .51 | 1.38, .49
  31. I felt blamed for my child’s problems. | 1.53, .62 | 1.20, .41 | 1.37, .55
  39. The assessment was a humiliating experience. | 1.35, .49 | 1.20, .41 | 1.28, .46
  50. The assessment made me feel like a bad parent. | 1.53, .62 | 1.47, .52 | 1.50, .57
  56. The assessment was overwhelming. | 2.88, 1.43 | 1.53, .74 | 2.25, 1.34
  60. At the end of the assessment, I was left feeling angry. | 1.35, .49 | 1.47, 1.06 | 1.41, .80
  62. I was anxious throughout the assessment. | 2.53, 1.18 | 2.00, 1.03 | 2.28, 1.17
Assessor-Parent Relationship | 4.04, .50 | 4.29, .43 | 4.16, .48
  3. The assessor was genuinely interested in helping us. | 4.29, .59 | 4.47, .52 | 4.38, .55
  10. I liked the assessor. | 4.00, .79 | 4.27, .70 | 4.13, .75
  15. The assessor liked me. | 3.47, .72 | 3.60, .74 | 3.53, .72
  19. I felt close to the assessor. | 2.88, .99 | 3.27, .70 | 3.06, .88
  26. I felt the assessor respected me. | 4.06, .56 | 4.27, .59 | 4.16, .57
  34. I felt the assessor was cold towards me. (F) | 1.59, .62 | 1.40, .63 | 1.50, .62
  41. I felt the assessor looked down on me. (F) | 1.35, .49 | 1.33, .49 | 1.34, .48
  44. I trusted the assessor. | 4.12, .70 | 4.47, .52 | 4.28, .63
  52. I felt judged by the assessor. (F) | 1.35, .49 | 1.20, .41 | 1.28, .46
  63. The assessor really listened to me. | 3.88, .86 | 4.47, .52 | 4.16, .77
Collaboration/Informed Consent | 3.80, .38 | 4.24, .33 | 4.01, .42
  4. I had a say in what the assessment focused on. | 3.59, .80 | 3.93, .71 | 3.75, .76
  11. The assessor helped me explain the assessment to my child. | 3.24, .75 | 4.33, .72 | 3.75, .92
  16. I was informed about each step of the assessment. | 3.41, 1.23 | 4.13, .74 | 3.75, 1.08
  20. I never really understood the point of the assessment. (F) | 1.41, .51 | 1.20, .41 | 1.31, .47
  24. I felt like part of a team working to help my child. | 4.18, .39 | 4.40, .63 | 4.28, .52
  28. I helped make sense of the test results. | 3.00, 1.06 | 3.47, .64 | 3.22, .91
  33. I understood the goals of the assessment. | 4.41, .51 | 4.67, .49 | 4.53, .51
  38. The assessor asked me if the assessment findings seemed right to me. | 3.35, .93 | 3.73, .96 | 3.53, .95
  47. No one ever told me what would happen during the assessment. (F) | 1.76, .75 | 1.47, .52 | 1.62, .66
  55. I felt that my opinion was valued. | 4.00, 1.06 | 4.40, .51 | 4.19, .86
Family Involvement | 2.95, .56 | 2.95, .68 | 2.95, .61
  7. I now see that our family will need to change to help my child. | 3.76, .83 | 3.86, .95 | 3.81, .87
  12. Our family has little to do with why my child has problems. (F) | 2.47, 1.07 | 2.87, 1.06 | 2.66, 1.07
  21. Many of my child’s difficulties have to do with our family. | 2.59, 1.18 | 2.40, .99 | 2.50, 1.08
  25. The assessment revealed how family members play a role in my child’s problems. | 2.76, .75 | 3.13, .99 | 2.94, .88
  30. I don’t believe our family makes my child’s problems worse. (F) | 3.18, .95 | 3.53, .92 | 3.34, .94
  36. My child is the only person in our family who needs to change. (F) | 1.47, .51 | 2.13, .83 | 1.78, .75
  43. My child is worse with our family than with other people. | 2.00, 1.00 | 2.20, 1.15 | 2.09, 1.06
  48. I now see how our family’s problems affect my child. | 2.82, 1.13 | 3.20, .86 | 3.00, 1.02
  53. My child’s problems are partly caused by other struggles in our family. | 2.06, 1.14 | 2.53, 1.25 | 2.28, 1.20
  59. My child struggles more when people in our family aren’t getting along. | 2.65, 1.32 | 2.87, .99 | 2.75, 1.16
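Because Appendices L and M report each subscale's group means, standard deviations, and sample sizes, a standardized difference between the control and experimental groups can be computed for any single subscale directly from these tables. The sketch below is a minimal illustration only, not a reproduction of the analyses reported in the body of the dissertation; it assumes a pooled-standard-deviation Cohen's d and uses the PEAS Collaboration/Informed Consent subscale values from Appendix M as the worked example.

```python
import math


def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Pooled-SD Cohen's d comparing group 2 to group 1 (positive = group 2 higher)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd


# Values copied from Appendix M, Collaboration/Informed Consent subscale:
# Control M = 3.80, SD = .38, n = 17; Experimental M = 4.24, SD = .33, n = 15.
d = cohens_d(3.80, 0.38, 17, 4.24, 0.33, 15)
print(f"Cohen's d = {d:.2f}")  # prints roughly 1.23
```

For this subscale the sketch yields a d of roughly 1.2, which by conventional benchmarks would be a large standardized difference; the same function can be applied to any other row in the two appendices.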
VITA

Shea McNeill Pilgrim attended St. Stephen’s School in Austin, Texas. In 1992, she entered the University of Texas at Austin. She received the degree of Bachelor of Arts from the University of Texas in December, 1996. During the following years she was employed by the Texas House of Representatives and started a family. In August of 2005, she entered the Graduate School at the University of Texas at Austin in the Department of Educational Psychology.

Permanent address: 1002 Lisa Drive, Austin, Texas 78733

This manuscript was typed by the author.