

Research Methods Literature Review

Prior to beginning work on this assignment, review the qualitative and quantitative research designs encountered so far in this course.

For your literature review, you will select one design from each of the following categories.



Descriptive
  • Archival
  • Observational
  • Correlational
  • Survey research

Quantitative experimental

  • Pretest-posttest control group
  • Posttest-only control group
  • Solomon four-group
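As a purely illustrative aside (not part of the assignment), the logic of the pretest-posttest control group design can be sketched in a few lines of code: each group is measured before and after the intervention, and the treatment effect is estimated as the difference between the groups' gains. The function name and the scores below are made up for illustration.

```python
# Hypothetical sketch of the gain-score logic behind a
# pretest-posttest control group design. Scores are invented.

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

def pre_post_effect(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Estimate the treatment effect as the treatment group's
    pre-to-post gain minus the control group's gain."""
    treat_gain = mean(treat_post) - mean(treat_pre)
    ctrl_gain = mean(ctrl_post) - mean(ctrl_pre)
    return treat_gain - ctrl_gain

# The treatment group gains 10 points on average, the control
# group gains 2, so the estimated effect is 8.
effect = pre_post_effect(
    treat_pre=[50, 55, 60], treat_post=[60, 65, 70],
    ctrl_pre=[52, 54, 56], ctrl_post=[54, 56, 58],
)
print(effect)  # 8.0
```

Subtracting the control group's gain is what distinguishes this design from a simple before-and-after comparison: it nets out change that would have occurred without the intervention.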


Qualitative
  • Ethnography
  • Phenomenology
  • Grounded theory
  • Narrative
  • Participatory action research (PAR)

Mixed methods


Visit the Research Methods research guide in the Ashford University Library and search the databases for a minimum of one peer-reviewed journal article published within the last 10 years about each of the research designs you selected. The articles must not be research studies using the designs. Instead, they must be about how to conduct a study using the design. Examples of acceptable articles for this assignment are listed at the Suggested Articles tab in the Research Methods research guide.

In your paper, briefly outline the topic you selected for your Final Research Proposal in Week One and apply the scientific method by suggesting both a specific research question and a hypothesis for the topic. Evaluate your chosen peer-reviewed articles, summarizing each and explaining how the research design described could be useful for designing original research on your topic. Compare and contrast the paradigms or worldviews inherent in the methodology associated with each research design. Apply professional standards and situate yourself as a researcher by identifying which of these approaches best fits with your worldview.

I am totally lost on this one. I did find articles that might be useful; if you find ones that may work better, please use those instead. I found 10, and I hope this is what I was supposed to look for.

The Research Methods Literature Review

  • Must be four to six double-spaced pages in length, formatted according to APA style
  • Must include a separate title page with the following:

    • Title of paper
    • Student’s name
    • Course name and number
    • Instructor’s name
    • Date submitted
  • Must use at least four peer-reviewed sources published within the last 10 years.
  • Must document all sources in APA style
  • Must include a separate reference page

Design and Control Issues in Qualitative Case Study Research

Gaynor Lloyd-Jones, Ph.D., CSPPD Coordinator, Medical Teaching Organization, University of Edinburgh, Scotland

International Journal of Qualitative Methods, 2(2), Spring 2003

Abstract: Some methodologists have pointed to similarities between experimental method and case study research in terms of design, theory testing and development. However, little is known about how these debates inform qualitative research rationales. The use of a sequential dual case study provided an opportunity to examine these issues and their impact on the unfolding research process. The interplay of inductive and deductive approaches was evident throughout in decisions determining the nature of the research enquiry.

Keywords: Qualitative case study research; experimental method; qualitative research design

Acknowledgements: The author gratefully acknowledges the support of the following: Dr. Tony Hak, who supervised the research; PH Holt Trust, for funding the research; and Dr. Roger Gomm, for his constructive comments on earlier versions of this article.

Citation Information: Lloyd-Jones, G. (2003). Design and control issues in qualitative case study research. International Journal of Qualitative Methods, 2(2), Article 4. Retrieved [INSERT DATE] from

Introduction

Much is sometimes made of the distinctions between qualitative and quantitative research design and the development of the enquiry process. Quantitative research honours the logic of experimental or correlational method in adhering to agreed rules and predetermined sequences, irrespective of emerging data and analysis. The role of the researcher is detached from the field of enquiry.
By contrast, qualitative design displays an interactive, dynamic, and emergent character in which the aims, strategies, data, analysis, and validity are woven together in the process of the study (Hammersley & Atkinson, 1995; Maxwell, 1996; Becker, 1996). The qualitative researcher is the key instrument in the design process, continually deploying reflexivity and evaluative skills to data analysis and to the decisions concerning the direction of the next step in the study. The design of each qualitative research study might therefore be considered unique. Yet dissenting opinions point to greater complexity beneath the surface of this polarized perspective. Hammersley (1992) argues that the qualitative-quantitative divide is artificially polarized, disguising both methodological similarity and diversity in consequence. In an analysis that emphasizes the trade-offs and overlaps between experimental, survey, and qualitative research, he concludes that no single approach is necessarily ideal and that selection inevitably involves loss as well as gain. This raises the possibility that the underlying logic between approaches may be shared, and also that different approaches may be strategically deployed to offset their particular disadvantages and advantages. Hammersley is not alone in seeing relationships between qualitative research and experimental method. Yin (1994) draws attention to similarities at the design level, seeing the single case study as analogous to a single experiment in terms of theory testing and development. What is shared between qualitative and quantitative research may therefore be more real than apparent (Becker, 1996). Although these arguments are well delineated in the literature, their impact on methodological decisions in research practice is less well documented.
Whether they can be used in qualitative case study research, what implications ensue from their adoption, and how they might influence the ongoing design, conduct, and study outcomes remain largely unanswered questions.

Qualitative research and experimental method

In order to place the discussion in context, three interrelated features of research methodology will be used to compare qualitative research and experimental method: theory testing, the nature and use of control, and the induction-deduction dimension. This is not intended as a comprehensive review but serves to highlight those aspects of research design that have practical implications for the present study. The ultimate aim of experimental method is to develop theory through repeated testing of related hypotheses. However, the outcomes of experiments are not generalizable per se but by reference to the related theory. In practice, much replication is required under similar and differing conditions before a theory may be upheld, rejected, or modified, and the point at which this occurs requires the exercise of judgement (Campbell & Stanley, 1963). Yin (1994) employs similar replication logic in his description of case study method, notably in his treatment of multiple case study design. He stresses the careful selection of cases that will either replicate (literal replication) or produce contrasting findings (theoretical replication) in line with the prevailing theory. Similar findings uphold the theory, whereas contradictory findings demand either theory modification or refutation, thus mimicking experimental method.
Though not all qualitative studies are intended, by any means, either to develop or test theory, theoretical inference has been proposed as a means of combating the problem of local-to-global generalization inherent in qualitative case study research (Gomm, Hammersley, & Foster, 2000; Yin, 1994). There is a small, but significant, point to be made concerning theory testing and threats to validity in experimental method. In general, an experiment tests only a single theory, although it may be designed to differentiate between two rival hypotheses. Therefore, valid inference from experimental method is prey to threats from unforeseen, conflicting theories or hypotheses despite the use of control measures (Cook & Campbell, 1979). Bem's (1967) alternative reading of Festinger's (1957) cognitive dissonance theory is a case in point. Qualitative researchers could argue that inductive approaches might be less prone to the problem, but this is true in terms of potential only. As in all research, omissions pose potential threats to validity similar to misinterpretations, treatment artefacts, or measurement error (Popper, 1972). In essence, experimental method relies on the logic of comparison, contrasting the outcomes of two conditions that vary by a single assigned variable. Validity threats from rival hypotheses are excluded by controlling extraneous, but not necessarily identified, influences on the outcome. It is important to differentiate between the use of comparison and control measures, for the former is no stranger to qualitative researchers. It is central to the analysis of grounded theory, implicit in natural experimental settings, and widely used in historical and political research where multiple cases are available for analysis (Ragin, 1994; Gomm et al., 2000).
In the latter instances, the comparisons relate to naturally occurring examples, avoiding the artificial contrivance of control. Experimental control is essentially a means to an end. It is the technique that allows the creation of different manufactured conditions that permit comparison and valid inference. The approach to control, and the degree to which it is pursued, differentiate between case study and experimental method (Hammersley, 1992). Experimental control measures act as a filter against generic validity threats, relieving the researcher of the need to identify rival hypotheses (Maxwell, 1996), although occasionally threats are specified (Campbell & Stanley, 1963). Experimental control encompasses a range of strategies such as pre- or post-testing, matched groups and contexts, randomization, and treatment equivalence, all devoted to close matching of the control and treatment groups. The absence of rigorous control measures is one reason for the pluralistic, diverse character of qualitative research design, as the researcher must seek out alternative explanations and hypotheses throughout the research process: design, data collection, and analysis (Maxwell, 1996). The success with which such threats are kept at bay will depend on the researcher's analytical and imaginative insights that inform methodological rationales and decisions. Control, as conceived in quantitative research terms, conflicts with the respect paid to context and naturalism in qualitative research. In practice, however, qualitative researchers do operate with lesser degrees of control (Hammersley, 1992). Data collection methods such as structured interviewing and focus groups inhabit a 'no man's land' between naturalism and control. In such situations, the choice of greater control may influence the data in ways that compromise the representativity of the subsequent analysis.
A similar, and often cited, distinction between the inductive approach of case study and the deduction of experimental method also blurs under scrutiny. Though hypothesis raising and testing are claimed as definitive of deductive reasoning, they are neither the exclusive preserve of quantitative research nor the only ways that deductive reasoning can be employed (Hammersley, 1992). Qualitative researchers employ both cognitive processes informally in the development of the enquiry process (Becker, 1993). Yet the deductive principles of experimental research do lead to a restrictive focus when control is relied on to exclude invalid inference. By contrast, qualitative researchers must remain open and alert to possible alternatives, and it is this quality that marks qualitative research as primarily inductive. It seems likely that the qualitative researcher will encounter choices that reflect some of these considerations during the research enquiry. Few qualitative researchers have utilized sequential case studies to develop and test theory in a manner similar to experimental method. Though practical considerations of time, scale, and feasibility discourage such endeavours, the assumption that the complex pluralistic nature of social life does not rest on universal laws constitutes the greater obstacle (Lincoln & Guba, 1985). Yet the ethnographic studies of Hargreaves (1967), Lacey (1970), and Ball (1981) offer a striking example from the qualitative case study literature of the development of differentiation-polarization theory. The theory proposes that streaming secondary school pupils, according to academic ability, results in polarization of their attitudes toward the school.
The streaming practices varied in the different schools studied, and the relationship between these practices and the degree of polarization found represented a test of theory through the medium of qualitative research (Hammersley, 1992). In this article, a similar strategy of theory development and testing is described, but within a single research study. The methodological rationales that directed the enquiry process are highlighted to explore the conceptual territory between qualitative design and experimental method.

Background to the study

The background to the study was a wave of curricular revision in medical undergraduate education in the United Kingdom, instigated by the General Medical Council (1993). The medical school at the University of Liverpool responded by developing a subject-integrated, problem-based learning (PBL) course. Small-group, enquiry-led learning replaced the former heavy diet of lectures, and discrete, discipline-based courses were combined in a curriculum that integrated natural, clinical, and social sciences throughout. The course was launched in 1996 with a pioneer intake of 200 students, while senior students continued on the traditional course, which was gradually phased out. PBL had already established a foothold in US/Canadian medical education but, despite this, there were few successful examples of institutions negotiating curriculum change of the scale contemplated by Liverpool.

Design of the study

The double case design employed in the research (see Table 1) was not predetermined at the start of data collection but emerged gradually in response to literature reviews, collected data, and analysis. This section describes the rationales and possibilities entertained at the start of the study.
As a medically qualified curriculum facilitator working in the medical school, the researcher was familiar with experimental research method and was attracted to the possibility of theory development and testing within Yin's (1994) model of multiple case study design. As a two-year period of qualitative data collection was planned, it was feasible to look at two separate academic sessions, each representing a single case study. The choice of cohorts and course years was not made at the outset, other than a decision to start data collection with the launch of the course. By default, the pioneer year thus became the first case study.

Table 1. Multiple case study design of the Liverpool study

  First case study: 1996/7 academic session (pioneer entry); peer social reference group – senior medical students; no peer educational reference group.
  Second case study: 1997/8 academic session; peer social reference group – senior medical students; peer educational reference group – pioneer entry group.

The choice of the second case study was left open, although three possibilities were considered. The one ultimately adopted was to study the next student entry on the PBL course, in effect comparing the experiences of two first-year cohorts in two case studies (Table 1). Alternatively, the pioneer cohort studied in the first case study could be followed into the second course year, thereby comparing the experience of different course years from the viewpoint of a single cohort. A third plan combined the two, researching the second first-year student entry plus the pioneer cohort experience in the second year of the course. The latter idea proved too ambitious for the resources of a single part-time researcher and was discarded early on. Reviewing the literature revealed three areas of relevance for study design development.
First, there was a series of ethnographic case studies of the medical student experience (Fox, 1957; Becker, Geer, Hughes, & Strauss, 1977; Haas & Shaffir, 1991), which could provide comparison with the present study. In two of these, student experience was characterized by insecurity attributed to impending professional responsibilities (Fox, 1957; Haas & Shaffir, 1991), but in the third, educational cueing (see Note 1) to faculty, tutors, and assessment was prominent and uncertainty was not in evidence (Becker et al., 1977). A separate review of the PBL literature showed that little was known about the nature of the PBL student experience. This was highly significant because PBL proponents claim that the actions of the typical PBL student differ from those of more conventional students. PBL students are self-directed learners who recognize their own individual knowledge deficiencies and take responsibility for satisfying their learning needs (Barrows & Tamblyn, 1980; Schmidt, 1983). Neither cueing to teachers nor to peers is compatible with PBL student action, for PBL is an emphatically individualistic educational approach. The outcomes of both literature reviews therefore justified a focus on the student perspective, which in essence became the case at the heart of each case study. A study of reference groups at a women's college in the United States during the depression years also fed into design development (Newcomb, Koenig, Flacks, & Warwick, 1967). In that study, Bennington College (now co-ed) students adopted contrasting political allegiances to those of family (Democrat) and background (Republican), attitudes that were retained 20 years later. Yet it was not the findings of the American study that were of interest to the Liverpool setting, but the application of the concept of reference groups.
On reflection, it was apparent that incoming students could employ senior students as reference groups for advice about social and educational matters. A closer analysis revealed potential distinctions between the experiences of the pioneer entry students and the following year entry in this respect. Clearly, both cohorts would have access to advice about social affairs, but this would not be true for education. The pioneer entry would lack a senior student reference group, as they were experiencing a quite different curriculum. The following year would revert to normative circumstances, as the pioneer entry became the senior educational reference group. At this stage, the analysis was speculative and clearly did not preclude other contextual differences between pioneer and later cohorts, but it tended to favour the study of different cohorts. However, the final decision was not taken until completion of the first case study analysis.

Methods

Details of the rationales underlying the unfolding enquiry process in relation to methods of data collection are described in full elsewhere (Lloyd-Jones, 2002; Lloyd-Jones & Hak, in press), but are only outlined in this article, which concentrates on the aspect of theory development within the design. Participant observation and direct observational methods formed the mainstay of data collection and were supplemented with interviews, focus groups, analysis of course documents and resources, and a survey derived from a nominal group analysis. Consent for the study was gained at the start from the Dean and all course directors. Every student received a copy of the research guidelines assuring them of their rights to confidentiality and of their right to decline participation. Staff tutors were approached individually on a similar basis.

First case study

Insecurity characterized the student experience during the first term.
Study practices contradicted the PBL model as students were observed cueing to staff, the given resources, and references. Interview data triangulated the observational data, and inspection of student notes also confirmed students' reliance on course resources. The insecurity proved problematical for research development as its causes could not be attributed with certainty. Though the students blamed uncertainty on the lack of an explicit, shared syllabus, the unfamiliarity and organizational teething problems of a novel course could not be excluded as contributory. Nor was it possible to assess how the absence of an educational reference group affected the pioneer students. During the spring term, insecurity gradually declined as peer interaction and social comparison increased among the cohort. Students compared workloads, notes, and PBL study, and modified private study accordingly if they deviated from the consensus. However, insecurity resurfaced in the summer with approaching assessment as students worried that the exclusive pursuit of PBL learning might risk examination failure by omitting unseen elements of the faculty agenda. For the first time, students articulated concern about their pioneer status, speculating about their knowledge and competence in comparison with students at other medical schools. The main points to emerge from an analysis of the first case study were insecurity, students' rejection of self-directed study practices in favour of faculty-directed learning, and the gradual development of collective identity and action among the cohort. These findings favoured the design in Table 1 for three reasons. First, the study practice findings were novel and unexpected. The assumption that PBL students acted according to PBL claims had not previously been challenged, although there was little supporting evidence for it.
The first case study findings could therefore be dismissed as atypical because of the peculiarities of context unless additional evidence could be found. Evidence of similar practices in a different student cohort in a more normative context would give greater robustness to the first case study findings. Second, studying a new cohort experience under more normative circumstances might test the speculation relating to reference groups and, third, help to illuminate the cause of insecurity in the first case study. Based on the logic of the study design and the first case study analysis, hypotheses were raised for the second case study, which predicted recurrences of all three phenomena in the second case study. However, the presence of the pioneer group as a source of educational advice was anticipated to reduce, but not eradicate, uncertainty. How this effect might be mediated was regarded as an emergent research question to be answered by data collection and analysis.

Second case study

The hypothesis on study practices was strongly upheld from the start of term as students were observed openly cueing to resources, tutors, and faculty, similar to their predecessors. Collective identity and action developed rapidly but, on the other hand, insecurity was striking by its absence. The refutation of the related hypothesis required explanation, and the answer was sought, and found, in interactions between the pioneer student entry and the new group of first-year students. Acting backstage of formal educational settings, the pioneer year had conveyed advice on a specific textbook, which had been adopted almost without exception. The effects of this were not only to standardize content knowledge and reduce insecurity, but also to undermine the individualistic aims of PBL learning.
However, uncertainty reappeared in focus group data in the spring term, prompted by a recent changeover of PBL tutorial groups and tutors. Students' former cues of tutor direction were rendered unreliable as they encountered unexpected variation in tutor and group practice. In neither case study had students' study practices conformed to those claimed by advocates for PBL. Though the timing and degree of insecurity in the two case studies were strikingly different, in both groups a consistent relationship was found between the existence of collective social groupings and the diminution of uncertainty. Group consensus and conformity appeared to allay insecurity in both case studies. Conversely, uncertainty was maximal when group consensus was either undeveloped, as at the start of the first case study, or challenged, as by group and tutor changes in the second case study.

Multiple case study design and the natural experiment

The dual case study design capitalized on the natural experimental features of the setting by comparing the experience of two successive student cohorts in naturally occurring, but different, contexts. The various contextual similarities and differences thus constituted a naturally occurring control structure. The institution, the curriculum, and medical student social culture could be regarded as consistent features across both case studies. On the other hand, initial problems of implementation in the first case study and growing staff familiarity with the course in the second were potential differences. However, the most striking distinction that emerged fully during data collection arose from the different relationships between first-year and senior students as a consequence of curriculum.
The initial speculation on senior student role and action in the early stages of the study proved justified in the findings of the second case study, where their advice on textbook use and workload at registration was universally adopted, immediately promoting educational conformity and undermining the PBL process. The design shares some features with Yin's (1994) model of multiple case study design, notably in his treatment of each case study as a separate entity that permits replication and in the testing of theory in the second case study. These are principles shared in common with experimental method, although in the natural world replication is too precise a term. The testing in the second case study occurs under some conditions that are shared with the first, but also under circumstances that uniquely differentiate between the case studies. The findings that appeared in both case studies therefore acquire greater robustness as they have withstood the influence of contextual variation. For instance, the relationship between insecurity, educational cueing, and collective identity held good in the second case study despite the educational referent action, the different triggers to uncertainty, and the temporal variations in the appearance of phenomena when compared to the first case study (Lloyd-Jones, 2002). The second case study data also served as a source, and means, of refining and developing theory. As an example, the low level of initial student insecurity in the second case study confounded expectations and triggered a search for explanations that ultimately led back to second-year student action. However, the design differs from Yin (1994) in at least two respects.
First, the approach taken with the first case study was exploratory and inductive, for the uncertainty of outcomes and the novelty of the situation militated against a deductive strategy so early in the study. This meant that theory emerged from data analysis of the first case study rather than, as Yin suggests, being developed deductively before any data are collected. Here, a more qualitative approach to study design was necessary to run neither the risk of prejudging events nor of being deaf to alternative hypotheses too early in the research. The first case study, therefore, served as an exploratory vehicle for theory development from which hypotheses or predictions were raised for testing in the second case study.

The second contrast lies in the coexistence of literal and theoretical replications within the second case study, which again imposes a more open and emergent stance on the researcher. Since the second case study was contingent on the first in terms both of data and design, it was not possible to identify at the start of the research what might constitute literal, and what theoretical, replications in Yin's (1994) terms. That could only be achieved following an analysis of the first case study data, and then only in the form of provisional hypotheses. The hypotheses founded on literal replications, raised for testing in the second case study, predicted that educational cueing and the influence of collective action on study practices would recur. These were upheld in the data. Another hypothesis, based on theoretical replication, anticipated diminished initial uncertainty, lessened by the presence and actions of the senior second-year students. However, exactly how the educational reference group action might be mediated was less certain and had to await further data collection.
Here the research reverted to the emergent, inductive approach of the first case study. The case study posits an alternative explanation for PBL student action and behaviour to the one outlined by educationalists (Barrows & Tamblyn, 1980; Schmidt, 1983). The model is derived from empirically grounded data that respect those social and contextual influences neglected in the educational theory. In terms of theory testing in experimental method, this constitutes an alternative hypothesis to the existing claims of PBL. That the qualitatively derived model is congruent with PBL research derived from a variety of methodologies suggests the findings of the present study might not be unique (Lloyd-Jones, 2002). Yet caution should be exercised in generalizing to other PBL curricula on two grounds. The first is that the study represents the equivalent of only two experiments; the second is the general acceptance among social researchers of probabilistic rather than deterministic laws governing social action. The discussion has been primarily concerned with maximizing research opportunities to deploy theory construction and testing, but there are limitations to the approach. Significant methodological implications flow from the design, since the logic of the multiple case study rests on comparing like with like. This is problematical for qualitative research, where depth is valued more highly than breadth, for there is the danger that the research may concentrate on non-equivalent elements for comparison. Furthermore, without knowledge of the relationship between the whole case and the constituent parts, as represented by the collected data, claims to generalization within the case study may be tenuous. Gomm et al.
(2000) have pointed to the lack of attention paid by qualitative researchers to generalization within the case study and to the cautions necessary in making assertions about the case on the basis of partial knowledge. In the present study, attempts to combat the problem were made through rigorous attention to the sampling of respondents, time, and events and through the use of a survey instrument (Lloyd-Jones, 2002), but the issue remains a potential validity threat.

Conclusion

Recent interest in qualitative research methodology has created opportunities to explore conceptual relationships between qualitative and quantitative design and methods. Though it is accepted by some researchers that theory development may be achieved by qualitative case study research, such examples are rare. A qualitative case study that blended notions of experimental method with qualitative research has been described and the design rationale examined. By rendering more explicit the arguments and decision-making processes that contribute to qualitative case study design, it is possible to gain a clearer assessment of a study's strengths and limitations, as well as to contribute to our understanding of qualitative case study design and methodology.

Notes

1. The term cueing, or educational cueing, refers to the way students seek clues or "cues" from the educational environment as to what the faculty or school wishes them to know and learn. In its most common form, students try to find out about assessment, particularly examination content, from their teachers. Cueing was particularly obvious in the study quoted in the article because PBL curricula do not specify curricular content, leaving it to the students to define what they think they should learn. This was a cause of insecurity to them. The students' problem became acute when they faced examinations, as there was no formal curriculum to refer to.
Consequently, they try to "read" tutors' behaviour and search the resources for clues as to what might be included in the assessment. The term "curriculum hunting" is sometimes used as a substitute.

References

Ball, S. J. (1981). Beachside comprehensive: A case study of secondary schooling. Cambridge, UK: Cambridge University Press.
Barrows, H. S., & Tamblyn, R. N. (1980). Problem-based learning: An approach to medical education. New York: Springer.
Becker, H. S., Geer, B., Hughes, E. C., & Strauss, A. L. (1977). Boys in white: Student culture in medical school. Chicago: Chicago University Press.
Becker, H. S. (1993). How I learned what a crock was. Journal of Contemporary Ethnography, 22, 28-35.
Becker, H. S. (1996). The epistemology of qualitative research. In R. Jessor, A. Colby, & R. Schweder (Eds.), Essays on ethnography and human development (pp. 53-72). Chicago: University of Chicago Press.
Bem, D. J. (1967). Self-perception: An alternative interpretation of cognitive dissonance phenomena. Psychological Review, 74, 198-200.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally College.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally College.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Fox, R. (1957). Training for uncertainty. In R. K. Merton, G. G. Reader, & P. L. Kendall (Eds.), The student-physician: Introductory studies in the sociology of medical education (pp. 207-241). Cambridge, MA: Harvard University Press.
General Medical Council. (1993). Tomorrow's doctors. London: General Medical Council.
Gomm, R., Hammersley, M., & Foster, P. (2000). Case study method. London: Sage.
Haas, J., & Shaffir, W. (1991).
Becoming doctors: The adoption of a cloak of competence. Greenwich, CT: JAI Press.
Hammersley, M. (1992). What's wrong with ethnography? London: Routledge.
Hammersley, M., & Atkinson, P. (1995). Ethnography: Principles in practice (2nd ed.). London: Routledge.
Hargreaves, D. H. (1967). Social relations in a secondary school. London: Routledge.
Lacey, C. (1970). Hightown grammar. Manchester, UK: Manchester University Press.
Lincoln, Y. S., & Guba, E. G. (1985). The only generalization is: There is no generalization. In Y. S. Lincoln & E. G. Guba, Naturalistic inquiry. Newbury Park, CA: Sage.
Lloyd-Jones, G. (2002). A multiple case study of the first year student perspective in a medical undergraduate PBL curriculum. Unpublished doctoral dissertation, University of Liverpool, England.
Lloyd-Jones, G., & Hak, A. (in press). Self-directed learning and student pragmatism. Advances in Health Sciences Education.
Maxwell, J. A. (1996). Qualitative research design: An interactive approach. Thousand Oaks, CA: Sage.
Newcomb, T. M., Koenig, K., Flacks, R., & Warwick, D. P. (1967). Persistence and change: Bennington College and its students after 25 years. New York: John Wiley & Sons.
Popper, K. R. (1972). Truth, rationality, and the growth of scientific knowledge. In Conjectures and refutations: The growth of scientific knowledge (pp. 215-250). London: Routledge and Kegan Paul.
Ragin, C. C. (1994). Constructing social research: The unity and diversity of method. Thousand Oaks, CA: Pine Forge Press.
Schmidt, H. G. (1983). Problem-based learning: Rationale and description. Medical Education, 17, 11-16.
Yin, R. K. (1994). Case study research: Design and methods (2nd ed.). Thousand Oaks, CA: Sage.
Journal of Counseling & Development ■ April 2013 ■ Volume 91. © 2013 by the American Counseling Association. All rights reserved. Received 07/31/11 Revised 04/13/12 Accepted 09/04/12 DOI: 10.1002/j.1556-6676.2013.00085.x

For practitioners in the field of counseling, the combining or mixing of qualitative and quantitative methodologies is not a new or unique phenomenon. In fact, as surmised by Powell, Mihalas, Onwuegbuzie, Suldo, and Daley (2008), "by definition, assessment, whether for purposes of program planning or treatment, necessitates the consideration of multiple sources of data" (p. 293). In addition, as stated in Section E (i.e., Evaluation, Assessment, and Interpretation) of the ACA Code of Ethics (American Counseling Association, 2005), counselors use both quantitative and qualitative assessments in practice. Counselor researchers and counselors-as-practitioners routinely collect and analyze qualitative and quantitative data as a necessary part of their profession. Therefore, the purpose of this article is threefold: (a) to demonstrate that regardless of philosophical stance, collecting quantitative data via psychometrically sound quantitative instruments during the qualitative interview process enhances interpretations by helping researchers better contextualize qualitative findings; (b) to explain the concept of the mixed methods interview; and (c) to provide an example demonstrating this strategy whereby a baseline was established using a quantitative scale and normative data as a mixed research approach.
On the basis of definitions provided by 19 leading methodologists of mixed methods research (or, more aptly named, mixed research, to denote the fact that more than methods typically are mixed, e.g., philosophical assumptions and stances, research questions), Johnson, Onwuegbuzie, and Turner (2007) defined mixed research as an intellectual and practical synthesis based on qualitative and quantitative research; it is the third methodological or research paradigm (along with qualitative and quantitative research).

Rebecca K. Frels, Department of Counseling and Special Populations, Lamar University; Anthony J. Onwuegbuzie, Department of Educational Leadership and Counseling, Sam Houston State University. The authors would like to acknowledge and thank John Harris, Applied Research Consulting, and Michael Nakkula, University of Pennsylvania, for the use of the Match Characteristics Questionnaire. The authors also thank Michael Karcher, University of Texas San Antonio, for his guidance in selecting the quantitative measure and locating other resources on school-based mentoring. Correspondence concerning this article should be addressed to Rebecca K. Frels, Department of Counseling and Special Populations, Lamar University, 223 Education Building, Beaumont, TX 77710 (e-mail: [email protected]).

Administering Quantitative Instruments With Qualitative Interviews: A Mixed Research Approach
Rebecca K. Frels and Anthony J. Onwuegbuzie

The authors demonstrate how collecting quantitative data via psychometrically sound quantitative instruments during the qualitative interview process enhances interpretations by helping researchers better contextualize qualitative findings, specifically through qualitative dominant crossover mixed analyses. They provide an example of this strategy, whereby a baseline was established using a quantitative scale and normative data to help interpret qualitative interviews, resulting in what they call a mixed methods interview.
Philosophical and practical implications are discussed. Keywords: qualitative interviews, qualitative dominant mixed analysis, crossover mixed analysis, mixed methods research, mixed research

It recognizes the importance of traditional quantitative and qualitative research but also offers a powerful third paradigm choice that often will provide the most informative, complete, balanced, and useful research results. Mixed methods research is the research paradigm that (a) partners with the philosophy of pragmatism in one of its forms (left, right, middle); (b) follows the logic of mixed methods research (including the logic of the fundamental principle and any other useful logics imported from qualitative or quantitative research that are helpful for producing defensible and usable research findings); (c) relies on qualitative and quantitative viewpoints, data collection, analysis, and inference techniques combined according to the logic of mixed methods research to address one's research question(s); and (d) is cognizant, appreciative, and inclusive of local and broader sociopolitical realities, resources, and needs. (p. 129)

If we take into account the integrative nature of counseling, it is surprising that relatively few counseling researchers combine or mix qualitative and quantitative data in their studies. Ray et al. (2011), who recently reviewed 4,457 articles from 1998 to 2007 in 15 ACA division-affiliated journals, identified only 171 mixed research articles, which represented only 3.84% of the total number of articles published in these journals. In fact, this finding is consistent with other researchers' studies examining counseling journals that documented the lack of mixed research articles for either empirical research articles or nonempirical research articles (e.g., theoretical/conceptual articles; Hanson, Creswell, Plano Clark, Petska, & Creswell, 2005; Leech & Onwuegbuzie, 2011).
Similar to other fields and disciplines, the low prevalence rates of mixed research articles published in counseling journals have occurred despite the exponential increase in the number of methodologically based mixed research articles that have been published in the literature (Ivankova & Kawamura, 2010), including two handbooks (i.e., Tashakkori & Teddlie, 2003, 2010) and numerous books (i.e., Andrew & Halcomb, 2009; Bergman, 2008; Collins, Onwuegbuzie, & Jiao, 2010; Creswell & Plano Clark, 2010; Greene, 2007; Hesse-Biber, 2010; Morse & Niehaus, 2009; Newman & Ridenour, 2008; Onwuegbuzie, Jiao, & Bostick, 2004; Plowright, 2011; Teddlie & Tashakkori, 2009) on mixed research, as well as guidelines written directly for counseling researchers that were published in ACA's flagship journal (i.e., Leech & Onwuegbuzie, 2010). These and other methodological works have demonstrated the utility of conducting mixed research. Moreover, although quantitative research is particularly useful for "answering questions of who, where, how many, how much, and what is the relationship between specific variables" (Adler, 1996, p. 5), it is not optimal for answering why and how questions. The converse is true for qualitative research. In contrast, mixed research can address both sets of questions within a single research study.
Alternatively stated, mixed research has been shown to be useful for addressing simultaneously both quantitative-based questions that deal with prevalence rates (i.e., descriptive research), relationships (i.e., correlational research, causal-comparative, quasi-experimental research), and cause-and-effect relationships (i.e., experimental research) and qualitative-based questions that lead to the examination of local processes, experiences, and perceptions of individuals such as counselees (e.g., biography, autobiography, life history, oral history, autoethnography, case study) and groups such as cultural groups (e.g., phenomenology, ethnography, grounded theory). Greene, Caracelli, and Graham (1989) identified five different research purposes for mixing quantitative and qualitative data: (a) triangulation (the intent is to seek convergence in data); (b) complementarity (the intent is to measure overlapping but different facets of a phenomenon); (c) development (the intent is to help develop or inform the other method); (d) initiation (the intent is to discover paradox and contradiction, new perspectives or frameworks, and the recasting of questions or results); and (e) expansion (the intent is to extend the breadth and range of inquiry). In addition, mixed research can be used to address a broader range of research questions than can monomethod studies (i.e., quantitative research alone or qualitative research alone).
For example, as identified by Plano Clark and Badiee (2010), mixed research can be used to address the following types of research questions: separate research questions (i.e., one or more quantitative research questions coupled with one or more qualitative research questions); general overarching mixed research questions (i.e., broad questions that are addressed using both quantitative and qualitative approaches); hybrid mixed issue research questions (i.e., one question with two distinct parts such that a quantitative approach is used to address one part and a qualitative approach is used to address the other part); mixed procedural/mixing research questions (i.e., narrow questions that direct the integration of the qualitative and quantitative strands of the study); combination research questions (i.e., at least one mixed research question combined with separate quantitative and qualitative questions); independent research questions (i.e., two or more research questions that are related, with each question not depending on the results of the other question[s]); dependent research questions (i.e., questions that depend on the results stemming from addressing another question); predetermined research questions (i.e., questions based on literature, practice, personal tendencies, and/or disciplinary considerations that are posed at the beginning of the study); and emergent research questions (i.e., new or modified research questions that arise during the design, data collection, data analysis, or interpretation phase). It is probable that many researchers might not conduct mixed research because of a lack of training or, as noted by Frels, Onwuegbuzie, Leech, and Collins (2012) and Onwuegbuzie, Frels, Leech, and Collins (2011), a lack of pedagogical information in published form.
Other reasons for the dearth of mixed research studies published in counseling journals might be philosophical (i.e., researchers' beliefs about the nature of knowledge, objectivity–subjectivity dualism), axiological (i.e., researchers' beliefs about the role of values and ethics), or ontological (i.e., researchers' beliefs about the nature of reality). In particular, at least some researchers mistakenly believe that the philosophical assumptions and stances of their quantitative-based research (e.g., postpositivism) or their qualitative-based research (e.g., constructivism, critical theory) prevent them from mixing quantitative and qualitative approaches. However, as demonstrated by Onwuegbuzie, Johnson, and Collins (2009), the ontological, epistemological, and methodological assumptions and stances representing the major research paradigms do not prevent researchers from collecting and analyzing both quantitative and qualitative data—at least to some degree. Table 1, which was created using Onwuegbuzie, Johnson, and Collins's (2009) comparison of paradigms, displays major characteristics associated with three qualitative-based paradigms (i.e., constructivism, critical theory, and participatory), one quantitative-based paradigm (i.e., postpositivism), and one mixed research-based paradigm (i.e., pragmatism-of-the-middle) with respect to three axiomatic components (i.e., ontological, epistemological, and methodological foundations). It can be seen from the table that the philosophical assumptions and stances underlying postpositivism allow postpositivist researchers to utilize some qualitative analysis techniques, especially those that yield frequency data such as word count (i.e., counting the number of times a word or words are used) and classical content analysis (i.e., counting the codes).
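The frequency-yielding techniques just mentioned (word count and classical content analysis) amount to tallying analyst-assigned codes. The following is a minimal illustrative sketch, not code from the article; the participants and codes are invented.

```python
from collections import Counter

# Hypothetical coded interview transcripts: each analyst-assigned code is
# listed once per tagged segment (participants and codes are invented).
coded_segments = [
    ("P1", ["uncertainty", "peer-support", "uncertainty"]),
    ("P2", ["peer-support", "cue-seeking"]),
    ("P3", ["uncertainty", "cue-seeking", "cue-seeking"]),
]

def classical_content_analysis(segments):
    """Tally how often each code appears across all transcripts."""
    counts = Counter()
    for _participant, codes in segments:
        counts.update(codes)
    return counts

code_counts = classical_content_analysis(coded_segments)
# e.g., code_counts["uncertainty"] -> 3
```

The resulting frequencies are exactly the kind of numeric output that, per the article, even a postpositivist stance accommodates from qualitative data.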
At the other end of the research paradigmatic continuum, philosophical assumptions and stances associated with qualitative inquiry—specifically constructivism—allow constructivist researchers (e.g., radical constructivists, cognitive constructivists, social constructivists/constructionists) to use, at the very least, descriptive statistics (i.e., measures of central tendency [e.g., mean, median, mode, proportion]; measures of variation/dispersion [e.g., variance, standard deviation]; measures of position [e.g., percentile rank, z score]; and measures of distributional shape [e.g., skewness, kurtosis]). As also seen in Table 1, philosophical assumptions and stances underlying both critical theory and participatory paradigms allow critical theorist researchers and participatory researchers, respectively, to use descriptive statistics and many forms of inferential statistics. In the mixed research tradition, the philosophical assumptions and stances that underlie mixed research represent the commensurability of paradigms. Johnson and Gray (2010) explained mixed methods thinking as considering how conflicting positions illuminate new learning. Onwuegbuzie, Johnson, and Collins (2009) described the coming together of philosophical assumptions through 11 philosophical stances. These stances fall within various points on a continuum.

TABLE 1. Contemporary Research Paradigms and Characteristics

Postpositivism
- Ontology: Social science inquiry should be objective.
- Epistemology: Researchers are neutral, emotionally detached, and should eliminate biases; empirically justify stated hypotheses.
- Methodology: Generalizations are time- and context-free; real causes of social scientific outcomes can be determined reliably and validly via quantitative (and sometimes qualitative) methods.
- Rhetoric: Use of formal, impersonal, passive voice and technical terminology; focuses on social laws.
- Qualitative analysis: Some qualitative analyses that generate numbers as part of the findings (e.g., word count, classical content analysis).
- Quantitative analysis: All forms of descriptive and inferential statistics for making external statistical generalizations.(a)

Pragmatism-of-the-Middle
- Ontology: Traditional dualisms are rejected; high regard for the influence of the inner world of human experiences in action.
- Epistemology: Knowledge is based on the reality of the world and constructed through experience; justification comes via warranted assertability.
- Methodology: Thoughtful/dialectical eclecticism and pluralism of methods and perspectives; determines what works and solves individual and social problems.
- Rhetoric: Use of both impersonal passive voice/technical terminology and empathetic, rich and thick descriptions.
- Qualitative analysis: All forms of qualitative analyses.
- Quantitative analysis: All forms of descriptive and inferential statistics.

Critical Theory
- Ontology: Social, political, cultural, ethnic, racial, economic, and gender values that evolve over time affect reality.
- Epistemology: Transactional/subjectivist; value-mediated findings.
- Methodology: Use of a dialogue or dialectical approach.
- Rhetoric: Use of critical discourse.
- Qualitative analysis: All forms of qualitative analyses.
- Quantitative analysis: Descriptive statistics; most forms of inferential statistics for internal statistical generalizations(b) and external statistical generalizations.

Constructivism
- Ontology: Multiple contradictory, but equally valid, accounts of the same phenomenon represent multiple realities.
- Epistemology: Co-created findings/meaning; knowledge is subjective and not separable from the knower.
- Methodology: Dialectical and impossible to differentiate fully causes and effects; uses inductive reasoning; time- and context-free generalizations are neither desirable nor possible.
- Rhetoric: Use of empathetic descriptions that are informal, detailed, rich, and thick.
- Qualitative analysis: All forms of qualitative analyses.
- Quantitative analysis: Descriptive statistics; some inferential statistics for internal statistical generalization but not external statistical generalization.

Participatory
- Ontology: The mind and given world order are co-created through subjective–objective reality.
- Epistemology: Experiential and practical for practice; co-created findings.
- Methodology: Political participation for collaborative action research; emphasizes the practical.
- Rhetoric: Use of language based on shared experiential context.
- Qualitative analysis: All forms of qualitative analyses.
- Quantitative analysis: Descriptive statistics; inferential statistics for both internal statistical generalizations and external statistical generalizations.

Note. This table was created based on definitions found in The Sage Dictionary of Qualitative Inquiry (3rd ed.), by T. A. Schwandt, 2007, Thousand Oaks, CA: Sage; and information from "Toward a Philosophy of Mixed Data Analysis," by A. J. Onwuegbuzie, R. B. Johnson, and K. M. T. Collins, 2009, International Journal of Multiple Research Approaches, 3, pp. 122–123.
(a) External statistical generalizations involve making generalizations, judgments, inferences, or predictions on data stemming from a representative statistical (i.e., optimally random and large) sample of the population from which the sample was drawn (Onwuegbuzie, Slate, Leech, & Collins, 2009).
(b) Internal statistical generalizations involve making generalizations, judgments, inferences, or predictions on data obtained from one or more representative or elite participants, such as key informants, subsample members, or politically important cases, of the sample from which the participant(s) was selected (Onwuegbuzie, Slate, et al., 2009).
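The descriptive statistics the article says even constructivist researchers may use (measures of central tendency, dispersion, and position) are all available in Python's standard library. A minimal sketch, with invented scores:

```python
import statistics

# Invented item scores from a small sample, for illustration only.
scores = [3, 4, 4, 5, 2, 4, 3, 5]

mean = statistics.mean(scores)      # central tendency
median = statistics.median(scores)  # central tendency
mode = statistics.mode(scores)      # central tendency
stdev = statistics.pstdev(scores)   # dispersion (population standard deviation)

def z_score(x, mu, sigma):
    """Measure of position: distance from the mean in SD units."""
    return (x - mu) / sigma

# For these invented data, a raw score of 5 sits roughly 1.29 SDs above the mean.
```

Measures of distributional shape (skewness, kurtosis) are not in the standard library but follow the same pattern of simple moment computations.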
As seen through the pragmatism-of-the-middle paradigm in Table 1, the major philosophical assumptions and stances do not prevent researchers from using one or more analysis types associated with one tradition (e.g., quantitative analysis) to analyze data associated with a different tradition (e.g., qualitative data)—a concept Onwuegbuzie and Combs (2010) called crossover mixed analyses.

Crossover Mixed Analyses

As a mixed research approach, crossover mixed analyses can be used for the following:
• Reduce: to condense the dimensionality of qualitative data/findings using quantitative analysis (e.g., exploratory factor analysis of qualitative data) and/or quantitative data/findings using qualitative techniques (e.g., thematic analysis of quantitative data; Onwuegbuzie, 2003; Onwuegbuzie & Teddlie, 2003)
• Display: to present visually qualitative and quantitative results within the same display (Onwuegbuzie & Dickinson, 2008)
• Transform: to convert quantitative data to be analyzed qualitatively (i.e., qualitizing data) and/or qualitative data into numerical codes that can be analyzed statistically (i.e., quantitizing data; Tashakkori & Teddlie, 1998)
• Correlate: to associate qualitative data with quantitized data and/or quantitative data with qualitized data (Onwuegbuzie & Teddlie, 2003)
• Consolidate: to merge multiple data sets to create new or consolidated codes, variables, or data sets (Onwuegbuzie & Teddlie, 2003)
• Compare: to examine side-by-side qualitative and quantitative data/findings (Onwuegbuzie & Teddlie, 2003)
• Integrate: to incorporate qualitative and quantitative data/findings either into a coherent whole or into two separate sets (i.e., qualitative and quantitative) of coherent wholes (Onwuegbuzie & Teddlie, 2003)
• Assert: to review all qualitative and quantitative data to yield meta-inferences (Smith, 1997)
• Import data: to use follow-up findings from qualitative analysis to inform the quantitative analysis (e.g., qualitative contrasting case analysis, qualitative residual analysis, qualitative follow-up interaction analysis, and qualitative internal replication analysis) or follow-up findings from quantitative analysis to inform the qualitative analysis (e.g., quantitative extreme case analysis, quantitative negative case analysis; Onwuegbuzie & Teddlie, 2003)

Therefore, quantitative researchers and qualitative researchers can use mixed research techniques without contradicting their underlying philosophical belief systems by conducting what is referred to as quantitative dominant crossover mixed analysis and qualitative dominant crossover mixed analysis, respectively.

Quantitative Dominant Crossover Mixed Analysis

According to Onwuegbuzie, Leech, and Collins (2011), who expanded the concept, quantitative dominant crossover mixed analysis is used when the researcher seeks to answer research questions through a postpositivist (quantitative) stance while also believing that qualitative data and analysis help address the research question(s) to a greater extent. This occurs at various levels. At one end of the spectrum—the highest level—integration involves combining one or more sets of inferential analyses with other types of qualitative analyses "for the purpose of integrated data reduction, integrated data display, data transformation, data correlation, data consolidation, data comparison, data integration, warranted assertion analysis, and/or data importation" (p. 377). At the lowest end of the spectrum, quantitative dominant crossover mixed analysis is the combination of one or more sets of inferential analyses with qualitative analyses that generate some frequency data (e.g., word count) because these data are closer to statistical data than are the data that would be generated by other qualitative analyses (e.g., constant comparison analysis, discourse analysis).
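The Transform item above (quantitizing) can be made concrete with a short sketch: qualitative theme assignments are converted into a binary participant-by-theme matrix whose columns can then be analyzed statistically. The participants and themes here are hypothetical, not data from the article:

```python
# Quantitizing (in the Tashakkori & Teddlie, 1998, sense): convert
# qualitative theme assignments into numeric codes. All names invented.
themes = ["trust", "structure", "growth"]
participant_themes = {
    "P1": {"trust", "growth"},
    "P2": {"structure"},
    "P3": {"trust", "structure"},
}

def quantitize(assignments, theme_order):
    """Build a binary participant-by-theme matrix (1 = theme present)."""
    return {
        pid: [1 if t in present else 0 for t in theme_order]
        for pid, present in assignments.items()
    }

matrix = quantitize(participant_themes, themes)

# Theme prevalence: proportion of participants endorsing each theme,
# a quantity that can feed correlation or group-comparison analyses.
prevalence = [sum(row[i] for row in matrix.values()) / len(matrix)
              for i in range(len(themes))]
```

The binary columns produced this way are what the Correlate and Reduce operations (e.g., correlating quantitized data, factor-analyzing themes) would operate on.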
Regardless of the level of integration, the quantitative strand would represent the dominant strand, with the qualitative strand being incorporated to address one or more of Greene et al.'s (1989) five purposes for mixing (i.e., triangulation, complementarity, development, initiation, and expansion), using one or more of the nine crossover analysis types.

Qualitative Dominant Crossover Mixed Analysis

According to Onwuegbuzie, Leech, and Collins (2011), qualitative dominant mixed analysis involves a philosophical stance whereby the researcher assumes a (qualitative) constructivist, critical theorist, or any stance that is associated with the qualitative research paradigm and also believes that the addition of quantitative data and analysis would address in more detail the research question(s). Building on this idea, Ross and Onwuegbuzie (2011) categorized the array of established quantitative analysis techniques into the following eight levels of complexity: Level 1, descriptive analyses (e.g., measures of central tendency, dispersion, position); Level 2, univariate analyses (e.g., independent samples t test, dependent samples t test, one-way analysis of variance); Level 3, multivariate analyses (e.g., multiple analysis of variance, multiple analysis of covariance, discriminant analysis, canonical correlation analysis); Level 4, analyses of group membership (e.g., exploratory factor analysis, cluster analysis, correspondence analysis, multidimensional scaling); Level 5, measurement techniques (e.g., confirmatory factor analysis, item response theory); Level 6, analyses of time and/or space (e.g., autoregressive models, integrated models, moving average models, geocoding, geostatistics, cartography); Level 7, multidirectional or multilevel analyses (e.g., structural equation modeling, hierarchical linear modeling); and Level 8, multidirectional and multilevel analyses
(e.g., multilevel structural equation modeling, multilevel item response theory, multivariate hierarchical linear modeling). Thus, at the lowest level of integration, the qualitative dominant crossover mixed analysis would involve combining one or more sets of qualitative analyses with descriptive statistics (i.e., Level 1 quantitative analysis). At a higher level of integration, the qualitative dominant crossover mixed analysis would involve combining one or more sets of qualitative analyses with exploratory analysis techniques (i.e., Level 4 quantitative analysis), such as by subjecting the emergent themes to an exploratory factor analysis (i.e., integrated data reduction; see Onwuegbuzie, 2003; Onwuegbuzie & Teddlie, 2003). At the highest level of integration, the qualitative dominant crossover mixed analysis would involve combining one or more sets of qualitative analyses with inferential statistics (i.e., Levels 2–3, 5–8). Whatever the level of integration, the qualitative strand would represent the dominant strand, with the quantitative strand being used in an attempt to fulfill one or more of Greene et al.'s (1989) five purposes for mixing. As Greene (2008) surmised, the combining of quantitative and qualitative analysis techniques "has not yet cohered into a widely accepted framework or set of ideas" (p. 14). Furthermore, Bazeley (2010) concluded that "there are surprisingly few published studies reporting results from projects which make more than very elementary use of the capacity to integrate data and analyses using computers" (p. 434). Therefore, the concept of crossover analysis has great potential for advancing the process of combining quantitative and qualitative data collection and data analysis techniques within the same framework. Indeed, Teddlie and Tashakkori (2009) declared that this concept represents "one of the most fruitful areas for the further development of MM [mixed methods] analytical techniques" (p. 281).
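As a concrete sketch of a qualitative dominant crossover analysis at a modest level of integration, quantitized counts of an emergent theme per interview could be compared across two groups with an independent-samples t statistic, a Level 2 univariate analysis in Ross and Onwuegbuzie's (2011) hierarchy. The data and groups below are invented; a full analysis would also report degrees of freedom and a p value:

```python
import statistics

# Invented counts of one emergent theme per interview for two
# hypothetical groups of participants.
group_a = [4, 5, 3, 6, 5]
group_b = [2, 3, 2, 4, 3]

def t_statistic(a, b):
    """Student's t for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / (pooled * (1 / na + 1 / nb)) ** 0.5

t = t_statistic(group_a, group_b)  # ~2.85 for these invented data
```

Here the qualitative strand (the themes) remains dominant; the statistic merely serves one of Greene et al.'s (1989) purposes, such as complementarity or initiation.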
To this end, the purpose of the remainder of this article is to advance further one of the two components of crossover analyses, namely, the qualitative dominant crossover analysis. Specifically, we illustrate how a qualitative dominant crossover analysis can enhance the quality of interpretation of interview data.

Qualitative Interviews

Interviews represent one of the most common ways of collecting data in qualitative research because they provide opportunities for the researcher to collect rich and meaning-making data (e.g., Roulston, 2010). Because of the therapeutic relationship and the role of a counselor, qualitative interviews are arguably more relevant for the field of counseling than for other fields. In fact, certain types of interviews, to a certain degree, can resemble the counseling interview process (e.g., Chenail, 1997; Ortiz, 2001). Thus, in many counseling specialties, including the field of marriage and family therapy, interviews have been the most utilized qualitative method (Gehart, Ratliff, & Lyle, 2001). As Chenail (1997) declared, interviewing is a natural form of inquiry in the field of counseling because "it is so similar to the way in which counselors and therapists interact with their clients in therapy sessions" (Abstract). We contend that the interview, as a natural mode of inquiry, can be enhanced when researchers/interviewers collect quantitative data alongside qualitative responses. Teddlie and Tashakkori (2009) referred to this strategy as within-strategy mixed methods data collection. We call it a mixed methods interview. Some examples of studies can be found in the literature wherein the researcher(s) developed and utilized interview formats that contained both open-ended and closed-ended items (e.g., Brannen, 2005).
However, our call is for an even more rigorous process of combining qualitative open-ended interview questions with items from one or more relevant (standardized) quantitative instruments (e.g., Likert-format scales, rating scales) that possess adequate psychometric properties (i.e., adequate score reliability; adequate score validity stemming from adequate content-related, criterion-related, and construct-related validity), whenever available, which allow the researcher(s) to contextualize further the qualitative interview responses.

Extracting standardized quantitative information—which represents only Level 1 complexity on Ross and Onwuegbuzie’s (2011) quantitative analysis continuum—alongside qualitative information from qualitative interviews enhances both representation and legitimation of the phenomenon of interest. Representation refers to the ability to extract an adequate amount of relevant information from each participant—optimally, under the conditions of saturation (Morse, 1995), particularly data saturation (i.e., when information emerges so repeatedly that the researcher can expect it and wherein the collection of more data appears to have no additional interpretive worth; Sandelowski, 2008) and theoretical saturation (i.e., when the researcher can assume that her or his emergent theory is adequately developed to fit any future data collected; Sandelowski, 2008). Information gleaned from the quantitative instrument(s) also lends clarity to the voice of a participant. More specifically, in Greene et al.’s (1989) typology, representation would be increased via enhanced complementarity, expansion, and development. Thus, by incorporating additional sources of information, qualitative researchers would obtain richer interpretations. In contrast, legitimation refers to the validity of interpretations that stem from the interview data.
Indeed, legitimation would be increased via the ability to compare and to contrast the qualitative and quantitative data extracted from the interview(s), again using Greene et al.’s (1989) triangulation and initiation.

Journal of Counseling & Development ■ April 2013 ■ Volume 91 189 Administering Quantitative Instruments With Qualitative Interviews

And by increasing both representation and legitimation by administering one or more standardized quantitative instruments, increased verstehen (i.e., understanding) would ensue. What follows is a heuristic example to illustrate, using a real study, the benefit of administering a standardized quantitative instrument as part of the qualitative interview process. It is our belief that our exemplar also will serve as a model for understanding the mixed research concepts previously discussed.

Heuristic Example

The example we provide here was written by Frels (2010), a professional school counselor (also the first author of the present article), using a qualitative dominant crossover mixed analysis within the context of a qualitative study wherein interviews represented the main data collection tool. The purpose of Frels’s (2010) study was to explore selected mentors’ perceptions and experiences of the dyadic mentoring relationship in school-based mentoring (SBM). A second purpose was to build on the qualitative body of research (Spencer, 2004, 2007) for understanding roles, purposes, approaches, and experiences of the relationship process with mentees (the dyadic relationship). The research explored SBM as a type of helping relationship facilitated by a mentor, involving the untapped resources of the psychotherapy literature and described by Spencer (2004), specifically, the dyadic relationship itself as the facilitator of change to affect both the mentor and the mentee.
Frels’s research questions were as follows:

Research Question 1: What are the experiences and perceptions of selected school-based mentors regarding roles, purposes, and approaches of mentoring within the dyadic relationship with elementary school students?

Research Question 2: What are the differences and similarities in experiences and perceptions of selected school-based mentors working with elementary school students as a function of ethnicity of the mentor?

As noted by Plano Clark and Badice (2010), three elements are key in the focus of any study: the content area, the purpose, and the research questions. Even though Frels’s (2010) research was a qualitative study, the research questions also might be considered as representing general overarching mixed research questions (i.e., broad questions that are addressed using both quantitative and qualitative approaches) for driving the data collection methods. Consequently, as explained by Plano Clark and Badice, research questions are inherently linked to environmental contexts that include theories and beliefs. Therefore, we explored belief systems and philosophies at the onset of the study to recognize the lens from which data would be collected. As a result, the driving research paradigm was determined to be what Johnson (2011, 2012) labeled as dialectical pluralism, which refers to an epistemology that requires the researcher to incorporate multiple epistemological perspectives. This philosophical stance lends itself to the use of a crossover mixed analysis, and we combined epistemological perspectives to include pragmatism-of-the-middle (Onwuegbuzie, Johnson, & Collins, 2009) and social constructionism (Schwandt, 2000).

Data Collection

To address the research questions, Frels (2010) conducted a multiple case study with 11 adult mentors (four men, seven women), with ages ranging from 28 to 70 years and ethnicities of African American (n = 5), Hispanic (n = 2), and White (n = 4).
Each of these mentors was paired with a mentee such that the pairing involved one or each of the following two mentee–mentor pairings: same-gender versus different-gender mentee–mentor pairings and same-ethnic versus different-ethnic mentee–mentor pairings. Although many forms of data were collected with regard to all dyad interactions—including observations, descriptive case notes, reflexive data, and debriefing data—interviews represented the major data collection technique for exploring the phenomenon of dyadic mentoring relationships. This mode of inquiry resonated with the first author’s identity and relational approach in research as a professional school counselor. Each mentor was interviewed separately on multiple occasions. Each interview, which lasted between 20 minutes and 60 minutes, was semistructured, with questions being purposefully created to gain insight into the experience of the dyadic relationship. Examples of interview questions include the following: What are your beliefs, thoughts, and opinions about the purpose of mentoring? What words, phrases, or images come to mind to describe the time you spend with your mentee? When you feel challenged in your relationship, what are some thoughts or beliefs that help?

In addition to the in-depth interviews, the 11 mentors completed a standardized quantitative instrument, the 62-item Match Characteristics Questionnaire (MCQ; Harris & Nakkula, 2008), which measures the quality of matching between mentors and mentees. The MCQ yielded good psychometric properties, with the score reliability pertaining to some of the subscales (e.g., Growth Focus, Support-Seeking From the Mentee) ranging in the .90s. The MCQ subscale scores were used to contextualize the position of each mentor relative to each other and also to obtain a richer description of each of the 11 participants. The subscale scores informed both the ensuing cross-case analyses and within-case analyses.

Data Analysis

Cross-case analysis.
The following segments are excerpts from Frels’s (2010) report that provide examples of how the MCQ subscale scores were used to enhance the richness of interpretations stemming from the cross-case analysis:

To legitimate the metathemes and themes, scores from the MCQ (Harris & Nakkula, 2008) and selected subscales were analyzed. Because norms are in the process of being established, scores have been established as percentiles by the authors of the MCQ (J. T. Harris, personal communication, June 2, 2010). On the whole, the selected mentors in my study scored high on every subscale of the MCQ. (pp. 197–198)

To explore relationship behaviors in each dyad and perceived program support (e.g., relating to Research Question 2), the following three subscales were examined for each mentor: (a) program support subscale (i.e., the degree to which mentors feel that the program is providing effective training, supervision, and support); (b) support-seeking broadscale (i.e., the degree to which mentors feel that their mentees seek their support in relation to personal issues and academics); and (c) mentor satisfaction subscale (i.e., the degree to which mentors feel that their match is growing stronger and producing good results for the mentee).

Interestingly, Savannah and Chad [pseudonyms of two participants of the study] scored lowest in the area of support-seeking as they did in the area of sharing. In addition, Savannah scored below the 50th percentile in all three categories: program support, support-seeking behavior, and mentor satisfaction. Of the 11 mentors, seven scored above the 50th percentile for all three categories. Two of 11 mentors scored higher than the 75th percentile for two of the three categories; and only one mentor (Savannah) scored low for all three categories. (pp.
200–201)

Comparing the MCQ subscale scores with qualitative responses involved the following crossover analyses: integrated data reduction, data transformation, data consolidation, data comparison, data integration, and warranted assertion analysis. Indeed, the selected mentors expressed satisfaction (with the exception of the participant, Savannah, who scored lower on the MCQ) in the interviews, and the use of the MCQ enriched this finding.

Within-case analysis. The profiles of MCQ subscale scores played an important role in Frels’s (2010) decision to select Savannah for a follow-up, in-depth within-case analysis. The following excerpt from Frels’s (2010) report distinguishes a unique profile for Savannah:

Savannah’s profile on the MCQ indicated an equal to or higher score than the 75th percentile on three (comfort, fun focus, future outlook) of the 10 subscales utilized to measure relationship characteristics. On three subscales (closeness, character development, relating focus), Savannah scored above the average range but below the 75th percentile. Additionally, Savannah scored particularly low in the areas of sharing (lower than average) and satisfaction (at the 25th percentile). Figure 32 [Figure 1 in the present article] depicts the scores of Savannah as they relate to the MCQ averages, the 75th percentile, and the 25th percentile. Interestingly, Savannah scored highest in the area of focusing on the future (Sf = 96 > 75th percentile = 80). As seen in [Figure 1 in the present article], Savannah’s profile on the MCQ was

[Figure 1. Averages of Selected Subscales From the Match Characteristics Questionnaire and the Profile of Savannah. The original chart, which cannot be reproduced here, plots Savannah’s scores against the subscale averages and the 75th and 25th percentiles for the subscales Closeness, Discomfort, Character Development, Fun Focus, Sharing Focus, Academic Focus, Relating Focus, Growth Focus, Future Outlook Focus, and Satisfaction.]

Note.
Adapted from “The Experiences and Perceptions of Selected Mentors: An Exploratory Study of the Dyadic Relationship in School-Based Mentoring,” by R. K. Frels, 2010, unpublished doctoral dissertation, pp. 212–213. Copyright 2010 by R. K. Frels.

very high or very low on various subscales. Hence, her profile is deemed: highly fluctuating. Table 27 [Table 2 in the present article] provides statements (i.e., qualitative data that support the MCQ responses). (p. 211)

As a result of the analysis, the table described in the excerpt (see Table 2) was a reference point that aligned the quantitative instrument with some of the qualitative findings. For example, as seen in Table 2, Savannah scored in the high fourth quartile for the MCQ subscale Future Outlook Focus. This technique of correlating scores with qualitative information involved the following crossover analyses: integrated data reduction, integrated data display, data transformation, data correlation, data consolidation, data comparison, data integration, and warranted assertion analysis.

During the interview, Savannah disclosed that she was unhappy with the progress that she and her mentee were making. Subsequently, Savannah scored in the lower third quartile of the MCQ subscale Satisfaction. As seen in Table 2, the data were integrated by displaying this MCQ quantitative score aligned with an example quotation from the interview. Savannah’s high expectations and unrealistic goals resulted in her frustration and her decision to discontinue mentoring (Frels & Onwuegbuzie, 2012a, 2012b). This concept resonates with other SBM literature (Karcher, Herrera, & Hansen, 2010), whereby goal-oriented interactions (e.g., focusing on the future) often are not sufficient indicators of relationship closeness in SBM.
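The case-selection logic described here (comparing each mentor’s subscale scores against percentile benchmarks and flagging a “highly fluctuating” profile for within-case follow-up) can be sketched in code. This is an illustration only: the cut scores, the subscale values, and the `is_fluctuating` threshold are hypothetical assumptions, not values from the MCQ or from Frels (2010).

```python
# Illustrative sketch of percentile-based case selection. The cut
# scores and all data below are hypothetical, not actual MCQ values.

PCT_25, PCT_75 = 40, 80  # assumed 25th/75th percentile benchmark scores

def profile_flags(scores):
    """Label each subscale score as low, average, or high relative
    to the assumed percentile benchmarks."""
    return {
        subscale: "low" if s < PCT_25 else "high" if s > PCT_75 else "average"
        for subscale, s in scores.items()
    }

def is_fluctuating(flags, threshold=0.5):
    """Call a profile 'highly fluctuating' when at least `threshold`
    of its subscales fall outside the average range."""
    extreme = sum(1 for f in flags.values() if f != "average")
    return extreme / len(flags) >= threshold

profile = {"Future Outlook Focus": 96, "Satisfaction": 35,
           "Sharing Focus": 30, "Closeness": 70}
flags = profile_flags(profile)
print(flags["Future Outlook Focus"], is_fluctuating(flags))  # high True
```

A participant whose profile is flagged this way would then be a candidate for the kind of in-depth within-case analysis described in the text.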
Furthermore, revealing the relationship between Savannah’s intent to leave mentoring and the MCQ Satisfaction subscale provided evidence of triangulation or convergence in data.

Finally, with reference to Greene et al.’s (1989) five different research purposes for mixing quantitative and qualitative data, themes from the constant comparison analysis of interview data and the MCQ scores were mapped and identified with one or more purposes. This type of data correlation map can provide further evidence of how a researcher can integrate qualitative findings with a quantitative instrument. For example, Frels (2010) presented the purpose of complementarity—to measure overlapping but different facets of a phenomenon (Greene et al., 1989)—with the case of Savannah. The constant comparison analysis of qualitative data yielded the theme of Too Many Questions, which seemed to hinder the dyadic relationship. Frels presented the subscale Satisfaction in relationship to the theme Too Many Questions to contextualize how the use of questions was inherent in Savannah’s relating style. For example, Savannah described her own disappointment with the mentoring experience through questions during an interview: “Why are you [myself] here? . . . you know . . . why did they [the mentees] want you to come? And was it their idea? Was it their parents’ idea? Was it their teacher’s idea?” Thus, with the use of Greene et al.’s (1989) purposes for mixing as a frame for a visual display, data integration in the qualitative dominant crossover mixed analysis was evident.

Conclusion

As we have shown in this article, supplementing open-ended interview responses with quantitative data from one or more psychometrically sound (standardized) quantitative instruments can increase the rigor of qualitative studies; this practice is consistent with many philosophical paradigms.
In addition, by recognizing the value of crossover mixed analyses, researchers might view philosophical integration much like how they would view the concept of theoretical integration in counseling. Oftentimes, counselors adhere to one guiding theory, which can be integrated with points in common with other theoretical concepts, including underlying philosophy, values, and data collection (Kottler & Montgomery, 2011). Because incorporating information from standardized quantitative instruments into the analysis of qualitative interview data represents the use of quantitative analysis techniques that are classified only as Level 1 complexity (Ross & Onwuegbuzie, 2011), this strategy should not contradict the philosophical assumptions and stances of any of the major qualitative-based research paradigms (e.g., constructivism, critical theory, participatory). Thus, even though supplementing qualitative interview data with quantitative data—what we call a mixed methods interview—leads to a mixed analysis, the resultant mixed analysis would be qualitatively dominant. Furthermore, as Guba and Lincoln (2011) wrote,

Are paradigms commensurable? Is it possible to blend elements of one paradigm into another, so that one is engaging in research that represents the best of both worldviews? The answer, from our perspective, has to be a cautious yes. This is so if the models (paradigms, integrated philosophical systems) share axiomatic elements that are similar, or that resonate strongly between them. (p. 117)

Therefore, our call for an even more rigorous process of combining qualitative open-ended interview questions with items from standardized quantitative instruments, via a mixed methods interview, represents the blending of elements of one paradigm into another that provides qualitative researchers from the field of counseling the best of both worldviews. Most important, the collection of quantitative data during the qualitative interview process allows researchers to compare each interviewee with extant normative data, including international norms, national norms, regional norms, local norms, and relevant cultural norms. Thus, we encourage qualitative researchers, whenever appropriate, to administer one or more quantitative instruments that tap the construct of interest to increase verstehen.

TABLE 2
Correlating Match Characteristics Questionnaire Subscale Scores With Qualitative Statements

Academic Focus (low fourth quartile): “I mean you have to have something—like if her teacher wanted her to work on fractions—she could have sent maybe something for her to work on with us, to work on together.”

Relating Focus (low fourth quartile): “Well, we played a couple of games but I think talking more connecting . . . because when we played the game—we didn’t talk so much.”

Future Outlook Focus (high fourth quartile): “And um, I [mentor] . . . really studied. I worked hard but I did well. And I thought ‘wow—wow’ and I just want to encourage her ’cause things didn’t come easy to me; they don’t come easy to her. And I wanted to give her a head start. Don’t wait till you’re in college or until after your kids are born to learn how to study.”

Satisfaction (low third quartile): “The fact that she [mentee] was shy and the fact that she still struggles in school, I wanted to help but those two things really kind of made me decide, you know, on what I did [to quit]. Some days, it was hard.”

Note. Examples are from Savannah’s interview. Adapted from “The Experiences and Perceptions of Selected Mentors: An Exploratory Study of the Dyadic Relationship in School-Based Mentoring,” by R. K. Frels, 2010, unpublished doctoral dissertation, pp. 212–213. Copyright 2010 by R. K. Frels.

References

Adler, L. (1996). Qualitative research of legal issues. In D.
Schimmel (Ed.), Research that makes a difference: Complementary methods for examining legal issues in education (NOLPE Monograph Series No. 56, pp. 3–31). Topeka, KS: National Organization on Legal Problems of Education.
American Counseling Association. (2005). ACA code of ethics. Alexandria, VA: Author.
Andrew, S., & Halcomb, E. J. (Eds.). (2009). Mixed methods research for nursing and the health sciences. Chichester, England: Wiley-Blackwell.
Bazeley, P. (2010). Computer-assisted integration of mixed methods data sources and analysis. In A. Tashakkori & C. Teddlie (Eds.), Sage handbook of mixed methods in social and behavioral research (2nd ed., pp. 431–467). Thousand Oaks, CA: Sage.
Bergman, M. (Ed.). (2008). Advances in mixed methods research: Theories and applications. Thousand Oaks, CA: Sage.
Brannen, J. (2005). Mixing methods: The entry of qualitative and quantitative approaches into the research process. International Journal of Social Research Methodology, 8, 173–184.
Chenail, R. J. (1997). Interviewing exercises: Lessons from family therapy. The Qualitative Report, 3. Retrieved from http://www.
Collins, K. M. T., Onwuegbuzie, A. J., & Jiao, Q. G. (Vol. Eds.). (2010). The research on stress and coping in education series: Vol. 5. Toward a broader understanding of stress and coping: Mixed methods approaches. Greenway, CT: Information Age Publishing.
Creswell, J. W., & Plano Clark, V. L. (2010). Designing and conducting mixed methods research (2nd ed.). Thousand Oaks, CA: Sage.
Frels, R. K. (2010). The experiences and perceptions of selected mentors: An exploratory study of the dyadic relationship in school-based mentoring (Unpublished doctoral dissertation). Sam Houston State University, Huntsville, TX.
Frels, R. K., & Onwuegbuzie, A. J. (2012a). The experiences of selected mentors: A cross-cultural examination of the dyadic relationship in school-based mentoring. Mentoring & Tutoring: Partnership in Learning, 20, 1–26.
doi:10.1080/13611267.2012.679122
Frels, R. K., & Onwuegbuzie, A. J. (2012b). Principles of play: A dialogical comparison of two case studies in school-based mentoring. International Journal of Play Therapy, 21, 131–148. doi:10.1037/a0028536
Frels, R. K., Onwuegbuzie, A. J., Leech, N. L., & Collins, K. M. T. (2012). Challenges to teaching mixed research courses. Journal of Effective Teaching, 12, 23–44.
Gehart, D. R., Ratliff, D. A., & Lyle, R. R. (2001). Qualitative research in family therapy: A substantive and methodological review. Journal of Marital and Family Therapy, 27, 261–270. doi:10.1111/j.1752-0606.2001.tb01162.x
Greene, J. C. (2007). Mixed methods in social inquiry. San Francisco, CA: Jossey-Bass.
Greene, J. C. (2008). Is mixed methods social inquiry a distinctive methodology? Journal of Mixed Methods Research, 2, 7–22. doi:10.1177/1558689807309969
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255–274. doi:10.3102/01623737011003255
Guba, E. G., & Lincoln, Y. S. (2011). Paradigmatic controversies, contradictions, and emerging confluences, revisited. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (4th ed., pp. 97–128). Thousand Oaks, CA: Sage.
Hanson, W. E., Creswell, J. W., Plano Clark, V. L., Petska, K. S., & Creswell, J. D. (2005). Mixed methods research designs in counseling psychology. Journal of Counseling Psychology, 52, 224–235. doi:10.1037/0022-0167.52.2.224
Harris, J. T., & Nakkula, M. J. (2008). Match Characteristic Questionnaire (MCQ). Unpublished measure, Harvard Graduate School of Education.
Hesse-Biber, S. N. (2010). Mixed methods research: Merging theory with practice. New York, NY: Guilford Press.
Ivankova, N. V., & Kawamura, Y. (2010). Emerging trends in the utilization of integrated designs in the social, behavioral, and health sciences. In A. Tashakkori & C.
Teddlie (Eds.), Sage handbook of mixed methods in social and behavioral research (2nd ed., pp. 581–611). Thousand Oaks, CA: Sage.
Johnson, R. B. (2011). Dialectical pluralism: A metaparadigm to help us hear and “combine” our valued differences. In S. J. Hesse-Biber (Chair), Addressing the credibility of evidence in mixed methods research: Questions, issues and research strategies. Symposium conducted at the meeting of the Seventh International Congress of Qualitative Inquiry, University of Illinois at Urbana-Champaign.
Johnson, R. B. (2012). Dialectical pluralism and mixed research. American Behavioral Scientist, 56, 751–754. doi:10.1177/0002764212442494
Johnson, R. B., & Gray, R. (2010). A history of philosophical and theoretical issues for mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Sage handbook of mixed methods in social and behavioral research (2nd ed., pp. 581–611). Thousand Oaks, CA: Sage.
Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1, 112–133. doi:10.1177/1558689806298224
Karcher, M. J., Herrera, C., & Hansen, K. (2010). “I dunno, what do you wanna do?”: Testing a framework to guide mentor training and activity selection. New Directions for Youth Development, 126, 51–69. doi:10.1002/yd.349
Kottler, J. A., & Montgomery, M. J. (2011). Theories of counseling and therapy: An experiential approach. Thousand Oaks, CA: Sage.
Leech, N. L., & Onwuegbuzie, A. J. (2010). Guidelines for conducting and reporting mixed research in the field of counseling and beyond. Journal of Counseling & Development, 88, 61–69. doi:10.1002/j.1556-6678.2010.tb00151.x
Leech, N. L., & Onwuegbuzie, A. J. (2011). Mixed research in counseling: Trends in the literature. Measurement and Evaluation in Counseling and Development, 44, 169–180.
doi:10.1177/0748175611409848
Morse, J. M. (1995). The significance of saturation. Qualitative Health Research, 5, 147–149. doi:10.1177/104973239500500201
Morse, J. M., & Niehaus, L. (2009). Mixed method design: Principles and procedures. Walnut Creek, CA: Left Coast Press.
Newman, I., & Ridenour, C. R. (2008). Mixed methods research. Chicago, IL: Southern Illinois University Press.
Onwuegbuzie, A. J. (2003). Effect sizes in qualitative research: A prolegomenon. Quality & Quantity: International Journal of Methodology, 37, 393–409. doi:10.1023/A:1027379223537
Onwuegbuzie, A. J., & Combs, J. P. (2010). Emergent data analysis techniques in mixed methods research: A synthesis. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (2nd ed., pp. 397–430). Thousand Oaks, CA: Sage.
Onwuegbuzie, A. J., & Dickinson, W. B. (2008). Mixed methods analysis and information visualization: Graphical display for effective communication of research results. Qualitative Report, 13, 204–225. Retrieved from QR13-2/onwuegbuzie.pdf
Onwuegbuzie, A. J., Frels, R. K., Leech, N. L., & Collins, K. M. T. (2011). A mixed research study of mixed research courses: Experiences and perceptions of instructors. International Journal of Multiple Research Approaches, 5, 169–202.
Onwuegbuzie, A. J., Jiao, Q. G., & Bostick, S. L. (2004). Library anxiety: Theory, research, and applications (Research Methods in Library and Information Studies, No. 1). Lanham, MD: Scarecrow Press.
Onwuegbuzie, A. J., Johnson, R. B., & Collins, K. M. T. (2009). A call for mixed analysis: A philosophical framework for combining qualitative and quantitative. International Journal of Multiple Research Approaches, 3, 114–139.
Onwuegbuzie, A. J., Leech, N. L., & Collins, K. M. T. (2011). Toward a new era for conducting mixed analyses: The role of quantitative dominant and qualitative dominant crossover mixed analyses. In M. Williams & W. P.
Vogt (Eds.), The Sage handbook of innovation in social research methods (pp. 353–384). Thousand Oaks, CA: Sage.
Onwuegbuzie, A. J., Slate, J. R., Leech, N. L., & Collins, K. M. T. (2009). Mixed data analysis: Advanced integration techniques. International Journal of Multiple Research Approaches, 3, 13–33.
Onwuegbuzie, A. J., & Teddlie, C. (2003). A framework for analyzing data in mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 351–383). Thousand Oaks, CA: Sage.
Ortiz, S. M. (2001). How interviewing became therapy for wives of professional athletes: Learning from a serendipitous experience. Qualitative Inquiry, 7, 192–220. doi:10.1177/107780040100700204
Plano Clark, V. L., & Badice, M. (2010). Research questions in mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Sage handbook of mixed methods in social and behavioral research (2nd ed., pp. 275–304). Thousand Oaks, CA: Sage.
Plowright, D. (2011). Using mixed methods: Frameworks for an integrated methodology. Thousand Oaks, CA: Sage.
Powell, H., Mihalas, S., Onwuegbuzie, A. J., Suldo, S., & Daley, C. E. (2008). Mixed methods research in school psychology: A mixed methods investigation of trends in the literature. Psychology in the Schools, 45, 291–309. doi:10.1002/pits.20296
Ray, D. C., Hull, D. M., Thacker, A. J., Pace, L. S., Swan, K. L., Carlson, S. E., & Sullivan, J. M. (2011). Research in counseling: A 10-year review to inform practice. Journal of Counseling & Development, 89, 349–359. doi:10.1002/j.1556-6678.2011.tb00099.x
Ross, A., & Onwuegbuzie, A. J. (2011, February). Complexity of quantitative analyses used in mixed research articles from the field of mathematics education. Paper presented at the annual meeting of the Eastern Educational Research Association, Sarasota, FL.
Roulston, K. (2010). Considering quality in qualitative interviewing. Qualitative Research, 10, 199–228.
doi:10.1177/1468794109356739
Sandelowski, M. (2008). Theoretical saturation. In L. M. Given (Ed.), The Sage encyclopedia of qualitative methods (Vol. 1, pp. 875–876). Thousand Oaks, CA: Sage.
Schwandt, T. A. (2000). Three epistemological stances for qualitative inquiry: Interpretivism, hermeneutics, and social constructionism. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 189–215). Thousand Oaks, CA: Sage.
Schwandt, T. A. (2007). The Sage dictionary of qualitative inquiry (3rd ed.). Thousand Oaks, CA: Sage.
Smith, M. L. (1997). Mixing and matching: Methods and models. In J. C. Greene & V. J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation No. 74, pp. 73–85). San Francisco, CA: Jossey-Bass.
Spencer, R. (2004). Studying relationships in psychotherapy: An untapped resource for youth mentoring. New Directions for Youth Development, 103, 31–42. doi:10.1002/yd.89
Spencer, R. (2007). “It’s not what I expected”: A qualitative study of youth mentoring relationship failures. Journal of Adolescent Research, 22, 331–354. doi:10.1177/0743558407301915
Tashakkori, A., & Teddlie, C. (1998). Applied Social Research Methods Series: Vol. 46. Mixed methodology: Combining qualitative and quantitative approaches. Thousand Oaks, CA: Sage.
Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: Sage.
Tashakkori, A., & Teddlie, C. (Eds.). (2010). Sage handbook of mixed methods in social and behavioral research (2nd ed.). Thousand Oaks, CA: Sage.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative techniques in the social and behavioral sciences. Thousand Oaks, CA: Sage.
Copyright of Journal of Counseling & Development is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express written permission. However, users may print, download, or email articles for individual use.
GUEST EDITORIAL

Qualitative and Mixed Methods in Social Work Knowledge Development

Deborah K. Padgett

Providing guidelines on qualitative and mixed methods in social work knowledge development is a daunting task. Quantitative methods also require careful consideration, but they rarely entail the degree of epistemological self-searching and ongoing consequential decision making that qualitative methods demand. As a reviewer of qualitative studies for academic journals and federal funders, and as the recipient of many such reviews (some quite negative), I have learned some lessons along the way. This editorial offers a few suggestions arising from these experiences that I hope will be of assistance to those interested in conducting qualitative research.

Qualitative methods have been contributing to knowledge development for a very long time—ethnography and field observation were around a century before the 20th century rise of quantification, with its emphasis on measurement and statistical analysis (Padgett, 2008). Nevertheless, the codification of qualitative methods is a relatively recent development, beginning in the late 1970s and growing by leaps and bounds ever since. Their embrace in social work came somewhat later than in education and nursing, but qualitative studies have since become commonplace in social work research, as evidenced by publication of such studies in social work journals and by numerous presentations at the annual conferences of the Society for Social Work and Research and the Council on Social Work Education.

Without revisiting the paradigm wars that have consumed much time and energy, suffice it to say that disagreements about epistemology contribute to (but are not entirely responsible for) the lack of consensus regarding what is “good” versus what is “bad” qualitative research. At the more constructionist end of the epistemological continuum, standards tend to conform more to the humanities than to the sciences.
At the other (postpositivist) end of the continuum, standards are not formulaic but are more concretely specifiable. This editorial hews closer to the postpositivist end but will hopefully resonate with social work researchers all along the continuum who wish to make their own contributions to knowledge. I will make seven points, both exhortations and recommendations:

1. The burden of proof is heavier but doable.
2. Choose an approach and stick with it.
3. Theories and concepts matter.
4. Social justice values do not have to be sidelined.
5. Research designs should be detailed and specific.
6. Writing the report: balancing description and interpretation.
7. Mixed methods require multiple inputs of expertise and effort.

Paying attention to these will not guarantee individual success in doing, publishing, and disseminating qualitative research, but it will likely help to raise standards (and the professional profile) of social work research in a broader sense. The distinction between what one does in a study and what one reports having done needs to be taken seriously, for there is too often a disconnect between these in qualitative research. A superbly conducted study that is inadequately written up will not make its rightful contribution and will likely run into problems getting published.

1. THE BURDEN OF PROOF IS HEAVIER BUT DOABLE

We live in a quantitative world, and adaptation to this reality requires anticipation of skepticism (however unfair). Elsewhere, I have promoted using one or more of six strategies as a means of strengthening a qualitative study's rigor (Padgett, 2008). These strategies are triangulation of data, peer debriefing and support, negative case analysis, maintaining an audit trail, prolonged engagement, and member checking. The choice of which of these to deploy will depend on a study's goals and design, but, in general, the more used the better.

CCC Code: 0037-8046/09 $3.00 ©2009 National Association of Social Workers
Rigorous qualitative research is accountable even if it follows flexibly applied guidelines. Criteria used in quantitative research do not apply. Thus, most types of validity (internal, external, and measurement related) are not appropriate, but cultural or ecological validity may be on-target. Similarly, reliability and replicability are not suitable criteria because they imply fidelity and repetition rather than fluidity and uniqueness.

Lincoln and Guba's (1985) concept of trustworthiness is the most widely used global standard for adjudging qualitative studies. A trustworthy study is one that is carried out ethically and whose findings represent as closely as possible the experiences of the participants. Because trustworthiness is not a matter of blind faith or glib assurances, the burden of proof is on the qualitative researcher to carry out the study as rigorously as possible and to faithfully give an account of what happened.

In one of the best published examples I have found, Morrow and Smith (1995) conducted a grounded theory study of women who survived childhood sexual abuse in which they used all six strategies for rigor and reported exactly how that was done. They also argued that their study demonstrated evidentiary adequacy by reporting on the breadth of the data: 220 hours of audio- and videotapes, 165 hours of interviews, 24 hours of group sessions, and 25 hours of follow-up interactions over a period of 16 months. Data for analysis exceeded 2,000 pages of transcriptions, field notes, and documents (Morrow & Smith, 1995). It is uncommon to see a qualitative study that involves such an expansive effort. Certainly, breadth should not be mistaken for depth, and this sort of evidentiary accounting cannot fully convey the intellectual and emotional engagement with the data that distinguishes qualitative research. But it does help in enhancing trustworthiness.

2. CHOOSE AN APPROACH AND STICK WITH IT

While conducting a meta-synthesis of 62 qualitative studies of women with AIDS, Sandelowski and Barroso (2003) received a rude awakening: The studies were virtually unclassifiable. For example, a "phenomenological" study used coding from grounded theory, and a "case study" was nothing more than a lengthy case description. Given the diversification in qualitative methods, one may choose to do ethnography, grounded theory, narrative analysis, phenomenological analysis, or case study analysis along with other, less-known approaches. Each of these has, to varying degrees, a codified methodology that distinguishes it from the others. The researcher is wise to adhere to one of these methods and to cite leading texts describing it. One may mix qualitative approaches; for example, an ethnographer might carry out a grounded theory analysis of audiotaped interviews, and this could lead to separate studies or to mixing within the confines of a single study. But any qualitative study should reveal a consistency and integrity of approach that is easily recognized by the reader and the reviewer.

3. THEORIES AND CONCEPTS MATTER

Qualitative studies do not take place in an intellectual vacuum; theories and concepts are used to inform but not constrain. Grounded theorists, for example, refer to "sensitizing concepts" and the requirement that they earn their way (Charmaz, 2006) into the findings. Some research topics tap into a deep reservoir of available knowledge, and others represent virtually uncharted territory. Patricia Attia, a doctoral student I advised, chose to study Orthodox Jewish runaways, a problem unknown and unacknowledged when she began to develop her dissertation proposal. Yet even this new area of interest could be linked to previous literature on runaway youths and ethnoreligious identity maintenance.
Ben Henwood, another doctoral advisee of mine, is interested in how case managers work with homeless people with serious mental illness. His theoretical and conceptual foundations range from organizational theories to psychotherapeutic concepts such as the working alliance.

Qualitative studies may draw on several theoretical frameworks at once. They may also draw in new theories during analysis, and they may produce midlevel theories as part of their findings. In our National Institute of Mental Health (NIMH)-funded study of homeless people with serious mental illness (Padgett, Hawkins, Abrams, & Davis, 2006; Padgett, Henwood, Abrams, & Davis, 2008), a priori theoretical lenses included empowerment (Freire, 1973), social ecology (Bronfenbrenner, 1979), and capabilities (Nussbaum, 1997) theories.

Social Work, Volume 54, Number 2, April 2009

During analyses of the interview data, two additional theories were drawn in as a natural fit: feminist theory (Padgett et al., 2006) and Giddens' (1990) theory of ontological security (Padgett, 2007). We also developed a grounded theory to explain a key outcome of interest: engagement and retention in services (Padgett et al., 2008).

Of course, there is the very real danger of theory and conceptual overkill crowding out the inductive thinking that makes qualitative studies uniquely valuable. It takes time and experience to get the balance right. Inductive thinking ensures that data are approached from a fresh perspective and that theoretical concepts are held lightly and discarded if not found to be relevant to the data.

4. SOCIAL JUSTICE VALUES DO NOT HAVE TO BE SIDELINED

One of the primary dividing lines between quantitative and qualitative methods has been the unapologetic embrace of social justice values by practitioners of the latter. Social work researchers are conversant in the language of empowerment and share a commitment to social welfare policies and practices that are equitable and humane.
In public health, the rise in popularity of community-based participatory research (Israel, Eng, Schulz, & Parker, 2005) attests to the embrace of empowerment values by other professions. Yet there persists a not-unfounded belief that socially conscious values are incompatible with rigorous research. Scientific review committees, concerned about bias, are prone to look askance at studies that appear to tilt more toward ideology than methodology.

This does not have to be an either-or situation. In the NIMH study mentioned earlier, we drew on Freire's (1973) empowerment theory and built the study around foregrounding consumer input and egalitarian relationships between researchers and study participants (Padgett & Henwood, in press). The caveat, probably obvious by now, is that rigorous methods are vital even when social values are brought in to infuse a study with larger meaning. After all, advocacy without empirical support is a far less credible stance.

5. RESEARCH DESIGNS SHOULD BE DETAILED AND SPECIFIC

Flexibility, a hallmark of qualitative inquiry, does not mean that a study is haphazard or unsystematic. Qualitative designs can be seen as road maps, with allowances made for detours and nonlinear progress. That said, their development and implementation is an exercise in specification both before and after the fact. At the planning stage, several questions are addressed and answered. At the write-up stage, one describes what was done and why (with the understanding that detours were warranted and defensible).
A qualitative design typically entails description of the following: sample size, types of data to be collected, sampling and recruitment techniques (including inclusion/exclusion criteria and procedures for obtaining voluntary informed consent), data collection procedures, data management and analysis plans, and what strategies for rigor will be used. Virtually all qualitative studies use some form of purposive sampling, but under this rubric are a number of options (maximum variation sampling, criterion sampling, intensity sampling, and so on) that can be used. Qualitative data collection may be retrospective (as in life history interviews), or it may be prospective (multiple points of data collection proceeding over a period of time). A study may revolve around group comparisons, or it may zero in on a specific population, entity, or event. Describing one's design in detail does not preclude the qualitative caveat that flexibility will prevail over rigidity if the study's goals can thereby be better met (hence the permissible detours).

6. WRITING THE REPORT: BALANCING DESCRIPTION AND INTERPRETATION

Qualitative researchers originally preferred book-length monographs but have long since come to recognize that peer-reviewed journals are the preferred outlet in academia. A few journals (Qualitative Health Research, Qualitative Social Work, Qualitative Sociology) are dedicated to qualitative inquiry, but the vast majority are predominantly quantitative and thus must grapple with how to conduct fair reviews of qualitative submissions in the absence of prescriptive standards for quality. Decisions about how to frame and present qualitative findings can make or break a study's publication prospects.
As alluded to in point 2, the ideal scenario involves maintaining a clear alignment between one's choice of method and the explicit terminology used to describe the study and its procedures (if invoked, epistemology should also be compatible). The method section is critical. Here, crisp, factual, and thorough specification pays off, including description of procedures for recruitment and consent, sample size, number of interviews per participant, length of data collection period, training and supervision of interviewers, and so on.

Acknowledgment of a study's strengths and limitations is an area rarely discussed in qualitative research. In most instances, one need not apologize for the sample size, and a qualitative study is not suspect because it "lacks generalizability." At the same time, the depth and intensity of data may leave something to be desired (for example, if there was only one interview per participant or some strategies for rigor were not used even though they would have been appropriate). Explanation of what is meant by "saturation" may be necessary to familiarize readers (and reviewers) with this concept as the guidepost for knowing when to end data collection and analysis.

Presentation of a study's results varies by approach but typically involves conceptual findings along with direct quotations. This mix of description and interpretation is a delicate balance; too much of the former makes the study appear simplistic, and too much of the latter makes it appear contrived (Creswell, 2007, suggested ratios of 70/30 or 60/40, favoring description).
Balancing interpretation and description entails considerations of "voice." As my colleague Debbie Gioia noted in a recent e-mail exchange, "it's about voice (researcher and participant) and not just quotes." Thus, too heavy reliance on direct quotation does not necessarily honor what participants said, even as it compromises a study's ultimate contribution to knowledge. To be sure, qualitative methods offer a degree of interpretive latitude that is daunting, especially for the novice researcher. Finding one's voice requires self-discipline and constant referencing of the data.

7. MIXED METHODS REQUIRE MULTIPLE INPUTS OF EXPERTISE AND EFFORT

Mixed methods are rising in popularity, yet their design and conduct require careful consideration (see Creswell, 2003, for guidance on designing mixed-methods studies). The required expertise in quantitative and qualitative methods does not necessarily rule out the solo investigator, but this approach is far more plausible for larger resourced studies and teams of investigators. Decisions need to be made and specified regarding whether the mixing is to be done sequentially or concurrently and whether it will be qualitative-dominant or quantitative-dominant. The choice of which methods to mix depends on compatibility and portability. Thus, while focus groups are a popular choice among quantitative-dominant researchers, ethnography is far less commonly adopted (or feasible). Similarly, qualitative-dominant researchers may use scaled measures as a small component of in-depth interviews, but they are not likely to incorporate a large-scale survey into their mixed-methods study. As might be expected, there are serious constraints attendant upon the writing and publishing of mixed-methods research: journals and grant funders do not allocate extra space for such efforts.
Perhaps the most daunting challenge is integrating findings from the two "sides"; it is far easier when the two sets of findings corroborate or complement each other than when they conflict (Padgett, 2008). Despite their demands, mixed-methods studies present unique possibilities for synergy and knowledge growth that mono-method studies cannot match.

CONCLUSION

This editorial represents a contribution to what is already a lively dialogue in social work research. The stakeholders are many: students, experienced researchers, journal editors, and reviewers, to name only a few. There are other topics on the horizon deserving of attention. Secondary analysis of qualitative data, for example, is becoming increasingly popular and deserves attention as distinct from quantitative secondary analysis (Thorne, 1998). The rise of evidence-based practice, with its emphasis on systematic reviews, presents unprecedented challenges to those who seek to synthesize knowledge through aggregate reviews of qualitative studies. Last but not least, qualitative social work researchers continue to have epistemological differences that underlie questions about the borderland between practice and research.

In conclusion, the development and expansion of the knowledge base in social work research is a dynamic enterprise that depends on contributions from diverse empirical methods. Shared commitment to transparency and rigor unites quantitative and qualitative methods even as their respective strengths are complementary and necessarily distinct.

REFERENCES

Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Cambridge, MA: Harvard University Press.
Charmaz, K. (2006). Constructing grounded theory. Thousand Oaks, CA: Sage Publications.
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage Publications.
Creswell, J. W.
(2007). Qualitative inquiry and research design (2nd ed.). Thousand Oaks, CA: Sage Publications.
Freire, P. (1973). Pedagogy of the oppressed (M. B. Ramos, Trans.). New York: Seabury Press.
Giddens, A. (1990). The consequences of modernity. Oxford, England: Polity Press.
Israel, B. A., Eng, E., Schulz, A. J., & Parker, E. A. (Eds.). (2005). Methods in community-based participatory research for health. San Francisco: Jossey-Bass.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.
Morrow, S. L., & Smith, M. L. (1995). Constructions of survival and coping by women who have survived childhood sexual abuse. Journal of Counseling Psychology, 42, 24-33.
Nussbaum, M. (1997). Capabilities and human rights. Fordham Law Review, 66, 273-300.
Padgett, D. K. (2007). There's no place like (a) home: Ontological security in the third decade of the "homelessness crisis" in the United States. Social Science & Medicine, 64, 1925-1936.
Padgett, D. K. (2008). Qualitative methods in social work research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Padgett, D. K., Hawkins, R. L., Abrams, C., & Davis, A. (2006). In their own words: Trauma and substance abuse in the lives of formerly homeless women with serious mental illness. American Journal of Orthopsychiatry, 76, 461-467.
Padgett, D. K., Henwood, B., Abrams, C., & Davis, A. (2008). Engagement and retention in care among formerly homeless adults with serious mental illness: Voices from the margins. Psychiatric Rehabilitation Journal, 31, 226-233.
Padgett, D. K., & Henwood, B. F. (in press). Obtaining large-scale funding for empowerment-oriented qualitative research: A report from personal experience. Qualitative Health Research.
Sandelowski, M., & Barroso, J. (2003). Writing the proposal for a qualitative research methodology project. Qualitative Health Research, 13, 781-820.
Thorne, S. (1998). Ethical and representational issues in qualitative secondary analysis.
Qualitative Health Research, 8, 547-555.

Deborah K. Padgett, PhD, MPH, is professor, Silver School of Social Work, New York University, Ehrenkranz Center, 1 Washington Square North, New York, NY 10003-6654; e-mail: [email protected]
RESEARCH INNOVATIONS AND RECOMMENDATIONS

Alternatives to the Randomized Controlled Trial

Public health researchers are addressing new research questions (e.g., effects of environmental tobacco smoke, Hurricane Katrina) for which the randomized controlled trial (RCT) may not be a feasible option.
Drawing on the potential outcomes framework (Rubin Causal Model) and Campbellian perspectives, we consider alternative research designs that permit relatively strong causal inferences. In randomized encouragement designs, participants are randomly invited to participate in one of the treatment conditions, but are allowed to decide whether to receive treatment. In quantitative assignment designs, treatment is assigned on the basis of a quantitative measure (e.g., need, merit, risk). In observational studies, treatment assignment is unknown and presumed to be nonrandom. Major threats to the validity of each design and statistical strategies for mitigating those threats are presented. (Am J Public Health. 2008;98:1359-1366. doi:10.2105/AJPH.2007.124446)

Stephen G. West, PhD, Naihua Duan, PhD, Willo Pequegnat, PhD, Paul Gaist, PhD, MPH, Don C. Des Jarlais, PhD, David Holtgrave, PhD, José Szapocznik, PhD, Martin Fishbein, PhD, Bruce Rapkin, PhD, Michael Clatts, PhD, and Patricia Dolan Mullen, DrPH

Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.
- John Tukey

THE RANDOMIZED CONTROLLED trial (RCT) has long been the gold standard for clinical research, representing the best way to determine efficacy and effectiveness for many intervention and prevention programs. However, public health researchers are increasingly addressing questions for which the RCT may not be a practical (or ethical) option or for which the RCT can be complemented by alternative designs that enhance generalization to participants and contexts of interest.

When structural or policy interventions are being examined, practical problems in conducting RCTs may arise; for example, research participants may not want to be randomized, randomization may not be feasible or not accepted in the research context, or only atypical participants may be willing to be randomized.
Such problems might be a concern in studies of the effects of environmental tobacco smoke on nonsmokers or of the effects of the severe disruption of Gulf Coast communities by Hurricane Katrina on HIV risk behaviors and medical care. Only atypical participants may agree to participate in the evaluation of a faith-based intervention. Highly religious participants may refuse to be assigned to a non-faith-based treatment group, whereas nonreligious participants may refuse or be unable to participate sincerely in a faith-based group. Randomization may be precluded if the religious organization implementing the intervention strongly believes that all people desiring the faith-based intervention should receive it. With one exception, our focus is on designs in which participants are assigned to treatment or control conditions. Parallel designs exist in which settings, time, or even dependent measures are the unit of assignment.

THE RANDOMIZED CONTROLLED TRIAL

The RCT has its origins in the pioneering work of the English scientist and statistician Sir Ronald Fisher in agriculture during the 1920s and 1930s. Fisher's key insight was that random assignment of units to treatment conditions leads to 2 related expectations: (1) the mean level for each of the treatment conditions is equal, on average, on any conceivable participant background variable prior to the beginning of the experiment; and (2) treatment assignment is, on average, unrelated to any conceivable participant background variable.

In the context of Fisher's agricultural studies, these expectations guaranteed that the design would provide an unbiased estimate of the true causal effect.
However, other features of the public health context require additional assumptions when traditional RCTs are utilized. Unlike the corn plants in Fisher's agricultural studies, people can seek out alternative treatments or refuse treatments (nonadherence to treatment). People can refuse to be measured or migrate to another locale (attrition). Important advances addressing the challenges of nonadherence, attrition, and their combination have been made during the last half century. Advances in alternative designs and statistical analyses have also occurred. Two perspectives have guided this work.

TWO PERSPECTIVES ON STRENGTHENING CAUSAL INFERENCE

Potential Outcomes Perspective

The potential outcomes perspective was originally introduced by Neyman and developed by Rubin et al. It takes as its starting point a comparison of an individual unit's outcome when the treatment is applied, Y_t(u), versus the same unit's outcome when the alternative (or control) treatment is applied, Y_c(u). The causal effect is defined as

delta(u) = Y_t(u) - Y_c(u),

where Y_t(u) represents the response of unit u to treatment t, and Y_c(u) represents the response of the same unit u to the control treatment c at the identical time and in the identical setting. Theoretically, comparison of these 2 outcomes provides the ideal design for causal inference. Unfortunately, this ideal can never be achieved in practice. Additional assumptions are required depending on the choice of alternative to the ideal design.
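The potential outcomes logic can be made concrete with a small simulation. The sketch below (illustrative numbers only, not from the article) assigns every unit both potential outcomes, so the true average causal effect can be computed directly; it then shows that a randomized experiment, which observes only one outcome per unit, recovers nearly the same number.

```python
import random
import statistics

random.seed(1)

# Hypothetical potential outcomes for n units: Y_c(u) is the control
# response; Y_t(u) = Y_c(u) + delta(u), a unit-level causal effect
# averaging 3 points. In a real study only one of the two outcomes is
# ever observed for any unit.
n = 10000
units = []
for _ in range(n):
    y_c = random.gauss(50, 10)
    delta = random.gauss(3, 1)          # delta(u) = Y_t(u) - Y_c(u)
    units.append((y_c + delta, y_c))    # (Y_t(u), Y_c(u))

# With both potential outcomes known, the average causal effect is direct.
true_ace = statistics.mean(y_t - y_c for y_t, y_c in units)

# A randomized experiment observes Y_t for a random half of the units and
# Y_c for the other half; the difference in group means is an unbiased
# estimate of the average causal effect.
random.shuffle(units)
treated, control = units[: n // 2], units[n // 2:]
estimate = (statistics.mean(y_t for y_t, _ in treated)
            - statistics.mean(y_c for _, y_c in control))

print(round(true_ace, 2), round(estimate, 2))
```

With a large sample, the experimental estimate sits close to the true average causal effect even though no unit contributes both outcomes, which is exactly Fisher's point about random assignment.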
For the RCT, the additional assumptions required are (1) the units are independent, (2) participants actually received the treatment as intended (e.g., complete treatment adherence), (3) attrition from posttest measurement did not occur, and (4) the existence of other treatment conditions did not affect the participant's outcome. If these assumptions of the RCT are met, strong inferences can be drawn about the average causal effect of treatment t relative to treatment c on the outcome. However, these assumptions are often not met. For example, in RCTs of mammography screening, one third of participants in the treatment group have refused screening and many participants in the control group have obtained screening outside the trial.

Campbellian Perspective

Campbell et al. have developed a practical theory of causal inference that follows the logic and strategies of the working scientist. Researchers need to identify plausible threats to the validity of the causal inference based on design considerations and prior empirical research. Then they need to rule out the possibility that any of those threats are responsible for the observed effect. If the initially proposed design does not rule out important plausible threats to causal inference, enhancements to the design are introduced that address the identified threats. Through a process of continued critical evaluation and additional research, plausible threats to validity can be identified and eliminated, yielding improved estimates of the causal effect.

Although Campbell et al. discussed 4 types of threats to validity, space limitations restrict our discussion to 2 types. Threats to internal validity are confounding factors that may potentially produce the observed results.
These threats include factors that may lead to changes between baseline and posttest (e.g., differential history, maturation) and factors that may lead to differences between the treatment and control groups (e.g., differential selection, differential attrition) in the absence of a treatment effect. Threats to external validity limit the potential generalization of the results, an important consideration given the increasing emphasis on the translation of research results in public health into practice.

ALTERNATIVE DESIGNS FOR STRENGTHENING CAUSAL INFERENCE

Randomized Encouragement Designs

Trial participants are expected to adhere to their treatment assignments in classic RCTs. They may be given strong incentives that are outside usual practice to ensure adherence with the full protocol. Alternatively, participants may be randomly assigned to an opportunity or an encouragement to receive a specific treatment, but allowed to choose whether to receive the treatment. This variation from the classic RCT model is useful for interventions for which it is impractical or unethical to require adherence or in which the necessary incentives would be unrealistic, thus precluding generalization to practice. For example, this design was used by Vinokur et al. to study the impact of a job seeking skills program (JOBS) on depression in participants. This study recruited eligible participants (e.g., laid off and seeking a new job) at unemployment offices. All participants received a brief booklet describing job search methods. Participants were randomly assigned (stratified by baseline risk) to receive or not receive an invitation to participate in the JOBS program, a 20-hour group training program that emphasized learning and practicing job seeking skills, inoculation against setbacks, and social support. Of invited participants, 54% attended the program.
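The analysis of such a design can be sketched with simulated data: the intention-to-treat effect of the invitation, divided by the difference in program uptake between arms, yields a Wald-type instrumental variables estimate of the complier average causal effect. All numbers below are invented for illustration, not taken from the JOBS study; only the 54% uptake rate echoes the text.

```python
import random
import statistics

random.seed(42)

def simulate_encouragement_trial(n=20000, true_effect=-2.0):
    """Simulate a randomized encouragement design.

    z: random invitation; d: actual program attendance (only ~54% of
    invitees comply, and the uninvited have no access); y: outcome
    (e.g., a depression score, lower is better).
    """
    rows = []
    for _ in range(n):
        z = random.randint(0, 1)
        complier = random.random() < 0.54     # would attend if invited
        d = 1 if (z == 1 and complier) else 0
        y = 10 + true_effect * d + random.gauss(0, 1)
        rows.append((z, d, y))
    return rows

def wald_cace(rows):
    """Complier average causal effect = ITT effect / uptake difference."""
    y1 = [y for z, d, y in rows if z == 1]
    y0 = [y for z, d, y in rows if z == 0]
    itt = statistics.mean(y1) - statistics.mean(y0)
    uptake = (statistics.mean(d for z, d, y in rows if z == 1)
              - statistics.mean(d for z, d, y in rows if z == 0))
    return itt / uptake

rows = simulate_encouragement_trial()
print(round(wald_cace(rows), 2))
```

The intention-to-treat contrast is diluted by the nonattenders; rescaling by the uptake difference recovers the effect among compliers, which is valid under the exclusion restriction (the invitation affects the outcome only through attendance).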
Attempts were made to measure all participants on depression 6 months after baseline measurement (87% response rate).

Intention to treat analyses can be applied to randomized encouragement designs to assess the impact of treatment assignment (the offer of or encouragement to participate in the program) on participant outcome (depression). To the extent that missing data are negligible, the estimated effects are unbiased. Under the assumption of the exclusion restriction (the impact of treatment assignment is mediated entirely through the receipt of treatment), instrumental variables analysis provides an unbiased estimate of the more informative complier average causal effect: the effect of the receipt of treatment (JOBS attendance) averaged across adherers who are expected to adopt the treatment if assigned to the treatment group. Little and Yau compared the subgroup of participants who adhered to treatment in the JOBS program with the subgroup of individuals in the control group who would be expected to adhere to the treatment if invited to participate in the JOBS program. The JOBS program led to decreased depression for high-risk participants who would adhere to treatment. The combination of randomization and the assumption of the exclusion restriction provided a strong basis for the unbiased estimate of the average effect of the JOBS program and proper standard errors for treatment adherers in a community population. More-complete discussions of randomized encouragement designs are available elsewhere.

Nonrandom Quantitative Assignment of Treatment

In quantitative assignment designs, participants are assigned to treatment groups on the basis of a quantitative measure, often a measure of need, merit, or risk. For
example, school lunch programs in the United States are assigned to children whose household income falls below a prespecified threshold related to need (e.g., poverty line). Causal inference is based on modeling the functional relationship between the known quantitative assignment variable (household income) and the outcome variable (e.g., health, school achievement), estimated separately for the treated group that falls below the threshold and the control group that falls above the threshold. Because the assignment variable fully determines treatment assignment, proper adjustment for the assignment variable permits the inference of a treatment effect for the school lunch program if there is a discontinuity at the threshold where the treatment is introduced (Figure 1).

As part of the launch of the Head Start program in 1965, US counties with a poverty rate above 59.2% (the 300 poorest in the country) received technical assistance in writing Head Start proposals. A very high proportion (80%) of the poorest counties received Head Start funding, approximately double the funding rate of counties that were slightly better off economically (49.2%-59.2% poverty rates) that did not receive technical assistance. The original Head Start program provided basic health services (e.g., nutrition, immunization, screening) to children in addition to its educational component. Using a regression discontinuity design, Ludwig and Miller found results that demonstrated the introduction of Head Start had led to substantially lower mortality rates in children aged 5 to 9 years from diseases addressed by the program (e.g., measles, anemia, diabetes).

Quantitative assignment designs can be applied to units at various levels such as individuals, community health clinics, neighborhoods, or counties. These designs offer an important alternative to classic RCTs in situations in which randomization is impractical or unethical.
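The estimation logic just described can be made concrete with a small simulation. All data below are invented, and the linear functional form on each side of the threshold is an assumption (a point the article returns to later): fit separate regressions below and above an eligibility threshold and read the treatment effect off the jump between the two fitted lines at the threshold.

```python
# Minimal regression discontinuity sketch on simulated data: a true
# jump of 5.0 is built into the outcome for units below the threshold,
# and the discontinuity between the two fitted lines recovers it.
import random

def fit_line(xs, ys):
    """Ordinary least squares for one predictor: (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

random.seed(0)
threshold = 20000  # invented eligibility cutoff (family income, $)
effect = 5.0       # true jump built into the simulated outcome

income = [random.uniform(5000, 80000) for _ in range(2000)]
outcome = [0.0001 * x + (effect if x < threshold else 0.0)
           + random.gauss(0, 0.5) for x in income]

below = [(x, y) for x, y in zip(income, outcome) if x < threshold]
above = [(x, y) for x, y in zip(income, outcome) if x >= threshold]
s_b, i_b = fit_line([x for x, _ in below], [y for _, y in below])
s_a, i_a = fit_line([x for x, _ in above], [y for _, y in above])

# The discontinuity at the threshold estimates the treatment effect.
estimate = (s_b * threshold + i_b) - (s_a * threshold + i_a)
print(round(estimate, 2))  # close to the true jump of 5.0
```

Because the assignment variable fully determines treatment, conditioning on it in this way is what licenses the causal interpretation of the jump.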
Many important public health communities might be resistant to RCTs, in which case quantitative assignment designs might be more acceptable. In addition, the RCT might be unethical when there are doubts about equipoise. Quantitative assignment designs that utilize a clinically meaningful assignment variable (e.g., risk level) might provide a stronger ethical basis for such studies. Quantitative assignment designs can also be implemented based on time (interrupted time series) or settings (e.g., measured risk of neighborhoods).

For example, Khuder et al. analyzed 6 years of monthly data on hospital admissions in 2 cities for coronary heart disease and for other diseases unrelated to smoking. In one city, a public ban on indoor smoking was implemented after the third year of data collection. Khuder et al. showed that hospital admissions for coronary heart disease (but not other diseases) declined following the introduction of the smoking ban. By contrast, no change in hospital admissions for either coronary heart disease or other diseases was observed in the comparison city, which did not institute a smoking ban. Any alternative explanation of these results must clearly account for why the change occurred at the point at which the smoking ban was introduced only in the treatment city and only on the outcome related to smoking.

FIGURE 1. Illustration of regression discontinuity design. Note. All children whose family income was below the threshold, here $20,000 (dotted line), received the treatment program (school lunch program); all children whose family income was above the threshold did not receive the program. The difference in level between the regression lines for the program and no program groups at the threshold represents the treatment effect.
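The core comparison behind an interrupted time series analysis can be sketched with invented monthly counts (these are not the Khuder et al. data): the treatment-city series shows a level change at the interruption, while the comparison series does not.

```python
# Hedged interrupted-time-series sketch: compare the mean of a monthly
# admissions series before and after the interruption, in a city that
# introduces a smoking ban at month 36 and a comparison city that does
# not. All numbers are simulated for illustration.

def level_change(series, interruption):
    """Mean after the interruption minus mean before it."""
    before = series[:interruption]
    after = series[interruption:]
    return sum(after) / len(after) - sum(before) / len(before)

ban_city = [100] * 36 + [85] * 36        # admissions drop after the ban
control_city = [100] * 36 + [100] * 36   # no ban, no change

print(level_change(ban_city, 36))      # -15.0
print(level_change(control_city, 36))  # 0.0
```

A real analysis would also model trend and seasonality, but the design logic is the same: the change must coincide with the interruption, and only in the series exposed to the intervention.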
One primary weakness of quantitative assignment designs is that the functional form of the relationship between the assignment variable and response variable is usually unknown. With unknown functional forms, bias might be introduced if the functional relationship is misspecified (e.g., assuming a linear functional form when the true functional form is curvilinear). In smaller samples, separate nonparametric smooths (e.g., lowess) of the data for the participants who receive the treatment and control conditions provide some information about functional form. In large samples, this bias can be minimized by using nonparametric regression methods to estimate the functional relationship. In the regression discontinuity design, the assignment threshold can sometimes be modified in subsequent studies (e.g., some states have different incomes necessary for treatment receipt) to further strengthen causal inference. In the time series design, cities that introduce the intervention at different time points can be compared.

Observational Studies

In observational studies, participants in preexisting or constructed groups receive various treatment conditions, often through voluntary selection. The selection of participants into each treatment condition may be associated with confounding factors, resulting in bias that might occur in naive statistical analyses. However, advances in methodology have provided a much stronger toolkit for observational studies. We discuss 2 general approaches below.
First, within the potential outcomes perspective, an important focus has been on the development of matched sampling strategies and analyses. Among the most developed strategies are causal inference methods based on propensity scores. Propensity scores represent the predicted probability that a participant will receive the treatment given his or her baseline measurements, estimated using either logistic regression to predict treatment status, or more complex functional forms such as regression tree models. If the researcher can accurately construct propensity scores that balance the treatment and control participants on all potentially relevant baseline variables, the difference between the response in the treatment condition and the control condition (conditioned on the propensity scores) will represent the causal effect. In essence, conditioning on the basis of the propensity scores attempts to create homogeneous units whose responses in the treatment and control groups can be directly compared.

The propensity scores method can only mitigate overt selection bias attributed to those baseline characteristics that have been accurately measured. The adequacy of the comparison depends strongly on baseline assessment of the full set of variables believed to be potentially related to treatment selection and outcome. Assessment of a few convenient baseline variables (e.g., demographics) is unlikely to substantially mitigate selection bias.

Haviland et al. studied the effect of gang membership on violent delinquency, an important question for which an RCT was not feasible. They conducted a longitudinal study of boys living in lower socioeconomic areas of Montreal, Quebec, and identified boys who were not members of any gang prior to age 14 years. Based on the boys' behaviors between the ages of 11 and 13 years, they identified groups with a history of low violence, declining violence, and chronic high violence.
Within each of these groups, they measured a large number of baseline covariates known to be related to gang membership and violence. Propensity to join a gang at age 14 years was estimated separately within each violence history group from the baseline covariates, with the result that boys who did and did not join gangs at age 14 years could be closely matched within both the low and declining violence groups, but not the chronic high violence group. This finding illustrates that the propensity scores method often appropriately limits generalization of the causal effect by restricting comparisons to only the range of propensity scores within which adequate comparisons can be constructed. In the low and declining groups, joining a gang at age 14 years increased violent delinquent acts.

Haviland et al. also performed a sensitivity analysis that investigated how large hidden bias would need to be before the treatment effect was no longer statistically significant. They found that even if hidden variables existed that led to a 50% increase in the odds of joining a gang, a significant treatment effect would still exist. Such causal sensitivity analysis against hidden bias can be used to bracket the plausible range of the magnitude of the causal effect. Alternatively, hidden bias caused by unobserved confounding factors can sometimes be mitigated using instrumental variables analysis.

Second, within the Campbellian framework, design elements are added that address likely threats to internal validity. These design elements include strategies such as matching and stratifying, use of pretests on multiple occasions to estimate preexisting trends, use of multiple control groups with different strengths and weaknesses to bracket the effect, and the use of nonequivalent dependent measures that are expected to be affected by the threat but not by the treatment (see also Rosenbaum).
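The conditioning idea behind propensity score methods can be illustrated with a toy stratified comparison (all records below are invented): because high-risk units are both more likely to be treated and have worse outcomes, the naive treated-control difference is badly confounded, while the within-stratum comparison recovers a much smaller effect. A single baseline stratum variable stands in for the propensity score here; a real analysis would estimate propensities from many covariates, for example with logistic regression.

```python
# Toy illustration (invented data) of conditioning on treatment
# propensity: compare treated and control outcomes within baseline
# strata instead of overall.
from collections import defaultdict

# (baseline_risk_stratum, treated, outcome): high-risk units are more
# often treated and have worse outcomes, confounding the naive contrast.
records = [
    ("low", 1, 5.0), ("low", 0, 4.8), ("low", 0, 5.1), ("low", 0, 4.9),
    ("high", 1, 9.2), ("high", 1, 9.0), ("high", 1, 9.1), ("high", 0, 8.9),
]

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison ignores the strata and is dominated by confounding.
naive = (mean([y for _, t, y in records if t])
         - mean([y for _, t, y in records if not t]))

by_stratum = defaultdict(lambda: {"t": [], "c": []})
for stratum, treated, y in records:
    by_stratum[stratum]["t" if treated else "c"].append(y)

# Stratified estimate: weight each within-stratum treated-control
# difference by the stratum's share of the sample.
effect = sum(
    (len(g["t"]) + len(g["c"])) / len(records)
    * (mean(g["t"]) - mean(g["c"]))
    for g in by_stratum.values()
)

print(round(naive, 3), round(effect, 3))  # 2.15 vs 0.133
```

As the text stresses, this adjustment can remove only overt bias from measured covariates; the sensitivity analysis described above is what probes how strong an unmeasured confounder would need to be to overturn the result.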
Reynolds and West provide an illustration of the use of several of these strategies in an observational study designed to evaluate the effectiveness of a program to increase the sales of state lottery tickets in convenience stores. The store managers refused randomization. Those stores that agreed to implement the program were matched with other stores in the same chain on baseline sales volume and geographical location. Increases in sales were observed (1) in the treatment but not the control group; (2) within the treatment group, for lottery ticket sales, but not other sales categories; and (3) in the weeks following the introduction of the intervention, but not before (Figure 2). Taken together, inclusion of these additional design elements made it extremely difficult to identify any potential confounding factors that might be responsible for the observed pattern of results. In the Campbellian framework, strong priority is given to design enhancements over statistical corrections with their associated assumptions.

CONCLUSION

The RCT is the gold standard among research designs. It has the highest internal validity because it requires the fewest assumptions to attain unbiased estimates of treatment effects. Given identical sample sizes, the RCT also typically surpasses all other designs in terms of its statistical power to detect the predicted effect. Nonetheless, even with the best planning, the RCT is not immune to problems common in community trials. These threats potentially weaken the causal inferences.
When RCTs cannot be implemented in settings or with participants of interest, it is far better to use a strong alternative design than to change the treatment (e.g., using an analog rather than an actual faith-based treatment) or study population (e.g., using only participants indifferent to the treatment choice) so that an RCT may be implemented. Such changes may severely limit the external validity of the findings, potentially distorting the inference about the causal effect for the specific population, treatment, and setting of interest. Even when RCTs can be implemented, alternative designs can be valuable complements that broaden the generalizations of RCTs in multistudy programs of research.

The alternative design and statistical approaches permit relatively strong causal inference in the RCT when common problems such as treatment nonadherence and participant attrition occur and in alternative designs when randomization is not possible.

Note. In panel a, treatment and control stores were selected from the same chain, were in the same geographical location, and were comparable in sales during baseline (lottery game 10). Introduction of the treatment at the beginning of lottery game 11 yielded an increase in sales only in the treatment stores. In panel b, within the treatment stores, sales of lottery tickets increased substantially following the introduction of treatment. Sales of other major categories (gasoline, cigarettes, nontaxable groceries, and taxable groceries that would be expected to be affected by confounding factors, but not treatment) did not show appreciable change.
In panel c, treatment and control stores' sales show comparable trends in sales during the 4 weeks prior to and 4 weeks following the introduction of the treatment. The level of sales in the treatment and control stores is similar prior to the introduction of treatment but differs substantially beginning immediately after treatment is introduced. Source. Adapted from Reynolds and West.

FIGURE 2. Design elements that strengthen causal inferences in observational studies: matching (a), nonequivalent dependent variables (b), and repeated pre- and posttest measurement (c).

Researchers need to give careful attention to the additional assumptions required by these approaches. Table 1 lists each of the designs considered in this article. The first section lists the basic assumptions and internal validity threats of the RCT, together with design and statistical approaches for addressing these issues. Each subsequent section lists key assumptions and threats to internal validity in addition to those of the RCT, together with design and statistical approaches for addressing these issues. To illustrate, the key additional threat in the regression discontinuity design is misspecification of the functional form of the relationship between the assignment and outcome variables (typically assumed to be linear). Statistically, nonparametric regression in large samples and sensitivity analyses in small samples that probe the extent of misspecification necessary to undermine the observed treatment effect can help bracket the possible range of the effect size. Adding the design feature of a nonequivalent dependent variable that is expected to be affected by important confounders, but not by the treatment, can help rule out many of the threats to internal validity.
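The misspecification threat just described can be made concrete in a small deterministic example (all data invented): when the true relationship is curvilinear and there is no treatment effect at all, forcing a linear fit on each side of the threshold still manufactures an apparent discontinuity.

```python
# Sketch (invented data, no noise) of functional-form misspecification
# in a regression discontinuity analysis: the outcome follows a smooth
# curve with NO jump anywhere, yet linear fits on each side of the
# threshold produce a spurious "treatment effect" at the cutoff.

def fit_line(xs, ys):
    """Ordinary least squares for one predictor: (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

threshold = 50
xs = list(range(150))
ys = [0.002 * x * x for x in xs]  # smooth curve, no jump anywhere

below = [(x, y) for x, y in zip(xs, ys) if x < threshold]
above = [(x, y) for x, y in zip(xs, ys) if x >= threshold]
s_b, i_b = fit_line([x for x, _ in below], [y for _, y in below])
s_a, i_a = fit_line([x for x, _ in above], [y for _, y in above])

# Difference between the two fitted lines at the threshold: a purely
# spurious discontinuity created by assuming linearity.
spurious = (s_b * threshold + i_b) - (s_a * threshold + i_a)
print(round(spurious, 2))  # 2.35, even though the true effect is 0
```

Comparing such a linear fit against a nonparametric smooth of the same data is exactly the kind of diagnostic the text recommends for probing functional form.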
In general, the causal effect estimated from the alternative designs and analyses is likely to be associated with more uncertainty than those from the ideal RCT in which no attrition or treatment nonadherence has occurred. Confidence intervals that provide a range of plausible effect sizes caused by sampling fluctuations should be supplemented with estimated brackets on effect sizes that indicate how large or small the effect might plausibly be if key assumptions are not met. Remaining uncertainty about the causal effect can often be reduced by adding design features that help rule out the possibility that other unobserved confounders are producing the observed effect.

TABLE 1. Key Assumptions or Threats to Internal Validity and Example Remedies for Randomized Controlled Trials and Alternatives

Randomized controlled experiment
- Independent units. Design approach: temporal or geographical isolation of units. Statistical approach: multilevel analysis (other statistical adjustment for clustering).
- Full treatment adherence. Design approach: incentives for adherence. Statistical approach: instrumental variable analysis (assume exclusion restriction).
- No attrition. Design approach: sample retention procedures. Statistical approach: missing data analysis (assume data missing at random).
- Other treatment conditions do not affect participant's outcome (SUTVA). Design approach: temporal or geographical isolation of treatment groups. Statistical approach: statistical adjustment for measured exposure to other treatments.

Randomized encouragement design
- Exclusion restriction. Design approach: no design approach yet available. Statistical approach: sensitivity analysis.

Regression discontinuity design
- Functional form of relationship between assignment variable and outcome is properly specified. Design approach: replication with different threshold; nonequivalent dependent variable. Statistical approach: nonparametric regression; sensitivity analysis.

Interrupted time series analysis
- Functional form of the relationship for the time series is properly specified; another historical event, a change in population (selection), or a change in measures coincides with the introduction of the intervention. Design approach: nonequivalent control series in which intervention is not introduced; switching replication in which intervention is introduced at another time point; nonequivalent dependent measure. Statistical approach: diagnostic plots (autocorrelogram; spectral density); sensitivity analysis.

Observational study
- Measured baseline variables equated; unmeasured baseline variables equated; differential maturation; baseline variables reliably measured. Design approach: multiple control groups; nonequivalent dependent measures; additional pre- and postintervention measurements. Statistical approach: propensity score analysis; sensitivity analysis; subgroup analysis; correction for measurement error.

Note. SUTVA = stable unit treatment value assumption. The list of assumptions and threats to internal validity identifies issues that commonly occur in each of the designs. The alternative designs may be subject to each of the issues listed for the randomized controlled trial in addition to the issues listed for the specific design. The examples of statistical and design approaches for mitigating the threat to internal validity illustrate some commonly used approaches and are not exhaustive. For the observational study design, the potential outcomes and Campbellian frameworks differ so that the statistical and design approaches do not map 1-to-1 onto the assumptions or threats to internal validity that are listed. More in-depth descriptions can be found in Shadish et al. and West et al.

We have touched only briefly on the matter of external validity. Generalization of findings should not be assumed; features to enhance generalization need to be built into the design. Some RCTs have features that decrease the generalizability of their results to the actual treatments, settings, and populations of interest. This may limit the ability of public health research to provide information about the actual effectiveness of interventions to alleviate health problems. People can have preferences and capacities that interact with treatment effects. Important contextual variables can influence intervention effects as well as participant self-selection and attrition. Regardless of the design chosen, features that maximize external validity should be incorporated into the design. Shadish et al. present procedures for doing this in both single and multiple studies.

Our opening quotation from John Tukey reminds us that the public health significance of the research question should be paramount in the design of research. Important questions should not be ignored if they cannot be fitted into the framework of an RCT. Rather, the strongest possible design that can feasibly be implemented should be chosen, whether an RCT or an alternative design. Whatever design is chosen, careful attention must be given to the viability of the assumptions of the design, adding design and analysis features to address plausible threats to internal and external validity.

In addition, the evaluation of important interventions is rarely limited to single studies but rather is based on the accumulated body of research.
The use of systematic reporting frameworks, such as CONSORT for RCTs and TREND for nonrandomized studies, may encourage more in-depth appraisal of research designs both during the planning of the study and the evaluation of its results. Scientific progress in public health will be facilitated by asking the right questions, choosing the strongest feasible design that can answer those questions for the population of interest, and probing the assumptions underlying the design and analysis choices through the addition of carefully chosen design features and supplemental statistical analyses.

About the Authors

Stephen G. West is with Arizona State University, Tempe. Naihua Duan is with Columbia University, New York, NY, and New York State Psychiatric Institute, New York. Willo Pequegnat is with the National Institute of Mental Health, Bethesda, MD. Paul Gaist is with the National Institutes of Health, Bethesda. Don C. Des Jarlais is with Beth Israel Medical Center, New York. David Holtgrave is with the Johns Hopkins University, Bloomberg School of Public Health, Baltimore, MD. José Szapocznik is with the University of Miami School of Medicine, Miami, FL. Martin Fishbein is with the Annenberg School for Communication, University of Pennsylvania, Philadelphia. Bruce Rapkin is with Memorial Sloan-Kettering Cancer Center, New York. Michael Clatts is with the National Development and Research Institutes, Inc, New York. Patricia Mullen is with the University of Texas School of Public Health, Houston.

Requests for reprints should be sent to Stephen G. West, Psychology Department, Arizona State University, Tempe, AZ 85287-1104 (e-mail: [email protected]).

Note. The views in this article are those of the authors.
No official endorsement by the US Department of Health and Human Services or the US National Institutes of Health is intended or should be inferred.

Contributors

S. G. West participated in the initial workshop and helped develop the outline, wrote the initial draft and subsequent drafts of the article incorporating additions and edits, and wrote the final article. N. Duan participated in the initial workshop, participated in the development of the paper outline, drafted part of the article, and reviewed and edited the entire article. W. Pequegnat conceptualized the initial workshop on which the article is based, co-chaired the workshop and guided development of the original outline, wrote the introduction for the first draft, provided feedback on multiple drafts, and coordinated continued development of the article. P. Gaist participated in the original workshop, guided development of the original outline, and provided significant input and contributions throughout the planning, writing, review, and revision stages of this article. He has served as 1 of the 2 primary coordinators responsible for overseeing each phase that has been required in the development and writing of this article. D. C. Des Jarlais chaired the initial workshop that led to the writing of the article, contributed text to various drafts, and edited and approved the final draft. D. Holtgrave, J. Szapocznik, M. Fishbein, B. Rapkin, M. C. Clatts, and P. D. Mullen attended the workshop, helped conceptualize ideas, contributed text, and reviewed and edited drafts.

Acknowledgments

S. G. West was supported by a study visit grant at the Free University of Berlin by the German Academic Exchange Service.

An earlier version of this article was presented to the meeting of the Committee on the Prevention of Mental Disorders and Substance Abuse among Children, Youth, and Young Adults, Institute of Medicine, Washington, DC, October 2007. We thank Wei Wu for her help in the preparation of the figures.
Note. On November 14-15, 2005, the US National Institute of Mental Health and the Office of AIDS Research, US National Institutes of Health, convened a group of experts to consider the critical questions associated with the efficacy and effectiveness of interventions for preventing HIV and other chronic diseases that do not lend themselves to randomized controlled trials. This discussion led to the development of this article.

Human Participant Protection

No protocol approval was needed for this study.

References

1. Tukey JW. The future of data analysis. Ann Math Stat. 1962;33:13-14.
2. Bonnell C, Hargreaves J, Strange V, Pronyk P, Porter J. Should structural interventions be evaluated using RCTs? The case of HIV prevention. Soc Sci Med. 2006;63:1135-1142.
3. Reichardt CS. The principle of parallelism in the design of studies to estimate treatment effects. Psychol Methods. 2006;11:1-18.
4. Fisher RA. The Design of Experiments. Edinburgh, Scotland: Oliver & Boyd; 1935.
5. Holland PW. Statistics and causal inference (with discussion). J Am Stat Assoc. 1986;81:945-970.
6. Angrist JD, Imbens GW, Rubin DB. Identification of causal effects using instrumental variables (with discussion). J Am Stat Assoc. 1996;91:444-472.
7. Jo B. Statistical power in randomized intervention studies with noncompliance. Psychol Methods. 2002;7:178-193.
8. Little RJ, Yau L. Statistical techniques for analyzing data from prevention trials: treatment of no-shows using Rubin's causal model. Psychol Methods. 1998;3:147-159.
9. Little RJA, Rubin DB. Statistical Analysis with Missing Data. 2nd ed. New York, NY: John Wiley and Sons; 2002.
10. Schafer JL, Graham JW. Missing data: our view of the state of the art. Psychol Methods. 2002;7:147-177.
11. West SG, Sagarin BJ. Participant selection and loss in randomized experiments. In: Bickman L, ed. Research Design: Donald Campbell's Legacy. Vol. 2. Thousand Oaks, CA: Sage Publications; 2000:117-154.
12. Neyman J. On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science. 1990;5:465-472. Originally published in Roczniki Nauk Rolniczych [Annals of Agricultural Science]. 1923, Tom X, 1-51. Translated and edited by DM Dabrowska and TP Speed.
13. Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol. 1974;66:688-701.
14. Rubin DB. Causal inference using potential outcomes: design, modeling, decisions. J Am Stat Assoc. 2005;100:322-331.
15. Baker SG. Analysis of survival data from a randomized trial with all-or-none compliance: estimating the cost-effectiveness of a cancer screening program. J Am Stat Assoc. 1998;93:929-934.
16. Campbell DT. Factors relevant to the validity of experiments in social settings. Psychol Bull. 1957;54:297-312.
17. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton-Mifflin; 2002.
18. Vinokur AD, Price RH, Schul Y. Impact of the JOBS intervention on unemployed workers varying in risk for depression. Am J Community Psychol. 1995;23:39-74.
19. Holland PW. Causal inference, path analysis, and recursive structural equation models (with discussion). In: Clogg C, ed. Sociological Methodology 1988. Washington, DC: American Sociological Association; 1988:449-493.
20. Barnard J, Frangakis CE, Hill JL, Rubin DB. A principal stratification approach to broken randomized experiments: a case study of school choice vouchers in New York City (with discussion). J Am Stat Assoc. 2003;98:299-323.
21. Mark MM, Reichardt CS. Quasi-experimental and correlational designs: methods for the real world when random assignment isn't feasible. In: Sansone C, Morf CC, Panter AT, eds. Sage Handbook of Methods in Social Psychology. Thousand Oaks, CA: Sage Publications; 2003:265-286.
22. West SG, Biesanz JC, Pitts SC. Causal inference and generalization in field settings: experimental and quasi-experimental designs. In: Reis HT, Judd CM, eds. Handbook of Research Methods in Social and Personality Psychology. New York, NY: Cambridge University Press; 2000:40-84.
23. Finkelstein MO, Levin B, Robbins H. Clinical and prophylactic trials with assured new treatment for those at greater risk: I. A design proposal. Am J Public Health. 1996;86:691-695.
24. Finkelstein MO, Levin B, Robbins H. Clinical and prophylactic trials with assured new treatment for those at greater risk: II. Examples. Am J Public Health. 1996;86:696-705.
25. Ludwig J, Miller DL. Does Head Start improve children's life chances? Evidence from a regression discontinuity design. Q J Econ. 2007;122:159-208.
26. Khuder SA, Milz S, Jordan T, Price J, Silvestri K, Butler P. The impact of a smoking ban on hospital admissions for coronary heart disease. Prev Med. 2007;45:3-8.
27. Hawkins NG, Sanson-Fisher RW, Shakeshaft A, D'Este C, Green LW. The multiple baseline design for evaluating population-based research. Am J Prev Med. 2007;33:162-168.
28. Cochran WG. The planning of observational studies of human populations (with discussion). J R Stat Soc. Series A (General). 1965;128:236-265.
29. Rosenbaum PR. Observational Studies. 2nd ed. New York, NY: Springer; 2002.
30. Rubin DB. Matched Sampling for Causal Effects. New York, NY: Cambridge University Press; 2006.
31. West SG, Thoemmes F. Equating groups. In: Alasuutari P, Brannen J, Bickman L, eds. The SAGE Handbook of Social Research Methods. London, England: Sage Publications; 2008:414-430.
32. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41-55.
33. McCaffrey DF, Ridgeway G, Morral AR. Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychol Methods. 2004;9:403-425.
34. Haviland A, Nagin DS, Rosenbaum PR. Combining propensity score matching and group-based trajectory analysis in an observational study. Psychol Methods. 2007;12:247-267.
35. Sommer A, Zeger SL. On estimating efficacy from clinical trials. Stat Med. 1991;10:45-52.
36. Winship C, Morgan SL. The estimation of causal effects from observational data. Annu Rev Sociol. 1999;25:659-706.
37. Morgan SL, Winship C. Counterfactuals and Causal Inference: Methods and Principles for Social Research. New York, NY: Cambridge University Press; 2007.
38. Shadish WR, Cook TD. Design rules: more steps towards a complete theory of quasi-experimentation. Stat Sci. 1999;14:294-300.
39. Rosenbaum PR. Replicating effects and biases. Am Stat. 2001;55:223-227.
40. Reynolds KD, West SG. A multiplist strategy for strengthening nonequivalent control group designs. Eval Rev. 1987;11:691-714.
41. Moher D, Schulz KF, Altman D, the CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285:1987-1991.
42. Des Jarlais DC, Lyles C, Crepaz N, the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health evaluations: the TREND statement. Am J Public Health. 2004;94:361-366.

Adverse Event Detection in Drug Development: Recommendations and Obligations Beyond Phase 3

Premarketing studies of drugs, although large enough to demonstrate efficacy and detect common adverse events, cannot reliably detect an increased incidence of rare adverse events or events with significant latency. For most drugs, only about 500 to 3000 participants are studied, for relatively short durations, before a drug is marketed.
Systems for assessment of postmarketing adverse events include spontaneous reports, computerized claims or medical record databases, and formal postmarketing studies. We briefly review the strengths and limitations of each. Postmarketing surveillance is essential for developing a full understanding of the balance between benefits and adverse effects. More work is needed in analysis of data from spontaneous reports of adverse effects and automated databases, design of ad hoc studies, and design of economically feasible large randomized studies. (Am J Public Health. 2008;98:1366-1371. doi:10.2105/AJPH.2007.124537)

Jesse A. Berlin, ScD, Susan C. Glasser, PhD, and Susan S. Ellenberg, PhD

REPORTS OF DEVASTATING adverse events suffered by patients create public doubt about whether drugs are safe. Developing "safe" drugs presents a high hurdle, because every drug carries potential for harm ("risk"). Drug safety cannot be considered an absolute; it can only be assessed relative to the drug's benefits. At the time of marketing, however, the amount of information on benefits and risks, especially long term, is relatively small, and often based on highly selected populations with respect to age, comorbidities, use of concomitant medications, and other factors. We discuss drug development and assessment of adverse events and offer recommendations for continued evaluation of benefits and harms after a medicinal product becomes marketed.

DRUG DEVELOPMENT PROCESS

The drug development process, from discovery to market, is long and costly. Rigorous processes are in place during clinical trials that protect the safety of study participants and also ensure that collection of adverse event data is complete.
This completeness, coupled with the randomized design, also helps develop an understanding of the benefits and side effects of a new medicine by strengthening the validity of the comparisons between the new drug and the comparator, which could be a placebo or an active therapy for the condition under study.

Preclinical Testing

Prior to being studied in humans, a drug candidate undergoes an extensive series of laboratory and animal tests to study possible therapeutic and adverse effects. Preclinical studies are also used to characterize the pharmacokinetics and pharmacodynamics of the drug, including absorption, distribution, metabolism, excretion, and persistence of pharmacological effects. A preclinical evaluation of safety includes in vitro and in vivo studies in animals to search for unintended pharmacological and toxic effects at the whole-animal level and on specific organs and tissues. In addition, carcinogenicity and mutagenicity studies are conducted, along with specific tests of effects on cardiac rhythms. If results suggest the product can be used safely and may produce the desired beneficial effects, the stage is set for testing in humans. There is generally a low threshold for rejecting drugs for safety reasons; the assumption is that unfavorable preclinical results are predictive of human safety problems (although the validity of this assumption may be questionable). Most drug candidates, whether for safety concerns or insufficient potential for efficacy, will never complete the development process; only 1 of every 5000 to 10 000 compounds that enter preclinical testing will become approved for marketing.
Grounded Theory Methods and Qualitative Family Research
LaRossa, Ralph
Journal of Marriage and Family; Nov 2005; 67, 4; ProQuest Central, pg. 837

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
The Qualitative Report Volume 16 Number 6 November 2011 1713-1730

Ten Steps for Conceptualizing and Conducting Qualitative Research Studies in a Pragmatically Curious Manner

Ronald J. Chenail
Nova Southeastern University, Davie, Florida, USA

In a world of methodological pluralism and mixed-methods, qualitative researchers can take a pathway of pragmatic curiosity by exploring their research interests and the possible design and methodology choices to create studies that not only allow them to pursue their investigative curiosities, but also result in coherent and effective systems of procedural choices. Ten steps are offered for researchers to conceive and conduct qualitative research projects that are both responsive to research goals and objectives and defendable to criteria of quality and critics of utility. Key Words: Qualitative Research, Research Design, Research Methodology, Mixed-Methods, Methodological Pluralism, Pragmatic Curiosity.

The qualitative research being conducted today is in many ways not like your grandparents' qualitative inquiries! For some researchers, there appear to be clearly defined boundaries between when researchers should use a qualitative research methodology and when they should employ a quantitative research methodology (e.g., Dobrovolny & Fuentes, 2008; Keenan & van Teijlingen, 2004). In this apparently black-and-white worldview, qualitative studies are most likely exploratory, naturalistic, subjective, inductive, ideographic, and descriptive/interpretive, and quantitative studies are most likely confirmatory, controlled, objective, deductive, nomothetic, and predictive/explanatory.
For other investigators, the boundaries are a bit more grey as contemporary designs become more mixed (e.g., Creswell, Klassen, Plano Clark, & Clegg Smith, 2011), pluralistic, and diverse (e.g., Barker & Pistrang, 2004; Sandelowski, 2004) when it comes to utilizing particular methodologies to meet specific design goals and objectives. These changes in methodology utilization patterns suggest we are entering an interesting time for qualitative research design, in that more and more investigators are creatively using qualitative methods to address new types of research problems. For example, researchers are starting to use qualitative methodologies to conduct confirmatory studies, such as the effectiveness of interventions (e.g., Flemming, Adamson, & Atkin, 2008) and efficacy of treatments (e.g., Verhoef, Casebeer, & Hilsden, 2002). In these cases, the qualitative researchers might employ a mix of procedures in the design (e.g., randomized sampling more typically associated with experiments combined with open-ended interviews more typically associated with qualitative research). As these qualitative researchers offer what Morse (2006) terms "alternative forms of evidence" (p. 86), new opportunities for qualitative inquiries open up. I offer these observations because, as a beginning researcher, once you learn traditional or typical utilizations of a qualitative methodology, you might subsequently find a number of articles in which the researchers used these approaches in effectively novel ways. The key to all of this practical experimentation or pragmatic improvisation, as well as to traditional uses of qualitative methodologies, is for you as the qualitative researcher to be clear as to what methodologies and procedures were used to accomplish what aspects of your design, and to explain/defend why such choices were made.
In such a defense, the keys are (a) to address the procedure conceptually first by citing a source for this new orientation to the process, (b) to explain the novelty in your application of the method to accomplish the design objective at hand, (c) to show how the innovative procedural choice made coheres with the other design choices being implemented, and (d) to demonstrate how all the methodological choices made are allowing the study's design to address the guiding research question or hypothesis. In other words, you should embrace a sense of "pragmatic curiosity" to explore an optimal array of methodological choices to meet the needs of your design's concept, which was chosen based upon your research questions. To paraphrase the title of Elliot Mishler's well-known 1979 essay, "Methodology in context: Is there any other kind?" Taking this question as a mantra, it is critical for you to remember continually to craft a design so that it meets the needs of your study in a coherent and effective manner. To help you, as a beginning qualitative researcher, decide when and how to use qualitative research methodologies in this changing world, I have designed a ten-step process for conceiving and conducting qualitative inquiries. For this guide, I suggest you take a pragmatic posture to creating studies that marry the most fitting design and methodology choices with the focus of your research curiosity. In this approach I suggest you remain true to your interests and then explore a variety of research approaches which can help in designing and conducting studies to meet your needs. The bottom line is to be pragmatic in creating the design, but remain curious so every reasonable methodological option is considered.
In doing so, I think it is important for you to be creative in considering and selecting design elements, and then to evaluate the design, methodology, and procedures you choose and implement, so these inquiry decisions remain fitting with your research goals and objectives and also coherent with each other. By embracing this pragmatic curiosity, you will need to describe and explain each choice made in conceptualizing and conducting the research, because each method is justified in the conduct of its usage in the study at hand. The answer to the question, "How does it make sense to utilize an ethnographic methodology in a study designed to explore the effectiveness of a psychotherapy intervention?" is "Here is what I did and why these choices make sense in the context of my study." Without certainty in terms of methodological destiny, researchers are left with the tools of openness and rhetoric when it comes to defending their research choices (Chenail, 2011). These ten steps are intended as a general set of guidelines for you to plan and execute a qualitative research study in a transparent and coherent manner. As an investigator following specific research designs such as discovery-oriented inquiry (Mahrer, 1988; Mahrer & Boulet, 1999) and qualitative research methodologies such as phenomenology (Moustakas, 1994) or narrative inquiry (Riessman, 2007), you would be guided by more particular prescriptions to describe and defend your choices (see the appendix for a list of these basic resources), but as suggested by these ten steps, there are some actions and reactions common across most if not all qualitative research projects when it comes to creating fitting studies.

Before reading the ten steps I want to share an important clarifying point. Because I suggest qualitative researchers need to make many decisions in creating and conducting a study via these ten steps, you may get the impression that I am suggesting qualitative research studies' designs must be complex in nature. To clarify this point, I would more accurately say I think qualitative research designs are multifaceted, but at their hearts I think the simpler they are the better. I emphasize this point for a number of reasons. In qualitative research studies I think the method should be as simple as possible, because the complexity of research lies in the matter to be studied, especially in naturalistic and exploratory inquiries. If the method is overly complicated, then its many parts and phases might overwhelm the subject being studied. When complexity meets complexity, the results are usually a muddle. Embracing simple yet effective procedures is an optimal goal toward which qualitative researchers should strive: Collect rich data and let it shine as the star of the study. Like using fresh ingredients in cooking, keep the preparation and presentation simple so your guests can appreciate the qualitative differences great products can deliver. More methodologies being used in a study do not necessarily make the design a superior one. If you find yourself designing a phenomenological grounded theory case study, please ask yourself: do you really need to employ three of Creswell's (2007) five approaches to qualitative research in one research project? Just as taking too many medications can lead to adverse effects on your body, using too many methodologies might produce negative side effects which could be unhealthy for your study. To help remedy this potential risk, please remember this simple research commandment: Thou shall not select an additional methodology for a study until thou art sure the first methodology selected cannot manage all of the design issues.
As a final note, even though I offer ten steps for conceptualizing and conducting qualitative research studies in a pragmatically curious manner, please remember three guiding principles: Keep it coherent, Keep it clear, and Keep it simple. If you adopt these three pieces of advice as your research mantra, you will find yourself creating and completing qualitative studies of quality.

Ten Steps

Step One: Reflect on What Interests You

Think about the program, project, population, participant, problem, phenomenon, policy, practice, process, or product about which you would like to learn. For instance, are you interested in discovering students' experiences learning in field settings, the integration of theory and practice, how students learn online, becoming a culturally competent instructor, or customer satisfaction? Starting with a topic about which you have a passion helps to sustain you throughout the research process. It also helps you to find a design that fits your passion rather than needing to find a passion that fits a design!

Step Two: Draft a Statement Identifying your Preliminary Area of Interest and Justifying Its Scholarly and/or Practical Importance

Compose a simple sentence or two in which you state your beginning area of curiosity and explain why the topic is significant, relevant, and worthy of study. By doing so you begin to address the "so what" question right away. For instance, if you select "how students learn online" as your preliminary area of interest, you might cite the increase in the number of students learning online or the growth of online programs and acknowledge the challenges involved with learning and teaching online as reasons why the topic would be worthy of further study. You could also cite a gap in the education research literature on this topic as another reason for wanting to pursue this area of inquiry.
In addition, you can reflect upon your personal perspectives in relation to your preliminary area of interest and record your hopes, aspirations, and biases as an educator. As you progress through the rest of these steps, refer back to this record from time to time in order to assess how your personal perspectives are shaping the research process (e.g., biasing data analysis or research design).

Step Three: Hone your Topic Focus

Now that you have begun to articulate your area of interest, begin to hone your focus by considering the choices you need to make in order to design your study. For example, if you have selected "how students learn online" as your topic, explore the options you can exercise by deliberating on the following questions:

Who: Who do you want to study, and from whose perspective do you want to learn about how students learn online (e.g., undergraduate, master's, and/or doctoral students, faculty members, program completers, students with specific demographics/characteristics like culture, race, religion, or ethnicity)?

What: What aspect of how students learn online would be your focus (e.g., students' experiences, evaluation of learning outcomes, participating in discussions, student-faculty interaction, student performance on assignments or examinations, faculty members' stories, or pre- and post-course development)?

When: When would you focus on this phenomenon (e.g., pre-matriculation, during the first year, throughout a course, or a combination of all of them)?

Where: Where would you observe or interact with this phenomenon (e.g., observing online electronic classrooms, interviewing students over the phone or the internet, focus group interviews with faculty members who have taught students in online environments, and/or surveys)?

Why: Why would you study this phenomenon (e.g., because you want to inform, perform, reform, transform, describe, interpret, explain, confirm, criticize, suggest, evaluate, or assess something)?
How: How will you generate data in order to study this phenomenon (e.g., administer a survey, conduct interviews, make observations, collect transcripts of online sessions, or gather student journals)?

You can see that each of these questions begins with words often associated with journalistic inquiries, because the investigative postures of both journalists and qualitative researchers are typified by open-ended inquisitiveness. This open-ended posture applies to both the discovery of your research focus and your methodological design. Also, these questions are just some of the ones you can ask about your study to help you discover the areas in which you need to make important procedural decisions and to decide what research methods will best help you achieve these design objectives.

Step Four: Compose your Initial Research Question or Hypothesis

Based upon your answers to the Who, What, Where, When, Why, and How questions, compose your initial research question. For example, one research question could be, "What are the experiences of doctoral students learning qualitative research in a primarily online learning environment?" In composing this research question, envision what would be the implications arising from the results of this study for education researchers, faculty members, students, program administrators, and other interested stakeholders. This question may change over time as you become more and more familiar with the phenomenon to be studied, so it is critical that you continually refer to the question to see if you are staying on course, or if you need to adjust the question as you learn more about what you know and still don't know about the area of study. In qualitative research it is perfectly okay to make adjustments to your research question as the inquiry develops, but it is critical that you are aware when these adjustments are made and make the appropriate adjustments to your design.
Trouble can arise "in the field" if you become interested in some new area of inquiry and lack the self-reflection to know when you are drifting. Again, it is okay to drift as long as you are aware of the changes made in the course of the inquiry and justify the corrections being made.

Step Five: Define your Goals and Objectives

Focus on the overall goals of your potential research study and the objectives that you must accomplish in order to achieve these goals. For example, if a goal is to learn more about the experiences of doctoral students learning qualitative research in a primarily online learning environment, relevant objectives could be (a) Conduct a literature search in order to learn what has been previously published on this topic, (b) Adjust the research question based upon the literature review, (c) Identify potential sites for collecting data, (d) Prepare the Institutional Review Board (IRB) protocol, etc. Make sure each goal and objective can be justified and evaluated so you can track the progress you are making and identify where problems are arising or where adjustments are being made.

Step Six: Conduct a Review of the Literature

Some researchers start their qualitative research process with a review of the literature, some delay their reviews until after the study is completed, and some continually review the literature throughout the research process (Chenail, Cooper, & Desir, 2010). Some qualitative researchers explore the literature to learn what is not known about a phenomenon and then formulate questions which will guide a discovery-oriented inquiry to uncover new evidence about the phenomenon in question.
With any of these approaches it is important that you identify key terms (e.g., students, doctoral students, qualitative research, education, and online learning) to guide the electronic searches of relevant databases (e.g., ProQuest, ERIC, and Google Scholar); in addition, you should also complement your electronic searches with systematic reviews of the references cited in the articles collected to locate additional sources.

Step Seven: Develop your Research Design

In qualitative research, your design is the system of choices you make that helps you to conceive and conduct your study in an orderly and effective manner. Develop a research design which will allow you to address your research question or hypothesis effectively and efficiently. For example, does your research question suggest a design that will permit you to take a stance of curiosity in your study, or one that is more critical in nature, or one that asks you to help foster change in the organization or situation in which you will conduct your research? With each of these areas of emphasis you would conceive your design to align with the essence of your research question and to put you in the best position to achieve your research goals. To accomplish this plan you will need to make choices in the following areas:

Design Concept: Conceptually, how do you design your study in order for you to address your research question or hypothesis and to meet your goals and objectives? For example, will the design help you to discover or explore basic patterns of a naturally occurring phenomenon, to evaluate or assess the performance of a project, to construct a theoretical model that helps to explain the relationships between different variables, to describe how participants understand their experiences regarding some aspect of their lives, or to work with participants to change their organization or system?
Will your study be a primary research study (e.g., I will collect new data to study), a secondary research study (e.g., I will study data previously collected as part of another study), or a meta-study (e.g., I will study previously published studies)? Your answers to these questions will help you select an appropriate design concept. You may have also noted that I used a bold font to emphasize certain words. All of these words denote a different type of research design: Exploratory (e.g., Stebbins, 2001), Evaluation (e.g., Patton, 2002), Explanatory (e.g., Charmaz, 2006), Descriptive (e.g., Giorgi, 2009), Change (e.g., Reason & Bradbury, 2008), Primary (e.g., Maxwell, 2005), Secondary (e.g., Heaton, 2004), and Meta (e.g., Major & Savin-Baden, 2010). You can find more helpful guides to qualitative research design in the appendix located at the end of this paper.

Participants: Depending on your choice of design, you will form different relationships with the sources of your data (i.e., people, places, audio and visual artifacts, etc.). Research participants can be engaged as sources of information for you, co-researchers to help you carry out the study, or change-agents with whom you consult. As you determine the participants' roles, you then need to decide: Who will participate in the study, how will I gain access and recruit them, and what precautions will I need to take in order to protect them from harm throughout the study? Answers to these questions will help you craft your inclusion criteria, sampling strategy, site location, and so forth.

Research Methodology: Different qualitative research methodologies have different strengths when it comes to meeting the needs of different design concepts. For example, ethnographic methodologies are well suited for primary research studies conceived to describe social phenomena, and grounded theory approaches are quite useful for generating explanatory models.
So whether your design concept is exploratory, descriptive, evaluative, or change-oriented, start by exploring and considering basic or traditional utilizations of a methodology (e.g., phenomenology to study the lived experience of a group of people, Finlay, 2011; ethnography to describe the symbols, signs, rituals, ceremonies, and practices of an organization, Murchison, 2010; or grounded theory to generate a theory or model of a social happening, Charmaz, 2006). Some traditional fits between these methodologies and your research questions, goals, and objectives might be optimal for your study, but if that is not the case, then after becoming more familiar with basic renderings and applications, you might then explore variations, hybrids, and improvisations which might have a better fit. By remaining pragmatically curious you will avoid the practice of letting methodology totally drive the research, rather than allowing your question and goals to organize the inquiry too.

Research Procedures: With each methodology you will need to decide what your procedures will be for selecting and sampling (e.g., convenient, purposeful, theoretical, random), and for generating, collecting, preparing, and analyzing the data (Maxwell, 2005). Through the execution of these procedures or methods you will actually carry out the design you have conceived. If you have selected a well-developed qualitative research methodology such as ethnography, an experienced author such as Fetterman (2009) will provide you with helpful procedural prescriptions, from data collection through data analysis, that you can adopt or adapt for your own study.
If you have decided to take an "eclectic" approach in your study, you may pick and choose or mix and match from different "designer" brands such as ethnography (e.g., Murchison, 2010), grounded theory (e.g., Corbin & Strauss, 2007), or phenomenology (e.g., Smith, Flowers, & Larkin, 2009), or from general qualitative research guides (e.g., Merriam, 2009), to create your set of data generation and analysis procedures. For example, you might construct and conduct your interviews based upon Kvale and Brinkmann's (2008) approach and select a coding system from those choices collected by Saldaña (2009). Whether you go with a designer or an eclectic approach, make sure the various procedures sync well with the others so the data flow is coherent and smooth. Also, if you are only incorporating some elements from a designer methodology, such as open and axial coding from grounded theory (Corbin & Strauss, 2007) to create codes and categories as part of the qualitative data analysis in your eclectic qualitative descriptive design, please do not refer to your study as being a grounded theory design or methodology, because unless your study is designed to generate a theory or model it is not grounded theory in the full, designer sense of the methodology. Calling an eclectic design by a designer methodology name is akin to selling a "knock-off" in fashion: If the purse was not designed and constructed to Gucci specifications, then don't call the bag a Gucci!

Quality Control: It is one matter to conceptualize a qualitative research study, but it is another concern to create a system by which you maintain quality control to ensure the study you conceived is the one you end up conducting. To focus yourself on this challenge there are many questions you can ask: How will I maintain rigor (e.g., reliability, validity, trustworthiness, generalizability) throughout the study?
How will I identify and manage ethical concerns arising throughout the research? As you consider these questions, you can first consider how these areas are addressed indigenously in the methodological and philosophical tradition you are considering for your design. In other words, when in phenomenology land, do as the phenomenologists do! Depending on context, you might want to incorporate a more generic approach to quality control, for example Lincoln and Guba's (1985) trustworthiness, or embrace some other qualitative research traditions for ideas (King, Keohane, & Verba, 1994; Lamont & White, 2008). As with the choices of research procedures discussed above, make sure the quality control measures you select cohere with the design concept, methodology, and data collection and analysis decisions also being made. As you make methodological decisions in each of these areas, take care to ensure that your choices align with each other (Chenail, 1997). For example, with the variety of grounded theory designs available, your epistemological stance should be in basic agreement with that of the grounded theorist you select (e.g., Charmaz's 2006 version of grounded theory as your methodology with a constructivist epistemology). If such an alignment is not the case, then you will need to explain and justify your variations.

Step Eight: Conduct a Self-assessment in Order to Determine What Strengths You Have That Will Be Useful in your Study and What Skills You Will Need to Develop in Order to Complete your Study

Whether considering the qualitative researcher as the instrument (Lincoln & Guba, 1985), a bricoleur (Denzin & Lincoln, 1994), or a competent practitioner (Polkinghorne, 2010), certain skills, knowledge, and attitudes are needed to carry out the study effectively. Review your plan and identify what skills and knowledge base you will need to complete the study successfully.
Develop a growth plan for helping you to master the competencies you will need throughout the study (e.g., open-ended interviewing, taking field notes, using qualitative data analysis packages, writing, etc.). You can combine this development process with your efforts to test and refine the procedures entailed in your design. For example, you can practice your interviewing skills and improve the instrumentation in your study by interviewing yourself and recording and analyzing the results (Chenail, 2009). You may also consider creating a team or involving consultants to assist with your areas in need of development. Remember to reflect upon your personal context and point of view, which may bias you during the study, and record your plan for managing this perspective throughout the qualitative research project.

Step Nine: Plan, Conduct, and Manage the Study

Successful qualitative research projects involve careful management of four different yet connected studies: (a) the study proposed, (b) the study conducted, (c) the study reported, and (d) the study of these studies. Develop an action plan detailing the steps you need to take in order to begin and complete your studies. Depending on the study, the elements you will need to address include: people (including yourself), communication, data (including back-up systems), analysis, results, technology, time, money, ethical concerns (including securing institutional approvals), and other resources. Maintain a chronicle of your research activities (e.g., lab notebook, journal, diary, audit trail, and time and effort reports) and save supporting documentation. Throughout the life of your studies you will need to make sure they remain in a coherent relational pattern. For example, it is easy to drift into other areas of interest as you begin to conduct your study, but you need to reflect back upon your study as proposed to make sure that you stay focused on the goals and objectives.
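The audit trail recommended here is, at its simplest, a chronological log of methodological decisions. As a minimal sketch only (the article prescribes no software, and every name, class, and entry below is hypothetical), such a log could be kept as structured records:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One methodological decision captured for the audit trail."""
    day: str        # ISO date the decision was made
    step: str       # which of the ten steps it falls under
    choice: str     # the decision itself
    rationale: str  # why it was made, for later reflection

@dataclass
class AuditTrail:
    """Chronological log of decisions; illustrative only."""
    entries: list = field(default_factory=list)

    def record(self, day: str, step: str, choice: str, rationale: str) -> None:
        self.entries.append(Decision(day, step, choice, rationale))

    def by_step(self, step: str) -> list:
        """All decisions recorded under a given step."""
        return [e for e in self.entries if e.step == step]

# Hypothetical entries showing how the log accumulates over a project.
trail = AuditTrail()
trail.record("2011-03-01", "Step Seven",
             "Adopt Charmaz (2006) constructivist grounded theory",
             "Epistemology aligns with the research question")
trail.record("2011-04-12", "Step Nine",
             "Add a second coder to the analysis team",
             "Strengthen the trustworthiness of the category system")
print(len(trail.by_step("Step Nine")))  # → 1
```

Whatever form the chronicle takes (notebook, diary, or a log like this), the point is the same: every deviation from the proposed study is dated, named, and justified, so the study proposed, the study conducted, and the study reported can be reconciled later.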
Of course, qualitative research design can be iterative, meaning you can make adjustments along the way. In the event of these corrective changes, make sure you are aware as you make these deviations and revise your study plan or study report accordingly.

Step Ten: Compose and Submit Your Report

Depending on the vehicle you will use to report your study (e.g., dissertation, thesis, scholarly paper, poster, or conference presentation), identify the relevant policies and rules governing the form, substance, and submission of the report (e.g., school or departmental guidelines, journal article submission requirements, book prospectus elements, the style manual of the American Psychological Association, 2010, etc.) and report and submit your findings in compliance with these parameters. Even though there can be a variety of outlets to make the results of your study public, a typical reporting format would be as follows:

• Introduction and Review of Literature
• Methodology
• Findings or Results
• Discussion of Implications and Limitations of the Results

It is important to think about the form in which you will present your study early and often so you do not wait until the end of your study to write up your report. For example, you might draft a working title and abstract for your paper in progress. Both of these elements might start out being vague and abstract, but as you make your methodological choices and determine your findings and implications you will be able to make the title and the abstract clearer and more concrete. As you compose these separate sections, make sure the ways in which you characterize your focus, method, and findings cohere across the title, abstract, and body of the report (Chenail, Duffy, St. George, & Wulff, 2009).
Also, if you compose your title and abstract during the conceptualization or proposal phase, you should consider revising your title from its proposal form (e.g., phenomenon, focus, and method) to one more fitting of a completed study (i.e., one that includes a reference to the findings). Lastly, be prepared to write and re-write your report a number of times until you have accurately represented the process and outcome of your qualitative research project.

Discussion

The challenge of conducting a qualitative research study successfully is to manage choices well throughout the inquiry. In starting your first study you will quickly realize that one decision made usually opens up multiple new decisions that you will also have to address. For example, after you decide your study will be an exploratory one, you will have to decide which qualitative research methodology will best fit your research question. If you then select grounded theory (Glaser & Strauss, 1967), you next will need to figure out what style of grounded theory works for the project. Once you have chosen the Glaser variation (Glaser, 1994), you then will need to work out how you will actually carry out your exploratory Glaserian grounded theory study, and so forth. Although I have presented these steps in a particular order, it is important to remember that the conceptualization and conduct of qualitative research is a circular, recursive, and reflective process. The decision-making process in research can best be understood as an integrated system in which choices influence choices, so although a particular procedural choice is made at one point in the research process, this choice may need to be re-considered as other issues arise or as new insights emerge in the research undertaking.
This iterative aspect of qualitative research means you should continuously check and re-check the decisions made for these ten steps and judge and re-judge their effectiveness and coherence. Given the nature of the enterprise, it is critical you manage not only the study proposed and conducted, but also the study of these studies. In this reflective process, you can record the decision-making process via a journal or diary and retain evidence of the changes to form an audit trail. Such a practice serves not only as a quality control system to help with the research management, but can also be the inspiration for creative improvisations as new choices are considered and possibly implemented. In making these methodological decisions in qualitative research studies, the best compass for you remains the research question. You should consult it often and let it be the guide that keeps your design and methodological choices transparent, coherent, and simple. In the world of methodological plurality no design choice is right in and of itself; instead, as a qualitative researcher you must consider each step made along the way and justify each decision in terms of its fit with your interests, goals, and objectives and with the other choices already made in the study and those which will be made in the future of the investigation. By taking and re-taking these ten steps, you will remain pragmatically curious as you conceptualize and conduct qualitative research of quality and utility.

References

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.

Barker, C., & Pistrang, N. (2004). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35(3/4), 201-212.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.
Chenail, R. J. (1997). Keeping things plumb in qualitative research. The Qualitative Report, 3(3).

Chenail, R. J. (2000). Navigating the “seven c’s”: Curiosity, confirmation, comparison, changing, collaborating, critiquing, and combinations. The Qualitative Report, 4(3/4).

Chenail, R. J. (2009). Interviewing the investigator: Strategies for addressing instrumentation and researcher bias concerns in qualitative research. The Weekly Qualitative Report, 2(3), 14-21.

Chenail, R. J. (2011). How to conduct clinical qualitative research on the patient’s experience. The Qualitative Report, 16(4), 1173-1190.

Chenail, R. J., Cooper, R., & Desir, C. (2010). Strategically reviewing the research literature in qualitative research. Journal of Ethnographic & Qualitative Research, 4, 88-94.

Chenail, R. J., Duffy, M., St. George, S., & Wulff, D. (2009). Facilitating coherence across qualitative research papers. The Weekly Qualitative Report, 2(6), 32-44.

Corbin, J., & Strauss, A. (2007). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Thousand Oaks, CA: Sage.

Creswell, J. W. (2007). Qualitative inquiry & research design: Choosing among five approaches (2nd ed.). Thousand Oaks, CA: Sage.

Creswell, J. W., Klassen, A. C., Plano Clark, V. L., & Clegg Smith, K. (2011). Best practices for mixed methods research in the health sciences. Washington, DC: Office of Behavioral and Social Sciences Research.

Denzin, N. K., & Lincoln, Y. S. (1994). Introduction: Entering the field of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 1-18). Thousand Oaks, CA: Sage.

Dobrovolny, J. L., & Fuentes, S. C. G. (2008). Quantitative versus qualitative evaluation: A tool to decide which to use.
Performance Improvement, 47(4), 7-14.

Fetterman, D. M. (2009). Ethnography: Step-by-step (3rd ed.). Thousand Oaks, CA: Sage.

Finlay, L. (2011). Phenomenology for therapists: Researching the lived world. Malden, MA: Wiley-Blackwell.

Flemming, K., Adamson, J., & Atkin, K. (2008). Improving the effectiveness of interventions in palliative care: The potential role of qualitative research in enhancing evidence from randomized controlled trials. Palliative Medicine, 22(2), 123-131.

Giorgi, A. (2009). The descriptive phenomenological method in psychology: A modified Husserlian approach. Pittsburgh, PA: Duquesne University Press.

Glaser, B. G. (1994). Basics of grounded theory analysis: Emergence versus forcing. Mill Valley, CA: Sociology Press.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.

Heaton, J. (2004). Reworking qualitative data. London: Sage.

Keenan, K. F., & van Teijlingen, E. (2004). The quality of qualitative research in family planning and reproductive health care. Journal of Family Planning & Reproductive Health Care, 30(4), 257-259.

Kvale, S., & Brinkmann, S. (2008). Interviews: Learning the craft of qualitative research interviewing (2nd ed.). Thousand Oaks, CA: Sage.

King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.

Lamont, M., & White, P. (2008). Interdisciplinary standards for systematic qualitative research. Washington, DC: National Science Foundation.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Mahrer, A. R. (1988). Discovery-oriented psychotherapy research: Rationale, aims, and methods. American Psychologist, 43, 694-702.

Mahrer, A. R., & Boulet, D. B. (1999). How to do discovery-oriented psychotherapy research. Journal of Clinical Psychology, 55(12), 1481-1493.
Major, C., & Savin-Baden, M. (2010). An introduction to qualitative research synthesis: Managing the information explosion in social science research. London: Routledge.

Maxwell, J. A. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage.

Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49(1), 1-19.

Morse, J. M. (2006). The politics of evidence. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry and the conservative challenge (pp. 79-92). Walnut Creek, CA: Left Coast Press.

Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.

Murchison, J. M. (2010). Ethnography essentials: Designing, conducting, and presenting your research. San Francisco, CA: Jossey-Bass.

Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

Polkinghorne, D. E. (2010). Qualitative research. In J. Thomas & M. Hersen (Eds.), Handbook of clinical psychology competencies (Part 3, pp. 425-456). New York, NY: Springer Science+Business Media.

Reason, P., & Bradbury, H. (Eds.). (2008). The Sage handbook of action research: Participative inquiry and practice (2nd ed.). London: Sage.

Riessman, C. (2007). Narrative methods for the human sciences. Thousand Oaks, CA: Sage.

Saldaña, J. (2009). The coding manual for qualitative researchers. London: Sage.

Sandelowski, M. (2004). Using qualitative research. Qualitative Health Research, 14(10), 1366-1386.

Smith, J. A., Flowers, P., & Larkin, M. (2009). Interpretive phenomenological analysis: Theory, method, and research. London: Sage.

Stebbins, R. A. (2001). Exploratory research in the social sciences. Thousand Oaks, CA: Sage.

Verhoef, M. J., Casebeer, A. L., & Hilsden, R. J. (2002).
Assessing efficacy of complementary medicine: Adding qualitative research methods to the “gold standard.” The Journal of Alternative and Complementary Medicine, 8(3), 275-281.

Appendix

Qualitative Research Designs and Methodologies

Qualitative Research Designs

Butler-Kisber, L. (2009). Qualitative inquiry: Thematic, narrative and arts-informed perspectives. London: Sage.

Creswell, J. W. (2007). Qualitative inquiry & research design: Choosing among five approaches (2nd ed.). Thousand Oaks, CA: Sage.

Erlandson, D. A., Harris, E. L., Skipper, B. L., & Allen, S. D. (1993). Doing naturalistic inquiry: A guide to methods. Newbury Park, CA: Sage.

King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.

Lamont, M., & White, P. (2008). Interdisciplinary standards for systematic qualitative research. Washington, DC: National Science Foundation.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Maxwell, J. A. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage.

Marshall, C., & Rossman, G. B. (2006). Designing qualitative research (4th ed.). Thousand Oaks, CA: Sage.

Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.

Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

Saldaña, J. (2009). The coding manual for qualitative researchers. London: Sage.

Sandelowski, M. (2000). Whatever happened to qualitative description? Research in Nursing & Health, 23(4), 334-340.

Silverman, D. (2009). Doing qualitative research (3rd ed.). London: Sage.

Silverman, D., & Marvasti, A. (2008). Doing qualitative research: A comprehensive guide. Thousand Oaks, CA: Sage.

Stake, R. E. (2010). Qualitative research: Studying how things work. New York, NY: Guilford.

Stebbins, R. A. (2001).
Exploratory research in the social sciences. Thousand Oaks, CA: Sage.

Yin, R. K. (2011). Qualitative research from start to finish. New York, NY: Guilford.

Qualitative Evaluation Designs

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.

Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2011a). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

Patton, M. Q. (2011b). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.

Shaw, I. F. (1999). Qualitative evaluation. London: Sage.

Mixed-Method Designs

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.

Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.

Creswell, J. W., Klassen, A. C., Plano Clark, V. L., & Clegg Smith, K. (2011). Best practices for mixed methods research in the health sciences. Washington, DC: Office of Behavioral and Social Sciences Research.

Hesse-Biber, S. N. (2010). Mixed methods research: Merging theory with practice. New York, NY: Guilford.

Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in social & behavioral research. Thousand Oaks, CA: Sage.

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks, CA: Sage.

Ethnography

Angrosino, M. (2008). Doing ethnographic and observational research. Thousand Oaks, CA: Sage.

Chang, H. (2008). Autoethnography as method. Walnut Creek, CA: Left Coast.

Emerson, R.
M., Fretz, R. I., & Shaw, L. L. (2011). Writing ethnographic fieldnotes (2nd ed.). Chicago, IL: University of Chicago Press.

Fetterman, D. M. (2009). Ethnography: Step-by-step (3rd ed.). Thousand Oaks, CA: Sage.

Kozinets, R. V. (2009). Netnography: Doing ethnographic research online. London: Sage.

LeCompte, M. D., & Schensul, J. J. (1999). Designing and conducting ethnographic research. Lanham, MD: AltaMira.

Murchison, J. M. (2010). Ethnography essentials: Designing, conducting, and presenting your research. San Francisco, CA: Jossey-Bass.

Spradley, J. P. (1979). The ethnographic interview. New York, NY: Holt, Rinehart and Winston.

Spradley, J. P. (1980). Participant observation. New York, NY: Holt, Rinehart and Winston.

Van Maanen, J. (2011). Tales of the field: On writing ethnography (2nd ed.). Chicago, IL: University of Chicago Press.

Grounded Theory

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.

Charmaz, K., & Bryant, A. (2007). The SAGE handbook of grounded theory. Thousand Oaks, CA: Sage.

Clarke, A. E. (2005). Situational analysis: Grounded theory after the postmodern turn. Thousand Oaks, CA: Sage.

Corbin, J., & Strauss, A. (2007). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Thousand Oaks, CA: Sage.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.

Morse, J. M., Stern, P. N., Corbin, J., Bowers, B., Charmaz, K., & Clarke, A. E. (2009). Developing grounded theory: The second generation. Walnut Creek, CA: Left Coast.

Phenomenology

Finlay, L. (2011). Phenomenology for therapists: Researching the lived world. Malden, MA: Wiley-Blackwell.

Giorgi, A. (2009). The descriptive phenomenological method in psychology: A modified Husserlian approach. Pittsburgh, PA: Duquesne University Press.

Moustakas, C. (1990).
Heuristic research: Design, methodology, and applications. Newbury Park, CA: Sage.

Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.

Pollio, H. R., Henley, T. B., & Thompson, C. J. (1997). The phenomenology of everyday life. Cambridge, England: Cambridge University Press.

Smith, J. A., Flowers, P., & Larkin, M. (2009). Interpretive phenomenological analysis: Theory, method, and research. London: Sage.

van Manen, M. (1990). Researching lived experience: Human science for an action sensitive pedagogy. Albany, NY: The State University of New York.

Zichi Cohen, M., Kahn, D. L., & Steeves, R. H. (2000). Hermeneutic phenomenological research: A practical guide for nurse researchers. Thousand Oaks, CA: Sage.

Case Study

Byrne, D., & Ragin, C. C. (Eds.). (2009). The SAGE handbook of case-based methods. Thousand Oaks, CA: Sage.

Gerring, J. (2007). Case study research: Principles and practices. Cambridge: Cambridge University Press.

Simons, H. (2009). Case study research in practice. London: Sage.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Yin, R. K. (2008). Case study research: Design and methods (4th ed.). Thousand Oaks, CA: Sage.

Narrative Analysis and Inquiry

Clandinin, D. J. (Ed.). (2007). Handbook of narrative inquiry: Mapping a methodology. Thousand Oaks, CA: Sage.

Clandinin, D. J., & Connelly, F. M. (2004). Narrative inquiry: Experience and story in qualitative research. San Francisco, CA: Jossey-Bass.

Elliott, J. (2006). Using narrative in social research: Qualitative and quantitative approaches. London: Sage.

Riessman, C. (2007). Narrative methods for the human sciences. Thousand Oaks, CA: Sage.

Webster, L., & Mertova, P. (2007). Using narrative inquiry as a research method: An introduction to using critical event narrative analysis in research on learning and teaching. New York, NY: Routledge.

Discourse and Conversation Analysis

Hutchby, I., & Wooffitt, R. (2008). Conversation analysis (2nd ed.).
Cambridge: Polity.

Phillips, N., & Hardy, C. (2002). Discourse analysis: Investigating processes of social construction. Thousand Oaks, CA: Sage.

Psathas, G. (1995). Conversation analysis: The study of talk-in-interaction. Thousand Oaks, CA: Sage.

Rapley, T. (2008). Doing conversation, discourse and document analysis. Thousand Oaks, CA: Sage.

ten Have, P. (2007). Doing conversation analysis (2nd ed.). London: Sage.

Wodak, R., & Meyer, M. (2009). Methods for critical discourse analysis (2nd ed.). London: Sage.

Secondary Qualitative Data Analysis

Corti, L., Witzel, A., & Bishop, L. (Eds.). (2005). Secondary analysis of qualitative data [Special issue]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 6(1).

Gladstone, B. M., Volpe, T., & Boydell, K. M. (2007). Issues encountered in a qualitative secondary analysis of help-seeking in the prodrome to psychosis. Journal of Behavioral Health Services & Research, 34(4), 431-442.

Heaton, J. (1998). Secondary analysis of qualitative data. Social Research Update, Issue 22.

Heaton, J. (2004). Reworking qualitative data. London: Sage.

Qualitative Metasynthesis

Dixon-Woods, M., Booth, A., & Sutton, A. J. (2007). Synthesizing qualitative research: A review of published reports. Qualitative Research, 7(3), 375-422.

Finfgeld, D. L. (2003). Metasynthesis: The state of the art - so far. Qualitative Health Research, 13(7), 893-904.

Major, C., & Savin-Baden, M. (2010). An introduction to qualitative research synthesis: Managing the information explosion in social science research. London: Routledge.

Paterson, B. L., Thorne, S. E., Canam, C., & Jillings, C. (2001). Meta-study of qualitative health research. Thousand Oaks, CA: Sage.

Pope, C., Mays, N., & Popay, J. (2007). Synthesizing qualitative and quantitative health evidence: A guide to methods. New York, NY: McGraw Hill.

Sandelowski, M., & Barroso, J.
(2007). Handbook for synthesizing qualitative research. New York, NY: Springer.

Thorne, S., Jensen, L., Kearney, M. H., Noblit, G., & Sandelowski, M. (2004). Qualitative metasynthesis: Reflections on methodological orientation and ideological agenda. Qualitative Health Research, 14(10), 1342-1365.

Collaborative Inquiry, Action Research, Participatory Action Research, and Appreciative Inquiry

Bray, J. N., Lee, J., Smith, L. L., & Yorks, L. (2000). Collaborative inquiry in practice: Action, reflection, and making meaning. Thousand Oaks, CA: Sage.

Cooperrider, D. L., & Whitney, D. (2005). Appreciative inquiry: A positive revolution in change. San Francisco, CA: Berrett-Koehler Communications.

McIntyre, A. (2008). Participatory action research. Thousand Oaks, CA: Sage.

Reason, P., & Bradbury, H. (Eds.). (2008). The Sage handbook of action research: Participative inquiry and practice (2nd ed.). London: Sage.

Stringer, E. T. (2007). Action research (3rd ed.). Thousand Oaks, CA: Sage.

Whitehead, J., & McNiff, J. (2006). Action research: Living theory. London: Sage.

Whitney, D., & Trosten-Bloom, A. (2010). The power of appreciative inquiry: A practical guide to positive change (2nd ed.). San Francisco, CA: Berrett-Koehler.

Author Note

Ronald J. Chenail is the Editor-in-Chief of The Qualitative Report and The Weekly Qualitative Report at Nova Southeastern University (NSU), where he also serves as the Vice President for Institutional Effectiveness, Director of NSU's Graduate Certificate in Qualitative Research, and Professor of Family Therapy. Correspondence regarding this article can be addressed to Dr. Ronald J. Chenail at Nova Southeastern University, 3301 College Avenue, Fort Lauderdale, FL 33314-7796, USA; Telephone: 954.262.5389; Fax: 954.262.3970; E-mail: [email protected]

Copyright 2011: Ronald J. Chenail and Nova Southeastern University

Article Citation

Chenail, R. J. (2011).
Ten steps for conceptualizing and conducting qualitative research studies in a pragmatically curious manner. The Qualitative Report, 16(6), 1713-1730.
