1. Describe one educational research study for which qualitative research would be optimal and a second research study for which quantitative research would be optimal. Describe a third study in which mixed-method research would be optimal.
2. Read Chapter 8 in An Education Research Primer: How to Understand, Evaluate, and Use It. What do you consider the most important points of this chapter that have enhanced your learning?


Chapter 8
Evaluating Descriptive Research Studies

Copyright © 2006 John Wiley & Sons, Inc.

Chapter Seven demonstrates the criteria for evaluating experimental research studies, and this chapter demonstrates the criteria for evaluating the other major type of research—descriptive research studies. It is probably a safe assumption that in education, there are many more descriptive studies than experimental studies. Descriptive studies have the potential to provide important information on what, why, and how things are happening in schools. Unfortunately, many descriptive studies are experimental “wannabes.” The researchers of these studies use descriptive methods but they want to make claims about what works. This chapter shows what to look for in a descriptive study to avoid accepting causal claims. It also considers the design features that characterize good descriptive studies.

The two examples that follow are synopses of descriptive studies reported in the education research literature. As in Chapter Seven, each study is summarized and then analyzed by using specific details from the study. Interested readers can obtain the reports and read them in their entirety.
The reference information for the reports is available in the References section. (Tips on reading research reports are provided in Appendix A.)

Evaluation of a Comparative/Correlational Descriptive Research Study

All descriptive studies have the same obvious purpose—to describe. As explained in Chapter Three, descriptive studies differ primarily in their complexity and in how the data are collected and analyzed. The study in the first example is a comparative descriptive study. It describes and compares data from two groups of participants—teachers and principals—and it also compares different types of teachers. The study is also a correlational descriptive study. It measures the statistical associations between and among several dependent variables.

Comparative Descriptive Study: Listening to the Experts:
A Report on the 2004 South Carolina Teacher Working Conditions

The purposes of this study by Hirsch (2004) were to describe teacher working conditions in the state of South Carolina, demonstrate the relationships of these conditions to teacher retention and student achievement, and make recommendations to the state, districts, and schools on how to improve teacher working conditions. Data were collected through an online teacher survey that asked teachers about working conditions related to time, empowerment, facilities and resources, leadership, professional development, and induction and mentoring. A separate online survey asked administrators their perceptions of teacher working conditions. Student achievement and teacher retention data were obtained from databases provided by the state. Over 15,000 South Carolina teachers completed the survey, with representation from all the school districts and 90 percent of the schools. The data were analyzed with descriptive statistics and multiple regression analyses. The primary findings were (1) teacher working conditions predict student achievement, (2) teacher working conditions influence teacher retention, (3) teachers’ perceptions of their working conditions reflect reality, (4) teachers and principals have similar perceptions of teacher working conditions, (5) teachers with different experience and background have similar perceptions of working conditions, and (6) working conditions are correlated so that if there are positive perceptions of one area, especially of leadership, there are positive perceptions of all the areas. The report concludes with recommendations, including the need for continued study about teacher working conditions in South Carolina, the need to invest in high-quality leadership, and the need to reduce inequities in the mentoring and induction of new teachers.

Research Question

This type of study is often called a policy study because its main
purpose is to inform policy decisions, such as legislation and funding allocations. Reports on policy studies usually are written for broad audiences that include people who are unfamiliar with research and statistical terminology. Although this approach is helpful to many readers, it can make it difficult to “unpack” or analyze the study. For example, the Hirsch study lacks a clear statement of the research question. Instead, the research question is implied in the introduction to the report as a directive of the South Carolina Teacher Working Conditions to document and analyze teacher working conditions in the state (pp. 1–2). The research question seems to be, What are the working conditions of South Carolina teachers? However, in explaining how the report is organized, the researcher says, “This report demonstrates that working conditions are critical to increasing student achievement” (p. 4). Thus, another research question is, What is the relationship of teacher working conditions to student achievement? The use of the word “increasing” is troublesome because it suggests that a causal relationship is being investigated.

Research Design

The implied research questions indicate that a descriptive
research design is needed, including one that can examine associations between the dependent variables of working conditions and student achievement. The method section of the report explains that teachers were surveyed about their perceptions of working conditions, and perceptions of teachers with different experiences and backgrounds were compared (p. 3). Multiple regression techniques were used to connect survey responses to student achievement and teacher retention. Although not specified, this explanation of the method indicates a descriptive research design that is both comparative and correlational. The lack of a clear statement about the research design is typical of reports for policymakers and practitioners. The main challenge for the reader is to determine whether the study is descriptive or experimental. Once that determination is made, the rest of the study can be evaluated based on the research design.

Participants

The participants in the study were 15,202 teachers in South Carolina
who completed an online teacher survey (p. 3). This number represents teachers from 90 percent of the schools in South Carolina and all of the school districts. An endnote to the executive summary provides a more specific description: “At least one survey was returned from 990 of the state’s 1,100 public schools. Surveys were returned from all school districts. . . .” (p. 43). (Notes and appendices are often sources of important technical information in policy reports.) The overall response rate for the state was 28 percent, a relatively low response rate, which the report does not address. A subsample of the teachers was used for the correlational analyses (pp. 4–5).
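The subsampling step just mentioned, keeping only schools whose survey response rate reached the cutoff and averaging their teachers’ responses into school-level scores, can be sketched in a few lines of Python. All school names, teacher counts, and ratings below are invented for illustration; only the 28 percent cutoff comes from the report.

```python
from statistics import mean

# Hypothetical data, invented for illustration: each school's total
# teacher count and the survey ratings its responding teachers gave.
schools = {
    "school_A": {"teachers": 10, "ratings": [3.2, 3.8, 4.0]},  # 30% responded
    "school_B": {"teachers": 10, "ratings": [2.9]},            # 10% responded
}

MIN_RESPONSE_RATE = 0.28  # the report's cutoff for the correlational subsample

def school_level_scores(schools, cutoff=MIN_RESPONSE_RATE):
    """Drop schools below the response-rate cutoff, then average the
    remaining schools' teacher ratings into one score per school."""
    return {
        name: mean(s["ratings"])
        for name, s in schools.items()
        if len(s["ratings"]) / s["teachers"] >= cutoff
    }

scores = school_level_scores(schools)
print(scores)  # school_B falls below the 28 percent cutoff and is excluded
```

Averaging to the school level also changes the unit of analysis from the individual teacher to the school, which is the unit the report’s correlational results describe.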
There were 519 schools that had a response rate to the survey of at least 28 percent. Teacher responses for these schools were averaged, resulting in school-level scores for the working conditions survey. The researcher demonstrates that this subsample of 519 schools is similar to the other 581 schools in the state by comparing the two groups on demographic and teacher quality characteristics, such as percentage of students eligible for free and reduced lunch, percentage of highly qualified teachers (presumably as defined by NCLB), teacher retention rate, and average teacher salary. A table lists the descriptive statistics for the two groups of schools, but no inferential statistics are reported. Nonetheless, the breakdown on the two groups is important for generalization purposes. The table suggests that the subsample of schools used for data analyses and conclusions is similar to the other schools in the state (schools with a response rate of 0 percent to less than 28 percent). This means that the findings from the subsample of schools probably can be generalized to the other schools in the state.

Treatment

The concept of treatment is associated more often with experimental
than descriptive research. In experimental research, the treatment is manipulated so that some participant groups receive the treatment and some do not, or different groups receive different treatments. In descriptive research, the treatment is not manipulated. The researcher studies the treatment “as is.” To identify the treatment in a descriptive research study, ask the following question: What program, policy, or practice is the researcher studying? In this example, the researcher is studying teacher working conditions. Two important characteristics of treatments in descriptive studies are the operational definition and the construct validity of the treatment. In the example, the researcher discusses these in the context of the survey that is used to collect the data about teacher working conditions.

Data-Collection Instruments and Procedures

In the Hirsch study, the data-collection instrument is critical to both defining the
treatment and to collecting data about it. The report explains in detail how the Working Conditions Survey was developed (pp. 2–3). The survey is a modification of one that the researcher’s organization used to study teacher working conditions in North Carolina. That state had conducted focus groups and research to develop standards for teacher working conditions in the categories of time, empowerment, facilities and resources, leadership, and professional development. This work informed the development of a thirty-nine-question paper survey. The survey was then changed to an online format for a second administration in North Carolina. The online survey had seventy-two questions in the same categories but with additional questions on professional development and on work outside the school day. The report noted that some of these new questions were taken from the National School and Staffing Survey and were therefore “previously asked and validated” (p. 2). However, there is no information on the number of such questions or what is meant by “validated” (that is, validated for what?). The North Carolina survey was used in the current study after modification of questions so they were specific to South Carolina and after the addition of questions concerning teacher induction and mentoring.

To establish the construct validity of the Working Conditions Survey, “a statistical
factor analysis was conducted not only to ensure that the survey was well-constructed, but also to create domain averages that included only questions that truly explained the working conditions described” (p. 2). In addition, a stakeholder group that included thirty policymakers, administrators, and teachers was asked what questions best explained the different working condition domains. (The domain here refers to the categories of working conditions, such as time and leadership.) The stakeholder results indicated general agreement with the factor analysis. An endnote indicates “questions with a factor load of .3 were included in the domain” (p. 44). A factor loading can range from 0 to 1.00, similar to a correlation, so the relationship of individual questions to the domains might be weak. It would help to know the range of factor loadings. Some were presumably higher than 0.3. The report does not provide the reliability or internal consistency of the factor domains, which would indicate how well the survey questions for each domain “hang together.” Without this information, it is difficult to judge whether a particular domain average is a good measure of the associated working condition. Regarding procedure, teachers’ responses were anonymous. This is important information because participant reactivity is likely to influence responses under conditions that are not anonymous.

Data Analysis and Results

Almost the entire report is devoted to results and conclusions (pp. 6–37). This is
common in policy reports because researchers assume that the results and their implications are the primary concerns of the audience. However, the minimal attention given to methodology makes the evaluation of the methods challenging.
The chapter in the report titled “In-Depth Analysis of Teacher Working Condition Domains” (pp. 16–34) describes, illustrates, and discusses numerous descriptive and some inferential statistics on teachers’ responses to the different domains of working conditions. For example, the domain of time had an average rating of 3.11, which was significantly lower than the averages for the other domains. The researcher concluded, “teachers are not satisfied with the time they receive” (p. 17). This statement is difficult to interpret without knowing the scale on which satisfaction was measured. The introduction to the report describes a scale of 1 to 5 but does not indicate whether a 1 means low or high satisfaction. Only by examining the other domains that have higher averages can it be determined that a 1 is low satisfaction. In light of this, a result of 3.11 would not suggest dissatisfaction but rather a neutral attitude. Another conclusion about time is that “it appears that teachers attribute the time dilemma to teaching load and noninstructional duties” (p. 18). However, only 50 percent of the teachers responded in ways that would support this statement. A final conclusion about the domain of time is “teachers are solving the time dilemma by working on school related activities outside of the school day” (p. 18).
This statement is based on the finding that 32 percent of the teachers spend ten hours per week outside the school day on activities such as grading and conferences, as well as on other findings indicating teachers are working beyond the school day. Following the presentations on the time domain, the researcher makes some broad recommendations on addressing working conditions related to time.

The presentation format of conclusions, data summaries, and recommendations is used for each of the domains of working conditions that the study addresses. The data summaries in tables and graphs are helpful in explaining the results, but some lack information on the scale used to answer the survey questions. Another concern is the overgeneralization of the results. For example, “only one percent of South Carolina educators indicate that they receive [the] recommended amount of time for collaboration and development” (p. 17). The results actually refer to South Carolina educators who responded to the survey, and this apparently was only 28 percent of the educators.

The chapter in the report titled “What Has Been Discovered
About Teacher Working Conditions” (pp. 6–15) describes the results of the correlation and regression analyses for the 519 schools with a survey response rate of at least 28 percent. Schools that achieved the adequate yearly progress (AYP) requirements of NCLB were compared to those that did not meet AYP. Teachers in the AYP schools had significantly higher perceptions of working conditions than teachers in the non-AYP schools in each of the five working condition domains. (Mentoring and induction were not considered in this analysis.) The inferential statistic used to analyze these data was not reported. The report indicates that for the regression analysis, survey results on empowerment and professional development were statistically significant predictors of AYP status. The regression results are available in the appendix to the report (pp. 39–41). In another regression analysis, satisfaction with leadership and time were significant predictors of 2003–2004 teacher retention rates. A table presents bivariate correlations between each of sixteen working condition–dependent variables (including the five domains, for example, the domain of time, student–teacher ratio, and so forth) and 2003–2004 teacher retention rate. The chapter gives results of correlations and ANOVAs for additional findings, such as teacher versus principal perceptions. Overall, the researcher emphasizes the regression results for student achievement (that is, AYP status) and teacher retention, indicating these as new and important findings. However, the researcher uses causal language to describe the results, which is an error in interpretation (pp. 8, 11, 12). For example, “only by controlling for as many of the multitude of factors that contribute to student learning as possible, was the analysis able to isolate the relationship with teacher working conditions and identify causal connections” (p. 8). Statistical modeling through multiple regression is a sophisticated technique that helps identify and measure relationships among dependent variables. However, it does not identify causal relationships.

Rival Explanations

The purpose of many policy reports is to build a
foundation of research on which to make recommendations. Perhaps this is the reason there is more attention given to the outcomes and their implications than to methods and rival explanations. The Hirsch report addresses rival explanations but they are not labeled as such. All the data in the study come from responses to a survey. A possible rival explanation is that the survey questions do not measure the construct of teacher working conditions but some other construct (for example, teacher experience). The researcher counters this by discussing the development of the survey, including verification of the construct by stakeholders (pp. 2–3). In addition, he reports that teacher background and experience did not affect teachers’ perceptions of working conditions (p. 14). In survey studies, low response rate is a threat to validity similar to sample attrition. The question is whether the participants who responded to the survey are in some way different from those who did not respond to the survey. The researcher addresses this threat to validity by comparing the subsample of 519 schools used for the regression analyses to the other schools in the state (p. 3). The comparisons show that the two sets of schools were similar in characteristics related to student demographics and teacher quality (although statistical tests verifying the similarities are not reported). Perhaps the rival explanations that are of most concern are those related to data interpretation. For example, the suggestion that the findings demonstrate causal connections cannot be supported by the descriptive research design. In addition, the many references to the results of South Carolina teachers are not completely accurate. The results are actually from the 28 percent of South Carolina teachers who responded to the survey.

Translation of research results into accessible language is a particular challenge in reports for policymakers and practitioners. Overstating the findings is a common occurrence. That is why those who read and use research (for example, policymakers) must be able to judge whether the findings justify the conclusions.

Scientific Characteristics

Links with prior research are given in the introduction
to the report and throughout the two chapters that present the results. For example, the researcher describes how the survey results for the working condition of facilities and resources relate to results from national studies of that topic (pp. 21–22). The report does not delve into prior theory but it does present a clear rationale for the study. The potential for generalization within the state of South Carolina is probably high given the similarity of the subsample of schools to the schools that were not in the study. Whether the results would generalize to other states is an interesting question. There is some suggestion that they would, based on a similar study in North Carolina that th …