An annotation consists of three separate paragraphs that cover three respective components: summary, analysis, and application. These three components convey the relevance and value of the source. As such, an annotation demonstrates your critical thinking about, and authority on, the source.

Annotate one quantitative research article from a peer-reviewed journal on a topic of your interest. Provide the reference list entry for this article in APA Style, followed by a three-paragraph annotation that includes:

A summary
An analysis
An application as illustrated in this example

Format your annotation in Times New Roman, 12-point font, double-spaced. A separate References list page is not needed for this assignment.
Submit your annotation.
Please annotate the attached article and follow the instructions above.

Unformatted Attachment Preview

Terrorism and Political Violence
ISSN: 0954-6553 (Print) 1556-1836 (Online)
Research on Terrorism, 2007–2016: A Review of
Data, Methods, and Authorship
Bart Schuurman
Published online: 01 Mar 2018.
© 2018 Bart Schuurman. Published by Taylor & Francis.
Research on Terrorism, 2007–2016: A Review of Data,
Methods, and Authorship
Bart Schuurman
Institute of Security and Global Affairs, Leiden University, The Hague, The Netherlands
Research on terrorism has long been criticized for its inability to
overcome enduring methodological issues. These include an overreliance on secondary sources and the associated literature review
methodology, a scarcity of statistical analyses, a tendency for authors
to work alone rather than collaborate with colleagues, and the large
number of one-time contributors to the field. However, the reviews
that have brought these issues to light describe the field as it developed until 2007. This article investigates to what extent these issues
have endured in the 2007–2016 period by constructing a database
on all of the articles published in nine leading journals on terrorism
(N = 3442). The results show that the use of primary data has
increased considerably and is continuing to do so. Scholars have
also begun to adopt a wider variety of data-gathering techniques,
greatly diminishing the overreliance on literature reviews that was
noted from the 1980s through to the early 2000s. These positive
changes should not obscure enduring issues. Despite improvements,
most scholars continue to work alone and most authors are one-time
contributors. Overall, however, the field of terrorism studies appears
to have made considerable steps towards addressing long-standing
authorship; database;
journals; primary sources;
research on terrorism;
review; state of the art;
The academic study of terrorism is often described as beset by numerous and pervasive
conceptual and methodological problems.1 Critiques of the state of the art have appeared
since the 1970s and their conclusions have been worryingly similar.2 One of these is that
the definitional debate on what exactly constitutes terrorism continues to exert a detrimental influence on the field’s development. Another is that an overreliance on secondary
sources, and the associated predominance of the literature review method, have seriously
hampered the development of empirically-grounded insights and the ability to falsify the
myriad potential explanations for terrorism that have been put forward.3 The last in-depth
assessment of terrorism as a scholarly discipline, however, reviewed the field as it had
developed until 2007.4 Thousands of articles have appeared since, many of them in new
journals, and leading scholars have begun to question the prevailing pessimism.5 To what
degree do these critiques still reflect contemporary scholarship on terrorism?
CONTACT Bart Schuurman, Institute of Security and Global Affairs, Leiden University, Turfmarkt 99, 2511 DV, The Hague, The Netherlands
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.
This article presents the most extensive review of the field of terrorism studies to date.
Its goal is to assess whether the numerous methodological concerns raised over the past
decades continue to exert an influence on the contemporary literature. Are there signs that
the field is moving beyond these limitations, or are there grounds for continued skepticism
regarding the prospects for its maturation? To address these questions, a database was
constructed using all articles published from 2007 to 2016 in nine academic journals on terrorism.
Enduring issues in the study of terrorism
There is a long tradition of self-criticism within the literature on terrorism that began
soon after the field’s emergence in the 1960s and 1970s.6 In a 1977 book review, Bell
lamented that there were “no agreed definitions, no accepted bounds to the subject, no
very effective academic approach, no consensus on policy implications.”7 Schmid’s 1982
book Political Terrorism found that “despite its volume . . . much of the writing [on
terrorism] is impressionistic, anecdotal, superficial and at the same time often also
pretentious, venturing far-reaching generalizations on the basis of episodical evidence.”8
At a 1985 symposium, Gurr remarked on the “disturbing lack of good empirically-grounded research on terrorism.”9 In 1986, Crenshaw decried much existing research
for being unsystematic and ahistorical or alarmist.10 Concluding an edited volume in 1990,
Reich warned of a tendency toward overgeneralizations and simplistic explanations.11
Over a decade later, the situation had not improved. In his landmark 2001 study, Silke
reviewed all research published between 1995 and 1999 in the field’s two leading journals
—Terrorism and Political Violence (TPV) and Studies in Conflict & Terrorism (SCT). Over
80% of those publications relied solely on data gathered from secondary sources such as
books, articles, and media reporting. The literature review method was predominant
(62%), with other forms of data collection, such as interviews (10%) or the use of
databases (7%), trailing far behind.12 Moreover, a majority of papers (71–85% depending
on the year) did not use any kind of statistical analysis.13 Silke’s 2004 work laid bare
additional issues. Not only were there relatively few terrorism scholars, but over 90% of
papers published in the 1990s were the work of single authors and 83% of these papers
were written by one-time contributors.14 The field lacked dedicated researchers and there
was little collaboration between them, limiting the (intellectual) resources available for
addressing the topics being investigated.
The 9/11 attacks and the global war on terrorism that followed brought new funding
and researchers to the field.15 Yet, these changes do not appear to have significantly
addressed the various concerns outlined above. Reviews published in the late 2000s,
especially Silke’s 2007 and 2009 work, highlighted positive developments in terms of
increased collaboration among scholars, a somewhat higher frequency of statistical analyses, and more critical analytical perspectives, yet also noted that the reliance on secondary sources and literature review-based methods remained essentially unchanged.16
Emphasizing this last point, a 2006 literature review by Lum et al. found that only 3%
of the 6041 peer-reviewed articles on terrorism published between 1971 and 2003 that the
authors had studied were based on any kind of empirical data.17 While a 2008 study
found that circa 20% of articles provided new knowledge, the scarcity of empirical work
remains clear.18 Moreover, in a 2007 article, Gordon underlined that most publications on
terrorism continued to be the work of “one-timers.”19 In the years following 9/11,
terrorism research was still marred by a lack of methodological complexity, a dearth of
primary data, and few dedicated scholars.20
Although researchers are apt to point to the lack of definitional consensus on “terrorism” when discussing the field’s progress,21 this may be less of an issue than is often
thought.22 Research on terrorism has flourished, at the very least in terms of quantity,
despite a lack of far-reaching consensus on how to define the subject under investigation.23
Indeed, there are few fields where core concepts are not subject to ongoing debate.24
Arguably the more pressing issue has been the field’s tendency to rely too heavily on
secondary sources of limited detail and uncertain accuracy, principally newspaper articles,
and associated research methodologies. As Schmid and Jongman wrote as early as 1988,
“there are probably few areas in the social science literature in which so much is written
on the basis of so little research.”25
Because journalists are often the first to report on terrorism and terrorists, media-based
sources are particularly relevant to scholars. Yet, problems can arise when individual papers
or the broader field rely predominantly, or even entirely, on such materials to describe
terrorist phenomena or develop theories to explain them. While there are numerous journalistic accounts of terrorism that are of very high quality, media-based sources present potential
problems with regard to factual accuracy,26 editorial bias,27 and the underreporting of failed
or foiled terrorist attacks.28 These issues extend to the numerous large-N terrorism databases
that rely primarily on media-based information.29 Moreover, an overreliance on the broader
academic and “grey” secondary literature runs the risk that scholars talk amongst themselves,
rather than with or about first-hand data gathered on the subjects under investigation.30 One
of the consequences of these issues is that, while dozens of potential explanations for
involvement in terrorism exist, few if any can count on the substantial empirical validation
that is necessary to advance academic knowledge.31
Recent years have seen authors continue to underline the influence of these issues, but
there have also been signs that positive changes are underway. The field appears to have
matured as an academic discipline,32 leading scholars argue that considerable progress has
been made on advancing our understanding of key issues,33 quantitative approaches
appear to be adopted more frequently,34 and opportunities for gathering primary data
seem to have increased.35 In a 2013 review of 260 publications on radicalization that
appeared between 1980 and 2010, Neumann and Kleinmann encountered “clusters of
excellence that meet the highest scholarly standards.”36 They found that 54% of their
sample used primary sources, suggesting that not all areas within the broader field of
terrorism research are equally affected by the problems that Silke and others have identified.
At the same time, new concerns have arisen over the quality of the quantitative research
being conducted and the tendency to design research based on the available data, rather
than gathering the data required to address a particular question.38 Moreover, there is
enduring pessimism over the aforementioned methodological issues,39 the small number
of dedicated scholars,40 and the influence of pseudo-experts.41 This more critical position
was expressed most strongly by Sageman’s 2014 claim that research on terrorism had
“stagnated.”42 Yet, whatever position is taken, much of the current debate on the state of
terrorism research relies on data collected up to 2007.43 To accurately assess the degree to
which these issues endure, new research is needed.
Research design
The goal of this article is to provide a contemporary overview of the field of terrorism
studies that is detailed, extensive in its coverage, and able to chart developments over time.
To do so, data was gathered on all of the 3442 articles published between 2007 and 2016 in
nine journals on terrorism. This timeframe not only provides insights into how the field
has fared in the decade since Silke last reviewed it, but coincides with the creation of seven
new journals. Whereas previous reviews could focus on the field’s two core journals, TPV
(1989–present) and SCT (1977–present), an assessment of the current state of affairs
requires broadening the analytical scope to these seven newcomers: Perspectives on
Terrorism (POT, 2007–present), the Combating Terrorism Center Sentinel (SNT, 2007–
present), Critical Studies on Terrorism (CST, 2008–present), Dynamics of Asymmetric
Conflict: Pathways Toward Terrorism and Genocide (DAC, 2008–present), Behavioral
Sciences of Terrorism and Political Aggression (BSTPA, 2009–present), Journal of
Terrorism Research (JTR, 2011–present), and Journal for Deradicalization (JDR, 2014–present).
Data collection and analysis
A dataset on all publications in these journals was created using Microsoft Access. Data
collection was geared towards assessing the degree to which the various methodological
concerns noted by authors like Silke are still present in the literature on terrorism, and
whether a trend can be observed in their development over time. The following data was
recorded per article: title, author(s), publication year, publication type, method of data
collection, whether any primary data was utilized, if and what type of statistical analysis
was carried out, and a unique URL or DOI identifier. In order to enable a comparison
with earlier research, Silke’s categorizations for the types of research methods and the
types of statistical analyses were maintained.44 Data collection began in late 2015 and was
completed in September 2017.
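The per-article fields listed above can be pictured as a simple record structure. The sketch below is illustrative only: the author built the database in Microsoft Access, so the Python field names here are assumptions, not the actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArticleRecord:
    """One row of the dataset; field names are hypothetical."""
    title: str
    authors: list                  # one or more author names
    year: int                      # publication year
    publication_type: str          # e.g. "research article", "book review"
    data_collection_method: str    # e.g. "literature review", "interviews"
    uses_primary_data: bool
    statistics: Optional[str]      # None, "descriptive", or "inferential"
    doi_or_url: str                # unique identifier per article
```

A record like this captures every coding decision described in the text for a single article, which is what makes the later per-year and per-journal queries straightforward.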
The author conducted the bulk of the data collection but was aided by five research
assistants and interns over the course of the project. Their work was checked by the author
during regularly held meetings, by recoding random samples for accuracy, and by asking
them to highlight any articles they had questions about in a “comments” field specifically
included in the database for this purpose. These regular discussions and randomized
checks served to ensure coding reliability.
The first step in the data collection process was to enter an article’s name, year of
publication, DOI/URL, and type into the dataset. The various article types were condensed
into “research article,” “research note,” “book review,” “other resources” (e.g., interview
transcripts), “opinion piece,” “editorial introduction,” “editorial news/information” (e.g.,
list of contributors, conference announcements), “bibliography,” “conference proceedings/
summary,” and “erratum/retraction notice.”45 Next, each article’s title, abstract, and keywords were read. Sometimes this yielded all relevant data, but in the vast majority of cases
it was necessary to scroll through the article for a methods section and to see whether
tables or graphs were present, which would increase the likelihood of statistics being used.
Unless this step proved conclusive, a search for specific keywords was conducted (“interview,” “field work”/“fieldwork,” “archive,” “court,” “database,” “dataset,” “data,” “%”). If
this failed to yield conclusive results, the references were read to ascertain the types of
sources utilized.
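The keyword-search step described above can be sketched as a simple text scan. This is a minimal illustration of the idea, not the coders' actual tooling (the function name is hypothetical):

```python
# Keywords drawn from the list given in the text.
METHOD_KEYWORDS = ["interview", "field work", "fieldwork", "archive",
                   "court", "database", "dataset", "data", "%"]

def flag_possible_methods(article_text: str) -> list:
    """Return the method-related keywords found in an article's full text."""
    lowered = article_text.lower()
    return [kw for kw in METHOD_KEYWORDS if kw in lowered]

# A hit suggests the article may use primary data or statistics and
# warrants closer reading; an empty result sends the coder to the references.
hits = flag_possible_methods("We conducted semi-structured interviews.")
```

As in the procedure described, such a scan only narrows the search: a keyword hit still required human judgement before an article was coded as using primary data or statistics.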
Once data collection was complete, the dataset was analyzed using the Microsoft Access
and Microsoft Excel software packages. Access “queries” were created to enable specific
types of information to be drawn from the dataset, such as the number of articles using
primary sources published per year. This yielded data-subsets that were then imported
into Excel for straightforward descriptive statistics to be applied to them, yielding such
information as the average number of articles using primary sources, how this differed
between journals, and whether trends in such usage could be seen over the decade under review.
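The kind of query-then-summarize step described here, such as the share of articles using primary sources per year, can be sketched as follows. The author used Access queries and Excel; this Python version is only an assumed, equivalent illustration with made-up records.

```python
from collections import defaultdict

# Hypothetical records mimicking the database fields described above.
articles = [
    {"year": 2007, "journal": "TPV", "primary_data": False},
    {"year": 2007, "journal": "SCT", "primary_data": True},
    {"year": 2008, "journal": "TPV", "primary_data": True},
    {"year": 2008, "journal": "CST", "primary_data": True},
]

def primary_share_by_year(records):
    """Share of articles using primary data, per publication year."""
    totals, primary = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["year"]] += 1
        primary[rec["year"]] += rec["primary_data"]  # True counts as 1
    return {year: primary[year] / totals[year] for year in totals}

print(primary_share_by_year(articles))  # → {2007: 0.5, 2008: 1.0}
```

Grouping by journal instead of year gives the between-journal comparison mentioned in the text; the descriptive statistics involved are no more complex than counts and proportions.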
Defining primary sources
Primary sources typically provide information based on the direct observation of, or
participation in, a certain subject, whereas secondary sources relay such information
indirectly.46 The transcript of a speech by a terrorist leader is a primary source; the
newspaper article that publishes a journalist’s interpretation of the speech’s key points is
a secondary source. Yet the difference is not always so clear-cut, as much depends on the
type of question being asked. Interviews with family members of terrorists are a primary
source when the researcher is concerned with their experiences, thoughts, and emotions,
but a secondary source when the questions are about subjects that the interviewees have
no direct experience with. Conversely, newspaper articles are considered a secondary
source of information about terrorism and terrorists, but become a primary source
when the research focuses on how media reports on terrorism.
On the use of statistics
Following Silke’s example, two types of statistics were recognized: descriptive and inferential. The first describes a phenomenon; the second makes inferences about the degree to
which a variable is linked to particular outcomes. For instance, descriptive statistics might
provide the age or socioeconomic background of a group of terrorists. Articles using
inferential statistics might study the links between economic development and a state’s
vulnerability to political violence. Ascertaining these differences was usually, but not
always, straightforward. Another challenge was assessing whether the statistical analyses
were carried out by the author(s) or reproduced from an outside source. Only in the first
case was the article considered to have used statistics; citing the outcomes of opinion polls
conducted by others, for instance, or reproducing a graph made elsewhere, was not
considered sufficient to qualify as such. The author(s) had to either collect and analyze
the data themselves, or re-purpose or re-analyze existing datasets.
Inclusion criteria
Both with regard to the use of primary data and statistics, low inclusion thresholds were
utilized. The use of any first-hand sources or statistical analyses based on information
collected and/or analyzed by the author, sufficed to tick the respective boxes in the dataset.
As a result, authors who conducted years of field work in remote locations and authors
who held a 10-minute telephone interview are both seen as using primary data. Similarly,
authors who created an extensive dataset and carried out complicated statistical analyses
are grouped together with authors who provide a relatively straightforward overview of
the frequency with which a particular search-term appears on Google. This was done to
avoid making subjective judgements on what primary data or which types of statistical
analyses were “good” or “extensive” enough; a process bound to introduce considerable
bias into the results.
Figure 1 illustrates the total output over the period 2007–2016. The introduction of seven
new journals dedicated to the study of terrorism heralded a marked increase in output; up
from 143 articles in 2007 to a yearly average of 367 between 2008 and 2016. Total output
levels appear relatively stable after 2008, with research articles and research notes constituting a steadily increasing majority of the items published, and book reviews occupying
third place. With regard to “market share,” it is notable that four of the nine journals
account for 74.4% of all the articles published. These are the field’s established “classics”
TPV (2 …