Analytical Journalism: Credibility of Computer-Assisted Reporting
Justin Mayo, M.A.
Glenn Leshner, Ph.D.
All correspondence to Glenn Leshner:
School of Journalism
University of Missouri
283 Gannett Hall
Columbia, MO 65211
Running head: Analytical journalism
Mayo is a reporter for the Seattle Times. Leshner is an assistant professor.
Paper submitted to the Newspaper Division of the Association for Education in
Journalism and Mass Communication, April 1, 1999
Abstract
An experiment tested readers' perceptions of newspaper stories that used one of
three different types of evidence to support the reporter's claims in the
stories: data the reporter independently gathered and analyzed via databases
(computer-assisted reporting), data from official or expert sources, and
anecdotal evidence. Participants read three news stories on different topics
with one of the three types of evidence in each story. After reading each
story, participants rated the story's credibility, newsworthiness, liking,
quality, understanding, and readability. The computer-assisted reporting
stories were rated as credible, newsworthy, and understandable as the official
and anecdotal stories, but were liked less and were rated poorer in quality than
the official stories and were liked less and were rated harder to read than the
anecdotal stories. Implications for computer-assisted reporting are discussed.
It is no longer enough to report the fact truthfully.
It is now necessary to report the truth about the fact.
- Robert Hutchins report on press freedom, 1947
Computer-assisted reporting, or CAR, began some 10 years ago and has been
gaining acceptance and momentum ever since. Through database and statistical
analysis, good CAR reporters use raw data to support their own conclusions.
Many reporters find this development to be ideal-they do not have to be as
dependent on official and expert sources, press releases, government statistics,
or anecdotal evidence to tell their stories. Now, in addition to such sources,
reporters can rely on their own evidence from independent data analysis,
allowing them to become more of a participant in the news-making process rather
than a mere observer. This new approach is challenging the traditional
philosophy of detached, objective journalism. Indeed, some scholars believe this
transformation is a struggle "about the soul of journalism in the 1990s"
(Weinberg, 1996, p. 4). If the media in general act as a searchlight on society,
then CAR gives the individual reporter a high-powered, hand-held flashlight.
Although these types of data-intense stories might have advantages from a
journalist's perspective, very little research has considered the effects this
type of story might have on the audience. Using an experimental design, this
study sought to measure readers' perceptions of computer-assisted stories,
specifically those in which the newspaper reporter independently analyzed data
and drew conclusions based on such findings. Will a move away from the
traditional objective-style reporting strengthen or weaken the credibility of
journalism? In addition, this study will measure participants' story ratings of
other outcomes important to journalists: newsworthiness, liking, quality,
readability, and understanding of computer-assisted stories.
Supporters of such techniques say that reporters can provide independent,
outside viewpoints that are not vested in a particular controversy. Critics, on
the other hand, fear that journalism may be overstepping its bounds by
advocating a position and discarding traditional objectivity. Thus, while some
view such proactive journalism as a means to abate declining newspaper
circulation, others suggest abandoning objectivity will further reduce newspaper
credibility.
The literature reviewed here will briefly examine the argument of objectivity
in reporting and its influence in the development of reporter-generated data.
Then, it will address the development of computer-assisted reporting and the
issues most commonly discussed in CAR critiques.
Most media scholars point to the beginnings of objectivity as a direct reaction
to the rampant factionalism of the mid-1800s and the subsequent advent of the penny
press (Glasser & Ettema, 1989; Glasser, 1992; Miraldi, 1990). Oliver Wendell
Holmes popularized the notion of free expression in his 1919 dissenting opinion
where he borrowed John Milton's "marketplace of ideas" metaphor (Glasser, 1992).
Milton (1951) argued that all sides of an issue must be heard in order to find
the truth because falsehoods will be exposed in the "marketplace". Objective
reporting, interpreted by publishers and editors as neutral, unbiased reporting,
became the norm by the early 1900s (Glasser, 1992).
Specific rules and conventions regarding journalism as a profession arose from
the principle of objectivity. Tuchman (1972) described how journalists rely on
objectivity as a "strategic ritual" and a defense against criticism (p. 661).
Tuchman (1978) also found that most news organizations strategically identify
centralized sources that are accepted as "appropriate sites at which information
should be gathered" (p. 211). Similarly, Gans (1979) showed that journalists use
a predictable set of criteria in deciding what is news by reporting the facts as
gathered from sources of recognized authority. Studying this system of
newsgathering and how news content is structured, Schudson (1978) argued that
"the process of news gathering itself constructs an image of reality that
reinforces official viewpoints" (p. 185).
Many scholars have criticized objective journalism because it maintains the
status quo and encourages the establishment to dominate the marketplace of ideas
(Bennett, 1990; Hallin, 1986). Bennett (1990) argued that, like a strategic
ritual, reporters became dependent on government officials and simply presented
elite debate in an unqualified, objective manner. Similarly, Hallin (1986)
argued that most journalists operate within what he called the "sphere of
legitimate controversy" (p. 50). In this paradigm, journalists who operate in an
objective manner are simply supporting consensus values. In such a culture of
objectivity, critics argue that the role of the reporter has been stifled.
The rise of investigative journalism and the growing adversarial press-major
challenges to objectivity-began in the 1970s with the Watergate scandal
uncovered by The Washington Post (Glasser & Ettema, 1989). Miraldi (1989)
chronicled one of these attempts by examining a nursing home scandal uncovered
by the New York Times. Miraldi, however, found that the reporter could never
directly insert his judgment into stories even if it were based on truth because
"objectivity forced him to cloak his opinions behind official sources, when he
could find them" (p. 3).
Despite such obstacles to investigative reporters, the popularity of the method
has continued to grow. Some say investigative reporting reached a new peak in
1991 when the Philadelphia Inquirer published the exposé "America: What Went
Wrong?" by Barlett and Steele, which tried to explain the economic and social
breakdown of the 1980s. Weinberg (1997) believes the work is important not only
for its subject matter but also because the authors unapologetically threw aside
the shackles of objectivity and drew their own conclusions from their own
research. The reporters did not rely on official debate or official sources.
Rather, they saw an important story that was not being told and drew their
conclusions based upon their investigation of the evidence.
Of course, such a radical departure from objectivity did not come without
criticism. Jack Fuller, a Pulitzer Prize winner and publisher of the Chicago
Tribune, said that the "marshaling of evidence followed the conviction rather
than the conviction arising from the proof" (Weinberg, 1997, p. 9). Weinberg
(1997) contended the investigation duo did not begin with a conviction but
rather arrived at one through the "relentless presentation of evidence" (p. 10).
Whichever came first, the fact remains that Barlett and Steele did inject their
own analysis and interpretation of the facts-a direction in which journalism
seems to be heading.
One of the methods used by Barlett and Steele to document their story was
computer-assisted reporting, by analyzing 70 years of income tax data from the
IRS. This type of extensive, independent data analysis has opened up new
opportunities for investigative reporters (Weinberg, 1997). Instead of focusing
on isolated abuses of power and wrong-doing, today's investigative reporters
"look for wide-ranging failures of public policy, government neglect, corporate
scheming and threats to democracy," using computer-assisted analysis to help
them (Aucoin, 1993, p. 23).
In a basic sense, computer-assisted reporting can simply mean using a computer
to help report a story: on-line research, electronic morgue searches, etc. At
the advanced level, CAR means crunching raw data with a spreadsheet program or
database manager, performing statistical analyses and using mapping software to
show patterns. In fact, many authors have begun to use the term
"computer-assisted investigative reporting" to describe data analysis, as
opposed to the electronic gathering of background information (Friend, 1994, p.
63). For the purposes of this study, CAR will specifically refer to the
definition involving data analysis.
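To make the data-analysis sense of CAR concrete, here is a minimal sketch of
the technique described above: aggregating a raw file of records until a
pattern emerges. The file name and column names are invented for illustration
and do not come from any actual story.

```python
# Minimal CAR-style data analysis: compute mortgage denial rates by
# neighborhood from raw records. File and column names are hypothetical.
import csv
from collections import defaultdict

applications = defaultdict(int)
denials = defaultdict(int)

with open("mortgage_applications.csv", newline="") as f:
    for row in csv.DictReader(f):            # one row per loan application
        hood = row["neighborhood"]
        applications[hood] += 1
        if row["decision"] == "denied":
            denials[hood] += 1

# Rank neighborhoods by denial rate -- the kind of reporter-generated
# evidence a CAR story can cite directly.
for hood in sorted(applications, key=lambda h: denials[h] / applications[h],
                   reverse=True):
    rate = denials[hood] / applications[hood]
    print(f"{hood}: {rate:.1%} of {applications[hood]} applications denied")
```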
One of the benefits of using CAR is that investigative reporters no longer need
to be fearful of stating their claims. By using raw data and valid analytical
techniques, reporters can use their own evidence to support their claims.
Friend (1994) wrote, "data analysis allows a neutral approach to raw information
... By dealing directly with the facts, unadorned, reporters can bypass
interpreted statistics and get beyond relying on anecdotal stories" (p. 65).
Reisner (1995) claimed that CAR alters the traditional way of reporting: "The
technique, in a way, turns traditional reporting on its head: Normally,
reporters collect anecdotes and from them deduce trends. CAR lets reporters
find trends, then collect the anecdotes to illustrate them" (p. 47).
This combination of CAR coupled with the investigative mindset has dramatically
enhanced the power of the individual reporter. In many instances, reporters no
longer have to rely on someone else's numbers and someone else's interpretation
of those numbers; they have the power to generate scientific evidence
themselves. Meyer promoted a type of journalism where the reporter evaluates
and synthesizes the information using the rules of science: theory-based
investigation, hypothesis testing and replicability (Meyer, 1973, 1991; Meyer &
Jurgensen, 1992). As long as reporters understand statistics and data analysis,
Meyer argued, they should be able to draw conclusions from their
investigation-the ones that do will have a "value-added edge" (Meyer &
Jurgensen, 1992, p. 269).
Indeed, many journalists unwittingly practice the scientific method already
(Stocking & LaMarca, 1990). Through qualitative interviews, the authors found
that nearly 80 percent of reporters' stories began with hypotheses. However, a
majority of these were implicit, which prompted the authors to conclude that
"journalists, unlike scientists, do not routinely pose formal hypotheses as part
of their method; instead, they appear to make formal speculations for some of
their stories but not for others" (Stocking & LaMarca, 1990, p. 300).
Meyer (1991) argued that this is precisely the reason journalism needs to adopt
and acknowledge the scientific method in reporting. Instead of everyone playing
the part of the objective journalist while at the same time harboring
preconceived notions, reporters should openly declare their hypotheses, methods
and evidence so that they can be held up to rigorous scrutiny by others. In
fact, Meyer and Jurgensen (1992) argued this would be even more objective than
the present system: "[T]he discipline of forming a falsifiable hypothesis and
then testing it is actually a way of preserving objectivity. When the test is
operationalized, the hypothesis is made to stand or fall on the basis of an
objective standard" (p. 270). With the power of sophisticated computer software
and with the increasing availability of databases, the scientific method and
journalism seem to be a natural fit.
Weinberg (1997) calls this "expert journalism," a term he credits to Lou
Ureneck, the executive editor of the Portland (Maine) Press Herald. In 1991,
the Press Herald began experimenting with expert journalism under the guidance
of Ureneck. The paper assigned a business reporter to take an in-depth look
into the failing state workers' compensation system. Ureneck (1992) said the
system had been a disaster for nearly a decade and that his paper had done a
poor job of covering the issue-the traditional hard news, episodic coverage had
failed. The reporter was immersed in the issue and became an expert who Ureneck
believed was qualified to make judgments on the story.
Since this first attempt, the Press Herald developed an "Expert Reporting
Coaching Sheet" and has devoted more time and resources to its success. Ureneck
(1994) said the reporters "state their conclusions up top without attribution
from officials or authorities and rely on the body of the story to develop the
evidence behind the conclusions" (p. 7). He noted that the evidence to support
the conclusions often comes from original research into database records and
cannot be attributed to an official. Ureneck (1994) described this type of
journalism as an "eclectic mix" of existing forms, which has one goal-"to cut
through the rhetoric and show readers where the weight of the evidence lies" (p.
There has been criticism of Ureneck's approach (Newman, 1993). Many point to
what happened after the workers' compensation series ran. The Press Herald
published a front-page editorial that called for the system to be trashed and
for the establishment of a blue-ribbon commission to rewrite the law.
Additionally, and more controversially, the newspaper brought together all the
parties in the conflict to "explain [the paper's] editorial position" (Ureneck,
1992, p. 6). Critics saw this as advocacy journalism where the paper was making
and guiding the news. A reporter from a competing paper noted at the time:
"Their reporting has shown a miraculous lack of curiosity about weaknesses in
the blue ribbon report" (Newman, 1993, p. 13). Yet Ureneck (1994) is confident
this journalistic style will benefit the press's image and believes more
newspapers will use some form of analytical journalism to engage the reader.
The Audience's Perspective
The spread of such analytical journalism is fueled by the rapidly increasing
use of computers to analyze data. A survey of 192 daily newspapers, which
focused specifically on sophisticated data analysis, found more than half the
newspaper readers in the United States were getting papers that did some kind of
computer-assisted reporting (Friend, 1994). A more extensive survey
subsequently found that two-thirds of large daily papers reported having some
sort of CAR desk in the newsroom (Garrison, 1996).
In each year from 1989 to 1996, at least one Pulitzer Prize-winning reporter
employed computer analysis to uncover such stories as racism in mortgage loans,
arson fraud, medical malpractice, government waste and lax building codes
(Ciotta, 1996). Some in management see CAR as a way to possibly stem their
ever-declining circulation by filling a niche and providing "high-impact"
stories. Nieman Foundation curator Bill Kovach predicted CAR will revive what
he calls a "moribund newspaper journalism" (Fitzgerald, 1992, p. 15).
All this rests upon the assumption that the readers will respond favorably to
this kind of reporting. One of the worries Ureneck (1994) raised is how such
journalism will "affect the credibility of the newspaper among its readers" (p.
9). Weinberg (1996) added that some critics believe the credibility of all
journalists will decline if objectivity is completely abandoned. Recent polling
data from the Pew Research Center for the People and the Press indicate that the
public's dissatisfaction with the press has reached a new high. The percentage
of respondents who said they believed that "news organizations get the facts
straight" dropped from 55 percent in 1985 to 37 percent in 1997; the percentage
of people who thought reports were "often inaccurate" rose from 34 percent to 56
percent in the same period (Peterson, 1997).
The importance of audience perception of news credibility has kept researchers
studying the issue for more than five decades. One of the major problems with
studying credibility, however, is defining and measuring the concept. By the
1960s, several researchers concluded that credibility was a multidimensional
concept, although each study identified different dimensions (Gaziano & McGrath,
1986). Two main components, developed by Hovland, Janis, and Kelley (1953),
are "trustworthiness" and "expertise" of the source. Gaziano and McGrath (1986)
attempted to pinpoint the various dimensions of credibility and find out which
ones grouped together. Using factor analysis, the authors showed that 12 items
loaded together: fairness, bias, completeness, accuracy, privacy, concern about
reader's interest, concern about community well-being, separate fact and
opinion, trustworthiness, concern about profits, opinion, and training of
reporters.
Meyer (1988) examined the Gaziano and McGrath study and developed a more
streamlined index to measure newspaper credibility. He was looking for a basic
measure of credibility, which he simply defined as "whether a newspaper is
believed by its readers" (p. 573). He was able to trim the list of items down
to five, which subsequently provided reliable results. These items were
fairness, bias, completeness, accuracy, and trustworthiness, and were tested as
semantic differential scales. Meyer's index serves as the measure of story
credibility in this study.
Although no studies have tried to compare audience perceptions of
computer-assisted journalism to more traditional journalism forms, Weaver and
Daniels (1992) measured public opinion of investigative reporting through
nationwide surveys. The authors found that a majority of the public considered
investigative reporting to be "somewhat important" during the 1980s (p. 146).
But Weaver and Daniels also found that certain techniques - hidden cameras,
anonymous sources, paying sources, etc.-lowered the credibility of the reporters
and of the news media employing them. From this study, it appears that the
audience values the independence of investigative reporting, depending on the
methods used.
Iyengar and Kinder's (1987) study of television news offered some insight. They
showed that the framing of a story matters and that the audience perceives
stories differently based on the way evidence is presented. Iyengar and Kinder
attempted to measure the "vividness bias," which states that people are often
persuaded by reporting that relies on emotional, personal accounts to tell a
story (p. 35). However, instead of persuading the audience by using vivid
accounts, the authors found that the audience can differentiate between
anecdotal (vivid) and analytical (pallid) evidence. Indeed, it seems that
relying too much on a "good story" based on compelling but anecdotal details
about an individual may reduce a story's believability and may result in a less
persuasive story.
Another study indicated that readers tend to consider the content of the
message rather than the source of the information (Austin & Dong, 1994). Austin
and Dong suggested "the general public makes little distinction among sources.
Any newspaper is simply a newspaper, and the story stands on its own merits" (p.
978). Although this study focused on the reputations of particular newspapers
instead of on the credibility of journalists in general, the results indicated
that the audience perceives a story to be credible based on the presentation of
evidence and not on who is performing the analysis. Austin and Dong (1994)
viewed their results as alarming because at least some people "may be analyzing
messages without much thought to the reputation of the source" (p. 979).
However, for analytical journalists this could be positive-if the evidence and
analysis are convincing, the reader may find the reporters' conclusions as
credible as those of official sources.
The dearth of research on audiences' perceptions of computer-assisted reporting
led us to ask how such perceptions of reporting differ from perceptions of
traditional reporting. Specifically, this study asks whether
computer-assisted reporting affects how readers judge a story's credibility,
newsworthiness, liking, quality, readability, and understanding.
Method

The study was a within-subjects experiment in which all participants read three
newspaper stories: one with computer-assisted reporting data, one with official
data, and one with anecdotal data. Thirty-three undergraduate students enrolled
in an introductory journalism course at a large midwestern university
volunteered to participate in this study. There were 15 men and 18 women.
Credibility was defined as a five-item index, adapted from a previous study
(Meyer, 1988). The five items used to measure credibility were bipolar pairs on
seven-point scales: fair/unfair, biased/unbiased, accurate/inaccurate, can be
trusted/can't be trusted, and tells the whole story/doesn't tell the whole story
(α = .85). This measurement tool was appropriate for the study because the
concept deals directly with believability-the central issue this experiment
addresses.
In addition to credibility, the study also included the following dependent
variables: newsworthiness, liking, quality, readability, and understanding.
Newsworthiness was indexed by five bipolar pairs on seven-point scales:
important/unimportant, interesting/uninteresting, informative/uninformative,
serious/not serious, and disturbing/not disturbing (α = .81). Story liking,
quality, readability, and understanding were measured by single questions on
seven-point response scales. Participants were asked to rate how much they
liked each story on a scale anchored by "very little" and "very much." They were
also asked to rate the overall quality on a similar response scale, anchored by
"poor quality" and "excellent quality." For readability, participants were
asked to respond to "How difficult or easy was the story to read?" on a scale,
anchored by "very difficult" and "very easy." Finally, participants were asked
to respond to "How difficult or easy was the story to understand?" on a scale
anchored by "very difficult" and "very easy."
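As an illustration of how these measures can be scored, the sketch below
averages the five credibility items into one index score per participant and
computes Cronbach's alpha from its standard formula. The ratings shown are
invented for demonstration; only the five-item, seven-point structure comes
from the study.

```python
# Score a five-item semantic-differential index and check its internal
# consistency. The ratings below are hypothetical.
import numpy as np

# rows = participants, columns = the five 7-point credibility items
# (fair, unbiased, accurate, trustworthy, tells the whole story)
ratings = np.array([
    [6, 5, 6, 5, 4],
    [4, 4, 5, 4, 3],
    [7, 6, 6, 6, 5],
    [3, 4, 3, 4, 4],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """k/(k-1) * (1 - sum of item variances / variance of item totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

credibility_index = ratings.mean(axis=1)     # one index score per participant
print("index scores:", credibility_index)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```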
The independent variable, story type, was the type of evidence used to support
a story's conclusions. For our purposes, story type had three levels: 1)
reporter-generated evidence, 2) authoritative evidence, and 3) anecdotal
evidence. Stories using reporter-generated evidence (computer-assisted
reporting) based conclusions on independent data analysis from the reporter.
Stories using authoritative evidence based conclusions on official and expert
source analysis. Stories with anecdotal evidence based conclusions on
individual case histories and source opinions, not on systematically obtained
data.
The participants were given three stories to read-one for each of the three
levels of the independent variable. The dependent variables were measured for
each kind of story.
The stories used in the experiment were based on real-world, computer-assisted
reporting examples but were extensively rewritten to suit the requirements of
the study. A total of three different story topics were included to avoid
message-specific effects. The three topics were: 1) racial discrimination in
home mortgage lending, 2) inadequate restaurant inspections, and 3) medically
unnecessary Cesarean sections. Each of these three story topics had three
different versions, based upon the different levels of the independent variable:
one with reporter-generated evidence, one with authoritative evidence, and one
with anecdotal evidence. Thus, a total of nine distinct stories were used for
the experiment.
For example, the story of racial discrimination in home mortgage lending, or
bank redlining, had three different versions. The CAR version with
reporter-generated evidence explicitly stated that the data used to support the
story's conclusion were analyzed by the newspaper using a database of home
mortgages. The authoritative version arrived at the same conclusion using the
same statistical evidence, but the analysis and the source of the evidence were
attributed to watchdog groups and officials. The anecdotal version also reached
the same conclusion but only supplied evidence of personal case histories to
highlight the problem of redlining. All three versions of the story had exactly
the same headline, the same lead paragraph and arrived at the same conclusion
(i.e., "Banks are redlining in minority neighborhoods of the city."). All other
aspects of the stories were kept identical in hopes of controlling for unwanted
confounds.
Each of the nine stories was designed and typeset as though it had appeared
in an actual newspaper. To enhance this effect, the stories were printed on
grainy paper and then photocopied on white paper as if they were originally cut
from newsprint. All of the different versions were approximately the same
length (about 1,100 words), and all fit onto one 8.5- by 11-inch photocopied
page. The stories contained no masthead or advertisements but did have bylines
that included the names of the fictitious newspapers: Dayton County Register,
The Courier-Times Journal and Bayview Daily News.
Prior to the experiment, three university journalism professors reviewed all
nine versions of the stories in order to ensure that they resembled and
represented legitimate, professional work, which could have been published in
the mainstream media.
Participants received three different stories to read and a corresponding
questionnaire for each story. Before the experiment began, the participants
were told the stories had been previously published in actual newspapers. They
were instructed to read the stories in the order they received them and to
answer the questions after they finished reading each story. Participants took
between 30 and 40 minutes to complete the experiment.
The stories were distributed so that each participant received three different
versions of three story topics. Therefore, each participant read one story
about redlining, one about restaurant safety, and one about Cesarean sections.
One of these stories was the CAR version, one was the anecdotal version, and one
was the official version. The story order was controlled using three random
orders so that the different topics and the different versions were not
presented in the same order.
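The rotation described above can be sketched as a Latin-square assignment, as
below. The paper reports only that three random orders were used, so this
pairing is an assumed implementation; it guarantees each packet contains every
topic once and every version once, with each topic appearing in every version
across the three packets.

```python
# Assumed counterbalancing sketch: three topics crossed with three evidence
# versions via a 3x3 Latin square, cycled over 33 participants.
topics = ["redlining", "restaurant inspections", "Cesarean sections"]
versions = ["CAR", "official", "anecdotal"]

packets = [
    [(topics[i], versions[(i + shift) % 3]) for i in range(3)]
    for shift in range(3)                    # three rotations of the square
]

for participant in range(33):
    packet = packets[participant % 3]        # cycle participants through packets
    print(f"participant {participant + 1}: {packet}")
```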
The research questions for each of the six dependent variables sought to compare
the effects of stories that contain data from computer-assisted reporting as
evidence against the two other story types-those with anecdotal evidence and
those with official data evidence. Thus, the data were analyzed using planned
contrasts in a repeated measures analysis of variance design in which
computer-assisted stories were compared to each of the other two story
types-official and anecdotal-for each of the six dependent variables.
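Because each planned contrast compares two within-subjects means with a single
degree of freedom, it is equivalent to a paired comparison, so F(1, 32) equals
the square of the corresponding paired t statistic. The sketch below runs that
analysis on invented ratings for 33 readers; none of the numbers reproduce the
study's data.

```python
# Planned within-subjects contrasts of CAR against each other story type,
# computed as paired t-tests (F(1,32) = t**2). All ratings are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)               # stand-in data for 33 readers
car = rng.normal(4.8, 1.0, 33)
official = rng.normal(5.1, 1.0, 33)
anecdotal = rng.normal(4.9, 1.0, 33)

for label, other in [("official", official), ("anecdotal", anecdotal)]:
    t, p = stats.ttest_rel(car, other)
    print(f"CAR vs. {label}: F(1,32) = {t**2:.2f}, p = {p:.3f}")
```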
Results

Table 1 shows the results of the planned contrasts (means accompanied by a
superscript are significantly different from the CAR version stories) and the
multivariate F-test for each of the six dependent variables across the three
story types.
For the credibility index, the means of all three story types showed no
significant differences (F(2,64) = 1.18, p = n.s.) nor did the CAR stories
differ from the other two story types (CAR M = 4.75 vs. anecdotal M = 4.87,
F(1,32) = 0.17, p = n.s.; CAR vs. official M = 5.11, F(1,32) = 1.58, p = n.s.).
These data suggest that readers perceived no difference in credibility among the
types of evidence provided in each of the stories. Thus, the primary research
question of this study was answered in the negative.
A similar pattern of no significance was found for the newsworthiness index (F
(2,64) = 0.26, p = n.s.; CAR M = 5.66 vs. anecdotal M = 5.52, F(1,32) = 0.44, p =
n.s.; CAR vs. official M = 5.67, F(1,32) = 0.00, p = n.s.). These data suggest
that readers perceived no difference in newsworthiness among the types of
evidence provided in each of the stories. Nor was there a significant effect
of story type on the reported ease of understanding the story. Although the
mean for CAR on understanding was lower than the means for both anecdotal and
official, the differences did not reach the level of significance (F (2,64) =
1.00, p = n.s.; CAR M = 5.12 vs. anecdotal M = 5.67, F(1,32) = 1.74, p = n.s.;
CAR vs. official M = 5.36, F(1,32) = 0.36, p = n.s.).
Table 1. Mean ratings of news stories as a function of type of evidence used in
the stories (N = 33)

Dependent variable    CAR      Anecdotal   Official
Credibility           4.75     4.87        5.11
Newsworthiness        5.66     5.52        5.67
Understanding         5.12     5.67        5.36
Liking                3.88     4.76 a      4.79 a
Quality               4.58     5.03        5.18 b
Readability           5.06     6.15 c      --

Note: Cell entries are means. Means accompanied by a superscript are
significantly different from the CAR stories. Superscripts a and b are
significant at the p < .05 level. Superscript c is significant at the p < .001
level.
There were, however, three sets of significant findings in which the means for
the CAR stories differed from the means for the anecdotal and official story
types. First, participants rated their liking for the CAR stories significantly
lower than their liking for the other two story types (CAR M = 3.88 vs.
anecdotal M = 4.76, F(1,32) = 4.00, p < .05, eta-squared = .11; CAR vs. official
M = 4.79, F(1,32) = 5.79, p < .05, eta-squared = .15).
Second, participants rated the quality of the CAR story significantly lower than
they did for the official story (CAR M = 4.58 vs. official M = 5.18, F(1,32) =
4.86, p < .05, eta-squared = .13). Although the CAR stories were also rated
lower than the anecdotal stories (M = 5.03), the smaller mean difference and the
relatively large variances of the two reduced the likelihood of
observing a significant difference.
In addition to liking and quality, the other dependent variable that produced
significant results was readability. Readability, the extent to which
participants rated how easy the story was to read, was significantly
lower for CAR stories than for anecdotal stories (CAR M = 5.06 vs. anecdotal M =
6.15, F(1,32) = 15.71, p < .001, eta-squared = .33). The CAR and official
stories did not differ significantly for readability.
Discussion

This study attempted to explore how readers perceive computer-assisted reporting
in which journalists generate statistical evidence to support the claims made in
their stories. Such reporting is a departure from traditional journalism
techniques of relying on either official and/or expert sources or on anecdotal
sources to supply evidence and data. Instead of simply disseminating the claims
and counterclaims from "acceptable" sources (the "he said/she said" kind of
story), CAR journalists are entering the fray and can add their voice to the
discussion through original, independent research and data analysis. This has
been a controversial step for journalism as many critics believe that moving
away from objective reporting will further erode the public's trust and
satisfaction with the news media. The experiment, however, provides mixed
evidence on this criticism. After reading three versions (one CAR,
one official and one anecdotal) for three different stories, the participants
found no difference in credibility among the stories. However, participants
liked the CAR stories less than the other two story types, and rated the CAR
stories as lower in quality than the official versions. CAR stories were also
rated harder to read than the anecdotal versions.
This is not to say that CAR stories are globally perceived more negatively than
traditional reporting. Even if the participants recognized the different
sources of evidence in the stories, they simply did not perceive the CAR stories
to be any less credible or newsworthy than the other story types. The readers'
news judgment could be based exclusively on the content of the article and not
on the supplier of that evidence, whether it be a journalist, an official, or a
common individual. If the results of the experiment are interpreted in this
way, CAR journalists should have receptive audiences for their work-as long as
their evidence and analyses are cogent, compelling, and most of all, clear. In
other words, the participants in this experiment found the CAR versions of the
stories to be at least as credible as the official and anecdotal
versions. Participants may have accepted the evidence that was provided based
on its merits, and in doing so, accepted a type of journalism with a more
proactive role in the news-making process. This interpretation would be similar
to the result found in the Austin and Dong (1994) study mentioned earlier.
Their research concluded that readers did not distinguish among sources but
rather judged stories based on the quality of the content (Austin & Dong, 1994).
Although the perceived credibility and newsworthiness of computer-assisted
reporting did not appear different from more traditional forms of evidence, the
experiment also showed that the CAR stories were generally disliked and rated
lower in quality than the other versions. These differences were not only
between the CAR stories and the anecdotal stories (liking) but also between the
CAR and the official versions (liking and quality). One possible explanation
for these findings is that even though the participants found the CAR evidence
credible (at least as credible as the other story types), they might not think
that such proactive reporting methods constitute traditional, quality
journalism. If the general cynicism toward the news media is reflected in
liking of the story, readers might conclude that a story with too much
involvement by the journalist is one of "poor" quality and is also liked less.
Such a relationship between liking and quality was found in this study as the
two variables were significantly correlated (r = 0.55, p < .01).
CAR stories were also rated harder to read than the anecdotal stories. For CAR
journalists, these results indicate the importance of including a variety of
data sources to buttress their claims. If the story contained expert and
official sources in addition to the reporter's own analyses, the reporter might
appear more neutral and less proactive to the reader.
Additionally, CAR journalism should not rely solely on the statistical evidence
to tell the whole story. As seen in the experiment, humanizing the evidence is
important to the audience and makes the story more readable. CAR journalists
need to be wary of overloading the story with their independently collected
data and should include additional sources, both supporting and
opposing their own evidence. The reporter's voice has a place in the discussion
but it should not be the only one heard.
References

Aucoin, J. (1993). The new investigative journalism. Writers Digest, 73, 22-27.
Austin, E. W., & Dong, Q. (1994). Source v. content effects on judgments of news
believability. Journalism Quarterly, 71, 973-983.
Bennett, W. (1990). Toward a theory of press-state relations in the United
States. Journal of Communication, 40(2), 103-125.
Ciotta, R. (1996). Baby you should drive this CAR. American Journalism Review,
Fitzgerald, M. (1992). Wonked out? Editor & Publisher, 125, 15-17.
Friend, C. (1994). Daily newspaper use of computers to analyze data. Newspaper
Research Journal, 15, 63-72.
Garrison, B. (1996). Successful strategies for computer-assisted reporting.
Mahwah, NJ: Lawrence Erlbaum Associates.
Gans, H. (1979). Deciding what's news. New York: Pantheon.
Gaziano, C., & McGrath, K. (1986). Measuring the concept of credibility.
Journalism Quarterly, 63, 451-462.
Glasser, T. L. (1992). Objectivity and news bias. In E. Cohen (Ed.),
Philosophical issues in journalism (pp. 176-183). New York: Oxford University
Press.
Glasser, T. L., & Ettema, J. S. (1989). Investigative journalism and the moral
order. Critical Studies in Mass Communication, 6(1), 1-20.
Hallin, D. (1986). The uncensored war: The media and Vietnam. Berkeley:
University of California Press.
Highton, J. (1990). Objectivity gets in the way of the truth. Masthead, 42,
Hovland, C., Janis, I., & Kelley, H. (1953). Communication and persuasion. New
Haven: Yale University Press.
Iyengar, S., & Kinder, D. R. (1987). Vivid cases and lead stories. In News that
matters: Television and American opinion (pp. 34-46). Chicago: University of
Chicago Press.
Landau, G. (1992). Quantum leaps: computer journalism takes off. Columbia
Journalism Review, 31, 61-64.
Milton, J. (1951). Areopagitica and Of Education (G. H. Sabine, Ed.). Arlington
Heights, IL: Harlan Davidson.
Miraldi, R. (1989). Objectivity and the new muckraking: John Hess and the
nursing home scandal. Journalism Monographs, 115, 1-25.
Miraldi, R. (1990). Muckraking and objectivity: Journalism's colliding
traditions. New York: Greenwood Press.
Meyer, P. (1973). Precision journalism. Bloomington: Indiana University Press.
Meyer, P. (1988). Defining and measuring credibility of newspapers: Developing
an index. Journalism Quarterly, 65, 567-588.
Meyer, P. (1991). The new precision journalism. Bloomington: Indiana University
Press.
Meyer, P., & Jurgensen, K. (1992). After journalism. Journalism Quarterly, 69,
Newman, A. (1993). Is it opinion, or is it expertise? American Journalism
Review, 15, 12-14.
Peterson, I. (1997, March 21). $222.7 Million Libel Award In Case Against Dow
Jones. The New York Times, C6.
Reisner, N. (1995). On the beat. American Journalism Review, 17, 44-47.
Schudson, M. (1978). Discovering the news: A social history of American
newspapers. New York: Basic Books.
Stocking, S., & LaMarca, N. (1990). How journalists describe their stories:
Hypotheses and assumptions in news making. Journalism Quarterly, 67, 295-302.
Tuchman, G. (1972). Objectivity as strategic ritual: An examination of newsmen's
notions of objectivity. American Journal of Sociology, 77, 660-679.
Tuchman, G. (1978). Making news: A study in the construction of reality. New
York: Free Press.
Ureneck, L. (1992). When reporters do their homework, why not let them draw
conclusions, too? ASNE Bulletin, 41, 4-6.
Ureneck, L. (1994). Expert journalism. Nieman Reports, 48, 6-12.
Weaver, D., & Daniels, L. (1992). Public opinion on investigative reporting in
the 1980s. Journalism Quarterly, 69, 146-155.
Weinberg, S. (1996). Drawing conclusions from investigative reporting: Where
should journalists draw the line? IRE Journal, 19(6), 4-7.
Weinberg, S. (1997). The work of Barlett & Steele: Why is it so controversial?
IRE Journal, 20(1), 9-11.