Getting It Right:
Newsmaker Perceptions of
Accuracy and Credibility
Scott R. Maier
University of North Carolina at Chapel Hill
In a survey of news sources cited in a metropolitan daily newspaper, 58.1
percent of local news stories examined were reported in error. Factual errors
were most common but "errors of judgment" were considered most egregious. News
sources were forgiving of error, rating most inaccuracies minor and almost never
seeking corrections. Inaccuracies were found to affect source perceptions of
story credibility, but errors from any one story had no significant influence on
overall newspaper credibility.
Submitted to the Newspaper Division, Association for Education in Journalism and
Mass Communication, annual convention 1999
Direct inquiries to Scott Maier, 381 Wesley Court, Chapel Hill, NC 27516;
Telephone: (919) 967-4371; email: [log in to unmask]
Accuracy is the foundation of media credibility. If journalists can't get their
facts straight, how can readers trust the media to reliably convey and interpret
the news? A recent survey commissioned by the American Society of Newspaper
Editors (ASNE) found that even small errors feed public skepticism about a
newspaper's credibility. Other studies have shown that errors are quite
common (about one in two stories is found to be in error). As the Fourth
Estate's credibility sinks perilously low, it's no wonder that ASNE president
Edward Seaton declared that it's time to make "a fetish of accuracy."
Long before Joseph Pulitzer invoked his three rules of the profession -
"Accuracy! Accuracy! Accuracy!" - reporters and editors fretted over reporting a
story
correctly. Concern over newspaper credibility also is older than the First
Amendment. But the quest for accuracy and credibility took on a new sense of
urgency after an embarrassing rash of stories in the past year were retracted
because they lacked substantiation or had been fabricated out of whole cloth. In
a lead article published in the American Journalism Review, Judith Sheppard
wrote, "The public perception of the [media's] shortcomings has never been
darker, while the pressures - electronic competition, the need to be first with
a startling story, the need to 'tweak' a good story into greatness with a few
tricks from the novelist's bag - are at their greatest. At the same time, those
traditional sentinels of accuracy, [newspaper] editors and copy editors, are
expected to focus more than ever on presentation of stories, less on their
content."
While accuracy and credibility have become buzzwords in the news industry, the
research literature is surprisingly devoid of recent study of errors in
newspapers. A literature search indicates that it has been more than a decade
since the last published research assessing the rate and types of error made by
newspapers. Nearly 30 years ago, an accuracy investigator called attention to
the dearth of research examining the severity of errors made in newspapers, a
gap in the literature that persists. The link between accuracy and credibility
is intuitively appealing: If journalists can't get their facts straight, how can
readers trust what they read in the newspaper? While the historically high rate
of error in the press and its persistent credibility problems are well
documented, little research empirically examines the relationship between
accuracy and public confidence in newspapers.
This study seeks to address those deficiencies. In a case study of the Raleigh
News & Observer, a five-page questionnaire on news accuracy and credibility was
distributed to more than 1,000 newsmakers
cited in local news stories. The survey was based on a standard set of
questions posed by accuracy researchers over the past 60 years and the ASNE
"model" credibility questionnaire established in 1984. By combining these two
well-established surveys, the study provides a needed benchmark on newspaper
accuracy and credibility at the end of the millennium. Research questions posed
by this study include: What kinds of errors were most frequently made? What
kinds of errors were most troubling to newsmakers? How credible do newsmakers
find the press in which they are quoted? And, perhaps most importantly, the
study examines the relationship between accuracy and credibility. How do errors
affect newsmaker perceptions of the newspaper? What is the influence of
journalistic accuracy and media credibility on working relations with
newsmakers?
Background and literature review
More than 60 years ago, Mitchell Charnley of the University of Minnesota opened
a new line of inquiry in mass communication research when he reported the first
accuracy survey of newsmakers. In a mail survey sent to people cited in 1,000
news stories from three Minneapolis daily newspapers, recipients were asked to
identify typographical, factual and interpretive errors. Errors in "meaning," as
Charnley labeled the interpretive errors, were those in which the newsmaker
believes the story fails to give a fair representation of the subject. Charnley
found that about half of the stories were completely free of reported error. The
most common errors identified by news sources were those in meaning, names, and
title. The results of Charnley's study, published as "Preliminary Notes" in a
1936 edition of Journalism Quarterly, served as a benchmark and a model for a
series of accuracy surveys that followed.
Following Charnley's example, researchers have commonly classified factual
errors into the following categories: misquotes, spellings, names, ages, other
numbers, titles, addresses, other locations, time and dates. Some researchers
have created a factual/subjective dichotomy by expanding the list of "errors of
meaning" to include categories such as overemphasis, underemphasis, omission,
and misleading headlines. Research using factor analysis found support for
this two-category conception. Investigators also have surveyed news sources
to gauge the accuracy of science reporting, coverage of social issues,
and news magazines. Since the 1970s, accuracy audits also have become a
common management tool used by newspapers.
The proportion of stories with error has ranged from 40 percent to 60
percent, but most surveys have hewed remarkably close to the 50 percent mark
identified by Charnley in 1936. Noted Michael Singletary in a review of accuracy
research, "Numerous researchers since then have confirmed that about half of all
straight news stories contain some type of error." But newspapers apparently
receive a much more positive review from their accuracy surveys. In a review of
24 dailies that conducted accuracy checks, 15 percent reported that 90 percent
or more of their stories were error free. Investigators speculate that fear
of offending the newspaper and differences in research rigor may account for
these differences in accuracy rates. Researchers also have found that there
is often disagreement between sources and reporters over what is error. For
example, in an accuracy study of the San Jose Mercury-News, reporters agreed
with less than 25 percent of source claims of error. Source-reporter
agreement was only 5 percent for subjective errors such as issues of omission.
Indeed, there is considerable latitude in interpretation of what constitutes an
error.
Communication accuracy has been defined as "the extent to which a message
produces agreement between source and receiver,"
as "truthful reproduction of an event or activity of public interest,"
and by its converse, "the deviation of a reported observation of an event from
the 'reality' or the 'truth' of the event."
Despite these different perspectives, a fundamental consensus underpins
accuracy research: newsmakers by
definition have first-hand knowledge of the news story and therefore are well
positioned to be an informed arbiter of error. Accuracy from the newsmakers'
perspective is also important because they tend to be opinion leaders, the
segment of the population that plays a strong role in shaping public
opinion. Moreover, newsmakers are essential to the news-gathering process;
loss of trust can only impede the ability of journalists to do their work.
As with accuracy, issues of media credibility have long intrigued academic and
industry researchers. In fact, Charnley's investigation of news accuracy is
considered one of two "primary ancestors" of credibility research. (The
other prong, the relationship between media believability and persuasion, is
largely outside the scope of this paper). Noted research analyst Cecilie
Gaziano, "Credibility is an important issue to study because public inability to
believe the news media severely hampers the nation's ability to inform the
public, to monitor leaders and to govern. Decreased public trust also can lead
to diminished freedom of the press and can threaten the economic health of some
news organizations."
Numerous studies have shown that credibility is a multi-dimensional
concept. While investigators have varied widely in their definition of
credibility, perception of accuracy is a common component of much of their
research. One of the leading credibility studies was conducted in 1984, when
the American Society of Newspaper Editors commissioned a survey of 1,600 adults
regarding a wide range of perceptions of media credibility. The results,
providing a more comprehensive look at credibility than any previous
research, were widely disseminated among news managers as well as in the
academic press. ASNE proposed the survey serve as a "model" for newspapers to
follow with their own research. Gaziano and McGrath analyzed the ASNE survey
data and developed with the aid of factor analysis a 12-item additive index
found to be a coherent measure of credibility. Accuracy was one of the items
that loaded most strongly (that is, had the most predictive value). Meyer
conducted a validation study of the Gaziano-McGrath scales and proposed a more
narrowly defined believability index based on reader perceptions of whether the
news is accurate, fair, unbiased, can be trusted, and tells the whole story.
In a cross-validation study of these widely used credibility scales, West found
the Meyer five-item believability index was reliable and empirically valid while
the Gaziano-McGrath credibility index was reliable but appeared to measure
more than one underlying factor. West concluded that the Meyer modification
of the Gaziano-McGrath credibility index could set the standard for future
credibility research.
The 1984 ASNE credibility study was spawned by industry concern regarding
public distrust of the press. Polls had shown that the public's confidence in
newspapers to get "the facts straight" had diminished substantially and that
overall press credibility was in alarming decline. The ASNE survey provided
more reason for concern. The study concluded three-fourths of all adults have
some problems with the credibility of the press and slightly less than half
described their daily newspaper as accurate. Fourteen years later, ASNE
commissioned another series of surveys and focus groups to understand why public
confidence in the media has declined even further. "Major Finding #1" was that
the public sees too many factual errors and spelling or grammar mistakes in
newspapers.  "Even seemingly small errors feed the public skepticism about
a newspaper's credibility," the report said. However, the new ASNE study found
that admitting errors and running corrections helps, not hurts, newspaper
credibility. Of those who found errors, about one in five said these mistakes
are getting more frequent.  Perhaps even more sobering, the survey found
that those who have had actual experience with the news process are the most
critical of media credibility. The researchers concluded, "The closer someone
gets to the process, the more likely they are to feel the press chases and
overdramatizes sensational stories, and the more likely they are to be skeptics
about the accuracy of news reports (in particular) and journalists (in
general)."
Despite the impressive lineage of media accuracy and credibility research, the
literature offers little guidance on the relationship between error and
believability in newspapers. Furthermore, little is known about what kinds of
errors are most damaging to newspaper credibility. Most accuracy research has
focused on whether or not an error occurred without any attempt to distinguish
between small and large errors. In 1970, William Blankenburg urged accuracy
investigators to ask newsmakers to rate the seriousness of each distinct error,
a call that went largely unheeded.
Factual errors generally are found to be more common than subjective errors but
little is known about which kind is more important to newsmakers. Is it true, as
newspaper readers told ASNE pollsters, that even small mistakes exact a high
toll on credibility? To what extent do wronged newsmakers actually turn to
corrections to vent their frustration over errors found in the press? The
public's rising contempt for the media has been shown in many polls, but largely
unexplored is the extent that the phenomenon extends to newsmakers - the people
who deal most regularly with the press and on whom the press relies for the news.
Research questions include:
(1) What kinds of errors occur most frequently? What kinds of errors are
considered most important?
(2) What is the relationship between error rate and newspaper credibility? Do
small errors affect story credibility? Do factual errors affect newspaper
credibility more than subjective errors?
(3) Are seasoned newsmakers more forgiving of error than occasional news
sources? Do error rates vary by newsmaker categories (i.e., government,
business, citizen activist, witness/bystander)?
(4) What is the influence of newspaper accuracy and credibility on the working
relationship of newsmakers and the press?
Method
A five-page self-administered questionnaire was developed to assess newsmaker
perceptions of newspaper accuracy and credibility. The accuracy questions
closely followed the factual error classifications established by Charnley and
the lineage of research that followed and the subjective error classifications
developed by Berry and his successors. Newsmakers also were asked to describe
each inaccuracy and rate the severity of error types on a 7-point Likert-like
scale. The credibility questions were almost identical to those posed in the
1984 ASNE "model" credibility questionnaire.
In addition, the survey probed newsmakers' willingness to serve as a news
source and their views of the credibility of competing news media. To pre-test
the survey, the questionnaire was administered to a small sample of newsmakers
cited in a campus newspaper, and slight revisions were made to address questions
that had been identified as ambiguous.
The survey was sent with a cover letter on university letterhead explaining the
purpose of the research and a stamped return envelope addressed to a university
journalism professor. Each survey was accompanied by a copy of the story in
which the newsmaker was cited. Newsmakers were promised that they would not be
identified by name or organization in published results. However, complete
confidentiality was not assured. Newsmakers were told that their responses might
be shared with the newspaper's editors and reporters in order to trace how
errors were made. The survey was conducted in cooperation with the News &
Observer, which paid for printing and mailing the surveys. The newspaper also
hired a research assistant who made copies of each story in the study, tracked
down newsmaker addresses, and assembled the survey packages.
Survey packages were mailed to primary newsmakers cited in locally produced,
bylined news stories appearing in the front, metro and business sections of the
News & Observer over a 31-day period in early 1999.
Following Blankenburg's operational definition of a "significantly mentioned"
newsmaker, surveys were sent to the first two people who, either as witnesses or
participants, have first-hand knowledge of the event. When an address
couldn't be found for the first two primary newsmakers, a survey was sent to the
next "significantly mentioned" newsmaker in the story. Surveys were mailed the
same week, usually the same day, as the story appeared. Following a two-week
waiting period, non-responding newsmakers were sent a follow-up survey with a
cover letter urging their participation.
From 553 local news stories published in the period studied here, 1,013
newsmakers were identified (some stories had only a single source). In all but
eight stories, deliverable addresses were identified for at least one newsmaker.
A total of 946 survey packages were mailed. Newsmakers returned 492 surveys for
a response rate of slightly more than 52 percent. The per story response rate,
in which at least one primary newsmaker cited in the article returned a survey,
was 70.5 percent. A little more than half of the respondents identified
themselves as government or business officials, while less than 10 percent said
their role as newsmaker was either as a citizen activist or witness/bystander.
About half of the newsmakers said they had been interviewed by the paper three
or fewer times in the past 12 months, but nearly a quarter of the respondents
were veteran newsmakers interviewed 10 or more times. A full breakdown of
respondents by type and times interviewed is presented in Table 1.
Table 1 here
The study has several limitations. As a case study, the results cannot be
generalized to other newspapers. The study examines "errors" only from the
newsmaker's perspective; the perception of error often is likely quite different
from a reporter's or reader's viewpoint. Even though this study relies on
classic survey techniques of accuracy and credibility, measuring these
constructs remains an inexact science. This study provides a snapshot view of
the influence of newspaper error, but newsmaker judgments of credibility draw on
cumulative experience with the press.
Results
Out of 492 returned surveys, newsmakers identified 559 errors, or 1.1 errors
per respondent. Nearly 52 percent of the respondents found at least one error.
The number of errors reported per story was 1.4, a slightly higher figure
because the majority of the stories were reviewed by two newsmakers. Newsmakers
said 58.1 percent of the 384 locally produced stories contained errors. The
error rates are on the high side of the 40 percent to 60 percent range of errors
found in prior accuracy surveys, but precise comparisons cannot be made due to
differences in survey methodology and rigor.
Approximately 57 percent of the inaccuracies reported were factual errors. The
most frequently cited factual error was in the catchall "other" category (60
errors), followed by misquotations (57), numbers wrong (45) and inaccurate or
misleading headlines (31). Among subjective errors, most frequently reported was
the "essential information omitted" category (51 errors), followed by
significance overstated (45), other (32) and numbers misleading or
misrepresented (24). Despite the technological advance of automated
spellcheckers, typographical and spelling errors accounted for 10.3 percent of
factual errors, an error rate virtually identical to the rate reported by
Charnley more than 60 years ago! A complete ranking of errors is displayed in
Table 2.
While more factual than subjective errors were identified, newsmakers
considered subjective errors more egregious. On a Likert-like scale in which 1
is a minor error and 7 a major error, the mean factual error rating was 3.4
compared to a 4.0 mean subjective rating. Nearly 28 percent of the subjective
errors were assigned one of the two most severe ratings, while less than 19
percent of factual errors were rated as severely. The most severely rated
inaccuracies were subjective errors: exaggerated display or placement,
misleading or misrepresented numbers, essential information missing, and
quote(s) out of context. The least severely rated were factual errors: address
wrong, location wrong, age wrong, and typographical error. Refer to Table 2 for
a complete listing of severity ratings.
Table 2 here
Despite the prevalence of perceived inaccuracies, only one of the 492
respondents reported requesting a correction (another said he set the record
straight in a letter to the editor). The most common reason given for not
seeking a correction was that the errors were considered minor. Other reasons
included fear of creating ill will with the newspaper, the request would be
ignored or shunned, and the futility of fighting a reporter's preconceived
"spin" or agenda. As one newsmaker explained why she didn't seek a correction,
"Scared to - reporter has apparent 'thing' about this topic. It might come back
to bite us."
Inaccuracies also appeared to deter few from being willing to serve as
"newsmakers." Based on the experience with the story under scrutiny, 65 percent
of the respondents characterized themselves as eager to be a news source
compared to less than 4 percent who said they were reluctant. On a seven-point
scale (with 1 = eager and 7 = reluctant), the mean response was 2.3.
Newsmakers considered the News & Observer a relatively credible news medium. On
a seven-point scale (1 is most credible), the Raleigh-based newspaper received a
mean rating of 2.55. The rating was significantly higher than the ratings given
other newspapers, television, radio, news magazines, and the Internet (refer to
Table 3). These findings may seem at odds with a long line of research showing
the public consistently holds television as a more credible news source than
newspapers. However, the results here are consistent with, and indeed
provide further evidence supporting, other research that shows media credibility
is linked to media use as well as a variety of socioeconomic
indicators. Since newsmakers by definition are those "in the know,"
presumably many regularly read the region's major local newspaper. And with more
than half of the respondents either in government or business, a newsmaker
generally fits what researchers have pegged the "ideal" newspaper reader:
someone who "has had at least some college, resides in an urban area, and has a
high-status occupation, most likely one of the professions."
Table 3 here
Newsmakers also rated the News & Observer higher on the ASNE model credibility
questionnaire than when the same questions were posed in the national sample of
newspaper readers in 1984. On a 5-point Likert-like scale (with 1 the highest
positive score and 5 most negative), the News & Observer received a mean 2.47
rating compared to the 3.0 mean of the 1984 study.
News & Observer newsmakers rated the newspaper most positively on reporter
training, concern for the community, fairness and accuracy.
Newsmakers were most negative in their assessment of the newspaper regarding
sensationalism, bias, watching out for the reader's interests, and separating
fact from opinion.
As noted earlier, it would be reasonable to expect that newspaper credibility
is positively correlated with newspaper accuracy: A news medium is unlikely to
be held credible unless it is believed accurate. This supposition is examined
empirically by a variety of measures.
The first test of the strength of the relationship between accuracy and
credibility used the Meyer five-item believability index. In an internal test of
reliability, the five items (bias, trustworthiness, accuracy, fairness and
"tells the whole story") produced a .92 Cronbach alpha coefficient, indicating
the items measure the same characteristic though some redundancy may be
involved. A statistically significant relationship was found between the number
of factual errors and subjective errors detected and newsmaker perception of the
newspaper's believability, r = .176, p < .001. However, with only 3.1 percent of
the variance explained, the relationship was weak. A larger correlation was found
between believability and the severity of subjective errors, r = .313, p < .001.
The correlation between believability and the severity of factual errors was not
significant, r = .135, p = .080.
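The index-reliability and shared-variance figures above can be illustrated with a short script. This is a sketch with synthetic numbers, not the study's data or code; the `cronbach_alpha` helper is my own implementation of the standard formula used for additive scales like the five-item believability index.

```python
# Sketch: Cronbach's alpha for an additive index, and variance explained (r^2)
# from a Pearson correlation. Synthetic illustration, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x k matrix of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of summed scale)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# A correlation of r = .176 corresponds to r^2, the share of variance explained:
r = 0.176
print(round(r**2 * 100, 1))  # prints 3.1 (percent), matching the text
```

As a sanity check, perfectly redundant items (identical columns) yield alpha = 1, and a high alpha such as the .92 reported above likewise signals some redundancy among items.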
Perhaps a stronger link between accuracy and credibility can be demonstrated by
narrowing the focus and examining the relationship between errors and a
newsmaker's overall view of the story in which he or she is quoted. To test this
idea, a credibility index was created based on seven criteria that newsmakers
rated the news story.
Its alpha coefficient was .89, and the use of the seven items in a simple
additive scale appeared justified. The relationship between story accuracy and
story credibility was statistically significant by every measure of error,
including severity of factual errors. The correlation between the total number
of factual and subjective errors and story credibility was moderately strong, r
= .376, p < .001. The correlation between severity of subjective errors and
story credibility was the strongest, r = .490, p < .001. In other words, the
extent to which newsmakers believed the story erred in interpretation explained
approximately 24 percent of the variance in their overall assessment of the
story.
Yet another way to examine the impact of inaccuracies is to determine the
relationship between perceived errors and the willingness of newsmakers to be
news sources again. By every measure of error, the correlation between accuracy
and newsmaker willingness to be a source was significant. The strongest
relationship was with perceived severity of subjective error, r = .362, p <
.001. The correlation between source willingness and News & Observer
credibility, as measured by the Meyer believability index, also was
statistically significant, r = .272, p < .001. An even stronger relationship was
found between source willingness and the newsmakers' perception of the story's
overall credibility, r = .452, p < .001.
The data show inaccuracies have a demonstrable effect on newsmakers' perceptions
of the story's credibility, and, in turn, story credibility influences the
newsmaker perceptions of the newspaper's overall credibility. Through multiple
regression, the combined effect of errors and other aspects of story credibility
can be shown to account for a significant proportion of the variability in
newsmaker view of the newspaper's overall credibility, R = .503, F (9,181) =
6.479, p <.001. Similarly, the combined effects of errors and story credibility
account for a significant proportion of the variability in willingness to serve
as a news source, R = .549, F (9,183) = 8.324, p <.001.
Still unresolved, however, is how these inter-related effects sort out. Through
hierarchical regression, two models were analyzed. The first model comprised the
seven items used to evaluate overall newspaper credibility. These accounted for
a significant amount of the variability in overall newspaper credibility, R =
.494, F (7,181) = 8.017, p < .001.
A second analysis was conducted to evaluate whether story errors predicted
overall newspaper credibility above and beyond the seven credibility items in
the first model. The two error measures - number of errors and severity of
errors - accounted for a negligible and statistically insignificant proportion
of newspaper credibility after controlling for the effects of story credibility
(Table 4). The influence of story error on source willingness also was
insignificant after controlling for the effects of story credibility. In other
words, errors in any one story appear to have little measurable influence on
newsmaker perceptions of the newspaper's overall credibility or the willingness
of newsmakers to participate in the news process.
Table 4 here
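The hierarchical-regression logic described above can be sketched in a few lines: fit the base model (the credibility items), fit the full model (adding the error measures), and test whether the gain in R-squared is significant. This is an illustrative sketch on synthetic data, not the study's code; the variable names are stand-ins.

```python
# Sketch of the hierarchical (incremental) regression step: does adding error
# measures improve prediction of newspaper credibility beyond the story
# credibility items? Synthetic data, not the study's.
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Ordinary least squares R^2 with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

def incremental_f(X_base, X_added, y):
    """F statistic for the R^2 gained by adding predictors (Step 2 vs Step 1)."""
    n = len(y)
    r2_1 = r_squared(X_base, y)
    r2_2 = r_squared(np.column_stack([X_base, X_added]), y)
    q = X_added.shape[1]               # number of predictors added
    p = X_base.shape[1] + q            # total predictors in the full model
    f = ((r2_2 - r2_1) / q) / ((1 - r2_2) / (n - p - 1))
    return r2_1, r2_2, f

# Illustrative use: seven story-credibility items, two error measures.
rng = np.random.default_rng(1)
n = 200
story_items = rng.normal(size=(n, 7))      # stand-ins for the 7 credibility items
error_measures = rng.normal(size=(n, 2))   # stand-ins for number/severity of errors
credibility = story_items @ rng.normal(size=7) + rng.normal(size=n)
r2_1, r2_2, f = incremental_f(story_items, error_measures, credibility)
```

A small F for the added block (relative to the F distribution with q and n - p - 1 degrees of freedom) corresponds to the paper's finding that error measures add negligible predictive value once story credibility is controlled.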
In addition, no statistically significant relationships could be identified
between types of news sources (government officials, business representatives,
etc.) and number or severity of errors, story credibility, or overall newspaper
credibility. The relationship between frequency of newsmaker interviews and
error or credibility scores also was not statistically significant.
Discussion
From the newsmakers' view, more than half of the local news stories published
by the News & Observer were in error - a rate of perceived inaccuracy surely to
be of concern to the paper's management. As previously noted, the study's
findings are not generalizable. Nonetheless, the implications are sobering for
anyone who cares about the media. If the News & Observer, a Pulitzer
prize-winning newspaper widely cited for excellence,
has such difficulty getting its facts straight, one can imagine what the
results would be for news organizations less committed to quality journalism.
In an article entitled "To error is human," Quill magazine suggested that
production automation hampers quality control as copy editors spend more time
paginating papers and less time reading articles before they go into print. "Too
much copy editing is done with a spell-checker only," a local publisher told
Quill. This study's findings provide some empirical support for Quill's
anecdotal account. Not only were factual errors prevalent but even misspellings
and typos - the kinds of blunders that machines were supposed to catch -
continued to find their way into the newspaper at near historic levels.
A cliché in journalism is that what really matters to newsmakers is that you get
their names spelled right. In rating the severity of errors, newsmakers indeed
showed that getting the name right is important to them. But of even greater
concern, they said, were misleading headlines, misquotations, and wrong numbers.
From the newsmakers' perspective, getting the facts straight clearly wasn't
sufficient. Most bothersome were interpretive mistakes in which newsmakers
believed the newspaper overplayed the story, left out vital information, or made
other "errors of meaning." In fact, newsmakers rated all but one subjective
error type - poor story display - higher in severity than the average factual
error.
However, newsmakers also appeared strikingly forgiving of errors. The majority
of the errors were considered minor. Many were deemed so inconsequential that a
correction wasn't considered necessary. Despite the prevalence of error, the
majority of newsmakers gave the News & Observer favorable credibility ratings
and remained quite willing to be news sources again. Many politicians, company
spokesmen, and other newsmakers may be indulgent of errors because they have a
vested interest in serving as a news source. But the data indicate that ordinary
citizens who have been caught up in the news were no less willing to be quoted
in the paper again than veteran newsmakers.
While many errors were considered too small to correct, even newsmakers who
felt egregiously misrepresented refused to seek a correction. The fact that more
than 500 errors were detected, yet only a single correction was requested,
vividly shows that news managers cannot rely on corrections as a safety valve
for the venting of frustrations by wronged newsmakers. A proactive approach -
random accuracy checks, publicizing correction policies, etc. - is needed to
help the newspaper set the record straight.
By several measures, the relationship between errors and newspaper credibility
was statistically significant but weak. A somewhat stronger relationship could
be established between errors and story credibility, especially when the errors
were subjective. The data also provided a modest connection between errors and
source willingness to be interviewed again. The relationship was most pronounced
between overall newspaper credibility and source willingness. Though sources
overall were cooperative, this finding could be ominous: if the public's
contempt for the media spreads to newsmakers, journalists may feel even more
"under siege," as the American Journalism Review characterized the onslaught of
hostility directed at the media.
After controlling for a variety of aspects of story credibility, the effects of
error on newspaper credibility and on source willingness were insignificant.
These results suggest accuracy is an important element of story credibility, but
the influence of errors on any one story is not sufficient to be predictive of
overall newspaper credibility. For those judgments, newsmakers are most
concerned with how the story was played. There's an underlying consistency to
the survey data: errors of judgment were held more egregious than errors of
fact; the newspaper was considered least credible in regard to other subjective
sins, such as sensationalism, bias, and failing to watch out for the reader's
interests. The bottom line is that newsmakers expect the newspaper not only to
get the story right, but to tell it in a manner that is straightforward,
fair-minded and respectful of its readers.
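The hierarchical regression logic summarized above (entering the story-credibility items at step one, then testing whether story error adds explanatory power for newspaper credibility at step two) can be sketched as follows. This is a minimal illustration on synthetic data; the variable names and scales are assumptions for demonstration, not the paper's actual survey items.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for the survey variables (illustrative assumption):
# three story-credibility items and a binary story-error indicator.
story_cred = rng.normal(0.0, 1.0, (n, 3))   # e.g., fairness, tone, placement
error = rng.integers(0, 2, n)               # 1 = story contained an error
# Outcome driven by the credibility items only, plus noise.
newspaper_cred = story_cred @ np.array([0.5, 0.3, 0.4]) + rng.normal(0.0, 1.0, n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Step 1: story-credibility items only.
r2_base = r_squared(story_cred, newspaper_cred)
# Step 2: add the error indicator and examine the R^2 increment.
r2_full = r_squared(np.column_stack([story_cred, error]), newspaper_cred)
delta = r2_full - r2_base
print(f"R2 step 1: {r2_base:.3f}  R2 step 2: {r2_full:.3f}  increment: {delta:.3f}")
```

Because the error indicator here is generated independently of the outcome, the step-two increment in explained variance is negligible, which parallels the paper's finding that story error adds little to the prediction of overall newspaper credibility once story-credibility judgments are controlled.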
One could mistakenly surmise from this study that the media's hand-wringing
over accuracy is overblown. But the focus of the industry's concern is on
ordinary readers, who are likely much less indulgent of inaccuracies. Further
study is needed to examine how the influence of error on readers differs from
that on newsmakers.
Longitudinal studies also would show the cumulative effect of errors on
newspaper credibility. In addition, efforts should be made to better understand
the tension between what newsmakers see as errors of judgment and what
journalists view as their responsibility to pursue and report the news.
While the link between errors and credibility seems obvious, this study shows
that it is a complex, multi-faceted relationship - one that ought to keep yet
another generation of accuracy researchers occupied.
Table 1. Percentages for types of newsmakers and times
interviewed in past 12 months
[--- WMF Graphic Goes Here ---]
Table 2. Error types ranked by mean severity rating
(Severity ranking on a 7-point scale: 1 = minor, 7 = major)
Table 3. News media ranked by mean credibility score
(Credibility score based on a 7-point scale: 1 = most credible, 7 = least credible)
Table 4. Hierarchical regression model summary and one-way analysis of variance
of story credibility items and story error on newspaper credibility.
 American Society of Newspaper Editors, "Why Newspaper Credibility Has Been
Dropping," (Washington, D.C.: American Society of Newspaper Editors, December
1998, 1). The study was carried out by Urban & Associates.
 Michael Singletary, "Accuracy in News Reporting: A Review of the Research,"
ANPA News Research Report No. 25, January 25, 1980, 6.
 Quoted in Judith Sheppard, "Playing Defense: Is enough being done to
prevent future journalistic embarrassments?" American Journalism Review,
September 1998, 49.
 The most recent overall newspaper accuracy survey identified in the
literature was conducted in 1987. See Philip Meyer, "A Workable Measure of
Auditing Accuracy in Newspapers," Newspaper Research Journal 10(1):39-51 (Fall
1988). Since then, researchers have focused on more specialized kinds of
accuracy research, such as error in news magazines, science reporting and
 William B. Blankenburg, "News Accuracy: Some Findings on the Meaning of
Errors," Journal of Communication 20(4):375-86 (December 1970)
 To avoid confusion over terminology, this paper refers to persons cited in
the news as "newsmakers" instead of "news sources," the term commonly used in
accuracy literature to refer to the medium (i.e., newspapers, television) in
which the news is communicated.
 Mitchell V. Charnley, "Preliminary Notes on a Study of Newspaper Accuracy,"
Journalism Quarterly, 13:394-401 (1936).
 See, for example, Charles H. Brown, "Majority of Readers Give Papers An A
for Accuracy," Editor & Publisher, February 13, 1965, 13, 63; Fred C. Berry, "A
Study of Accuracy in Local News Stories of Three Dailies," Journalism Quarterly,
44:482-90 (Autumn 1967); William B. Blankenburg, "News Accuracy: Some Findings
on the Meaning of Errors," Journal of Communication 20(4):375-86 (December 1970)
and William A. Tillinghast, "Source Control and Evaluation of Newspaper
Inaccuracies," Newspaper Research Journal (Fall 1983): 13.
 Berry, "A Study of Accuracy," 487; See also Gary Lawrence and David Grey,
"Subjective Inaccuracies in Local News Reporting," Journalism Quarterly
46(4):753-57 (Winter 1969).
 Michael Ryan, "A Factor Analytic Study of Scientists' Responses to Error,"
Journalism Quarterly, 52(2):333-36 (Summer 1975).
 Philip Tichenor, Clarice Olien, Annette Harrison and George Donohue, "Mass
Communication Systems and Communication Accuracy in Science News Reporting,"
Journalism Quarterly 47(4):673-683 (Winter 1970); James W. Tankard Jr. and
Michael Ryan, "News Source Perceptions of Accuracy of Science Coverage,"
Journalism Quarterly 51(2):219-25;334 (Summer 1974)
 Michael Ryan and Dorothea Owen, "An Accuracy Survey of Metropolitan
Newspaper coverage of Social Issues," Journalism Quarterly 54(1):27-32 (Spring
 L. L. Burriss, "Accuracy of News Magazines as Perceived by News Sources,"
Journalism Quarterly 62(4):824-827 (Winter 1985).
 Ryan and Owen, "An Accuracy Survey," 27. Also refer to M.W. Singletary, R.
Boland, W. Izzard, & T. Rosser, "How accurate are news magazine forecasts?"
Journalism Quarterly 60(2), 342. In his 1988 study, Meyer reported a 25 percent
inaccuracy rate, but his investigation was based on a different and much smaller
selection of error categories than other accuracy surveys. See Meyer, "A
Workable Measure."
 Singletary, "Accuracy in News Reporting," 6.
 Gilbert Cranberg, "Do accuracy checks really measure what respondents
think about news stories?" The Bulletin of the American Society of Newspaper
Editors 697:14-15 (July/August 1987). However, an experiment designed to
replicate Cranberg's findings failed to show statistically different results
between newspaper- and university-sponsored surveys. See Meyer, "A Workable
Measure."
 William Tillinghast, "Newspaper Errors: Reporters Dispute Most Source
Claims," Newspaper Research Journal, 3(4):15-23 (July 1982).
 A thoughtful review of accuracy research was made by Frank Fee, "Errors in
the News," (unpublished manuscript, July, 1993).
 Tichenor, Olien, Harrison & Donohue, "Mass Communication," 673
 Blankenburg, "News Accuracy," 376.
 Lawrence & Grey, "Subjective Inaccuracies," 753
 For a discussion of the role of opinion leaders, see Shearon Lowery and
Melvin DeFleur, "Personal Influence: the Two-Step Flow of Communication," in
Milestones in Mass Communication Research (White Plains, N.Y.: Longman,
 Cecilie Gaziano and Kristin McGrath, "Measuring the Concept of
Credibility," Journalism Quarterly 63(3):451-462 (Autumn 1986).
 Cecilie Gaziano, "How Credible is the Credibility Crisis?" Journalism
Quarterly 65(2):267-268;375 (Summer 1988).
 See, for example, Michael Singletary, "Components of Credibility of a
Favorable News Source," Journalism Quarterly 53(2):316-319 (Summer 1976);
Gaziano and McGrath, "Measuring the Concept," 452; and Tony Rimmer and David
Weaver, "Different Questions, Different Answers? Media Use and Media
Credibility," Journalism Quarterly 64(2):28-36;44 (Spring 1987).
 For example, in a review of four major credibility surveys conducted in
the 1980s, Gaziano found that each survey included questions regarding
perceptions of accuracy. "How Credible," 270. See also Wayne Wanta and
Yu-Wei Hu, "The Effects of Credibility, Reliance, and Exposure on Media
Agenda-Setting: A Path Analysis Model," Journalism Quarterly 71(2):91 (Spring
 Gaziano and McGrath, "Measuring the Concept," 452.
 American Society of Newspaper Editors, Newspaper Credibility: Building
Reader Trust, (Washington, D.C.: American Society of Newspaper Editors, 1985,
63). The study was carried out by MORI Research.
 Philip Meyer, "Defining and Measuring Credibility of Newspapers:
Developing an Index," Journalism Quarterly 65(3): 567-574 (Autumn 1988).
 Mark Douglas West, "Validating a Scale for the Measurement of Credibility:
A Covariance Structure Modeling Approach," Journalism Quarterly 71(1): 159-168
 Ibid., 165.
 Michael Burgoon, Judee Burgoon and Miriam Wilkinson, "Newspaper Image and
Evaluation," Journalism Quarterly 58:411 (1981).
 Neal Shine, "Editors chastised and cheered at 'mass paranoia' session,"
APME News, January 1984, 3-8.
 American Society of Newspaper Editors, "Newspaper Credibility," 13, 20.
 American Society of Newspaper Editors, "Why Newspaper Credibility," 1.
 Ibid., 3.
 Ibid., 20-21.
 Blankenburg, "Some Findings," 383.
 As suggested by Meyer, three negative-positive items were reversed to make
the semantic polarities consistent. See "Defining and Measuring Credibility,"
 Past studies indicate that the return rate and perhaps validity are
improved by having accuracy surveys administered by independent academic
researchers rather than by editors. See Cranberg, "Do accuracy checks really
measure," 14-15; Meyer, "Defining and Measuring Credibility," 42-45.
 The survey dates were Jan. 18 through Feb. 17, 1999.
 By this definition, Blankenburg explains, a person making an appointment
of a civic committee would be considered "significantly mentioned" and so would
the chairman of the committee if he were quoted or present at the time of the
appointment. The new members, if merely listed, would not be deemed
significantly mentioned. Blankenburg, "News Accuracy," 376.
 John Newhagen and Clifford Nass, "Differential Criteria for Evaluating
Credibility of Newspapers and TV News," Journalism Quarterly 66(2): 277-284
(Summer 1989). Also see Mark Douglas West, "Validating a Scale," 159;
 See, for example, Bruce Westley and Werner Severin, "Some Correlates of
Media Credibility," Journalism Quarterly 41:325-35 (1964); Bradley Greenberg,
"Media Use and Believability: Some Multiple Correlates," Journalism Quarterly
43:665-70,732 (1966), and Rimmer and Weaver, "Different Questions," 28.
 Westley and Severin, "Some Correlates," 334.
 Both means are based on 13 items. In this comparison, three of the
original 16 items were dropped because of reversed polarity measures in the
original ASNE study. See note 42.
 The seven criteria were fairness and balance, context and perspective,
clarity of writing, story placement, newsworthiness, story tone, and story
 See, for example, Valerie Burgher, "A tough act to follow at the News &
Observer," Mediaweek, June 23, 1997, 21; Philip Moeller, "The digitized
newsroom," American Journalism Review, January-February 1995, 42-46; and Kelly
Heyboer, "Computer-assisted reporting's 'Dirty Harry,'" American Journalism
Review, June 1996, 13.
 Linda Fibuch, "Under Siege," American Journalism Review, September 1995,
 Blankenburg had similar findings in his 1970 study, in which errors did
not significantly affect newsmaker opinions of the newspaper. "News Accuracy,"