Redefining "Know Nothings"
University of Texas at Austin
Communication Theory & Methodology Division
Association for Education in Journalism and Mass Communication
August 9-12, 1995
Redefining "Know Nothings"
The perverse and unorthodox argument of this little book is that
voters are not fools. . . .V.O. Key 1
Since the term "know nothings"2 was used by Hyman and
Sheatsley3 more than 45 years ago, political scientists and
sociologists have written with some alarm about the large number
of Americans who apparently are "steeped in political
ignorance."4 They base their concerns on the inability of
respondents to correctly answer a series of questions that sound
like they are lifted from a high school civics exam. As Whitney
and Wartella note, "A virtual cottage industry has arisen in the
past few years in making out the American public as a bunch of
. . ."5
Even the landmark "knowledge gap" hypothesis developed by
Tichenor, Donohue and Olien6 perpetuates this narrow concept of
knowledge. The original research on which the hypothesis is based
tested knowledge of such relatively obscure things as the names
of earth satellites. Researchers did not address the relevance
and usefulness of this kind of information to the average
respondent. This problem with methodology based on "textbookish
tests of political knowledge" was identified by Neuman, Just and
Crigler when they said:
Interestingly, survey-based public opinion research has
traditionally done better at measuring opinion than at measuring
knowledge. . . When surveys do focus on knowledge rather than
opinion, they tend to be asked primarily on rather narrowly
conceived questions that one might associate with high school
civics, such as the length of senators' terms or a definition of
'Electoral College.' 7
Such pedagogical prejudice is not surprising since many
researchers spend at least some of their day in classrooms. By
limiting the kind of questions they use to define knowledge,
these scholars have perpetuated an intellectual elitism that
values factual knowledge over conceptual knowledge.
The research reported here, therefore, has a two-fold
purpose. First, I will demonstrate construction of an index to
measure knowledge that gives respondents full credit for
familiarity with public affairs. I will then test the index
against two hypothesized relationships:
H1: As media use increases, knowledge increases.
H2: As social interaction increases, knowledge increases.
The original work by Hyman and Sheatsley indexed knowledge
based on respondents' ability to answer five foreign affairs
questions. They covered such post-World War II issues as a Paris
meeting of the Big Four foreign ministers, a proposed loan to
England then being debated in Congress, and the political status
of Palestine. Of the 1,292 persons interviewed, Hyman and
Sheatsley defined about one in seven as "chronic know-nothings."
Only 12 percent were aware of all five issues.8 In 1964, Lane
and Sears identified four kinds of information used to test
public knowledge in surveys: (a) political leaders; (b) political
issues; (c) government actions; and (d) political institutions. 9
Using national public opinion polls from 1966 and 1970,
Glenn concluded "a large proportion of the American public can
not, or recently could not, intelligently vote or participate in
the democratic process."10 The polls he used asked people
factual questions like the name of their Congressman, the
identity of columnists and writers such as Walter Lippmann and
Joseph Alsop and the definition of "open housing."
In 1988, Bennett applied a familiar classroom grading scheme
to the measurement of public knowledge. Using a composite of ten
indices that cover such issues as awareness of the political
parties' "good" and "bad" points and knowledge of congressional
candidates' names and parties, he passed out grades: A = at least
90 percent correct; B = 80-89 percent correct; C = 70-79 percent
correct; D = 60-69 percent correct; and F = anyone not able to
answer at least 60 percent of the questions correctly. On the
basis of this grading scheme, he concluded that 29 percent of
adult Americans are "know nothings." He considered awarding a
passing score to those who answered at least 50 percent of the
questions but discarded that idea, saying:
. . . given the ease with which some items could be finagled and
the leniency with which others were scored, to accept a standard
so low would debase the currency of political information. .
.Although strictly speaking, most of those who failed know
"something" of public affairs, their inability to achieve a
passing mark on a grade-inflated test constitute sufficient
reason to refer to them as "know-nothings."11
In a study one year later, Bennett said that despite being
better educated, Americans' knowledge of public affairs decreased
between 1967 and 1987. That study scored the number of
individuals able to correctly name three public officials in
several national surveys conducted during that period.12 In
another study Neuman found that 56 percent of the population was
unable to identify congressional candidates by name. He said:
Data shows overwhelmingly that even the basic facts of political
history, the fundamental structure of political institutions, and
current political figures and events escape the cognizance of the
great majority of the electorate.13
Recently, Carpini and Keeter14 used an extensive
questionnaire containing 50 open-ended factual questions about
government and politics. Gallup asked 14 of them on various
national surveys. They asked such things as "Will you tell me
what the term 'veto' means to you?" While the questions were
open-ended, they still looked for specific factual information.
The percentage of correct answers ranged from a low of ten
percent who knew the ratification date of the women's suffrage
amendment to a high of 96 percent who knew the length of a
presidential term. The median correct score was 50 percent.
Carpini and Keeter say:
While factual knowledge is not the only standard by which to
measure a citizenry, one can make the case that knowledge about
the people, institutions, processes, and substance of national
politics is a necessary, if not sufficient, prerequisite for an
effective democracy. While we do not claim to have identified
the specific bits of information the public must know, we think
the information "tested" in this paper is both important in its
own right and is a reasonable sample of the larger pool of
knowledge one might expect from an educated citizenry.15
Clearly, many of these authors put a premium on the ability of
respondents to recite factual information rather than giving them
the freedom to talk about those things they found important. Lane
and Sears recognized that these kinds of questions smack of:
. . . "school knowledge" as contrasted to "life knowledge" and as
such carried with them the faint coloration of dry, useless
information. Few life decisions hinge upon them and their
implications for policy matters men care about, though really
enormous, are thoroughly obscure to most of the public.16
By concentrating on factual knowledge, scholars diminish the
value of conceptual knowledge and ignore the essential role of
cognitive schema in human information processing.
On this subject, Lodge and Hamill say:
Confronted with a blizzard of facts, figures, and images from
which political impressions are formed and judgments made, the
individual must of necessity impose some perspective on the world
to make it comprehensible. An effective cognitive framework
allows the citizen selectively to attend to some stimuli and
disregard others, to group together otherwise disparate bits and
bytes of information, to store in memory a representation of this
information, and then to retrieve it. . .17
In other words, human beings depend on broad conceptual
schema to process and store knowledge. This suggests that
researchers should open their lens as wide as possible when
testing public knowledge. Researchers need to design methods to
test knowledge based on the way people store and retrieve
information. Instead of maintaining the kind of narrow fact-based
focus that has marred much of the study of public knowledge in
the past 45 years, the research reported here will suggest a way
of measuring knowledge that gives full credit for knowledge of
public affairs that does not depend on factual recall and then
test levels of knowledge against two variables.
Data used in this study were drawn from a telephone survey
conducted between March 2 and 14, 1994. Respondents consisted of
474 randomly selected adult heads-of-household from the city and
suburbs of Austin, Texas.18 Primary and alternate telephone
numbers were drawn from the 1994 Greater Austin Telephone
Directory published by Southwestern Bell Telephone Company.
Using a method developed by Keir, et al.19 the final digit
of the random list of phone numbers was modified so that even
unpublished numbers (such as new connections and unlisted
telephones) had a random chance of being selected. Demographics
for the sample compared favorably with published census data for
the Austin area.20 Interviewers were trained before the calls
began and supervisors monitored the survey while it was in the
field. Completed surveys were subjected to random verification
and valid responses were coded and processed using standard
data-analysis procedures.
The survey covered a range of political and ideological
issues as well as open and closed response questions about
respondents' involvement in community groups and their media-use
habits. Four open-ended questions were used to develop a
knowledge score for each respondent. Two questions covered
national/international issues. One dealt with a state issue and
one with a local issue. The four questions used to construct the
index asked respondents to:
• Name the best program proposed by President Clinton.
• Name his worst program.
• Explain the general idea behind a high-profile incident
involving Kay Bailey Hutchison, the first female elected to
the U.S. Senate from Texas. The question left it to the
respondent to define the "incident." Most talked about her
indictment on charges stemming from her term as state treasurer.
• Give information about what they thought were the
underlying causes of the environmental problems affecting
Barton Creek. Protection of the creek, which feeds a natural
swimming pool that is a popular city recreational and tourist
attraction, was the subject of intense political debate in the
community prior to the survey.
The two questions about Clinton's programs came early in the
survey after respondents were asked to rate the performance of
the President and First Lady. The questions about Sen. Hutchison
and Barton Creek were separated from the Clinton questions and
from each other. None of the open-ended questions involved
prompting by the interviewers, who recorded answers as given.
A four-point "knowledge index" was developed for each
respondent, as follows:
0 = None = No questions answered.
1 = Low = Answered one question.
2 = Moderate = Answered two questions.
3 = High = Answered three questions.
4 = Very High = Answered four questions.
Respondents were given credit for any answer to a question that
required thought.21 In construction and scoring method, the
Austin knowledge index is very similar to that used by Hyman and
Sheatsley in the original "know nothing" article.
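The scoring rule can be sketched in a few lines of Python (an illustration of the index construction described above, not the original analysis code; the function names and the list of non-substantive replies are my assumptions):

```python
# Replies that earn no credit, mirroring the rule that an answer had
# to "require thought" (e.g., "All" or "None" to the Clinton items).
NON_SUBSTANTIVE = {"", "all", "none", "don't know", "refused"}

LABELS = {0: "None", 1: "Low", 2: "Moderate", 3: "High", 4: "Very High"}

def knowledge_score(answers):
    """answers: the four open-ended responses, as strings.
    Returns the 0-4 knowledge index: one point per substantive answer."""
    return sum(1 for a in answers
               if a.strip().lower() not in NON_SUBSTANTIVE)

def knowledge_level(answers):
    """Map a respondent's answers to the index's five labels."""
    return LABELS[knowledge_score(answers)]
```

For example, a respondent who named two Clinton programs and discussed Barton Creek but offered nothing on the Hutchison question would score 3, "High."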
Results and Findings
Sixty percent (n = 289) of the Austin sample answered at
least three of the four questions. As shown in Table 1, almost
one-fourth of the respondents (n = 112) scored "very high" (a
perfect 4.00) by giving an answer to all four questions. This
compares with 12 percent who answered all questions in the 1947
"know nothing" profile by Hyman and Sheatsley.22
Only four percent (n = 20) of the Austin respondents failed
to answer even one question. This compares with 14 percent
labeled "know nothings" by Hyman and Sheatsley. The proportion
that demonstrated no knowledge of the four issues on the Austin
survey is also much smaller than the 29 percent that Bennett
labeled "know nothings" in 1988 because they could not achieve a
passing score on his civics test.
Table 1
Number of Respondents at Each Level of Knowledge

                 Score    Frequency    Percent
None              0.00        20          4.2
Low               1.00        54         11.4
Moderate          2.00       111         23.4
High              3.00       177         37.3
Very High         4.00       112         23.6
TOTAL                        474        100

Note: This index is a composite score based on a value of "1"
assigned for each question answered and a value of "0" for each
question not answered. Maximum possible score = 4.00.
Questions: What do you think is the best program that President
Clinton has proposed? What do you think is the worst program
that President Clinton has proposed? If you had to explain the
general idea of this incident (involving U.S. Senator Kay Bailey
Hutchison) to someone who doesn't know about it, what would you
tell them? What do you think are the underlying causes of the
environmental problems at Barton Creek?
As a validity check, scores from the knowledge index were
compared to respondents' reports of their own level of knowledge
of public affairs. All correlations were significant (p<.001).23
Hypothesis One: Media Use
H1: As media use increases, knowledge increases.
The first hypothesis was tested by applying two operational
definitions of media use. Knowledge index scores were first
compared to reported use of six types of media, as shown in Table 2.
Table 2
Correlation of Knowledge Index
To Reported Frequency of Use of Six Types of Media

Type of Media        Spearman     df    Chi-Square
Newspaper            .246***      16     53.56***
Local TV News        .051         16     10.68
Network TV News      .121**       16     20.16
CNN                  .122**       16     32.34**
Radio News           .274***      16     50.42***
Newsmagazines        .212***       8     28.99***
** p<.01. *** p<.001.
Questions: How often do you read a daily newspaper? . . .Watch
local evening TV news? . . . Watch network TV evening news? . .
.Listen to radio news? . . .Watch CNN, the 24-hour cable news
channel?. . Read a weekly newsmagazine such as Time, Newsweek or
U.S. News and World Report? For all but the newsmagazine
category, response options were as follows: Never or Seldom, 1-2
Days a Week, 3-4 Days a Week, Nearly Every Day, Every Day. For
newsmagazines, the response categories were: Never or Seldom, 1-3
Times a Month, Every Week.
Austin respondents reported how often they read a daily
newspaper, read newsmagazines, listened to radio news, and
watched television news programs, including local and network
evening news casts as well as the 24-hour cable news channel
(CNN). In this comparison, only local television evening
newscasts failed to show a significant relationship to knowledge
scores.
The second test looked at aggregate effects of media use. To
do this test, a composite index of overall media use was
developed for five of the six media identified in Table 2.24
This index is similar to the knowledge index. Each level of
reported media use was assigned a numerical value as follows:
Value Frequency of Use
0 = Never or Seldom.
1 = 1 or 2 Days Per Week.
2 = 3 or 4 Days Per Week.
3 = Nearly Every Day.
4 = Every Day.
For example, a person who watched both local and network
evening news nearly every day, read a newspaper every day and
listened to radio news three or four days-per-week would earn a
"high" media-use score of 12. On the other hand, someone who
reported watching local evening news every night, but used no
other forms of media information, would receive a low score of
four. The maximum possible score for any respondent was 20 (5 x 4).
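The composite is simply the sum of the five frequency values; the sketch below (hypothetical helper names, assuming responses are recorded as the survey's answer categories) reproduces the worked example in the text:

```python
# Frequency values assigned to the survey's response options.
FREQ = {"never or seldom": 0, "1-2 days a week": 1,
        "3-4 days a week": 2, "nearly every day": 3, "every day": 4}

# The five media aggregated into the index (newsmagazines excluded).
MEDIA = ("newspaper", "local tv news", "network tv news",
         "cnn", "radio news")

def media_use_score(responses):
    """responses: dict mapping medium -> reported frequency category.
    Media not reported default to 'never or seldom'.  Max = 20 (5 x 4)."""
    return sum(FREQ[responses.get(m, "never or seldom")] for m in MEDIA)

# The worked example from the text: local and network news nearly
# every day (3 + 3), a newspaper every day (4), radio news three or
# four days a week (2) -- a "high" media-use score of 12.
example = {"local tv news": "nearly every day",
           "network tv news": "nearly every day",
           "newspaper": "every day",
           "radio news": "3-4 days a week"}
```

Calling `media_use_score(example)` returns 12, and a respondent reporting only local evening news every night scores 4, matching the two cases described above.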
As shown in Table 3, about three-fourths of the respondents
were "moderate" to "high" media users (scores of six to 15).
Eleven percent (n = 52) were "very high" media users, while 15
percent (n = 69) were low media users. Three respondents used
none of the five types of media measured.
Table 3
Number of Respondents at Each Level of Media Use

             Media Use Index
                Score    Frequency    Percent
None              0           3          0.6
Low              1-5         69         14.6
Moderate         6-10       188         39.7
High            11-15       162         34.2
Very High       16-20        52         11.0
TOTAL                       474        100
Note: The Media Use Index aggregates responses to the following
five questions: How often do you read a daily newspaper? How
often do you watch network TV evening news? How often do you
watch local evening TV news? How often do you watch CNN, the
24-hour cable news channel? How often do you listen to radio
news? Maximum possible score = 20 based on the following values
for responses: Never or Seldom = 0; 1-2 days a week = 1; 3 - 4
days a week = 2; Nearly every day = 3; Every day = 4.
As shown in Table 4, there is a significant relationship
between aggregate media use and knowledge (Spearman Correlation =
.280, p<.001). "Very high" media users answered all four
questions almost three times as often as "low" media users.
Table 4
Cross-Tabulation of Percentages of Respondents
By Scores on Media Use Index and Knowledge Index

Knowledge                       Media Use Index
Index Score         None     Low   Moderate   High   Very High
None (0.00)          --     11.6     4.8       1.2      1.9
Low (1.00)          66.7    15.9    12.8       9.9      1.9
Moderate (2.00)     33.3    30.4    26.1      21.0     11.5
High (3.00)          --     29.0    39.4      35.2     50.0
Very High (4.00)     --     13.0    17.0      32.7     34.6
                    100%    100%    100%      100%     100%
                   (n=3)   (n=69)  (n=188)   (n=162)  (n=52)

Chi-Square = 53.77, df = 16 (p<.001). Spearman Corr = .280.
Note 1: The Knowledge Index aggregates answers to four questions
(see Table 1).
Note 2: The Media Use Index aggregates answers to five questions
(see Table 3).
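The Spearman coefficients reported in these tables are ordinary Pearson correlations computed on tie-averaged ranks, which can be sketched in pure Python (an illustrative implementation, not the statistical package used for the analysis):

```python
def avg_ranks(xs):
    """Rank values 1..n, giving tied values the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j to cover the whole group of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5      # assumes neither variable is constant

def spearman(a, b):
    """Spearman rank correlation of two equal-length sequences."""
    return pearson(avg_ranks(a), avg_ranks(b))
```

Tie-averaging matters here because both indices take only a handful of discrete values (0-4 and 0-20), so ties are the rule rather than the exception.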
Hypothesis Two: Social Interaction
H2: As social interaction increases, knowledge increases.
The second hypothesis was tested by comparing the knowledge
index scores to respondents' reported frequency of discussion of
news with family and friends. As shown in Table 5, data
demonstrate a significant relationship (Spearman Correlation =
.330, p<.001) between the two variables. Respondents who
discussed news with family and friends every day were seven times
more likely than those who never discussed news to score "very
high" on the knowledge index.
Table 5
Cross-Tabulation of Percentages of Respondents Reporting
Discussion of News With Friends or Family
By Knowledge Index Score

Knowledge        Never or   1-2 Days   3-4 Days    Nearly    Every
Index Score       Seldom    Per Week   Per Week   Every Day   Day
None (0.00)        19.4        6.5        0.9        1.0       --
Low (1.00)         19.4       14.1       12.7        4.9       8.7
Moderate (2.00)    29.0       25.0       32.7       16.5      15.4
High (3.00)        27.4       37.0       32.7       43.7      42.3
Very High (4.00)    4.8       17.4       20.9       34.0      33.7
                   100%       100%       100%       100%      100%
                  (n=62)     (n=92)    (n=110)    (n=103)   (n=104)

Chi-square = 87.38, df = 16 (p<.001). Spearman Corr = .330
(p<.001). Cramer's V = .215 (p<.001).
Question: How often do you discuss the news with your friends or
family?
Note: There are three missing cases.
A similar strong correlation (Spearman's = .343, p<.001)
also exists between frequency of discussion of news with friends
and family and aggregate media use. Figure 1 shows the
relationship between each pair of variables: media use, knowledge
and discussion of news with family and friends.
Figure 1
Spearman Correlations Between Paired Sets of Three Variables:
Discussion of News With Friends and Family,
Knowledge Index and Media Use Index
Note: p<.001 for all three correlations.
Discussion and Implications
Analysis of data from the Austin survey found support for
both research hypotheses. That is, knowledge is related to media use
and to social interaction. As media use increases, knowledge
increases and as social interaction increases, knowledge
increases. Both findings support theories about knowledge
previously advanced by scholars from the earliest contemporary
discussions of public knowledge of civic affairs.
Hyman and Sheatsley reasoned that there is a relationship
between knowledge of public affairs and information media. They
said "know-nothings" would soon improve their knowledge scores if
the information media were somehow "channeled into their. . ."25
When he looked at the decline in knowledge scores between
1967 and 1987, Bennett made a direct connection to media, saying
the "primary culprits for diminished political information are
diminution in political interest and lessened reliance on. . ."26
Findings relating to social interaction further validate
Popkin's theory of the reasoning voter. He used cognitive and
psychological research to conclude that conversations between
people enhance knowledge and reasoning. He said:
. . . voters actually do reason about parties, candidates, and
issues. They have premises and they use those premises to make
inferences from their observations of the world around them. .
.People use shortcuts which incorporate much political
information; they triangulate and validate their opinions in
conversations with people they trust and according to the
opinions of national figures whose judgments and positions they
have come to know.27 (Emphasis Added)
In addition, the research reported here found that the
public demonstrates a broader scope of knowledge when allowed to
respond to open-ended questions that do not presuppose a "right"
or a "wrong" answer. Such questions allow respondents to offer
information about issues and ideas that are most salient to them
as opposed to being bound by quizzes of purely factual
information. I do not believe that giving this broad discretion
to respondents contaminates this measure of knowledge. This
method merely gives individuals the opportunity to demonstrate
the knowledge they do have.
After a series of experiments, Geer concluded that
open-ended questions of this type, on balance, measure important
concerns of respondents. He says such questions do not
necessarily result in expression of superficial concerns, nor are
answers overly influenced by information the respondents recently
encountered.28
The variety of answers given in the Austin survey to the
questions about President Clinton's programs suggests the level of
thoughtfulness involved. Respondents could give any answer they
wanted to the two questions about Clinton's programs, and many did.
While "health reform" dominated the responses in both
categories,29 respondents named more than 18 different programs
as Clinton's "best" and more than 17 programs as his "worst."
In his book on mass opinion, Zaller says:
. . . citizens do not typically carry around in their heads fixed
attitudes on every issue on which a pollster may happen to
inquire; rather, they construct 'opinion statements' on the fly
as they confront each new issue. . . . in constructing their
opinion statements, people make greatest use of ideas that are,
for one reason or another, most immediately salient to them -- at
the 'top of the head.'30
The four open-ended questions used for the knowledge index
elicit just this type of "top of the head" response. Respondents
had to draw on their own store of knowledge to articulate a
response without prompting or prelude. For these reasons, it can
be argued that open-ended questions are valid measures of
knowledge, albeit conceptual rather than factual.
Finally, this study suggests that the level of knowledge
demonstrated in the 1994 Austin survey is much higher than that
demonstrated using a similar scale in 1947. The dramatic
differences between the Austin scores and the 1947 scores by
Hyman and Sheatsley warrant further comment. There is some basis
to view the two measures as relatively comparable. Both studies
use broad conceptual questions about current affairs instead of
relying on a fact-based civics-exam testing procedure. Both
studies also give credit for all correct answers.
The populations measured, however, may not be comparable.
Austin is the capital of Texas and the seat of state government,
and about one-fourth of its labor force makes a living in the
public arena as government employees. In addition, Austin has a
college and university population of more than 100,000 and is the
most highly educated community of its size in the United States.31
For these reasons, the findings may not be applicable to other
communities.
While this research is suggestive, it leaves a number of
unanswered questions that warrant further research. Although
this study shows a relationship between knowledge level and two
variables -- social interaction and media use -- further
statistical analysis could help determine whether they are
dependent on a fourth intervening variable, such as education.32
The single measure of social interaction used for this test
(discussion of news with family and friends) could be
strengthened by pairing it with other measures of social
interaction, such as membership in community groups.
If additional research replicates the findings in the Austin
survey, a more accurate picture of the electorate's knowledge of
public affairs may emerge to replace the distorted image of a
"know nothing" electorate.
1Key, V.O. (1966) The Responsible Electorate (Cambridge, Mass.:
Belknap Press of Harvard University Press), p. 7.
2 There is no relationship between this term and the bigoted
Know-Nothing Party (or "American Nativists") that espoused
anti-foreign and anti-Catholic sentiments in the mid 1800s.
3 Hyman, Herbert H. and Paul B. Sheatsley (1947), "Some Reasons
Why Information Campaigns Fail," Public Opinion Quarterly
11(3): 412-423.
4 Bennett, Stephen Earl (1988) "'Know-Nothings' Revisited: The
Meaning of Political Ignorance Today," Social Science
Quarterly 69(2):467-490, p. 476.
5Whitney, D. Charles and Ellen Wartella, "The Public as Dummies,"
Knowledge: Creation, Diffusion, Utilization 10(2):99-110,
6Tichenor, P.J., G.A. Donohue and C.N. Olien (1970) "Mass Media
Flow and Differential Growth in Knowledge," Public Opinion
Quarterly 34(2):159-170, p. 164.
7Neuman, W. Russell, Marion R. Just, Ann N. Crigler (1992) Common
Knowledge: News and the Construction of Political Meaning
(Chicago: University of Chicago Press), p. 13.
8Hyman and Sheatsley (1947), pp. 413-414.
9Lane, Robert E. and David O. Sears (1964) Public Opinion
(Englewood Cliffs, N.J.: Prentice-Hall, Inc.), p. 58.
10Glenn, Norval D. (1972) "The Distribution of Political
Knowledge in the United States" in Political Attitudes and
Public Opinion, Dan D. Nimmo and Charles M. Bonjean, Eds. (New
York: David McKay Company, Inc.), pp. 273-283.
11Bennett (1988), pp. 482-483.
12Bennett, Stephen Earl (1989) "Trends in Americans' Political
Information, 1967-1987" American Politics Quarterly 17(4):
13Neuman, W. Russell (1986) The Paradox of Mass Politics:
Knowledge and Opinion in the American Electorate (Cambridge:
Harvard University Press), p. 15.
14Carpini, Michael X. Delli and Scott Keeter (1991), "Stability
and Change in the U.S. Public's Knowledge of Politics," Public
Opinion Quarterly 55(4): 583-612.
15Carpini and Keeter (1991), p. 606.
16Lane and Sears (1964), p. 61.
17Lodge, Milton and Ruth Hamill (1986) "A Partisan Schema for
Political Information Processing" American Political Science
Review 80(2): 505-519.
18 Sampling Error = 4.5 percent.
19Keir, Gerry, Maxwell McCombs, Donald L. Shaw (1991) Advanced
Reporting: Beyond News Events (Prospect Heights, Ill.:
Waveland Press, Inc.).
20The sample contained an even distribution of males (n = 233)
and females (n = 240). Partisanship reflected a fairly even
distribution among Republicans (28 percent), Democrats (35
percent) and Independents (32 percent). Half the sample is
between the ages of 25 and 44. Almost half of those sampled
reported that they have at least a bachelor's degree. More than
60 percent reported annual household incomes of $30,000 or
more. Two variables -- race and education -- were out of range.
The sample proportion identifying themselves as Caucasian or
White was high and the proportion identifying themselves as
Latino or Hispanic was low. While many Texas Hispanics identify
themselves as Caucasian, the U.S. Census classifies them as
Hispanic/Latino. There also was an under-sampling of individuals
who had no college and an over-sampling of those with graduate
degrees. The difference between the sample and census data may
be explained by definitions used. The census numbers include
everyone 18 years old or older while the sample included only
heads-of-household. The latter definition would have excluded
high school students still living at home and senior citizens
living with adult children.
21 For the Clinton program questions, answers like "All" or
"None" did not count. To get credit, respondents had to
actually articulate a program by name or reference.
22Hyman and Sheatsley (1947), p. 414.
23Questions: "Would you describe yourself as 'Very Informed,'
'Somewhat Informed,' or 'A Little Informed' about: Foreign
Issues? National Issues? State Issues? Austin-Area Issues?"
Spearman's Correlations between answers to these questions
and respondents' scores on the Knowledge Index were as
follows: Foreign Issues, .319; National Issues, .367; State
Issues, .265; and Austin-Area Issues, .256. All were
significant at p<.001.
24The composite index does not include newsmagazine readership
because response categories were not compatible with those of
the other five media.
25Hyman and Sheatsley, p. 414.
26Bennett (1989), p. 432.
27 Popkin, Samuel L. (1991) The Reasoning Voter: Communication
and Persuasion in Presidential Campaigns (Chicago: University
of Chicago Press), p. 7.
28Geer, John G. (1991) "Do Open-Ended Questions Measure 'Salient'
Issues?" Public Opinion Quarterly 55: 360- 370.
29 In all, 163 respondents said "health reform" was Clinton's
best program while 156 said it was his worst program.
30Zaller, John R. (1992) The Nature and Origins of Mass Opinion
(New York: Cambridge University Press, 1992), p. 1.
31Austin Community Profile (1993) Greater Austin Chamber of
Commerce , p. 5.
32 For instance, Zaller (1992) has suggested that frequency of
political discussions with peers and self-reporting about
media use have little impact on news perception if the research
design controls for general awareness. (p. 44).
LIST OF TABLES
TABLE 1: Number of Respondents at Each Level of Knowledge
TABLE 2: Correlations of Knowledge Index To Reported
Frequency Use of Six Types of Media
TABLE 3: Number of Respondents at Each Level of Media Use
TABLE 4: Cross-Tabulation of Percentages of Respondents By
Scores On Media Use Index and Knowledge Index
TABLE 5: Cross-Tabulation of Percentages of Respondents
Reporting Discussion of News With Friends or
Family By Knowledge Index Score
LIST OF FIGURES
Figure 1: Spearman Correlations Between Paired Sets of Three
Variables: Discussion of News With Friends and
Family, Knowledge Index and Media Use Index.