Questions vs. Answers in the 1992 Presidential Debates:
A Content Analysis of Interviewing Styles
Carolyn B. Miller
Michigan State University
East Lansing, MI 48824
Paper Submitted to the Radio-Television Journalism Division
Association for Education in Journalism and Mass Communication
Carolyn Miller is a doctoral candidate in the Mass Media Ph.D. Program
in the College of Communication Arts and Sciences at Michigan State
University.
This content analysis compares the technical quality of
questions asked by journalists and non-journalists of presidential
candidates George Bush, Bill Clinton and Ross Perot during the 1992
Presidential Debates. It also examines candidates' answers to see if
technically better questions resulted in technically better answers.
Some of the more notable findings include that audience members were
twice as likely as print reporters to have their questions answered,
and that of the candidates, Bill Clinton provided direct answers to
questions twice as often as Ross Perot.
The definition of "better" applied was developed and tested by
researcher Ronald Ostman in his study of the relatedness of questions and
answers in President Kennedy's press conferences. Only those variables
found to have significance in the Ostman study were applied in this
one. A category called "frequency of subjects mentioned in questions and
answers" also was included to gauge whether candidates and interviewers
(journalists and non-journalists) had the same agenda during the debates.
The 1992 Presidential Debates were the first to use a variety of
interviewing formats to ask questions of the candidates. Questions posed
to candidates during the three presidential debates1 were generated by
panels of journalists, moderators, and an audience of voters. Often
while answering questions, even the candidates asked questions of each
other.
Increasingly, questions of elected officials are not left to
professionals. Soaring ratings for radio and television talk shows in
the six months preceding the 1992 election2 show a demand by candidates
and voters alike for "unfiltered conversation and opinion." What is
unknown is what effect the lack of a filter may have on the quality of
questions and answers. The Presidential debates presented a unique
opportunity to examine this phenomenon.
This study explores whether a particular method of asking
questions during the debates was more effective in getting complete
answers from the candidates. Through content analysis, the study also
examines the relatedness of question and answer -- did the candidate
answer the question, or did he avoid the question by changing the
subject? It also analyzes these answers in terms of who asked the
questions, comparing whether journalists or audience members were more
likely to get the answers they sought from the candidates.
It is important to note that this twofold examination of the
quality of questions and answers plus the content of questions and
answers is not necessarily correlated. While a question may be
technically good, it does not guarantee that the content presented in
the question will be addressed in the answer. Similarly, a poorly
constructed question may generate a good answer if correlation exists
between the content of each.
This comparison of interviewing styles is important from the
perspectives of both the audience and candidates. If one format
(moderator, panel, questions from audience) provides viewers with more
information than another, perhaps voters may want to reconsider use of
the filter - the journalist - for interviewing candidates for the
presidency. This study begins to explore what purpose that filter
serves by comparing the quality of questions and answers when it is used
to when it is not.
Review of Literature
Far more "how-to" books address interviewing skills than
research articles. A review of journalism research showed few mentions
of how the nature of asking questions affects the quality of answers. Of
existing research, most deals with just one side of the interviewing
equation. For example, Leon and Allen analyzed the "readability" of
answers given by presidential candidates in the 1988 Presidential
Debates to more accurately state which candidate "won."3 However, their
study did not explore the questions which generated the answers, thereby
not allowing for comparisons of context and content within the debates.
The only study specifically designed to look at both sides of
the interviewing equation is by Ostman, Babcock and Fallert,4 who
examined whether "good" questions elicit "good" answers.
The authors compared reporters' questions to answers given by
President John F. Kennedy in formal press conferences. In defining what
constitutes a "good" question, Ostman et al. purposively selected 16
criteria or "pointers" from three references commonly referred to by
journalists, and used to train journalism students in interviewing
skills.5 Each pointer was made a question-answer (Q-A) category for
content analysis purposes. As many of these Q-A categories are
incorporated into the current study, they are listed in detail below
(for definitions, see Measures).
Ostman Question/Answer Categories
1) Avoid words with double meanings.
2) Specify exactly the time.
3) Specify exactly the place.
4) Specify exactly the context.
5) Make explicit all alternatives, or make none of them explicit.
6) Unfamiliar or technical subjects ought to be prefaced with
explanations or illustrations.
7) Ask questions in terms of the respondent's own immediate and recent
experience rather than generalities.
8) It is often helpful to ask questions which elicit information about
how many facts a person has about a topic of interest.
9) It is often helpful to ask questions which elicit opinions and
attitudes of the respondent - what is thought or felt about a particular
subject at a particular point in time.
10) It is often helpful to ask questions which elicit the respondent's
evaluation of his or her own behavior or thoughts in relation to others.
11) Avoid "loaded" or "leading" questions (those which suggest to the
respondent the answer which the interviewer wants to hear).
12) Avoid questions which contain emotionally-charged words.
13) Avoid embarrassing questions, because they often lead to untrue
answers.
14) Adhere to the principles of good grammar when asking questions.
15) Avoid multi-part questions (which introduce more than one subject).
16) Avoid long questions.
In 13 of the above 16 Q-A categories, the authors rejected the
null hypotheses, finding a relationship between how the question was
asked and answered. This indicates that comparisons between the nature
of questions and answers are effective in analyzing content generated
from a political forum.
Research Questions
1. Did "professional" interviewers such as journalists ask "better"
questions than non-professional interviewers?
2. Did "professional" interviewers such as journalists get "better"
answers than non-professional interviewers?
3. Did print or broadcast reporters ask the best questions during the
debates?
4. Are presidential candidates more likely to answer or not answer
questions posed by professional interviewers or non-professional
interviewers?
5. Which candidate provided the "best" answers?
6. Which candidate was most likely to answer questions as posed?
The unit of analysis for this study was each question-answer
set. The context unit was each 90-minute debate in which all
question-answer sets occur. The purpose was to learn if "better"
questions provide "better" answers. To study what types of interviewers
ask "better" questions and get "better" answers, a determination of what
is "better" must be made.
To ensure the greatest reliability in an area receiving little
research to date, the current study defines "better" by applying many of
the question/answer categories defined, developed and tested by Ostman
et al. to the debate context. These categories are best thought of as
"tips" for asking and answering effective questions. The more categories
the interviewer adheres to in asking the question, the better. "Better"
answers are similarly defined. The more categories the respondent
applies in his answer, the better. Consequently, the categories are
written so "yes" answers are always "better" and "no" answers are always
"worse."
Some adaptations to Ostman's categories have been made to
provide a more appropriate fit with the current study. For example, the
debate study excludes two of three Ostman categories where no
relationship was found (grammar, facts) and includes one category
(loaded) where statistical significance was not met but the data were in
the direction of the research hypothesis. It also consolidates several
Ostman categories which did not appear to be mutually exclusive.
Besides the Ostman categories, this study looks at variables
based on characteristics of who asks (moderator, panel member or
audience member) and answers (Bush, Clinton, Perot) each question. It
also incorporates descriptive variables such as the gender and
affiliation (print or broadcast journalist, or audience member) of the
interviewer.
To better understand the extent to which questions and answers
relate, this study also includes a category concerning the dominant
topic of each question asked and answered. As some questions and answers
have more than one topic, this category allowed more than one topic
to be recorded.
After pretesting the instrument with the 1992 Vice
Presidential debate, the determination was made to limit the "answer"
portion of this study to the first answer given by a candidate. Although
candidates had at least one turn answering each question, only the first
respondent had two minutes to complete answers. Subsequent answers by
candidates were restricted to one minute or less, making comparisons
between the two types of answers inequitable (although there is no
evidence supporting that a "better" answer is delivered in two
minutes than one). At the very least, this limits the "time of answer"
threat to validity, and the percentage of answers generated in response
to another candidate's comments rather than the interviewer's question.
Content of each presidential debate was analyzed using two
coders. Content was obtained by taping the debates broadcast on Cable
News Network. Intercoder-reliability was 88 percent and was established
by having each coder cross code 25 percent (12 question-answer sets).
Reliability between subcategories ranged from 63 percent to 100
percent (identity of interviewer, order of respondents, use of double
meanings in questions and answers).6 In order to obtain intercoder
reliability of 88 percent, the Ostman category with the least amount of
reliability (63 percent) was dropped from the study. This category was
"Does the interviewer ask questions which elicit respondents'
self-perceptions?" The "answer" version of this category was "Does the
respondent use self-perceptions as part of his answer?" The low
reliability scores for the "self-perception" category may be explained
by its similarity to a broader category: "Does the interviewer ask
questions which elicit opinions and attitudes of the respondent?"
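The percent-agreement check described above can be sketched in a few lines. The sketch below is illustrative only: the codings for the two coders are invented, and only the size of the cross-coded subsample (12 question-answer sets) comes from the study.

```python
# Illustrative sketch of the percent-agreement reliability check: two
# coders independently code the same 25-percent subsample (here, 12
# question-answer sets on one yes/no category), and agreement is the
# share of coding decisions on which they match. Codings are invented.

def percent_agreement(coder_a, coder_b):
    """Return the proportion of items on which two coders agree."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings for 12 cross-coded question-answer sets.
coder_a = ["yes", "yes", "no", "yes", "no", "yes",
           "yes", "no", "yes", "yes", "no", "yes"]
coder_b = ["yes", "yes", "no", "yes", "yes", "yes",
           "yes", "no", "yes", "no", "no", "yes"]

print(round(percent_agreement(coder_a, coder_b), 2))  # prints 0.83
```

Agreement here is 10 of 12 decisions (83 percent); the study's reported figure of 88 percent was reached the same way, after dropping the least reliable category.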
Justification for each of the question-answer categories
included in this study is provided below. Recall that two categories
were found to have no significance (facts, grammar) in the Ostman study
and were subsequently dropped from this study.
In addition, two Ostman categories thought not to be mutually
exclusive were collapsed into narrower categories. These were "avoid
questions which contain emotionally-charged words" (combined with "avoid
loaded questions"); and "avoid leading questions" (combined with "make
explicit all alternatives, or make none of them explicit").
Question-answer categories "specify exactly the time," "specify exactly
the place," and "specify exactly the context" were combined into one
category, "be specific."
The Ostman category of "Avoid long questions" was dropped
because question and answer lengths were artificially limited by rules
of each debate format. Similarly, categories calling for interviewers
and respondents to "make all or none of the alternatives explicit" and
"avoid multi-part questions or answers" were slightly adjusted to
incorporate the time-sensitivity of a televised debate.7
The following categories were assigned a "yes" or "no" for each
question and answer. The more "yes" categories coded, the better the
question or answer, interviewer or respondent.
1. "Avoid double meanings" when asking or answering questions
because of increased likelihood of confusion. Double meanings can be
compared to "semantic noise," or "the cause of wrong interpretation of
messages."8 As Ostman et al. explain, the word "dope" can be
interpreted as 1) illegal drugs, 2) information, 3) an uninformed or
stupid person, 4) to plan a course of action, and 5) to figure something
out which formerly was a mystery.
2. "Be specific" means to avoid generalizations when asking or
answering questions. This category is designed to study whether specific
questions are more likely to generate specific answers. For example, a
specific question would not ask: "Do you support eliminating world
hunger?" Rather, it would state: "Do you support United Nations relief
efforts to Somalia?" Similarly, a specific answer would focus on the
United Nations' relief efforts to Somalia, rather than famine in
general. Also, specific questions and answers provide context by setting
the issue in place and time.
3. "Making all or none of the alternatives explicit" helps
control bias on the part of the interviewer. Because the debate format
made it impractical to "make all alternatives explicit," this
category was modified to say: "Does the interviewer avoid providing
potential answers as part of the question?" The corresponding "answer"
category was "Does the respondent avoid mentioning the various positions
that can be taken on an issue, rather than stating what position he
takes?" Candidates responding with a "better" answer take one position
per issue, rather than trying to appeal to all factions.
This category is important because if potential answers are
offered as part of questions, a respondent may select that answer
because he or she perceives it is what the interviewer wants to hear.
These questions are also called "leading questions" because they suggest to the
respondent a "socially acceptable" answer.
4. "Prefacing unfamiliar or technical subjects with explanations
or illustrations" gives each respondent the same base of information
from which to answer the question. Similarly, it helps ensure that
answers are understood by the general population, not just experts.
This category could be particularly helpful in improving voters'
comprehension of issues. "Better" questions and answers explain
unfamiliar and technical subjects so everyone may understand the context
of the discussion.
5. "Asking questions in terms of the respondent's own immediate
and recent experience rather than generalities" encourages respondents
to include timely developments in answers. For example, asking
President George Bush to describe what effect his son's role in the
Savings and Loan crisis had on his campaign is likely to yield a more
tailored response than asking how he feels about banking. This differs
from category two as it focuses on the respondent rather than the
topic of the question.
6. "It often is helpful to ask questions which elicit opinions
and attitudes of the respondent - what is thought or felt about a
particular subject at a particular point in time." A good example would
be questions that begin by stating: "What is your opinion on...?" or
"How do you feel about...?" Likewise, "better" answers are those which
include opinions on a particular subject. This differs from category
five because it deals with opinions and attitudes of respondents, which
may be less factual or verifiable than experiences.
7. "Avoid 'loaded' questions and answers." Loaded questions use
words with emotional connotations and which state one premise and ignore
others.9 Loaded questions "set up" the respondents by favoring one type
of response over another. Likewise, loaded answers may "set up" other
candidates, thereby reflecting the question rather than answering it.
For example, a loaded question would state: "Mr. President,
don't you think it is pathetic that no country has lifted a finger to
help the people of Bosnia?" Conversely, a non-loaded question would ask:
"Mr. President, how would you compare the United States' foreign
assistance package to Bosnia to that of other nations?"
8. "Avoid embarrassing questions because they often lead to
untrue answers." An embarrassing question makes a person ill at ease,
self-conscious or uncomfortable. According to Ostman, a distinguishing
characteristic of such questions is a personal reference to the
respondent in a challenging or accusatory context. This normally refers
to shame, violation of commonly accepted social norms or rules, or to
behavior or attitudes normally considered personal or private. Answers
to embarrassing questions often are hesitant, confused, disorganized,
and reflect obstructed thoughts and logic.10
This category has been revised slightly to provide consistency
within a coding format that assigns "yes" answers to "better" questions
and answers. Therefore, the "answer" version of this category is stated:
"Does the candidate avoid changing the subject when an embarrassing
question is asked?" This assumes that a "better" answer would tackle an
embarrassing question, perhaps hoping to set the record straight, rather
than avoiding it and allowing misconceptions to remain.
9. "Avoid multi-part questions." These are questions which
introduce more than one subject. A two or three-part question or answer
on a single subject is not a multi-part question or answer. Rather, a
multi-part question or answer combines subjects, such as health care and
law enforcement, or the environment and child care. Stressing a single
element in a question gives a respondent less leeway in answering the
question, thereby making the answer more direct. An example of a
multi-part question is: "Should the United States send troops to protect
the Kurds in Northern Iraq and enforce United Nations relief efforts?"
10. The final measure compared the topic(s) of each question to
that of each answer. Each question had between one and five topics.
Topics were listed by coders in the order which they were mentioned by
either the interviewer or respondent. Topics included the following
categories: 1) family values; 2) budget deficit; 3) taxes; 4) inflation;
5) unemployment; 6) welfare; 7) health insurance; 8) health care;
9) environment; 10) vice presidents; 11) U.S. military spending; 12) past
military service of candidates; 13) North American Free Trade Agreement;
14) education; 15) U.S. disasters; 16) campaign process; 17) special
interest groups; 18) foreign affairs; 19) abortion; 20) law enforcement;
21) Congress; 22) trustworthiness; 23) change; 24) banking; 25) women and
minorities; and 26) business and consumer affairs. Definitions for
topics are provided in Appendix I.
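As a summary of the coding instrument, the scoring rule above (the more "yes" categories coded, the better the question or answer) can be sketched as follows. The category labels are shorthand paraphrases of the nine Ostman-derived measures listed above; the function name and example codes are invented for illustration.

```python
# Minimal sketch (names assumed) of how a question's "quality" could be
# tallied under the coding scheme above: each Ostman-derived category is
# coded "yes" or "no", and the score is the share of "yes" codes -- the
# more categories satisfied, the "better" the question or answer.

CATEGORIES = [
    "avoid double meanings",
    "be specific",
    "avoid providing potential answers",
    "preface technical subjects",
    "cite recent experience",
    "elicit opinions and attitudes",
    "avoid loaded questions",
    "avoid embarrassing questions",
    "avoid multi-part questions",
]

def quality_score(codes):
    """codes: dict mapping category name -> 'yes' or 'no'."""
    yes = sum(1 for c in CATEGORIES if codes.get(c) == "yes")
    return yes / len(CATEGORIES)

# A hypothetical question satisfying seven of the nine categories:
codes = {c: "yes" for c in CATEGORIES}
codes["avoid loaded questions"] = "no"
codes["avoid multi-part questions"] = "no"
print(f"{quality_score(codes):.0%}")  # prints "78%"
```

Averaging such scores over all questions asked by a group is how per-group percentages like those reported for moderators, panelists, and audience members can be produced.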
A total of 48 question-answer sets were recorded during the
three Presidential debates; 17 in the first debate, 15 in the second and
16 in the third. The first debate used a moderator/panel format; the
second used a moderator/audience format, and the third debate was
divided between a moderator and panel format.
It was expected that journalists would do a better job than
non-journalists in asking questions of the candidates. This result was
anticipated because journalists generally have more training and
experience in asking questions than non-journalists. This expectation
was supported by the data, just as it was in the Ostman study. Cramér's
V, a measure of the relationship's strength, ranged from .11 (avoid
providing potential answers in questions) to .49 (avoid double meanings).
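Cramér's V, used above to gauge the strength of these relationships, is a standard chi-square-based association measure for contingency tables. The sketch below shows how it is computed; the 2x2 table of counts is hypothetical, not data from the study.

```python
from math import sqrt

# Cramér's V from a contingency table: V = sqrt(chi2 / (n * (k - 1))),
# where k is the smaller table dimension. Here rows are interviewer type
# (journalist / non-journalist) and columns are whether the question was
# coded "yes" on a category. Counts below are invented for illustration.

def cramers_v(table):
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0]))  # smaller table dimension
    return sqrt(chi2 / (n * (k - 1)))

# Hypothetical: 30 of 36 journalist questions vs. 5 of 12 audience
# questions coded "yes" on some category.
table = [[30, 6], [5, 7]]
print(round(cramers_v(table), 2))  # prints 0.41
```

V ranges from 0 (no association) to 1 (perfect association), so the study's values of .11 to .49 span weak to moderately strong relationships between interviewer type and question quality.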
Journalists did a "better" job of asking questions than
non-journalists 77 percent of the time, or in seven of nine Ostman
categories. The moderators of the debates asked the best questions, with
an average score of 84 percent.11 The average for panel members was 71
percent, while audience members averaged 62.4 percent.
[Insert Table One About Here]
Moderators were most likely to ask "better" questions, scoring
100 percent in the categories of "avoid double meanings," "cite recent
experience," "avoid embarrassing questions," and "elicit opinions and
attitudes" (panel and audience members also scored 100 percent in the
latter category). In two categories, moderators asked worse questions
than panel members but better questions than audience members, those
being "is the question specific" and "avoid providing potential answers."
Panel members asked the best questions in the categories of
"avoid double meanings" and "be specific." Of the three groups of
interviewers, panel members were the most likely to ask RloadedS
questions and questions phrased to embarrass the candidates.
Audience members appeared to have the most trouble in citing
recent experience of candidates in questions, doing so in only 30
percent of the questions. They also asked questions combining a number
of topics at least twice as often as their professional counterparts.
It is interesting to note that audience members scored better
than journalists only in the categories of "avoid loaded questions" and
"avoid embarrassing questions." Arguably, these are the hardest types
of questions to ask, and therefore the audience's success in these areas
may be due more to fear of offending the candidates than to a concern
for asking good questions. Based on the data, journalists at the debate
appear to have no such fear.
[Insert Table 2 About Here]
Print reporters were most likely to ask questions coded either
as "loaded" or "embarrassing." It is interesting to note that while
print and broadcast journalists were relatively similar in their
tendency to ask "loaded" questions, print reporters were twice as likely
to ask "embarrassing" questions.
Print reporters also were more likely to "avoid providing
potential answers" as part of their questions, scoring significantly
higher than either broadcast reporters or audience members. While they
were slightly less "specific" than broadcast reporters, they were
slightly more likely to "cite recent experience of the candidates" in
questions.
According to Ostman's study on Kennedy Press Conferences,
"better" questions do yield "better" answers. A comparison of questions
to answers in this study shows similar results.
[Insert Table 3 About Here]
As seen in Table 3, candidates generally gave "better" answers
to "better" questions. For example, all respondents included opinions in
answers to questions designed to elicit opinions.
Clinton's answers were "better" than Bush's or Perot's in the
majority of categories, including "avoid double meanings," "be
specific," "provide direct answers," "cite recent experience," and
"preface technical answers with illustrations or examples." Bush was
slightly more likely than Clinton to "avoid loaded answers," although
much more likely than Perot. Perot gave "best" answers exclusively in
only one category, "citing recent experience" in answers.
The two categories where candidates gave significantly "worse"
answers than questions were "avoid changing the subject when asked
embarrassing questions," and "avoid including more than one subject in
answers."
Bush answered embarrassing questions in 15 percent of the cases,
while Clinton answered only 6.7 percent of questions coded as
"embarrassing." Perot was not asked any embarrassing questions based on
the Ostman definition. However, Perot was the most likely of the three
candidates to use answers containing more than one subject, doing so in
71 percent of his answers. Bush discussed more than one subject in
63 percent of his answers, while Clinton did so in 46 percent of his
answers.
It is important to note that some questions contained more than
one subject, although this was half as likely as answers containing more
than one subject. Of 48 total questions, 17 (35 percent) had at least
two subjects. However, 33 (69 percent) of 48 total answers
had at least two subjects.
It could be argued that candidates discussing more than one
subject were actually providing "better" answers to a multi-subject
question, even though these answers were not considered "better" by
the Ostman model. Therefore, an additional variable was introduced to
measure whether the question was answered at all, exclusively, or as
part of a sequence of subjects. "Sequence of subjects" is defined as
speaking to the subject of the question, but also including unrelated
topics in the answer.
[Insert Table 4 About Here]
Clinton was twice as likely as Perot to limit his
answers to the topic of questions. Clinton answered just the question
posed by interviewers 73 percent of the time, meaning he did not include
subjects other than those presented in questions. Clinton provided
answers including other subjects for 20 percent of the questions, and
did not answer about 7 percent of the questions.
Perot was least likely to answer questions, changing the subject
in 21.4 percent of the questions. Bush was just slightly behind Perot in
this category, not answering 21.1 percent of questions. However, Bush
was much more likely than Perot to provide direct answers to questions.
Perot was most likely to answer questions as part of a sequence of
subjects, doing so 43 percent of the time. Bush was least likely to use
this method of answering questions.
Even though audience members did not ask better questions than
journalists, they were almost twice as likely to have the content of
their questions addressed directly by candidates.
[Insert Table 5 About Here]
Candidates gave direct answers to 85 percent of questions posed
by audience members, compared to 50 percent posed by broadcast
journalists and 46 percent posed by print journalists.
Print journalists were least likely to have their questions
answered by candidates. While broadcast journalists had better luck in
getting answers, one third of the answers to broadcast journalists'
questions were provided in sequenced answers.
Finally, it is interesting to explore the correlation of
subjects in questions to answers. In several cases, it seemed
interviewers and respondents had different agendas.
[Insert Table 6 About Here]
Table 6 compares the frequency of subjects mentioned in debate
questions and answers. Candidates discussed taxes almost twice as often
in their answers as interviewers did in questions. Candidates also were
much more willing than interviewers to talk about Congress, the campaign
process, special interest groups, law enforcement and crime, and
trustworthiness. Conversely, topics such as "change," "vice presidents,"
and "abortion," all receiving tremendous news coverage during the
primary race, were hardly mentioned by either side during the debates.
In general, interviewers appeared to want a more evenly
distributed subject range than candidates. The most striking discrepancy
was in the area of "education"; interviewers brought up "education"
issues more than twice as often as candidates.
This study seems to show that "better" questions do generate
"better" answers. However, it is important to note that "better"
questions and answers were those fitting the Ostman definition of
"better." When "better" is defined by whether the topic of the
question matched that of the answer, a different pattern emerges. In
those cases, non-professional interviewers appeared to be much more
effective in getting straight answers to their questions.
It seems that a "technical" definition of better such as the
Ostman model is insufficient to predict how often questions will be
answered. While it is interesting that the data support the concept of
"technically better" questions invoking technically "better" answers,
the bottom line is whether or not the question is answered. Future
research might address why candidates preferred to answer voters'
questions more often than journalists'. One explanation might be that
voters' questions were easier to answer than journalists', so candidates
answered the easy questions and side-stepped the more difficult ones.
It is not surprising that moderators asked the "best" questions
of any study group. They had more opportunities to pose questions to
the candidates which alleviated the temptation to bundle too much
information into a single question. Nor is it surprising that audience
members asked the "worst" questions. Voters given one chance to speak
directly to the current or future president of the United States would
want to make the most of the experience. This would explain why they
tended to combine at least two subjects in their questions, in some
cases as many as four.
As professional interviewers, most journalists have learned to
ask questions which might frighten or intimidate the "average" person.
This explains why journalists were much more likely than audience
members to ask questions defined as "loaded" and "embarrassing." While
print and broadcast journalists were relatively similar in their
tendency to ask "loaded" questions, print reporters were twice as likely
to ask "embarrassing" questions. This may be a function of print
reporters' need to go beyond the surface in reporting and writing; to
provide details and color as part of their news analysis role.
"Citing recent experience" assumes a knowledge of the latest
developments in an area. It is not surprising, then, that journalists
were more likely to "cite recent experience" in questions. In their
reporting roles, journalists have access to more information than the
average reader. It is predictable that they would use this knowledge in
formulating their questions.
As print reporters generally produce longer and more detailed
stories than broadcast reporters, it is not surprising that they were
the most likely of the study groups to "cite recent experience" in
questions. Conversely, broadcast journalists' news stories are shorter
and more time sensitive, perhaps forcing them to "be specific" more
often than print reporters.
It is interesting that the candidate who provided "better"
answers in a majority of categories was elected President of the United
States. Of the candidates, Clinton also was most likely to address the
content of questions by providing direct answers. Even though Clinton
was the most likely of the three candidates to not answer questions
coded "embarrassing," he gave direct answers to non-embarrassing
questions more than any other candidate. As did other candidates, he
seemed most willing to answer questions posed by audience members.
However, it is important to note that Clinton and Perot were
able to answer questions based on what they would do if elected
president, rather than what they had done as president over the past
four years. The necessity for Bush to often answer questions based on
his presidential record rather than his platform may have made it more
difficult for him to provide "better" answers.
In several cases, respondents wanted to make more of issues
than did interviewers. "Taxes" and "Congress" were the favorite themes
of candidates, who may have hoped to explain the economy by passing the
buck. Candidates also appeared to blame the political process and the
media, referring to them twice as often as mentioned in questions. It is
unknown why candidates would downplay "education" in answers unless they
perceived it to hold less importance than economic and health care
issues.
The question remains why candidates gave more direct answers to
questions posed by audience members than journalists, when both groups
are voters. In terms of addressing issues, it didn't seem to matter to
candidates that audience questions were technically "worse" than
journalists'. It is possible that candidates viewed audience members as
being less threatening than journalists, or that candidates felt
television viewers would associate with audience members more than
journalists. This could cause candidates to "go out of their way" to
answer the questions from the audience.
It also is possible that candidates enjoyed talking with
audience members more than with the media. Recall George Bush's
favorite bumper sticker: "Annoy the Media--Re-elect Bush." If this is
the case, it seems candidates least favored print journalists, whose
questions were the least likely to be answered.
The Freedom Forum Media Studies Center at Columbia University
termed the 1992 Presidential election "the season of uncertainty." This
uncertainty is partly due to new competition media face from an unlikely
source: readers and viewers. Michael Deaver, a former adviser to Ronald
Reagan, and Rolling Stone reporter William Greider argued that "the
rise of talk shows reflects a backlash against the 'establishment
press.'"12 Greider believes that the public has come to resent "elite"
media coverage of "elite" politicians in a year when voters are in a
"decidedly anti-establishment mood."
Question difficulty aside, this study cannot explain why
candidates were much more likely to answer questions posed by
non-journalists than by journalists during the presidential debates.
However, it does undercut the argument that only skilled journalists who
ask "better" questions can get straight answers. When asked to explain
the direct-access phenomenon that characterized the 1992 Presidential
election, CNN talk show host Larry King offered one explanation: "How
you do it is what counts. Most reporters are incapable of asking simple,
Perhaps then, the uncertainty lies in the technical definition
of "better." It may well be that Ostman's model was well suited to the
Kennedy era, but outdated in a time of cable, electronic town halls,
video cassette recorders, and candidate 800 numbers. Since the audience
seemed to have all the answers, a survey may be a good place to start.
Definitions of question/answer topics used in the study.
Family values = Any mention of families and the struggles they face. A
family can be any collection of people living under the same roof. This
category should be recorded if the words "family values" or "the
American family" are stated in the question or answer.
Budget deficit = Referral to the national deficit in the federal budget.
This will be in "trillions" of dollars.
Taxes = What Americans pay on income, property and items purchased.
Often mentioned in reference to income brackets, meaning how much a
person earns from various sources.
Inflation = Refers to a general increase in the prices of goods and
services.
Unemployment = The number of people who are out of work. This includes
those who have been temporarily and seasonally displaced as well as
those affected by long-term joblessness.
Welfare = Government-provided support for those unable to support
themselves. This category includes food stamps, Aid to Families With
Dependent Children, General Assistance, and Social Security. Issues on
Medicare and Medicaid are contained under the heading "health
insurance."
Health insurance= Payment plans designed to cover costs associated with
health care. Commonly mentioned plans include Medicare and Medicaid,
Blue-Cross Blue-Shield, and Health Maintenance Organizations (HMOs).
This category should be recorded if health insurance, or the lack
thereof, is mentioned (specific plans need not be named).
Health care= Care provided by hospitals and other organizations. This
category also refers to any research designed to improve care. This
would include (but is not limited to) mention of research referring to
cancer, heart disease and AIDS.
Environment = Refers to mention of anything living in our world that is
not human. This also refers to the oceans and ground soil. It also
refers to any chemicals or industries that rely on the environment, such
as the timber industry. Also includes mention of the Environmental
Protection Agency and any special interest groups concerned with
environmental issues, such as Greenpeace.
Vice Presidents = Any mention of Dan Quayle, Albert Gore, or James
Stockdale.
U.S. military spending= Refers to how much money the U.S. government has
spent, is spending or will spend on military expenses. This can include
what the money was used for, how it was spent, why it was necessary, who
spent it, and where it was spent.
Experience of candidates= Refers to the past professional and personal
experience of candidates. May include military service, business
decisions, and personal issues, such as marital infidelities.
North American Free Trade Agreement= Refers to the agreement which would
open the Mexican and Canadian borders for increased importing and
exporting with the United States.
Education= Refers to any mention of preschool, K-12, higher education or
continuing education. This could include subjects such as paying for
college, job training or retraining, and research grants to
U.S. disasters = Refers to disasters taking place in the United States
since January 1992. Includes the Los Angeles riots, the hurricanes in
Florida and Hawaii, and the Chicago floods, as well as follow-up efforts
to those disasters.
Campaign process= Refers to the political selection process leading up
to the election, including primaries, campaign speeches and stops, and
debates. Also refers to the media, including any news organization or
journalist covering any activity related to the election, from the
primaries to campaigning and debates. Also includes any mention of the
coverage of issues as fair or unfair, or biased or unbiased.
Special Interest Groups = Any group organized to promote a specific
agenda or goal. This category also includes lobbyists, those people who
work to have their positions included in federal legislation.
Foreign Affairs = Focus on activities that have taken place, are taking
place, or will take place
outside of the United States. These activities may or may not directly
involve the United States. They may include: the Persian Gulf War; deals
involving arms for hostages; Soviet Union breakup; China; South Africa;
Bosnia; Germany; Nicaragua; and Japan. This category also includes
international business, and exports.
Abortion = Refers to any mention of pending legislation, state laws,
court cases, demonstrations, case histories, religious concerns, or the
actual medical process of abortion. Commonly used phrases include
"pro-life," "pro-choice," "anti-abortion" and "a woman's right to
choose."
Law enforcement= Refers to any mention of organizations protecting and
enforcing federal, state, or city laws. Can also include mentions of
police organizations charged with not doing their jobs. Can include law
enforcement from the FBI through state and local police. Also includes
investigations by these organizations and funds allocated for them. Also
includes mention of any crime. Crime is defined as any illegal activity
which would be pursued by law enforcement officials, such as drug deals.
Congress= Refers to the legislative branch of the United States federal
government, composed of the House of Representatives and the Senate.
This category refers to any mention of Congressional activity, from
votes to bank scandals. This category also refers to mention of any
particular member of Congress.
Trustworthiness= Any mention of placing one's belief and confidence in
the ability of a candidate.
Change = Any mention of a plan to make the future different from the
past or present. Also includes why such an alteration may or may not be
a good idea.
Banking= Any reference to the Savings and Loan Crisis, the fate of
commercial banking in the United States and overseas, credit, savings,
domestic and international loans, currency value, and interest rates.
Women and minorities=Refers to the roles women and minorities play in
the work force, education, government, the family, and the military, as
well as health resources and programs directed at women and minorities.
Minorities include anyone not of Caucasian descent.
Business and consumer affairs = Refers to Wall Street and the stock
market, small and privately-owned businesses, consumer protection, U.S.
industry, exports and imports (other than those covered through the
North American Free Trade Agreement), union and labor issues, and
employee safety and retraining.
1The three Presidential Debates were held October 12, 15 and 19, 1992.
2Peter Viles, "Talk Radio Riding High," Broadcasting, June 15, 1992, p.
3Mary-Ann Leon and T. Harell Allen, "Improving Political Campaign
Reporting: The Use of Precision Journalism in the 1988 Presidential
Debate," Mass Communication Review, 17:3 (1990), 14-22.
4Ronald E. Ostman, William A. Babcock and J. Cecilia Fallert, "Relation
of Questions and Answers in Kennedy's Press Conferences," Journalism
Quarterly, 58:4 (1981), 575-581.
5Interviewing sources used by the authors were: Maxwell McCombs, Donald
Lewis Shaw and David Grey, Handbook of Reporting Methods (Boston:
Houghton Mifflin Company, 1976); Charles H. Backstrom and Gerald D.
Hursh, Survey Research (Evanston, Ill.: Northwestern University Press,
1963); Eugene J. Webb and Jerry R. Salancik, "The Interview or The Only
Wheel in Town," Journalism Monographs 2:11 (1966).
6Intra-coder reliability, expressed as Scott's Phi, was as follows:
Avoid double meanings in questions = 100; Be specific = 88; Make all or
none of the alternatives explicit in questions = 83; Preface unfamiliar
or technical subjects with explanations or illustrations = 92; Ask
questions in terms of respondent's immediate and recent experience = 92;
Elicit opinions and attitudes of respondent = 88; Avoid loaded
questions = 71; Avoid embarrassing questions = 79; Avoid multi-part
questions = 79; Topic of question = 96; Avoid double meanings in
answer = 100; Be specific in answer = 92; Make all or none of the
alternatives explicit in answer = 88; Preface unfamiliar or technical
subjects in answer with explanations or illustrations = 96; Cite
immediate and recent experience in answer = 83; Provide opinions and
attitudes in answer = 88; Avoid loaded answers = 75; Answer embarrassing
questions = 79; Avoid multi-subject answers = 88; Topic of
answer = 100. These values reflect the recalculation of Scott's Phi
after one category with low reliability was dropped from the study.
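The coefficient reported above (commonly written as Scott's pi) compares observed coding agreement against the agreement expected by chance, given the pooled category proportions. As a minimal sketch of the calculation, using hypothetical codings rather than the study's actual data:

```python
# Scott's pi: reliability for nominal-category coding decisions.
# pi = (Po - Pe) / (1 - Pe), where Po is the observed proportion of
# agreement and Pe is the agreement expected by chance, based on the
# pooled proportion of each category across both coding passes.
# The codings below are hypothetical, for illustration only.

def scotts_pi(coding1, coding2):
    n = len(coding1)
    # Observed agreement: share of units coded identically both times.
    po = sum(a == b for a, b in zip(coding1, coding2)) / n
    # Expected agreement: sum of squared pooled category proportions.
    pooled = list(coding1) + list(coding2)
    props = {c: pooled.count(c) / (2 * n) for c in set(pooled)}
    pe = sum(p * p for p in props.values())
    return (po - pe) / (1 - pe)

first_pass  = ["direct", "direct", "evasive", "direct", "evasive"]
second_pass = ["direct", "direct", "evasive", "evasive", "evasive"]
print(round(scotts_pi(first_pass, second_pass), 2))  # 0.6
```

With four of five units coded the same way twice (Po = .80) and equal pooled proportions of the two categories (Pe = .50), the coefficient is .60; perfect agreement yields 1.0.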
7The Ostman study of Kennedy press conferences analyzed questions and
answers over more than a 12-month period. There likely was more time
during a press conference to "make all alternatives explicit," a luxury
that is impractical during a 90-minute debate. Also, because members of
the media or audience had few opportunities to ask questions of the
candidates, it is likely that they would want to fit more into each
question, thereby asking the multi-part questions that "better"
interviewers avoid. Similarly, candidates may have felt that they needed
to convey a lot of information in a small amount of time, making
multi-part answers more a necessity than an error.
8Colin Cherry, On Human Communication (New York: Science Editions, Inc.,
1961), p. 240.
9Ostman, op. cit., p. 576.
11This average was computed by summing the scores on the criteria
designed to measure "better" questions and dividing by nine, the number
of criteria.
12Dirk Smillie, "Talking to America: The Rise of Talk Shows in the '92
Campaign," in An Uncertain Season: Reporting in the Postprimary Period
(New York: Columbia University, 1992), p. 23.
13Ibid., p. 26.