Population Validity and Subject Selection Bias in Eight Marketing and
Mass Communication Journals: A Critical Review
Dennis T. Lowry and Katherine H. Sundararaman
Southern Illinois University Carbondale
A probability sample of 508 empirical articles from four prestigious
marketing journals and four prestigious mass communication journals from
1991 through 2000 was evaluated to determine the population validity and
subject selection bias of the studies. Both disciplines used a
preponderance (59.3%) of non-probability samples, and one-third of the
articles did not report sufficient sampling procedures to permit
replication. A majority of studies did not report sampling completion
rates; only 3.4% of the studies reported margin of error information. Mass
communication articles relied more heavily on student samples than did
marketing articles (34.4% vs. 19.3%). However, mass communication articles
were significantly more likely to report geographic location, education
levels, and sex of samples. Overall, both disciplines have room for improvement.
Population Validity and Subject Selection Bias in Eight Marketing and
Mass Communication Journals: A Critical Review
This study investigates two separate critical concerns pertaining to the
quality of research published in major journals in the disciplines of
marketing and mass communication. These concerns are the extent to which
the scholarly articles in these disciplines demonstrate sufficient
population validity, and also the extent to which they, collectively, are
weakened due to subject selection bias.
In the social sciences, the Publication Manual of the American
Psychological Association (2001) is probably more influential than any
other book in establishing guidelines for what should be reported in the
methods sections of scholarly articles. According to this manual, the
method section of a scholarly article should report sufficient details to
allow "the reader to evaluate the appropriateness of your methods and the
reliability and validity of your results. It also permits experienced
investigators to replicate the study if they so desire" (p. 17).
Why is this important? According to Riffe, Lacy and Fico (1998),
replicability is a "defining trait of science" (p. 105). Without
replication, the arguments against one's conclusions would be so
overwhelming as to render the findings meaningless (Abelson, 1995). Hence,
reliability, validity, and the ability to replicate research are central to
the advancement of scholarship in marketing and mass communication.
This importance is corroborated by the manuscript submission guidelines of
many marketing and mass communication journals. For example, the (then)
editor of Journalism & Mass Communication Quarterly recommended documenting
all steps taken so that the research could be replicated (Folkerts,
1996). The Journal of Marketing Research's editorial policy specifically
states "Empirical research should be reported in sufficient detail that
readers can evaluate and replicate the methodology" ("Editorial policies,"
1995, p. iv). How well do the journals in these two disciplines live up to these standards?
Furthermore, when marketing or mass communication studies use human
participants or subjects:
Appropriate identification . . . is critical to the science and practice of
psychology, particularly for assessing the results (making comparisons
across groups); generalizing the findings; and making comparisons in
replications, literature reviews, or secondary data analyses. The sample
should be adequately described. (American Psychological Association, 2001, p. 18)
Authors should report such characteristics as sex, age, race, level of
education, and other relevant variables. How well do the journals in these
two disciplines live up to these reporting standards?
Although the replicability of a study is clearly of importance, it is not
the primary reason for research. As the initial chapters of social
research textbooks point out, the primary goal of scholarly research is to
explain and understand human behavior. If this is the case, the
populations used in social research hold a pivotal role in the
interpretation of a study's results; to what extent can the results be
interpreted in terms of larger populations?
This question addresses a concept known as population validity, or the
degree to which the study's sample represents the population of interest
(Bracht & Glass, 1968). Population validity is an extension of the more
generally known concept of external validity. External validity is defined
as the degree of generalizability of a study, or more specifically, how
much a study's results can be interpreted in terms of a larger population,
setting, treatment variable or measurement variable (Campbell & Stanley,
1963). In other words, how well does a particular study represent "the
bigger picture," both in terms of the population and the environment?
In the past, the concept of population validity has been discussed
primarily within the context of experimental studies. However, the concept
can be logically extended in varying degrees to all types of empirical
social research. Any research that intends to be interpreted beyond the
specific sample used in a study invites a critical analysis of its population validity.
Unfortunately, population validity of scholarly research is not often
clearly demonstrated. As Lowry (1979) found in his seven-year review of
seven communication journals, 54% of studies did not clearly state what
type of sample was used; 60% did not specify the population or the intended
universe; and 78% did not specify their sampling frames. Although all of
the reviewed studies may have maintained strong population validity, this
cannot be assumed. Without presenting evidence, a study has no
demonstrated population validity (Lowry, 1979).
Subject Selection Bias
Even when sample and population details are provided, a study's population
validity may be suspect. A widely used practice in university research
(mostly for reasons involving time and money) that directly affects
population validity is the use of students as research subjects (Babbie,
1988; Judd, Smith, & Kidder, 1991). More than a half century ago, this
common practice was criticized as being too narrow a way to study
behavioral phenomena. As McNemar (1946) put it, "The existing science of
human behavior is largely the science of the behavior of sophomores" (p.
333). Furthermore, Rosenthal and Rosnow (1975) stated that, "McNemar's
observation may now seem too conservative an assessment. A science of
informed and organized volunteer sophomores may be on the horizon" (p. 127).
Many researchers believe the main problem with using the college student
population for scholarly research is that students do not represent a valid
substitute for more general populations. Students vary from their adult
counterparts in terms of maturity, life experience, intellectual
stimulation and curiosity, and leisure time activities (Carlson, 1971). In
addition, they are easily influenced, have less defined social and
political opinions, and generally have an underdeveloped sense of self
(Sears, 1986). All of these factors could potentially render a sample of
students different from a broader population in ways meaningful to the study's results.
Some academic journals explicitly state their bias against articles with
student samples (cf. "Note to Contributors," 1997). According to former
Journal of Advertising Research editor Arthur Kover, "we do not usually
accept articles that use student samples to explain behavior of other
people" (1998, p. 5).
The field of marketing is not exempt from this concern. When examining the
use of students in marketing research, Burnett and Dunne (1986) found that
student samples differed from adult samples in their responses in every
category examined. The researchers concluded that using a student sample to
represent a general population would be misleading. Cunningham,
Anderson and Murphy (1974) noted sociopsychological differences between
student samples and household subjects when conducting marketing
research. These differences manifested themselves in the students' and
adults' purchasing decisions and product information needs, key factors in
examining new product marketability.
Although the above evidence is compelling, can we unilaterally condemn the
practice of using students as subjects? No. Some studies found that
students could indeed be used as substitutes for certain populations, but
only under certain conditions. Oakes (1972) explained that, when
behavioral phenomena do not interact with subject characteristics, student
subjects can be appropriate. In that case, the population used in research
has no bearing on external validity, because demographic and
psychographic characteristics are irrelevant to the research
purpose. In a study examining the use of students as substitutes for
businessmen in marketing research, Khera and Benson (1970) found that when
students have a background in the task at hand, they can be good
substitutes. For instance, when evaluating a speaker and the quality of a
presentation, an ability most individuals master at a very young age,
students had similar responses to the businessmen. For tasks in which
students did not have previous experience, there were significant
differences between students and businessmen. As a result, researchers
suggest the use of student samples should be limited to investigating
hypotheses dealing specifically with student populations. If the findings
are generalized to broader populations, extreme caution must be exercised
(Browne & Brown, 1993).
Other studies examining the differences between student samples and adult
samples are inconclusive (Enis, Cox, & Stafford, 1972; Shuptrine,
1975). These studies found significant differences between the student
respondents and the adult respondents, but identified no discernible
pattern marking one group or the other as the more suitable research
participants. Regardless, both studies cautioned against the use of
student samples. For various reasons generally involving the factors of
time and money, however, many marketing and communication researchers
persist in the use of students as subjects.
Regardless of the prudence of using students as subjects, the meta-analyses
on the use of students are in agreement on this finding: the majority of
social science articles reviewed did indeed use college students as
research subjects. It is a very common practice. In a review of the
Journal of Personality Assessment spanning 60 years (1937-1997),
researchers found that 57% of research subjects in sampled JPA articles
were undergraduate college students (Holaday & Boucher, 1999). Korn and
Bram (1988) found that 83.2% of sampled articles in three psychology
journals used college students (undergraduates and graduate students) as
subjects. Sears (1986) noted that 82% of sampled articles in three
psychology journals used college students, 75% of whom were
undergraduates. In a review of multiple articles evaluating the use of
student subjects, Gordon, Slade, and Schmitt (1986) concluded that
approximately 75% of research articles in social psychology have used student subjects.
Given the extensive attention devoted to this topic in the sister
discipline of social psychology, it is surprising to find that this topic
has been largely ignored in the marketing and mass communication
disciplines. Based on an extensive review of Communication Abstracts from
1970 through 2002, we found only one quantitative review of this practice
in the mass communication discipline. In that study, Lowry (1978) reviewed
seven communication journals, finding that 40% of articles used college
student research participants, but only 27% used college
undergraduates. Based on a search of the multidisciplinary full-text
journal database InfoTrac in July 2001 and December 2002, we found no
reviews of this practice in the marketing discipline over the last 20 years.
The purpose of this study was to fill this methodological void in the
marketing and mass communication literature. We wanted to determine which
of the two disciplines was doing the better job in reporting key
methodological details to readers, and to determine the extent to which
mass communication journals have improved in the last two decades, using
the Lowry (1978) study as a baseline. More specifically, this study was
designed to address eight research questions:
RQ 1. What were the most frequently used types of probability and
non-probability sampling methods?
RQ 2. In what proportion of the studies were the sampling frame and
sampling procedures specified?
RQ 3. In what proportion of the studies involving sampling was the
completion rate specified?
RQ 4. In what proportion of the studies were the demographics of the sample
specified? This includes the geographic area in which the sampling was
done, and the sex, age, occupation, and education level of the respondents.
The importance of research questions 1 through 4 cannot be
over-emphasized. If journal articles in the disciplines of marketing
research and communication research do not routinely report the above
information, then it is simply impossible for readers to determine the
population validity of the reported research. Without this information, it
is impossible for other scholars to interpret and generalize the findings,
and it is certainly impossible for them to replicate the studies. The
ability to replicate is crucial. King (1986) argues:
The best general way to judge the adequacy of reporting is to determine if
the analysis can be replicated. It, of course, need not be replicated, but
in order to contribute methodological and theoretical information to its
readers, a paper must report enough information so that the results it
gives could be replicated if someone actually tried. (p. 684)
If the APA Publication Manual (2001) is correct in considering such
information "critical to the science and practice of psychology" (p. 18),
then presumably it is also critical to the science and practice of
marketing and mass communication research.
RQ 5. In what proportion of the survey research studies was the margin of
error of the sample specified? This piece of information is considered
mandatory in reporting survey research findings, according to the American
Association for Public Opinion Research (2003), because it is vital in
estimating the accuracy of the findings.
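For a simple random sample, the margin of error that RQ 5 refers to can be computed directly from the sample proportion and the number of completed interviews. A minimal sketch in Python (the function name and the 95% z value of 1.96 are our illustrative choices, not taken from the article):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p from n completed interviews.

    Assumes simple random sampling; z = 1.96 gives a 95% confidence level.
    """
    return z * math.sqrt(p * (1.0 - p) / n)

# Worst case (p = 0.5) for a hypothetical survey of 400 respondents:
moe = margin_of_error(0.5, 400)
print(round(moe * 100, 1))  # margin of error in percentage points: 4.9
```

At the conventional worst case of p = .5, a survey of 400 completed interviews carries a margin of error of about ±4.9 percentage points, which is the kind of figure these journals could report in one sentence.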
RQ 6. In general, did mass communication research report significantly
different levels of the above elements than did the discipline of
marketing? The purpose of this research question was simply to be able to
compare and evaluate the two disciplines. Which discipline is doing the better job?
RQ 7. Specifically in terms of subject selection bias, was the proportion
of marketing research studies using student samples significantly different
from the proportion of mass communication studies doing the same? In other
words, which discipline is doing the better job of avoiding the pitfall of
building its knowledge base on the backs of college students?
RQ 8. In general, did the discipline of mass communication research do a
significantly better job of reporting the above elements in the 1990s than
in the 1970s? Is the discipline improving, staying the same, or getting worse?
The universe for this study consisted of all issues of four major marketing
journals (Journal of Marketing, Journal of Marketing Research, Journal of
Advertising, and Journal of Advertising Research) and all issues of four
major mass communication journals (Journalism & Mass Communication
Quarterly, Journal of Broadcasting & Electronic Media, Mass Communication &
Society, and Journal of Communication) for the years 1991 through 2000,
inclusive. There was a total of 318 issues of these eight journals for
this ten-year period.
A systematic random sample was used to select 25% of the issues. We were
able to obtain hard copies of all of the sampled issues; therefore, the
sample completion rate was 100%.
The information coded included: research method(s) employed, whether the
sampling frame and sampling procedures were specified, the type of sample,
whether the content population of the sample was described (for content
analyses studies only), the sample size, whether the completion rate was
stated, whether the margin of error was stated, and the types of units
sampled. For those studies using people as units, the presence of the
following demographic information was also coded: geography of the sample,
education level, occupation, sex, and age. These categories were based on
Lowry's (1978) article on population validity and student samples. Only
empirical articles involving quantitative measurement were
coded. Scholarly essays, legal articles, historical articles, case
histories, and book reviews were excluded.
All coding was done by the authors. Inter-coder reliability was measured
by having both authors code a 20% random sub-sample of the total
sample. The overall proportion of agreement for all variables was
.88. Reliability was also measured for three subcategories of variables,
measured on a category-by-category basis. The agreement for basic journal
information was .99. The agreement for variables pertaining to research
methods was .81, while the agreement for the coding of reported
subject/respondent variables was .89.
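The overall proportion of agreement used here is the simplest inter-coder reliability index: the share of units on which both coders assign the same code. A sketch with invented codes for ten hypothetical articles (note that, unlike Cohen's kappa, this index does not correct for chance agreement):

```python
def proportion_agreement(coder_a, coder_b):
    """Overall proportion of agreement: share of units coded identically by both coders."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical sample-type codes assigned by two coders to ten articles:
codes_a = ["conv", "srs", "conv", "quota", "srs", "conv", "srs", "conv", "srs", "conv"]
codes_b = ["conv", "srs", "quota", "quota", "srs", "conv", "srs", "srs", "srs", "conv"]
print(proportion_agreement(codes_a, codes_b))  # 0.8
```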
The sample produced a total of 797 articles, 508 (64%) of which were
empirical articles used in this study. Of the empirical articles, 49% were
from mass communication journals and 51% were from marketing journals.
Research Question 1 asked: What were the most frequently used types of
probability and non-probability sampling methods? As shown in Table 1,
non-probability samples were used in 59.3% of the articles, while
probability samples were used in only 26.9%. The most frequently used type
of probability sample was the simple random sample, used in 15% of the
articles. The most frequently used type of non-probability sample, used in
29.7% of the articles, was the convenience or volunteer
sample. Interestingly, in 11.4% of the articles, the sampling method was
unclear, even though all of the journals are peer reviewed, and some of the
journals are among the most prestigious in their disciplines.
Research Question 2 asked: In what proportion of the studies were the
sampling frame and sampling procedures specified? In the articles in which
the use of a sampling frame was relevant, the sampling frame was specified
56.2% of the time (59.8% in mass communication articles and 52.7% in
marketing articles). In terms of the actual sampling procedures used,
these procedures were specified in 66.5% of the studies (79.8% in mass
communication articles and 50.5% in marketing articles). Thus, without
these key pieces of sampling information, one-third of the studies could
not be replicated, even though the ability to replicate findings is crucial
to the advancement of marketing and mass communication research.
Research Question 3 asked: In what proportion of the studies involving
sampling was the completion rate specified? Even though it is a very
meaningful statistic that is easy to calculate, the completion rate was
reported in only 58.3% of the studies where this measure was applicable
(69.1% of mass communication articles and 48.3% of marketing articles).
Research Question 4 asked: In what proportion of the studies were the
demographics of the sample specified? In this analysis, the demographics
of geographic location, occupation, education level, sex, and age were
sought. Geographical location of the sample was the most frequently
reported measure of demographics (69.3%), while age was reported the least
frequently (34.2%) for both mass communication and marketing journals. As
shown in Table 2, for all five of the variables, the mass communication
journals did a better job of reporting demographic information, and the
differences were statistically significant for three variables, using the
z-test for significance between two independent proportions.
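The z-test for two independent proportions mentioned above can be sketched as follows. The counts are hypothetical, chosen only to match the study's group sizes (248 mass communication and 260 marketing articles):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled proportion under H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts of articles reporting a demographic variable,
# matching only the study's group sizes (248 vs. 260):
z = two_proportion_z(170, 248, 140, 260)
print(round(z, 2))  # about 3.4, well beyond the 1.96 cutoff for p < .05 (two-tailed)
```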
Research Question 5 asked: In what proportion of the survey research
studies was the margin of error of the sample specified? Surprisingly,
where margin of error was relevant to a study, it was reported in only 3.4%
of the articles sampled (2.8% of mass communication articles and 4.3% of marketing articles).
Research Question 6 asked: In general, did mass communication research
report significantly different levels of the above elements than did the
discipline of marketing? As shown in Table 2, when reviewing the reporting
of demographic information, there is a significant association between the
type of journal (marketing or mass communication) and the frequency with
which the samples' geography, education level and sex are provided. Mass
communication journal articles report this information significantly more
than their marketing counterparts.
In addition, there is a significant association between the reporting of
the completion rate (X2 = 7.53, df = 2, p < .05) and sampling procedures
(X2 = 23.05, df = 2, p < .001) and the type of journal. Again, mass
communication journal articles included this information more frequently
than did the marketing journal articles.
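The chi-square statistics reported here test whether reporting frequency is independent of journal type. A minimal Pearson chi-square implementation in pure Python (the observed counts below are hypothetical; with three row categories and two journal types, df = (3 - 1)(2 - 1) = 2, matching the df reported above):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = student / non-student / both samples,
# columns = mass communication / marketing articles:
table = [[60, 35], [100, 130], [15, 12]]
print(round(chi_square(table), 2))  # about 10.81, significant at p < .01 with df = 2
```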
Research Question 7 asked: Specifically in terms of subject selection bias,
was the proportion of marketing research studies using student samples
significantly different from the proportion of mass communication studies
doing the same? Indeed, there was a significant association between the
type of samples (student versus non-student) used in marketing journals and
mass communication journals (X2 = 12.13, df = 2, p < .01). As shown in
Table 3, both mass communication journals and marketing journals use
non-student samples more frequently than student samples. However, the
mass communication journals are significantly more open to the charge
leveled by McNemar (1946) more than a half century ago than are the marketing journals.
Research Question 8 asked: In general, did the discipline of mass
communication research do a significantly better job of reporting the above
elements in the 1990s than in the 1970s? The percentage of reporting has
increased from the 1970s to the 1990s in every category except the
demographic variable of occupation. The largest improvements were seen in
the reporting of sampling frames and sampling procedures. This observation
must be interpreted with caution, however, because only three of the mass
communication journals used in this study are identical with the seven
journals used by Lowry (1978). Interestingly, when these three journals
are directly compared, the changes in reporting are fairly consistent
across the journals, as shown in Table 4. For all categories the frequency
of reportage increased from the 1970s to the 1990s, except geographic area,
occupation and education level. In fact, for geographic area and
occupation, all the journals decreased the frequency with which these data were reported.
As mentioned earlier, the reporting of certain sampling and demographic
information is critical to the determination of population validity and to
the ability of other researchers to replicate a study, two of the basic
principles of empirical research. In addition, the inclusion of this
information is often explicitly required by the manuscript submission
guidelines of scholarly journals (cf. "AMA Journal Publication Policy,"
1992). If this is the case, the results of research questions 1-4 raise
troubling implications for the disciplines of marketing and mass
communication research. Since sampling method, completion rate, sampling
frame and procedures, and sample demographics are easy to report, why are
the scores not close to 100%?
Similarly, if margin of error is considered a mandatory statistic in the
reporting of survey research (American Association for Public Opinion
Research, 2003), then how can we, as researchers, be satisfied with
including these data in only 3.4% of the relevant published studies
involving surveys? Is this satisfactory? After all, major newspapers,
news magazines, and network TV newscasts routinely report this information;
it is considered essential for interpreting survey research results. In
this respect, then, the major news media are ahead of the marketing and
mass communication journals analyzed in this study.
On the positive side, the mass communication discipline has shown marked
improvement in the last two decades in terms of reporting this
information. As a whole, the discipline has improved its reporting in six of
eight categories (see Table 4), and the largest increases were shown in the
reporting of sampling frame and sampling procedures. Through this, and
through recently published editorial policies expressly requiring reportage
of population, sampling procedures and response rate, we know that the
importance of this information is recognized (cf. "Editorial policy,"
2002). Still, only 37% of the articles are reporting the age of their
samples, and only 48% are reporting the sex, two characteristics that are
easy to calculate, yet can have a large influence on the replicability of
one's findings. Again, why are these journals not close to 100%?
Perhaps most importantly to the results of the study itself, why are we
still so heavily reliant on student samples, especially in mass
communication research? Research has shown that student samples differ
from non-student samples in ways that may be meaningful to the study's
results (Burnett & Dunne, 1986; Carlson, 1971; Cunningham, Anderson, &
Murphy, 1974; Sears, 1986). Although mass communication was more reliant
on students for research (34.4%) than was the field of marketing (19.3%),
both disciplines were doing better in this respect than the field of
psychology. Meta-analyses in that field found student subjects used in 57%
(Holaday & Boucher, 1999), 75% (Gordon, Slade, & Schmitt, 1986), 82%
(Sears, 1986), and 83% (Korn & Bram, 1988) of the studies, respectively.
Also encouraging, mass communication has improved in its use of student
samples from the 1970s. Lowry's 1978 study reviewed seven communication
journals from 1970 to 1976, and found that 40% of articles relied
exclusively on student samples.
On one hand, the disciplines of marketing research and mass communication
research can certainly do better in continuing to reduce the number of
studies based on volunteer student subjects. On the other hand, as alluded
to earlier, not all research with students can be considered
unworthy. Non-random or non-probability samples also have their
utility. Universalistic research, that which seeks to build theory or test
claims that apply across differing populations, does not attempt to
generalize to a broader population (Berkowitz & Donnerstein, 1982; Judd,
Smith, & Kidder, 1991). As such, universalistic research does not need to
exclude student samples. For such purposes, a non-random sample can be as
worthy as a random sample, and is much cheaper and easier to obtain.
It is the particularistic research samples that are of special
concern. Particularistic research seeks to generalize to broader
populations (Berkowitz & Donnerstein, 1982; Judd, Smith & Kidder, 1991).
Therefore, rather than unilaterally criticizing research conducted with
student subjects, one must identify the research purpose in order to
determine if the use of student subjects is appropriate. This was beyond
the scope of the present study, but would be desirable to measure in future
research. This same limitation applies to all of the other studies cited
above on this point (cf. Gordon, Slade, & Schmitt, 1986; Holaday & Boucher,
1999; Korn & Bram, 1988; Lowry, 1978; Sears, 1986).
Another limitation of the present study is that it used only four journals
to represent each discipline, and it could be argued that we are drawing
broad conclusions from a small sample. Nevertheless, the eight journals
analyzed were significant ones in each discipline, and are considered
representative of research excellence in their respective fields.
We are not arguing that editors and reviewers should simply reject
manuscripts reporting research that uses only student subjects. Instead,
we agree with Pyrczak (1999): "If journal editors routinely refused to
publish research reports with this type of deficiency, there would be very
little, if any, published research on most of the important problems in the
social and behavioral sciences" (p. 48). However, student samples should
be used sparingly and with sufficient disclaimers. If not, we
collectively risk the reputation of our disciplines and willingly
perpetuate the "science of the sophomore."
Abelson, R. (1995). Statistics as principled argument. Hillsdale,
NJ: Lawrence Erlbaum Associates.
AMA journal publication policy on disclosure of research
methodology. (1992). Journal of Marketing Research, 29, 161.
American Association for Public Opinion Research (2003). Standards and
best practices [Online]. Retrieved January 10, 2003,
American Psychological Association. (2001). Publication manual of the
American Psychological Association (5th ed.). Washington, DC: Author.
Babbie, E. (1988). The practice of social research. Belmont,
CA: Wadsworth Publishing Company.
Berkowitz, L., & Donnerstein, E. (1982). External validity is more than
skin deep. American Psychologist, 37, 245-257.
Bracht, G., & Glass, G. (1968). The external validity of
experiments. American Educational Research Journal, 5, 437-474.
Browne, B., & Brown, D. (1993). Using students as subjects in research on
state lottery gambling. Psychological Reports, 72, 1295-1298.
Burnett, J., & Dunne, P. (1986). An appraisal of the use of student
subjects in marketing research. Journal of Business Research, 14, 329-343.
Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental
designs for research. Chicago: Rand McNally.
Carlson, R. (1971). Where is the person in personality research?
Psychological Bulletin, 75, 203-219.
Cunningham, W., Anderson, W., Jr., & Murphy, J. (1974). Are students real
people? Journal of Business, 47, 399-409.
Editorial policies. (1995). Journal of Marketing Research, 32, iv.
Editorial policies. (2002). Journal of Communication, 52, 242.
Enis, B., Cox, K., & Stafford, J. (1972). Students as subjects in consumer
behavior experiments. Journal of Marketing Research, 9, 72-74.
Folkerts, J. (1996). An editorial comment. Journalism and Mass
Communication Quarterly, 73, 280-281.
Gordon, M., Slade, L., & Schmitt, N. (1986). The "science of the sophomore"
revisited: From conjecture to empiricism. Academy of Management Review, 11,
Holaday, M., & Boucher, M. (1999). Journal of Personality Assessment: 60
years. Journal of Personality Assessment, 72, 111-124.
Judd, C., Smith, E., & Kidder, L. (1991). Research methods in social
relations (6th ed.). Fort Worth, TX: Harcourt Brace Jovanovich College Publishers.
Khera, I., & Benson, J. (1970). Are students really poor substitutes for
businessmen in behavioral research? Journal of Marketing Research, 7, 529-532.
King, G. (1986). How not to lie with statistics: Avoiding common mistakes
in quantitative political science. American Journal of Political Science,
Korn, J., & Bram, D. (1988). What is missing in the Method section of APA
journal articles? American Psychologist, 43, 1091-1092.
Kover, A. (1998). Editorial: Jumps. Journal of Advertising Research,
Lacy, S., Riffe, D., & Randle, Q. (1998). Sample size in multi-year
content analyses of monthly consumer magazines. Journalism & Mass
Communication Quarterly, 75, 408-417.
Lowry, D. (1978). Subject selection bias in communication studies.
Journalism Quarterly, 55, 577-578.
Lowry, D. (1979). Population validity of communication research: Sampling
the samples. Journalism Quarterly, 56, 62-68.
McNemar, Q. (1946). Opinion-attitude methodology. Psychological Bulletin,
Note to Contributors. (1997). Journal of Advertising Research, 37(3), 6-7.
Oakes, W. (1972). External validity and the use of real people as subjects.
American Psychologist, 27, 959-962.
Pyrczak, F. (1999). Evaluating research in academic journals: A practical
guide to realistic evaluation. Los Angeles: Pyrczak Publishing.
Riffe, D., & Freitag, A. (1997). A content analysis of content analyses:
Twenty-five years of Journalism Quarterly. Journalism & Mass Communication
Quarterly, 74, 515-524.
Riffe, D., Lacy, S., & Drager, M. (1996). Sample size in content analysis
of weekly news magazines. Journalism & Mass Communication Quarterly, 73,
Riffe, D., Lacy, S., & Fico, F. (1998). Analyzing media messages: Using
quantitative content analysis in research. Mahwah, NJ: Lawrence Erlbaum Associates.
Rosenthal, R., & Rosnow, R. (1975). The volunteer subject. New York: John
Wiley & Sons.
Sears, D. (1986). College students in the laboratory: Influences of a
narrow data base on social psychology's view of human nature. Journal of
Personality and Social Psychology, 51, 515-530.
Shuptrine, F. (1975). On the validity of using students as subjects in
consumer behavior investigations. Journal of Business, 48, 383-390.
Table 1
Use of Probability and Non-Probability Samples by Eight Marketing and Mass Communication Journals
[Columns: percentages of articles reporting each sample type for mass communication articles (N = 248), marketing articles (N = 260), and all articles (N = 508); row labels include multiple probability samples and mixed probability and non-probability samples. Table values not preserved.]
a Does not total 100% due to normal rounding error.
Table 2
Reporting of Demographic Characteristics by Eight Marketing and Mass Communication Journals
[Columns: reporting frequency (%) for mass communication articles (N = 248) and marketing articles (N = 260). Table values not preserved.]
Note. The percentages are independent scores and not intended to sum to 100%.
Table 3
Use of Student Samples in Marketing and Mass Communication Journals
[Columns: frequency of use (%) by type of human sample for mass communication articles (N = 248) and marketing articles (N = 260). Table values not preserved.]
Note: Percentages do not total 100% due to normal rounding error. For student vs. non-student samples, mass communication articles vs. marketing articles, X2 = 12.13, df = 2, p < .01.
Table 4
Comparison of Methodological Details Reported in 3 Mass Communication Journals in the 1970s and 1990s
[Columns: frequency (%) of reporting in the 1970sa and in the 1990s. Table values not preserved.]
Note: The three journals are Journalism & Mass Communication Quarterly, Journal of Broadcasting & Electronic Media, and Journal of Communication. The percentages are independent scores and are not intended to sum to 100%.
a As reported in Lowry's (1978) study.