Using the Internship as a Tool for Assessment:
A Case Study
Dr. Beverly Graham
Ms. Pamela G. Bourland
Dr. Hal W. Fulmer
Department of Communication Arts
Landrum Box 8091
Georgia Southern University
Statesboro, GA 30460-8091
The Internships and Placement Interest Group Division
Association for Education in Journalism and Mass Communication
Washington, D.C., August 1995
RUNNING HEAD: Internship & Assessment
ABSTRACT
With increasing concern for accountability in education, faculty
and administrators are seeking improved means of evaluating their
programs. The internship provides a natural feedback loop for
program assessment because it bridges the gap between the
academic and the applied worlds of communication. Specifically,
this paper uses a case study to explore the analysis of student
internship feedback and its subsequent impact on program
assessment and development.
One of the most necessary activities in the communication
discipline today is also one of the most challenging: how to
assess the communication program in a meaningful and timely
fashion. Program assessment, as well as institutional
accreditation, demands a close relationship between program
goals, measurement of the progress toward these goals, analysis
of these measurements, and a feedback mechanism for communicating
the results back to the goal setters. Certainly, programs in
communication are expected to operate in some kind of harmony
with the communication activities of professional organizations
outside the university.
This paper describes one possible activity in this matrix:
the use of student internships as a means of assessing the
communication program from which the students will take their
degrees. An internship program is an excellent opportunity to
generate feedback for assessment; that is, communicating needed
information back to those individuals who are charged with
setting goals and making decisions for the program. Of
particular interest in this paper is the relationship between the
activities of the students during their internships and the
content of the courses in their major. The internship is a
crucial step in bridging the gap between the academic and the
applied worlds of communication for students as well as faculty,
and as such, provides a significant opportunity to assess what,
if any, distance exists between the academic program and the
practical world beyond the ivory towers. The following essay
explores assessment and then presents a case study in which
public relations internship letters were used to generate a
checklist for the assessment process.
Forty states have some type of assessment mandate, according
to Theodore J. Marchese (1990-91), vice president of the
American Association for Higher Education and editor of Change
magazine. Presently the state mandates are relatively
permissive: each institution is free to devise an assessment
plan that best fits its individual needs, but the fact that the
majority of the states mandate assessment conveys the
significance of the concept to the educator. At its December
13, 1989, Board of Regents meeting, the University System of
Georgia approved the following policy on planning and assessment:
Each institution shall have a plan, submitted to the
Chancellor's office, which will contain the
institution's current goals and priorities, a summary
of significant assessment results and associated
improvement objectives, and action plans by which
institutional priorities, including improvements in
effectiveness, will be achieved. (Section 200)
The language used in the policy makes it clear that a formalized
plan addressing goals, priorities, assessment results and
improvements in effectiveness must be in place. The concept of
assessment has moved from the notion of a "good idea" to a
requirement in academe. The number of states mandating
assessment and the language used in board policy should motivate
educators to be proactive in assessment, especially while the
directives remain relatively permissive. This window of
opportunity for proactive participation in the assessment process
should be a high priority in academe.
Moreover, with budget cuts and program reductions a reality
in higher education, assessment has perhaps gone from being a
high priority to being a lifeline. Many programs threatened with
budget cuts or even program elimination claim assessment as the
best protection. Often deans, presidents, chancellors, or
governing boards do not understand the unique characteristics or
goals of programs (Atwater, 1993). Assessment materials provide
empirical data that can not only define and verify the nature of
programs but also show continuing progress toward academic goals.
Assessment has evolved through the interaction of the
concepts of academic improvement and external accountability
(Ewell, 1991). The undergraduate curriculum reform of the early
1980s was prompted in part by the choice-based curriculum of the
1960s. It became apparent in the 1980s that students were ill
prepared for college and that students graduating from college
lacked basic skills necessary for the transition into the work
place. Three major themes emerged from the curriculum reform
debate: high standards, active student involvement in the
learning process, and explicit feedback on performance (National
Institute of Education, 1984). By late 1986, both legislators
and governors had become sensitive to the impact of monies being
put into postsecondary education. Especially during tight budget
years, state governments wanted to see the return on their
investment; thus the accountability movement was set in motion.
Assessment is defined as:
The process of determining the degree to which expected
results have been achieved in the actual outcomes of
institutional activities, and of consequently improving
the institution's performance of those activities.
Assessment is accomplished through formal, systematic
observation, measurement, statistical analysis, testing,
or other means. (University System of Georgia Outline
for Developing Models for Assessing Outcomes of the Major)
The motivating purpose of assessment should be improvement, an
improvement not obtained by comparing institutions but by
programs evaluating themselves to determine their effectiveness.
Assessment requires that programs question and reflect upon
appropriate curriculum, educational experiences, and the amount
of student learning. Ultimately, tools (observations, testing,
etc.) for measuring results should be developed that provide
programs with feedback. The end result of this feedback provides
input for improvement of programs. The process of assessment
helps frame the abstract notion of "learning" into measurable
results, aiding the educator in answering the question, "How
well are we doing what we should be doing?"
The assessment cycle (Assessing Outcomes in the Major, 1992)
is diagrammed as follows:
[assessment cycle diagram]
As a cyclical process with no beginning or end, the model
continuously provides feedback to the program.
Feedback is perhaps the paramount benefit of the assessment
cycle in that it identifies areas of weakness or strength for the
program's improvement. For example, an outcome for a journalism
program could be that "students have a working knowledge of the
First Amendment before they graduate." If questions pertaining
to the First Amendment are not adequately answered by the
students on a senior exit exam, faculty members in the journalism
program need to look to new curriculum or revised course content
aimed at improving the program. Curriculum changes, changes in
course sequencing, new assignments or learning activities are all
examples of how assessment can be used for improvement.
Bourland, Graham, and Fulmer (1995) summarized the results of an
intern feedback analysis and indicated that it could be used for
a variety of purposes, such as evaluating potential and existing
sites, helping students select courses, and assessing
capstone courses. They wrote (p. 13),
This kind of study might be characterized as outcomes-
driven, student-generated, and feedback-sensitive for
program assessment. Clearly, this kind of information is
useful for faculty, students and administrators in building,
maintaining and revising successful public relations
programs. Faculty members can use these results to
determine overall strengths and weaknesses of their program
as well as focus specific attention on the saliency of a
particular capstone course.
The methods and procedures used in assessment are varied.
Program participants are encouraged to use creativity and their
expertise when developing measurement methods for assessment. An
exit exam is only one option. Other programs implement senior
exit interviews, alumni surveys, internship supervisor surveys,
student portfolio reviews, capstone course evaluations,
standardized tests, pre-tests/post-tests, etc. While
much of program assessment leans toward quantification through
surveys and tests, qualitative methods such as interviews
also provide creative and insightful opportunities for
assessment. Greene (1994, p. 54) wrote of qualitative evaluation
in social programs, "Qualitative methods, for example, can
effectively give voice to the normally silenced and can
poignantly illuminate what is typically masked."
Faculty members are considered the most credible sources for
formulating assessment measures for their particular programs.
Faculty members have the best knowledge of what their students
should know, and therefore should be the best people to formulate
assessment methods. When program members join together to
tabulate and discuss the measurable results of assessment
methods, significant information emerges. An advantage of this
joint effort is that it helps faculty members gain objectivity
about what they know best; when faculty develop courses, monitor
content, and advise students, it becomes difficult to evaluate
fairly how effectively a program is working.
In the case that follows, the faculty of a public relations
program decided to incorporate the students' voices in one method
of assessment. As such, the program rather than the students
was tested, allowing the program a reflection of itself as a
mirror of current and future practices. The method combined
qualitative and quantitative approaches. Faculty from both the
public relations program and the departmentally related speech
communication program were involved in the program evaluation.
The Case Study: An Assessment Process
The internship, as a pre-professional experience for
students, affords an excellent opportunity for program assessment.
Regardless of the particular internship program's discipline, a
review of student activities based on student reports, portfolios
and discussions with supervisors provides a basis for feedback.
For example, discussions with internship site supervisors and
interns in the past have led the authors to make course
adjustments such as incorporating expanded discussion of "pitching"
stories to the media.
Feedback can also be more formal, more empirically driven,
with a structured approach for evaluating the internship
experience. In this case study, a content analysis of intern
feedback in the form of mid-term letters provided a listing of
student activities, a listing which was then applied in a formal
assessment of the public relations program of a mid-sized
university in the southeast. The assessment process is described
below beginning with an overview of the content analysis, and
followed by the application of the resulting checklist based on
established program outcomes or objectives.
The actual checklist originated from an earlier study in
which the researchers (Bourland, Graham & Fulmer, 1995) examined
102 mid-term letters, or narrative accounts, from students
participating in a senior-level, full-time, required internship
program. The mid-term letters described the interns' activities
and progress, and represented approximately 200 hours of work per
student, 54 different sites, and a two-year time span. The
letters were one of several standard requirements of the
internship program (in addition to a mid-term meeting, final report,
portfolio and interview), and as such, rendered an unobtrusive
panorama of the internship experience. Traditionally the faculty
supervisor evaluates the student's progress and determines
whether the site is meeting its contractual obligations, based on
these mid-term reports. Students also know their letters are
posted for review by other students.
The method for analyzing the letters entailed two different
authors reviewing each letter to extract tasks and other
recurring items. Using an adaptation of Lofland and Lofland's
(1984) categories for analyzing social settings, the items
derived from the content analysis were defined as acts (specific
tasks with tangible products) and encounters (interpersonal
development items such as networking and interacting with
vendors). Two of the three authors, furthermore, had to agree on
the identification of the repetitive acts and encounters culled
from the narratives.
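To make this coding step concrete, the following short Python
sketch illustrates one way the two-of-three agreement rule could
be operationalized. The letter content and category labels are
invented for illustration; this is a sketch, not the authors'
actual instrument.

    # Sketch of the agreement rule: keep only the acts and encounters
    # that at least two of the three coders extracted from one letter.
    from collections import Counter

    # Items each coder extracted from the same (invented) mid-term letter,
    # labeled as an "act" (task with a tangible product) or an "encounter"
    # (interpersonal development item), per the adapted Lofland categories.
    coder_a = {("press releases", "act"), ("event planning", "act"),
               ("vendor contacts", "encounter")}
    coder_b = {("press releases", "act"), ("vendor contacts", "encounter"),
               ("networking", "encounter")}
    coder_c = {("press releases", "act"), ("event planning", "act")}

    # Count how many coders identified each (item, type) pair.
    votes = Counter()
    for coder in (coder_a, coder_b, coder_c):
        votes.update(coder)

    # Retain items on which at least two of the three coders agree.
    agreed = sorted(item for item, count in votes.items() if count >= 2)
    print(agreed)
    # [('event planning', 'act'), ('press releases', 'act'),
    #  ('vendor contacts', 'encounter')]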
The content analysis of these narratives yielded 89 distinct
recurring activities described by the students, which were then
collapsed into 46 categories and ranked. Items mentioned by at
least 15 of the 102 students formed a "top 21"
listing of recurring acts and encounters (see Table 1). The 21
items mentioned by students included special event planning and
managing; writing for a variety of purposes -- press releases,
memoranda, letters, etc.; use of technology (the fax, desktop
publishing programs, etc.) as well as general office work;
exposure to the "real world" and self-development; vendor
contacts along with networking opportunities and participation in
meetings; and collateral development such as writing and
designing newsletters, brochures and signage. Other items
mentioned were research, reports and media relations.
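The tallying and thresholding step can be sketched in the same
way. In the hypothetical Python fragment below, each coded letter
is reduced to the set of collapsed categories it mentions; the
sample data and the toy threshold of 2 are invented, whereas the
study itself drew on 102 letters and a cutoff of 15 mentions.

    # Sketch of the tallying step: count, per category, how many letters
    # mention it, then keep the categories above a mention threshold.
    from collections import Counter


    def top_recurring(letters, threshold):
        """Return (category, count) pairs mentioned in at least `threshold` letters."""
        mentions = Counter()
        for categories in letters:
            mentions.update(categories)   # each letter counts a category once
        return [(category, count)
                for category, count in mentions.most_common()
                if count >= threshold]


    # Hypothetical coded letters (the study analyzed 102 of them).
    sample_letters = [
        {"press releases", "event planning", "technology"},
        {"press releases", "newsletters", "networking"},
        {"press releases", "event planning", "general office work"},
    ]

    # A threshold of 2 illustrates the idea on toy data; the study used 15.
    for rank, (category, count) in enumerate(top_recurring(sample_letters, 2), 1):
        print(rank, category, count)
    # 1 press releases 3
    # 2 event planning 2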
Items below the 15-mention threshold tended to be more
specific divisions of items already represented (e.g., "features,"
with 14 mentions, was more specific than press releases or
newsletters) or represented specialized areas (e.g., job
interviewing, with 7 mentions, or international public relations,
with 2 mentions). These items were not collapsed because they
did not fit cleanly into a single category and because collapsing
them would have lost the students' distinctions. Items not
mentioned as frequently were, however, considered in light of
future trends which would affect the program and the field.
The results of this analysis of student feedback served as
the basis for assessing the public relations program, according
to established program outcome or objective statements. For
example, one outcome was: "To offer students a program consistent
with current public relations practices." Using the public
relations sequence of courses, each item in the content analysis-
derived list of public relations activities was compared to each
course identified by course descriptions or syllabi as addressing
the itemized topic.
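One simple way to carry out this comparison is sketched below.
The course names and activity-to-course mappings are
hypothetical, not the program's actual curriculum; the structure
merely shows how the comparison can surface both uncovered
activities and the course addressing the most activities.

    # Sketch of the checklist-to-curriculum comparison, with invented data.
    from collections import Counter

    # Hypothetical mapping from intern activity to the courses whose
    # descriptions or syllabi address it.
    coverage = {
        "press releases":      ["PR Writing", "PR Principles"],
        "event planning":      ["PR Campaigns", "PR Cases"],
        "newsletters":         ["PR Writing"],
        "media relations":     ["PR Writing", "PR Principles"],
        "general office work": [],   # no course addresses this activity
    }

    # Activities with no corresponding course in the program of study.
    uncovered = [activity for activity, courses in coverage.items() if not courses]
    print("Not covered by any course:", uncovered)

    # How many checklist activities each course addresses.
    course_counts = Counter()
    for courses in coverage.values():
        course_counts.update(courses)
    print("Broadest coverage:", course_counts.most_common(1))
    # Not covered by any course: ['general office work']
    # Broadest coverage: [('PR Writing', 3)]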
Virtually every course in the pre-major area, the major, and
the related upper-division courses was identified as
addressing at least one and as many as ten of the top 21 intern
activities. Only four activities were not directly reflected in
a course within the program of study. Three were general office
work; "real world" references, which highlighted the transition
between school and work; and self-development, including comments
on an increased sense of professionalism, improved portfolios,
and stronger organizational and deadline abilities.
These three activities, however, are expected results of the
internship process itself. The fourth activity not represented
in the program of study was special event management (versus
planning), although basic planning elements are covered in at
least three different required courses. The other finding of
this application was that, of all the courses, the public
relations writing course corresponded to the largest number of
activities from the internship program.
Based on these results, the following recommendations were
put forward. The first was to begin to address the need for
special event management (versus planning) by offering an
elective course in special events.
The other recommendations centered on the fact that the
public relations writing course was so heavily weighted with the
activities performed by the interns. To maximize the benefits
offered by this course, the assessment team suggested creating a
separate "Specialized Publications" course which could address
writing for, editing and designing brochures, newsletters,
signage, etc., thereby allowing the existing course to focus more
on broader writing applications. Additionally, since the
subjects covered in this class were so important for the
internship or for entry-level work, students (as well as faculty)
would benefit from a lower student-faculty ratio in the public
relations writing course. To achieve these recommendations, both
an upgraded computer lab for the class and a faculty or computer
staff position would be required.
While these recommendations and their implementation are
still under review, the internship experiences provided an
important feedback loop for program assessment. This case study
suggests three key points about the usefulness of internships in
assessment. First, the internship can be translated into
empirical results, greatly assisting those individuals who
directly shape the public relations program. This empiricism
should also be advantageous when confronting non-program
administrators (deans, vice-presidents) with the need for
additional human and physical resources.
Second, this case study highlights the significant
confluence which occurs among the program administrators
(typically the faculty), the students of the program, and those
individuals who supervise students in the work place. Such a
convergence of feedback heightens the usefulness of the
internship for assessment; that is, this assessment is not
unilaterally driven by only one of these groups. In many cases,
unfortunately, program assessment rests with program
administrators' perceptions of successes and needs.
A final conclusion which might be drawn from this case study
concerns the holistic nature of this kind of assessment. The
internship experience reflects on the entire program of study and
highlights its strengths and weaknesses. Program assessment
conducted via examination scores in introductory classes, for
example, is limited in what it can reveal. This
holistic assessment is reflected in the following model:
Interpretation of Program Needs
Changes in the Program Objectives
Further Analysis of the Internship Experiences
Continued Interpretation of Program Status
On-going Changes in Program Objectives.
As this model indicates, this case study suggests the
evolutionary nature of program assessment via the internship.
Random samples of letters from interns can be evaluated annually
to maintain a list which accurately reflects current practices in
the field, or at least current expectations of the interns.
Additional benefits of this holistic assessment include using the
internship checklist as a basis for evaluating capstone courses,
conducting in-depth interviews with students and internship site
supervisors, or reviewing student portfolios.
The assessment process reviewed herein focuses strictly on
programs; that is, it addresses the state of the program
according to current field practice. A natural limitation of
this case is that it addresses neither the students' level of
expertise nor faculty effectiveness. Additionally, syllabi and
course content may not exactly parallel what occurs in the
classroom. Total program assessment would certainly require the
application of a variety of assessment measures.
This paper has suggested, via case study, the usefulness of
the internship experience for program assessment in public
relations. This essay especially noted the strong link between
program objectives and the vital feedback loop provided by
student internship activities. Through the methods discussed in
this essay, internship experiences can be translated into a
checklist of knowledge necessary for the successful transition
from the classroom to professional settings. The internship
experiences provide a basis for holistic assessment of a
program's strengths and weaknesses and can provide empirically-
driven justifications for program changes.
Table 1: An Internship Job Description
Rank Category # of References
1 Press Releases 56
2 Event Planning 52
3.5 Technology 46
3.5 General Office Work 46
5 "Real World" 42
6 Vendor Contacts 41
7 Newsletters 39
8 Research 37
9 Networking 33
10.5 Self-Development 32
10.5 Miscellaneous Writing 32
12 Meeting Participation 27
13.5 Signage/Display 26
13.5 Office Writing 26
15 Event Set-up 24
16 Media Relations 22
17 Telephone Contacts 19
18 Brochures 18
19 Press/Information Kits 17
20.5 Promotion/Publicity 15
20.5 Reports 15
Atwater, A. (1993). Reassessing and reestablishing our
academic province. Journalism Educator, pp. 73-76.
Assessing Outcomes in the Major. (1992). Resource Manual
for University System of Georgia, (Interim Report).
Bourland, P.G., Graham, B.L. & Fulmer, H.W. (1995).
Defining the public relations internship through student
feedback: A content analysis of mid-term reports. A paper
submitted for consideration to the Public Relations Division of
the Southern States Communication Association for presentation at
its annual meeting in Memphis, Tennessee, April 1996.
Ewell, P.T. (1991). To capture the ineffable: New forms of
assessment in higher education. In G. Grant (Ed.), Review of
Research in Education, 17. Washington, DC: American Educational
Research Association.
Greene, J.C. (1994). Qualitative program evaluation:
Practice and promise. In N. K. Denzin & Y. S. Lincoln (Eds.),
Handbook of qualitative research (pp. 530-544). Thousand Oaks, CA:
Sage.
Lofland, J., & Lofland, L.H. (1984). Analyzing social
settings, 2nd ed. Belmont, CA: Wadsworth Publishing Company.
Marchese, T.J. (1990-91). Assessment's next five years.
Association for Institutional Research Newsletter, pp. 1-4.
National Institute of Education, Study Group on the
Conditions of Excellence in American Higher Education. (1984).
Involvement in learning: Realizing the potential of American
higher education. Washington, DC: U.S. Government Printing Office.
System Task Force on Assessing Outcome in the Major.
(1992). Definitions and principles pertaining to assessment.
University System of Georgia Outline for Developing Models for
Assessing Outcomes of the Major.
University System of Georgia. (1989). Board of Regents
Policy on Planning and Assessment. (Section 200).