An Examination of Cable Television Subscribers'
Service Expectations and Preferences
Randy Jacobs, Ph.D.
School of Communication
University of Hartford
West Hartford, CT 06117
Running Head: Cable Subscribers' Service Expectations
This paper reports data collected on cable subscribers' expectations and
preferences for installation, repair, and service representative availability.
The data were gathered in 607 telephone interviews and analyzed using a
performance elasticity approach that incorporated three expectations standards.
The results reveal the range of performance expectations consumers hold for
cable service and compare these standards with actual system performance in
light of service satisfaction evaluations. Implications for research and cable
system management are discussed.
An Examination of Cable Television Subscribers'
Service Expectations and Preferences
The Telecommunications Act of 1996 was designed to, among other things,
generate robust cross-industry competition between the telephone, cable
television, and direct broadcast satellite industries (Pub. Law 104-104). With
the apparent growth of direct broadcast satellite services and trade press
reports about telephone and cable company forays into their respective
businesses, the Act seems to have been successful at stimulating effective
competition. However, there is evidence that suggests otherwise. A recent FCC
report to Congress argues that despite an increase in the number of direct
satellite subscribers to nearly 5 million, cable's share of the nonbroadcast
television market remains at nearly 89%. Moreover, rather than aggressively
invading each other's turf, telephone and cable companies have pulled back and
appear to have reached a form of détente (Wilke, 1997).
Although competition may not be breaking out nationwide, in some
communities the telecommunications marketplace is heating up. For example, the
first statewide challenge to incumbent cable providers by a local phone company
is currently underway in Connecticut. Southern New England Telecommunications
Corp. (SNET) recently began offering cable television to the 2,000 residents of
Uniondale, a distant suburb of Hartford. SNET's service offers 80 channels,
including premium and pay-per-view, and is priced about the same as TCI, the
cable provider that holds the franchise in Uniondale (Kirkpatrick, 1997).
Meanwhile, TCI now offers People Link, a local phone service, in SNET's service
area. And, in response to SNET's cable effort and aggressive marketing by
direct satellite companies, TCI just rolled out ALL TV, a state-of-the-art
digital 136-channel service with an interactive, on-screen program guide
(Keveny, 1997). The top-of-the-line ALL TV package costs $64.99 per month.
Where effective competition with the cable television industry does exist,
consumers' ability to differentiate between multichannel video providers is
challenged. If the program offerings are equivalent, or nearly identical, and
differences in the price structure are negligible, as with SNET's offering and
TCI's standard service, then what remains to distinguish one product from the
other?
Several years ago cable providers began to gird for competition through
attitude-bolstering image advertising and on-time installation and repair
service guarantees (Robichaux, 1995). Creating positive brand associations is
one important way to build loyalty among cable subscribers presented with
competitive options from telephone companies like SNET. And determining
subscriber expectations and preferences to provide service that creates customer
satisfaction is another.
This paper reports data collected as part of a larger study of cable
subscriber expectations conducted for a local cable system operator. This
analysis is centered on subscribers' expectations and preferences regarding the
installation and repair of cable and customer service representative
availability.
Consumer Expectations & Service Quality
In recent years, a large body of academic and popular literature addressing
the importance of customer satisfaction has been generated. Customer
satisfaction research is concerned with the "extent to which products and
services meet customers' wants and needs" (Dutka, 1994, p. 37). Conceptually,
it is widely believed that wants and needs create expectations and meeting
customer expectations results in satisfaction. Zeithaml, Parasuraman, and Berry
(1990), widely respected service quality researchers and the developers of the
SERVQUAL instrument, believe finding out what customers expect is essential to
providing service quality. Thus, creating satisfaction is "largely a matter of
understanding and meeting customer requirements and expectations, particularly
those that are essential to customer retention and loyalty" (Brandt, 1992).
Although it is commonly believed that satisfaction or dissatisfaction can
be gauged by comparing customer expectations with perceived performance (i.e.
expectancy disconfirmation), recent research suggests that performance can, and
perhaps should, be compared to standards other than expectations (Dutka, 1993).
Essentially, the question is, how should we define "expectations"?
Several studies of performance standards (expectations) and satisfaction
have been designed to test alternative comparison standards in search of the
best predictor of satisfaction/dissatisfaction (see e.g. Myers, 1991; Tse &
Wilton, 1988). Myers (1991) argues that if a customer who purchases a product
or service expects poor performance, then actual poor performance, although
meeting expectations, would not result in satisfaction. Therefore, he
recommends a higher comparison standard than expected product performance, which
represents a product's predicted performance based on past experience (Myers,
1991; Tse & Wilton, 1988).
Spreng and Olshavsky (1992) suggest comparing performance with customer
desires to gain a more accurate measure of customer satisfaction. Woodruff,
Cadotte, and Jenkins (1983) discuss "experience-based norms" that take into
account past experiences with competing products and companies. Other possible
comparison standards include minimum tolerable performance standards (i.e. the
least acceptable level or what performance must be in order to continue
purchasing), comparisons to an "ideal" or wished for level of product or service
performance, and equitable performance which is the level of performance the
consumer ought to receive, or deserves, given costs, effort, and previous
product experiences (Dutka, 1993; Myers, 1991; Tse & Wilton, 1988).
While academic researchers have been working to identify the definition of
expectations that, when compared to actual performance (i.e. subtractive
disconfirmation), best predicts overall satisfaction, corporate management's
information needs often relate to more mundane issues. Cable system operators,
for example, are under pressure to achieve incompatible goals such as satisfying
subscribers' programming wants and needs while reducing programming costs. Or,
a system operator might be working to meet new customer expectations for
installation without compromising repair service for existing customers. From
this perspective, the issue becomes not which comparison standard best predicts
overall satisfaction, but how managers can use knowledge of consumer performance
expectations, perceptions, and satisfaction to make better informed decisions.
D. Randall Brandt, Vice President of Burke Customer Satisfaction Associates,
recommends utilizing multiple measures of expectations in a performance
elasticity approach (Brandt, 1992). Brandt's approach compares actual measures
of performance with several measures of expectation or standards of comparison,
such as ideal and minimum tolerable expectation measures. Analyzing these data
for specific performance attributes reveals the performance range in which
customer satisfaction is likely to result.
The performance elasticity approach is advantageous for several reasons.
It provides a perspective of customer satisfaction in terms of how performance
compares to what customers want or require. It also permits assessment of
differences in comparison standards of customer expectations and requirements.
Further, it furnishes information regarding actual customer expectations and
requirements themselves. Finally, the performance elasticity approach permits
tracking of changes in customer expectations and requirements.
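Brandt's comparison logic is simple enough to sketch in code. The sketch below is illustrative only: the function name and the sample figures (mean time-to-completion scores in days, lower meaning faster) are hypothetical, not the survey's results.

```python
# Minimal sketch of Brandt-style performance elasticity: locate mean actual
# performance relative to three expectation standards. All numbers below are
# hypothetical mean time-to-completion scores in days (lower = faster).
from statistics import mean

def elasticity_position(actual, expected, equitable, minimum_tolerable):
    """Classify where mean actual performance falls among the standards."""
    if actual <= expected:
        return "exceeds the expected (predicted) standard"
    if actual <= equitable:
        return ("within the satisfaction range: slower than expected "
                "but faster than the equitable (reasonable) level")
    if actual <= minimum_tolerable:
        return "between the equitable and minimum tolerable levels"
    return "below the minimum tolerable level: dissatisfaction likely"

# Hypothetical coded responses (days) from four subscribers per item:
actual = mean([1, 2, 2, 3])              # perceived actual performance
expected = mean([1, 1, 2, 2])            # predicted performance
equitable = mean([2, 2, 3, 3])           # reasonable performance
minimum_tolerable = mean([3, 4, 4, 5])   # maximum acceptable time

print(elasticity_position(actual, expected, equitable, minimum_tolerable))
```

Recomputed on each tracking wave, the same classification shows whether performance is drifting toward the minimum tolerable boundary.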
This study was designed and conducted for a single cable television system
operator. The goal of the study was to assess subscribers' expectations and
preferences across a range of service performance attributes. The results were
used by management in strategic planning and allocation of system resources.
Several research questions were articulated to guide this research effort.
For the data reported here, the research questions were as follows:
RQ1. What are cable subscribers' expectations for timeliness of cable
installation and repair?
RQ2. How do subscribers perceive actual system performance on
timeliness of cable installation and repair?
RQ3. What is the magnitude of subscriber satisfaction
(dissatisfaction) with the timeliness of cable installation and
repair?
RQ4. What are cable subscribers' day and time preferences for
installation and service calls to the home?
RQ5. What are cable subscribers' preferences for customer service
representative availability?
In addition, the study design allowed for the testing of the comparison
standards employed to determine if they were, indeed, measuring distinct
conceptualizations of expectation.
Method
The data reported here were collected as part of a survey of subscribers to
a large cable system in the Hartford-New Haven Designated Market Area (DMA).
Data were gathered in telephone interviews conducted between July 18 and July
31, 1995 by trained communication graduate students.
A systematic random sample of telephone numbers was drawn from the system's
database of current subscribers. A minimum of two attempts were made to contact
busy, no answer, and machine answered numbers. Out of 894 valid attempts there
were 607 completed interviews and 287 refusals for a response rate of 68% (Frey,
1989). The respondents were 41% male and generally middle-aged with 45% between
35 and 54 years of age. Thirty percent were over 55 years old. The sample was
relatively affluent with 30% earning between $50,000 and $74,999 and nearly 25%
earning $75,000 or more. Almost 50% had earned a college degree or higher. The
average household size was 2.72 persons and 47% of the households had three or
more members. Among the respondents, 30% subscribed to one or more premium
(pay) cable channels.
The questionnaire items were pilot tested and refined to enhance their
clarity for respondents. The questions concerning subscriber expectations for
the timeliness of installation and repair were developed to generally emulate
the performance elasticity approach described by Brandt (1992) but address the
specific planning interests of the cable system operator. Therefore, with the
agreement of cable system management, items were created to gauge expected
(predicted), minimum tolerable, and equitable performance.
For both installation and repairs a standardized set of four items was
used. Expected performance was measured by asking: "If you ordered cable
television/cable repair service today, how long would you expect it to take for
installation to be completed/for the repairs to be completed?" Minimum
tolerable performance (i.e. the longest amount of time) was operationalized by
asking: "Thinking about cable installation/cable repair service, what do you
feel is the maximum acceptable amount of time between ordering cable and
installation being completed/between requesting cable repair service and the
repairs being completed?" Equitable performance was assessed by asking: "What
do you consider a reasonable amount of time between ordering cable and
installation being completed/between requesting cable repair service and the
repairs being completed?" Actual performance for both installation and repairs
was measured by asking: "Thinking back to when you ordered cable television/last
requested cable repair service, how long did it take for the
installation/repairs to be completed?" The responses and coding for all eight
of these items were "same day=0, one day=1, two days=2, three days=3, four
days=4, five or more days=5."
Satisfaction with the time required for both cable installation and repairs
was operationalized by asking: "How satisfied were you with the amount of time
between ordering cable and installation being completed/between requesting cable
repair service and the repairs being completed?" A 4-point scale was used to
measure satisfaction (very satisfied=4; not at all satisfied=1).
In addition to the expectations and actual performance items, subscriber
preferences for the day and time of installation and repair were solicited.
Subscribers' day preferences for installation and repairs were operationalized
by asking: "If you were ordering cable today/calling for cable repairs today,
when would you prefer to have a technician come to your home?" Responses for
these two items were "weekdays before 8am, weekdays between 8am and 6pm,
weekdays after 6pm, Saturdays before noon, Saturdays after noon, Sundays before
noon, Sundays after noon." A contingency question was asked of those who
indicated "weekdays between 8am and 6pm" to pinpoint their preferred time of
day. Responses for the time of day included "between 8am and 10am, between
10am and noon, between noon and 2pm, between 2pm and 4pm, and between 4pm and
6pm."
Subscribers were also given the opportunity to enhance the availability of
in-person and telephone customer service representatives. Respondents were
asked about the necessity of each of five customer service availability options.
The options included in-person service counters open between 1pm and 5pm on
Saturdays, service counters open Sundays, additional service counters opened in
system towns currently without their own, telephone service representatives
available on Sundays, and telephone service representatives available overnight.
A 4-point scale was used to gauge necessity (very necessary=4; not at all
necessary=1).
The demographic data, summarized above, were collected as follows. Ordinal
scales were used to measure education (1=did not graduate from high school,
2=graduated from high school, 3=some college, 4=graduated college, 5=some
postgraduate work, 6=earned postgraduate degree), household income (1=less than
$10,000, 2=$10,000 to $19,999, 3=$20,000 to $34,999, 4=$35,000 to $49,999,
5=$50,000 to $74,999, 6=$75,000 to $99,999, 7=$100,000 to $124,999, 8=$125,000
or above), and age (1=18 to 24, 2=25 to 34, 3=35 to 44, 4=45 to 54, 5=55 to 64,
6=65 and older). Household size was recorded as reported by the subscriber and
gender was noted by the telephone interviewer. Each subscriber's service level
(1=basic only, 2=basic plus premium) was also recorded from cable system
records.
The data were tabulated and analyzed using SPSSx (Norusis, 1990).
Descriptive statistics were run on the total sample for each item of interest.
To address research questions 1 and 2, performance elasticity analyses of
subscribers' installation and repair expectations and actual performance
perceptions were conducted. T-tests were used to determine if subscriber
expectations were different across the three comparison standards. Frequency
distributions for the installation and repair satisfaction items were analyzed
to answer research question 3 and to provide context for interpreting the
elasticity analyses. Research questions 4 and 5 were also answered with
frequency distributions of subscribers' responses.
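Because each subscriber answered all of the expectation questions, the t-tests on the comparison standards are paired-samples tests. The sketch below illustrates the statistic with invented responses (coded in days), not the study's data, using only the Python standard library.

```python
# Sketch of a paired-samples t-test for two expectation standards
# (e.g. expected vs. equitable completion time, coded in days).
# The paired responses below are hypothetical, not the survey's data.
import math
from statistics import mean, stdev

def paired_t(sample_a, sample_b):
    """Return the paired-samples t statistic and degrees of freedom."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Ten hypothetical subscribers, each answering both questions:
expected  = [0, 1, 1, 1, 2, 1, 0, 1, 2, 1]   # "how long would you expect"
equitable = [1, 2, 2, 1, 3, 2, 1, 2, 3, 2]   # "a reasonable amount of time"

t, df = paired_t(expected, equitable)
print(f"t({df}) = {t:.2f}")  # prints: t(9) = -9.00
```

A large negative t here means the expected times are reliably shorter than the equitable times, the same pattern the study reports for its three standards.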
Results
Table 1 about here
Table 1 reports the frequency distributions and mean scores for the
expected, equitable, and minimum tolerable expectations measures along with the
perceived actual performance for cable installation. Examining the mean scores
for the expectation measures, it is apparent that subscribers expect more than
they think would be equitable or the minimum tolerable. In other words, they
expect faster installation than what they think would be reasonable or the
maximum acceptable amount of time. The mean score differences among the
comparison standards are all statistically significant (p < .001), further
supporting the belief that the expectation measures are conceptually different.
Regarding actual performance, it is interesting to note that the average
actual time from calling to order cable to completion of the installation is
reported as less than the mean minimum acceptable and equitable expectation
measures but greater than the expected (or predicted) installation time. Thus,
the performance elasticity analysis suggests that although actual performance
does not meet subscribers' expected performance, the cable system operator may
still have some leeway in its performance before subscriber satisfaction is
significantly reduced. System performance in cable installation exceeds (is
completed more quickly than) what subscribers consider the minimum tolerable
level and more demanding reasonable level of performance. Moreover, as
subscribers' satisfaction with installation indicates, the actual performance
shortfall versus expected installation performance does not seem to matter much.
A very respectable 88% of respondents reported they were very (55%) or somewhat
(33%) satisfied with the installation completion time. If increasing subscriber
satisfaction with installation is important to management then the system
operator may want to devote more resources (e.g. manpower) to this service.
Table 2 about here
Table 2 reports the performance elasticity analysis for repair service.
When examining the mean scores for the expectation measures, it is once again
apparent that subscribers expect more than they think would be equitable or the
minimum tolerable. In other words, they expect faster repair service than what
they think would be reasonable or the maximum acceptable amount of time. And,
as with the comparison standard measures for installation, the mean score
differences among the comparison standards are all statistically significant (p
< .001). Clearly, the three comparison standards are measuring different
conceptualizations of expectation.
In this case, subscribers' reported actual performance on repair service
exceeds (repairs are completed more quickly than) all three measures of
expectation for
repair service time. And respondents' satisfaction ratings support the cable
system's strong performance on this service attribute. Nearly 91% of
respondents reported they were very (66%) or somewhat (25%) satisfied with the
time until repairs were completed. Since current repair service exceeds the
more demanding expected level of performance, the system operator could, if
desired, divert some resources from repair service to other service areas
(e.g., improving installation times) without jeopardizing its high customer
satisfaction rating for repair service.
Table 3 about here
The data addressing research question 4 are presented in Table 3. They
suggest day and time planning decisions should be relatively straightforward and
essentially the same for installation and repairs. For both installation and
repairs, subscribers overwhelmingly prefer services to be performed weekdays,
either between 8am and 6pm or after 6pm. About 9% prefer service calls weekdays
before 8am and 9% (repairs) and 13% (installation) prefer Saturday mornings.
Saturday afternoons and Sundays are apparently off limits for service calls.
Figure 1 about here
Figure 1 reports the data on subscribers' preferences for enhanced customer
service availability (RQ5). With regard to the availability of telephone
service representatives, nearly 65% of subscribers feel it is very or somewhat
necessary for telephone representatives to be available on Sunday. However,
just 39% feel it is very or somewhat necessary for telephone representatives to
be available overnight. For in-person counter service, 75% believe it is very
or somewhat necessary for counters to be open Saturday afternoons. Only 47% of
respondents believe it is very or somewhat necessary for in-person counters to
be established in towns currently without their own. Just 28% want service
counters open on Sundays.
Discussion
In a competitive business environment, seeing a product or service from the
consumer's point of view is essential. And in the Connecticut market,
competition among telephone, cable, and direct broadcast satellite companies
has arrived.
This paper reports information collected on cable subscribers' expectations
for and perceptions of actual installation and repair service performance.
Stated preferences for service call days/times and enhanced access to customer
service representatives were also examined. The results have a number of
implications for cable system management and expectations-driven satisfaction
research.
The performance elasticity analyses reveal a range of subscriber
performance expectations for cable installation and repairs. Performance within
the range, and certainly above the minimum tolerable level, is likely to support
subscriber satisfaction. These results suggest that this cable system is doing
well on both installation and repair. Subscriber satisfaction for both is high
although subscribers' perception of installation performance places it at about
the reasonable (equitable) level of expectations. With this information, cable
system management can make intelligent decisions about the allocation of its
resources.
It is interesting to note that not only are subscriber expectations higher
for repairs across all three comparison standards, but the minimum tolerable
standard (maximum acceptable amount of time) for repairs is faster than the
expected standard for installation. This obviously reflects the importance
subscribers place on repair performance and is not surprising since a cable
service interruption is disruptive to a household's established media
consumption patterns. So, for this cable operator, even though its repair
performance is strong, the importance of timely repairs to its subscribers
militates against allocating resources away from repair service. If management
feels it is important to improve installation performance those resources should
probably not come from repair service.
Management's concern with servicing subscribers' repair and installation
needs also drove its desire to ascertain day and time preferences for service
calls. The straightforward stated preference data can help to synchronize the
system's manpower scheduling with subscriber preferences. For example, 35% of
cable subscribers desire installation/repair service weekdays between 8am and
6pm and roughly 40% prefer installation/repair service weekdays after 6pm. To
satisfy these preferences the system operator now knows, for better or worse,
that a split work schedule for service technicians is needed.
Similarly, management's desire to maintain high standards of customer
service led to tentative plans for enhancements in subscriber access to customer
service representatives. Again, the stated preference data suggested, for
example, the addition of Saturday counter hours would be well received but that
Sunday hours would go largely unappreciated. Indeed, management's plans to open
additional service counters in towns without their own (two counters were open
among the six towns within the service area) were canceled as a result of this
research. While the results did reveal moderate interest in additional customer
service counters, a cross-tabulation and chi-square test of the data did not
reveal a significant association between subscriber preference and town of
residence. Since residents of the four towns lacking service counters were no
more likely to desire additional counters than those in towns with counters,
management decided to forgo its plan.
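The cross-tabulation and chi-square test behind that decision can be sketched directly. The 2x2 counts below are invented for illustration and, like the study's data, show no significant association between town type and counter preference.

```python
# Sketch of a chi-square test of independence on a contingency table.
# Counts are hypothetical, not the study's data.
def chi_square(table):
    """Chi-square statistic for an r x c contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: town has a service counter / town lacks one
# Columns: wants additional counters / does not
counts = [[40, 60],
          [45, 55]]
stat = chi_square(counts)
df = (len(counts) - 1) * (len(counts[0]) - 1)
print(f"chi-square({df}) = {stat:.2f}")  # prints: chi-square(1) = 0.51
```

With 1 degree of freedom the critical value at p = .05 is 3.84, so a statistic this small would not justify opening new counters.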
On a conceptual level, the performance elasticity approach to evaluating
customer expectations proved to be very telling. Each comparison standard was
measuring a different level of expectation and this shed light on subscribers'
limits of service performance. One finding of particular note was that
subscribers' "expected" standards always exceeded their equitable standards.
In other words,
their expectations exceed what they believe is reasonable performance. This may
simply be an indication of these subscribers' demanding nature or, perhaps, the
respondents were really expressing their "ideal" level of responsiveness. To be
sure, cable system operators must work to exceed the equitable standard.
The elasticity analysis offers another benefit. Each expectation level can
be translated by management into service performance goals for day-to-day cable
operations. (Of course performance elasticity analysis can be used for studying
expectations and satisfaction in any industry.) Also, as Brandt (1992) noted,
these standards can now serve as a baseline for expectations tracking and
monitoring of subscriber satisfaction.
Finally, limitations of this study should be noted. Since the sample was
drawn from just a single cable system, these results cannot be confidently
generalized beyond the cable system's subscribing population. Subscribers'
service expectations and preferences are likely to vary widely across cable
systems. Still, for the practical purposes of this study, the local nature of
cable systems demands a local, system specific sample.
In addition, for cable management's benefit, analysis of the data was
limited to the most basic statistical techniques. Greater understanding of the
predictive ability of the comparison standards would likely be derived from a
subtractive disconfirmation analysis of the actual performance, expectations,
and service satisfaction measures.
References
Brandt, D.R. (1992, January). Designing questionnaires to gauge customer
requirements and satisfaction. Paper presented at an Institute for
International Research Conference, Orlando, FL.
Dutka, A. (1994). AMA Handbook for Customer Satisfaction. Lincolnwood,
IL: NTC Business Books.
Frey, J.H. (1989). Survey Research by Telephone (2nd ed.). Newbury Park,
CA: Sage Publications.
Keveny, B. (1997, March 31). A surfin' safari. The Hartford Courant, p.
Kirkpatrick, D.D. (1997, March 11). SNET is offering cable-TV service in
Connecticut. The Wall Street Journal, p. B6.
Myers, J.H. (1991, December). Measuring customer satisfaction: Is meeting
expectations enough? Marketing Research, 35-43.
Norusis, M.J. (1990). SPSS Base System User's Guide. Chicago: SPSS.
Robichaux, M. (1995, March 8). Viewers' horror stories cast cable TV as
villain. The Wall Street Journal, pp. B1, B12.
Spreng, R.A. & Olshavsky, R.W. (1992). A desires-as-standard model of
customer satisfaction: Implications for measuring satisfaction. Journal of
Consumer Satisfaction, Dissatisfaction and Complaining Behavior, 5.
Telecommunications Act of 1996 (Pub. Law 104-104).
Tse, D.K. & Wilton, P.C. (1988). Models of consumer satisfaction
formation: An extension. Journal of Marketing Research, 25, 204-212.
Wilke, J.R. (1997, January 6). FCC sees rise in competition going slowly.
The Wall Street Journal, p. B6.
Woodruff, R.B., Cadotte, E.R., & Jenkins, R.L. (1983). Modeling consumer
satisfaction processes using experience-based norms. Journal of Marketing
Research, 20, 296-304.
Zeithaml, V.A., Parasuraman, A., & Berry, L.L. (1990). Delivering Quality
Service: Balancing Customer Perceptions and Expectations. New York: The Free
Press.
Table 1: Cable Installation Performance Elasticity
Expectations / Experience
Type of Service
Cable installation
Average # of days
Table 2: Cable Repair Service Performance Elasticity
Expectations / Experience
Type of Service
Cable repair service
Average # of days
Table 3: Day and Time Preferences for Service
Calls to the Home
Type of Service
Wkdays before 8am
Wkdays 8am to 6pm
Wkdays after 6pm
Sat. before noon
Sat. after noon
Sun. before noon
Sun. after noon
Weekday time period
8am to 10am
10am to noon
noon to 2pm
2pm to 4pm
4pm to 6pm