Immersive 360-Degree Panoramic Video Environments: Research on
"User-Directed News" Applications
Larry Pryora, Susanna Gardnera, Albert A. Rizzob and Kambiz Ghahremanib
a Annenberg School for Communication, School of Journalism, University of
Southern California, 3502 Watt Way, Los Angeles, Calif. 90089-0281, USA
b Integrated Media Systems Center, University of Southern California, 3740
McClintock Ave, EEB 131, Los Angeles, California, 90089-2561, USA
Cultural critics note that electronic media's "new expressive technology"
marks a democratization of discourse and a revival of rhetorical practice.
"The oral world returns in hyperliterate form," Lanham argues. And, as
Ong points out, for an oral culture, "learning and knowing means achieving
close, empathetic communal identification with the known." For the early
Greeks, to listen to Homer or other oral performers was an empathetic and
participatory act, allowing the audience members to shape the story until
they were satisfied. This is an entirely different experience from the
objectively distanced, passive reaction one has today from reading text or
watching 2-D video. In the oral world, to use Ong's phrase, one becomes
immersed in the story, "encased in the communal reaction, the communal 'soul.'"
This freedom from the constraints of linear textual discourse and video
presentation helps explain the global hunger for virtual environments. But
it is as if we have emerged from a flat black-and-white world into the 3-D
spaces of the Internet without a map and no rules or recognizable traffic
signals. "While art history, geography, anthropology, sociology, and other
disciplines have come up with many approaches to analyze spaces as a
static, objectively existing structure, we do not have the same wealth of
concepts to help us think about the poetics of navigation through
(computer) space," Manovich laments.
The realism of new technology has opened up a set of aesthetic
possibilities, as the movie and digital game industries are eagerly – and
profitably – demonstrating. But the ability to use 3-D computer graphics to
construct virtual spaces, based on existing social spaces, also has great
promise for journalism. The non-fiction storyteller can observe and
digitally record actual physical spaces, such as neighborhoods within
cities and their inhabitants, preserve these spaces accurately and "without
succumbing to illusionism; the virtual representation encodes the city's
genetic code, its deep structure rather than its surface." The viewer
can then float through this virtual world and engage actively in the story
by controlling the narrative flow.
Computer graphic modeling systems and advanced graphic interfaces, such as
the head-mounted display (HMD), create a sense of presence that is more
immediate and realistic than traditional media. Technology now allows a
journalist to craft and depict virtual social environments for the viewer
to enter and experience first-hand, to create an immersive presence that
permits engagement with the environment. The key role of the journalist
becomes that of creating an accurate and comprehensive visual experience, a
sense of "realism" that meets the user's desire for immediacy, and a
natural setting that invites exploration.
A well-told text or video story can also allow the reader or viewer to have
a sense of "being there." Well-designed graphics in print or television
enhance that experience. But these are passive acts. Consciousness always
hovers above the external, objectified code or image, threatening to
intervene at the slightest break in concentration or sensory distraction.
As readers and 2-D viewers, we skate across this perspective, which is
essentially arbitrary, since it is the creation of an external
writer/director/videographer and "pushed" at the audience according to
long-established codes of interpretation, such as text or voice-over, image
choice and cropping, juxtaposition of images to enhance meaning, and
story genre and color.
The ability to create virtual reality (VR) represents a fundamental
departure from linear text code and ritualized video concepts. In
particular, recent advances in Panoramic Video (PV) camera systems have
produced new methods for the creation of virtual environments. With
these systems, users can capture, play back and observe pictorially
accurate 360-degree video scenes of "real world" environments. When
delivered via an immersive HMD, an experience of presence within captured
scenarios can be available to a news audience seeking realism and immediacy.
Virtual reality systems surround the viewer with a computer-generated
image. As Jay David Bolter and Richard Grusin note, "virtual reality should
come as close as possible to our daily visual experience. Its graphic space
should be continuous and full of objects and should fill the viewer's field
of vision without rupture." The space created by the journalist allows a
freedom of movement that becomes the defining quality of new media: user
control of the point of view. This sense of control promotes the creation
of a virtual self, a sense of being there and doing things at a level that
engages the unconscious – the non-verbal, graphics-dominated realm of
understanding that is lauded by postmodernists, empirically mapped by
cognitive psychologists and increasingly defined through semiotics.
"What makes interactive graphics unique is that the shifts (in perspective)
can now take place at the viewer's will," Bolter and Grusin point out.
Such a "pull" technology is contrasted against traditional video in which
the view is controlled at the source and is identical for all observers.
Along with other computer graphic modeling methods, PV overcomes the
passive and structured limitations of how images are presented and
perceived. The recent convergence of camera, processing and display
technologies makes it possible for a user to have a choice in viewing
direction and focus.
This freedom has metaphorical significance because it allows situated
viewing and the ability to transport oneself mentally – and emotionally and
even physically, by means of haptics – into another environment. Viewer
involvement and connectedness, for example, take a quantum leap over the
experience provided by talk shows, in which the viewer can participate via
e-mail or by phoning in, or reality television, in which the camera
placement offers viewer involvement in every moment of the show
participants' lives. Broadcast talk shows and reality television still
leave the viewer at the mercy of each medium's ability to "both show and
conceal, reflect and distort the realities which they represent." By
contrast, a validly constructed (i.e. journalistically accurate and
comprehensive) virtual environment gives the viewer the ability to move
through that space in a way that defines a virtual self apart from the
journalist's point of view. As Bolter and Grusin note, this freedom can
serve a radical cultural purpose: "to enable us to occupy the position, and
therefore the point of view, of people … different from ourselves. To
occupy multiple points of view [serially if not simultaneously] becomes a
new positive good and perhaps the major freedom that our culture can
offer." This point-of-view identification allows the viewer to enter
the other person's world and relate to others in an empathetic way. VR
becomes a "path to empathy" or a "visual construction of empathy." This
gives the viewer a new way of knowing reality: "immediate, embodied,
emotional and culturally determined." VR allows the freedom to become
someone else, the highest degree of empathetic living and experience. This
"new kind of camera" means that the viewer can assume the journalist's
task of inquiry and explore a given social environment at will.
This does not make the journalist an obsolete appendage. Digital technology
preserves several traditional roles of the newsperson, in some cases
enhancing them, and adds new responsibilities. First, the journalist
continues to perform the job of forward scout, a social explorer who seeks
out settings and circumstances that the average person might not be aware
of or, for various reasons – danger, inconvenience, distance, logistics –
might not wish to enter. Second, the journalist is still a producer,
assembling the resources – vehicles, equipment, personnel, press passes and
permissions, background information, training, food and water,
communications, etc. – necessary to cover a story, to do "the shoot."
Third, the digital journalist acts as an information architect, placing the
story in a context and social setting that defines the physical space,
providing a structure in which the viewer can experience VR. Within this
space, the image changes in time as the story progresses. Fourth, the
writer or broadcaster sets up or anchors stories through leads,
introductions and voice-overs.
The VR environment opens the way for the journalist to play a more
important anchor role, to become the "immersant," allowing the "immersed"
audience to witness his or her journey through a virtual world. The viewer
can assume the point of view of the journalist, and that "immersant" then
becomes "a kind of ship captain, taking the audience along on a journey;
like a ship captain, she occupies a visible and symbolically marked
position," Manovich says. The journalist can also give meaning or
context to the VR images, much as the text of a caption will anchor the
meaning of a photograph. In digital multimedia journalism, the
journalist assumes an enhanced role of researcher/librarian by assembling
background information, related texts, stories, outside links, documents,
maps, graphics, video sidebars and a potentially limitless array of
contextual material for the viewer at any time he or she decides to freeze
the PV environment in time and seek supplemental information. Both the ship
captain role and how the PV perspective interacts with surrounding
contextual information will be subjects of further research by our
User-Directed News initiative.
A large gap now exists between the theoretical benefits of VR news
environments and their actual usability. This technology is only in its
preliminary phase. Ultimately, the use of PV content in presenting news
will depend on how viewers can best observe, interact with, enjoy and
benefit from dynamic PV scenarios. At this point, PV has limitations
regarding functional interactivity. Whereas users of a computer-graphics
virtual environment (VE) are usually capable of both six-degree-of-freedom
(6-DOF) navigation and interaction with rendered objects, PV immersion
allows mainly for observation of the scene from the fixed location of the
camera, with varying degrees of orientation control (i.e., pitch, roll and
yaw). In spite of this
limitation, the goals of certain application areas, including news
presentation, may well be matched to the assets available with this type of
PV image capture and delivery system. It is now capable of meeting the high
requirements for presenting real locations inhabited by real people.
Moreover, alternative methods to support "pseudo-interaction" are possible
by augmenting panoramic imagery with video overlays and computer-graphics
objects. This paper will briefly present the technical details of our PV
system, describe the scenarios we have captured thus far and highlight our
User-Directed News research program.
2. Brief system overview and technical description
Panoramic image acquisition is based on mosaic approaches developed in the
context of still imagery. Mosaics are created from multiple overlapping
sub-images pieced together to form a high resolution, panoramic, wide
field-of-view image. Viewers often dynamically select subsets of the
complete panorama for viewing. Several panoramic video systems use
single-camera images; however, the resolution limits of a single image
sensor reduce the quality of the imagery presented to a user. While still-image
mosaics and panoramas are common, we produce high-resolution panoramic
video by employing an array of five video cameras viewing the scene over a
combined 360-degrees of horizontal arc. The cameras are arrayed to look at
a five-facet pyramid mirror. The images from neighboring cameras overlap
slightly to facilitate their merger. The camera controllers are each
accessible through a serial port so that a host computer can save and
restore camera settings as needed. The complete camera system (Figure 1) is
available from FullView, Inc.
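The mosaicking step described above can be sketched in a few lines. This is a minimal illustration, not FullView's actual algorithm: it assumes the five frames are already rectified and color-matched, and simply cross-fades neighboring frames over their overlap columns, wrapping the last overlap back onto the first frame to close the 360-degree loop.

```python
import numpy as np

def stitch_panorama(frames, overlap_px):
    """Merge horizontally adjacent camera frames into one 360-degree
    panoramic strip, cross-fading linearly over each overlap region.
    Assumes the frames are already rectified and color-matched."""
    h, w, c = frames[0].shape
    step = w - overlap_px                 # horizontal advance per camera
    pano_w = step * len(frames)           # the last overlap wraps onto frame 0
    pano = np.zeros((h, pano_w, c), dtype=np.float32)
    weight = np.zeros((1, pano_w, 1), dtype=np.float32)
    # Per-frame blending weights: ramp up over the leading overlap,
    # ramp down over the trailing one; neighboring ramps sum to 1.
    ramp = np.ones(w, dtype=np.float32)
    ramp[:overlap_px] = np.arange(1, overlap_px + 1) / (overlap_px + 1)
    ramp[-overlap_px:] = ramp[:overlap_px][::-1]
    ramp = ramp.reshape(1, w, 1)
    for i, frame in enumerate(frames):
        cols = (np.arange(w) + i * step) % pano_w   # wrap past the seam
        pano[:, cols] += frame.astype(np.float32) * ramp
        weight[:, cols] += ramp
    return np.rint(pano / weight).astype(np.uint8)
```

With five frames of equal width, the resulting strip is five times the per-camera width minus the five shared overlaps, which is how a 3000-pixel-wide panorama arises from five NTSC-resolution sensors.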
The five camera video streams feed into a digital recording and playback
system that we designed and constructed for maintaining precise frame
synchronization. All recording and playback is performed at full video
(30Hz) frame rates. The five live or recorded video streams are digitized
and processed in real time by a computer system. The camera lens
distortions and colorimetric variations are corrected by the software
application and a complete panoramic image is constructed in memory. With
five cameras, this image has over 3000x480 pixels. From the complete
image, one or more scaled sub-images are extracted for real-time display in
one or more frame buffers and display channels. Figure 2 shows an example
of the screen output with a full 360° still image extracted from the video.
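The sub-image extraction can also be illustrated with a short sketch. The function below is a stand-in for the real display code, not a description of it: it treats the panorama as a flat strip and cuts out a window centered on a chosen viewing direction, using modular column indexing so that a view straddling the image seam still comes out contiguous.

```python
import numpy as np

def extract_view(pano, center_deg, fov_deg):
    """Cut a viewing window out of a 360-degree panoramic strip.
    `center_deg` is the viewing direction; column indices are taken
    modulo the panorama width, so the window may straddle the seam."""
    h, pano_w, _ = pano.shape
    win_w = int(round(pano_w * fov_deg / 360.0))
    start = int(round(pano_w * (center_deg % 360.0) / 360.0)) - win_w // 2
    cols = (start + np.arange(win_w)) % pano_w   # wrap-around indexing
    return pano[:, cols]
```

Driving several display channels amounts to calling this once per channel with that channel's viewing direction; scaling the extracted region to the output resolution would follow.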
The camera system was designed for viewing the images on a desktop
monitor. With a software modification provided by FullView Inc., we were
able to create an immersive viewing interface with a SONY Glasstron
head-mounted display (HMD). A single window with a resolution of 800x600 is
output to the HMD worn by a user. A real-time (inertial-magnetic)
orientation tracker is fixed to the HMD to sense the user's head
orientation. The orientation is reported to the viewing application through
an IP socket, and the output display window is positioned (to mimic pan and
tilt) within the full panoramic image in response to the user's head
orientation. View control by head motion is a major contributor to the
sense of immersion experienced by the user. It provides the natural
viewing control we are accustomed to, without any intervening devices or controls.
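The head-driven view control above can be sketched as follows. The line-based socket protocol here is purely hypothetical (the actual InterSense interface differs), and the vertical scale factor is an illustrative guess; the point is the mapping itself: yaw pans the extracted region around the strip with wrap-around, pitch shifts it vertically with clamping, and the region is then scaled to the 800x600 HMD window.

```python
import socket

PANO_W, PANO_H = 3000, 480     # full five-camera mosaic (approximate)
REGION_W, REGION_H = 800, 360  # region extracted before scaling to the HMD

def window_origin(yaw_deg, pitch_deg, px_per_deg_v=4.0):
    """Map head orientation to the top-left corner of the extracted
    region. Yaw pans horizontally with wrap-around; pitch nudges the
    region vertically and is clamped at the image edges."""
    x = int(round((yaw_deg % 360.0) / 360.0 * PANO_W)) - REGION_W // 2
    x %= PANO_W                                  # wrap past the seam
    y = (PANO_H - REGION_H) // 2 - int(round(pitch_deg * px_per_deg_v))
    y = max(0, min(y, PANO_H - REGION_H))        # clamp vertically
    return x, y

def track_loop(host, port):
    """Read 'yaw pitch roll' lines (degrees) from a hypothetical
    tracker server and yield a window origin for each report."""
    with socket.create_connection((host, port)) as conn:
        for line in conn.makefile("r"):
            yaw, pitch, _roll = map(float, line.split())
            yield window_origin(yaw, pitch)
```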
3. Exploratory field testing and user testing
The capture, production and delivery of PV scenarios present unique
challenges. Application development decisions require informed
consideration of pragmatic issues involving the assets/limitations that
exist with PV scenarios, the requirements of the application and how these
two factors relate to user capabilities and preferences. Based on our
initial field-testing experience, we outlined a series of guidelines and
recommendations for the creation of PV scenarios that appeared in Rizzo et
al. The areas covered in that paper dealt with pragmatic production
issues, determination of suitable PV content, display and user interaction
considerations, audio/computer graphic/PV integration issues and hardware
options for maximizing accessibility. These recommendations were based on
our experience in PV scenario production from a producer/developer
standpoint and from user feedback provided by approximately 400-500
individuals at the time. Since then, we have continued to collect user
feedback and have used this data to inform the design process in our
evolving PV application research and development program.
Field-trials with the PV camera and user testing with acquired content have
been conducted across a range of scenarios to explore feasibility issues
for using this system with a variety of user applications. The following
test scenarios were captured in order to assess the PV system across a
range of lighting, external activity, camera movement and conceptual
conditions. Informal evaluation of users' responses to these scenarios has
been conducted with controlled experiments currently underway for some of
the applications. Our PV scenarios have included:
1. An outdoor mall with the camera in a static position in daytime lighting
with close background structures and moderate human foot traffic, both
close-up and at a distance.
2. An outdoor ocean pier with the camera in a static position with both
long shots of activity on a beach and close-up activity of human foot
traffic and amusement park structures on the pier.
3. The interior of an outside facing glass elevator with the camera in a
static position and the elevator smoothly rising 15 floors from a low light
position (e.g., tree-shielded street level) to more intense lighting as the
elevator ascended above the tree line.
4. Traveling on a canyon road with the camera mounted in the bed of a
pickup truck for 30 minutes at speeds ranging from 0-40 mph under all
daylight ranges of lighting (low shaded light to intense direct sun).
5. Same as #4, except at night on a busy well-lit street (Sunset Boulevard
in Los Angeles), and on a freeway traveling at speeds from 0-60 mph.
6. A USC football game within the Los Angeles Coliseum from both static and
moving positions in daytime lighting, with extreme close-ups of moving
people and massive crowd scenes (40,000+ people).
7. An indoor rock concert in a theatre (Duran Duran) from a static position
under a variety of extreme lighting conditions in the midst of an active
crowd, slightly above average head level.
8. Two artistic projects were done in collaboration with the UCLA Digital
Media Arts Department and the USC School of Fine Arts. The UCLA project
involved the capture of dancers performing around the 360-degree field of
view of the camera. Significant post-production work took place to display
the panoramic capture within an immersive theatre that incorporated live
dancers in a mixed reality installation. The USC project involved building
a circular fish tank around the camera with live tropical fish swimming
within and a coral reef photo serving as background on the outermost tank
wall. The users wore an HMD that helped to create the illusion of being
immersed within the swimming fish environment for one minute. Following
this sequence, the coral reef photo background was manually removed to
reveal the activity in the laboratory where the capture occurred, creating a
dramatic "breaking of the illusion" effect. This application also served as
an early test for a future project in which the panoramic camera will be
placed within a sealed Plexiglass tube and lowered into a very large
commercial aquarium exhibit.
9. Thirteen scenarios were created in an indoor office space for an "anger
management in the workplace" application. In these scenarios, actors
portrayed agitated and insulting co-workers who addressed the camera (and,
by extension, the clinical user wearing the HMD) with provocative and hostile
verbal messages (Figure 3). The scenarios were designed to provide role
playing environments for patients undergoing psychotherapy for issues
relating to anger management in the workplace, commonly referred to as
"Desk Rage." The patients wearing the HMD in these
scenarios have the opportunity to practice appropriate responses to the
characters and employ therapeutic strategies for reducing rage responses.
Traditional methods of therapy in this area have mainly relied on guided
imagery or role-playing with the therapist. It was hypothesized that PV
content could serve to create immersive simulations that patients will find
more realistic and engaging, and research is currently underway to assess
this with clinical users at the VRMH Center in San Diego, CA.
10. A Virtual "Mock-Party" with the camera in a static position in the
center of an indoor home environment in the midst of an active party with
approximately 30 participants (Figure 4). This "scripted" scenario was shot
while systematically directing and controlling the gradual introduction of
participants into the scene and orchestrating their proximity and
"pseudo-interaction" with the camera. The scenario was created for a
therapeutic application designed to conduct graded exposure therapy
with social phobics. We have also experimented with pasting "blue screen"
capture of actors (using a single video camera in the lab) into the
panoramic scenes. The actors address the camera with a spectrum of socially
challenging questions that provide the clinical user with opportunities to
practice social verbal engagement in a psychologically safe environment.
The separate capture and pasting of characters will allow the therapist to
introduce a progressively more challenging level of social stress to the
patient when deemed appropriate based on therapist monitoring of patient
self-report and physiological responses. User testing on this project with
clinical populations is anticipated to begin in June, 2003.
4. User-Directed News research program
The User-Directed News project is based on the idea that as journalism
moves into the 21st Century, new forms of information technology (IT) stand
to revolutionize methods for acquiring, packaging, organizing and
delivering newsworthy information content. With these advancements in IT
will come both opportunities and challenges for creating systems that
humans will find to be usable, useful and preferred options for interacting
with news content. However, a number of pragmatic and user-centered
questions must be addressed scientifically before the value of this system
can be determined.
Research Design Summary - The User-Directed News project at IMSC and the
Annenberg School seeks to address these production problems and other
limitations of PV technology.
The research is ongoing and cumulative, based on our previous technical
experience, lessons learned in the field and through usability testing.
Equipment problems are being addressed before our next field project, for
example with easily assembled modular units and mobile power sources.
Future production will also include more extensive and refined
"shells" of information for the viewer to interact with, as well as the
integration of advanced database retrieval methods. Our approach is
multidisciplinary, involving journalism, cognitive psychology,
communications theory and engineering.
As the initial phase of our research into news applications, on Sunday, January
12, 2003, we loaded our Panoramic Camera and supporting computers and other
equipment into a panel van and took it to a block of downtown Los Angeles
between 4th and 5th streets and Towne Avenue, the center of the city's
large homeless population. We had a crew of two principal investigators
(Pryor and Rizzo), two co-investigators with advanced technical and
graphics skills (Gardner and Ghahremani), two journalism graduate students
(Michael Fanous and Naomie Worrell), who have broadcast experience, and
three production assistants.
We chose this physical space and social environment for several reasons. It
was a scene of harsh human deprivation: a street lined on both sides with
tents and temporary shelters, mainly cardboard and blankets, arrayed along
the sidewalks. The streets were bordered with shuttered
warehouses and parking lots, with one active non-governmental mission in a
hotel-like building at the western end of the block. The view was one of
clutter, dirt and grime. That Sunday was a hot, sun-driven day, but it is a
climate that can – and did – change within hours to cold and rain. The stark
conditions of this homeless population are, in themselves, a compelling story,
one that requires understanding, analysis and empathy to correctly
comprehend. Beyond the physical and social scene, the story has powerful
socio-economic overtones as the city of Los Angeles seeks to "improve"
downtown and expand its redevelopment program into parts of the Central
Core occupied by homeless people for many decades, if not since the city
was founded in the 18th Century. The city's economic plan calls for
developers to take control of an expanding Central Business District and
convert the land uses from warehouses and light manufacturing (mainly
garment lofts) to residential apartments, retail, commercial and
entertainment uses. To accomplish this requires moving the homeless
population, the legal justification being the trespassing on sidewalks,
loitering, public health "threats" and crime (drugs, prostitution and other
illegal commerce). This is a politically, economically and socially
contentious issue of great complexity and emotion, in other words, an
important news story. Parts of this story have been covered by the Los
Angeles Times and other local news outlets, with notable variations in
thoroughness, accuracy and cultural sensitivity. (One person familiar with
the Towne Avenue scene said that TV film crews would drive up the street in
pickup trucks, shooting video from the truck bed without stopping or asking
permission. One film crew, he said, had come through a few days before we
did our video work with a production assistant in the back of the truck
armed with a water squirt gun "to rile people up as they drove by.")
Two of our Annenberg School of Journalism students, Fanous and Worrell, had
done major projects focused on the plight of LA's homeless population.
Their expertise and willingness to film the scene on Towne with our
360-degree camera strongly influenced our selection of this location. But
the scene had another element that will be an important part of our ongoing
investigation of User-Directed News: the ability of PV technology to
capture symbolism. The Towne Avenue block is a highly symbolic
multicultural, multiracial environment that has mythical overtones, both
laudable, for the emotions of empathy that Skid Row can evoke, and morally
deplorable for the negative emotions it can trigger – insensitivity,
bigotry and violence with tinges of "ethnic cleansing." Only two weeks
after we shot the scene on Towne, L.A. police officers descended on the
street, accompanied by city trash trucks, and confiscated and removed all
of the tents and temporary shelters and dispersed the street dwellers on
threat of jail for violating an ordinance against sleeping or sitting on
sidewalks. (Copies of the ordinance were posted on warehouse walls along
the street.) Clearly, this was a scene in which iconic symbols and myths,
both good and bad, played a strong and visible role. Without going into the
theory of signs in this paper, we were aware of the symbolic elements of
the scene, of the semiotic possibilities embraced by this location and the
importance this would play in whatever script we wrote or images we
captured that day.
We parked our van in an alley next to the mission at the corner of 5th and
Towne and placed our camera in the middle of the street. Ghahremani and
Gardner remained in the van to run the computers, Worrell took up station
in front of one of the camera's five lenses to deliver her script and
Fanous sat under the camera, out of its field of vision, to hold cue cards
for Worrell. Pryor and Rizzo talked with the street's residents to explain
what the filming project was about and to win their support and agreement
to do the filming. It became evident from this interaction, brief as it was, that
this group of people had established a cohesive community with a strong
identification and purpose of self-protection, as well as good internal
communication. The message spread up and down the street that we were from
nearby USC and were working on a research project. Not everyone agreed that
we should tape but majority ruled, and we were allowed to proceed. The
actual taping, involving one aborted start and two run-throughs, took about
The videotapes were later edited and the five video images combined by
Ghahremani and Rizzo for display on a PC console and for use in the
head-mounted device. Two perspectives resulted, one a traditional 2-D
perspective using one camera, which focused on Worrell as she delivered her
narrative and showed the scene behind her; the other was a 360-degree
panorama, each of the five images being joined seamlessly to produce a
realistic duplication of the scene from the fixed location of the camera.
Worrell became only one element within the computer space.
Methodology - The current study will compare the memory performance of
two groups of 30 undergraduate research subjects, aged 20-40, following the
presentation of the mini-news documentary in two different viewing formats.
Condition 1 will have users view the two-minute news story in a
"traditional" single frame flatscreen viewing format. This group of users
will have access to the one field of view containing the reporter's
delivery of the story, as is common practice in a standard on-the-scene
reporting approach. Condition 2 users will have access to view the complete
360-degree arc of the environment from where the news story was reported.
Users in this condition will view the news story from within a head-mounted
display and have free choice to observe the PV scene from any perspective
within the 360-degree arc. Condition 2 users will also hear the exact same
verbal delivery from the reporter as presented in Condition 1.
Following exposure to the 2-minute story, users in both groups will be
tested on multiple measures of memory (recall and recognition) for the
information presented in the story and on user preference for use of the
system. Memory for the content of the news story will also be tested again
one week later. Users in both conditions will be compared on preference for
viewing format and on Presence ratings using the Witmer and Singer Presence
Questionnaire. In addition, head-tracking data from users in Condition 2
will be quantified to produce a metric of exploratory behavior within the
360 degree PV scene. These metrics will examine the total distance
traversed within the 360-degree arc and the amount of time that users spend
focusing on the reporter compared with total "off-reporter" exploration.
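The proposed head-tracking metrics can be sketched directly from their description. This is a minimal illustration under assumed parameters (the sampling rate and the angular window counted as "on reporter" are placeholders, not the study's actual values): total angular distance uses the wrap-aware smallest signed difference between consecutive yaw samples, and dwell time counts samples whose view direction falls within the reporter's window.

```python
def exploration_metrics(yaw_samples_deg, reporter_yaw_deg=0.0,
                        window_deg=30.0, sample_hz=30.0):
    """Summarize head-tracking data from the HMD condition: total
    angular distance swept (degrees) and seconds spent with the view
    centered within +/- window_deg/2 of the reporter."""
    def wrap(d):  # smallest signed angular difference, in (-180, 180]
        return (d + 180.0) % 360.0 - 180.0
    total_deg = sum(abs(wrap(b - a))
                    for a, b in zip(yaw_samples_deg, yaw_samples_deg[1:]))
    on_reporter = sum(1 for y in yaw_samples_deg
                      if abs(wrap(y - reporter_yaw_deg)) <= window_deg / 2.0)
    return total_deg, on_reporter / sample_hz
```

The wrap-aware difference matters because a head turn from 350° to 10° should count as 20 degrees of motion, not 340.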
This design will allow for the comparison of groups on immediate
acquisition/retention of content and on long-term recall/recognition
retrieval. We hypothesize that the sense of "being there" or "presence"
will be enhanced in Group 2 by way of using an immersive HMD, and that this
added engagement will increase long term recall by providing better
contextual retrieval cues that leverage episodic memory processes. While
the groups may not differ on measures of immediate memory, due to competing
distraction effects nullifying immersion-based gains in the HMD condition,
we predict that when subjects are tested one week later, the contextual,
episodic memory and presence effects will operate to produce much better
delayed recall in the HMD condition.
Early results of this first phase of research will be available in July.
Some of the basic questions that this methodology is designed to address
include:
- Will users generally prefer to have news delivered in the 360-degree HMD
format?
- Do immersion and self-selection compel the user to prefer this method of
being "involved" in the story?
- Will reporters be able to adapt to this more "free form" method of
reporting, and what challenges will this produce for reporters in delivering
"stories" to users who may not choose to follow the information flow in a
traditional fixed "linear" manner?
- Will choice of viewing interfere with the acquisition of the logical
story line in a news report?
- Will users be able to recall key points of the reported event in a
self-directed viewing format?
- Will long-term memory be enhanced in the immersive HMD condition?
- What types of news events would this system be best suited for in terms
of user preference and information-processing issues, and what are the key
elements of newsworthy events that might predict successful outcomes for
use of the system?
- Will users naturally explore the 360-degree environment and choose to use
this freedom of perspective?
Future research will deal with applications of increasingly advanced PV and
multimedia technology to news scenarios. The goal will be to create virtual
news spaces that will allow the viewer to "move in, around, and through
information," to use Bolter's description. We will eventually have an
"interfaceless" interface "in which there will be no recognizable
electronic tools – no buttons, windows, scroll bars or even icons as such.
Instead the user will move through the space, interacting with the objects
'naturally,' as she does in the physical world."
We will develop another news scenario in fall 2003 and conduct usability
tests with a larger population that will be more representative of the
general population. The content "shells" surrounding the virtual space will
also be complete and helpful to the viewer. In addition to seeking to
answer the questions listed above, we will record and analyze how the
viewer interacts between the virtual and archival worlds and the preferred
paths of navigation, points of view and perspectives. We will also measure
degrees of connectedness or empathy.
The development and adaptation of PV technology to news scenarios should be
a matter of some urgency to traditional publishers and broadcasters who,
many readership surveys show, are in danger of losing a substantial segment
of their younger audience to the Internet and alternative sources of
entertainment, especially digital games and music. The key to survival will
be their ability to connect with the Under 35 audience and the vital teen
market. This age group "gets" VR. Surging game sales indicate a cultural
hunger for this perspective within computer space. Whether the young
viewers are to be enticed into non-fiction scenarios will depend on how
actively journalists adopt advanced technology such as PV systems. Media
survival is not all that is at stake. A worldwide generation of future
citizens may be lost to PV scenarios that have little relevance to social
and political skills or may even be antithetical to them. If that comes to
pass, Plato will have been proved right – the cultural swing away from
linear literacy and into orality and rhetoric allowed the poets to
trivialize education and destroy the state.
References
Richard A. Lanham, The Electronic Word: Democracy, Technology and the Arts, The University of Chicago Press, Chicago, 1993, p. 214.
Walter J. Ong, Orality and Literacy (New Accents), Routledge, London and New York, 2002, p. 45.
Lev Manovich, The Language of New Media, The MIT Press, Cambridge, 2002, p. 259.
Ibid., p. 260.
Nick Lacey, Image and Representation: Key Concepts in Media Studies, St. Martin's Press, New York, 1998, pp. 32-35. This is a concise summary of standard video techniques.
James, M.S. (2001), "360-Degree Photography and Video Moving a Step Closer to Consumers." Retrieved March 23, 2001.
Jay David Bolter and Richard Grusin, Remediation: Understanding New Media, The MIT Press, Cambridge and London, 2000, p. 22.
Ibid., p. 243.
Jonathan Bignell, Media Semiotics: An Introduction, Manchester University Press, Manchester and New York, 1997, p. 153.
Bolter and Grusin, Remediation, p. 245.
Ibid., p. 246.
Manovich, Language, p. 261.
Bignell, Semiotics, p. 95.
Nayar, S.K. (1997), "Catadioptric Omnidirectional Camera," Proc. of IEEE Computer Vision and Pattern Recognition (CVPR).
FullView.com Inc. (2003). Retrieved February 15, 2003.
Intersense Inc. (2003). Retrieved February 15, 2003, from www.isense.com.
Rizzo, A.A., Neumann, U., Pintaric, T. and Norden, M. (2001). "Issues for Application Development Using Immersive HMD 360 Degree Panoramic Video Environments." In M.J. Smith, G. Salvendy, D. Harris, & R.J. Koubek (Eds.), Usability Evaluation and Interface Design (pp. 792-796). New York: Lawrence Erlbaum.
Daw, J. (2001), "Road Rage, Air Rage and Now 'Desk Rage,'" Monitor of the American Psychological Association, 32 (7).
Rothbaum, B.O. & Hodges, L.F. (1999), "The Use of Virtual Reality Exposure in the Treatment of Anxiety Disorders," Behavior Modification, 23.
Witmer, B.G. & Singer, M.J. (1998). "Measuring Presence in Virtual Environments: A Presence Questionnaire," Presence: Teleoperators and Virtual Environments, 7 (3), 225-240.
Bolter and Grusin, Remediation, p. 23.