
Travels in Cyberspace

Travels in virtual reality (computer cyberspace)

SINCE SPRING OF '89 I've made the rounds of the cyberspace circuit, from
Autodesk's "Weird Science" rollout in Anaheim on "VR Day" in June, to the
near-riot at Pacific Bell's Texpo in San Francisco the next day, when Jaron
Lanier showed off his "Reality Built for Two" in a secret demonstration
room. I've visited most of the key research sites, from Mountain View,
California, to Chapel Hill, North Carolina, to Seattle, Washington, and back
to Sausalito. Between road trips, I reported some of my preliminary
observations on the WELL. Here are some reports from the outposts of
cyberspace, adapted from my WELL postings, with no real attempt to hang them
together into a framework.

DOCKING MOLECULES IN CHAPEL HILL

The primary research instrument of the sciences of complexity is the
computer. It is altering the architectonic of the sciences and the picture
we have of material reality. Ever since the rise of modern science three
centuries ago, the instruments of investigation such as telescopes and
microscopes were analytic and promoted the reductionist view of science.
Physics, because it dealt with the smallest and most reduced entities, was
the most fundamental science. From the laws of physics one could deduce the
laws of chemistry, then of life, and so on up the ladder. This view of
nature is not wrong; but it has been powerfully shaped by available
instruments and technology. The computer, with its ability to manage
enormous amounts of data and to simulate reality, provides a new window on
that view of nature. We may begin to see reality differently simply because
the computer produces knowledge differently from the traditional analytic
instruments. It provides a different angle on reality.
- Heinz Pagels, The Dreams of Reason

The University of North Carolina at Chapel Hill is the home of one of the most
important and longest-running VR research projects. Driving in from the
airport, I noticed that the motto on North Carolina license plates - "First
In Flight" - is appropriate to what I think of as the "Kitty Hawk" state of the
technology. The work at UNC with chemists and virtual model builders has
been going on for twenty years, and is yielding practical results. The
molecular-docking demonstration was a conversion experience for me, at a
point where I had grown skeptical about VR conversion experiences. I've been
excited by the VR demos I've seen for the last year, of course, but I can
see now that my initial excitement was amplified by my internal
extrapolation factor: I had already watched one computer revolution emerge
in Silicon Valley. I remember reading, in 1974, about a company that would
sell a microprocessor-based computer for personal use in a kind of
build-it-yourself kit: the now-legendary Altair from long-defunct MITS. The data
input on the Altair was accomplished by toggle switches, and the output
device was a small panel of indicator lights. The idea of having my own
computer seemed like a neat idea, but I was nowhere near the kind of
enthusiasm that would have forced me to shell out a couple hundred dollars
for a kit. A couple guys ten years younger than myself saw what something
like the Altair could become someday, and founded Apple Computer. I thought
about the Altair when I looked at that first, crude, monochrome wireframe
world at NASA/Ames. I knew I was looking at an Altair, and extrapolated that
by the time VR technology evolves to a Mac II level, these grainy,
time-delayed, cartoony "worlds" and the sense of presence they evoke might
truly become a level of reality. The sense of presence, not the inherent
sexiness of the virtual world, is the source of the conversion experience.
And that sense of actually being in another place - cyberspace - can be
enhanced by the proper use of sound, kinesthetic, and tactile feedback.
Conversion experiences in computer science, particularly in the realm of
computer interfaces, have driven the evolution of personal computers. A man
by the name of J.C.R. Licklider had a conversion experience with the PDP-1
in the early 1960s. The PDP-1 was the first interactive minicomputer. You
could use a light pen and interact with it directly. It was a puny computer
in today's terms, so there wasn't a great deal that could be done with it.
But Licklider saw its potential, and when he went to work for ARPA, funding
futuristic computer research, he ended up funding the development of the
interactive computing systems he had envisioned in a flash the first time he
sat down with a light pen and touched the screen of a PDP-1. Another
maverick computer scientist, originally supported by ARPA, later at SRI, and
now at Stanford, was also motivated by a conversion experience. One day in
1950 Doug Engelbart realized that the problems of the world were becoming
too complex for people to solve without technological assistance, and that
future computers might be used to amplify the power of human intellect, as
well as perform their original tasks of numerical calculation and data
processing. Engelbart's vision of computers that could augment human
intellect was a conceptual breakthrough triggered by a thought experiment
rather than a real experience with a computer, but it was based on his
experiences during the war, when he spent hours staring at radar display
screens. John Walker is another person with the vision to see the
development of virtual reality as a realistic technology to base an
industrial effort on. A legendary programmer and, as it turned out, a shrewd
entrepreneur, Walker was one of the founders and the president of Autodesk,
a company that has sold hundreds of millions of dollars' worth of programs
for doing computer-aided design (CAD) on personal computers. In 1988, riding
the enormous success of his company, he boldly proposed that Autodesk ought
to get into the cyberspace business. Walker's paper, published internally as
"Through the Looking Glass," was the story of one person's conversion
experience - a person who happened to have a successful software company to
speed development of his vision. Whenever I stop and think about it, I tend
to agree with the VR visionaries who see this as the biggest thing in
cultural transformation since the printing press. Every time I try it out
for myself, however, I find myself wishing for more visual details, less
time-lag when I move my head, more tactile presence. But the
molecular-docking demonstration I was given at the University of North
Carolina was the convincer for me. It felt like an "intuition amplifier" - a
means of augmenting intellectual capabilities for dealing with complexity.
And it isn't a technology that might be possible in 1995. It's here today.
The head-mount is one of several different displays for the docking setup.
There is a wall-size screen and a special display monitor that is viewed
through more conventional 3-D eyeglasses using electronically polarized
lenses and LCD screens. I used the eyeglasses, which quite effectively
displayed the colored clouds of pretzeled molecules depicting protein
receptor sites engulfing the maddeningly complex drug molecule, which was
represented as a tinkertoy-like complex made of solid balls or as a skeletal
structure of lines. The problem here is one of geometrical complexity: there
are far too many possible spatial configurations of drug molecules and
protein molecules for a chemist to find the optimum binding position by
conventional means. The big convincer of the docking demo is the arm, a
device that represents the force-fields that bind molecules together or
cause them to repel one another in terms of mechanical forces that you sense
by gripping a pistol-grip on the end of an electromechanical arm. The arm
descends from the ceiling in classic "sword of Damocles" style. I put on the
glasses, put my foot on a deadman switch, and held the grip. The trigger
grip activates the force feedback. Releasing the grip is like lifting the
mouse from the table. The molecular model of an actual anti-cancer drug
molecule (methotrexate) was already positioned inside the model of the
protein receptor site (dihydrofolate reductase).
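
To make the force display concrete, here is a minimal Python sketch of the
underlying idea, using a toy pairwise potential as a stand-in for the real
chemistry; the function names and constants are illustrative, not UNC's
actual code. The force the arm exerts is the negative slope of the
interaction energy:

    import math

    def interaction_energy(r, epsilon=1.0, sigma=1.0):
        # Toy Lennard-Jones potential standing in for the drug-receptor
        # force field; epsilon and sigma are illustrative constants.
        return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

    def felt_force(r, dr=1e-5):
        # The arm displays F = -dE/dr: strong repulsion when the molecule
        # is jammed too close, gentle attraction near the energy minimum -
        # the "pocket of relaxation" you can feel your way into.
        return -(interaction_energy(r + dr) - interaction_energy(r - dr)) / (2 * dr)

    for r in (0.95, 1.12, 1.5, 2.0):
        print(f"r = {r:.2f}   E = {interaction_energy(r):+8.3f}   F = {felt_force(r):+8.3f}")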

My job was to find an exact fit in which the two compounds could tightly
bind. The arm has six degrees of freedom, and exerts enough force to tire
your arm if you actively wrestle with a molecule for many minutes. I tried
to twist, rotate, jam, tweak, and frob the thing into place by looking at
the 3D jigsaw puzzle on the screen and manipulating it with my hand. It
didn't take any time at all to develop a sense that I was actually feeling a
molecule "out there" in the space defined by the screen. Even though I know
very little about the chemical architecture symbolized by the various
colored clouds and tinkertoy bonds, I could feel my way into a place where
the arm's resistance was at a minimum across its degrees of freedom. It's
like there is a little pocket of relaxation in the middle of the
force-puzzle-cloud, and if you can feel your way into it, your arm has to
work a whole lot less. When I wrestled the molecule into a relatively
satisfactory zone, bright yellow vectors shot out from the corners of the
drug skeleton. Ming Ouhyoung, the senior graduate student in charge of the
project, pointed out a series of metal knobs on the arm. I was gripping the
molecule in place with my right arm. With my left hand, I could frob the
drug molecule until the yellow lines disappeared (thus deforming the
potential bonds as far as quantum mechanics permits). I imagine that would
have been meaningful if I knew anything about chemistry. In fact, it was
hard to imagine how a chemist could ever devise a molecule to fit that kind
of configuration without 3-D modeling tools; it's a good example of the
class of problems where human thinking capabilities come up against a
complexity barrier. It turned out that there were five little knobs to frob.
The next one minimized the energy levels at certain sites, as displayed by a
simple bar graph that popped up in a window in a corner of the visual space.
I didn't know anything about chemistry, yet I had been able to use all my
experience in the world of gravity and manipulable objects, my gut-feel of
the world, to advance a hard problem further than most chemists could have
done without any computer modeling. There are fields in which further
scientific progress is simply not possible without allowing scientists to
stick their heads and hands into 3-D simulations. NASA specialists are using
virtual reality to investigate the complexities of airflow patterns over
airfoil surfaces. The human immune system, with its billions of reactions
per second, and its intricately shape-coded antigens, is another system that
must be modeled in three dimensions in order to be understood. The flows of
atmospheric gases, and other vital planetary systems, are good candidates
for 3-D visualization. Perhaps another scientific/technological field that
cannot be studied in any other way is the telecommunications web that has
grown around the planet into what Xerox PARC researcher Bernardo Huberman
calls "a computational membrane." Tektronix Corporation, which started out
as an oscilloscope company, is already marketing a hardware/software
package called CAChe (computer-aided chemical modeling). CAChe is a
molecular-modeling program with 3-D input control, stereo 3-D output, and
high computing speed. Tektronix's stereo frame-buffer board fits in a Mac II
and drives a liquid-crystal, stereo frame shutter that covers the monitor's
screen. The unit, transparent to the naked eye, reverses the polarization of
the image emerging from the screen at 120 hertz, which provides each eye with
a left or right view at 60 hertz per eye. The view through the "electronic
shutters" creates a stereoscopic 3-D effect by showing alternate views to each eye.
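
The timing scheme is simple enough to sketch. Assuming a 120-hertz field
rate, an illustrative Python loop (not Tektronix's driver) alternates fields
between the eyes like this:

    REFRESH_HZ = 120  # full-screen field rate of the stereo display

    def eye_for_field(field_index):
        # Even fields carry the left-eye image, odd fields the right-eye
        # image, so each eye receives 60 new views per second.
        return "left" if field_index % 2 == 0 else "right"

    for field in range(6):
        t_ms = field * 1000.0 / REFRESH_HZ
        eye = eye_for_field(field)
        print(f"t = {t_ms:5.2f} ms: draw {eye} view, shutter passes {eye} eye")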

"Architectural walk-throughs" in cyberspace have already influenced the
construction of at least one building - Sitterson Hall, home of the
virtual-worlds research laboratory of the University of North Carolina at
Chapel Hill. Before construction began, UNC VR specialists converted the
floor plans into a cyberspace that could be "walked through" with a
head-mounted display and treadmill. Those who were going to use the building
discovered that two walls in the lobby were uncomfortably close together,
creating a cramped feeling. The architect disagreed, until he took a walk
through the simulated building and was convinced; the walls were moved
before construction began.

MARGARET MINSKY'S VIRTUAL SANDPAPER

THE FIELD of tactile and kinesthetic force-feedback is perhaps the most
leading-edge front of the VR revolution, since so much more is known about
visual and auditory perception than about tactile perception. Margaret
Minsky's thesis is a Media Lab-UNC collaboration. The demonstration of
"virtual sandpaper" had been developed in Chapel Hill, but the actual
intelligent joystick I experienced was in her lab, the Snakepit, down in the
bottom of the Media Lab building in Cambridge at MIT. (It says "Snakepit" on
the door, and there were stuffed snakes woven into the ethernet cables
overhead, I noticed.) The force-feedback arm at UNC descended from the
ceiling, rather awesomely. Margaret's joystick looked like a chopstick on
top of a steel ice-cream maker. The mechanisms for two degrees of freedom
were inside the steel box. I grabbed the cylindrical control rod like a
pencil and used it to move the cursor across the screen of Margaret's Mac
II. She used various menus to create small patches on the screen, filled
with different designs - thick or thin alternating bars, shaded to designate
rounded or rough edges; fractal surfaces that looked like unpolished
granite. Margaret's ultimate goals involve the full human sense of texture
and other related tactile senses. What are the perceptual characteristics
that distinguish fur from sandpaper, and how can they be simulated?
Margaret's specific project involved building a virtual texture simulator
that would allow her to attempt to replicate the research of a
psychophysiologist studying human tactile perception with traditional
psychophysical methods.

I moved the steel chopstick like a pen, and when the cursor moved across the
graphic patch of rounded bars, I could feel the bumpiness of the virtual
surface through the variations in feedback force: the system translates the
slope of the virtual curve traversed by the tip of the joystick into a
counterforce that resists your movement, in the right direction and at the
right amount. I felt something bumpy "out there" with my hands, the way you
feel a fence "out there" by running a stick along it. Then I ran the cursor
over a fractal surface, and it felt like I was trying to write with a
ballpoint pen on the surface of a piece of granite. Again, there was a
palpable chunk of virtual granite in my
whatever-you-call-the-gut-equivalent-of-"mind's eye."
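
The slope-to-force translation can be sketched in a few lines. Assuming a
one-dimensional height field for the virtual surface - the function below is
an illustrative stand-in, not Margaret's code - the joystick simply pushes
back against the local slope:

    import math

    def surface_height(x):
        # A sinusoidal bump pattern standing in for the "rounded bars" patch.
        return 0.05 * math.sin(40.0 * x)

    def counterforce(x, stiffness=3.0, dx=1e-5):
        # Estimate the local slope of the surface under the cursor and
        # resist uphill motion, so dragging the stylus feels bumpy.
        slope = (surface_height(x + dx) - surface_height(x - dx)) / (2 * dx)
        return -stiffness * slope

    for x in (0.00, 0.04, 0.08, 0.12):
        print(f"x = {x:.2f}   force = {counterforce(x):+.3f}")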

"Where is 'out there'?" is a very good question. Was I feeling it in my
fingers? At the end of the joystick? On the surface of the screen that
depicted the cursor and the virtual texture? Depending on how I thought
about it, I could move my sense of presence from one to another of those
locations. Given visual and auditory cues, I could see that this sense of
physical presence could be made much more plastic than we are accustomed to
feeling when dealing with solid objects in the external world. She even had
a virtual-texture version
of the SIGGRAPH teapot. (For historical reasons having to do with the whims
of a University of Utah computer scientist who came up with some of the
earlier renderings of solid surfaces, the Association for Computing
Machinery's annual Special Interest Group on Graphics (SIGGRAPH) conference
has always included increasingly realistic renderings of teapots, year after
year. This year, Nicholas Negroponte harangued the computer-graphics
subculture about their obsession with ever-more-realistic teapots and
demanded that they direct their attention back to the use of graphics in the
computer interface. Virtual teapots, I realized, span both areas of concern.)
While Margaret and
I talked, I kept running the cursor over the contours of the
teapot. A strange sensation. I could see how adding this to the kind of
kinesthetic feedback offered by the UNC arm, and the eyephones, and the
datasuit, and 3D audio could begin to approximate vanilla reality to a
disturbing degree. The molecular-docking project had audio feedback to
signify molecular "bump forces," and NASA demos show how auditory cues
could be very helpful in trying to fit two pieces of machinery together in
space, via teleoperators. Imagine trying to put a key in an unfamiliar
lock in the dark. Imagine if the key and the lock beeped in the right way.
You could couple your muscle movements to your acoustic apparatus for
sensing space. The elasticity of the human capacities for feeling
spaces that do or do not exist is another big open question.

HOMEBREW VR

I JUST CAME BACK from a nifty little ride in one of the first, if not the
actual first, homebrew cyberspaces ever. It was assembled from absolute
scratch in one month flat. A little more than 30 days ago, Eric Gullichsen
and Pat Gelband left Autodesk, where they had been working on the cyberspace
project, to start their own company, Sense8. The system they put together is
crude, experientially speaking - about as crude as the Altair, the first
microcomputer kit of the mid-1970s. Since Eric and Pat live and work within
a five-minute drive of my house, I've had occasion to observe their progress
firsthand. They got a Polhemus position-sensing system (easily the most
expensive part of the apparatus) and built their own head-mounted display
from more or less the same off-the-shelf parts that were used at NASA. The
computer is a modified Amiga. Until they get a glove, they are using a
6-degree-of-freedom orb that has two buttons on it. Very nice. In some ways,
the orb is a better control device than the glove. The glove is very helpful
in establishing your sense of presence and orientation in a virtual world,
but the technology right now is nowhere near as finely tuned as the orb; it
is far easier to zoom around a molecule or a floorplan with the orb than it
is, at present, with a glove. They put together a computing and rendering
engine for about $2,000. Then they wrote the code.

I remember dropping by a couple times while they were working it out. Pat
would be doing mathematics with pencil and yellow pad in the kitchen; Eric
would be hacking code in the living room. Having built a cyberspace software
system once before was a big help, but they wanted to do their own system a
different way, for hackeresque as well as legal reasons. They finally got it
working in mid-February.

The first world they had working was just a green plane - thirty polygons or
so - with three pyramids. You could use the orb or the buttons and your line
of sight to fly around.
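
The flying itself amounts to a few lines of vector arithmetic. A minimal
sketch, assuming yaw and pitch from the head tracker; the names below are
illustrative, not Sense8's actual routines:

    import math

    def gaze_vector(yaw, pitch):
        # Convert head orientation (radians, from the tracker) into a
        # unit direction vector.
        return (math.cos(pitch) * math.sin(yaw),
                math.sin(pitch),
                math.cos(pitch) * math.cos(yaw))

    def fly(position, yaw, pitch, speed, dt):
        # Advance the viewpoint along the line of sight.
        dx, dy, dz = gaze_vector(yaw, pitch)
        x, y, z = position
        return (x + dx * speed * dt, y + dy * speed * dt, z + dz * speed * dt)

    pos = (0.0, 1.7, 0.0)   # standing on the green plane
    for _ in range(3):      # three frames with the fly button held down
        pos = fly(pos, yaw=0.3, pitch=0.1, speed=5.0, dt=0.1)
    print(pos)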

Of course, VPL's multi-hundred-thousand-dollar version is slick, and far from
slick enough yet. But the homebrew version, which costs about one percent of
what VPL's system does, is certainly more than a hundred-thousandth as
exciting as the high-end worlds. The important point is that it is an
existence proof of homebrew VR. Just as enthusiasts like Jobs and Woz and
the rest of the Homebrew Computer Club forced the PC to evolve from the
Altair to the Apple, VR enthusiasts can add their efforts to the more
well-funded projects in universities and industrial labs. It is now possible
for people to build systems and exchange worlds, to propagate improvements,
to evolve the way personal computers did. It remains to be seen whether
there will be very many cyberspace homebrewers, or whether they come up with
a rich set of tools, or whether they find ways to share their efforts. But
Sense8's system proves you don't have to be NASA. You don't even have to be
Autodesk. You can do it in your living room, the way Eric and Pat did.

Tomorrow morning, they pack up the system in black ammo boxes and head for a
cyberspace conference in Barcelona, with William Gibson and Tim Leary. I
wish I could say I was covering the story. I'm packing my raincoat and
heading for Seattle.

HITL, THE PORT OF SEATTLE, AND VR

AS A COMMUNICATION-AUGMENTING TOOL

I JUST RETURNED from Seattle and Vancouver. Tom Furness, who was director of
the Air Force's "Super Cockpit" project at Wright-Patterson AFB for 23 years,
has started the "Human Interface Technology Laboratory" (HITL) at the
University of Washington. Except for Ivan Sutherland - who pretty much quit
the field after creating the "Sword of Damocles" head-mounted display (which
got its name from the fact that the headset was connected to a heavy
electromechanical tracking device mounted in the ceiling) - Furness has been
in this research the longest. Very neat guy. He wants to build a laboratory
to create the hardware, the software, and the mindware - the task-specific
applications that will enable people to use VR technology to augment their
physical and mental capabilities. He's very much in the tradition of Doug
Engelbart (intellectual augmentation) and Fred Brooks (intellectual
amplification). One of the more interesting interviews I conducted up there
was with Cecil Patterson, the information systems director for the Port of
Seattle, who is eager to work with HITL to set up a VR system. He has some
interesting reasons for pursuing this technology. First, he recognizes that
there is a need for better communication between engineers, facilities
planners, and potential clients, when it comes to discussing the actual
physical configuration of future port facilities. He sees VR as a kind of
"what-if" machine for computer-aided design (CAD). The problem with most CAD
is that designers understand what renderings of designs on a computer screen
mean far better than their clients. The best way to find out how you feel
about a three-dimensional design is to walk around in it and handle it. The
second and, to my mind, most interesting reason Patterson wants VR is that
most of these clients who are in on the planning stages of
multi-hundred-million-dollar plans are Japanese, Chinese, and others for whom
English is not a native language. He hopes that misunderstandings, delays,
and bugs that are caused by the language problem might be mitigated if the
engineers, planners, and clients on both sides of the Pacific could walk
through VR versions of the proposed construction during every stage of the
planning process. That way, even though the spoken language barrier may
remain, the pictorial mental models of what they are planning will be much
more in accord. When different people talk about a three-dimensional object,
there is some question about how similar their mental models are. When they
talk about it and walk around a 3-D model, their mental models are likely to
be much more highly synchronized. Jaron Lanier has his dream of VR being the
matrix from which a visual language will emerge, which is a very interesting
idea - but I'm not sure how, when, or if it can be accomplished. But VR as a
communication-augmentation device seems to me immediately practical. I think
this is a very savvy use of the technology. The port directors know that
miscommunication in the planning of such expensive facilities will affect
the region's economic well-being for decades to come. Spending a few tens of
thousands on hardware and software for visualizing and communicating is a
very economical first step in a billion-dollar plan.

BUILDING WORLDS WITH JARON

IN RESPONSE to my frequent pleas, Jaron gave me a world-building tutorial
last night. My objective is to master the basic steps well enough to build a
world of my own, then step into it and fly around. I had an idea in mind.
Since we had been conversing about ecstasy and VR and my theory that a
cleverly designed world might help create a healthy sort of ecstasy, I
thought I'd like to build a full-scale kiva - a ritual space used by the
Pueblo tribes of the southwest. There would be a subterranean chamber, and a
ladder out of it. At the top of the ladder is the surface of the planet. If
you flew off the planet, you would see oceans, continents, and clouds. There
would be a moon, orbiting the planet. And stars. A basic cosmos. It is, in
fact, the first actual planet that anybody has built at VPL. Maybe the first
virtual planet anybody ever built anywhere. The worldbuilding process starts
with Swivel-3D, a slightly MacDraw-like (but more complicated) tool for
creating 3-D models on the Macintosh. Later, after the basic structures are
created, another program is used to add dynamics; ultimately, the software
describing the set of objects that constitutes a world is moved from the
Macintosh format to a Unix-readable form. Then VPI:s language, "Body
Electric," is used to map the world to the input devices. Worlds created
this way can be linked and embedded within one another. Who knows what
future planet-builders might add to our basic design? Jaron handled the
commands this time while I helped him zero in on what I had in mind. The
first image of the world was a wireframe sphere, which we colored blue in
solid mode. It is faster to shape the basic structures in wireframe, then
issue a menu command to render them as solids. Then lighting and shadow
effects can be tweaked. What you see are four windows, three showing views
of the object being edited, from the x, y, and z axes, and one window
showing the object as it would appear in perspective. Next, we created a
duplicate sphere, just slightly larger than the blue one, and colored one or
two regions of the second sphere brown. Then we centered the second sphere
around the same center as the first one and linked them together. Since the
second sphere was just a bit larger, the brown "continents" stood out from
the blue "oceans." The next sphere, colored white, had tinier regions as 
clouds," and it stuck out quite a bit more on the z-axis. We only used about
80 polygons out of a maximum 2,000 possible for each frame, so it isn't the
most realistic world when you see it close up. Not yet. Zooming away into
space, it looks pretty good, though. Recognizable as a planet. Then we
created a smaller, gray sphere, linked and constrained it so it appears to
be orbiting the planet. And stars. That's as far as we got on that pass.
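
That link-and-constrain step can be sketched with a toy scene graph; the
Python below is illustrative, not Swivel-3D or Body Electric:

    import math

    class Node:
        def __init__(self, name, radius, color, parent=None):
            self.name, self.radius, self.color = name, radius, color
            self.parent = parent
            self.offset = (0.0, 0.0, 0.0)   # position relative to parent

    ocean  = Node("ocean sphere", 1.00, "blue")
    land   = Node("land sphere",  1.02, "brown", parent=ocean)  # continents stand out
    clouds = Node("cloud sphere", 1.06, "white", parent=ocean)
    moon   = Node("moon",         0.27, "gray",  parent=ocean)

    def constrain_orbit(body, t, distance=5.0, period=10.0):
        # Re-derive the moon's offset from its parent every frame; the
        # parent link plus this constraint is what makes it appear to orbit.
        angle = 2 * math.pi * t / period
        body.offset = (distance * math.cos(angle), 0.0, distance * math.sin(angle))

    for t in (0.0, 2.5, 5.0):
        constrain_orbit(moon, t)
        print(f"t = {t}: moon at {moon.offset}")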

More later.

TACTILE FEEDBACK: FROM BRAILLE TO DILDONICS

DAVE JOHNSON of the TiNi Company invited me to come by his
laboratory-office-factory to see the work they are doing with a
tactile-feedback prototype. It was in a postmodern corrugated-steel
light-industrial building in Emeryville, a formerly decaying heavy-industrial
area south of Berkeley that now seems to be reemerging as a center of late
twentieth-century microtechnologies - there are software companies and
futurists, genetic-engineering plants and digital mapping outfits within a
few blocks of TiNi. The TiNi plant was reminiscent of what Edison's Menlo
Park facility must have been like - everything under one roof. Johnson had
been working under contract for the Air Force. The "Super Cockpit" project
at Wright-Patterson Air Force Base had included plans for a glove that
included miniature force-feedback sensors so pilots could get the fingertip
feel of virtual switches; that is, the pilot wears a head-mounted display
and sees a virtual depiction of the landscape (with bright red "zones of
lethality" surrounding anti-aircraft missile batteries, overlaid grids
marking optimal flight paths to targets, eye-tracking target detection,
etc.) and a depiction of a virtual control panel. He reaches out his hand to
one of the virtual switches, and when he actuates it, he not only sees the
computer-graphic representation of the switch move, and hears it "click" if
need be, but he also feels it toggle. This glove doesn't exist yet. And
neither does a multi-line braille computer terminal. But the TiNi folks
think their technology will lead to such items. (And I think that you don't
have far to go to build a tactile-sensitive bodysuit, once you can build a
tactile sensor-actuator glove.) TiNi uses "shape memory" alloys such as
nitinol as the basis for a little grid of what look like little
ballpoint-pen tips. The alloy assumes one shape when it is cast, then when
it is cooled, it can be formed into other shapes; when heated again, it
returns to the original shape. It can be used to perform the kind of
mechanical switching that solenoids do, on a smaller scale. When the proper
command is entered at the computer interface, the 6- by 5-pin array, about 3/4"
square, starts moving. I touched my finger to the grid, and as the rows of
pins were activated in the proper sequence I felt something like a pencil
lead underneath a piece of cloth moving across my fingertip; I could feel
the individual pins, but I could also perceive their synchronous movement as
a single gliding touch - there was a tactile whisper of possibility. The
speed and pattern of activation can be controlled by software.
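
The row-by-row sequencing can be sketched simply. The raise_pin routine
below is a hypothetical stand-in for TiNi's actual interface:

    import time

    ROWS, COLS = 6, 5   # the 6- by 5-pin array

    def raise_pin(row, col):
        # Stand-in for the real actuator command: heating one nitinol pin
        # so it springs back to its remembered shape.
        print(f"raise pin ({row}, {col})")

    def sweep(delay_s=0.05):
        # Activate one row at a time, so the fingertip reads the pins'
        # synchronous movement as a single point gliding across the skin.
        for row in range(ROWS):
            for col in range(COLS):
                raise_pin(row, col)
            time.sleep(delay_s)

    sweep()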

Not a heck of a lot is known about tactile perception, certainly not in
comparison to the scientific knowledge about
visual and auditory perception. But those pins FELT GOOD! Kinda tickly and
soothing. My friend Flash Gordon has a chair that does something with your
vertebrae that can seem obscenely pleasant. It's a bit like that. The
possibility of virtual dildonics, however, is a topic of its own. I see at
least ten years, probably more like twenty, of extensive research to get to
a truly lifelike televirtual tactile experience. At a recent scientific
conference in Santa Barbara, I met the head of the machine perception group
at AT&T Bell Labs, whose research goal is to find a way for AT&T customers
to actually "reach out and touch someone" (although perhaps not as
intimately as would-be dildonists fantasize).

VIRTUAL WORLDS AND THE VIRTUAL COMMUNITY

WHEN I STARTED traveling from one research site to another, and started
collecting information about virtual worlds research, it became clear to me
that the many different related subdisciplines necessary for building
virtual worlds are proliferating information very rapidly - too fast for
anybody to keep up. As a firm believer in the power of electronically
mediated virtual communities, I proposed to Tom Furness that HITL sponsor a
newsgroup on Usenet (WER #65, p. 112). This would have several benefits.
First, it would serve as an informal channel for exchanging information in
the VR research community, and a place to discuss issues. Second, it would
make people in the field aware of each other's efforts. Third, it would make
it easier to gather information for my book. I was already a
participant-observer. I might as well just jump right into the field I'm
trying to chronicle. I agreed to become the moderator of the new newsgroup,
which is called sci.virtual-worlds. If you have access to Usenet, you should
be able to gain access. The following is excerpted from the statement that
first proposed the new newsgroup, drafted by Bob Jacobson at HITL:

The Human Interface Technology Laboratory at the University of Washington
proposes to host this newsgroup for the study of virtual-world phenomena. We
believe that the coming proliferation of virtual-world phenomena, made
possible by powerful virtual-interface technology, requires the scientific
community served by Usenet to begin debating how this technology will be
employed. Further, with additional research on virtual-world phenomena
taking place at more and more research sites, and in a growing number of
fields - aerospace, medicine, entertainment, education, and science - it is
imperative that there be a forum where the outcomes of this research can be
shared most widely.

A "virtual world" is a unique, intangible but highly designed information
environment generated by a computer and transmitted by "virtual interface"
technology to a user who "enters" the virtual world via appropriate sensory
mechanisms. The virtual-world environment can be as complex as a
three-dimensional "sense surround" comprising seamless visual, aural, and
tactile cues; or as simple as a computer conferencing system. Virtual worlds
are designed to increase the bandwidth of communication between the computer
and the human being, to facilitate their interaction, and ultimately to
improve the human being's understanding and performance. The subject of this
newsgroup will be virtual worlds in all their aspects: the theory of
virtuality, the technology that is being developed and employed to create
virtual-world environments, the people and places working on virtual worlds,
and the philosophical questions and social consequences attendant upon the
emergence of this new medium of communication.

The Laboratory intends to make available via Usenet a database referencing
the items in its considerable library regarding virtual-worlds phenomena and
research. The database is in preparation. An announcement will be made when
this archive is publicly available.
