The Future of Hypertext
by Jakob Nielsen
Excerpt from Jakob Nielsen's book "MULTIMEDIA AND HYPERTEXT: THE INTERNET AND BEYOND", AP Professional 1995. (Figures and other images have been left out of this online version in the interest of download speed.)
What Happened to the Predictions from my Earlier Book?
My first hypertext book, "Hypertext and Hypermedia," was published in
1990, so it is appropriate to consider how the predictions in that book
compare to reality five years later. In 1990 I predicted that "three to
five years later" (that is, at the time of this writing), we would see
the emergence of a mass market for hypertext. Indeed, this has more or
less come to pass with CD-ROM encyclopedia outselling printed ones and
with large numbers of hypermedia productions available for sale in computer
shops. This "mass market" is still fairly small, though, compared with
the market for printed books, magazines, and newspapers, but it is getting
I also commented on the fact that most hypertext products in 1990 ran
on the Macintosh and predicted a shift to the IBM PCs as a natural consequence
of the mass market trend I discussed. This shift has certainly come to
pass, and most hot new hypermedia products ship for the PC long before they
come out in Mac format (many are never converted).
My second main prediction for 1995 was the integration of hypertext
with other computer facilities. This has mostly not come to pass, even
though the PenPoint operating system did include hypertext features as
a standard part of the system. Most computer systems still have impoverished
linking abilities and make do with a few hypertext-like help features like
the Macintosh balloon help that allows users to point to an object on the
screen and get a pop-up balloon to explain its use.
More generally integrated hypertext features must await the fully object-oriented
operating systems that are currently being developed by most leading computer
vendors. It turned out to be too difficult to add new elements to the computational
infrastructure of existing systems and expect the mass of applications
to comply with them. We are currently living through the last years of
the Age of Great Applications with hundreds or thousands of features. These
mastodons are ill suited to flexible linking and have too much momentum
to adjust to fundamentally new basic concepts like universal linking and
embedded help. I predict that in another five to ten years a fundamental
change will happen in the world of computing and that integrated object-oriented
systems will take over. This will happen slowly, though, because the current
generation of Great Applications is powerful indeed. It will take some
time for the smaller, more agile mammals -- sorry, modules -- to win.
A final prediction failed miserably only a few years after it was made:
I predicted that universities would start exchanging Intermedia webs on
a regular basis to build up extensive curricula across schools. In fact
what happened was that the Intermedia project got its funding canceled
and the system does not even run on current computers since it has not
been maintained for years. I stand by my original conviction that Intermedia
was the best hypertext system available in 1990. It deserved to have succeeded,
but short-sighted funding agencies are of course nothing new.
Short-Term Future: Three to Five Years (1998-2000)
In the short term of three to five years, I don't really expect significant
changes in the way hypertext is done compared to the currently known systems.
Of course new stuff will be invented all the time, but just getting the
things we already have in the laboratory out into the world will be more
than enough. I expect to see three major changes:
- the consolidation of the mass market for hypertext
- commercial information services on the Internet
- the integration of hypertext and other computer facilities
Consolidating the Mass Market for Hypertext
Hypermedia products are already selling fairly well even though it is
still only a minority of personal computers that have sufficiently good
multimedia capabilities to do them justice. I would expect very few personal
computers to be sold in the industrialized world from now on without multimedia
capabilities, so the market for hypermedia should get much larger in the
next few years.
We already have one product category, encyclopedias, that is selling better in hypermedia form than in paper form. I would expect a few more such product categories to appear, but I would expect most product categories to be dominated by traditional media in the short term.
One product category that is a prime candidate for initial multimedia
dominance is movie guides. The moving images and the ease of updating make
this domain a natural for a combination of CD-ROM and online publishing,
and several large studios have indeed started to distribute trailers for
their films through America Online and other services. Microsoft Cinemania
was one of the first hypermedia movie guides and I would expect many more
to appear. TV guides will also move from print to hypermedia for much the
same reasons. Furthermore, cable TV services with 500 or more channels
are just too much for a printed guide to describe in any kind of usable
format. Instead of scanning TV listings, users will have to start using a combination of search methods and agent-based program recommendations. For example, if my TV guide knows that I normally watch Star Trek and that most other Star Trek fans seem to like a certain new show on channel 329, then it might prompt me with a recommendation to watch that show.
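The kind of agent-based recommendation described above can be pictured as a very simple collaborative filter: suggest shows that viewers with overlapping tastes already watch. This is only a sketch; the viewing data and show names are invented for illustration.

```python
# Minimal collaborative-filtering sketch: recommend a show if viewers who
# share the user's tastes tend to watch it. All data here is hypothetical.

def recommend(user, viewing_log, min_overlap=1):
    """Suggest shows watched by viewers whose tastes overlap the user's."""
    mine = viewing_log[user]
    scores = {}
    for other, shows in viewing_log.items():
        if other == user:
            continue
        overlap = len(mine & shows)  # shared shows measure taste similarity
        if overlap >= min_overlap:
            for show in shows - mine:  # shows the user has not seen yet
                scores[show] = scores.get(show, 0) + overlap
    # Highest-scoring (most widely co-watched) shows first
    return sorted(scores, key=scores.get, reverse=True)

log = {
    "me":    {"Star Trek"},
    "fan1":  {"Star Trek", "Channel 329 Show"},
    "fan2":  {"Star Trek", "Channel 329 Show"},
    "other": {"News"},
}
print(recommend("me", log))  # ['Channel 329 Show']
```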
I also expect the emergence of many more engaging interactive multimedia
games and entertainment products as the convergence of Hollywood and Silicon
Valley shakes out. Current computer games have a paucity of storytelling
talent and production values and thus have to make progress through the
story needlessly difficult to draw out the experience enough to make it
worth the purchase price for the buyer. With more powerful computers and
with more talented writers, directors, and other creative talent, computerized
entertainment should become much smoother to navigate and should present
richer worlds for the user to explore without being killed every few turns.
Truly new entertainment concepts will have to await the next century, though:
I don't expect to see really intelligent environments that change the story
to fit the individual user until after the year 2000.
Commercial Hypertext on the Internet
We will definitely start seeing charging mechanisms on the Internet.
In order to get good quality information, people will have to pay something
for the necessary authoring and editing efforts. It will only be a matter
of a very short time before a NetCash system gets established to allow
people to pay for their consumption of information over the Internet.
Integrating Hypertext with Other Computer Facilities
The reader will probably have noticed that I am quite enthusiastic about
the possibilities of hypertext. Even so, it must be said that many of the
better applications of hypertext require features beyond plain hypertext.
We are currently seeing a trend for hypertext systems to be integrated
with other advanced types of computer facilities. For example, there are
several systems that integrate hypertext with artificial intelligence (AI).
One such system was built by Scott M. Stevens at the Software Engineering
Institute at Carnegie Mellon University for teaching the software engineering
technique called code inspection. Code inspection basically involves discussing
various aspects of a program during a meeting where the participants each
have specified roles such as reviewer, moderator, or designer of the program.
It turns out that it is impossible to teach people this method without
first having them participate in a number of meetings where they are assigned
the various roles. This process can be quite expensive if the other meeting
participants have to be experienced humans.
It is possible, however, to simulate the other participants on a computer
by the use of artificial intelligence. By doing so, the person learning
the code inspection method can have as many training meetings as necessary,
and the student can even go through the same meeting several times to take
on the different roles. The major part of the meeting simulation system
is therefore an AI method for finding out how the other meeting participants
would react to whatever behavior the student exhibits. But the system also
includes a lot of text in the form of the actual program being discussed,
its design specifications, and various textbooks and reports on the code
inspection method. These texts are linked using hypertext techniques.
Another example of the integration of hypertext with other techniques
is the commercial product called The Election of 1912 from Eastgate Systems.
Whereas the code inspection meeting simulator was primarily an AI system
with hypertext thrown in for support, The Election of 1912 is primarily
a hypertext system but also includes a simulation.
Most of the 1912 system is a hypertext about the political events in the United States in 1912 with special focus on the presidential election of that year. It is possible to read the hypertext in the normal way to learn about this historical period and its people. There is also a political simulation to increase student motivation for reading through the material.
Basically the simulation allows the user to participate in "running" for president in 1912 by "being" the campaign manager for Teddy Roosevelt. The user can plan the travel schedule for the candidate, what people he should meet with, and what issues he should speak out on in which cities. During the simulation, the user can call up a map showing the "result" of opinion polls for each state.
The simulation is hooked into the hypertext system in such a way that
the user can jump from the simulated information about a state to the actual
historical information about a state to understand why the voters in that
state have reacted as they have to the candidate's speeches.
There are also hypertext links from the meetings with various possible
supporters and famous people in the simulation to the hypertext's information
about these people and from the issues in the simulation to the discussion
about these issues in the real election.
Medium-Term Future: Five to Ten Years (2000-2005)
Toward the turn of the century we should expect to see widespread publishing
of hypertext material. It is also likely that the various forms of video,
which are currently quite expensive, will be part of the regular personal
computers people have in their homes and offices. So we should expect to
see true hypermedia documents for widespread distribution.
Within this timeframe there is also some hope for a solution to the practical problems of data compatibility between systems. Currently it is impossible for a user of, say, an IBM OS/2 system to read a hypertext that has been written on, say, a Unix Sun in KMS. In the world of traditional, linear text, the data compatibility problem was more or less solved some time ago, with the ability to transfer word processor and spreadsheet documents between Windows and Macs being just one of the more spectacular success stories.
Current work on the "hypertext abstract machine" (HAM) or similar ideas
for interchange formats will almost certainly succeed to the extent that
it will be possible to consider publishing the same hypertext on several
different platforms. This change will increase the size of the market even
more and will therefore add to the trend toward focusing on content. Of
course there will still be many hypertexts that will need to take advantage
of the special features available on some specific platform like the Macintosh,
and these hypertexts will then not be universally available.
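An interchange format along the lines of the "hypertext abstract machine" reduces a hypertext to platform-neutral nodes and links that any system could import. The sketch below illustrates the idea; the field names are hypothetical, not drawn from any actual HAM specification.

```python
# Sketch of a platform-neutral hypertext interchange format: plain nodes
# and links that any system could import. Field names are hypothetical.

hyperdoc = {
    "nodes": [
        {"id": "n1", "title": "Introduction", "content": "Welcome..."},
        {"id": "n2", "title": "Details", "content": "More..."},
    ],
    "links": [
        {"from": "n1", "anchor": "More...", "to": "n2"},
    ],
}

def validate(doc):
    """Check that every link points at nodes that exist in the document."""
    ids = {n["id"] for n in doc["nodes"]}
    return all(l["from"] in ids and l["to"] in ids for l in doc["links"])

print(validate(hyperdoc))  # True
```

A real interchange format would also need to carry anchors, node types, and media references, but the core is exactly this: a self-describing bundle of nodes and links with no system-specific presentation baked in.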
The availability of hypertext documents on several platforms is not just a commercial question of the size of the market for the seller. It is also a social issue for the readers of hypertext. Just imagine how you would feel as a reader of traditional printed books if there were some mechanism which restricted you to reading books printed in a specific typeface like Helvetica. But this is exactly the current situation with hypertext: If you own Hyperties, then you can read hypertext documents written in Hyperties but not those written in KMS, HyperCard, Guide, Intermedia, NoteCards, ....
An interchange format will partly resolve this problem. But the problem
will be solved only partly because of the special nature of hypertext,
which involves dynamic links to other documents. It is not even remotely
likely that we will see Ted Nelson's original Xanadu vision of having all
the world's literature online in a single unified system fulfilled in the
medium-term future. Therefore hypertext documents will get into trouble
as soon as they start linking to outside documents: Some readers will have
them, and others will not. It is likely that most hypertexts in the medium-term
future will remain as isolated islands of information without a high degree of connectedness to other documents. This has been called the docuislands scenario.
Even to the extent that many documents are served over the WWW, they may still be isolated from a hypertext perspective, since inter-document linking leaves much to be desired. People mostly link between documents on their own server and under their own control, since cross-site links are liable to become dangling without any warning whenever the remote site changes its file structure.
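One partial mitigation is to check cross-site links periodically and flag the ones that no longer resolve. A minimal sketch using only the Python standard library (the URLs one would feed it are of course site-specific):

```python
# Minimal dangling-link checker: report cross-site links that no longer
# resolve. The URLs passed in are placeholders for a site's outbound links.
import urllib.error
import urllib.request

def check_links(urls, timeout=10):
    """Return the subset of urls that fail to resolve."""
    dangling = []
    for url in urls:
        try:
            # HEAD avoids downloading the whole document
            req = urllib.request.Request(url, method="HEAD")
            urllib.request.urlopen(req, timeout=timeout)
        except (urllib.error.URLError, ValueError):
            dangling.append(url)
    return dangling
```

A maintainer could run such a check nightly and repair or remove the reported links, which addresses the symptom even though it cannot prevent remote sites from reorganizing in the first place.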
Several of the current practical problems with hypertext documents should
be expected to be solved in the medium-term future. For example, there
is currently no standard way to refer to a hypertext document the way there
is with traditional books, which have a standard ISBN numbering system
and recognized publishers with established connections to bookshops. If
you want a book, you go to the bookshop and buy it. And if they don't have
it, they can get it for you.
Hypertexts are published by a strange assortment of companies to the
extent that they are published at all. Many a good hypertext is available
only from the individual or university who produced it. And the sales channels
for hypertexts are either nonexistent or a mixture of mail order and computer
shops, many of which do not carry hypertext.
In a similar way, regular libraries almost never include hypertexts
in their collections and wouldn't know how to deal with electronic documents
anyway. There are a few online services that store a small number of electronic
documents, but since there is no systematic way to determine where a given
document is stored and since most hypertext documents are not available
online anyway, these services cannot substitute for traditional libraries.
It is very likely, however, that the medium-term future will see established publishers and sales channels for hypertext documents. Whether they will be the same as the current system for printed books remains to be seen and would depend on the level of conservatism in the management of publishers and major booksellers. Libraries will certainly have to handle electronic documents, and many of the better and larger libraries are already now starting to invite hypertext specialists to consult on ways to deal with these issues.
Intellectual Property Rights
I expect major changes in the way intellectual property rights (e.g.,
copyright) are handled. These changes really ought to happen within my
short-term time frame since they are sorely needed, but realistically speaking,
changes in the legal system always trail technological changes, so I don't
expect major changes until the year 2000.
There are two major problems with the current approach to copyright:
first, "information wants to be free" as the hackers' credo goes, and second,
the administration of permissions and royalties adds significant overhead
to the efforts of people who work within the rules. Under current law,
hypermedia authors are not allowed to include information produced by others
without explicit permission. Getting this permission is estimated to cost
about $220 on the average in purely administrative costs such as tracking
down the copyright owner, writing letters back and forth, and possibly
mailing a check with the royalty payment. If you are negotiating the rights
to the large-scale reproduction of an entire book or all the paintings
in a museum then $220 for administration is negligible, but that is not
so if you want to assemble a large number of smaller information objects.
To overcome the problem with administrative overhead, it is possible
that information objects will start to include simplified mechanisms for
royalty payment. Each object could include attributes indicating the copyright
owner, the owner's NetCash account number, and the requested royalty for
various uses of the material. This approach would be similar to the Copyright
Clearance Center codes often found printed in each of the articles in a
research journal. With the current system, every time somebody copies an
article from one of the journals associated with the Copyright Clearance
Center, they (or their library management) can send the Center the article
code and a check for the $2.00 or so that is the requested royalty for
that article. The Copyright Clearance Center codes require manual interpretation
and would not be suited for information objects costing a cent or less
per copy, but an object-oriented data representation would allow computers
to process the payments automatically. If each data object came with payment
information, the holders of intellectual property rights should not mind
wanton distribution of copies of their materials since they would get paid
every time somebody actually used the information.
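Such self-describing objects might carry their payment terms as attributes, roughly as sketched below. The field names, and indeed the "NetCash" account scheme itself, are hypothetical illustrations of the idea rather than any existing system.

```python
# Sketch of an information object carrying its own royalty terms.
# Field names and the "NetCash" account scheme are hypothetical.
from dataclasses import dataclass

@dataclass
class InfoObject:
    content: bytes
    copyright_owner: str
    netcash_account: str
    royalty_per_use: float  # requested royalty in dollars

def use_object(obj, ledger):
    """Record an automatic micro-payment each time the object is used."""
    ledger[obj.netcash_account] = (
        ledger.get(obj.netcash_account, 0.0) + obj.royalty_per_use
    )
    return obj.content

ledger = {}
photo = InfoObject(b"...bitmap...", "Louvre", "NC-0001", 0.01)
use_object(photo, ledger)
use_object(photo, ledger)
print(ledger)
```

The point of the sketch is that the payment logic lives with the object, so copies can circulate freely while every actual use is settled against the owner's account.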
The more basic problem, though, is that "information wants to be free."
Once you know something, it is very hard to pretend that you don't. Thus,
it is difficult to charge people every time they use some information.
Also, information does not get worn or diminished by being copied. This
is in contrast to traditional goods like a cake: if I let you have a bite
of my cake, there is less left for me to eat, but if I let you look at
the front page of my newspaper, I will not suffer any adverse effects.
It is possible to establish barriers to information dissemination, either
by legal and moral pressure ("don't copy that floppy") or by actual copy
protection. Users tend to resent such barriers and have been known to boycott
vendors who used particularly onerous copy protection mechanisms. An alternative
would be to eliminate copyright protection of the information itself and
let information providers be compensated in other ways. It is already the
case that software vendors generate most of their revenues from upgrades
and not from the original sales of the software, so it might be feasible
to give away the software and have the vendors live off upgrades and service.
Moving software from a sales model to a subscription model (possibly with
weekly or daily upgrades) would make it similar in nature to magazines
and newspapers where people pay to get the latest version delivered as
soon as possible.
It is also possible for intellectual property owners to get compensated,
not from the information itself, but from a derived value associated with
some other phenomenon that is harder or impossible to copy. This is the
way university professors currently get compensated for their research
results. They normally give away the papers and reports describing the
results and get paid in "units of fame" that determine their chance of
getting tenure, promotions, and jobs at steadily more prestigious institutions.
Fame may also provide a reasonable compensation model in many other cases.
For example, people visit the Louvre to see the real Mona Lisa even though
they already have seen countless reproductions of the image. In fact, the
more the Mona Lisa is reproduced, the more famous it gets, and the more
people will want to visit Paris and go to the Louvre. Therefore, the Louvre
should welcome hypermedia authors who want to use Mona Lisa bitmaps and
should allow them to do so for free. Similarly, a rock band might make
its money from tours and concerts and give away its CDs (or at least stop
worrying whether people duplicate the CD with DAT recorders), and a professional
society could get funded by conference fees instead of journal sales. The
trick is to use the information (e.g., music CDs and tapes) to increase
the value of the physical or unique event for which it is possible to charge.
Long-Term Future: Ten to Twenty Years (2005-2015)
What will happen in the really long-term future, more than twenty years
from now? That is for the science fiction authors to tell. In the computer business, ten to twenty years counts as the long-term future indeed, so I will restrict myself to that horizon in the following comments.
Some people like Ted Nelson expect to see the appearance of the global hypertext (e.g. Xanadu) as what has been called the docuverse (universe of documents). I don't really expect this to happen completely, but we will very likely see the emergence of very large hypertexts and shared information spaces at universities and certain larger companies.
Already we are seeing small shared information spaces in teaching applications,
but they are restricted to the students taking a single class. In the future
we might expect students at large numbers of universities to be connected
together. Another example of a shared information space is a project to
connect case workers at branch offices of a certain large organization
with specialists at the main office. The staff at the branch office does
not follow the detailed legal developments in the domain that is to be
supported by the system since they also have other responsibilities. Therefore
the specialists would maintain a hypertext structure of laws, court decisions,
commentaries, etc., which would be made available to the case workers at
the branches. Also, these local people would describe any cases falling
within this domain in the system itself with pointers to the relevant hypertext
nodes. In this way they would in turn help the specialists build an understanding
of practice and they would also be able to get more pointed help from the
specialists regarding, for instance, any misapplication of legal theory.
So given these examples which are already being implemented, I would
certainly expect to see the growth of shared information spaces in future
hypertext systems. There are several social problems inherent in such shared
spaces, however. If thousands or even millions of people add information
to a hypertext, then it is likely that some of the links will be "perverted"
and not be useful for other readers. As a simple example, think of somebody
who has inserted a link from every occurrence of the term "Federal Reserve
Bank" to a picture of Uncle Scrooge's money bin. You may think that it
is funny to see that cartoon come up the first time you click on the Bank's
name, but pretty soon you would be discouraged from the free browsing that is the lifeblood of hypertext, because the frequency of actually getting to some useful information would be too low due to the many spurious links.
These perverted links might have been inserted simply as jokes or by
actual vandals. In any case, the "structure" of the resulting hypertext
would end up being what Jef Raskin has compared to New York City subway
cars painted over by graffiti in multiple uncoordinated layers.
Even without malicious tampering with the hypertext, just the fact that
so many people add to it will cause it to overflow with junk information.
Of course the problem is that something I consider to be junk may be considered
to be art or a profound commentary by you, so it might not be possible
to delete just the "junk."
A likely development to reduce these problems will be the establishment
of hypertext "journals" consisting of "official" nodes and links that have
been recommended by some trusted set of editors. This approach is of course
exactly the way paper publishing has always been structured.
An example of such a development can be seen in the way many users have
given up following the flow of postings to the Usenet newsgroups. Instead
of reading the News themselves, many people rely on friends to send them
interesting articles, or they subscribe to electronic magazines, which
are put out by human editors who read through the mass of messages to find
those of interest for whatever is the topic of their magazine.
It would also be possible to utilize the hypertext mechanism itself
to build up records of reader "votes" on the relevance of the individual
hypertext nodes and links. Every time users followed a hypertext link,
they would indicate whether it led them somewhere relevant in relation
to their point of departure. The averages of these votes could then be
used as a filter by future readers to determine whether it would be worthwhile
to follow the link.
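The voting mechanism just described could be as simple as keeping a running tally per link and hiding links whose average relevance falls below a threshold. The sketch below illustrates this; the link identifiers are invented.

```python
# Sketch of reader voting on link relevance: each traversal records a vote
# (1 = relevant, 0 = not), and future readers filter on the running average.

votes = {}  # link id -> list of 0/1 votes

def record_vote(link, relevant):
    """Record one reader's verdict after following the link."""
    votes.setdefault(link, []).append(1 if relevant else 0)

def worthwhile(link, threshold=0.5):
    """Show the link only if its average relevance clears the threshold."""
    vs = votes.get(link)
    if not vs:
        return True  # no data yet: don't hide brand-new links
    return sum(vs) / len(vs) >= threshold

record_vote("fed->scrooge", False)
record_vote("fed->scrooge", False)
record_vote("fed->history", True)
print(worthwhile("fed->scrooge"))  # False
print(worthwhile("fed->history"))  # True
```

The choice to show unrated links by default is deliberate: a filter that hides new links would prevent them from ever accumulating votes.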
Another potential social problem is the long-term effects of nonsequentiality.
Large parts of our culture have the built-in assumption that people read
things in a linear order and that it makes sense to ask people to read
"from page 100 to 150." For example, that is how one gives reading assignments
to students. With a hypertext-based syllabus students may show up at exam
time and complain about one of the questions because they never found any
information about it in the assigned reading. The professor may claim that
there was a hypertext link to the information in question, but the students
may be justified in their counter-claim that the link was almost invisible
and not likely to be found by a person who was not already an expert in
the subject matter.
The reverse problem would occur when the professor was grading the students'
essays, which would of course be written in hypertext form. What happens
if the professor misses the single hypertext link leading to the part of
the paper with all the goodies and then fails the student? Actually the
solution to this problem would be to consider hypertext design part of
the skills being tested in the assignment. If students cannot build information
structures that clearly put forward their position, then they deserve to
fail even though the information may be in there in some obscure way.
A further problem in the learning process is that novices do not know
in which order they need to read the material or how much they should read.
They don't know what they don't know. Therefore learners might be sidetracked
into some obscure corner of the information space instead of covering the
important basic information. To avoid this problem, it might be necessary
to include an AI monitoring system that could nudge the student in the
right direction at times.
There is also the potential problem that the non-linear structure of hypertext, split as it is into multitudes of small distinct nodes, could have the long-term effect of giving people a fragmented world view. We
simply do not know about this one, and we probably will not know until
it is too late if there is such an effect from, say, twenty years of reading
hypertext. On the other hand, it could just as well be true that the cross-linking
inherent in hypertext encourages people to see the connections among different
aspects of the world so that they will get less fragmented knowledge. Experience
with the use of Intermedia to teach English literature at Brown University and Perseus to teach Classics did indicate that students were several
times as active in class after the introduction of hypertext. They were
able to discover new connections and raise new questions.
Most current hypermedia systems are based on the notion that each information
object has one canonical representation, meaning that a given node should
always look the same. The WYSIWYG (what you see is what you get) principle
has actually been fundamental for almost all modern user interfaces, whether
for hypertext systems or for traditional computer systems. There is something
nice, safe, and simple about this single-view model, and it is certainly
better than the previous model where all information on the computer screen
was presented in monospaced green characters no matter how the document
would look when printed.
The single-view model is best suited for information objects that have
an impoverished internal representation in the computer because the computer
will have no way to compute alternative views and thus has to faithfully
reproduce the layout given by the human user. It may take more than ten
years to get to the next model, but eventually I expect the computer to
process information objects with significantly more attributes than it
currently has. Once the computer knows more about each information object,
it will be able to display them in various ways depending on the user's specific needs in the current task.
One example of a multi-view hypertext is the SIROG (situation-related
operational guidance) project at Siemens. SIROG contains all parts
of the operational manual for accidents in a Siemens nuclear power plant
and is connected to the process control system and its model of the state
of the plant. The hypertext objects have been manually encoded by the authors
with SGML attributes that inform SIROG about the part of the plant and
the kind of situation they describe. The system can match these attributes
with the actual situation of the plant and present the relevant parts of
the manual to the operators. Obviously, unexpected situations can occur
and the system may not have a perfect understanding of the plant state.
Therefore, SIROG relies on the operators' abilities to read and understand
the content of the manual: it is not an intelligent system to diagnose
nuclear power plant problems; it is a system that tries to display the
most relevant part of the manual first rather than having the manual always
look the same every time the operators bring it up.
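The matching step SIROG performs can be pictured as ranking manual sections by how many of their hand-authored attributes agree with the plant's current state. This is only an illustrative sketch; the section titles and attribute names below are invented, not taken from the actual system.

```python
# Sketch of situation-related manual lookup: rank manual sections by how
# many of their (hand-authored) attributes match the current plant state.
# Section titles and attribute names are invented for illustration.

sections = [
    {"title": "Coolant loss procedure",
     "plant_part": "primary loop", "situation": "leak"},
    {"title": "Turbine trip procedure",
     "plant_part": "turbine", "situation": "trip"},
]

def most_relevant(state, sections):
    """Return sections ordered by number of matching attributes."""
    def score(sec):
        return sum(1 for k, v in state.items() if sec.get(k) == v)
    return sorted(sections, key=score, reverse=True)

state = {"plant_part": "primary loop", "situation": "leak"}
print(most_relevant(state, sections)[0]["title"])  # Coolant loss procedure
```

Note that the sketch only reorders the manual; as in SIROG, nothing is hidden from the operators, and the judgment about what the situation actually is remains with them.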
Program code is another example where multiple views are both relevant
and possible [FN 1]. Sometimes a programmer wants to see only the executable
code, sometimes a commented view is better, sometimes an overview of the
program's structure is most helpful, and sometimes a view annotated with
debugging or performance metering will be needed. For third-party programmers
who want to reuse a program through its API, simplified views of the interface structure will suffice (and indeed, a view of the internal structure would be harmful).
The CastingNet system uses a frame-based model with highly structured nodes. Each node is a frame with a number of named slots that represent its properties, and the type of the node determines what frame will be used to represent its content. Each slot has a mapping function that maps each possible value of that slot onto an axis. In one CastingNet example, hypertext nodes that represent research conferences may have slots for the country in which the conference is held, the date of the conference, some classification of the topic of the conference, and many more attributes of conferences and conference announcements.
The conference-venue slot would map onto a country axis that could be
represented as a linear list, as a world map, or in many other ways. The
attribute axes can be used to construct many different two- or three-dimensional
overview diagrams of the hypertext according to criteria that are of interest
to the user. The frame-based representation also allows the system to construct
links between nodes that have identical or closely related values
along one or more axes. For example, it would be possible to link from
a given conference node to nodes representing other conferences taking
place in the same location at the same time (or the following or previous
week) as a conference a user was thinking of going to. Also, due to the
mapping functions, it will be possible to link to nodes that have different,
but compatible, slots. For example, instead of linking to other conference
nodes with similar venue slots, it would be possible to link to the user's
personal information base of friends and contacts to see which of them live
in the same town and might be visited in connection with a conference trip.
This link would be possible because the address nodes for each friend would
have slots for city and ZIP code that could be mapped to the same axis
as the venue slot in the conference frame.
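The slot-to-axis mapping described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not CastingNet's actual API; all names here (`Node`, `AXIS_MAPPINGS`, `related`) are invented for the example:

```python
class Node:
    """A hypertext node: a frame of a given type with named slots."""
    def __init__(self, node_type, **slots):
        self.node_type = node_type
        self.slots = slots

# Mapping functions project slot values onto shared axes. Both a
# conference's "venue" slot and a contact's "city" slot map onto the
# same "place" axis, so nodes of different types can be linked.
AXIS_MAPPINGS = {
    ("conference", "venue"): ("place", lambda v: v.lower()),
    ("contact", "city"): ("place", lambda v: v.lower()),
    ("conference", "date"): ("time", lambda v: v),
}

def axis_values(node):
    """Project a node's slots onto a set of (axis, value) pairs."""
    result = set()
    for slot, value in node.slots.items():
        mapping = AXIS_MAPPINGS.get((node.node_type, slot))
        if mapping:
            axis, fn = mapping
            result.add((axis, fn(value)))
    return result

def related(node, others):
    """Nodes sharing at least one axis value: candidate link targets."""
    mine = axis_values(node)
    return [o for o in others if o is not node and axis_values(o) & mine]

# A conference node links to a contact node because both project onto
# the same point on the "place" axis, despite having different slots.
chi = Node("conference", venue="Denver", date="1995-05")
friend = Node("contact", name="Ann", city="denver")
other = Node("contact", name="Bob", city="Oslo")
print([n.slots.get("name") for n in related(chi, [friend, other])])
# → ['Ann']
```

The key design point is that links are not stored but computed from the axis projections, which is what lets the conference node reach the address node even though the two frames share no slot names.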
The frame-axis model of CastingNet provides flexible ways for the user
to restructure the views of the hypertext and to link the information in
new ways as new needs are discovered. This type of flexibility will be
very important in the future as people start accumulating large personal
information bases, but it will only be possible if the computer has ways
of representing the same information in multiple ways depending on the
interpretations that are currently of most value to the user.
Hypermedia Publishing: Monopoly or Anarchy?
Assume that in fact most information will migrate online and that it
will be published in hypertext form with substantial multimedia components.
What will this change imply for the publishing industry and the dissemination
of information in the future?
Two opposite trends might be possible: either information publishing
will be concentrated in a few, near-monopolistic companies, or the ability
to publish information will be distributed over many more companies than
are now involved. Both trends actually seem plausible due to different
characteristics of hypermedia publishing.
Information publishing currently involves five different stages:
Creating the content
Editing the content, including the selection of the authors
Manufacturing copies of the resulting product
Distributing copies of the product through the sales channels
Advertising and promoting the product
The middle stage, manufacturing physical copies of the information product,
will more or less go away for hypermedia publishing because the users will
be creating their own copies on demand as they download information from
the Internet. Even for non-Internet hypermedia products, the physical duplication
of CD-ROMs or floppy disks will account for an increasingly small part
of the overall cost of the information products. Currently, the cost of
printing and binding books (or newspapers or other information products)
is one of the factors that limit the number of publishers of information
products. The widespread availability of desktop publishing and photocopying
has changed the picture somewhat and made it economically feasible
for people outside the established publishing houses to produce information
products in small numbers.
Even though anybody can print, say, 100 copies of a booklet, the last
two stages of the publishing process, distribution and promotion, have
limited the actual sale and utilization of small-scale information products
with pre-hypertext technology. What do you do after you receive the 100
copies of your booklet back from the QuickPrint shop? To get any sales
you would have to get bookstores to carry it, newspapers to review it,
and you would need a way to ship a copy to the one resident of New Zealand
who might happen to be one of your customers.
Production, distribution, and promotion limitations have caused the
publication of paper-based information products to be concentrated in the
hands of a number of publishers that is much smaller than the number of
potential authors or providers of information. At the same time, the barriers
to setting up a new publishing company are fairly low and there is in fact
a large number of publishers in business all over the world, some of which
have very small market shares but are still profitable.
With the move to distributing information products over the Internet,
distribution will go away as a limiting factor for information publishing,
supporting a trend towards having more publishers. Essentially everybody
with a workstation and an Internet subscription can become a publishing
house and sell information products over the net. Indeed, with the WWW,
the trend so far has been for a large number of people to set up their
own individual "publishing houses" in the form of their home pages and
associated hyperlinked material. Promotion and advertising will still tend
to favor larger publishers with more resources, but the most effective
type of promotion for Internet hypertext will be having others link to
your material, and an author can potentially be linked to by many others
without being affiliated with a major publisher.
The trends discussed so far in this section would seem to favor a kind
of information anarchy where many more authors get published than is currently
the case and where the publishing gets done by a larger number of companies.
There are several opposing trends, though. The first is the information
overload that is already taking its toll on people's ability to check out
new WWW sites. With more and more information placed on the Internet, a
backlash might cause many users to stay within safe Web sites that are
known to offer high-quality material. As a parallel, consider the way many
people get highly aggravated by junk mail and deliberately try to avoid it.
Information monopolies are encouraged by three phenomena: production
values, fame, and critical mass. With the move from simple text-based hypertext
to multimedia-oriented hypermedia, production values become steadily more
important. For a text-only hypertext product, an individual author can
craft a good product, but a full-scale multimedia product with video and
animation takes a crew of graphic artists, set designers, actors, make-up
specialists, camera-people, a director, etc. It may still be possible for
a small company like Knowledge Adventure to produce a better dinosaur CD-ROM
than Microsoft [FN 2] by including 3-D dinosaur attack animations, but
material produced by the average dinosaur enthusiast pales in comparison
with either CD-ROM. Professionally produced multimedia titles with good
production values look much better than amateur productions, but they
require much higher capitalization and thus can be produced only by a
fairly small number of companies. We are already seeing a trend toward
higher production values on the WWW: the major computer companies are
hiring specialized production staff and graphic artists to dress up their
home pages, and smaller companies with boring pages (or worse, ugly
graphics designed by a programmer) will lose the production-value battle
for attention.
Fame is the second factor that tends to support monopoly in information
products. By definition, fame favors a few people or companies that are
particularly well known, and the more people read or hear about someone
famous, the more famous that person becomes. Consider two hypothetical
new film releases, one starring Arnold Schwarzenegger and one with another
hunk with equal acting talent whom nobody has heard of before. Which film
will attract the largest audience? Why, Arnold's, of course. Thus, after
these two films have played the world, Schwarzenegger will be more famous
than before (since many people will have seen him) and the unknown hunk
will remain unknown.[FN 3]
Even though production values and fame both tend to favor a small number
of large hypermedia producers, the third factor, critical mass, may be
the killer monopoly enhancer. Hypertext products derive significant added
value from links and add-on information. Consider again the example of
dinosaurs. Microsoft Encarta (a hypertext encyclopedia) could have hypertext
links to Microsoft Dinosaurs, thus simultaneously increasing sales of Microsoft
Dinosaurs and adding value to Encarta. Of course, a third-party hypertext
encyclopedia could have a link to 3-D Dinosaur Adventure but depending
on the way distribution and royalty mechanisms play out, there may not
be much synergy to doing so. Also, independent software vendors (ISVs)
may produce add-on modules to widely sold products like Encarta to provide
special effects such as an animated physics simulation to better explain,
say, gravity. All experience from the PC market shows that ISVs only port
software to the most popular platforms, so the availability of add-on modules
will tend to favor hypermedia titles that have already established critical
mass with respect to market share.
It is currently impossible to predict which of the two opposing trends,
toward anarchy or toward monopoly, will prove stronger. It is possible that
a third way will emerge in which the hypermedia market is dominated by a small
number of near-monopoly players who set standards and overall directions
while a much larger number of garage-shop publishers add variety and have
the ability to explore new options faster than the giants.
It is possible that a new phenomenon will emerge in the form of "temporary
monopolies" that dominate for a short period only to be replaced virtually
overnight. The Internet seems to encourage temporary monopolies because
of its ability to instantly distribute software to millions of users who
all are part of the same discussion groups and rumor mills. Consider two
examples: the Netscape browser for the WWW increased its market share from
0.1% to 64% during the four-month period from August to December 1994 (and
the previous market leader, Mosaic, dropped from 73% to 21% in the same
period), and usage of the Lycos search server increased at an annualized
rate of 130 million percent during the fall of 1994.
For printed materials, there has not traditionally been a single city
or geographic area that dominated the world market. Books and newspapers are
published throughout the world and it is not particularly advantageous
for an author to move to a specific location in order to be published.
Many countries have a concentration of large book publishers and newspapers
in their capitals but traditionally most provincial towns have had their
own newspaper(s). In the U.S., New York City serves as the closest approximation
to a creative center for word production. Two of the three national newspapers
in the U.S., The New York Times and the Wall Street Journal, are published
in New York (the third, USA Today, is published in Arlington, Virginia).
However, the combined circulation of the three national newspapers is only
4.5 million copies which is 7% of the circulation of all newspapers in
the U.S. In other words, 93% of newspaper circulation in the U.S. is due
to local papers.[FN 4]
In contrast to the print industry, the film and computer industries
have had a disproportionate part of their most important companies located
in a very small number of geographic areas. The film industry is concentrated
in the Los Angeles area not just with respect to the U.S. market, but with
respect to the world market: 80% of the box office receipts in Europe go
to American films [FN 5] and all major top-grossing blockbuster releases
[FN 6] for the last several years have been American.[FN 7] The computer
industry is not quite as concentrated as the film industry,[FN 8] but the
Silicon Valley area between San Francisco and San Jose still serves as
the main creative center for this industry with a very large number of
established and start-up companies. In order to benefit from the availability
of skilled staff, suppliers, customers, and other partners in the creative
center, 20% of the largest 50 software companies that were founded in Europe
have since moved their headquarters to the U.S.[FN 9]
The existence of internationally dominant creative centers for the film
and computer industries serves to motivate high-caliber personnel to relocate
to those areas from all other parts of the world, leading to further centralization
of activity. By concentrating so many people who work in the same field,
creative centers ensure an extraordinary degree of cross-fertilization
which again leads to rapid refinement and dissemination of ideas. Technology
transfer by warm bodies is known to be much more effective than technology
transfer by technical reports, and the extensive job hopping that takes
place in a creative center therefore benefits all the companies in the
area by increasing their exposure to new ideas and techniques.
It is currently too early to predict whether the multimedia and hypermedia industry will be dominated by creative centers in the way the film and computer industries are, or whether it will follow the less centralized precedent set by the publishing industry. In principle, the Internet allows people to work together on a multimedia project without the need for them to live in the same area, but in practice intensely creative work, brainstorming, design, and the informal contacts that lead to most business still require people to be colocated in the same physically proximate reality (PPR). Virtual reality and cyberspace are not the same as dinner at Il Fornaio.
Since the two main components of the multimedia industry are computers and moviemaking, the most likely candidates for a creative center for multimedia are Silicon Valley and Hollywood. Whether the resulting "Siliwood" will form around San Francisco's "multimedia gulch" or around the Hollywood studios is currently the topic of much debate, but there is no doubt that California is a very likely home for one or more creative centers for multimedia. Of course, if the monopoly scenario outlined above turns out to happen, there will be definite advantages to being located close to Microsoft headquarters in Redmond, WA, and the Seattle area is already attracting a large number of highly creative multimedia companies.
 Isbister, K., and Layton, T. (1995). Agents: What (or who) are they? In Nielsen, J. (Ed.), Advances in Human-Computer Interaction vol. 5. Ablex.
 Landow, G. P. (1989). Hypertext in literary education, criticism, and scholarship. Computers and the Humanities 23, 173-198.
 Maltz, D., and Ehrlich, K. (1995). Pointing the way: Active collaborative filtering. Proc. ACM CHI'95 Conf.
 Marchionini, G., and Crane, G. (1994). Evaluating hypermedia and learning: Methods and results from the Perseus Project. ACM Trans. Information Systems 12, 1 (January), 5-34.
 Masuda, Y., Ishitoba, Y., and Ueda, M. (1994). Frame-axis model for automatic information organizing and spatial navigation. Proc. ACM ECHT'94 European Conference on Hypermedia Technology (Edinburgh, U.K., September 18-23), 146-157.
 Oesterbye, K., and Noermark, K. (1994). An interaction engine for rich hypertext. Proc. ACM ECHT'94 European Conference on Hypermedia Technology (Edinburgh, U.K., September 18-23), 167-176.
 Saxenian, A. (1994). Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Harvard University Press.
 Simon, L., and Erdmann, J. (1994). SIROG - A responsive hypertext manual. Proc. ACM ECHT'94 European Conference on Hypermedia Technology (Edinburgh, U.K., September 18-23), 108-116.
 Stevens, S. M. (1989). Intelligent interactive video simulation of a code inspection. Communications of the ACM 32, 7 (July), 832-843.
(FN 1) Multi-view representations of program code are particularly easy to produce because so many software development tools exist to parse the code and interpret it in various ways. For other kinds of information objects we still need better ways of letting the computer manipulate structure and meaning.
(FN 2) To be fair, I should mention that I don't know for sure whether Knowledge Adventure's 3-D Dinosaur Adventure is better than Microsoft Dinosaurs. I just know that 3-D Dinosaur Adventure got the better review of the two in a magazine I was reading, and since I thought that one dino CD would be enough, I only bought 3-D Dinosaur Adventure. This proves my main point, though: when there are too many dinosaur hypermedia titles available, people tend to concentrate on a few published by reputable sources (Microsoft) or that have particularly good production values (3-D animations). I did not have time to fully research the market for dino CDs; I just read reviews of the two that seemed the best bets.
(FN 3) The movie star example is not entirely irrelevant to a discussion of hypermedia markets. Compton's Interactive Encyclopedia is using Patrick Stewart (British actor famous for his role as the captain in Star Trek: The Next Generation) as a narrator in the hope that some of his star quality will rub off on the hypermedia product.
(FN 4) The influence of New York as a creative center for the print industry is larger than the market share of its newspapers indicates: The New York Times and the Wall Street Journal are read disproportionally by influential decision makers and opinion leaders, and the city publishes magazines accounting for nearly half of the magazine industry's national revenues.
(FN 5) For comparison, European films have only a 1% market share of the U.S. film market.
(FN 6) All films grossing more than $100 million outside the U.S. in 1993 were American: Jurassic Park ($530M), The Bodyguard ($248M), Aladdin ($185M), The Fugitive ($170M), Indecent Proposal ($152M), Cliffhanger ($139M), Bram Stoker's Dracula ($107M), and Home Alone 2 ($106M). The only non-U.S. film to come close was Les Visiteurs, which scored a $90M non-U.S. box office. But since Les Visiteurs had zero dollars in U.S. sales in 1993, it ended as number 27 on the list of world-wide box office receipts: all the top 26 films were American.
(FN 7) For the week before Christmas in 1994, only 26 of the 110 films on the top-10 lists in Belgium, Brazil, Denmark, France, Italy, The Netherlands, Norway, South Africa, Spain, Sweden, and Switzerland were non-American according to Variety magazine. Disney's The Lion King was number one in seven of the eleven countries and number two in two more (Variety article "Lion king over Euro holiday B.O.," January 8, 1995, p. 18).
(FN 8) The top-five software companies account for only 33% of the software market, whereas the top-five film companies have an 81% market share.
(FN 9) Classified by the location of their headquarters, the distribution of the 30 software companies with the largest sales in Europe is as follows: U.S. 19, France 5, Germany 4, Italy 1, U.K. 1.