Towards a Hypertext Ecology (1987)

While searching for something else in my boxes of decaying paper files recently, I ran across a couple of articles I wrote for publication in 1987/88. Neither one was published; one was submitted to Reason, and they gently declined it as “not of general interest.” The other one was written on a handshake with a telecom trade journal editor, but he lost his job between that handshake and the time I got the article written.

I think they’ve held up reasonably well. Obviously, I got some things spectacularly wrong. (In particular, if you had told me that I’d be living within a mile of BellSouth (now AT&T) headquarters in the year 2010 and still couldn’t get fiber to my home, I’d have thought you were crazy. On the other hand, I consistently get 10 Mb/s on my Comcast cable modem, so I’m pretty happy with that.) And I thought CD-ROMs were going to be important.

On the other hand, I think I got a few things spectacularly right. I think I not only predicted the World Wide Web, but predicted the “Net Neutrality” scuffle, too! I obviously haven’t edited these at all; I ran them through OCR and added HTML tags. So… into the Wayback Machine, and enjoy!

(The other blast from the past is here.)

Towards a Hypertext Ecology

Part I: HyperCard versus Hypertext

By now, you have probably encountered a wealth of information about Apple’s HyperCard. It is an incredibly impressive piece of programming that Apple is labelling as a “personal toolkit for information.” But, as the name gives away, HyperCard is being widely touted as the vanguard of a new breed of “hypertext.”

Hypertext is a child of the computer age; it defines linked information, with links being defined by the author (and others), not just by the mechanics of placing words and pictures on the printed page. The concept, initially planned for mainframes, has been around for many years, with implementations appearing in the last few years on various microcomputers. HyperCard, by virtue of being free to all new Macintosh users, appears to legitimatize hypertext as the information interface of the future.

This is all well and good, and many of the dazzling initial applications of HyperCard can indeed change the way people use their computers. Even today, HyperCard stacks can display simple black-and-white animation and play back digitized sound recordings. In the future, it can and certainly will be used to access color animation and video as a front end to CD-ROM and other new data storage formats. However, it lacks the capacity to handle today’s most common reference tasks: retrieving large quantities of text with occasional illustrations.

Without such capabilities, stand-alone HyperCard applications may be limited to a flood of address books and Finder replacements. You may have obtained a copy, poked around in some of the demonstration “stacks” — and, perhaps, wondered what all the furor is about. By itself, it may not seem worth the hardware investment in extra memory and larger monitors required to use it to its full potential. (After all, why should I keep my address list in HyperCard under MultiFinder when Acta does it so well on my existing system?)

But this is not being fair to HyperCard or to Bill Atkinson. The existing release of HyperCard is not claimed to be a final product, and Atkinson plans to continue refining it for years to come. But a vision of the future of hypertext could not be complete without considering the technological environment of that future — an environment where information availability threatens to overwhelm any user, and where paths to that information are at least as important as the information itself.

Sticky buttons

First, let us consider some of the failings of HyperCard as it exists today. The first, and most critical, is the lack of “sticky buttons.” A sticky button allows an information link to be attached to a concept rather than to a particular position on a page. For example, if I referred to Alexander Hamilton in an article, pressing the mouse on the words “Alexander Hamilton” should display some information about that man. But in HyperCard, buttons are defined by screen position. If I edited my article to put in some information about Aaron Burr, moving the Alexander Hamilton reference further down the screen, the button would no longer be with its intended reference. Reformatting the article (for a different font size or column width) would have the same effect. With sticky buttons, however, the on-screen “hot spot” would move with the reformatted text to remain over the words “Alexander Hamilton.”
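
To make the contrast concrete, here is a minimal sketch, in present-day Python rather than anything HyperCard or Guide actually did, of a button anchored to a span of text instead of a screen rectangle. The sample sentence, destination name, and insert_text helper are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ScreenButton:
    rect: tuple          # (left, top, right, bottom) on a fixed-size card; breaks when text moves
    destination: str

@dataclass
class StickyButton:
    start: int           # character offsets into the article's text
    end: int
    destination: str

article = "In 1790, Alexander Hamilton proposed assumption of the state debts."
anchor = article.index("Alexander Hamilton")
button = StickyButton(anchor, anchor + len("Alexander Hamilton"), "biography:hamilton")

def insert_text(text, position, new_text, btn):
    """Edit the article; if the edit falls before the anchor, slide the hot spot with the words."""
    if position <= btn.start:
        btn.start += len(new_text)
        btn.end += len(new_text)
    return text[:position] + new_text + text[position:]

article = insert_text(article, 0, "Aaron Burr is another story. ", button)
assert article[button.start:button.end] == "Alexander Hamilton"
```

The anchor survives editing and reformatting because it is bookkeeping on the text itself, not on pixels.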

Without sticky buttons, the existing implementation of HyperCard is much more “hypergraphics” than it is “hypertext.” Screen-oriented design limits HyperCard to linking information in screen-sized chunks that are not likely to change. (Indeed, although text fields may be scrolled, the window may not even be resized from the “classic” Mac size of 512 by 342 pixels.) This makes it useful for on-line help and CD-ROM interfaces, but not for a true hypertext environment where information is published, linked, and edited on-line by many users simultaneously.

Other programs, other problems

Other Macintosh applications handle this task better. Guide, the original Macintosh hypertext program, is much more text oriented than HyperCard. It includes sticky buttons, although it does not have the extensive graphics capabilities and control over screen layout provided by HyperCard. Even with some irritating choices for user interface (italics meaning one thing, underlining another, etc.), Guide had the potential to grow into an impressive hypertext system. However, it is difficult for a commercial program to compete with free system software (as HyperCard is classified), so the future of Guide seems bleak.

More, from Living Videotext, is not normally considered a hypertext program. Even so, it handles some hypertext functions better than either HyperCard or Guide. Links are easy to follow (either hierarchical or through cloning) and multiple-window management is superb. Documents can grow to very large sizes gracefully, while editing or reformatting the text does not alter information links. Graphics and text can be mixed reasonably well, and the interface is intuitive to anyone who has ever learned outlining in sixth grade. However, More was designed to create and manage outlines, and this makes it too hierarchical to be an implementation of hypertext.

Workstations for hypertext

Considering the limitations of each of these programs, what would be the optimum user interface for a future hypertext program? One parameter is clear: the release and market acceptance of the Macintosh II has forever changed the base of future Macintosh applications. Within a few years, most new applications will be written assuming 68020 or better processing power, hard disks, several megabytes of memory, and large screens in infinite variety. (Those of us clinging to our Mac Pluses will be forced to limp along with Excel, Cricket Draw, and — someday — FullWrite Professional… still powerful tools, but tools that will appear shabby compared to those available to better-heeled comrades.)

Computer monitors are by definition bulky items, since the display must be large enough for a human to read comfortably. The standard Macintosh II displays 640×480 pixels of information, and larger screens are available. This size makes them partially immune to the dramatic cost reductions seen in microchips, hard disks, and other computer technologies, but economies of scale are sure to drive the cost of large high-resolution monitors ever downward. It seems not unreasonable to assume that a hypertext workstation could easily command a screen of 1200×800 pixels at 72 dpi with 256 gray shades available. Higher resolution, larger screens, or color would be available to those who needed it. But even with this minimum, the user interface for a hypertext application (or any Macintosh application) could change significantly.

On-screen display of buttons

First, sticky buttons could be indicated by a light gray shading underneath the affected text. This could be disabled by user preference (for a rapid browser not wishing to link to any other information), but, when enabled, would provide an unobtrusive indication of what items have been linked for further study. (Compare to the rather clunky indications used by Guide due to the on-off nature of the pixels on the original Macintosh: italics, asterisks, and so forth.)

However, a more visually appealing way of indicating buttons is not sufficient. One failing of HyperCard, Guide, and other existing hypertext programs is that an item is linked to one-and-only-one destination: that intended by the author. This is clearly inadequate for browsing through large information spaces; multiple links must be available. One way of providing this, consistent with the Macintosh interface, is for a button to have an associated “pop-up” menu giving a range of options for connecting information. In an example given by Eric Drexler, “Links in a description of a coral reef will lead to both texts on reef ecology and tales of hungry sharks.”

[Hypercomment: Drexler’s book, Engines of Creation, is one of the most mind-opening books I have ever read. Both the specific technology (that of molecular machinery) and the discussion of cultural impact (generic to all technologies) are clearly reasoned and well presented. Chapter 14 is devoted to hypertext, and I strongly recommend it to all those interested in the topic.]
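
As a sketch of what such a pop-up menu might draw on, a single anchor can simply carry a list of destinations rather than one. The document names below are invented; the two menu entries follow Drexler’s coral-reef example.

```python
links = {
    "coral reef": [
        ("Reef ecology", "doc:reef-ecology"),
        ("Tales of hungry sharks", "doc:hungry-sharks"),
    ],
}

def popup_menu(anchor):
    """Menu titles the reader would see when pressing the button over this anchor."""
    return [title for title, _destination in links.get(anchor, [])]

print(popup_menu("coral reef"))   # ['Reef ecology', 'Tales of hungry sharks']
```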

Customizable link display

To prevent information overload, these links should be customizable by the user. Optimally, an artificially intelligent “genie” would display references only to material that would interest its “master.” Failing this, a system of customizable options should be relatively easy to set up.

For example, many Macintosh users are interested in electronic music. I am not. On CompuServe’s MAUG, I have set my “user options” to skip over all messages labelled as pertaining to electronic music. If for some reason I wish to change this preference (such as doing research for a friend without access to CompuServe), I can do so at any time. Normally, however, I proceed onward unaware of controversies raging over MIDI links and opcodes and so forth. A similar system, vastly extended, should be feasible for hypertext. (I will return to this topic a little later.)

Two-way links

Another important extension to HyperCard’s implementation of buttons would be two-way links. This would allow the user to press a button to find out “What points here?” as well as “What does this point to?” This would provide the perfect way to track down references, popular topics, and half-remembered quotes from a forgotten magazine article.

[This type of capability will probably be used by some as an argument in favor of a multi-button mouse, especially in an authoring environment: one button for editing, one for following a link, one for tracing a reference backwards, and so forth. However, this would deny Apple’s vision in implementing one-button mice ever since the Lisa. Proper software design and screen interfaces can always make a one-button mouse sufficient. As early as MacPaint, this was handled by choosing a tool from a palette and then performing an operation. Extensions of this, plus keyboard shortcuts for the expert user, can always maintain the intuitive friendliness of a one-button mouse.]
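
Returning to the links themselves, here is a minimal sketch of the bookkeeping behind “What points here?”: every link is recorded once and indexed from both ends. The document identifiers are invented for illustration.

```python
from collections import defaultdict

forward = defaultdict(set)    # source document -> destinations it links to
backward = defaultdict(set)   # destination document -> sources that link to it

def add_link(source, destination):
    forward[source].add(destination)
    backward[destination].add(source)

add_link("article:punic-wars", "article:hannibal")
add_link("article:alps-geography", "article:hannibal")

print(sorted(forward["article:punic-wars"]))   # "What does this point to?"
print(sorted(backward["article:hannibal"]))    # "What points here?"
```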

Window replacement

Another problem with HyperCard (as well as other hypertext programs such as Guide) is their reaction to pressing a button. The program instantaneously takes you to wherever that button is linked, just as if that is where you had always intended to go. Sometimes, this is true. But consider a student reading a hypertext article on the Punic Wars. When reading about Hannibal crossing the Alps, he or she could, with the click of a button, be in the midst of a treatise on European geography. Even in the simple HyperCard stacks available today, this can be confusing and disorienting for the unsuspecting user.

HyperCard has excellent methods of backtracking, of course, but this shouldn’t be necessary. A linked window should not replace the existing window, but should simply open a new window in front of the old. These new windows could then be rearranged, stacked, used for cut-and-paste, or closed like any other Macintosh window. The original article would always remain on screen, a mouse click or a menu selection away.

Browser’s tools

The complexities of managing information in multiple displays with such a wealth of choices would send many users scurrying for pencil and paper: “Now, I need to remember to look down that path once I’m finished with this one.” This should not be necessary. An automated “note pad” function should keep track of paths taken and not taken, with allowance for user comments about how and why. A graphical display of links should be summonable, highlighting areas the user tracked down and areas missed. Most importantly, programmable user options would allow the user to specify what level of detail he or she is interested in. This would prevent an overwhelming number of choices for the casual user, but allow the full complexity of the system to be available to the more experienced student.
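
One way such an automated note pad might keep its records, sketched here with invented document names: every link offered during a session is logged, along with whether it was followed and any comment the reader attached.

```python
trail = []   # one entry per link offered during a browsing session

def record(anchor, destination, followed, comment=""):
    trail.append({"anchor": anchor, "destination": destination,
                  "followed": followed, "comment": comment})

record("Hannibal", "article:european-geography", followed=True)
record("Hannibal", "article:carthaginian-navy", followed=False,
       comment="come back to this after finishing the Punic Wars article")

paths_not_taken = [entry for entry in trail if not entry["followed"]]
print([entry["destination"] for entry in paths_not_taken])
```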

Implementation of large databases

Once sticky buttons and a friendly user interface are developed, the task of the information providers becomes that of implementing these features for very large databases. The sheer volume of information can be a serious hurdle, since a truly useful hypertext system would allow the user to access all the information contained in a good-sized library. Optical scanners with rudimentary AI programs will be required to scan millions of pages and convert them into machine-usable format. More difficult than entering all the information, however, will be defining the buttons and their links.

Like the neurons in a human brain, the intelligence in a large network of this type does not consist of the individual items, but in the linkages between them. As Jerry Pournelle has foreseen, by the end of this century, any computer-literate citizen will be able to obtain the answer to any question for which an answer has been discovered. Intelligence will not be defined as having information, but in knowing how to ask the right questions to find it. The proper use of hypertext can make these questions intuitive; indeed, it should make the questioning process transparent. A user will simply follow links in a natural progression, finding one of many possible paths to the object of his or her search.

User-controllable linking

Each user will have idiosyncratic search methods and individual types of useful information. Therefore, accessible links and categories of links should be user controllable. A particular user would “teach” the hypertext system about his or her background: education, interests, job requirements, and so forth. This information could be used to filter the buttons displayed and linkages offered to that user. A user would only see those links which match his or her criteria, unless intentionally overridden.

This not only would help prevent information overload, but would save the user money as well. Under the proposal for Ted Nelson’s Project XANADU (the ancestor of all hypertext proposals, and one still being created), authors would be paid royalties based on the number of users accessing their material. With controllable links, users would not waste money accessing information outside their area of expertise or interest. Indeed, if the defaults are overridden, the system could take notice of that fact. Say, for example, I was reading a biography of physicist Richard Feynman. If a certain paper sounded interesting, I might try linking to a copy, whether it matched my personal linkage criteria or not. Before displaying the text, the hypertext system should display a dialog box something like: “Note: This reference assumes detailed knowledge of quantum chromodynamics. Access will cost $0.35. Do you want to access it?” At this point, I would click the Cancel button, leaving Dr. Feynman 35 cents poorer and myself considerably chastened.
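
A sketch of how such filtering and confirmation might fit together: links are screened against a reader profile, and an out-of-profile reference triggers a confirmation before any charge is incurred. The profile fields, document names, prerequisite labels, and prices are invented for illustration; only the 35-cent figure comes from the example above.

```python
profile = {
    "interests": {"physics", "macintosh"},
    "expertise": {"undergraduate physics"},
}

links = [
    {"destination": "paper:qcd-calculation", "topics": {"physics"},
     "assumes": "quantum chromodynamics", "price": 0.35},
    {"destination": "article:feynman-biography", "topics": {"physics", "biography"},
     "assumes": None, "price": 0.05},
]

def offered(links, profile):
    """Show only links that overlap the reader's declared interests."""
    return [link for link in links if link["topics"] & profile["interests"]]

def warning(link, profile):
    """Dialog text when a link assumes expertise the reader has not claimed."""
    if link["assumes"] and link["assumes"] not in profile["expertise"]:
        return (f"Note: This reference assumes detailed knowledge of {link['assumes']}. "
                f"Access will cost ${link['price']:.2f}. Do you want to access it?")
    return None

for link in offered(links, profile):
    print(link["destination"], "->", warning(link, profile) or "opens immediately")
```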

Assignment of linkages

Although it is tempting to delegate the tedious task of assigning links to the same computer program responsible for scanning in books and magazines, this would probably not be feasible in the near future. This task requires a great deal of judgement and correlation of wide-ranging knowledge with the context in question. For example, consider a reference to someone being “like Caesar’s wife, above reproach.” This could logically be linked to articles on famous quotations, on the importance of reputation for public officials, or on Gary Hart’s 1987 campaign for the Presidency. However, it should not be linked to a biography of Calpurnia, wife of Julius Caesar — which is surely what any self-respecting computer program would try to do.

Delivering the information

Much work on wide-ranging, densely-interconnected databases is hampered by the limitations of today’s technology. Databases are currently limited in size (such as a typical 20 MB hard disk) and/or in access speed (such as a typical on-line service at 1200 baud). However, developing a true hypertext system will take years. We should consider its development based not on today’s technology, but on technology that is reasonably sure to be in the marketplace in the next five to ten years. As we will see, this can mean a quantum leap in the amount of information that can be efficiently accessed by a hypertext workstation. This problem of information delivery will be discussed in the second part of this article.

Part II: Hypertext in an Age of Infinite Bandwidth

[In the first half of this article, we examined some of the current shortcomings of HyperCard and listed some of the features desirable for a future implementation of true hypertext. Most of these features depended on technology that is currently available, but prohibitively expensive — in particular, access to large chunks of information such as photographs, video, and lengthy text articles. This half examines upcoming technologies that will make such access feasible for the average consumer — essentially, ushering in an age of infinite bandwidth.]

CD-ROM

Among the technologies soon to be used in delivering hypertext information, nearest to term are CD-ROM (compact disk – read only memory) devices, very similar to CD music players. These disks are small and inexpensive, but can store many gigabytes of information. (A gigabyte is equal to one billion bytes, or one thousand megabytes. This would be roughly equal to twice the amount of text in the Encyclopedia Britannica.) CD-ROM disks can mix text, graphics, digitized sounds, and video in a single reference.

Some CD-ROM devices are already on the market, and availability of software (databases) for them is expected to begin strong growth in 1988. Since CD-ROM is “read-only” (cannot be altered by the user), disks will be treated somewhat as almanacs or encyclopedias — a great deal of information, current when published, but requiring new versions annually (or at some convenient interval).

Optical fibers

Slightly further out for most consumers, but firmly in the mainstream of telecommunications, are optical fiber links. These communication “pipes” can carry stunning amounts of information error-free over great distances. Since the early 1980’s, they have reshaped the telecommunications industry and led to the installation of billions of dollars of state-of-the-art cable and equipment.

To date, the most noticeable purchasers of fiber optic cable and electronics have been the members of the telephony industry: local exchange carriers, who serve as your local phone company, and inter-exchange carriers, such as AT&T, MCI, and US Sprint. Fiber systems have allowed these users to vastly increase the capacity and flexibility of their networks. As prices continue to fall, however, optical fibers will provide end users with real-time access to vast amounts of information.

Optical fiber communications is based on transmitting brief light pulses over hair-thin strands of glass. This technology was first proposed in the mid-1960’s; after 15 years of laboratory experimentation, the first commercially successful systems began service in the late 1970’s. Newer systems were introduced at a rapid clip, offering telephone companies and long-distance carriers ever-increasing bandwidth and reach. (Bandwidth refers to the amount of information that can be carried by a particular system; reach refers to the distance a signal may be transmitted before requiring costly electronic regeneration.)

Many technologies have converged to make high-bandwidth fiber optic systems economical. The advent of VLSI (very-large-scale integrated) circuits allows single chips to perform functions that used to require entire shelves of equipment. New software tools allow companies to develop complex control systems to automatically monitor and maintain optical systems. And rapid advances in semiconductor lasers have led to incredible price drops for these solid-state devices that form the heart of any optical transmission system: a tiny laser, no larger than a grain of rice, that would have cost tens of thousands of dollars in the mid-1970’s can now be purchased for less than $10 for use in compact-disc players.

From the early days of optical fiber systems at the beginning of this decade, transmission speeds were at least an order of magnitude greater than those of copper-wire-based telephony systems. Early systems operated at 45 Mb/s (megabits per second, where a megabit is one million bits), while systems operating at 565 Mb/s and higher have been in daily use for nearly two years. By the early 1990’s, systems operating at 2.4 Gb/s (gigabits per second, where a gigabit is one billion bits) should be readily available to meet the skyrocketing communications needs in the public communications network. This stunning information capacity leads to a shift in the type of information that may be provided to an end user — not just a difference in amount, but a difference in kind.

Narrowband and wideband services

First, a few definitions are in order. All services available to the casual user today are categorized as narrowband: voice, data over modems, even the digital ISDN links expected to be deployed within a few years. Narrowband services can be classed as those requiring up to 64 kb/s (64,000 bits per second) of information. This is adequate for good voice reproduction, as on a telephone, and more than adequate for most computer links (often 1200 bits per second, occasionally up to 9600 bps). (Note that telephony, unlike the computer industry, measures digital data in terms of bits, not bytes. Also, a thousand is a thousand, unlike a “K” which is 1024. This means that a 10 megabit/second communications link could transmit about 1.2 megabytes per second. This is taken into account in the arithmetic that follows.)
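
The bits-versus-bytes conversion in that parenthetical, written out using the 1,024-based “K” convention mentioned above:

```python
bits_per_second = 10_000_000                 # a 10 megabit/second communications link
bytes_per_second = bits_per_second / 8       # telephony counts bits; computers count bytes
megabytes_per_second = bytes_per_second / (1024 * 1024)
print(round(megabytes_per_second, 1))        # about 1.2 megabytes per second
```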

Some services today are being labelled as wideband. These services require a significantly higher bit rate than narrowband services — up to several megabits per second. Wideband links can be used for high-speed data transmission between sites, or may be used to carry a number of individual voice channels multiplexed onto a single carrier. A standard industry speed known as a T1 link is 1.544 Mb/s; this is frequently used by large companies who lease T1 facilities from telephony carriers and purchase T1 multiplexers to interface these links to their voice and data networks.

Broadband services

Above the wideband rate, very-high-bandwidth services are usually classified as broadband. (The dividing line is not very sharp: T1 links, at 1.5 Mb/s, are normally not termed broadband; Ethernets, at 10 Mb/s, normally are.) These are still undergoing definition, since current networks are unable to deliver broadband links to end users except in very special (and expensive) circumstances. However, the falling prices of optical fiber systems (cable and electronics) and the rising demand for higher and higher bit rates will inevitably lead to commercially-available broadband links. Initially, these links will probably be around 150 Mb/s; eventually, they will be limited only by customer demand and willingness to pay.

The transmission medium for broadband services will be, of course, optical fiber. “Fiber to the home” has been a rallying cry for many of the telephone operating companies throughout the 1980’s. Various trials have proven that residential fiber connections are technically feasible. Although still more expensive than standard copper pairs, fiber costs are dropping to the point where, by the mid-1990’s, fiber to the home will be economically feasible as well.

As we will see, broadband services delivered to the office workstation or the home terminal will open up a huge range of possibilities to the end user. Nowhere is this more exciting than in the implications for hypertext.

Transmission speeds

The telephone service that you have in your home today is transmitted over analog circuits. This technology is essentially unchanged from the methods invented by Alexander Graham Bell (although, naturally, vast technical improvements have been made). However, your home is quite possibly served by a digital central office switch. (The chances of this are highest if you live in a fast-growing suburb of a large city; lowest if you live in a distant rural area.) If so, your voice is digitized and transmitted between offices at the rate of 64 kb/s, an international standard for voice-grade communication.

ISDN service (Integrated Services Digital Network) will extend this digital link to your doorstep. This would allow you to have a direct link into your Macintosh or other device at 64,000 bits per second — clearly, a great improvement over the 1200 baud modems common today. This link would certainly be wonderful for existing services such as CompuServe and various BBS systems, but it is still insufficient for other services, such as entertainment audio and video. Consumer pressure for enhanced-quality video and programming on demand will lead to the development of broadband fiber links to home and office beginning in the mid-1990’s. Once these links are in place for entertainment systems, computer-based applications are sure to follow in short order. Let us examine some possible “chunks” of data, transmission times required over various communication links, and their relationship to an evolved hypertext system.

The samples I have chosen for Table 1 are as follows: a “classic” 9-inch Macintosh screen, a Macintosh II screen (with 256 shades of gray-scale information), an average 400-page novel, a high-resolution color monitor (1200×800 pixels with 24-bit color), a 20 megabyte hard disk, the World Almanac, 30 minutes of CD-quality music, the Encyclopedia Britannica (with and without pictures), and 10 minutes of broadcast-quality color video. Each of these represents an item that might logically be transmitted over a multimedia information-retrieval system such as hypertext.

I have made rough estimates of the number of bytes required to transmit each of these chunks of data, without allowing for compression. (Modern compression techniques could reduce the bit rate required for any of these transmissions by a factor of two or more.) Also, note that these numbers relate to transmission times only: the figure for transmitting a full-screen color image is exactly that, and does not take into account processing time or screen refresh rates. It would be foolish to try and estimate processing power a decade in the future — however, a brief check of past trends and an appreciation of high-temperature superconductors would indicate that rapid improvements in processing speed are not likely to slow down for many years to come.

I show transmission times for these pieces of information at four bit rates: 1200 bits per second (the speed of most modems today), 64 kb/s (the speed of ISDN digital links), 150 Mb/s (the speed of an early broadband services network), and 2.4 Gb/s (the speed of optical communication systems being deployed near the end of this decade). None of the figures should be taken as accurate to the last decimal place; however, some interesting order of magnitude relationships appear.

Table 1. Estimated transmission times for selected chunks of data at four bit rates.

Data                       Mbytes   1200 baud     64 kb/s      150 Mb/s      2.4 Gb/s
Classic Mac screen           0.02   2.9 minutes   2.6 seconds  1.1 millisec  70 microsec
Mac II screen, 256 grays      0.3   49.2 minutes  38 seconds   16 millisec   1.0 millisec
Average novel                 0.7   1.7 hours     1.5 minutes  39 millisec   2.4 millisec
Hi-res color monitor          2.7   6.6 hours     5.9 minutes  151 millisec  9.4 millisec
20 MB hard disk                20   2.0 days      0.7 hours    1.1 seconds   70 millisec
World Almanac                  32   3.2 days      1.2 hours    1.8 seconds   112 millisec
30 minutes CD music           170   17.2 days     6.2 hours    9.5 seconds   0.6 seconds
Britannica, text only         458   1.5 months    16.7 hours   25.6 seconds  1.6 seconds
Britannica, with pictures   3,400   11.5 months   5.2 days     3.2 minutes   11.9 seconds
10 minutes color video     10,700   3.0 years     16.2 days    10.3 minutes  37.4 seconds
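
For readers who want to try other chunk sizes or bit rates, here is a rough reconstruction of the table’s arithmetic. The conventions are inferred from the published figures rather than stated in the original: a megabyte is taken as 1,048,576 bytes, 1200 baud as roughly 120 bytes per second (ten bits per character), and the digital rates as carrying eight data bits per byte, with no compression or protocol overhead.

```python
MBYTE = 1_048_576   # bytes per megabyte, using the 1,024-based convention

rates_bytes_per_second = {
    "1200 baud": 120,
    "64 kb/s": 64_000 / 8,
    "150 Mb/s": 150_000_000 / 8,
    "2.4 Gb/s": 2_400_000_000 / 8,
}

samples_mbytes = {
    "Classic Mac screen": 0.02,
    "Average novel": 0.7,
    "Britannica, with pictures": 3_400,
}

for name, size_mb in samples_mbytes.items():
    times = {label: size_mb * MBYTE / rate
             for label, rate in rates_bytes_per_second.items()}
    print(name, {label: f"{seconds:.3g} s" for label, seconds in times.items()})
```

Under these assumptions the computed times agree with the table’s rows to within the rounding the article already acknowledges.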

A variety of observations may be made from this table. First, consider typical files of the size downloadable today — such as a MacPaint screen shot on CompuServe. This takes approximately 3 minutes at 1200 baud, but would take only 3 seconds over an ISDN link. A very large CompuServe file, such as a gray-scale Mac II screen, will be transmitted in less than a minute. Obviously, direct 64 kb/s digital service will greatly speed access to the types of information available today.

More interesting, however, are the last two columns — representative of broadband communication systems known to be feasible with today’s technology. Over a 150 Mb/s link, a user could download the entire contents of a 20 megabyte hard disk in a little over a second. With 2.4 Gb/s of bandwidth available, the entire Encyclopaedia Britannica — with pictures — is available in under 12 seconds! By comparison, this would take nearly a year at 1200 baud. Clearly, broadband services will give the user access to totally different kinds of information than those available today.

From AT&T studies of long-distance telephone dialling, the typical user’s “irritation time” — from entering a command to expecting some sort of response — is about six-tenths of a second. Computer programs that take much longer than this to respond to user commands seem sluggish and awkward. Today, Macintosh applications can use this much time in scrolling between sections of a document. Tomorrow, a workstation will be able to use this time to download the contents of the World Almanac… five times. In a hypertext system, clicking on the title of a Bach sonata could produce a photographic-quality image of Bach and a CD-quality rendition of the sonata… with almost no time lag.

Economics of broadband transmission

Naturally, many barriers must be overcome before such systems are available to the public — but there apparently are no technical reasons why they cannot be built. For example, 565 Mb/s fiber optic systems are in use today in thousands of locations throughout the country. Technically, there is no reason why similar systems, optimized for broadband service instead of telephony requirements, could not be installed in homes today. Today, however, these systems cost tens of thousands of dollars apiece… clearly not feasible for home use.

But consider the history of the semiconductor laser, alluded to above. These devices were invented in the 1960’s. Like all semiconductor devices, they are so small that the cost of materials involved is trivial. However, through the 1970’s, semiconductor lasers sold to researchers cost many thousands of dollars. Why? Because the fabricator of the chip had to recover the one-time cost related to development of the design of the chip and creation of an assembly system. With only a few hundred or thousand units sold, lasers were fabricated by hand or, at most, in semi-automated systems. Naturally, the cost for an individual device was very high.

Enter the compact-disc player. Suddenly, the demand for semiconductor lasers was measured not in the thousands, but in the millions. New factories could be built to mass-produce these devices. The factories were not cheap, but their cost could be spread over vast numbers of lasers. Economies of scale came into play, and the cost of semiconductor lasers dropped within a few years to less than ten dollars. The result? One of the most popular consumer electronic items of the decade, with millions of units sold… each with technical sophistication that would have been nearly inconceivable only a decade ago.

Similar price drops have occurred with custom VLSI devices, with flat-panel display screens, with innumerable items once available only in very limited quantities. As technology improved to make these items mass-producible, consumer pressures brought about huge price drops. There is every reason to believe that a similar pattern will be seen for high-bandwidth optical communication systems.

Tariff issues

Being able to build inexpensive terminals, however, is not the end of the economic story. Local operating companies are regulated by Federal and state governments in terms of what services they can offer and what prices, or tariffs, they can charge for these services. Many regulatory changes will have to be made before your telephone company could offer you broadband service, and the answers are by no means simple. (For example, if you can obtain enhanced video with picture quality equal to 35mm film from your telephone company, what does this do to your local CATV franchise? Or, if the cable franchise is the first to run an optical fiber to your home, can it offer you telephone service?)

In addition, pricing for these services is a thorny problem. Traditionally, telephone companies have charged users proportionately to the bandwidth they use: one telephone line costs a pre-defined amount of money, two lines costs twice that, and so forth. Broadband services, however, make this concept obsolete. A 150 Mb/s channel could transmit the equivalent of over 2,000 voice circuits. If a single telephone line costs $15/month, this would lead to a broadband channel costing $30,000 per month! Clearly, this would never be a popular service. On the other hand, if a broadband channel is priced at a reasonable consumer target of $60/month, this would mean that a single voice channel should cost only three cents per month! At that rate, your local telephone company would quickly go out of business.
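
The arithmetic behind that dilemma, spelled out; the $15 line price and $60 consumer target are the illustrative figures from the paragraph above.

```python
voice_circuit_bps = 64_000
broadband_bps = 150_000_000

print(broadband_bps // voice_circuit_bps)    # 2343 -- "over 2,000 voice circuits"
print(2_000 * 15.00)                         # bandwidth-proportional pricing: $30,000 per month
print(round(60.00 / 2_000, 3))               # implied price per voice line: about $0.03 per month
```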

It is not even clear if the traditional flat rate should apply to these circuits. If one resident uses a broadband channel for continuous video viewing, while another downloads databases via hypertext for a few seconds per hour, should they be charged the same rate? A usage-sensitive pricing structure may have to be devised, with complex monitoring and billing taking place at the central office.

These regulatory and pricing issues will not be solved easily or quickly. Like many other technologies, broadband capability for video and hypertext may be available before society’s institutions are equipped to handle it.

The hypertext ecology

These issues, however, are certain to be solved because in the end, the customer will insist on it. Limited-scale trials will convince operating companies that end-users are willing to pay for these services. Competitive pressures between telephone companies and CATV operators will result in a race to bring fiber to the home. CD players, digital audio tape (DAT), enhanced or high-definition TV, and other new technologies will raise consumer expectations for information access ever higher. And — perhaps in the background, perhaps not — future-generation hypertext systems will begin to provide friendly, useful front-ends to vast amounts of information. Once consumers are hooked on being able to access the answer to any question almost instantaneously, there will be no going back. The hypertext revolution will be truly underway.

Barry Commoner, environmentalist and one-time Presidential candidate, once proposed two laws of ecology: (1) Nothing ever goes away, and (2) Everything is connected to everything else. These laws indeed apply to our physical world, but they will apply equally well to a hypertext-based information world. Information, once entered into the nationwide or worldwide network, will never disappear. It will always be retrievable (perhaps with annotations by the author or others directing the reader to changes or new interpretations since the original entry). And all pieces of knowledge will be inextricably connected in a densely woven meshwork, where one item leads to the next leads to the next in a dizzying network of logical links. No two users will follow the same path through the meshwork, since each individual will follow different interests or different lines of reasoning.

As users develop new information or new links between existing pieces of information, these can be added to the meshwork. Elements of the network will evolve, just as in a natural ecology, growing larger or smaller, changing function, or merging with other elements. The rules and relationships will be as complex as those in our natural ecology, and as mutable. And people will be part of this meshwork as well — indeed, the most important part of all. For the hypertext ecology will not exist for its own purposes, but will exist to serve its users — providing easier, faster, and more intuitive access to information for all.

Technologies now on the drawing boards will make this hypertext ecology possible. The challenge for the hypertext application designers, the information providers, and the network operators will be to extend the ease of use of HyperCard to a future hypertext system linking millions of users to quadrillions of bits of information… as easily as clicking a mouse.

Written at the invitation of a telecom trade journal in late 1987 or early 1988, but never published.