BABBA issue # 13 - March 1994, page 14

Part 2: An Overview of SCSI Technologies - This month Fred continues his four-part series on SCSI technology with an overview of terms. Part three will cover peripheral/adaptor installations. Part four will cover hints and kinks, including how to make your own SCSI cables.

Are You Looking SCSI Lately?

(By Fred Townsend)

What is SCSI? SCSI is an acronym for Small Computer System Interface... but Interface to what?

Perhaps a better name would have been Computer Peripheral Interface, for not all SCSI applications involve small computers, nor does the word "system" necessarily apply. Today SCSI applications range from terabyte jukebox archival data-storage systems to 2.5-inch laptop disk drives. SCSI interfaces can be found on tape drives, CD-ROMs, scanners, printers, disk drives, and hundreds of custom applications. These sub-system peripherals typically form building blocks, rather than complete systems. This provides a flexibility beyond what is possible at the system level. Let's look at the various SCSI building blocks, starting with protocols.

Protocols Govern Information Transfer
The original focus of the SCSI specification was on defining a feature-rich SCSI command set, a set of rules aimed primarily at hard disks but applicable to smart peripherals of all kinds. The protocols, sometimes referred to as the software side of SCSI, are the rules that govern the operation of the SCSI community of electronic devices. Manufacturers translate these rules into the firmware that controls their hardware.

A SCSI circuit consists of a Master and one or more Targets. When two or more SCSI targets are accommodated, the interface is usually referred to as a SCSI bus. The term interface is sometimes used in place of bus when a non-expandable single target is connected and frequently refers to a Proprietary or Non-Compliant interface.

Maximum flexibility is achieved because SCSI devices can describe their capabilities and parameters to the master and the master's host. Using these descriptions the host can, statically and sometimes dynamically, configure the SCSI components for maximum performance or maximum capability.

Static configuration begins at system boot-up with the bus master. Multiple masters can reside on the bus so the master with the highest address becomes the permanent bus master. Other masters, if present, are known as temporary bus masters. Temporary bus masters are also targets. At power-up or boot time the ranking SCSI master initiates the session by calling roll.

When an address is called by the master, the target device answers with a short message equivalent to "present". At this point the master knows only that a device resides at the answering address. Then the master re-interrogates the answering address with a command equivalent to "Tell me about yourself". In this way the master learns the characteristics of each target, including whether that target has temporary bus mastering capability. There is no requirement for the targets to be similar, so a hard disk and a printer can coexist on the same bus.

The master knows, from its previous interrogation, the characteristics of each device so it can request each target use, or not use, any of its features during the session. Multi-threaded operation is fully supported. If one target lacks a capability it does not prevent the master from using that capability with some other device.

What are SCSI characteristics? Characteristics range from a device's physical parameters, such as type of device, manufacturer, capacity, and serial number, to the repertoire of commands or modes the target understands.
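The roll-call-and-interrogate sequence described above can be sketched in a few lines. The device table and capability fields below are hypothetical illustrations, not the actual SCSI INQUIRY data format:

```python
# Sketch of the boot-time roll call described above. The device table and
# capability fields are hypothetical, not the real SCSI INQUIRY wire format.
DEVICES = {
    0: {"type": "disk",    "temporary_master": False},
    3: {"type": "printer", "temporary_master": False},
    5: {"type": "disk",    "temporary_master": True},
}

def roll_call(bus):
    """The ranking master calls each address and interrogates any answerer."""
    found = {}
    for address in range(7):               # the master itself holds address 7
        if address in bus:                 # target answers: "present"
            found[address] = bus[address]  # then: "tell me about yourself"
    return found

targets = roll_call(DEVICES)
```

A real master would issue actual bus commands for each step; the sketch captures only the discovery logic, including how the master learns which targets can act as temporary bus masters.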

Fast Track
With attention from mainframe computer companies as well as Apple, SCSI was off to a fast start. There were some problems getting the hosts to always communicate correctly with the SCSI devices, but with a defined set of protocols, the problems were soon ironed out.

The original specification defined protocols, but did not address other aspects of SCSI in detail. The committee envisioned the mechanical, electrical, and timing aspects of SCSI would pretty much define themselves. Only at one point, the connector interface, was there any attempt at hardware standardization, and this was somewhat ignored by the mainframe manufacturers. The manufacturers were accustomed to defining their own connectors and cabling, so were not concerned with optional SCSI mechanical specifications.

SCSI Cables - Frequently a Source of Problems
It is common practice in designing cables to use the odd-numbered conductors as signal returns (grounds) while the even-numbered conductors carry the signals. The odd-even alternation is a technique that transforms a ribbon cable into clustered transmission lines. High-quality transmission lines are necessary to carry the high bandwidth SCSI signals. Failure to recognize this requirement was one source of cable problems.
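The odd/even scheme is easy to state as code. Assuming a 50-conductor ribbon cable, each of the 25 signal conductors rides beside its own ground return:

```python
# Odd-numbered conductors are signal returns (grounds); even-numbered
# conductors carry the signals, so every signal runs beside its own return.
def conductor_role(n):
    return "ground" if n % 2 == 1 else "signal"

roles = [conductor_role(n) for n in range(1, 51)]  # a 50-conductor ribbon
```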

The original SCSI specification called for nine data and nine control signals. With grounds, the minimum number of wires came to about 40. It is not clear if the original specification, extracted from the Shugart specification (SASI), envisioned future expansion. Perhaps it was just handy because 50-wire cables were already used on eight-inch floppy disk drives, or perhaps they just wanted a connector that could not be mistaken for the MFM hard disk connector. Whatever the reason, the selection of a 50-conductor connector and cable provided 10 extra conductors that would tempt some manufacturers to design custom applications, but not enough conductors to support WIDE or DIFFERENTIAL applications.

SCSI's optional pin-outs were a problem for the do-it-yourselfers. Non-mandatory connectors and pin-outs allowed host adapter designers to copy the approach of the mainframers by using connectors and pin-outs that were convenient to their own boards rather than following the optional recommended pin-outs. Unfortunately, this forced the use of adapter cables that, because of their custom pin-outs, were expensive and hard to get. One manufacturer used a DB-25 connector for their external interface, the same connector already used for the XT's serial and parallel ports. This provided endless possibilities for errors.

IBM BIOS Not Friendly to SCSI
The biggest SCSI problems were seen in placing SCSI within the IBM box. The IBM BIOS was not friendly to any hard drives, particularly SCSI hard drives. The IBM XT hard disk interface was awkward at best.

To overcome the XT's deficiencies, IBM tried a different approach on the AT. It seemed that some of IBM's utilities like FORMAT needed to know the size of the disk. Rather than trust the user to enter the disk capacity directly, the BIOS wanted the drive parameters so it could compute drive capacity. But drive parameters were hard to obtain and sometimes confusing.

IBM chose to treat the symptom rather than the problem by placing drive tables containing drive parameters within the AT BIOS. This decision would bite IBM and its camp followers at almost every turn.

For example, take just one drive parameter, sector size. All modern drives are soft sectored, meaning that sector size can be a variable. What is the benefit of having a SCSI drive that can format eight different sector sizes if the BIOS only understands one size?
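The arithmetic behind a drive table entry shows why a fixed sector size got baked in. The geometry below (615 cylinders, 4 heads, 17 sectors per track) is used only as an illustration of a typical early table entry:

```python
# Capacity as the AT BIOS computes it from a drive table entry. The geometry
# here (615 cylinders, 4 heads, 17 sectors/track) is illustrative only.
def drive_capacity(cylinders, heads, sectors_per_track, sector_size=512):
    return cylinders * heads * sectors_per_track * sector_size

capacity = drive_capacity(615, 4, 17)  # with the one sector size the BIOS knows
```

A drive formatted with any other sector size would report a capacity the BIOS simply gets wrong, which is the heart of the complaint above.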

Work Around to the Work Around
The apparent solutions to the IBM BIOS hard disk problem were to either use a device driver or replace the BIOS. To a host adapter manufacturer this is like being asked, "Do you want to be hung or shot?" Device drivers must be installed from the hard disk, and if the hard disk is not booted, the driver cannot be installed. Changing the BIOS directly is equally impossible, since the BIOS resides in non-writable ROM.

Lies, Lies, Damned Lies
The realizable solution was to patch the BIOS using a technique known as a run-time patch or an overlaid BIOS. At boot-up, after the BIOS has been moved from ROM to RAM and before the hard disk is accessed, the BIOS searches its address space for peripheral devices. At this time the host adapter interrupts the boot process to install a patch contained within its own ROM. The patch redirects the BIOS hard disk processes to those contained within the overlay BIOS. The overlay interprets and filters information going to and coming from the standard BIOS. If necessary, it alters the information and, in effect, lies to the standard BIOS.

For instance, if there are three physical hard drives, the BIOS would become confused because it only understands a maximum of two physical hard drives. In this case the overlay tells the BIOS there are three logical drives and since the BIOS understands up to twenty-six logical drives everything proceeds normally.
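That physical-to-logical translation can be sketched as follows. The function and limits are a hypothetical illustration of the overlay's behavior, not actual BIOS code:

```python
# Hypothetical sketch of the overlay's "lie": the standard BIOS counts at most
# two physical hard drives, but understands up to twenty-six logical drives.
BIOS_MAX_PHYSICAL = 2
BIOS_MAX_LOGICAL = 26

def overlay_report(n_physical_drives):
    """What the overlay tells the standard BIOS about the drives it found."""
    if n_physical_drives <= BIOS_MAX_PHYSICAL:
        return {"physical": n_physical_drives, "logical": n_physical_drives}
    # Too many physical drives: present them all as logical drives instead.
    assert n_physical_drives <= BIOS_MAX_LOGICAL
    return {"physical": BIOS_MAX_PHYSICAL, "logical": n_physical_drives}

report = overlay_report(3)  # three physical drives become three logical ones
```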

A New Beginning
The SCSI architects knew they could not anticipate all requirements in their original specification so they installed the hooks for future adaptation and invited the expansion of the command set to serve the needs of other devices. The tape industry was one of the first to take advantage of this process by adding over 100 additional commands.

There were others that argued that just adding commands was not enough. SCSI needed some fixing too. As the mix-or-match IBM clone market emerged, it became clear that better connectors and cables were needed. Also, there needed to be specifications for terminating those cables and perhaps a way of making SCSI faster. In turn, a faster SCSI would mean more cable problems.

In 1991, almost six years after the release of the original specification, SCSI-2 was released. (The original SCSI specification was renamed SCSI-1 to distinguish it from SCSI-2.) Differential and serial variants were among the many new features. The specification, actually a collection of specifications, totals over 400 pages, over twice the size of SCSI-1.

The SCSI-1 and SCSI-2 specifications vary significantly in their approach and content. SCSI-1 addressed software issues while almost totally ignoring the physical side. SCSI-2 attempted to correct this deficiency by adding cable and connector specifications. It also added the SCSI-WIDE capability with 16- and 32-bit buses.

The SCSI-1 specification suggests a maximum cable length of six meters. Six meters (almost 20 feet) is plenty of cable for desktop machines but a little short for the mainframes, where the disk bays are sometimes 50 feet away from the CPUs. To provide longer cable lengths, SCSI-2 added differential cable specifications. Differential signals are much more robust and therefore may be used at much longer distances.

Membership on a SCSI committee is a difficult and expensive ordeal. It is not something most individuals can do without sponsorship. It should not surprise anyone that the big companies are more heavily represented on the committee. It also should not surprise anyone that politics plays a part in the decision making.

One of the criticisms of SCSI-1 was the overly permissive nature of its optional specifications. SCSI-2 did little to change permissiveness. Six options were removed and four requirements were added. In deference to the large companies already using their own connectors and pin-outs, those pin-outs became the recommended pin-outs.

In deference to the smaller companies doing their own thing, the pin-outs were made optional. SCSI-2 did little to fix the known problems of SCSI-1. There is reason to ask why SCSI-2 was even released. The apparent answer is that after six years, it was time to release something.

Work on SCSI-3 began almost immediately. As of this writing, SCSI-3 is an unapproved specification. Until it is approved by the SCSI ASC (the ANSI Accredited Standards Committee X3T9 technical committee), the currently approved version remains at the SCSI-2 level. That does not stop companies from using the current working version of SCSI-3 as a design standard. However, companies doing so proceed at the risk the specification may change before final release.

SCSI-3's 600-plus pages break new ground. Unlike its predecessors, it is a much more restrictive specification. Mandatory connectors and minimum and maximum cable lengths are specified. Also, the electrical characteristics of the cables and the signals driving the cables are specified.

SCSI-3 appears to have solved many of SCSI's lingering problems while creating a few new problems. For instance, it specifies a minimum of 0.3 meter (1 foot) between cable connectors. This may cause some stuffing problems for small boxes. Also, the internal connectors are not retained by screws or clips. A slight vibration or torquing of the cable will cause them to pop out. Unless re-specified, manufacturers will need to use supplemental restraints to keep the cables from bouncing out. Next month, installing SCSI.

Page 14 had ads for the Terminal One and the Weasel Den 2 BBSs.


The Review Corner

Reprinted from ComputerTalk Magazine

QmodemPro for Windows version 1.0

(By Tony Curro)

The long-awaited communications program for Windows has arrived and the wait was worth it. Qmodem, which started nine years ago as a shareware venture by John Friel and the Forbin Project, quickly became a DOS communications program of choice. Fairly recently, Mustang Software ( acquired Qmodem. They, along with John, gave us Qmodem 5.0, QmodemPro v1.0, and now, QmodemPro for Windows (QMPFW).

QMPFW is all things to all users. I had been a DOS Qmodem user since 1987. I chose to abandon it, because I preferred a communications program for Windows (Qmodem worked fine from a DOS shell in Windows, but was not Windows-oriented). Now that I have QMPFW, I feel like I've returned home.

QMPFW installs quickly and has an intuitive Windows interface. You get a toolbar across the top, and a macro bar and status line on the bottom. For those who prefer a cleaner screen, you can eliminate the two bars. Pressing F1 gives you context sensitive help from anywhere within the program.

Features Galore
QMPFW comes with Windows (.WAV) sound files, RIP icons, ANSI music support, and all the popular transfer protocols (including Zmodem). QMPFW has more than 30 terminal emulations, including common ANSI, VIDTEK, and the newest RIPscript format featuring graphics and mouse support.

QMPFW includes a text editor and a GIF viewer to view a GIF image while downloading it. You can also view GIFs stored on your hard disk. QMPFW even lets you zoom any GIF or BMP file, and copy portions of the pictures to the Windows clipboard.

Fax Support
QMPFW can send and receive faxes using any Class 1 or Class 2 fax modem. You can choose among eight fonts for faxing. Use one of the fax cover sheets included, or create your own using the variables shown in the manual. Documents for fax must be in either an ASCII format, or a PCX/BMP graphics image.

Mega-Phonebook Support
You can have the phone book pop up on startup, or click on the dialer icon to bring it up. The phone list appears in a typical line-by-line format, or you can view it in icon mode. The icons are on the left, with the standard dialing information next to each. There are many icons supplied or you can add any valid icon of your choice. The phone book can be sorted and printed in a number of ways.

Each phone book stores up to 4,096 entries, with an unlimited number of phone book files. Each entry in the phone book can have five phone numbers. You can also create groups of people to call. For example, if you call five offices each morning to get a report, you can put these in one group and call all five without having to re-select them each day.

Creating, adding, or deleting numbers or groups is simple. Deleted entries are not automatically removed from the phone book file, even though the entry no longer appears on screen. Using the Pack command, (from the File Menu) will permanently remove these entries.

Converting your old phone book will not be a problem. QMPFW can convert any phone book file from previous versions of Qmodem and QmodemPro. In addition, QMPFW includes a DOS convert icon. This converts ProComm Plus 1.1 & 2.0, Telix 3.1x, and Boyan 5.x phone book files.

Other features:

QMPFW Requirements:

QMPFW comes with both 5.25" and 3.5" disks, and has a 30-day money-back guarantee. Upgrades are available from previous versions of Qmodem.

Page 16 had ads for Mookie's Place BBS (, and the IBBS West, Lincoln's Cabin, and RoadKill Grill BBSs.

Page 17 had a full-page ad for Mustang Software (
Page 18 had a full-page ad for the Clark Development Company.
Page 19 had a full-page ad for Delphi Internet (

Macintosh Cross-Platform Operating Systems - Bedrock or Quicksand?

(An opinion by Paul B. Pearson)

The once uniform world of Macintosh is about to be shattered into a Tower of Babble of cross-platform operating systems from Microsoft (Chicago and Cairo), Taligent (Pink), and perhaps others. Enter the RISC-based Power PC, and the days of the free-lunch Mac OS are over.

Until now, developers who wanted to sell both Macintosh and PC software had to write and maintain two sets of code; one for the Mac and another for the PC.

A cross-platform environment lets you write one code set, which may be compiled for both platforms. One method of doing this has been to use a very high-level development environment. Examples of these environments are XVT, Serius, and Prograph. These tools produce two sets of code (or more), such as one for Mac and one for Windows. For the most part, these have been acceptable for in-house MIS systems, but fall short for most mainstream software requirements. Some companies, including Microsoft and Symantec, have built their own cross-platform development environments, but have not made them available to other developers.

Apple's Troubles
1993 was a bad year for Apple and its developers. The infamous lawsuit against Microsoft was lost. Apple's stock plummeted to about 1/3 of its value, prompting two buyout offers (one friendly, from Hewlett-Packard, and one hostile from Sony). Perhaps worse, Apple may have lost credibility with its developers in 1993. Even the usually great annual party for developers was a disaster. With the psychedelic '60s as the theme, esteemed Dr. Timothy Leary, the co-inventor of LSD, was invited to speak, but was kicked out by security when he (surprise!) advocated drug usage.

A Warning for Developers
Paraphrasing from memory, a portion of the speech by Jean-Louis Gassée, the famous and controversial former Apple bigwig (later to become an outcast), at a World-Wide Developers Conference (WWDC) a few years ago:

"Apple is like a big, fat, sow pig. The sow suckles and nurtures its young, as Apple does its developers. Occasionally, the sow rolls over killing its offspring. The moral is; do not get too close to the sow, lest the sow roll over on you." My apologies, since Mr. Gasse's precise words were, of course, far better than my paraphrased remembrances, but the thoughts, as well as the analogy, were too good not to pass on. Mr. Gasse paid his own way from France to deliver that message to Apple developers.

Developers for the Macintosh have always had a rough time. First, it takes considerably longer to write programs for the Mac. Then you have to compete against Apple, which seems to throw more and more into the Mac System software. Then there is Claris, Apple's commercial software company.

Apple constantly pulls the rug out from under developers with ambitious, new System software, demanding the kind of re-engineering of programs that only large companies can afford. And then you end up with about 1/8 to 1/10th of the PC market.

If that wasn't enough to make you put up with the IBM PC/clone world, there are the limited 'C' software development platforms for the Mac. These are primarily the under-powered but easier-to-use THINK C (from Symantec), or the cumbersome MPW (from Apple).

MacApp Unfinished
Some years ago, Apple introduced a new application framework for developing Mac programs, called MacApp, for easing creation of Macintosh interface components. Years later, MacApp remains badly flawed and unfinished. Reports are that Apple had only one engineer working on MacApp late last year.

A group of MacApp programmers resorted to forming their own support group, MADA. MADA originally stood for "MacApp Developers Association". Recently, MADA decided to disassociate itself from MacApp. If you think that's scary, we haven't even gotten to the part about Bedrock yet!

Back to Bedrock
Perhaps the biggest announcement at Apple's 1993 World-Wide Developers' Conference (WWDC) was the Apple-Symantec alliance to create a new cross-platform development environment, named "Bedrock". The announced migration path to Bedrock was to be Apple's C/C++ "MacApp". That should have brought up the warning flags. That, and the fact the food was even worse than that usually served at "announcement" parties.

Symantec urged Apple to use Microsoft's OLE (Object Linking and Embedding) standards for Bedrock. Despite this, Apple adamantly insisted on implementing its own OpenDoc architecture, which is, once again, unfinished.

Winds of Change
Meanwhile, Microsoft has announced that it will productize its internal cross-platform development environment this summer. Apple is suddenly trying to de-emphasize the cross-platform needs of Mac developers, while at the same time, cross-platform work at Claris continues. Bedrock, way behind schedule, and without Symantec, is now going to be used as the vehicle to bring OpenDoc to Mac developers - whether they like it or not. Therein lies the crux of the problem: The purpose of these development systems may be to infuse the development community with proprietary technologies, and, effectively, Gillooly the competition.

The word 'proprietary' is one that is being used (more often of late) to describe the Macintosh platform. Late in 1993, the Boeing company decreed that the Macintosh was "proprietary", and was to be phased out. Now that Apple has begun selling the Mac OS, developers often find themselves competing for all but the most vertical of markets. Last year, Roger Heinen, then an Apple VP, declared that "Apple is a software company" - shortly before he went to work for another software company: Microsoft.

While there is a growing concern about competition from Apple among Macintosh software developers, dealers are also finding themselves competing against Apple more and more. Some years ago, one of my engineers laughed at the notion that Apple would sell products by direct mail order.

It started with the Apple Developers program. This required someone to say they were planning to develop an Apple software product sometime in the next 2 years and pay a fee to Apple. Then, they could begin buying hardware directly from Apple, for their own use, for less than a dealer's cost.

These direct sales were extended to educational institutions, Value Added Resellers (VARs), and recently, to businesses. At the same time, Mac pricing has been reduced, lowering dealer's profit margins.

Another cause for alarm came a couple of years ago, when Apple gave HyperCard, formerly System software, to Claris, where it was sold as a commercial product. A new generation of developers had used this high-level development environment (that MacApp should have been) to create software products. As part of the System software, developers had previously been able to license and bundle HyperCard with their products for a mere $100 per year.

Then there was the Xtend debacle, where developers were invited to share their file formats, with the understanding that they would be able to use Xtend with their own products. Claris later decided to keep the technology proprietary.

Now that Apple has begun selling its System software, it has opened the door for Microsoft to become a legitimate threat with its forthcoming cross-platform system software and development environment. While die-hard Mac fans, including your author, expect the Microsoft system software to be a somewhat pale and clunky imitation of Apple's Mac OS, the cross-platform allure may prove overwhelming to developers. Remember that Apple opened the door for Microsoft to get much closer to the Mac OS than it would have previously dared, with the failed lawsuit.

The other piece of the cross-platform puzzle is the RISC computer. The Apple Power PC, scheduled for release next month, will have emulation modes for Macintosh or IBM, but will not run both systems simultaneously. IBM and Apple formed Taligent to create operating system software with the express intention of eliminating Microsoft's dominance of the OS market. Might Microsoft spell relief "anti-trust"?

If Microsoft can deliver a cross-platform solution (Cairo) that includes OLE 2, a cross-platform OS, and a development environment, it may establish dominance over both markets, well before OpenDoc and Bedrock get off the ground.

Developers may have to choose between platforms, based on licensing issues, and whose platform is more open, versus whose is more proprietary. Credibility may be the decisive factor for developers. If Apple is to succeed, they will have to make good on the promises of years past; not just continue making new ones.

At issue may well be the future of the Macintosh itself. While Mac developers may continue to develop cross-platform software on the Mac, if the development tools are available, most of their customers may be on the IBM (and clones) PC side. As the new RISC machines begin to take over, the question of "Is it a Mac or is it a PC?" will become "Is it a Microsoft or Apple OS?".

Page 20 had ads for DC-to-Light, and the Olde Stuff BBS.

Page 21 had ads for RGB Monitor Repair,
Just Computers (,
and the Black Rose and iNFormation Exchange BBSs.

The New Superhighway?

(By Steve Kong and Mark Shapiro)

No one disputes Vice President Al Gore's role in promoting data transfer-related technology. However, while Mr. Gore runs around the country extolling the "new" Information Superhighway, a lot of people are looking at him and saying, "New?"

The Superhighway has been here for a long time. The old technology is just getting a spiffy new name. This is not a bad thing, because the new name and new promotion will further the growth of online communications.

What is the Information Superhighway?
The media has classified many technology-related topics under the new promotional name of Information Superhighway. A few radio and TV broadcasters have hinted their broadcasts are part of the Information Superhighway! Rather than a single technology or product, the Information Superhighway is a buzzword to describe many old and new technologies. The Internet is the real Information Superhighway.

The Internet
The Internet has connected people around the world for years, providing many services.

BBSs and the Internet
More BBSs are offering Internet email and Usenet services every day. Currently, few BBSs offer other Internet services such as FTP, IRC, and Telnet. Most System Operators cannot afford the cost of a "real-time connection" to the Internet. BBSs usually stick with a UUCP connection. UUCP is popular because it is the least expensive alternative.

A UUCP connection has Internet mail and newsgroup messages spooled to disks at each site. Data packets are sent to the site's server periodically. This can range from once every 5 minutes to once a week.
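The spool-and-batch cycle can be sketched in a few lines; the message strings and function names here are hypothetical:

```python
# Hypothetical sketch of UUCP-style store-and-forward: outbound mail and news
# accumulate in a local spool, then go out in one batch when the site makes
# its periodic connection to the server.
spool = []

def queue_message(message):
    """Spool a message (on disk in real UUCP) until the next connection."""
    spool.append(message)

def connect_and_exchange():
    """Runs on a schedule: anywhere from every 5 minutes to once a week."""
    batch = list(spool)
    spool.clear()        # spool stays empty until new messages arrive
    return batch

queue_message("mail for a remote user")
queue_message("a newsgroup posting")
batch = connect_and_exchange()
```

Nothing moves between connections, which is exactly why UUCP is cheap and why it cannot support real-time services like chat.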

Real Time BBSs?
Both Sysops and the public crave the real-time offerings of the Internet. People want to chat with people around the world in real time. People want to get files from the thousands of gigs on the Internet.

The most basic "real-time" connection is called SLIP (Serial Line Internet Protocol), with connection speeds up to 28,800 bit/s. A better real-time connection type is TCP/IP (Transmission Control Protocol/Internet Protocol), with speeds exceeding 57.6Kbps. A real-time connection can cost a bundle. Expect costs to be about $175 per month, plus startup fees and the considerable software/phone line/hardware investment.

The bulk of Superhighway growth will likely come from local online services. If the government wants to expand the Superhighway, I suggest it work on lowering the connection costs. This way, Sysops would be able to offer real-time services at little or no charge. This would really get people cruising on the Information Superhighway.

More Information Superhighway topics:

Computer Multimedia
Multimedia is occasionally included in discussions of the Information Superhighway. Multimedia computers typically have CD-ROMs, animation software, and interactive programs. Rather than being part of the Information Superhighway, multimedia capabilities are part of the "Computer Technology Superhighway". (Pardon my invention of a spiffy new name.)

The "Computer Technology Superhighway" describes the exponential acceleration of electronic and computer technology that makes things like multimedia and the "Information Superhighway" practical. Data compression, especially hardware-based, is an important part of any networked-multimedia application. Revolutionary advances in disk drives, memory, software, peripherals, and processors have increased the efficiency and potential of most applications and technology based fields.

Interactive TV
This is commonly labeled as a tool of the Information Superhighway. To some extent, the uses of Interactive TV are similar to what can be achieved through high-speed modems, although for now, no amount of modem compression can provide a full-duplex, live, full-screen, high-definition picture with audio.

In the beginning, Interactive TV will be devoted to games and entertainment. Interactive TV should also be used for education.

The Internet has been here for years, but no significant educational courses have been offered through it. A few universities are already accessible through modems, but it costs the same as conventional education. Ideally, the Internet and Interactive TV could be linked together to form a low cost universally accessible educational system.

Interactive Educational TV faces two challenges:

1) Will it be done right? We have had educational TV shows for a long time. There are already programs where you can earn college credits. To date, these programs typically offer college credits only for lower level elective subjects.

Why not be able to earn a college degree by participating with Interactive TV? What better (and more cost effective) way is there to share the best teachers with the maximum number of students? Being self-paced, Interactive TV could maximize the educational benefits for both the brightest and weakest students. Also, imagine the savings in time, money, and environmental wear. Of course, human interaction and exchange is necessary for education, particularly for pre-adults. Perhaps the time spent in conventional schools could be reduced while the quality of that time is increased.

2) Who will pay for educational Interactive TV? Those who create quality entertainment usually get rewarded for their efforts. The reward for creating educational products is more elusive. The difference between educational and entertainment software is similar to the difference between candy and vegetables. Like vegetables, education does not give instant gratification.

Since the government regulates education, perhaps they can meet the challenge of organizing and implementing a quality Interactive TV education system for the masses. The education should be real and meaningful, replacing some classroom time.

High-Bandwidth Technologies
ISDN, ATM, and other high-bandwidth technologies are often touted as being the Information Superhighway. Improvements in transmission line technologies and implementations will lower the cost of high-speed access to where the end-users can benefit. The increase of speed will make it more practical to move large amounts of data, voice, and video.

Improved transmission line speeds will not cause a revolution. Rather, the improved speeds will have an evolutionary effect. To put it in perspective, BBSs are still used for much the same purposes with both 28.8K and 1200 bit/s modems. An ISDN connection to your house is not that much faster than today's 28.8K modems.
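The back-of-the-envelope arithmetic supports that view. Assuming roughly ten bits per byte to cover async framing overhead (an approximation, not a measured figure), a one-megabyte file moves as follows:

```python
# Rough transfer-time arithmetic: ~10 bits per byte approximates start/stop-bit
# framing overhead on an async modem line.
def transfer_seconds(file_bytes, bits_per_second, bits_per_byte=10):
    return file_bytes * bits_per_byte / bits_per_second

ONE_MEGABYTE = 1_000_000
t_1200 = transfer_seconds(ONE_MEGABYTE, 1200)   # 1200 bit/s modem
t_288 = transfer_seconds(ONE_MEGABYTE, 28800)   # 28.8K modem
t_isdn = transfer_seconds(ONE_MEGABYTE, 64000)  # one 64K ISDN B channel
```

At 28.8K the file takes about six minutes; a 64K ISDN channel cuts that to about two and a half minutes. An improvement, but hardly a revolution on the scale of the jump from 1200 bit/s.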

Wireless Technologies
Wireless (usually radio) technologies are also grouped under the Information Superhighway umbrella. For the most part, wireless technology adds great convenience to existing applications.

The Big Picture
"Information Superhighway" is the new buzzword. Every day, more companies announce that they are going to be a part of our future Information Superhighway needs. Even database companies are claiming to be key players. Several industry heavyweights are currently in a mad rush to be the first to corner the market on the ultimate 500-channel wireless computer/Interactive TV, connected to the Internet, with voice and video processing.

What it Means to You
When you hear the words "Information Superhighway", think of a linear progression of existing computer and communication technologies and the somewhat linear effect it will have on our lives. The Information Superhighway won't go to work for you, won't brush your teeth, or pay your taxes. It will make society more efficient. This can only accelerate the permanent trend of needing fewer people to work in our society. Perhaps a reduction of the 40-hour work-week is in order...

Pages 22 through 34 had detailed listings of Bay Area BBSs.

Page 22 had an ad for the Bust Out BBS.

Page 27 had ads for the UFO BBS, Atlas BBS/Internet Service, and UNIROM.

Highway Asphalt

(By Robert Holland)

Pacific Bell announced last November that it would spend $16 billion to begin building a portion of proponent Al Gore's data superhighway in Silicon Valley and other metropolitan areas of California.

Pac Bell plans to replace all analog phone equipment, from the connection box on the side of your house to the switching equipment in the local phone offices. In the process, Pac Bell plans to eliminate phone line noise by taking copper wire out of the equation and replacing it with coaxial cable.

The Phone Company Delivers
Whether you talk about Vice President Al Gore's vision of a national data superhighway, or today's Internet, the data has to travel over some medium.

These huge, diverse networks rely on connection services offered by the phone companies. The Internet would not exist without the services of the phone companies that carry the data. If your business wants a live connection to the Internet, you have to install and pay the monthly charge for a digital phone line (typically 56 Kbps) that connects your Internet node computer to your local Internet service provider's computer. These services are expensive.

Under Pac Bell's plan, each home would have access to digital phone lines with much greater capacity than 56-Kbps lines. Pricing for digital access to network services could drop drastically for businesses, but don't expect home access to come cheap.

Pac Bell Wasting No Time
Residents of Silicon Valley, California, received notice that their local phone office would be upgraded from analog to digital technology. Pac Bell plans to connect more than 1.5 million homes to the new services by 1997, and 5 million by the turn of the century. Silicon Valley customers can expect full digital telephony services by 1996.

Pac Bell is rushing to capitalize on the surge toward delivered digital services. The competition is stiff, with cable companies stringing fiber optic lines that can deliver 500 channels of television to each home.

Digital Switching
The key to Pac Bell's plan is the digital phone switch developed for the local telephone office. A digital switch, such as AT&T's 5ESS or Northern Telecom's DMS-100, carries no analog signals. The digital phone switch is a huge computer-controlled array of solid-state switches, and it handles all call routing in software.

All voice or modem-generated analog signals from your house or office will be converted into small packets of digital data. These packets are delivered to their destination based on the address embedded in each packet. At the destination, the packets are reassembled and converted back into analog information. Other network components bring digital service to the doorstep.
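The packetize-and-reassemble idea can be sketched in a few lines of Python (a toy illustration, not Pac Bell's actual switching protocol; the address format and packet layout here are invented):

```python
# Toy sketch of the idea described above: digital data is split into
# small packets, each carrying the destination address, and
# reassembled at the far end even if delivery order is scrambled.
# This is illustrative only, not any real switch's packet format.

from dataclasses import dataclass
import random

@dataclass
class Packet:
    dest: str       # address embedded in every packet (invented format)
    seq: int        # sequence number used for reassembly
    payload: bytes

def packetize(data: bytes, dest: str, size: int = 4) -> list[Packet]:
    """Split data into fixed-size packets addressed to dest."""
    return [Packet(dest, i, data[i * size:(i + 1) * size])
            for i in range((len(data) + size - 1) // size)]

def reassemble(packets: list[Packet], dest: str) -> bytes:
    """Collect the packets addressed to dest and restore their order."""
    mine = sorted((p for p in packets if p.dest == dest),
                  key=lambda p: p.seq)
    return b"".join(p.payload for p in mine)

packets = packetize(b"hello, digital switch", "555-1234")
random.shuffle(packets)  # the network may deliver out of order
print(reassemble(packets, "555-1234"))  # b'hello, digital switch'
```

The sequence numbers are what let the destination put the conversation back together no matter how the packets traveled.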

Home Run
We have had digital phone lines for years. In the past, digital phone traffic was limited to the lines between local phone offices, or to businesses who paid enormous sums for access to specially laid digital phone lines. Now, in California, the digital revolution will reach our homes.

Fiber-optic cable will run from the digital switch at the center out to the neighborhoods. The fiber will terminate in the neighborhoods at Host Digital Terminals (HDTs). The HDTs collect signals from each home, combine them, and send them on to the central digital switch.

Each HDT and fiber-optic cable can serve as many as 500 homes. A single coaxial cable will run from each home to its local HDT. The cable has the capacity to carry data at Ethernet speeds (10 megabits per second). The bandwidth capacity of the coax cable will range from 50 to 750 MHz from the local switch to your house, and 5 to 40 MHz in the opposite direction.

The bandwidth is allocated in a lopsided fashion in order to provide near-broadcast-quality television signals (at 60 frames per second) to each house. This means you'll be able to send out data at speeds as fast as a local area network, but your ability to transmit video will be extremely limited.
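A back-of-the-envelope calculation shows just how lopsided the split is (the 6 MHz channel width is the standard NTSC television figure, an assumption not stated in the article):

```python
# Back-of-the-envelope sketch: how the two coax bands described
# above divide into standard television channels. The 6 MHz channel
# width is an assumption (standard NTSC), not an article figure.

CHANNEL_MHZ = 6

downstream = 750 - 50   # MHz toward the home
upstream = 40 - 5       # MHz from the home back to the HDT

print(downstream // CHANNEL_MHZ)  # 116 analog channels toward the home
print(upstream // CHANNEL_MHZ)    # 5 channels in the return direction
```

About 116 analog channels fit downstream against five upstream; digital compression, which packs several programs into each 6 MHz slot, is how the 500-channel figures arise.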

Mounted on each house will be a device called, in phone company jargon, a Network Interface Unit (NIU). The NIU converts data received from the HDT and sends it over the home's copper wire, or through coax to a box on top of the television. The NIU converts digitized voice data into the analog signals that your current phone, television, and modem use. The NIU also monitors network quality.

Because your analog signals will be converted to digital information at your house, line noise could become a thing of the past. Digital information is less susceptible to noise because if a few packets fail to arrive, they will be sent again.

Who Ya Gonna Call?
While the digital network is good news for Californians, we may have to wait for the rest of the country to catch up. Take heart. Pacific Bell is not the first out of the gate with digital phone technology. BellSouth Telecommunications last summer announced they had connected their 50 millionth telephone line to a 5ESS switch.

Speed Bumps
Because any phone connection is only as good as its weakest link, you may find that calls to analog parts of the country are still subject to line noise.

A step up to all-digital equipment in the home will prove costly. You won't be able to connect a cheap ethernet card to the phone network and expect it to work. You'll have to subscribe to a digital service, like ISDN (Integrated Services Digital Network). You will also have to install an ISDN terminal adapter in your computer to take advantage of the service. A Hayes ISDN terminal adapter for a PC lists for $1,395.

Furthermore, you might find yourself billed not for access time, but for the amount of data you transmit.

Obvious BBS Implications
It is possible that Sysops will soon offer direct digital connections to their BBSs. Digital telephony makes it possible for multitudes of BBS operators to form ad-hoc networks that connect at very high speeds. Imagine transferring a 1-megabyte file in under a minute!
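As a sanity check on that one-minute figure (a sketch; it assumes 8 bits per byte and no protocol overhead):

```python
# Sanity check on the claim above: what line rate moves a
# 1-megabyte file in under a minute? Assumes 8 bits per byte and
# no protocol overhead.

file_bits = 1_000_000 * 8
needed_bps = file_bits / 60
print(round(needed_bps))  # minimum bits per second required
```

Roughly 133 Kbps is needed — just above a bonded 128 Kbps basic-rate ISDN line, and far below the Ethernet-speed coax runs described above — so the one-minute figure is well within reach on the new network.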

More Speed Bumps
Today's modems can transmit and receive data simultaneously at high speeds, but most BBS software has you wait for a file transfer to finish before continuing. It may take a long time before a reasonably priced digital BBS system appears on the market.

Finally, the phone tariff structure may require the sender of data to pay the bill. Today's analog lines bill the caller for the amount of time spent online, not for data transferred. On a digital system, the BBS operator may be required to pay for the amount of data downloaded by callers.

Entertainment Pays the Bills
One thing is certain. Pacific Bell and its competitors will be beating down our doors to sell us the digital services that will pay for the $16 billion installation. Whether we use our analog car and the analog highway to rent a videotape, or dial up a movie from the digital network, is a choice we may get to make in the next few years.

Page 35 had an ad for the PRiME MERiDiAN BBS.
Page 36 had an ad for the InfoDude Communications BBS.

Page 37 had a full-page ad for PC-TEN

Page 38 (back cover) had a full-page ad for TeleText Communications.

End of Issue 13.