Perhaps a better name would have been Computer Peripheral Interface, for not all SCSI applications involve small computers, nor does the name system necessarily apply. Today SCSI applications range from terabyte jukebox archival data-storage systems to 2.5-inch laptop disk drives. SCSI interfaces can be found on tape drives, CD-ROMS, scanners, printers, disk drives, and hundreds of custom applications. These sub-system peripherals typically form building blocks, rather than complete systems. This provides a flexibility beyond what is possible at the system level. Let's look at the various SCSI building blocks, starting with protocols.
A SCSI circuit consists of a Master and one or more Targets. When two or more SCSI targets are accommodated, the interface is usually referred to as a SCSI bus. The term interface is sometimes used in place of bus when a non-expandable single target is connected, and frequently refers to a Proprietary or Non-Compliant interface.
Maximum flexibility is achieved because SCSI devices can describe their capabilities and parameters to the master and the master's host. Using these descriptions the host can, statically and sometimes dynamically, configure the SCSI components for maximum performance or maximum capability.
Static configuration begins at system boot-up with the bus master. Multiple masters can reside on the bus so the master with the highest address becomes the permanent bus master. Other masters, if present, are known as temporary bus masters. Temporary bus masters are also targets. At power-up or boot time the ranking SCSI master initiates the session by calling roll.
When an address is called by the master, the target device answers with a short message equivalent to present. At this point the master knows only that a device resides at the answering address. Then the master re-interrogates the answering address with a command equivalent to Tell me about yourself. In this way the master learns the characteristics of each target, including whether that target has temporary bus mastering capability. There is no requirement that the targets be similar, so a hard disk and a printer can coexist on the same bus.
The master knows, from its previous interrogation, the characteristics of each device, so it can request that each target use, or not use, any of its features during the session. Multi-threaded operation is fully supported. If one target lacks a capability, that does not prevent the master from using the capability with some other device.
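The roll-call sequence described above can be sketched in a few lines. This is a hedged simulation, not real bus I/O: the addresses, device list, and field names below are invented for illustration, standing in for the TEST UNIT READY and INQUIRY exchanges a real initiator would perform.

```python
# Hypothetical bus population: address -> what an INQUIRY would report.
# On real hardware these answers come over the bus, not from a table.
BUS_DEVICES = {
    0: {"type": "disk", "vendor": "ACME", "can_master": False},
    2: {"type": "printer", "vendor": "ACME", "can_master": False},
    6: {"type": "disk", "vendor": "ACME", "can_master": True},
}

def roll_call(master_address=7):
    """Scan every address, interrogating each device that answers present."""
    session = {}
    for address in range(8):          # an 8-bit-wide SCSI-1 bus has 8 addresses
        if address == master_address:
            continue                  # skip the master's own address
        device = BUS_DEVICES.get(address)
        if device is None:
            continue                  # silence: nothing at this address
        # Re-interrogate: the "Tell me about yourself" step (INQUIRY equivalent)
        session[address] = device
    return session

targets = roll_call()
print(sorted(targets))                # -> [0, 2, 6]
```

After the scan, the master knows which addresses answered and which of them (address 6 here) could serve as a temporary bus master.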
What are SCSI characteristics? Characteristics range from a device's physical parameters, such as type of device, manufacturer, capacity, and serial number, to the repertoire of commands or modes the target understands.
The original specification defined protocols, but did not address other aspects of SCSI in detail. The committee envisioned the mechanical, electrical, and timing aspects of SCSI would pretty much define themselves. Only at one point, the connector interface, was there any attempt at hardware standardization, and this was somewhat ignored by the mainframe manufacturers. The manufacturers were accustomed to defining their own connectors and cabling, so were not concerned with optional SCSI mechanical specifications.
The original SCSI specification called for nine data and nine control signals. With grounds, the minimum number of wires came to about 40. It is not clear if the original specification, extracted from the Shugart specification (SASI), envisioned future expansion. Perhaps it was just handy because 50-wire cables were already used on eight-inch floppy disk drives, or perhaps they just wanted a connector that could not be mistaken for the MFM hard disk connector. Whatever the reason, the selection of a 50-conductor connector and cable provided 10 extra conductors that would tempt some manufacturers to design custom applications, but not enough conductors to support WIDE or DIFFERENTIAL applications.
SCSI's optional pin-outs were a problem for the do-it-yourselfers. Non-mandatory connectors and pin-outs allowed host adapter designers to copy the approach of the mainframers by using connectors and pin-outs that were convenient to their own boards rather than following the optional recommended pin-outs. Unfortunately, this forced the use of adapter cables that, because of their custom pin-outs, were expensive and hard to get. One manufacturer used a DB-25 connector for their external interface, the same connector already used for the XT's serial and parallel ports. This provided endless possibilities for errors.
To overcome the XT's deficiencies, IBM tried a different approach on the AT. It seemed that some of IBM's utilities like FORMAT needed to know the size of the disk. Rather than trust the user to enter the disk capacity directly, the BIOS wanted the drive parameters so it could compute drive capacity. But drive parameters were hard to obtain and sometimes confusing.
IBM chose to treat the symptom rather than the problem by placing drive tables containing drive parameters within the AT BIOS. This decision would bite IBM and its camp followers at almost every turn.
For example, take just one drive parameter, sector size. All modern drives are soft sectored, meaning that sector size can be a variable. What is the benefit of having a SCSI drive that can format eight different sector sizes if the BIOS only understands one size?
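The BIOS-side arithmetic mentioned above is simple, which is what made the drive-table approach tempting. The sketch below shows the capacity computation, under the assumption of a single fixed sector size; the geometry figures are a hypothetical table entry chosen for illustration.

```python
# Capacity as the AT BIOS derived it from its built-in drive tables.
# The BIOS assumes one sector size; a soft-sectored SCSI drive that can
# format other sizes gains nothing from that flexibility here.
BYTES_PER_SECTOR = 512            # the one size the BIOS understands

def drive_capacity(cylinders, heads, sectors_per_track):
    """Capacity in bytes for a classic cylinder/head/sector geometry."""
    return cylinders * heads * sectors_per_track * BYTES_PER_SECTOR

# Hypothetical table entry: 615 cylinders, 4 heads, 17 sectors per track
capacity = drive_capacity(615, 4, 17)
print(capacity)                   # 21411840 bytes, roughly a 20 MB drive
```

Any drive whose real geometry did not match a table entry had to lie about its geometry, which is exactly the kind of workaround the overlay software discussed next provided.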
For instance, if there are three physical hard drives, the BIOS becomes confused because it only understands a maximum of two physical hard drives. In this case an overlay tells the BIOS there are three logical drives, and since the BIOS understands up to twenty-six logical drives, everything proceeds normally.
There were others who argued that just adding commands was not enough; SCSI needed some fixing too. As the mix-or-match IBM clone market emerged, it became clear that better connectors and cables were needed. There also needed to be specifications for terminating those cables, and perhaps a way of making SCSI faster. In turn, a faster SCSI would mean more cable problems.
In 1991, almost six years after the release of the original specification, SCSI-2 was released. (The original SCSI specification was renamed SCSI-1 to distinguish it from SCSI-2.) Differential and serial variants were among the many new features. The specification, actually a collection of specifications, totals over 400 pages, over twice the size of SCSI-1.
The SCSI-1 and SCSI-2 versions differ significantly in their approach and content. SCSI-1 addressed software issues while almost totally ignoring the physical side. SCSI-2 attempted to correct this deficiency by adding cable and connector specifications. It also added the SCSI-WIDE capability with 16 and 32-bit buses.
The SCSI-1 specification suggests a maximum cable length of six meters. Six meters (almost 20 feet) is plenty of cable for desktop machines but a little short for the mainframes, where the disk bays are sometimes 50 feet away from the CPUs. To provide longer cable lengths, SCSI-2 added differential cable specifications. Differential signals are much more robust and therefore may be used at much longer distances.
Membership on a SCSI committee is a difficult and expensive ordeal. It is not something most individuals can do without sponsorship. It should not surprise anyone that the big companies are more heavily represented on the committee. It also should not surprise anyone that politics plays a part in the decision making.
One of the criticisms of SCSI-1 was the overly permissive nature of its optional specifications. SCSI-2 did little to change permissiveness. Six options were removed and four requirements were added. In deference to the large companies already using their own connectors and pin-outs, those pin-outs became the recommended pin-outs.
In deference to the smaller companies doing their own thing, the pin-outs were made optional. SCSI-2 did little to fix the known problems of SCSI-1. There is reason to ask why SCSI-2 was even released. The apparent answer was that, after six years, it was time to release something.
Work on SCSI-3 began almost immediately. As of this writing, SCSI-3 is an unapproved specification. Until it is approved by the accredited standards committee responsible for SCSI (the X3T9 technical committee), the currently approved version remains at the SCSI-2 level. That does not stop companies from using the current working version of SCSI-3 as a design standard. However, companies doing so proceed at the risk that the specification may change before final release.
SCSI-3's 600-plus pages break new ground. Unlike its predecessors, it is a much more restrictive specification. Mandatory connectors and minimum and maximum cable lengths are specified. Also, the electrical characteristics of the cables and the signals driving the cables are specified.
SCSI-3 appears to have solved many of SCSI's lingering problems while creating a few new problems. For instance, it specifies a minimum of 0.3 meter (1 foot) between cable connectors. This may cause some stuffing problems for small boxes. Also, the internal connectors are not retained by screws or clips. A slight vibration or torquing of the cable will cause them to pop out. Unless re-specified, manufacturers will need to use supplemental restraints to keep the cables from bouncing out. Next month, installing SCSI.
Page 14 had ads for the Terminal One and the Weasel Den 2 BBSs.
The long-awaited communications program for Windows has arrived and the wait was worth it. Qmodem, which started nine years ago as a shareware venture by John Friel and the Forbin Project, quickly became a DOS communications program of choice. Fairly recently, Mustang Software (www.mustang.com) acquired Qmodem. They, along with John, gave us Qmodem 5.0, QmodemPro v1.0, and now, QmodemPro for Windows (QMPFW).
QMPFW is all things to all users. I had been a DOS Qmodem user since 1987. I chose to abandon it, because I preferred a communications program for Windows (Qmodem worked fine from a DOS shell in Windows, but was not Windows-oriented). Now that I have QMPFW, I feel like I've returned home.
QMPFW installs quickly and has an intuitive Windows interface. You get a toolbar across the top, and a macro bar and status line on the bottom. For those who prefer a cleaner screen, you can eliminate the two bars. Pressing F1 gives you context sensitive help from anywhere within the program.
QMPFW includes a text editor and a GIF viewer to view a GIF image while downloading it. You can also view GIFs stored on your hard disk. QMPFW even lets you zoom any GIF or BMP file, and copy portions of the pictures to the Windows clipboard.
Each phone book stores up to 4,096 entries, with an unlimited number of phone book files. Each entry in the phone book can have five phone numbers. You can also create groups of people to call. For example, if you call five offices each morning to get a report, you can put these in one group and call all five without having to re-select them each day.
Creating, adding, or deleting numbers or groups is simple. Deleted entries are not automatically removed from the phone book file, even though the entry no longer appears on screen. Using the Pack command (from the File menu) permanently removes these entries.
Converting your old phone book will not be a problem. QMPFW can convert any phone book file from previous versions of Qmodem and QmodemPro. In addition, QMPFW includes a DOS convert icon. This converts ProComm Plus 1.1 & 2.0, Telix 3.1x, and Boyan 5.x phone book files.
Page 17 had a full-page ad for Mustang Software
Page 18 had a full-page ad for the Clark Development Company.
Page 19 had a full-page ad for Delphi Internet (www.delphi.com)
A cross-platform environment lets you write one code set, which may be compiled for both platforms. One method of doing this has been to use a very high-level development environment. Examples of these environments are XVT, Serius, and Prograph. These tools produce two sets of code (or more), such as one for Mac and one for Windows. For the most part, these have been acceptable for in-house MIS systems, but fall short for most mainstream software requirements. Some companies, including Microsoft and Symantec, have built their own cross-platform development environments, but have not made them available to other developers.
"Apple is like a big, fat, sow pig. The sow suckles and nurtures its young, as Apple does its developers. Occasionally, the sow rolls over killing its offspring. The moral is; do not get too close to the sow, lest the sow roll over on you." My apologies, since Mr. Gasse's precise words were, of course, far better than my paraphrased remembrances, but the thoughts, as well as the analogy, were too good not to pass on. Mr. Gasse paid his own way from France to deliver that message to Apple developers.
Developers for the Macintosh have always had a rough time. First, it takes considerably longer to write programs for the Mac. Then you have to compete against Apple, which seems to throw more and more into the Mac System software. Then there is Claris, Apple's commercial software company.
Apple constantly pulls the rug out from under developers with ambitious, new System software, demanding the kind of re-engineering of programs that only large companies can afford. And then you end up with about 1/8 to 1/10th of the PC market.
If that weren't enough to drive you to the IBM PC/clone world, there are the limited 'C' software development platforms for the Mac. These are primarily the under-powered but easier-to-use THINK C (from Symantec), or the cumbersome MPW (from Apple).
A group of MacApp programmers resorted to forming their own support group, MADA. MADA originally stood for "MacApp Developers Association". Recently, MADA decided to disassociate itself from MacApp. If you think that's scary, we haven't even gotten to the part about Bedrock yet!
Symantec urged Apple to use Microsoft's OLE (Object Linking and Embedding) standards for Bedrock. Despite this, Apple adamantly insisted on implementing its own OpenDoc architecture, which is, once again, unfinished.
The word 'proprietary' is one that is being used (more often of late) to describe the Macintosh platform. Late in 1993, the Boeing company decreed that the Macintosh was "proprietary", and was to be phased out. Now that Apple has begun selling the Mac OS, developers often find themselves competing for all but the most vertical of markets. Last year, Roger Heinen, then an Apple VP, declared that "Apple is a software company," shortly before he went to work for another software company: Microsoft.
It started with the Apple Developers program. This required someone to say they were planning to develop an Apple software product sometime in the next 2 years and pay a fee to Apple. Then, they could begin buying hardware directly from Apple, for their own use, for less than a dealer's cost.
These direct sales were extended to educational institutions, Value Added Resellers (VARs), and recently, to businesses. At the same time, Mac pricing has been reduced, lowering dealer's profit margins.
Another cause for alarm came a couple of years ago, when Apple gave HyperCard, formerly System software, to Claris, where it was sold as a commercial product. A new generation of developers had used this high-level development environment (that MacApp should have been) to create software products. As part of the System software, developers had previously been able to license and bundle HyperCard with their products for a mere $100 per year.
Then there was the Xtend debacle, where developers were invited to share their file formats, with the understanding that they would be able to use Xtend with their own products. Claris later decided to keep the technology proprietary.
Now that Apple has begun selling its System software, it has opened the door for Microsoft to become a legitimate threat with its forthcoming cross-platform system software and development environment. While die-hard Mac fans, including your author, expect the Microsoft system software to be a somewhat pale and clunky imitation of Apple's Mac OS, the cross-platform allure may prove overwhelming to developers. Remember that Apple opened the door for Microsoft to get much closer to the Mac OS than it would have previously dared, with the failed lawsuit.
If Microsoft can deliver a cross-platform solution (Cairo) that includes OLE 2, a cross-platform OS, and a development environment, it may establish dominance over both markets, well before OpenDoc and Bedrock get off the ground.
Developers may have to choose between platforms, based on licensing issues, and whose platform is more open, versus whose is more proprietary. Credibility may be the decisive factor for developers. If Apple is to succeed, they will have to make good on the promises of years past; not just continue making new ones.
At issue may well be the future of the Macintosh itself. While Mac developers may continue to develop cross-platform software on the Mac, if the development tools are available, most of their customers may be on the IBM (and clones) PC side. As the new RISC machines begin to take over, the question of "Is it a Mac or is it a PC?" will become "Is it a Microsoft or Apple OS?".
Page 21 had ads for RGB Monitor Repair,
Just Computers (www.justcomp.com),
and the Black Rose and iNFormation Exchange BBSs.
No one disputes Vice President Al Gore's role in promoting data transfer-related technology. However, while Mr. Gore runs around the country extolling the "new" Information Superhighway, a lot of people are looking at him and saying, "New?"
The Superhighway has been here for a long time. The old technology is just getting a spiffy new name. This is not a bad thing, because the new name and new promotion will further the growth of online communications.
A UUCP connection has Internet mail and newsgroup messages spooled to disks at each site. Data packets are sent to the site's server periodically. This can range from once every 5 minutes to once a week.
The most basic "real-time" connection is called a SLIP (Serial Link / Internet Protocol), with connection speeds up to 28,800 bit/s. A better real-time connection type is TCP/IP (Transmission Control Protocol/Internet Protocol), with speeds exceeding 57.6Kbps. A real-time connection can cost a bundle. Expect costs to be about $175 per month, plus startup fees and the considerable software/phone line/hardware investment.
The bulk of Superhighway growth will likely come from local online services. If the government wants to expand the Superhighway, I suggest it work on lowering the connection costs. This way, Sysops would be able to offer real-time services at little or no charge. This would really get people cruising on the Information Superhighway.
The "Computer Technology Superhighway" describes the exponential acceleration of electronic and computer technology that makes things like multimedia and the "Information Superhighway" practical. Data compression, especially hardware-based, is an important part of any networked-multimedia application. Revolutionary advances in disk drives, memory, software, peripherals, and processors have increased the efficiency and potential of most applications and technology based fields.
The Internet has been here for years, but no significant educational courses have been offered through it. A few universities are already accessible through modems, but it costs the same as conventional education. Ideally, the Internet and Interactive TV could be linked together to form a low cost universally accessible educational system.
Interactive Educational TV faces two challenges:
1) Will it be done right? We have had educational TV shows for a long time. There are already programs where you can earn college credits. To date, these programs typically offer college credits only for lower level elective subjects.
Why not be able to earn a college degree by participating with Interactive TV? What better (and more cost effective) way is there to share the best teachers with the maximum number of students? Being self-paced, Interactive TV could maximize the educational benefits for both the brightest and weakest students. Also, imagine the savings in time, money, and environmental wear. Of course, human interaction and exchange is necessary for education, particularly for pre-adults. Perhaps the time spent in conventional schools could be reduced while the quality of that time is increased.
2) Who will pay for educational Interactive TV? Those who create quality entertainment usually get rewarded for their efforts. The reward for creating educational products is more elusive. The difference between educational and entertainment software is similar to the difference between candy and vegetables. Like vegetables, education does not give instant gratification.
Since the government regulates education, perhaps they can meet the challenge of organizing and implementing a quality Interactive TV education system for the masses. The education should be real and meaningful, replacing some classroom time.
Improved transmission line speeds will not cause a revolution. Rather, the improved speeds will have an evolutionary effect. To put it in perspective, BBSs are still used for much the same purposes with both 28.8K and 1200 bit/s modems. An ISDN connection to your house is not that much faster than today's 28.8K modems.
Pages 22 through 34 had detailed listings of Bay Area BBSs.
Page 22 had an ad for the Bust Out BBS.
Page 27 had ads for the UFO BBS, Atlas BBS/Internet Service (www.gilroy.com), and UNIROM (www.unirom.com).
Pacific Bell announced last November that it would spend $16 billion to begin building a portion of (proponent) Al Gore's data superhighway in Silicon Valley and other metropolitan areas of California.
Pac Bell plans to replace all analog phone equipment, from the connection box on the side of your house to the switching equipment in the local phone offices. In the process, Pac Bell plans to eliminate phone line noise by taking copper wire out of the equation and replacing it with coaxial line.
These huge, diverse networks rely on connection services offered by the phone companies. The Internet would not exist without the services of the phone companies that carry the data. If your business wants a live connection to the Internet, you have to install and pay the monthly charge for a digital phone line (typically 56 Kbps) that connects your Internet node computer to your local Internet service provider's computer. These services are expensive.
Under Pac Bell's plan, each home would have access to digital phone lines with much greater capacity than 56-Kbps lines. Pricing for digital access to network services could drop drastically for business, but don't expect home access to come cheap.
Pac Bell is rushing to capitalize on the surge toward delivered digital services. The competition is stiff, with cable companies stringing fiber optic lines that can deliver 500 channels of television to each home.
All voice or modem-generated analog signals from your house or office will be converted into small packets of digital data. These packets are delivered to their destination based on the address embedded into each packet. At the destination, the packets are assembled and converted back into analog information. Other network components bring digital to the doorstep.
Fiber-optic cable will run from the digital switch at the center, out to the neighborhoods. The fiber will terminate in the neighborhoods at Host Digital Terminals (HDTs). The HDTs collect signals from each home, combine them, and send them on to the central digital switch.
Each HDT and fiber-optic cable can serve as many as 500 homes. A single coaxial cable will run from each home to their local HDT. The cable has the capacity to carry data at ethernet (10 megabits/second) speeds. The bandwidth capacity of the coax cable will range from 50 - 750 MHz from the local switch to your house, and 5 - 40 MHz in the opposite direction.
The bandwidth is allocated in a lopsided fashion in order to provide near-broadcast-quality television signals (at 60 frames per second) to each house. This means you'll be able to send out data at speeds as fast as a local area network, but your ability to transmit video will be extremely limited.
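The lopsidedness is easy to quantify from the figures quoted above (50-750 MHz toward the house, 5-40 MHz back). MHz of coax spectrum is not the same thing as bits per second, but the ratio shows how strongly the design favors delivery to the home.

```python
# Back-of-the-envelope split of the coax spectrum described in the text.
# These are the quoted frequency bands, not measured throughput figures.

downstream_mhz = 750 - 50   # 700 MHz of spectrum toward the house
upstream_mhz = 40 - 5       # 35 MHz of spectrum back toward the HDT

print(downstream_mhz / upstream_mhz)   # 20.0 -- downstream gets 20x the spectrum
```

A 20-to-1 split is fine for piping television into the house, but it is why sending video back out is the weak direction.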
Mounted on each house will be a device called, in phone company jargon, a Network Interface Unit (NIU). The NIU converts data received from the HDT and sends it over the home's copper wire, or through coax to a box on top of the television. The NIU converts digitized voice data into the analog signals that your current phone, television, and modem use. The NIU also monitors network quality.
Because your analog signals will be converted to digital information at your house, line noise could become a thing of the past. Digital information is less susceptible to noise because if a few packets fail to arrive, they will be sent again.
A step up to all-digital equipment in the home will prove costly. You won't be able to connect a cheap ethernet card to the phone network and expect it to work. You'll have to subscribe to a digital service, like ISDN (Integrated Services Digital Network). You will also have to install an ISDN terminal adapter in your computer to take advantage of the service. A Hayes ISDN terminal adapter for a PC lists for $1,395.
Furthermore, you might find yourself billed not for access time, but for the amount of data you transmit.
Finally, the phone tariff structure may require the sender of data to pay the bill. Today's analog lines bill the caller for the amount of time spent online, not for data transferred. On a digital system, the BBS operator may be required to pay for the amount of data downloaded by callers.
Page 35 had an ad for the PRiME MERiDiAN BBS.
Page 36 had an ad for the InfoDude Communications BBS.
Page 37 had a full-page ad for PC-TEN
Page 38 (back cover) had a full-page ad for
End of Issue 13. Go back, or to Issue 14, or to Mark's home page.