Monday, Mar. 28, 1988

Fast and Smart

By Philip Elmer-DeWitt

The computer at the University of Illinois is simulating something that no one saw: the evolution of the universe in the aftermath of the Big Bang. Re-creating conditions that may have prevailed billions of years ago, the computer reveals on a remote screen how massive clouds of subatomic particles, tugged by their own gravity, might have coalesced into filaments and flattened disks. The vivid reds, greens and blues of the shapes are not merely decorative but represent the various densities of the first large structures as they emerged from primordial chaos in the near vacuum of space.

At the Massachusetts Institute of Technology, another computer is struggling to learn what any three-year-old child already knows: the difference between a cup and a saucer. What the youngster sees at a glance, the computer must be taught, painstakingly, one step at a time. First it must comprehend the concept of an object, a physical thing distinguished from the space around it by edges and surfaces. Then it must grasp the essential attributes of cupness: the handle, the leakproof central cavity, the stable base. Finally, it must deal with the exceptions, like the foam-plastic cup whose heat-insulating properties are so good that it does not need a handle.
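
To make that kind of step-by-step instruction more concrete, here is a toy sketch in Python -- not MIT's actual program, and with attribute names invented purely for illustration -- of rule-based reasoning about "cupness," including the handle exception for insulating foam cups.

    # A toy rule-based test for "cupness" (illustrative only; the attribute
    # names are invented, and this is not the MIT program described above).
    def is_cup(obj: dict) -> bool:
        # Steps 1 and 2: the object must hold liquid and stand on a stable base.
        if not (obj.get("holds_liquid") and obj.get("stable_base")):
            return False
        # Step 3, the exception: a well-insulated foam cup may lack a handle.
        return obj.get("has_handle", False) or obj.get("insulating", False)

    print(is_cup({"holds_liquid": True, "stable_base": True, "has_handle": True}))  # ceramic cup -> True
    print(is_cup({"holds_liquid": True, "stable_base": True, "insulating": True}))  # foam cup -> True
    print(is_cup({"holds_liquid": False, "stable_base": True}))                     # saucer -> False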

These experiments illustrate the paradox at the heart of today's computer science. The most powerful computing machines -- giant number crunchers possessed of speed and storage capacities beyond human comprehension -- are essentially dumb brutes with no more intellectual depth than a light bulb. At the other extreme are computers that have begun to exhibit the first glimmers of human-like reasoning, but only within the confines of narrowly defined tasks.

For 40 years scientists have labored to make headway at these two frontiers of computer research. One group, working with the lightning-fast machines known as supercomputers, is always pushing for more raw power, more blazing speed. The other group, writing programs that show the rudiments of artificial intelligence, explores the mysteries of human thought. Each of these two grand scientific enterprises, backed by billions of research dollars and blessed with some of the century's best minds, has proceeded as if the other did not exist.

But there are signs that the two broad avenues of computer research may be starting to converge, that today's most advanced machines may someday evolve into electronic brains that are not just incredibly fast but smart as well. The quest has been taken up by almost every major nation. And no wonder: the potential rewards -- in industrial productivity, scientific research and national security -- are staggering. Grown men glow with childlike excitement when they describe robots that will see their way around a factory, typewriters that will take dictation, defense systems that will make the world safe from nuclear arms.

The two fields of computer research are at different stages in their life cycles. Artificial intelligence is just getting started: the first commercial projects appeared less than five years ago, and are now finding widespread application (see following story). The supercomputer manufacturers, on the other hand, having supplied high-speed processors to government labs and intelligence agencies for a quarter-century, are now experiencing a growth so explosive that it has taken even the most optimistic industry leaders by surprise. Sales of the machines, which cost $5 million to $25 million each, have increased 25% a year or more over the past decade, and in 1988 will pass the $1 billion-a-year mark for the first time.

Some 300 supercomputers now work at tasks as diverse as ferreting out oil deposits, analyzing muscle structures and creating special effects for Hollywood films. With the spread of supercomputer networks, high-speed computing power is available to anyone with a personal computer and a telephone hookup. "The world will never be the same," says Doyle Knight, director of the John von Neumann National Computer Center in Princeton, N.J. "Soon every industry, every science, every walk of life will in some way be touched by supercomputing."

Speed and power are what distinguish supercomputers from their humbler cousins. In the early days of the industry, speed was measured in thousands of FLOPS, an acronym for floating-point operations per second -- the basic arithmetic steps performed on very large and very small numbers by letting the decimal point "float." Today's largest machines are measured in gigaFLOPS, or billions of operations a second. Tomorrow's will be measured in teraFLOPS, trillions of operations a second. A single supercomputer going at teraFLOPS speed will have the power of 10 million personal computers working at full throttle.
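
To put those scales in perspective, the short Python calculation below -- an illustration added here, not part of the original story, with a made-up one-quadrillion-operation workload and an assumed 100,000-operations-per-second personal computer -- shows how run times shrink from kiloFLOPS to gigaFLOPS to teraFLOPS, and checks the 10-million-PC comparison.

    # Back-of-the-envelope timing at the speed scales named above.
    # The workload size and the PC speed are illustrative assumptions.
    KILO, GIGA, TERA = 1e3, 1e9, 1e12
    workload = 1e15  # hypothetical job: one quadrillion floating-point operations

    machines = [
        ("personal computer (assumed 100 kiloFLOPS)", 100 * KILO),
        ("gigaFLOPS supercomputer (1 billion ops/sec)", 1 * GIGA),
        ("teraFLOPS supercomputer (1 trillion ops/sec)", 1 * TERA),
    ]
    for name, flops in machines:
        seconds = workload / flops
        print(f"{name}: {seconds:,.0f} seconds ({seconds / 86400:,.1f} days)")

    # The 10-million-PC comparison holds if each PC sustains about 100,000 ops/sec:
    print(TERA / 10_000_000)  # -> 100000.0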

The most powerful supercomputers are surprisingly small and sleek, some not much bigger than a California hot tub. But looks can be deceiving. Supercomputers often squeeze out the last bit of processing speed by shrinking the distances electrons have to travel within their wiring. They are tightly packed workhorses that require a whole array of supporting equipment. Some employ full-size mainframe computers just to shuttle programs in and out of their processing units. The machines may be connected, by cable or satellite, to hundreds of remote terminals that can transform raw numerical output into stunning 3-D graphics. They often need industrial-size refrigeration units to keep the rush of electronic signals within them from melting down their circuitry. The thermal output of the University of Minnesota's supercomputers is used to heat a garage.

For most of the supercomputer era, the market for the most powerful machines has been dominated by one firm, Cray Research of Minneapolis. With 178 of its distinctive C-shaped models installed around the world, Cray accounts for 60% of all the supercomputers sold. The closest competitor, located directly across the Mississippi River in St. Paul, is the company from which Cray split off in 1972: Control Data Corp. CDC, which in 1983 created a supercomputer subsidiary called ETA Systems, is holding steady with a 12.7% market share. Coming up quickly is a trio of Japanese manufacturers -- NEC, Hitachi and Fujitsu -- that entered the supercomputer race in 1983 and has since captured 23% of the world market.

But this tidy pie chart may soon be upset by the surprise entry of a new player that for the past two decades has been most conspicuous by its absence from the supercomputer market: IBM. In December the largest computer manufacturer (1987 sales: $54.2 billion) announced that it had struck a deal with Steve Chen, one of the foremost supercomputer designers, who jolted the computer world last September by suddenly leaving his post as a vice president at Cray. With financial aid from IBM, Chen has set up his own company to develop a machine 100 times as fast as any currently on the market. "People say that IBM is just dipping its toes into the water," notes Irving Wladawsky-Berger, an IBM vice president. "We're in the middle of the ocean."

IBM has not only taken the plunge but has also put its prestige and enormous resources behind a radical kind of supercomputer that represents a dramatic break from the past. Since World War II, most computers have been designed to do things one step at a time, moving data in and out of a single high-speed processor. The computer Chen is building with IBM's backing will contain not one but 64 processors, all operating at the same time, in parallel, and thus significantly cutting down computing time. IBM's decision to support a major parallel-processing supercomputer project is a sign that technology is headed in that direction. Says H.T. Kung, computer scientist at Carnegie-Mellon University: "In one move, IBM legitimized two technologies: supercomputing and parallel processing." AT&T Bell Laboratories is expected to introduce a new parallel-processing computer at the American Physical Society meeting in New Orleans this week.

Cray, IBM and AT&T could be upstaged, however, by a determined gang of innovative computer designers who have already moved beyond 64 processing units to build machines that divide their work among hundreds, even thousands of processors. Last week scientists at Sandia National Laboratories in Albuquerque announced that they have coaxed a 1,024-processor computer into solving several problems more than 1,000 times as fast as a single-processor machine acting alone, an unprecedented speedup that suggests the performance of supercomputers may in the future be related almost directly to the number of processors they employ.
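
The article does not spell out the arithmetic behind that suggestion, but the standard way to reason about it is Amdahl's law (not named in the story), which says that whatever fraction of a program cannot be parallelized quickly caps the overall gain. The Python sketch below, offered only as an illustration, shows why a better-than-1,000-fold speedup on 1,024 processors implies that virtually the entire computation must run in parallel.

    # Amdahl's law: if a fraction p of the work can be spread across n processors
    # and the rest must run serially, the best possible overall speedup is
    # 1 / ((1 - p) + p / n). (Illustrative; the formula is not from the article.)
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.99, 0.999, 0.99999):
        print(f"parallel fraction {p}: speedup on 1,024 processors = "
              f"{amdahl_speedup(p, 1024):,.0f}x")
    # On 1,024 processors, even 99.9% parallel code reaches only about 500x;
    # a better-than-1,000x result means the serial portion is vanishingly small.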

Much supercomputing research is funded by the U.S. Government, whose appetite for high-speed, number-crunching power for both defense and intelligence uses seems boundless. Last year the Pentagon spent hundreds of millions of dollars trying to step up the speed of the fastest machines. One Government project that has a special need for supercomputing power is the national aerospace plane, a high-altitude aircraft intended to carry military and civilian cargo at up to 25 times the speed of sound. Since there are no wind tunnels capable of simulating such blistering airspeeds, the hypersonic plane will have to be tested on supercomputers, ideally on machines many times as powerful as existing models. Presidential Science Adviser William Graham has recommended that Congress appropriate an additional $1.7 billion to support the development of parallel-processing supercomputers that by the mid-1990s could crunch data at teraFLOPS speed.

The military-intelligence connection is nothing new for supercomputer manufacturers. One of the first Crays to come off the assembly line in 1976 was shipped to the Lawrence Livermore National Laboratory, where it made short work of the mind-boggling mathematical equations required to design hydrogen bombs. Another early Cray was no doubt delivered to the National Security Agency in Fort Meade, Md., where it would have been put to work cracking military codes and sorting through the intelligence data that flood into the agency every day.

What is new is the rapidly growing appetite for supercomputer power in the private sector. In a classic case of a technology developed for a few specialized purposes finding application in all sorts of unexpected areas, supercomputing has spread from one industry to another like a benevolent virus. Semiconductor manufacturers use supercomputers to design ways to squeeze more transistors into a square-centimeter chip of silicon. Financial advisers use them to devise investment strategies of dizzying complexity. Biochemists need them to predict which molecules are worth testing as new medicines. Engineers rely on them to design new cars, jet engines, light bulbs, sailboats, refrigerators and artificial limbs.

No one benefits more from supercomputing than research scientists. The National Science Foundation belatedly recognized that fact in 1985, when it committed itself to spending more than $200 million to create supercomputer centers at five selected sites, plus the electronic links to connect the machines to dozens of universities and research labs. Today some 6,000 scientists at more than 200 institutions have access to the NSF centers. This availability has sparked a burst of scientific productivity in fields ranging from mathematics to fluid dynamics. Says Ron Bailey, chief of the Numerical Aerodynamic Simulation program at the NASA Ames Research Center: "Supercomputers are as significant to pioneering research today as calculus was to Newton."

Supercomputers are giving scientists unprecedented access to hidden worlds both large and small. Using the prodigious power of the Cray at the San Diego Supercomputer Center, Researchers Mark Ellisman and Stephen Young are studying a pair of noodle-like structures in the brains of Alzheimer's victims that scientists think may be a cause of premature dementia. Northwestern University Professor Arthur Freeman used a Cray-2 to produce a stunning portrait of the atomic structure of a new superconductor that carries an electric current freely at -283 degrees F. The Cray X-MP at the University of Illinois has produced a dazzling array of colorful animations, from the roiling birth of a tornado to the supersonic fountains that spew forth from black holes at the centers of galaxies. Says Nobel Physicist Kenneth Wilson of Cornell University: "An astronomer with a telescope can observe the universe over a period of 50 years. But an astrophysicist with a supercomputer can 'see' billions of years into the past and the future."

Yet for all the miracles supercomputers have made possible, their users are still not satisfied. Computer Architect Neil Lincoln jokes that the real definition of a supercomputer is a machine that is just one generation behind the problems it is asked to solve. Norman Morse, head of computations at Los Alamos National Laboratory, has eleven supercomputers at his disposal but still cannot please his weapons designers and other scientists. Says he: "We already have jobs right now that require a machine 100 times as fast as anything we have."

The race to build those faster supercomputers is well under way. In dozens of laboratories in the U.S., Europe and Japan, millions of dollars are being spent to support the efforts of hundreds of engineers and scientists, all driven by the dream of building the world's most powerful computing machine. If any one team can be said to have the head start, it is the small, tightly knit group of technicians working in an industrial park in Chippewa Falls, Wis., where Cray Research has its most important laboratories.

Chippewa Falls (pop. 13,000) enjoys a local reputation for its Leinenkugel's beer and Chippewa Springs water. But it is known around the world as the home of one of the most influential and enigmatic figures in computer science: Seymour Cray. A shy, solitary engineer who rarely gives press interviews, Cray, 62, is to supercomputers what Edison was to light bulbs or Bell to the telephone. First as a co-founder of Control Data, then for his own firm, Cray has designed an extraordinary series of high-performance machines, including the CDC 1604 (1960), CDC 6600 (1964), CDC 7600 (1969), Cray-1 (1976) and Cray-2 (1985), each of which could at the time lay claim to being the world's most powerful computer.

In 1981 Cray stepped down as chairman of the company and became a "consultant," but that only gave him more time to focus on computer design. He is now completing plans for his next machine, the Cray-3 (due to be released in 1989), and is soon expected to focus on its successor, the Cray-4, with a single-mindedness that is legendary. "Seymour has the ability to concentrate on his work to the wholesale exclusion of everything else," says James Thornton, a former engineering colleague at CDC. "He captures the universe of what he's going to design inside his head, and there he stays until he's through."

Technologically, Cray shows no signs of losing his innovative touch. The Cray-3 is expected to be the first commercial computer to use chips made of gallium arsenide as well as the usual silicon. Electrons travel up to ten times as fast through gallium arsenide as they do through silicon, and although the material is more difficult and costly to work with, Cray has determined that the gain in speed will justify the added expense. Recognizing the growing importance of parallel processing, Cray is planning to give his most advanced model 64 processors, instead of the four in the Cray-2 and the 16 that will go into the Cray-3. Yet Cray is careful not to move too far, too fast. "The concept of stride is very important in developing computers," Cray told a group of customers last fall. "If you take a stride that is too large, you get bruised. If you take a step in one dimension, you better be careful about taking a step in another, or the step may get too long."

Those comments were taken to be a pointed reference to the work of Steve Chen, 44, who left Cray Research abruptly when the company refused to go along with his plan to build an ambitious new machine. By the time he walked out the door, Chen was already a star in the supercomputer field. Born in China, he grew up in Taiwan, moved to the U.S., studied electrical engineering at Villanova and got his doctorate at the University of Illinois in Champaign/Urbana. When he came to Cray Research in 1979, the company's officers thought they had found Cray's successor: someone as brilliant and dedicated as the master himself. Chen certainly aspired to be in the same class as Cray. Says he, with characteristic modesty: "There are only a few people crazy enough to do this all the time."

Chen quickly proved himself by reconfiguring the Cray-1 as a two-processor machine. The resulting computer, the Cray X-MP, became the best-selling supercomputer of all time, with more than 120 installed. Chen also designed the newly introduced Cray Y-MP, which the company hopes will match the commercial success of the X-MP. But Chen's drive to build ever more powerful computers brought him into conflict with Seymour Cray. The problem, according to Gary Smaby, a vice president at the Piper, Jaffray & Hopwood investment firm in Minneapolis, was not jealousy or a clash of personalities but contrasting technological styles. Cray's genius has been to get the most out of existing technology with a tight budget and skeleton staff. Chen took a "team approach," hiring a staff of 200 and encouraging them to push the state of the art wherever possible. In the eyes of management, Chen's proposed machine, the Cray MP, would have involved risk on five different technological fronts, including the limited use of fiber-optic cables to send some streams of data with beams of light rather than electrons. When projected costs hit $100 million, more than double the original budget, Cray Chairman John Rollwagen backed away and canceled the project outright, forcing Chen's resignation.

About 45 members of his research team at Cray defected along with Chen and set up shop twelve miles away in Eau Claire, Wis. Within three months Chen lined up the financial commitment from IBM, estimated at $10 million to $45 million. "We know what it takes to nurture visionaries," says IBM's Wladawsky-Berger. "We want Chen to swing for the fences." And that is what he intends to do. Says Chen: "Five years from now we should be at 100 billion FLOPS. A problem that takes three months to do now, we want to do in a day."

IBM is not relying solely on Chen, however. As the supercomputer market reaches the magic $1 billion-a-year figure that has traditionally been the company's threshold of interest, IBM has at least six different supercomputer efforts under way, although some are primarily research projects. One experiment involves a special-purpose computer called GF-11 that fills an entire 500-sq.-ft. room. Another computer, called RP-3, will consist of eight 8-ft. cubes arranged like a giant merry-go-round in a 35-ft. ring. But even these machines will be dwarfed by IBM's most ambitious supercomputer, the TF-1, a behemoth whose specifications include 4,000 miles of internal wiring, 33,000 high-speed processing units and a single switching device measuring 80 ft. in diameter. When completed, the TF-1 should be capable of top speeds 2,000 times as fast as today's supercomputers.

IBM's real concern in the supercomputer market may be not Cray Research but Hitachi, Fujitsu and NEC. With their first generation of supercomputers, the Japanese made clear their intention to wipe out America's 25-year lead. Today their fastest machines compare favorably with any supercomputer made in the U.S. In some applications they outperform the most advanced U.S. models. During a test comparing the newest single-processor Hitachi S-820/80 and a two-processor Cray X-MP, the Hitachi machine beat the Cray by about 10 to 1. Says Yukihiko Karaki, a professor at Senshu University in Tokyo: "Looking at these figures, one might say that Japanese users can do without Cray supercomputers."

To date the Japanese have concentrated on speeding up the performance of their fastest processing chips. As a result, they now make the world's most powerful single-processor supercomputers. But they have not, so far, begun linking large numbers of individual processors together. It is there, in parallel processing, that the U.S. still has the edge over the Japanese. A handful of small American manufacturers, including Bolt Beranek and Newman, NCUBE and Ametek Computer Research, have already started marketing parallel machines that can zip through equations at such blistering speeds that they threaten to put conventional supercomputers on the endangered list.

The sticking point with parallelism, however, is the software. Tens of thousands of man-years have been put into writing programs for traditional supercomputers. "Going parallel means starting over," says Thomas Nash at the Fermi National Accelerator Laboratory. That is why the news from Sandia last week was so important. It confirmed that there are dramatic increases in speed to be achieved by breaking large problems into small pieces and solving them simultaneously. Says David Kuck, Chen's former professor at the University of Illinois: "What's going to happen in the next decade is that we'll figure out how to make parallelism work."
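
As a minimal illustration of what "breaking large problems into small pieces and solving them simultaneously" looks like in code, the Python sketch below splits one big sum across several worker processes. It uses the standard multiprocessing module purely as a stand-in and makes no claim about how the Sandia codes or any 1988-era supercomputer software were actually written.

    # Split one large computation into independent chunks and solve them at once.
    # (Illustrative sketch only; modern Python, not 1988 supercomputer software.)
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds            # one "piece" of the problem
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        N, pieces = 10_000_000, 8  # "pieces" stands in for the processor count
        step = N // pieces
        chunks = [(k * step, N if k == pieces - 1 else (k + 1) * step)
                  for k in range(pieces)]
        with Pool(pieces) as pool:             # each chunk runs in its own process
            total = sum(pool.map(partial_sum, chunks))
        print(total == sum(i * i for i in range(N)))   # matches the serial answer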

Making parallelism work will benefit not just supercomputer users but also those researchers in computer science's other grand project, artificial intelligence. In fact, one of the most advanced parallel machines, a 65,536-processor computer called the Connection Machine, was built by researchers trained at M.I.T.'s Artificial Intelligence Laboratory. W. Daniel Hillis, the 31-year-old engineer who designed the computer, sees in it the first concrete evidence of what he views as an inevitable convergence of the two fields. "Supercomputing is an enabling technology for artificial intelligence," says Hillis. "Just as you couldn't build an airplane without first developing engines powerful enough to drive them, you can't build artificial intelligence without faster supercomputers."

Far more is at stake than the sale of a few multimillion-dollar machines. The country that leads the world in supercomputers and artificial intelligence will hold the keys to economic and technological development in the 1990s and beyond. Breakthroughs are waiting to be made in fields that range from genetic engineering to particle physics, from automated manufacturing to space exploration. There is even a chance that scientists will use the new computers to understand better the most complex machine of all, the human mind.

With reporting by Thomas McCarroll/New York, J. Madeleine Nash/Minneapolis and Charles Pelton/San Francisco