MAKING UP A MONSTER: Jill Dunbar of Computer Sciences Corp. stands next to racks of SGI servers that make up the huge Pleiades supercomputer in the NASA Advanced Supercomputing Facility at the NASA Ames Research Center at Moffett Field in Mountain View, California.

By Pete Carey

Silicon Valley is famed for spawning the desktop, mobile and cloud computing revolutions. What is less well known is that it’s one of the nerve centres for building the world’s fastest number-crunchers.
Once confined to big national laboratories, supercomputers are now in demand to crunch massive amounts of data for industries such as oil exploration, finance and online sales.
The valley’s strong hand in that business was highlighted in April when Intel landed the prime contract to design a $200 million supercomputer named Aurora to be housed at the Argonne National Laboratory in Illinois.
Aurora, developed in partnership with Cray of Seattle, will likely become the world’s fastest supercomputer when it goes online in 2018. With Aurora’s new architecture, the Santa Clara chip company appears to be taking aim at a bigger slice of what will soon be a $15 billion to $20 billion commercial market for “high-performance” computers that can give a company a competitive edge.
“Right now in the oil and gas industry, there’s an arms race to see who can get the biggest supercomputer,” said analyst Steve Conway of the research firm International Data Corp., citing one industry that has been especially aggressive in buying the machines.
Energy companies use supercomputers to pinpoint oil deposits. Car companies use them to crash virtual cars in safety tests. Procter & Gamble uses high-performance computing to design detergents, shampoos and even potato chips. Supercomputer maker Cray says “at least one” Major League Baseball team uses one of its machines to see, among other things, how batters fare against different types of pitchers.
Among the valley’s supercomputer makers or component suppliers:
Silicon Graphics International in Milpitas makes systems for national laboratories — there’s one at the NASA Ames Research Center in Mountain View — and private industry, including PayPal. In one case, a query that took 14 minutes to run on an older system was accomplished in less than a second on an SGI supercomputer, according to Ryan Quick, principal architect in advanced technologies at PayPal.
“One of SGI’s big initiatives is to leverage much of the experimental work we’ve done in developing the biggest, baddest systems out there and democratising that into a package well-suited for the enterprise world,” said Brian Freed, vice president of in-memory architecture.
Intel already supplies the chips for close to 95 percent of all high-performance machines, a category that includes supercomputers and their slightly less powerful cousins, according to Intersect360, a Sunnyvale consulting firm. Intel benefited in the mid-1990s when system makers shifted to industry-standard parts such as its processors, according to Addison Snell of Intersect360.
Hewlett-Packard ranks just ahead of IBM as a supplier of the world’s fastest supercomputers, according to a Top 500 list maintained by Berkeley researchers. It had 35 percent of the revenue from high-performance systems sold last year, according to International Data Corp.
Nvidia, in Santa Clara, supplies accelerator chips — something like the turbocharger in a car — used in a growing number of supercomputers. Its “Tesla” processors are used in machines at Google, Facebook and Baidu for speech, video and image recognition. With $279 million in high-performance computing revenue last year, “we are just beginning this era,” said Sumit Gupta, Nvidia’s general manager of the Tesla accelerated computing business.
AMD, headquartered in Sunnyvale, makes the Opteron processors that power the Titan supercomputer at Oak Ridge National Laboratory in Tennessee and are also used in Cray’s powerful XE6 supercomputer.
Mellanox Technologies, in Sunnyvale, makes leading-edge networking gear for high-performance computers and is teaming up with IBM and Nvidia to build two new supercomputers for the US Department of Energy’s labs at Oak Ridge and in Livermore, California.
Think of a supercomputer as a cluster of tens of thousands of Mac workstations performing together like a symphony orchestra to process billions and trillions of bits of data every second, sometimes for hundreds of users.
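For a rough sense of how that orchestra works, here is a purely illustrative sketch in Python. The standard multiprocessing module stands in for the thousands of networked nodes, and the specialised software such as MPI, that a real machine like Pleiades relies on; the numbers and the calculation are made up.

```python
# Purely illustrative: worker processes stand in for cluster nodes.
# A real supercomputer scatters work over a dedicated high-speed network
# (typically with MPI), but the scatter-compute-gather pattern is the same.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" crunches its own slice of the data independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))      # the full problem (made-up numbers)
    n_workers = 8                      # a real cluster has tens of thousands of nodes
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        partial = pool.map(partial_sum, chunks)   # run the pieces in parallel

    print(sum(partial))                # combine the partial answers
```

The same divide, compute and combine rhythm, carried out over a dedicated network at vastly larger scale, is what lets a machine keep hundreds of users and trillions of operations moving at once.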
Supercomputer prices run from $500,000 to more than $100 million. Some are general-purpose machines that can perform tasks like 3-D modelling while hosting large numbers of users at the same time. A second type is dedicated to a single task, such as running a cloud-based service.
“A whole class of things start to become practical as the cost of computing drops,” said Alan Gara, an Intel fellow at the giant chip firm’s Santa Clara headquarters and lead system architect for the Aurora system.
“There used to be a few hundred supercomputers sold in the world each year because the prices were so high — $10 million and up,” said analyst Conway of IDC. But prices have fallen so sharply for powerful machines that “these days, companies and small organisations that wouldn’t think of getting one before can do so.” Nowadays, thousands of high-performance machines are sold every year.
Since they can provide an edge over competitors, some companies don’t want their supercomputers publicised.
“Some of the bigger companies really depend upon supercomputers to do a lot of their work,” said Bill Mannel, head of HP’s Apollo server team in Houston, who added that “whether it’s the oil, auto or aircraft industry, it’s become such a core part of their development that they won’t even share details of how much they have or who they buy from.”
Most companies with valley headquarters have far-flung research and manufacturing sites, but this area is a key centre of innovation, said Scot Schultz of Mellanox Technologies. “I think it’s largely because most of the core tech providers have a presence here and have been a part of the Bay Area community here for so long.”
Google — whose operations require massive amounts of computing power — is pushing the frontiers in a collaboration on a quantum computer project with NASA Ames and the Universities Space Research Association. The hope is to develop a radically different computer that, in theory, could solve in a few days problems that would take today’s computers “millions of years.”
“If you thought IBM’s Watson on Jeopardy was impressive,” Conway said, “where things are headed will totally leave it in the dust.” — San Jose Mercury News/TNS

