Ranger goes.. ZOOM!

by Dustin Puryear

Ooh, nifty! Sun and the Texas Advanced Computing Center at the University of Texas have released one of the "fastest supercomputers in the world", at least according to Sun. Hmm, so 500 teraflops of computing power. That's pretty darn fast. Sure, not as fast as my dual-core laptop, but... well, okay, maybe that fast, at least when my AV is scanning a Word doc before it opens. (Is it just me, or does AV tend to make your computer seem SLOW at times?)

Anyway, this got me thinking about where Microsoft is with High Performance Computing (HPC). Historically, HPC has always been in the realm of the traditional Cray-style supercomputer and, more recently, big, powerful, and distributed UNIX clusters. Microsoft has been kind-of sort-of dipping its toes into the HPC realm, but there’s certainly no concerted effort. There are at least two reasons for this that I can see:

HPC is unfamiliar territory for Microsoft. HPC is, without qualification, an entirely new market for Microsoft. I'm not even sure they have a business model for it.
Microsoft is unfamiliar territory for HPC. In other words, there's no history of HPC users working with the Windows platform. If you've ever looked at code that runs in these types of environments, you'll see heavy reliance on libraries and utilities designed to distribute load across a cluster of servers, or code that is very intelligent about using the several hundred processors on the supercomputer. I'm curious: how many of those libraries have been ported to Windows?

Oh, and one final bullet point:

Microsoft is about software, not hardware. As far as I know, vendors that implement HPC sell hardware. On the Sun end, there is... well, Sun. For Linux we have hardware vendors like Linux NetworX. Who’s pushing this with Windows?
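To make the second point a bit more concrete, HPC code is typically built around a scatter/compute/gather pattern: split the work across nodes, let each node crunch its piece, then combine the partial results. Real HPC codes do this with message-passing libraries like MPI; as a toy illustration only, here's that same pattern sketched with Python's standard multiprocessing module standing in for the cluster (the function names are mine, not from any HPC library):

```python
# Toy sketch of the scatter/compute/gather pattern that HPC libraries
# (e.g., MPI) provide. Python's multiprocessing stands in for a cluster
# here purely for illustration.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" computes its share of the work.
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(n, workers=4):
    data = list(range(n))
    # Scatter: split the input into one chunk per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)  # compute in parallel
    # Gather/reduce: combine the partial results.
    return sum(partials)

if __name__ == "__main__":
    print(distributed_sum_of_squares(1000))
```

The hard part on a real supercomputer isn't this pattern itself but the library plumbing underneath it (interconnects, process launch, fault handling), and that's exactly the software stack that has historically lived on UNIX, not Windows.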

Should I be expecting a 2000-node Windows HPC cluster anytime soon?