Microsoft Windows Now Scales Up and Out with a Vengeance

The latest versions of Windows allow for far greater scalability. Here's why.

There are basically three ways to improve the compute performance of an IT platform:

  • Increase the clock speed of the processor.
  • Increase the number of processors--"scale up."
  • Increase the number of computers (processor/s + memory pairs)--"scale out."

The above, of course, assumes that there is sufficient memory (RAM) to keep the processor/s busy, and sufficient I/O bandwidth to keep memory full. The former is the reason that the next release of Windows Server (Windows Server 2008 R2) will be available only in 64-bit versions: a 32-bit address space limits how much RAM the OS can readily exploit.

As we discussed in an earlier posting (A Subtle Change to Microsoft Server Pricing), increasing clock speed is no longer a practical possibility, because doing so requires too much power and generates too much heat.

Processor manufacturers have responded to this limitation by increasing the number of processor cores that fit on a single chip (scaling up), and this is a trend that will continue. Hardware system manufacturers have responded by increasing the number of processor chips (CPU packages) that can be plugged into a system (also scaling up). This is relatively cheap to achieve when only a small number (2-4) of processor chips plug directly into a motherboard, but becomes much more expensive once a larger number of processor chips must be supported.

Unfortunately, working out how to support a large number of processors at the hardware level, and how to keep contention for memory (RAM) from massively degrading performance, addresses only part of the problem. Unless the OS can keep a large number of threads active, the expense of assembling systems with many processors is wasted. Historically, OSs have had difficulty doing this, largely because of their need to serialize access to shared data structures. It is often the case that a large number of processors sit idle, waiting for access to a dispatcher (thread-to-processor binder) data structure.
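
To make that contention concrete, here is a minimal simulation, purely illustrative and with all names and figures (the run queue, the 16 simulated CPUs) invented for the sketch. It models the classic design in which one global lock guards the dispatcher's run queue, so every processor serializes on the same mutex just to pick its next thread:

    package main

    import (
        "fmt"
        "sync"
    )

    // dispatcher models a classic single-lock design: one global run
    // queue, one mutex. Every simulated processor must take the same
    // lock to pick its next thread.
    type dispatcher struct {
        mu       sync.Mutex
        runQueue []int // IDs of runnable threads
    }

    // next pops the next runnable thread, serializing all CPUs on one lock.
    func (d *dispatcher) next() (int, bool) {
        d.mu.Lock()
        defer d.mu.Unlock()
        if len(d.runQueue) == 0 {
            return 0, false
        }
        id := d.runQueue[0]
        d.runQueue = d.runQueue[1:]
        return id, true
    }

    func main() {
        d := &dispatcher{}
        for i := 0; i < 10000; i++ {
            d.runQueue = append(d.runQueue, i)
        }

        const cpus = 16 // simulated processors
        counts := make([]int, cpus)
        var wg sync.WaitGroup
        for p := 0; p < cpus; p++ {
            wg.Add(1)
            go func(p int) {
                defer wg.Done()
                for {
                    if _, ok := d.next(); !ok {
                        return
                    }
                    counts[p]++ // stand-in for "run the thread for a while"
                }
            }(p)
        }
        wg.Wait()
        fmt.Println("threads dispatched per simulated CPU:", counts)
    }

The more simulated CPUs you add, the larger the share of time each one spends queued on that single lock rather than doing work.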

In Windows Server 2008 R2, Microsoft has achieved a major breakthrough: it has developed a lock-less dispatcher. This does not, at a stroke, eliminate all lock contention (contention for a database record, for example), but it does mean that contention no longer occurs at the most basic level of the OS. It is for this reason that we feel comfortable stating that Microsoft Windows Server 2008 R2 will "scale up" with a vengeance.
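
Microsoft has not published the dispatcher's internals here, so the following is only a generic sketch of one lock-free technique, a compare-and-swap (CAS) based stack of runnable items, to show how processors can claim work without ever blocking on a shared lock. It is not the actual Windows Server 2008 R2 implementation, and all names and counts are invented for the sketch:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // node is one entry in a lock-free stack of runnable work items.
    type node struct {
        id   int
        next *node
    }

    // stack is a Treiber stack: push and pop race on a single atomic
    // head pointer using CAS, so no processor ever sleeps or spins
    // while holding a lock.
    type stack struct {
        head atomic.Pointer[node]
    }

    func (s *stack) push(id int) {
        n := &node{id: id}
        for {
            old := s.head.Load()
            n.next = old
            if s.head.CompareAndSwap(old, n) {
                return
            }
            // CAS failed: another processor updated head first; retry.
        }
    }

    func (s *stack) pop() (int, bool) {
        for {
            old := s.head.Load()
            if old == nil {
                return 0, false // nothing runnable
            }
            if s.head.CompareAndSwap(old, old.next) {
                return old.id, true
            }
            // CAS failed: retry instead of waiting on a lock.
        }
    }

    func main() {
        var s stack
        for i := 0; i < 1000; i++ {
            s.push(i)
        }

        var dispatched int64
        var wg sync.WaitGroup
        for p := 0; p < 8; p++ { // 8 simulated processors drain the stack
            wg.Add(1)
            go func() {
                defer wg.Done()
                for {
                    if _, ok := s.pop(); !ok {
                        return
                    }
                    atomic.AddInt64(&dispatched, 1)
                }
            }()
        }
        wg.Wait()
        fmt.Println("items dispatched:", dispatched)
    }

A failed CAS simply retries, so a busy processor never forces the others to queue up behind it the way a held lock does.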

If systems can "scale up," why bother to "scale out"? The answer is cost. A "scaled out" cluster of inexpensive computers is much, much cheaper than a single, massively "scaled up" (>16 processor) system. What is more, systems can "scale out" from a hardware standpoint effectively without limit, while "scale up" hardware is always restricted in the maximum number of processors that a particular configuration can support. So what's the catch?

The answer is software. In general, applications can only be "scaled out" when they can be broken up into an unlimited number of instances that run without interacting with each other--a so-called "shared nothing" application. A good example is an application that serves up Web content, either static or dynamic. Another, rather surprising, example is an Oracle database cluster. Neither Microsoft's SQL Server nor IBM's DB2 (UDB variant) is a "shared nothing" system, and neither is therefore amenable to "scaled out" deployment.
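
For the Web-content case, here is a minimal sketch of what "shared nothing" scale-out looks like (all names and the round-robin front end are hypothetical, chosen only for illustration): identical, self-contained instances that never talk to one another, so capacity grows simply by standing up more of them.

    package main

    import "fmt"

    // webInstance is one self-contained copy of the application: it owns
    // its own copy of the content and never talks to any other instance.
    type webInstance struct {
        id      int
        content map[string]string
    }

    func (w *webInstance) serve(path string) string {
        if body, ok := w.content[path]; ok {
            return fmt.Sprintf("instance %d: %s", w.id, body)
        }
        return fmt.Sprintf("instance %d: 404", w.id)
    }

    func main() {
        pages := map[string]string{"/": "home", "/news": "headlines"}

        // "Scale out" by standing up more identical instances, each with
        // its own private copy of the content.
        var farm []*webInstance
        for i := 0; i < 4; i++ {
            copyOf := make(map[string]string, len(pages))
            for k, v := range pages {
                copyOf[k] = v
            }
            farm = append(farm, &webInstance{id: i, content: copyOf})
        }

        // A trivial round-robin balancer stands in for the front end; any
        // instance can handle any request, so there is nothing to share.
        requests := []string{"/", "/news", "/", "/missing"}
        for i, path := range requests {
            fmt.Println(farm[i%len(farm)].serve(path))
        }
    }
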

... Nick Shelness

One Comment

  1. Chris Eaton
    Posted July 27, 2009 at 2:18 PM

    Hi Mr. Shelness,

    I enjoyed reading your blog posting here, but felt compelled to comment on the very last paragraph. You state that Oracle has a “shared-nothing” database cluster, when in fact the offering they have (called Real Application Clusters) is considered a “shared disk” database implementation. That is, all servers in the cluster share a single copy of the database. The problem with this type of architecture is that it does not meet the assumption you listed at the beginning of your blog, namely: “The above, of course, assumes that there is sufficient memory (RAM) to keep the processor/s busy, and sufficient I/O bandwidth to keep memory full.” In the case of a shared disk database implementation, you have to be very careful when adding more servers to the cluster, because doing so does not guarantee you will have sufficient I/O bandwidth.

    You also state that DB2 does not have a scale-out, shared-nothing implementation. This also is not correct. DB2 for Linux, UNIX, and Windows has had a shared-nothing scale-out implementation since 1996. Today the feature is called the Database Partitioning Feature (DPF) and is part of IBM’s InfoSphere Warehouse offering.

    What distinguishes this from Oracle and others is that it has been architected with your above assumption specifically in mind. That is, when you add more servers to the cluster, the design is such that you are adding processors, memory, and I/O bandwidth to the system in order to keep it in balance. The database itself is broken down into smaller pieces, which are then managed individually by the servers in the cluster, allowing you to grow the system in building-block units. This is what allows it to scale out very efficiently. In fact, the offering that includes this architecture is called the “InfoSphere Balanced Warehouse,” which is built around your specific assumption of keeping the system balanced as you scale it out, so you don’t end up with processors sitting there waiting for I/O. You can find more on this offering here: https://www-01.ibm.com/software/data/infosphere/warehouse/ or feel free to contact me and I would be glad to provide you with more information (I left you my email directly).

    Chris Eaton
    IBM
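
The comment above turns on hash partitioning: each node exclusively owns a disjoint set of rows, and both inserts and lookups use the same hash, so nodes never coordinate with one another. Here is a generic, minimal sketch of that idea (purely illustrative, with hypothetical names and data; it is not IBM's actual DPF implementation):

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // partition is one node in a shared-nothing cluster: it exclusively
    // owns a disjoint slice of the rows (and, on real hardware, its own
    // CPU, memory, and I/O).
    type partition struct {
        id   int
        rows map[string]string
    }

    // owner maps a row key to exactly one partition, so every row has a
    // single home and no coordination between nodes is needed.
    func owner(key string, n int) int {
        h := fnv.New32a()
        h.Write([]byte(key))
        return int(h.Sum32() % uint32(n))
    }

    func main() {
        parts := make([]*partition, 4)
        for i := range parts {
            parts[i] = &partition{id: i, rows: map[string]string{}}
        }

        // Inserts and lookups use the same hash function, so each lands
        // on the one partition that owns the key.
        inserts := map[string]string{"cust:1001": "Alice", "cust:1002": "Bob", "cust:1003": "Carol"}
        for k, v := range inserts {
            parts[owner(k, len(parts))].rows[k] = v
        }
        for _, key := range []string{"cust:1001", "cust:1003"} {
            p := parts[owner(key, len(parts))]
            fmt.Printf("%s found on partition %d: %s\n", key, p.id, p.rows[key])
        }
    }
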
