CC-NUMA


Acronym: CC-NUMA
Definition: Cache-Coherent Non-Uniform Memory Access
References in periodicals archive
In these designs, the cost of accessing main memory is nonuniform across processors, and for this reason architectures of this type are often called cache-coherent, nonuniform memory access, or cc-NUMA, architectures.
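
A minimal sketch of what this nonuniform cost looks like to software, assuming a Linux system with the libnuma library (neither is prescribed by the excerpts above): memory placed on a remote node is reached with the same load and store instructions as local memory, but each access pays extra hops across the interconnect.

    /* Sketch: node-aware allocation on a cc-NUMA machine via Linux libnuma.
     * Node numbers and buffer size are illustrative assumptions.
     * Build with: gcc numa_sketch.c -lnuma */
    #include <stdio.h>
    #include <stdlib.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "libnuma: no NUMA support on this system\n");
            return EXIT_FAILURE;
        }

        int last_node = numa_max_node();
        size_t size = 64 * 1024 * 1024;              /* 64 MiB test buffer */

        /* One buffer on node 0 (local to that node's CPUs) and one on the
         * last node (remote for node-0 CPUs). Accesses to the remote buffer
         * incur the "nonuniform" latency of the interconnect. */
        char *local  = numa_alloc_onnode(size, 0);
        char *remote = numa_alloc_onnode(size, last_node);
        if (!local || !remote) {
            fprintf(stderr, "numa_alloc_onnode failed\n");
            return EXIT_FAILURE;
        }

        /* Touch the pages so they are actually backed on the chosen nodes. */
        for (size_t i = 0; i < size; i += 4096) {
            local[i]  = 1;
            remote[i] = 1;
        }

        printf("nodes 0..%d: local buffer on node 0, remote buffer on node %d\n",
               last_node, last_node);

        numa_free(local, size);
        numa_free(remote, size);
        return EXIT_SUCCESS;
    }
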
There are several factors limiting the scalability of cc-NUMA designs.
Figure 2 presents the normalized average miss latency obtained when running several applications on top of a simulated 64-node cc-NUMA multiprocessor using RSIM [29].
In this paper, we present a review of the proposals that have recently appeared to address these two important issues in cc-NUMA multiprocessors.
In [3], cache misses found in cc-NUMA multiprocessors are first classified in terms of the actions the directory performs to satisfy them, and a novel node architecture is then proposed that makes extensive use of on-processor-chip integration to reduce the latency of each miss type in that classification.
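
The excerpt does not reproduce the taxonomy actually used in [3], but the general idea of classifying misses by the directory actions they require can be sketched as below; the category names, directory-entry fields, and the classify_write_miss helper are illustrative assumptions, not the scheme from [3].

    #include <stdio.h>

    /* Illustrative miss classes, ordered roughly by how many protocol hops
     * the directory must perform before the requester can proceed. */
    enum dir_miss_class {
        MISS_MEMORY,          /* satisfied directly from the home node's memory */
        MISS_CACHE_TO_CACHE,  /* forwarded to the cache holding the dirty copy  */
        MISS_INVALIDATION     /* sharers must be invalidated first              */
    };

    struct dir_entry {
        int      dirty;       /* a single cache holds the only up-to-date copy  */
        unsigned sharers;     /* bit vector of nodes caching the block          */
    };

    /* Hypothetical helper: classify a write miss from the directory's view. */
    static enum dir_miss_class classify_write_miss(const struct dir_entry *e)
    {
        if (e->dirty)
            return MISS_CACHE_TO_CACHE;  /* fetch the block from its owner      */
        if (e->sharers != 0)
            return MISS_INVALIDATION;    /* invalidate sharers before granting  */
        return MISS_MEMORY;              /* clean and unshared: memory suffices */
    }

    int main(void)
    {
        static const char *name[] = { "memory", "cache-to-cache", "invalidation" };
        struct dir_entry clean    = { 0, 0x0 };  /* not cached anywhere       */
        struct dir_entry shared   = { 0, 0x5 };  /* nodes 0 and 2 share it    */
        struct dir_entry modified = { 1, 0x2 };  /* node 1 owns a dirty copy  */

        printf("clean block    -> %s miss\n", name[classify_write_miss(&clean)]);
        printf("shared block   -> %s miss\n", name[classify_write_miss(&shared)]);
        printf("modified block -> %s miss\n", name[classify_write_miss(&modified)]);
        return 0;
    }
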
Computer hardware vendors that use "commodity" operating systems such as Microsoft's Windows NT [Custer 1993] face an even greater problem in obtaining operating system support for their CC-NUMA multiprocessors.
A large CC-NUMA multiprocessor can be configured with multiple virtual machines each running a commodity operating system such as Microsoft's Windows NT or some variant of UNIX.
The use of commodity software leverages the significant engineering effort invested in these operating systems and allows CC-NUMA machines to support their large application base.
Besides the flexibility to support a wide variety of workloads efficiently, this approach has a number of additional advantages over other system software designs targeted for CC-NUMA machines.