NUMA (Non-Uniform Memory Access, also Non-Uniform Memory Architecture) is a computer memory design used in multiprocessor systems in which the memory access time depends on the location of the memory relative to the processor.
NUMA with cache coherency (ccNUMA)
NUMA systems consist of homogeneous basic nodes, each containing a small number of processors together with its own main memory modules.
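As an illustration, the following sketch queries how many NUMA nodes a machine exposes and which node the current CPU belongs to. It assumes a Linux host with the libnuma library installed (header numa.h, linked with -lnuma); these assumptions are not part of the original text.

```c
/* Sketch: query the NUMA topology with libnuma (Linux).
 * Build: gcc topo.c -lnuma
 * Assumes a Linux host with libnuma available. */
#define _GNU_SOURCE          /* for sched_getcpu() */
#include <stdio.h>
#include <sched.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int nodes = numa_max_node() + 1;    /* highest node id + 1 */
    int cpu   = sched_getcpu();         /* CPU the caller is running on */
    int node  = numa_node_of_cpu(cpu);  /* node that owns this CPU */

    printf("NUMA nodes: %d\n", nodes);
    printf("CPU %d belongs to node %d\n", cpu, node);
    return 0;
}
```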
Almost all CPU architectures use a small amount of very fast, non-shared memory, known as a cache, which speeds up access to frequently requested data. In NUMA systems, maintaining cache coherence across the shared memory involves significant overhead.
Although non-cache-coherent NUMA systems are simpler to design and build, it is extremely difficult to write programs for them in the classic von Neumann programming model. As a result, all NUMA computers sold on the market use special hardware to maintain cache coherence and are classified as cache-coherent distributed shared memory systems, or ccNUMA.
Typically, cache coherence is maintained through inter-processor communication between cache controllers, which keeps a consistent view of memory (memory coherence) when more than one cache stores the same memory location. For this reason, ccNUMA platforms lose performance when several processors attempt to access the same memory area in rapid succession. An operating system that supports NUMA tries to reduce the frequency of such accesses by allocating processors and memory in a NUMA-friendly way and by avoiding scheduling and locking patterns that force remote accesses, as sketched below.
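A NUMA-aware program can cooperate with the operating system by allocating memory on the node where the thread that uses it runs. The sketch below uses libnuma for this; the target node number and buffer size are arbitrary illustration values, not taken from the original text.

```c
/* Sketch: node-local allocation with libnuma (Linux).
 * Build: gcc alloc.c -lnuma */
#include <stdio.h>
#include <string.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int    node = 0;                  /* illustrative target node */
    size_t size = 64 * 1024 * 1024;   /* illustrative 64 MiB buffer */

    /* Pin the calling thread to the chosen node, then allocate
     * memory from that node's local RAM so accesses stay local. */
    numa_run_on_node(node);
    char *buf = numa_alloc_onnode(size, node);
    if (buf == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(buf, 0, size);             /* touch the pages; they reside on the chosen node */
    numa_free(buf, size);
    return 0;
}
```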
An example of multiprocessor machines with the ccNUMA architecture is the Silicon Graphics SGI Origin 2000 series. The ASCI Blue Mountain supercomputer, one of the most powerful supercomputers of 1999 [1], was a massively parallel cluster of 48 SGI Origin 2000 machines with 128 processors each.
See also
- Uniform memory access