Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications - Volume 1, 2002
Cache Coherence Protocol Design for Active Memory Systems
M. Chaudhuri, D. Kim, M. Heinrich
1. Introduction
Active memory systems provide a promising approach to overcoming the memory wall for applications with irregular access patterns not amenable to techniques like prefetching or improvements in the cache hierarchy. The central idea in this approach is to perform data-parallel computations [2, 4, 7] or scatter/gather operations invoked via address remapping techniques [1] in the memory system to either offload computation directly or to reduce the number of processor cache misses. Both active memory approaches create coherence problems, even on uniprocessor systems, since there are either additional processors in the memory system operating on the data directly, or the main processor is allowed to refer to the same data via more than one address.
This paper focuses on the challenges of designing cache coherence protocols for active memory systems. The necessity of enforcing data coherence between the normal cache line and the re-mapped cache line makes the protocol behavior and the
performance requirements quite different from those of a conventional DSM protocol. We propose an active memory system that supports address remapping by leveraging and extending the hardware DSM directory-based cache coherence protocol.
The key to our approach is that the active memory controller not only performs the remapping operations required, but also runs the directory-based coherence protocol and hence controls which mappings are present in the processor caches.
The paper is organized as follows. Section 2 describes the protocol extensions required and protocol implementation issues unique to active memory systems. Section 3 discusses active memory protocol-related performance issues. Section 4 presents both uniprocessor speedup and the performance improvement of active memory applications on single-node multiprocessors. Section 5 concludes the paper.
2. DSM Protocol Extensions for Active Memory Systems
Our embedded memory controller runs software code sequences to implement the coherence protocol following the same
philosophy as the FLASH multiprocessor [6]. However, the controller also includes specialized hardware to speed up active
memory protocol execution. Our base (non-extended) protocol is a conventional invalidation-based MSI bitvector directory
protocol running under release consistency. For all normal (non-re-mapped) memory requests, our memory controller follows this base protocol. Each directory entry (per 128B cache line) is 8 bits wide. The sharer vector occupies 4 bits, so we can support up to 4 processors. This can be expanded to larger machine sizes by increasing the width of the sharer field. These four bits store a sharer list vector if the cache line is in the shared state or an owner identifier for the dirty exclusive state. Two bits are devoted to maintaining cache line state information. The dirty bit indicates whether a cache line is in the dirty exclusive state. The AM bit is used for our active memory protocol extensions and is not used by the base protocol. Two remaining bits are left unused.
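For concreteness, the directory entry described above can be pictured as the following C bit-field; this is only a sketch of one possible encoding, and the type and field names are ours, not taken from the controller implementation:

    /* One 8-bit directory entry per 128B cache line (sketch). */
    typedef struct {
        unsigned sharers : 4;  /* sharer bitvector (shared state) or owner id (dirty exclusive) */
        unsigned dirty   : 1;  /* line is held dirty exclusive in some processor cache */
        unsigned am      : 1;  /* some re-mapped alias of this line's data is cached elsewhere */
        unsigned unused  : 2;  /* reserved */
    } dir_entry_t;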
2.1 Active Memory Extensions
As an example of address remapping techniques giving rise to a data coherence problem, Figure 1 shows that the data element D0 in cache line C0 is mapped by some address remapping technique to data elements D1, D2 and D3 belonging to three different cache lines C1, C2 and C3, respectively. This means that D1, D2 and D3 represent the same data variable as D0 and our active memory protocol extensions must keep them coherent. As in the figure, if C1 and C2 are cached in the dirty and shared state, respectively, the processor may write a new value to D1, but read a stale value from D2. We extend the base cache coherence protocol to enforce mutual exclusion between the caching states of the lines that are mapped to each other so that only one cache line of the four mapped cache lines (as in the example above) can be cached at a time. If the processor accesses another mapped cache line it will suffer a cache miss and our protocol will invalidate the old cache line before replying with the requested cache line.
Figure 1. Data Coherence Problem
For each memory request, our protocol consults the AM bit in the directory entry for the requested cache line. The AM bit
of a cache line indicates whether any data in the line has been re-mapped and is being cached by processors in the system at a different address. If the AM bit is clear, there is no possibility of a coherence violation and
the protocol behaves just like the base protocol with one additional task: to guarantee coherence in the future, the protocol sets the AM bits in the directory entries of all the cache lines that are mapped to the requested line. However, if the AM bit is set, some of the re-mapped cache lines (we collectively call these lines R) are cached
in the system and there is a potential data coherence problem. The caching states of R are obtained by reading the corresponding directory entries. If R is in the dirty exclusive state, the protocol sends an intervention to the owner of R to retrieve the most recent copy of the data, updates the mapped data value in the requested cache line, sends the data reply to the requester, and writes back the retrieved cache line to main memory. If R is in the shared state,
the protocol sends invalidation requests to all the sharers, reads the requested cache line from memory (since it is clean) and sends the data reply to the requester. In both cases, the protocol updates the AM bits in the directory entries of all the cache lines mapped to the requested line.
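The read-handler logic just described can be summarized by the following C-style sketch (using the directory entry layout sketched above). It is illustrative only: the helper names (dir_entry, lines_mapped_to, intervene_and_merge, send_invalidations, base_read_handler, send_data_reply) are our assumptions, not the actual handler interface, and message ordering and acknowledgments are omitted.

    #define MAX_MAPPED 16   /* e.g., 16 mapped lines for Matrix Transpose */

    /* Sketch of a read request for line L whose data may be re-mapped. */
    void am_read_handler(line_t L, node_t requester) {
        dir_entry_t *d = dir_entry(L);
        line_t R[MAX_MAPPED];
        int n = lines_mapped_to(L, R);          /* remapping hardware computes these */
        if (!d->am) {
            base_read_handler(L, requester);    /* no alias cached: base protocol */
            for (int k = 0; k < n; k++)
                dir_entry(R[k])->am = 1;        /* remember that L is now cached */
            return;
        }
        for (int k = 0; k < n; k++) {           /* some alias cached: restore coherence */
            dir_entry_t *r = dir_entry(R[k]);
            if (r->dirty)
                intervene_and_merge(L, R[k], owner_of(r)); /* fetch dirty copy, update L, write back R[k] */
            else if (r->sharers)
                send_invalidations(r->sharers, R[k]);      /* purge shared copies */
            r->am = 1;
        }
        d->am = 0;                              /* L itself will now be the cached copy */
        send_data_reply(requester, L);
    }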
2.2 Support for Active Memory Transpose
Although we have implemented four active memory operations (Matrix Transpose, Sparse Matrix Scatter/Gather, Linked List Linearization, and Memory-side Merge [5]), for space reasons we present only Matrix Transpose as an example. Assume the following: A is a square matrix of dimension N, A' is the transposed matrix mapped to A, and two processors P0 and P1 access them. The cache line size is 128 bytes and the data element size is 8 bytes (one double word), so one cache line contains 16 data elements. At a certain point in time, the memory snapshot is as depicted in Figure 2 and P0 executes the following code:
    for i = 0 to N-1
        for j = 0 to N-1
            x += A'[i][j];
Figure 2. Example – Matrix Transpose
P0 reads a cache line C'0 of A' and it misses in the data cache. C'0 is composed of 16 double words w'0, w'1, ..., w'15 that are the same as w0, w1, ..., w15, and the 16 cache lines C0, C1, ..., C15 of A contain them. P0 and P1 are caching C1, C2 and C14 in the dirty or shared states as shown in the figure. P0 sends a read request to the main memory and the memory controller invokes the appropriate coherence handler.
The protocol handler reads the directory entry of C'0 and checks the AM bit. In this case the AM bit is set because C1, C2 and C14 are mapped to C'0 and they are cached. Therefore, instead of using the base protocol our extended protocol is invoked. From the physical address of C'0 the address remapping hardware (the detailed micro-architecture of the controller can be found in [5]) calculates the addresses of the cache lines mapped to C'0, which in this case are C0, C1, ..., C15. Since C'0 belongs to the re-mapped address space and the re-mapped physical address space is contiguous, from the physical address of C'0 the protocol can calculate the position of C'0 in the transposed matrix (i.e. A') if it knows the dimensions of the matrix and the size of each element of the matrix in bytes. This information, along with the starting virtual address of the matrix (i.e. A), is stored in a table that is initialized at the beginning of the application via a system-level library call. Using the position of C'0 in A' and the starting virtual address of A, the protocol calculates the virtual addresses of C0, C1, ..., C15 and looks up a TLB resident in the memory controller to compute the corresponding 16 physical addresses. Next, the protocol reads the directory entries for each of these cache lines and consults the dirty bit and the sharer vector. For C1, we find that it is cached by P0 in the dirty exclusive state. The protocol sends an intervention to P0 for this line because at this point w'1 has a stale value and the most recent value is w1, residing in P0's cache. On receiving the reply from P0, the protocol updates w'1 with the correct value and also writes back C1 to memory. For C2, we find that it is cached by P0 and P1 in the shared state, so the protocol sends invalidations to P0 and P1 for this line. In this case, the protocol can read the correct value of w'2 directly from main memory. The case for C14 is similar to C1 except that P1, instead of P0, is caching this line. For the other cache lines that are clean in main memory, the protocol need not do anything. Now that the protocol has evicted all the cached lines re-mapped to C'0 from the caches of P0 and P1 and updated the data of C'0 with the most recent values, it is ready to reply to P0 with C'0. Finally, the protocol updates the AM bits of all the directory entries of the cache lines re-mapped to C'0. Because C'0 is now cached, the AM bits of C0, C1, ..., C15 are set and that of C'0 is clear. This guarantees correctness for future accesses to any of these cache lines.
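To make the address calculation concrete, the following C sketch shows how the 16 normal-space line addresses mapped to a re-mapped line of A' might be derived. It assumes row-major storage, a line-aligned re-mapped address, and the 8-byte element size used above; the table fields (A_vbase, am_pbase) and the mc_tlb_lookup helper are illustrative assumptions, not the controller's actual interface.

    extern unsigned long mc_tlb_lookup(unsigned long vaddr); /* assumed controller TLB helper */
    #define LINE_SIZE 128UL
    #define ELEM_SIZE 8UL    /* one double word per matrix element */

    /* Given the line-aligned physical address of a re-mapped line C' of A',
       compute the physical addresses of the 16 normal-space lines of A that
       hold the same data.  'am_pbase' is the base of the contiguous re-mapped
       physical space; 'A_vbase' is the starting virtual address of A. */
    void transpose_mapped_lines(unsigned long paddr_Cp, unsigned long N,
                                unsigned long A_vbase, unsigned long am_pbase,
                                unsigned long paddr_out[16])
    {
        unsigned long elem = (paddr_Cp - am_pbase) / ELEM_SIZE; /* index of A'[i][j] */
        unsigned long i = elem / N, j = elem % N;               /* A'[i][j] == A[j][i] */
        for (int k = 0; k < 16; k++) {
            unsigned long vaddr = A_vbase + ((j + k) * N + i) * ELEM_SIZE;
            paddr_out[k] = mc_tlb_lookup(vaddr & ~(LINE_SIZE - 1));
        }
    }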
We have described how our Matrix Transpose protocol enforces mutual exclusion between the caching states of normal
and re-mapped lines. However, this is overly strict since it is legal to cache both normal and re-mapped lines provided both are in the shared state. We find, though, that for Matrix Transpose, enforcing mutual exclusion achieves higher performance because all transpose applications we have examined have the following sharing pattern: a processor first reads the normal space cache lines from the portion of the data set assigned to it, eventually writes to them, and then moves on to read and eventually update the re-mapped space cache lines. Therefore, accesses tend to "migrate" from one space to another. When the active memory controller sees a read request for normal or re-mapped space it knows that
eventually there will be an upgrade request for the same cache line. Further, there will be no access from the other space between the read and the upgrade. So our cache coherence protocol extensions choose to invalidate all the cache lines in the other space mapped to the requested line at the time of read. This keeps the upgrade handler much simpler because it does not have to worry about invalidating the shared cache lines. However, we found that Sparse Matrix Scatter/Gather does not exhibit a migratory sharing pattern, and therefore we relax the mutual exclusion constraint in
that case. This illustrates an advantage of flexible memory controllers that can adapt the coherence protocol to the needs of the application.
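A minimal way to express this per-application choice is a policy flag consulted where the read handler finds shared copies in the other address space; the flag name below is an assumption (for example, set by the initialization library call), not the actual protocol interface.

    /* Sketch: shared copies of a mapped line R found during a read miss. */
    if (dir_entry(R)->sharers && !dir_entry(R)->dirty) {
        if (migratory_sharing)              /* Matrix Transpose: invalidate now, so the */
            send_invalidations(dir_entry(R)->sharers, R); /* upgrade handler stays simple */
        /* else: Scatter/Gather leaves shared copies; both spaces may be cached shared */
    }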
2.3 Multi-node Protocol Extensions
This section discusses issues related to multi-node extensions of our single-node active memory protocol. Our base multi-node protocol is an MSI invalidation-based bitvector directory protocol running under release consistency, with a directory entry size of 8 bytes (per 128B cache line). To reduce the occupancy at the home node, invalidation acknowledgments are collected at the requester. To reduce the number of negative acknowledgments in the system, the home node forwards writebacks to a requester whose interventions were sent too late, and the dirty third node buffers early interventions that arrive before data replies.
Our initial findings on multi-node active memory protocol extensions suggest that special care should be taken when assigning pages to home nodes. All cache lines mapped to each other should be co-located on the same node; otherwise, a request for a local memory line may require network transactions to consult the directory states of other remote lines that are mapped to it, which would complicate the protocol handlers as well as degrade performance. Further complications arise when gathering invalidation acknowledgments on multi-node systems. The active memory protocol needs to invalidate cache lines that are mapped to the requested line and cached by one or more processors, but the requested line and the lines to be invalidated have different addresses. Therefore, either the invalidation acknowledgment and invalidation request messages must carry different addresses, or a mapping procedure must be invoked while gathering invalidation acknowledgments so that each invalidation request is matched with its corresponding acknowledgment. Finally, remote interventions also need special consideration. While conventional protocols send at most one intervention per memory request, the active memory protocol may have to send multiple interventions whose addresses differ but are mapped to the requested line. Therefore, the intervention reply handler must gather all the intervention replies before replying with the requested line.
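As one example, the intervention-reply path on a multi-node system could count outstanding replies before sending the final data reply. The following C-style sketch is only illustrative; pending_request_for, merge_into, writeback, and send_data_reply are assumed helper names.

    /* Sketch: gather every intervention reply for lines mapped to the
       requested line before replying with it (multi-node case). */
    void am_intervention_reply_handler(line_t R, node_t requester) {
        pending_t *p = pending_request_for(R); /* request this mapped line belongs to */
        merge_into(p->requested_line, R);      /* fold the retrieved data into the requested line */
        writeback(R);                          /* dirty data also goes back to memory */
        if (--p->outstanding == 0)             /* last of the mapped interventions */
            send_data_reply(requester, p->requested_line);
    }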
3. Protocol Evaluation
This section discusses protocol memory overhead and protocol-related performance issues. Additional memory overhead in the active memory protocol stems from an increase in protocol handler code size, directory space overhead for re-mapped cache lines, and the space required to store mapping information in a table accessible by the embedded protocol processor. As an example of the increase in handler code size, the base protocol code size is 20KB, but adding
the protocol extensions to support the active memory Matrix Transpose discussed in Section 2 yields a protocol code size
of 33KB. Sparse Matrix Scatter/Gather, Linked List Linearization and Memory-side Merge have protocol code sizes of 24KB,
24KB, and 26KB respectively. The directory space overhead depends on the size of the re-mapped address space. Finally,
the mapping table is only 128 bytes in size. This additional memory overhead is independent of the number of nodes in the system, except that the small mapping table must be replicated on every node of the system.
There are many performance issues for active memory protocols that do not impact conventional DSM protocols. We briefly
discuss some of these here. One major potential performance bottleneck is the occupancy of the memory controller. To service a request for a particular cache line, the protocol may have to consult the directory entries of all the re-mapped cache lines (16 in the previous example) and may have to take different kinds of actions based on the directory states. Performing all these directory lookups in the software running on our programmable memory controller was too slow. Instead, our controller has a specialized pipelined hardware address calculation unit that computes 16 re-mapped cache line addresses, loads directory entries into special hardware registers and initiates any necessary
data memory accesses. Our software protocol handlers tightly control this active memory data unit, striking a balance
between flexibility and performance. As an example of average controller occupancy, in a single-node system with 1, 2, and 4 processors, the controller is occupied for 7.7%, 21.3%, and 43.2% of the total execution time respectively in the normal executions, and for 15.5%, 29.4%, and 47.8% in the active memory executions, though one must remember that the active memory execution time is smaller, and therefore occupancy percentages are naturally higher.
Another important performance issue is the behavior of the directory data cache, which is accessed only by the programmable embedded processor core on the memory controller. Since an access to one directory entry may necessitate accesses to multiple re-mapped directory entries (16 in the above example) that may not correspond to directory entries for contiguous cache lines, the directory data cache may suffer from poor locality and hence large numbers of misses. The choice of byte-sized directory entries mitigates this problem in uniprocessor or single-node multiprocessor active
memory systems. However, directory data cache performance may become an issue for extremely large problem sizes on small machines as the directory width increases for multi-node active memory systems.
Finally, an active memory protocol may have to send multiple interventions (a maximum of 16 in the above example) and every local intervention requires a data buffer that is filled by the processor-bus interface when the processor sends the intervention reply. This puts heavy pressure on the number of data buffers needed on the memory controller. We decided to keep four buffers reserved for this purpose and recycle them if a handler needs to send more than four interventions.
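A simple way to realize this reserve-and-recycle scheme is sketched below; the buffer-pool helpers and the owner and line arrays are hypothetical names, not the controller's actual interface.

    /* Sketch: reuse 4 reserved data buffers when a handler must send more
       than 4 local interventions. */
    #define AM_RESERVED_BUFS 4
    for (int k = 0; k < num_interventions; k++) {
        int b = k % AM_RESERVED_BUFS;      /* cycle through the reserved buffers */
        if (k >= AM_RESERVED_BUFS)
            wait_and_drain_reply(b);       /* free the buffer filled four sends ago */
        send_intervention_using_buffer(owner[k], line[k], b);
    }
    for (int b = 0; b < AM_RESERVED_BUFS && b < num_interventions; b++)
        wait_and_drain_reply(b);           /* collect the remaining outstanding replies */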
References
[1] Carter J. B. Impulse: Building a Smarter Memory Controller. In Proceedings of the Fifth International Symposium on High Performance Computer Architecture, January 1999.
[2] Hall M. Mapping Irregular Applications to DIVA, A PIM-based Data-Intensive Architecture. Supercomputing, Portland, OR, November 1999.
[3] Heinrich M., Speight E., Chaudhuri M. Active Memory Clusters: Efficient Multiprocessing on Commodity Clusters. In Proceedings of the Fourth International Symposium on High-Performance Computing, Lecture Notes in Computer Science, Springer-Verlag, May 2002.
[4] Kang Y. FlexRAM: Toward an Advanced Intelligent Memory System. International Conference on Computer Design, October 1999.
[5] Kim D., Chaudhuri M., Heinrich M. Leveraging Cache Coherence in Active Memory Systems. In Proceedings of the 16th ACM International Conference on Supercomputing, New York City, June 2002.
[6] Kuskin J. The Stanford FLASH Multiprocessor. In Proceedings of the 21st International Symposium on Computer Architecture, pages 302-313, April 1994.
[7] Oskin M., Chong F. T., Sherwood T. Active Pages: A Computation Model for Intelligent Memory. In Proceedings of the 25th International Symposium on Computer Architecture, 1998.