Explain in detail cache coherence.
  1. Cache coherency refers to the consistency of data stored in the local caches of a shared resource, where copies of that resource may also reside in the local caches of other processors.
  2. A cache is used as temporary storage for frequently used data from memory. Cache coherence is a special case of memory coherence: when clients in a system maintain caches of a common memory resource, problems may arise from inconsistent data.
  3. Multiprocessor systems with caches and a shared memory space need to keep shared data coherent. Each local cache contains an image of a portion of memory, so if a word is altered in one cache, it could conceivably invalidate a word in another cache. To prevent this, the other processors must be alerted that an update has taken place so that they can make the corresponding changes.
  4. If one client has a copy of a memory block from a previous read and another client changes that memory block, the first client could be left with an invalid copy in its cache without any notification of the change; a small sketch of this staleness problem follows the figure below. Cache coherence is intended to manage such conflicts and maintain consistency between cache and memory. Cache coherence approaches have generally been divided into software and hardware approaches. The classification of cache coherence protocols is shown in Figure 5.

[Figure 5: Classification of cache coherence protocols]

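The following is a minimal sketch (plain C, not real hardware) of the staleness problem described in point 4: two private caches each hold a copy of the same memory word, one processor updates its copy, and without any coherence mechanism the other processor keeps seeing the old value.

```c
#include <stdio.h>

struct cache {
    int value;   /* cached copy of the shared memory word */
    int valid;   /* 1 if this cache holds a copy          */
};

int main(void)
{
    int memory = 10;                        /* the shared memory word */
    struct cache c[2] = {{10, 1}, {10, 1}}; /* both caches read it in */

    /* Processor 0 updates its cached copy and writes it back. */
    c[0].value = 42;
    memory = c[0].value;

    /* Processor 1 still hits in its own cache and sees the old value. */
    printf("memory  = %d\n", memory);              /* 42         */
    printf("cache 0 = %d\n", c[0].value);          /* 42         */
    printf("cache 1 = %d (stale!)\n", c[1].value); /* 10         */
    return 0;
}
```
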
Software Approaches:

Software cache coherence schemes attempt to avoid the need for additional hardware circuitry and logic by relying on the compiler and operating system to deal with the problem. Software approaches are attractive because the overhead of detecting potential problems is transferred from run time to compile time, and the design complexity is transferred from hardware to software.

Hardware approaches:

Hardware-based solutions are generally referred to as cache coherence protocols. These solutions provide dynamic recognition at run time of potential inconsistency conditions. Because the problem is only dealt with when it actually arises, there is more effective use of caches, leading to improved performance over a software approach.

  • Snoopy Protocols

    Snoopy protocols distribute the responsibility for maintaining cache coherence among all of the cache controllers in a multiprocessor. A cache must recognize when a line that it holds is shared with other caches. When an update action is performed on a shared cache line, it must be announced to all other caches by a broadcast mechanism. Each cache controller is able to “snoop” on the network to observe these broadcast notifications and react accordingly; a minimal sketch of such a write-invalidate scheme appears after this list.

  • Directory Protocols

    Directory protocols collect and maintain information about where copies of lines reside. Typically, there is a centralized controller that is part of the main memory controller, and a directory that is stored in main memory. Any processor that wants to write to a line must first request permission from the controller, which ensures that only one processor can write to a given line at a time; a sketch of a directory entry appears after this list.
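
Below is a minimal sketch of a write-invalidate snoopy scheme for a single memory word and three caches. The states, the broadcast "bus" function, and the read/write handlers are simplified illustrations, not a full MSI/MESI implementation: a write broadcasts an invalidation that every other controller snoops, so a later read misses and fetches the up-to-date value instead of a stale copy.

```c
#include <stdio.h>

#define N 3
enum state { INVALID, SHARED, MODIFIED };

struct cache { enum state st; int value; };

static struct cache caches[N];
static int memory = 10;                    /* the shared memory word    */

/* Every other cache controller "snoops" the broadcast and invalidates
 * its copy of the line. */
static void bus_broadcast_invalidate(int writer)
{
    for (int i = 0; i < N; i++)
        if (i != writer && caches[i].st != INVALID)
            caches[i].st = INVALID;
}

static int cpu_read(int id)
{
    if (caches[id].st == INVALID) {        /* miss: fetch fresh data    */
        /* If another cache holds a MODIFIED copy, write it back first. */
        for (int i = 0; i < N; i++)
            if (caches[i].st == MODIFIED) {
                memory = caches[i].value;
                caches[i].st = SHARED;
            }
        caches[id].value = memory;
        caches[id].st = SHARED;
    }
    return caches[id].value;
}

static void cpu_write(int id, int v)
{
    bus_broadcast_invalidate(id);          /* announce the update       */
    caches[id].value = v;
    caches[id].st = MODIFIED;
}

int main(void)
{
    printf("P1 reads %d\n", cpu_read(1));  /* 10, line becomes SHARED   */
    cpu_write(0, 42);                      /* P0 writes, P1 invalidated */
    printf("P1 reads %d\n", cpu_read(1));  /* miss -> 42, not stale     */
    return 0;
}
```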

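And here is a minimal sketch of a directory entry for one memory line, with the names and structure chosen only for illustration. The centralized controller consults the presence bits to know exactly which caches hold a copy, so invalidations can be sent point-to-point rather than broadcast, and the dirty/owner fields let it fetch the latest copy back before granting a new read.

```c
#include <stdio.h>

#define N 4                      /* number of processors/caches        */

struct dir_entry {
    unsigned presence;           /* bit i set => cache i holds a copy  */
    int dirty;                   /* 1 => exactly one cache owns it     */
    int owner;                   /* valid only when dirty == 1         */
};

static struct dir_entry dir = {0, 0, -1};

/* A processor asks the controller for read permission. */
static void dir_read(int cpu)
{
    if (dir.dirty) {             /* fetch latest copy from the owner   */
        printf("write-back requested from P%d\n", dir.owner);
        dir.dirty = 0;
    }
    dir.presence |= 1u << cpu;   /* record the new sharer              */
    printf("P%d granted read; sharers = 0x%x\n", cpu, dir.presence);
}

/* A processor asks the controller for write (exclusive) permission. */
static void dir_write(int cpu)
{
    for (int i = 0; i < N; i++)  /* invalidate every other sharer      */
        if (i != cpu && (dir.presence & (1u << i)))
            printf("invalidate sent to P%d\n", i);
    dir.presence = 1u << cpu;    /* only the writer keeps a copy       */
    dir.dirty = 1;
    dir.owner = cpu;
    printf("P%d granted exclusive write\n", cpu);
}

int main(void)
{
    dir_read(1);
    dir_read(2);
    dir_write(0);                /* invalidations go to P1 and P2 only */
    dir_read(3);                 /* forces a write-back from P0        */
    return 0;
}
```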