Memory mapping in cache
• Is a technique that defines how the contents of main memory are brought into cache memory
• Cache memory and main memory are divided into small, fixed-sized blocks
• Cache memory can hold only a small subset of the main memory blocks at any time
• An algorithm is therefore needed for mapping main memory blocks into cache blocks (a quick sizing sketch follows this list)
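The short sketch below illustrates the block structure described above by counting how many blocks main memory and the cache are divided into. The sizes (64 KB main memory, 2 KB cache, 16-byte blocks) are assumptions chosen only for illustration, not values from these notes.

```python
# Hypothetical sizes chosen only for illustration.
MAIN_MEMORY_BYTES = 64 * 1024   # 64 KB main memory
CACHE_BYTES       = 2 * 1024    # 2 KB cache
BLOCK_BYTES       = 16          # fixed block (line) size

main_memory_blocks = MAIN_MEMORY_BYTES // BLOCK_BYTES   # 4096 blocks
cache_blocks       = CACHE_BYTES // BLOCK_BYTES         # 128 blocks

# The cache can hold only a small subset of main memory blocks,
# so a mapping algorithm must decide where a given memory block goes.
print(main_memory_blocks, cache_blocks)   # 4096 128
```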
Cache memory mapping techniques
• Direct Mapping
• Associative mapping
• Set associative mapping

Address Division
• The main memory address is divided into fields (a tag, a block/set index, and a word offset within the block); the exact division depends on the mapping technique used
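For instance, under direct mapping (the first technique listed above), the address splits into tag, line, and word fields. The minimal sketch below derives the field widths from the sizes assumed in the earlier sketch (16-bit addresses, 128 cache lines, 16-byte blocks); these numbers are illustrative assumptions, not values from the notes.

```python
# Direct-mapping address split: | tag | line | word |
# Field widths derived from the assumed sizes above.
BLOCK_BYTES  = 16    # -> 4 word (offset) bits
CACHE_LINES  = 128   # -> 7 line bits
ADDRESS_BITS = 16    # 64 KB main memory -> 16-bit address

WORD_BITS = BLOCK_BYTES.bit_length() - 1          # 4
LINE_BITS = CACHE_LINES.bit_length() - 1          # 7
TAG_BITS  = ADDRESS_BITS - LINE_BITS - WORD_BITS  # 5

def split_direct(addr: int):
    """Split a main memory address into (tag, line, word) fields."""
    word = addr & (BLOCK_BYTES - 1)
    line = (addr >> WORD_BITS) & (CACHE_LINES - 1)
    tag  = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

print(split_direct(0x1A2B))   # (3, 34, 11)
```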

Associative Mapping
• A flexible mapping
• A memory block can reside in any cache block
Since all cache blocks are equally available, any block of main memory can map to any block of the cache
• If all the cache blocks are occupied, one of the existing blocks must be replaced
A replacement algorithm is required to decide which block should be replaced
• E.g., FCFS, LRU, etc. (a lookup sketch follows this list)
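The following is a minimal sketch of a fully associative lookup with LRU replacement. The class name, cache size, and access sequence are hypothetical and chosen only to illustrate the idea described above.

```python
from collections import OrderedDict

class AssociativeCache:
    """Fully associative cache: any memory block may occupy any cache block.
    Uses LRU replacement when all cache blocks are occupied."""

    def __init__(self, num_blocks: int, block_size: int):
        self.num_blocks = num_blocks
        self.block_size = block_size
        self.blocks = OrderedDict()   # block number -> (placeholder) data

    def access(self, address: int) -> bool:
        block_no = address // self.block_size
        if block_no in self.blocks:               # hit: every block is searched
            self.blocks.move_to_end(block_no)     # mark as most recently used
            return True
        if len(self.blocks) == self.num_blocks:   # cache full
            self.blocks.popitem(last=False)       # evict least recently used
        self.blocks[block_no] = None              # bring the block into the cache
        return False

# Assumed example: 4-block cache, 16-byte blocks.
cache = AssociativeCache(num_blocks=4, block_size=16)
for addr in [0, 16, 32, 0, 48, 64, 0]:
    print(hex(addr), "hit" if cache.access(addr) else "miss")
# miss, miss, miss, hit, miss, miss, hit
```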

Address Division
• In associative mapping, the address is divided into only two fields: a tag (the block number) and a word (offset within the block); the tag is compared with the tags of all cache blocks simultaneously

Set Associative Mapping
• A combination of direct mapping and associative mapping
• Blocks of the cache are grouped into sets, and the mapping allows a block of main memory to reside in any block of a specific set
• Within a set, a main memory block can be mapped to any of the available cache blocks
Block j of main memory is mapped to set (j modulo m), where m is the number of sets in the cache
• It requires a replacement algorithm because, within a set, it follows associative mapping (see the sketch after this list)
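The sketch below combines the two ideas: direct mapping selects the set (block j of main memory goes to set j mod m), and the search and replacement within the set are associative. The parameters (4 sets of 2 ways, 16-byte blocks) and the class name are assumptions for illustration.

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Set associative cache: block j of main memory maps to set (j mod m)
    and may occupy any way within that set (LRU replacement inside the set)."""

    def __init__(self, num_sets: int, ways: int, block_size: int):
        self.num_sets = num_sets        # m
        self.ways = ways                # blocks per set
        self.block_size = block_size
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, address: int) -> bool:
        block_no = address // self.block_size
        set_index = block_no % self.num_sets      # j modulo m selects the set
        tag = block_no // self.num_sets
        s = self.sets[set_index]
        if tag in s:                              # searched associatively within the set
            s.move_to_end(tag)                    # mark as most recently used
            return True
        if len(s) == self.ways:
            s.popitem(last=False)                 # evict LRU block of this set only
        s[tag] = None
        return False

# Assumed example: 4 sets x 2 ways, 16-byte blocks (8 cache blocks total).
cache = SetAssociativeCache(num_sets=4, ways=2, block_size=16)
for addr in [0, 64, 0, 128, 64]:   # blocks 0, 4, 8 all map to set 0
    print(hex(addr), "hit" if cache.access(addr) else "miss")
# miss, miss, hit, miss, miss
```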

Address Division
• In set associative mapping, the address is divided into tag, set, and word fields; the set field selects the set, and the tag is compared with the tags of all blocks within that set
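Continuing the configuration assumed in the sketch above (16-bit addresses, 16-byte blocks, 4 sets), the field widths would work out as follows; the numbers are illustrative assumptions only.

```python
ADDRESS_BITS = 16   # assumed 64 KB address space
BLOCK_BYTES  = 16   # -> 4 word (offset) bits
NUM_SETS     = 4    # -> 2 set bits

WORD_BITS = BLOCK_BYTES.bit_length() - 1         # 4
SET_BITS  = NUM_SETS.bit_length() - 1            # 2
TAG_BITS  = ADDRESS_BITS - SET_BITS - WORD_BITS  # 10

print(TAG_BITS, SET_BITS, WORD_BITS)   # 10 2 4
```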

