
Cache Write Policy and the Dirty Bit【Write-Back vs Write-Through】



In addition to caching reads from memory, the system can also cache writes to memory. The handling of the address bits, cache lines and so on is much the same as it is for reads. However, there are two different ways the cache can handle writes, and this choice is referred to as the cache's "write policy".

  • Write-Back Cache: Also called "copy back" cache, this policy is "full" write caching of the system memory. When a write is made to system memory at a location that is currently cached, the new data is only written to the cache, not actually written to the system memory. Later, if another memory location needs to use the cache line where this data is stored, it is saved ("written back") to the system memory and then the line can be used by the new address.
  • Write-Through Cache: With this method, every time the processor writes to a cached memory location, both the cache and the underlying system memory location are updated. This is really only "half caching" of writes: the data just written sits in the cache in case the processor needs to read it back soon, but the write itself isn't cached, because a memory write operation still has to be initiated every time. (The sketch after this list contrasts the two policies.)
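
To make the contrast concrete, here is a minimal C sketch, purely illustrative and not modelled on any real cache controller; all the names are invented for the example. It drives a single cache line with 1000 stores to the same address: under write-through every store costs a memory write, while under write-back only the eventual eviction does.

#include <stdio.h>
#include <stdbool.h>

enum policy { WRITE_THROUGH, WRITE_BACK };

static unsigned memory_writes;              /* counts writes that reach RAM */

struct cache_line {
    unsigned tag;
    unsigned data;
    bool valid;
    bool dirty;                             /* meaningful only for write-back */
};

/* The processor stores a value to an address already held in this line. */
static void cpu_write(struct cache_line *line, enum policy p, unsigned value)
{
    line->data = value;                     /* the cache is always updated */
    if (p == WRITE_THROUGH)
        memory_writes++;                    /* ...and so is RAM, every time */
    else
        line->dirty = true;                 /* write-back: just mark the line dirty */
}

/* The line is re-used for another address, so a write-back may be needed. */
static void evict(struct cache_line *line, enum policy p)
{
    if (p == WRITE_BACK && line->dirty)
        memory_writes++;                    /* one write-back for the whole burst */
    line->valid = false;
    line->dirty = false;
}

int main(void)
{
    enum policy policies[2] = { WRITE_THROUGH, WRITE_BACK };

    for (int i = 0; i < 2; i++) {
        struct cache_line line = { .tag = 0x12, .data = 0, .valid = true, .dirty = false };
        memory_writes = 0;

        for (unsigned v = 0; v < 1000; v++) /* 1000 stores to the same cached address */
            cpu_write(&line, policies[i], v);
        evict(&line, policies[i]);

        printf("%-13s: %u memory write(s)\n",
               policies[i] == WRITE_THROUGH ? "write-through" : "write-back",
               memory_writes);
    }
    return 0;
}

Running this prints 1000 memory writes for write-through and 1 for write-back, which is exactly the traffic saving the write-back policy is after.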

Many caches that are capable of write-back operation can also be set to operate as write-through (though not all), but generally not the other way around.

Comparing the two policies, write-back generally provides better performance, at a slight cost in memory integrity. Write-back caching saves the system from performing many unnecessary write cycles to the system RAM, which can lead to noticeably faster execution. However, when write-back caching is used, writes to cached memory locations are placed only in the cache, and the RAM itself isn't actually updated until the cache line is evicted to make room for another address to use it.

As a result, at any given time there can be a mismatch between many of the lines in the cache and the memory addresses they correspond to. When this happens, the data in memory is said to be "stale", since it doesn't yet have the fresh data that was written only to the cache. Memory used with a write-through cache can never be stale, because the system memory is written at the same time as the cache.

Normally, stale memory isn't a problem, because the cache controller keeps track of which locations in the cache have been changed, and therefore which memory locations may be stale. It does this with a single extra bit of memory per cache line, called the "dirty bit". Whenever a write is cached, this bit is set (made a 1) to tell the cache controller: "when you decide to re-use this cache line for a different address, you need to write the current contents back to memory first". To save cost, the dirty bit is normally implemented as one extra bit in the tag RAM, rather than in a separate memory chip.
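
As a sketch of that mechanism (again purely illustrative; the structures and names are invented and do not reflect a real controller interface), here is a tiny direct-mapped write-back cache in C in which the dirty bit lives next to the tag. It is set on every cached write and checked when the line is claimed by a different address, at which point the stale contents are written back first.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define LINES 4                             /* 4 direct-mapped lines, 1 word each */

static uint32_t ram[64];                    /* the backing "system memory" */

struct tag_entry {                          /* tag RAM entry: tag plus one extra bit */
    uint32_t tag;
    bool valid;
    bool dirty;
};

static struct tag_entry tags[LINES];
static uint32_t cache_data[LINES];          /* the cache data store */

static void cache_write(uint32_t addr, uint32_t value)
{
    uint32_t index = addr % LINES;          /* which line this address maps to */
    uint32_t tag   = addr / LINES;
    struct tag_entry *t = &tags[index];

    /* Re-using the line for a different address: save the old contents first. */
    if (t->valid && t->tag != tag && t->dirty) {
        uint32_t old_addr = t->tag * LINES + index;
        ram[old_addr] = cache_data[index];  /* the actual "write back" */
        printf("writing back address %u to RAM\n", old_addr);
    }

    t->tag   = tag;
    t->valid = true;
    t->dirty = true;                        /* RAM is now stale for this address */
    cache_data[index] = value;              /* only the cache is updated here */
}

int main(void)
{
    cache_write(5, 111);                    /* cached; RAM[5] is now stale */
    printf("RAM[5] after first write : %u (stale)\n", ram[5]);

    cache_write(9, 222);                    /* address 9 maps to the same line as 5 */
    printf("RAM[5] after the conflict: %u (written back)\n", ram[5]);
    return 0;
}

Running it shows RAM[5] still holding 0 (stale) after the first write, and 111 once the conflicting write to address 9 forces the write-back.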

However, the use of a write-back cache does entail the small possibility of data corruption if something were to happen before the "dirty" cache lines could be saved to memory. There aren’t too many cases where this could happen, because both the memory and the cache are volatile (cleared when the machine is powered off).

On the other hand, consider a disk cache, where system memory is used to cache writes to the disk. Here the memory is volatile but the disk is not. If a write-back cache is used, the data on the disk can be stale compared to what is in memory. Then, if the power goes out, you lose everything that hadn't yet been written back to the disk, leading to possible corruption. For this reason, most disk caches allow programs to overrule the write-back policy to ensure consistency between the cache (in memory) and the disk. Disk utilities, for example, don't like write-back caching very much!

It is also possible with many caches to tell the controller "please write out to system memory all dirty cache lines, right now". This is done when it is necessary to make sure that the cache is in sync with the memory, and there is no stale data. This is sometimes called "flushing" the cache, and is especially common with disk caches, for the reason outlined in the previous paragraph.
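
A flush operation along those lines might look like the following C sketch (using the same illustrative cache layout as above, not a real driver API): every valid, dirty line is written back and its dirty bit cleared, after which the cache and the backing store are in sync.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define LINES 4

struct line {
    uint32_t tag;
    uint32_t data;
    bool valid;
    bool dirty;
};

static uint32_t ram[64];                    /* backing memory (or the disk, for a disk cache) */

/* "Please write out all dirty cache lines, right now." */
static void flush_cache(struct line lines[LINES])
{
    for (uint32_t i = 0; i < LINES; i++) {
        if (lines[i].valid && lines[i].dirty) {
            uint32_t addr = lines[i].tag * LINES + i;
            ram[addr] = lines[i].data;      /* bring the backing store up to date */
            lines[i].dirty = false;         /* the line is clean again */
        }
    }
}

int main(void)
{
    struct line lines[LINES] = {
        { .tag = 1, .data = 111, .valid = true, .dirty = true  },  /* stale in RAM */
        { .tag = 2, .data = 222, .valid = true, .dirty = false },  /* already clean */
    };

    flush_cache(lines);
    printf("RAM[4] after flush = %u\n", ram[4]);   /* tag 1, line 0 -> address 4 */
    return 0;
}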


Please credit the source when reposting: 在路上 » Cache Write Policy and the Dirty Bit【Write-Back vs Write-Through】
