Abstract
A common mechanism to ensure cache coherence is to issue snoop requests to all processors to check for the presence of cached data. Since most snoop requests result in cache misses and waste a great deal of power, snoop filters are widely used to filter out unnecessary snoop requests and reduce power consumption. However, snoop filters suffer from a similar problem: false-positive predictions consume a large amount of power. Essentially, designing an efficient snoop filter requires a tradeoff between filter rate and hardware cost. Traditionally, the filter rate can be improved by increasing the memory capacity of snoop filters, but this adds hardware overhead. In this paper, we propose an efficient adaptive mechanism that improves the filter rate by duplicating multiple copies of small snoop filters and distributing cache tags evenly across the duplicated copies according to an analysis of page tables. Experimental results show that the adaptive mechanism applied to JETTY snoop filters achieves an average improvement of 19.17% in filter rate and a 76.1% memory reduction for the SPLASH-2 benchmarks.
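To make the mechanism concrete, below is a minimal C++ sketch, not the paper's implementation: it assumes a counting Bloom-filter-style structure standing in for each small JETTY filter, a fixed number of duplicated copies, and a simple page-number-modulo mapping in place of the page-table analysis described in the abstract. All names, sizes, and the hash function are illustrative assumptions.

```cpp
#include <array>
#include <bitset>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical sketch of the duplicated-snoop-filter idea: several small
// Bloom-filter-like filters, with each cached block mapped to one copy.
constexpr std::size_t kNumCopies   = 4;    // number of duplicated small filters (assumed)
constexpr std::size_t kBitsPerCopy = 256;  // bits per small filter (assumed)
constexpr std::size_t kPageShift   = 12;   // 4 KiB pages (assumed)

struct SmallSnoopFilter {
    std::bitset<kBitsPerCopy> bits;                                        // presence bit-vector
    std::vector<uint32_t> counts = std::vector<uint32_t>(kBitsPerCopy, 0); // counters for safe removal

    static std::size_t hashTag(uint64_t tag) {
        return (tag * 0x9E3779B97F4A7C15ULL) % kBitsPerCopy;
    }
    void insert(uint64_t tag) { auto i = hashTag(tag); ++counts[i]; bits.set(i); }
    void remove(uint64_t tag) { auto i = hashTag(tag); if (counts[i] && --counts[i] == 0) bits.reset(i); }
    // false means the block is definitely not cached, so the snoop is filtered;
    // true may be a false positive.
    bool mayContain(uint64_t tag) const { return bits.test(hashTag(tag)); }
};

struct DuplicatedSnoopFilter {
    std::array<SmallSnoopFilter, kNumCopies> copies;

    // Distribute tags across the copies; here simply by page number (assumed policy).
    static std::size_t copyIndex(uint64_t addr) { return (addr >> kPageShift) % kNumCopies; }

    void onCacheFill(uint64_t addr)        { copies[copyIndex(addr)].insert(addr); }
    void onCacheEvict(uint64_t addr)       { copies[copyIndex(addr)].remove(addr); }
    bool shouldSnoop(uint64_t addr) const  { return copies[copyIndex(addr)].mayContain(addr); }
};

int main() {
    DuplicatedSnoopFilter filter;
    filter.onCacheFill(0x1000);                       // block cached
    std::cout << filter.shouldSnoop(0x1000) << "\n";  // 1: must snoop
    std::cout << filter.shouldSnoop(0x8000) << "\n";  // 0: snoop filtered
    filter.onCacheEvict(0x1000);
    std::cout << filter.shouldSnoop(0x1000) << "\n";  // 0 after eviction
}
```

The intent captured by the sketch is that each small copy sees only a fraction of the cached tags, so its occupancy and false-positive rate stay low without scaling up a single large filter.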
| Original language | English |
|---|---|
| Pages (from-to) | 1233-1240 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Very Large Scale Integration (VLSI) Systems |
| Volume | 26 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - July 2018 |
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Electrical and Electronic Engineering