Study on Last-Level Cache Management Strategy of the Chip Multi-Processor

PP: 661-670

Author(s)

Li Lei, An Huiyao, Zhang Peng

Abstract

Fair and effective allocation of limited shared resources among threads is a key problem for chip multiprocessors. As the number of processor cores grows, competition among threads for these limited shared resources becomes more intense, and its impact on system performance becomes more significant. To alleviate this problem, a fair and effective scheduling algorithm for allocating shared resources among threads is essential. Among all shared resources, the shared cache and the DRAM system have the largest effect on system performance. There are essential differences between the last-level cache and a first-level cache. A first-level cache is designed to deliver data to the processor quickly, so high access speed is the priority. The last-level cache, by contrast, aims to keep as much data on chip as possible; its access-speed requirements are less strict, and it is constrained mainly by the number of transistors available on the die. The LRU replacement policy and its approximations, which traditionally manage the upper cache levels well, are not suitable for large-capacity last-level caches: they can cause destructive interference between threads and allow streaming-media programs to thrash the cache, which degrades processor performance. This paper analyzes several pressing problems in managing the large shared last-level cache on multi-core platforms and proposes corresponding solutions that cost little but yield large improvements in system performance.
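
As a minimal sketch of the thrashing effect described above (the set size, tag names, and access pattern here are illustrative assumptions, not details from the paper), the following simulation models a single 8-way LRU-managed cache set shared by a thread with a small reusable working set and a streaming thread that never re-references its data:

from collections import OrderedDict

class LRUSet:
    """One set of a set-associative cache managed with LRU replacement."""
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()  # tags ordered from LRU (front) to MRU (back)

    def access(self, tag):
        """Return True on a hit; insert and possibly evict on a miss."""
        if tag in self.lines:
            self.lines.move_to_end(tag)        # promote to MRU
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)     # evict the LRU line
        self.lines[tag] = None
        return False

shared_set = LRUSet(ways=8)                    # illustrative 8-way set
reuse_tags = [f"A{i}" for i in range(4)]       # thread A: working set fits easily
stream_id = 0
a_hits = a_refs = 0

for _ in range(1000):
    for tag in reuse_tags:                     # thread A re-touches its 4 lines
        a_hits += shared_set.access(tag)
        a_refs += 1
    for _ in range(16):                        # thread B streams 16 new lines
        shared_set.access(f"B{stream_id}")
        stream_id += 1

print(f"thread A hit rate under shared LRU: {a_hits / a_refs:.1%}")
# Thread B's single-use lines flush the whole set every round, so thread A's
# otherwise cache-friendly working set never hits: the destructive
# interference and thrashing the abstract describes.

Insertion policies that place streaming lines near the LRU position, or way partitioning between threads, are common low-cost ways to avoid this collapse in shared last-level caches.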