This synthesis lecture presents the state of the art in applying low-latency, lossless hardware compression algorithms to the cache, main memory, and the cache/memory link. Several non-trivial challenges must be addressed to make data compression work well in this context. First, since compressed data must be decompressed before it can be accessed, decompression latency falls on the critical memory-access path; this imposes a significant constraint on the choice of compression algorithm. Second, while conventional memory systems store fixed-size entities such as data types, cache blocks, and memory pages, these entities vary in size once the memory system employs compression. Handling variable-size entities has a significant impact on how caches are organized and how resources in main memory are managed. We systematically discuss solutions in the open literature to these problems. Chapter 2 lays the foundations of data compression by first introducing the fundamental concept of value locality; it then presents a taxonomy of compression algorithms and shows how previously proposed algorithms fit within that framework. Chapter 3 discusses the different ways cache memory systems can employ compression, focusing on the trade-offs among the latency, capacity, and complexity of alternative ways to compact compressed cache blocks. Chapter 4 discusses the issues in applying data compression to main memory, and Chapter 5 covers techniques for compressing data on the cache-to-memory links. This book should help a skilled memory-system designer understand the fundamental challenges in applying compression to the memory hierarchy and introduces the state-of-the-art techniques for addressing them.
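To make the latency and value-locality points concrete, below is a minimal, hypothetical sketch (not taken from the book) in the spirit of base-delta schemes such as Base-Delta-Immediate compression from the literature, heavily simplified: a 64-byte cache block whose words cluster near a common value is stored as one base word plus small per-word deltas, and decompression is a single add per word, so little latency lands on the critical read path. All names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch of a base-delta cache-block compressor.
# Not the book's algorithm; a simplification of base-delta schemes.
import struct

BLOCK_BYTES = 64
NUM_WORDS = 8  # treat the block as eight 64-bit little-endian words

def compress(block: bytes):
    """Return (base, deltas) if every word is within a signed 1-byte
    delta of the first word; otherwise None (block stays uncompressed)."""
    assert len(block) == BLOCK_BYTES
    words = struct.unpack("<8q", block)
    base = words[0]
    deltas = [w - base for w in words]
    if all(-128 <= d <= 127 for d in deltas):
        # 8-byte base + 8 one-byte deltas: 64 bytes shrink to 16.
        return base, bytes(d & 0xFF for d in deltas)
    return None

def decompress(base: int, deltas: bytes) -> bytes:
    """Rebuild the 64-byte block with one add per word, so very little
    latency is added on the critical read path."""
    words = [base + struct.unpack("<b", deltas[i:i + 1])[0]
             for i in range(NUM_WORDS)]
    return struct.pack("<8q", *words)

# Eight nearby values (e.g., pointers into one structure) compress 4:1.
block = struct.pack("<8q", *[0x7FFF0000 + 8 * i for i in range(8)])
base, deltas = compress(block)
assert decompress(base, deltas) == block
```

Real hardware proposals try several base and delta widths in parallel and fall back to storing the block uncompressed, which is exactly the variable-size problem the synopsis describes.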
"synopsis" may belong to another edition of this title.
Dr. Somayeh Sardashti earned her Ph.D. in Computer Sciences from the University of Wisconsin-Madison. Her research interests include computer systems and architecture, high-performance and energy-optimized memory hierarchies, and exploiting new memory and hardware technologies for high-performance database systems. She currently works in the Exadata Storage Server and Database Machine group at Oracle Corporation. She won the ACM Student Research Competition at the Grace Hopper conference in 2013. She holds an M.S. in Computer Sciences from the University of Wisconsin-Madison, another Master's degree, and a B.S. in computer engineering from the University of Tehran.
"About this title" may belong to another edition of this title.
US$ 3.99 shipping within U.S.A.
Seller: Books From California, Simi Valley, CA, U.S.A.
Paperback. Condition: Very Good. Seller Inventory # mon0003658476
Quantity: 1 available
Seller: BargainBookStores, Grand Rapids, MI, U.S.A.
Paperback or Softback. Condition: New. Seller Inventory # BBS-9783031006234
Quantity: 5 available
Seller: California Books, Miami, FL, U.S.A.
Condition: New. Seller Inventory # I-9783031006234
Quantity: Over 20 available
Seller: Books Puddle, New York, NY, U.S.A.
Condition: New. 1st edition. Seller Inventory # 26395061345
Quantity: 4 available
Seller: Chiron Media, Wallingford, United Kingdom
Paperback. Condition: New. Seller Inventory # 6666-IUK-9783031006234
Quantity: 10 available
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on Demand. Seller Inventory # 402364350
Quantity: 4 available
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Seller Inventory # ria9783031006234_new
Quantity: Over 20 available
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand, so delivery takes 3-4 days longer. New stock. 88 pp. English. Seller Inventory # 9783031006234
Quantity: 2 available
Seller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. PRINT ON DEMAND. Seller Inventory # 18395061355
Quantity: 4 available
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand; new stock printed after ordering. Seller Inventory # 9783031006234
Quantity: 1 available