This is the heart of any server memory-subsystem performance measurement: how much data can be transferred per second. Data bus utilization can be expressed as a rate (MB/s) or as a percentage (data cycles divided by total cycles, or by CKE-qualified cycles), and either form can be broken down per direction (read/write), per channel, per rank, or per bank. On-the-fly (OTF) burst-length switching, where the controller changes between a 4-data-beat and an 8-data-beat burst on a per-command basis, must be handled properly in these calculations.
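As a minimal sketch of the calculation described above: convert burst counts into MB/s, counting OTF bursts at their true length, and compute utilization as data cycles over total cycles. The 64-bit (8-byte) bus width and the counter names are assumptions for illustration; real counts would come from your memory controller's performance counters.

```python
# Sketch: turn DDR burst counts into MB/s and bus utilization.
# Assumption: a 64-bit (8-byte) data bus; counter names are hypothetical.

BUS_WIDTH_BYTES = 8  # 64-bit DDR channel

def bandwidth_mb_per_s(bl8_bursts, bc4_bursts, window_s):
    """MB/s over the sample window, counting OTF bursts at their
    true length: BL8 = 8 data beats, BC4 (burst chop) = 4 data beats."""
    beats = bl8_bursts * 8 + bc4_bursts * 4
    return beats * BUS_WIDTH_BYTES / window_s / 1e6

def bus_utilization(data_cycles, total_cycles):
    """Fraction of bus cycles that actually carried data."""
    return data_cycles / total_cycles

# Example: 1e9 full-length bursts and 2e8 chopped bursts in one second
print(bandwidth_mb_per_s(1e9, 2e8, 1.0))  # 70400.0 MB/s
print(bus_utilization(4.4e9, 8e9))        # 0.55
```

Treating every burst as 8 beats when OTF chopping is active would overstate the bandwidth, which is why the two burst lengths are tallied separately.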
Why measure this?
- To compare memory performance across systems.
- To verify that the traffic matches what you would expect from the software you are running, or, when running a memory test, to confirm the system is actually being stressed.
- To separate read performance from write performance and help optimize software.
- To compare memory controller/DRAM designs and see which runs faster on actual hardware.
The chart below shows the MB/s contribution of reads and writes for each bank in the system. If a chipkill event or a page retirement occurred due to memory errors, this type of analysis would show the resulting reallocation of memory traffic and the degradation of performance.
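A per-bank breakdown like the one charted can be produced by rolling raw byte counts up by (channel, rank, bank, direction). This is a hypothetical sketch; the sample data and key layout are invented for illustration.

```python
# Sketch: aggregate per-(channel, rank, bank, direction) byte counts
# into MB/s over a sample window. Sample values are hypothetical.
from collections import defaultdict

def per_bank_mb_per_s(samples, window_s):
    """samples: iterable of (channel, rank, bank, direction, nbytes)."""
    totals = defaultdict(float)
    for ch, rank, bank, direction, nbytes in samples:
        totals[(ch, rank, bank, direction)] += nbytes / window_s / 1e6
    return dict(totals)

samples = [
    (0, 0, 0, "read",  3_200_000_000),  # 3.2 GB read on bank 0
    (0, 0, 0, "write",   800_000_000),  # 0.8 GB written on bank 0
    (0, 0, 1, "read",  3_100_000_000),  # 3.1 GB read on bank 1
]
print(per_bank_mb_per_s(samples, 1.0))
# {(0, 0, 0, 'read'): 3200.0, (0, 0, 0, 'write'): 800.0, (0, 0, 1, 'read'): 3100.0}
```

A sudden drop to zero for one bank, with traffic shifting to its neighbors, is the reallocation signature this kind of breakdown would reveal.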
MB/s is an important metric, but it only matters if you are pushing large amounts of data across the bus at a sustained rate, and in reality that rarely happens. Most applications are bursty in nature and do not need large amounts of information streamed to and from the processor constantly. This is why the next performance metric, latency, is important. We will cover that metric in our next blog post.