Page Life Expectancy and Data Throughput

Page Life Expectancy, or PLE as it’s often called, is a measure of how long the average page of data remains in the buffer pool before being evicted. PLE is measured in seconds. Once you drop below a certain threshold, lower PLE generally means lower performance. What is that threshold? An often-repeated number is 300 seconds, but I prefer a minimum of 300 seconds per 4GB of memory in the buffer pool.

Narcissus Gazing at his Reflection by Dirck van Baburen. Narcissus may not have had a very long life expectancy, but he looked good!

If the PLE is 300 and you’ve got 4GB of memory, you might not have a problem: data is only being replaced at a rate of 4GB every 5 minutes, or around 14 MB per second. Most modern disk subsystems will have no problem keeping up. However, if you have a PLE of 300 with a 40GB buffer pool, all of a sudden you’re moving 140 MB per second, and let’s not forget that’s an average. 140 MB per second might not sound like much, but when you consider that it’s sustained around the clock, it becomes easy to see that the disk subsystem could use a rest!
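The arithmetic above is just buffer pool size divided by PLE, which you can sanity-check with a standalone calculation (no server state involved):

```sql
-- Implied average throughput = buffer pool size (MB) / PLE (seconds)
SELECT (4  * 1024.0) / 300 AS MB_per_sec_4GB,   -- roughly 14 MB/s
       (40 * 1024.0) / 300 AS MB_per_sec_40GB;  -- roughly 140 MB/s
```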

The script below offers a quick and easy way to see the current Page Life Expectancy along with a quick calculation showing the average disk throughput implied by the PLE.
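The original script isn’t reproduced here, but a minimal sketch that would produce output of the shape shown below — assuming the standard Buffer Node counters exposed by `sys.dm_os_performance_counters`, where `Database pages` is a count of 8KB pages — looks like this:

```sql
-- Sketch only: PLE and implied disk throughput per NUMA node,
-- derived from the Buffer Node performance counters.
SELECT
    ple.instance_name                          AS [Node],
    dp.cntr_value * 8 / 1024.0                 AS BufferSize,     -- 8KB pages -> MB
    ple.cntr_value                             AS PLE,            -- seconds
    (dp.cntr_value * 8 / 1024.0)
        / NULLIF(ple.cntr_value, 0)            AS MB_Throughput   -- MB per second
FROM sys.dm_os_performance_counters AS ple
JOIN sys.dm_os_performance_counters AS dp
    ON dp.instance_name = ple.instance_name
WHERE ple.object_name LIKE '%Buffer Node%'
  AND ple.counter_name = 'Page life expectancy'
  AND dp.object_name  LIKE '%Buffer Node%'
  AND dp.counter_name = 'Database pages';
```

On a single-node server this returns one row for node `000`; on a multi-socket NUMA server you’ll see one row per node.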

In the results, PLE is broken out by NUMA node. The results look like this:

║ Node ║  BufferSize   ║ PLE ║  MB_Throughput  ║
║  000 ║ 47891.4765625 ║ 794 ║ 60.316721111461 ║

If PLE is too low, use the output from the script to justify a budget request for more server memory. However, if PLE is ludicrously high, you may want to free up some memory for another process to use.

Let me know if you have any questions about my method or this post. And don’t forget to check out the rest of our posts on performance.