Detect Databases with High VLF Count


Broadly speaking, SQL Server's recovery process consists of reading the transaction log since the last checkpoint, rolling forward committed transactions, and rolling back uncommitted ones. The transaction log is segmented into a varying number of virtual log files, or VLFs. Reading the log in preparation for recovery is time-consuming on databases with high VLF counts. This post provides a T-SQL script for detecting databases with more than 2,000 VLFs, and shows how to automate that script with SQL Server Agent for ongoing monitoring.

   The USS Monitor – not the kind of monitoring in this post,
      but still interesting enough!

One-off Script

First, the script itself. Use it on an ad-hoc basis to check for databases with more than 2,000 VLFs whenever you suspect you have misconfigured databases.

The script runs on SQL Server 2008 and higher, and is compatible with case-sensitive servers. For SQL Server 2016 SP2+, the script uses the sys.dm_db_log_info Dynamic Management Function. On earlier versions of SQL Server, the script uses the DBCC LOGINFO database console command.
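The author's exact script isn't reproduced here, but a minimal sketch of the SQL Server 2016 SP2+ branch, using sys.dm_db_log_info and the 2,000-VLF threshold, might look like this (column aliases match the sample output below; on earlier versions you would loop over databases with DBCC LOGINFO instead):

```sql
-- Sketch: list databases with more than 2,000 VLFs (SQL Server 2016 SP2+ only).
-- Uses the sys.dm_db_log_info DMF; substitute DBCC LOGINFO on older versions.
SELECT
    @@SERVERNAME                                AS ServerName,
    d.name                                      AS DatabaseName,
    COUNT(*)                                    AS VLFCount,
    CAST(AVG(li.vlf_size_mb) AS DECIMAL(10, 2)) AS AverageVLFSizeMB,
    CAST(MIN(li.vlf_size_mb) AS DECIMAL(10, 2)) AS MinVLFSizeMB,
    CAST(MAX(li.vlf_size_mb) AS DECIMAL(10, 2)) AS MaxVLFSizeMB
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
HAVING COUNT(*) > 2000
ORDER BY COUNT(*) DESC;
```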

Output for the above script will look like this:

║ ServerName ║ DatabaseName ║ VLFCount ║ AverageVLFSizeMB ║ MinVLFSizeMB ║ MaxVLFSizeMB ║
║ MyServer   ║ vlf_test     ║     2970 ║             9.91 ║         2.44 ║        10.00 ║

SQL Server Agent Job Containing the Above Script

The following script incorporates the above script into a SQL Server Agent job scheduled to run once daily. This job sends an email listing any databases with more than 2,000 VLFs. Edit the script to include a valid email address and Database Mail Profile Name.
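The full job script isn't shown here; a trimmed sketch of what it would contain follows. The job name, schedule time, mail profile, and recipient address are all placeholders, not the author's values, and the embedded check assumes the 2016 SP2+ sys.dm_db_log_info branch:

```sql
-- Sketch: daily SQL Server Agent job that emails any databases with
-- more than 2,000 VLFs. Replace the mail profile and recipient
-- placeholders with real values before running.
USE msdb;
GO

EXEC dbo.sp_add_job
    @job_name = N'Monitor - High VLF Count';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Monitor - High VLF Count',
    @step_name = N'Check VLF counts and email results',
    @subsystem = N'TSQL',
    @command   = N'
IF EXISTS (
    SELECT 1
    FROM sys.databases AS d
    CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
    GROUP BY d.name
    HAVING COUNT(*) > 2000
)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N''YourMailProfile'',   -- placeholder
        @recipients   = N''dba@example.com'',   -- placeholder
        @subject      = N''Databases with more than 2,000 VLFs'',
        @query        = N''SELECT d.name AS DatabaseName, COUNT(*) AS VLFCount
                           FROM sys.databases AS d
                           CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
                           GROUP BY d.name
                           HAVING COUNT(*) > 2000;'';
END';

EXEC dbo.sp_add_schedule
    @schedule_name     = N'Daily at 07:00',
    @freq_type         = 4,     -- daily
    @freq_interval     = 1,
    @active_start_time = 070000;

EXEC dbo.sp_attach_schedule
    @job_name      = N'Monitor - High VLF Count',
    @schedule_name = N'Daily at 07:00';

EXEC dbo.sp_add_jobserver
    @job_name = N'Monitor - High VLF Count';
```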

This post is part of our series on recovery and SQL Server Tools.

If you found this post useful, please consider donating a small amount to help keep the lights on and the site running.