Auditing Login Events with Service Broker

Perhaps one of the most important aspects of running a secure SQL Server is auditing login events: capturing who's logging in, and who's attempting to log in. Knowing both can help you troubleshoot a variety of problems.

Tintoretto, Finding the body of St. Mark

SQL Server provides a couple of built-in methods for auditing login events, covering both successful and failed attempts:

  1. Logging login events to the SQL Server Error Log, via the Server Properties dialog box. To get there, right-click the server name in Object Explorer inside SQL Server Management Studio, and open the Security tab, where you'll find a section for Login Auditing:

    Server Properties Dialog Box showing Security Tab

  2. Security Audits, via the Security -> Audits -> New Audit… menu in SQL Server Management Studio. This option allows you to log events to the Windows Security Event Log or to a binary audit file on disk. See the Microsoft Docs for details, or the T-SQL sketch just after this list.

    New Audit menu in SQL Server Management Studio
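If you'd rather script that second option than click through the dialogs, the T-SQL equivalent looks roughly like this; the audit name and file path below are placeholders of mine, not anything SQL Server prescribes:

```sql
USE master;
GO

-- Server audit writing to a binary file target (could also be SECURITY_LOG or APPLICATION_LOG).
CREATE SERVER AUDIT login_audit
    TO FILE (FILEPATH = N'C:\SQLAudit\');
GO

-- Capture both successful and failed logins.
CREATE SERVER AUDIT SPECIFICATION login_audit_spec
    FOR SERVER AUDIT login_audit
    ADD (SUCCESSFUL_LOGIN_GROUP),
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
GO

-- Audits are created disabled; turn it on.
ALTER SERVER AUDIT login_audit WITH (STATE = ON);
GO
```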

Logging to the SQL Server Error Log or the Windows Security Event Log means you'll need some kind of tool to slice-and-dice the data, postmortem. It's difficult to respond to events as they happen with this kind of auditing, and hard to write simple T-SQL queries to inspect the data. You could create a logon trigger at the server level, but that only lets you capture successful logins. Coding the trigger incorrectly can result in everyone being locked out of the server. You'll need to use the Dedicated Administrator Connection, otherwise known as the DAC, to log in to the server and disable the errant trigger. Not fun.
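For what it's worth, if you ever do end up behind a misbehaving logon trigger, the escape hatch looks something like this (the trigger name here is hypothetical):

```sql
-- Connect using the Dedicated Administrator Connection first, e.g.:
--   sqlcmd -S YourServer -A
-- Then disable the offending server-level logon trigger:
DISABLE TRIGGER [errant_logon_trigger] ON ALL SERVER;
```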

Luckily, there is a much better option: using SQL Server's built-in Event Notifications feature to receive login events through Service Broker. This event stream is asynchronous to the login process, meaning it won't interrupt or slow down logins, and it allows you to capture both successful and failed logins to a table, either locally or remotely. For larger SQL Server infrastructures, it's not uncommon to set up a single SQL Server instance to gather this information for central analysis.

This blog post shows how to set up a database locally for auditing login events via SQL Server Event Notifications and Service Broker.

Set Up the Database, First!

First, we need to create a database to store the login data. I like to keep this in its own database so the data can be managed separately from everything else on the instance.
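Something along these lines will do the job (a minimal sketch; adjust the names and options to suit your environment):

```sql
USE master;
GO

-- Create the auditing database using the instance's default data and log file locations.
CREATE DATABASE login_auditing;
GO

-- Enable Service Broker and switch to the simple recovery model.
ALTER DATABASE login_auditing SET ENABLE_BROKER;
ALTER DATABASE login_auditing SET RECOVERY SIMPLE;
GO

-- Throwaway backup to the NUL: device so the recovery model change fully takes effect.
BACKUP DATABASE login_auditing TO DISK = N'NUL:';
GO
```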

The code above creates the login_auditing database in the default locations for data and log files. You've configured those correctly on your instance, right? You could modify the statements above to point the data and log files to locations of your choosing, but I'll leave that as an exercise for the reader. Service Broker is enabled in the database via the ALTER DATABASE ... SET ENABLE_BROKER; statement. We're setting the recovery model to simple since we don't really care about point-in-time recovery for this database. Even if we lost the entire database, it wouldn't be the end of the world, unless you're required to audit logins by some law or regulation. In that case, you might want to use the full recovery model and take backups that allow you to meet your Recovery Point and Recovery Time Objectives. The last statement takes a backup of the database to the NUL: device to ensure the recovery model change has taken effect. It's just a habit I have.

The following code sets up several tables we’ll use to capture login details:
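(The exact definitions are up to you; here's a minimal sketch using two tables, dbo.successful_logins and dbo.failed_logins, whose names and columns are illustrative.)

```sql
USE login_auditing;
GO

-- Successful logins (AUDIT_LOGIN events).
CREATE TABLE dbo.successful_logins
(
    id          BIGINT IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    event_time  DATETIME      NOT NULL,
    login_name  NVARCHAR(256) NOT NULL,
    host_name   NVARCHAR(256) NULL,
    event_data  XML           NULL    -- full EVENT_INSTANCE document, for anything not shredded out
);

-- Failed logins (AUDIT_LOGIN_FAILED events).
CREATE TABLE dbo.failed_logins
(
    id          BIGINT IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    event_time  DATETIME      NOT NULL,
    login_name  NVARCHAR(256) NOT NULL,
    host_name   NVARCHAR(256) NULL,
    event_data  XML           NULL
);
GO
```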

There are two discrete tables for auditing login events. This makes it easy to look at the failed attempts without needing to wade through the far larger list of successful logins. Having two tables also allows two different sets of statistics objects, customized for each table. Since we may be capturing a very large number of events (thanks, SharePoint!), having up to 400 histogram steps across the two tables, instead of the 200 a single statistics object tops out at, might be A Good Thing™. I'm all about getting good performance. You can thank me later. 😉

Service Broker Infrastructure

This next bit of code creates the required Service Broker infrastructure. We need a QUEUE, a SERVICE, a ROUTE, and an EVENT NOTIFICATION. If the objects exist, this code will drop ’em first:
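(The object names q_login_events, svc_login_events, rt_login_events, and en_login_events are just the illustrative ones I'm using here; pick whatever naming convention you prefer.)

```sql
USE login_auditing;
GO

-- Drop existing objects, in dependency order, if they're already there.
IF EXISTS (SELECT 1 FROM sys.server_event_notifications WHERE name = N'en_login_events')
    DROP EVENT NOTIFICATION en_login_events ON SERVER;
IF EXISTS (SELECT 1 FROM sys.routes WHERE name = N'rt_login_events')
    DROP ROUTE rt_login_events;
IF EXISTS (SELECT 1 FROM sys.services WHERE name = N'svc_login_events')
    DROP SERVICE svc_login_events;
IF EXISTS (SELECT 1 FROM sys.service_queues WHERE name = N'q_login_events')
    DROP QUEUE dbo.q_login_events;
GO

-- Queue that will hold the incoming event notification messages.
CREATE QUEUE dbo.q_login_events;

-- Service bound to the queue, using the contract event notifications require.
CREATE SERVICE svc_login_events
    ON QUEUE dbo.q_login_events
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

-- Local route so messages addressed to the service are delivered inside this instance.
CREATE ROUTE rt_login_events
    WITH SERVICE_NAME = N'svc_login_events',
         ADDRESS = N'LOCAL';
GO

-- Server-level event notification for both successful and failed logins,
-- delivered to the service in the current (login_auditing) database.
CREATE EVENT NOTIFICATION en_login_events
    ON SERVER
    FOR AUDIT_LOGIN, AUDIT_LOGIN_FAILED
    TO SERVICE 'svc_login_events', 'current database';
GO
```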

Activation Stored Procedure

Next, we'll need a stored procedure for Service Broker to call whenever a message arrives on the queue. This stored procedure parses the Event Notification XML data into discrete variables that are then inserted into the relevant tables:
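(A sketch only; the procedure name and the columns shredded from the XML are illustrative, and poison-message and end-dialog handling are left out for brevity.)

```sql
USE login_auditing;
GO

CREATE PROCEDURE dbo.process_login_events
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @message_type sysname,
            @message_body XML,
            @event_type   NVARCHAR(128),
            @post_time    DATETIME,
            @login_name   NVARCHAR(256),
            @client_host  NVARCHAR(256);

    WHILE 1 = 1
    BEGIN
        -- Wait up to 5 seconds for the next message; exit when the queue is drained.
        WAITFOR (
            RECEIVE TOP (1)
                @message_type = message_type_name,
                @message_body = CAST(message_body AS XML)
            FROM dbo.q_login_events
        ), TIMEOUT 5000;

        IF @@ROWCOUNT = 0
            BREAK;

        IF @message_type = N'http://schemas.microsoft.com/SQL/Notifications/EventNotification'
        BEGIN
            -- Shred the interesting pieces out of the EVENT_INSTANCE document.
            SELECT @event_type  = @message_body.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
                   @post_time   = @message_body.value('(/EVENT_INSTANCE/PostTime)[1]',  'datetime'),
                   @login_name  = @message_body.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(256)'),
                   @client_host = @message_body.value('(/EVENT_INSTANCE/HostName)[1]',  'nvarchar(256)');

            IF @event_type = N'AUDIT_LOGIN'
                INSERT INTO dbo.successful_logins (event_time, login_name, host_name, event_data)
                VALUES (@post_time, @login_name, @client_host, @message_body);
            ELSE IF @event_type = N'AUDIT_LOGIN_FAILED'
                INSERT INTO dbo.failed_logins (event_time, login_name, host_name, event_data)
                VALUES (@post_time, @login_name, @client_host, @message_body);
        END;
    END;
END;
GO
```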

Finally, the following code configures the queue to allow a maximum of 4 simultaneous activation readers, and activates the queue. I have RETENTION = OFF because I don't want the queue to retain messages after they've been processed. I have MAX_QUEUE_READERS set to 4 on some busy instances; you may want to move this up or down depending on how busy your server is.
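(Using the illustrative queue and procedure names from above; swap in your own.)

```sql
ALTER QUEUE dbo.q_login_events
WITH STATUS = ON,
     RETENTION = OFF,
     ACTIVATION (
         STATUS = ON,
         PROCEDURE_NAME = dbo.process_login_events,
         MAX_QUEUE_READERS = 4,
         EXECUTE AS OWNER
     );
```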

Once you have the Event Notification and Service Broker infrastructure in place, both successful and failed logins will be logged.
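A quick way to confirm the queue is enabled and activating (again with the illustrative names used above):

```sql
-- Check the queue's receive and activation settings.
SELECT name,
       is_receive_enabled,
       is_activation_enabled,
       activation_procedure,
       max_readers
FROM login_auditing.sys.service_queues
WHERE name = N'q_login_events';
```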

If you need to turn the queue off for some reason, use the following code:
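(Same caveat about the queue name applies.)

```sql
-- Disable the queue and stop activating the processing procedure.
ALTER QUEUE dbo.q_login_events
WITH STATUS = OFF,
     ACTIVATION (STATUS = OFF);
```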

Analysis

Here are a few sample queries to get you started. The first shows the last login date for each login in sys.server_principals; you could use this to identify (and potentially disable) logins that haven't been used in x number of months. The second and third queries show the details of successful logins and failed logins, respectively.
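(These assume the illustrative table names used in the sketches above; adjust them to match your own schema.)

```sql
-- 1. Most recent successful login per server principal (NULL = not seen since auditing began).
SELECT sp.name            AS login_name,
       MAX(sl.event_time) AS last_login_time
FROM sys.server_principals AS sp
LEFT JOIN dbo.successful_logins AS sl
       ON sl.login_name = sp.name
WHERE sp.type IN ('S', 'U', 'G')   -- SQL logins, Windows logins, Windows groups
GROUP BY sp.name
ORDER BY last_login_time;

-- 2. Details of successful logins.
SELECT event_time, login_name, host_name
FROM dbo.successful_logins
ORDER BY event_time DESC;

-- 3. Details of failed logins.
SELECT event_time, login_name, host_name
FROM dbo.failed_logins
ORDER BY event_time DESC;
```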

Let me know what you think of this methodology, or if you see some potential pitfall not covered herein.

See Service Broker Queue Monitor for code to create a console app for monitoring SQL Server Service Broker Queues.

Check out the rest of our security-related posts!