An important change to Extended Events in SQL Server 2012

November 6, 2012 by Aaron Bertrand in Extended Events

As you have most certainly heard elsewhere, SQL Server 2012 finally offers a version of Extended Events that is a viable alternative to SQL Trace, in terms of both better performance and event parity. There are other enhancements such as a usable UI in Management Studio – previously your only hope for this was Jonathan Kehayias' Extended Events Manager. There is also a great change related to permissions: in SQL Server 2012 you only need ALTER ANY EVENT SESSION to create and manage Extended Event sessions (previously you needed CONTROL SERVER).
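
For illustration, granting that narrower permission to a login looks like this ([XEventAdmin] is just a placeholder name):

-- SQL Server 2012: this permission alone is enough to create and manage
-- event sessions (no more CONTROL SERVER required):
GRANT ALTER ANY EVENT SESSION TO [XEventAdmin];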

I came across a more subtle behavior change recently that made it look like my event session was dropping events. The change itself is not a secret, and in fact even after reading or hearing about this change multiple times (Jonathan reminded me that he told me about this change too), I still missed it in my initial troubleshooting since, at the time, it wasn't a change that I thought would affect me. Lo and behold…

TL;DR Version

In SQL Server 2012, your event session will only capture 1,000 events by default if it uses the ring_buffer target (and 10,000 for pair_matching). This is a change from 2008 / 2008 R2, where it was limited only by memory. (The change is mentioned almost in a footnote here, back in July 2011.) To override the default, you can use the MAX_EVENTS_LIMIT setting – but note that this setting will not be recognized by SQL Server 2008 / 2008 R2, so if you have code that needs to work against multiple versions, you'll need to use a conditional.
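
If you want to confirm the defaults on your own instance, the ring_buffer target's customizable attributes (and their default values) are exposed in the metadata; a quick check could look like this:

-- List the ring_buffer target's customizable attributes and their defaults.
-- On SQL Server 2012 the events limit should show a default of 1,000;
-- on 2008 / 2008 R2 that attribute won't appear at all.
SELECT oc.name, oc.column_value, oc.description
FROM sys.dm_xe_object_columns AS oc
WHERE oc.[object_name] = N'ring_buffer'
AND oc.column_type = 'customizable';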

More Details

The scenario I was working through was more complex than this, but to demonstrate this issue, let's assume a very simple use case for Extended Events: tracking who is modifying objects. There is a handy facility for this: object_altered. We can see the description for this event from the following query:

SELECT description FROM sys.dm_xe_objects WHERE name = 'object_altered';
Occurs when an object was altered by the ALTER statement. This event is raised two times for every ALTER operation. The event is raised when the operation begins and when the operation is either rolled back or committed. Add the nt_username or server_principal_name actions to this event to determine who altered the object.
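
The query later in this post relies on the event's object_name and ddl_phase data columns; you can list everything the event exposes from the same metadata view:

-- Data columns available on the object_altered event:
SELECT name, description
FROM sys.dm_xe_object_columns
WHERE [object_name] = N'object_altered'
AND column_type = 'data';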

So, if an object is modified, say, 20 times, I would expect to pull 40 events. And this is exactly what happens in SQL Server 2008, 2008 R2 and 2012. The challenge comes when more than 500 modifications happen (leading to more than 1,000 events). In SQL Server 2008 and 2008 R2, we still capture all events. But SQL Server 2012 will drop some due to a change in the ring_buffer target. To demonstrate, let's build a quick sample event session that trades performance for prevention of losing events (note that this is not the set of options I would prescribe for any production system):

USE master;
GO
CREATE EVENT SESSION [XE_Alter] ON SERVER
ADD EVENT  sqlserver.object_altered
(
    ACTION (sqlserver.server_principal_name)
    WHERE  (sqlserver.session_id = 78) -- change 78 to your current spid
)
ADD TARGET package0.ring_buffer (SET MAX_MEMORY = 4096)
WITH (EVENT_RETENTION_MODE = NO_EVENT_LOSS, MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [XE_Alter] ON SERVER STATE = START;
GO

With the session started, run the following script in the same window; it creates two procedures, then alters each of them 275 times in a loop.

CREATE PROCEDURE dbo.foo_x AS SELECT 1;
GO

CREATE PROCEDURE dbo.foo_y AS SELECT 1;
GO

ALTER PROCEDURE dbo.foo_x AS SELECT 2;
GO 275

ALTER PROCEDURE dbo.foo_y AS SELECT 2;
GO 275

DROP PROCEDURE dbo.foo_x, dbo.foo_y;
GO

Now, let's pull the object name and the number of times each object was modified from the target, and then drop the event session (be patient; on my system, this consistently takes about 40 seconds):

;WITH raw_data(t) AS
(
  SELECT CONVERT(XML, target_data)
  FROM sys.dm_xe_sessions AS s
  INNER JOIN sys.dm_xe_session_targets AS st
  ON s.[address] = st.event_session_address
  WHERE s.name = 'XE_Alter'
  AND st.target_name = 'ring_buffer'
),
xml_data (ed) AS
(
  SELECT e.query('.') 
  FROM raw_data 
  CROSS APPLY t.nodes('RingBufferTarget/event') AS x(e)
)
SELECT [object_name] = obj, event_count = COUNT(*)
FROM
(
  SELECT
    --[login] = ed.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)'),
    obj   = ed.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)'),
    phase = ed.value('(event/data[@name="ddl_phase"]/text)[1]',    'nvarchar(128)')
  FROM xml_data
) AS x
WHERE phase = 'Commit'
GROUP BY obj;
GO

DROP EVENT SESSION [XE_Alter] ON SERVER;
GO

Results (which ignore exactly half of the 1,000 captured events, focusing on Commit events only):

object_name   event_count
===========   ===========
foo_x         225
foo_y         275

This shows that 50 commit events (100 events total) were dropped for foo_x, and exactly 1,000 total events have been collected ((225 + 275) * 2). SQL Server seems to arbitrarily decide which events to drop – in theory, if it were collecting 1,000 events and then stopping, I should have 275 commit events for foo_x, and 225 for foo_y, since I altered foo_x first, and I shouldn't have hit the cap until after that loop was completed. But obviously there are some other mechanics at play here in how XEvents decides which events to keep and which events to throw away.
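
As an aside, the ring_buffer's target_data also exposes counters as attributes on the root RingBufferTarget node, so you can compare how many events the target processed with how many it is still holding. Something like the following (run it before dropping the session; I'm assuming the totalEventsProcessed and eventCount attribute names here, so verify them against your build):

-- Compare events processed by the target with events still held in the buffer
-- (run while the XE_Alter session still exists):
SELECT
  events_processed = x.t.value('(/RingBufferTarget/@totalEventsProcessed)[1]', 'int'),
  events_in_buffer = x.t.value('(/RingBufferTarget/@eventCount)[1]', 'int')
FROM
(
  SELECT CONVERT(XML, st.target_data) AS t
  FROM sys.dm_xe_sessions AS s
  INNER JOIN sys.dm_xe_session_targets AS st
  ON s.[address] = st.event_session_address
  WHERE s.name = 'XE_Alter'
  AND st.target_name = 'ring_buffer'
) AS x;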

In any case, you can get around this by specifying a different value for MAX_EVENTS_LIMIT in the ADD TARGET portion of the code:

-- ...
ADD TARGET package0.ring_buffer (SET MAX_MEMORY = 4096, MAX_EVENTS_LIMIT = 0)
------------------------------------------------------^^^^^^^^^^^^^^^^^^^^^^
-- ...

Note that 0 = unlimited, but you can specify any integer value. When we run our test above with the new setting, we see more accurate results, since no events were dropped:

object_name   event_count
===========   ===========
foo_x         275
foo_y         275

As mentioned above, if you attempt to use this property when creating an event session against SQL Server 2008 / 2008 R2, you will get this error:

Msg 25629, Level 16, State 1, Line 1
For target, "package0.ring_buffer", the customizable attribute, "MAX_EVENTS_LIMIT", does not exist.

So if you are doing any kind of code generation and want consistent behavior across versions, you'll have to check the version first, and only include the attribute for 2012 and above.
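
One way to do that is to build the CREATE EVENT SESSION statement dynamically and only append the attribute when the major version is 11 or higher. A rough sketch, mirroring the session above (minus the spid filter):

-- Build the session DDL dynamically; append MAX_EVENTS_LIMIT only on 2012+.
DECLARE @sql nvarchar(max) = N'CREATE EVENT SESSION [XE_Alter] ON SERVER
ADD EVENT sqlserver.object_altered
(
    ACTION (sqlserver.server_principal_name)
)
ADD TARGET package0.ring_buffer (SET MAX_MEMORY = 4096';

IF CONVERT(int, PARSENAME(CONVERT(nvarchar(128),
     SERVERPROPERTY('ProductVersion')), 4)) >= 11
BEGIN
  SET @sql = @sql + N', MAX_EVENTS_LIMIT = 0';
END

SET @sql = @sql + N')
WITH (EVENT_RETENTION_MODE = NO_EVENT_LOSS, MAX_DISPATCH_LATENCY = 5 SECONDS);';

EXEC sys.sp_executesql @sql;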

Conclusion

If you are upgrading from SQL Server 2008 / 2008 R2 to 2012, or have written Extended Events code that targets multiple versions, you should be aware of this behavior change and code accordingly. Otherwise you risk dropping events, even in situations where you would assume – and where previous behavior would imply – that dropped events were not possible. This isn't something tools like the Upgrade Advisor or Best Practices Analyzer are going to point out for you.

The underlying mechanics surrounding this problem are described in detail in this bug report and this blog post.