When I saw the Halloween Problem for the first time, I was trying to optimize the UPDATE statement in an upsert proc. I suspected many of the changes to the table were redundant; we were setting the value equal to its current value, but still doing all the work of a write to change nothing. The original query was something like this:

/* Original update */
UPDATE ex
SET
	ex.value = CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END
FROM dbo.Example ex
INNER JOIN @tvp tvp
	ON tvp.AccountID = ex.AccountID
	AND tvp.ProductID = ex.ProductID
	AND tvp.GroupID = ex.GroupID
WHERE
	ex.AccountID = @AccountID
	AND ex.ProductID = @ProductID;

So, I added a WHERE clause to prevent us from updating a row unless we were actually changing the data. Since this meant we would write fewer rows, I expected the query to run faster. Instead, the duration and CPU usage increased significantly.

/* WHERE clause changed */
UPDATE ex
SET
	ex.value = CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END
FROM dbo.Example ex
INNER JOIN @tvp tvp
	ON tvp.AccountID = ex.AccountID
	AND tvp.ProductID = ex.ProductID
	AND tvp.GroupID = ex.GroupID
WHERE
	ex.AccountID = @AccountID
	AND ex.ProductID = @ProductID
	AND ex.value <> CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END;

After discussing it with a colleague, we suspected the Halloween Problem. So, I read up on the subject and tried to come up with a different optimization for the statement.

I was interested in the idea that the SQL Server optimizer was trying to separate the read phase of the query from the write phase. The eager spool gives SQL Server a complete list of rows it needs to update, preventing the Halloween problem. But the protections made the query take longer.

Description

Well, if SQL Server is trying to separate the read from the write, why don’t I just do that myself? I had the idea to read the data I needed with an INSERT…SELECT statement, writing into a memory-optimized table variable (motv). I could make sure the read included all the columns I needed to calculate the new value, including the CASE statement I had in the SET clause of the UPDATE.

I thought of this as Manual Halloween protections but later found Paul White had coined the term about 5 years earlier (look near the bottom of that article).

Why a motv and not a temp table? I had previously found that using a temp table in this procedure, which ran hundreds of millions of times per day in our environment, caused a lot of tempdb contention. A regular table variable would have a similar effect, but a memory-optimized one would not.
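For reference, a motv needs a memory-optimized table type defined ahead of time. A minimal sketch is below; the type name, column types, and index are my assumptions for illustration, and the database needs a MEMORY_OPTIMIZED_DATA filegroup configured.

/* Hypothetical table type for the motv */
CREATE TYPE dbo.ExampleMotv AS TABLE
(
	AccountID INT NOT NULL,
	ProductID INT NOT NULL,
	GroupID INT NOT NULL,
	value INT NULL,
	new_value INT NULL,
	INDEX ix_motv NONCLUSTERED (AccountID, ProductID, GroupID)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

/* Declared inside the procedure body */
DECLARE @motv dbo.ExampleMotv;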

This was an upsert procedure, so we try to update all the rows that correspond to the TVP passed in, and we insert any rows that don’t exist. Originally, the procedure ran the UPDATE and the INSERT each time, but we had already found that we inserted no records the vast majority of the time.

Example

Once we query the data into the motv, we can use it to decide whether we need to UPDATE or INSERT at all.

/* Manual Halloween; populating the motv */
INSERT INTO @motv
SELECT
	tvp.AccountID,
	tvp.ProductID,
	tvp.GroupID,
	ex.value,
	CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END
		AS new_value
FROM @tvp tvp
LEFT LOOP JOIN dbo.Example ex
	ON ex.AccountID = tvp.AccountID
	AND ex.ProductID = tvp.ProductID
	AND ex.GroupID = tvp.GroupID
WHERE
	tvp.AccountID = @AccountID
	AND tvp.ProductID = @ProductID;

I wrote this with a specific join order in mind and used the LOOP JOIN hint to fix the join order as written, while ensuring we didn’t use a different join type. The table-valued parameter (tvp) input is very likely to have only a few rows in it; fewer than 5 in almost all cases.

I used a LEFT join as well to account for the possibility that the row isn’t present in the underlying table. If that row doesn’t exist, the ex.value written into the motv will be NULL, and that will indicate we need to insert this row.

But first, let’s look at the update:

/* Manual Halloween; check, maybe UPDATE the base table */
IF EXISTS(
	SELECT 1 
	FROM @motv motv
	WHERE
		motv.value <> motv.new_value
)
BEGIN
	UPDATE ex
	SET
		ex.value = motv.new_value
	FROM @motv motv
	LEFT LOOP JOIN dbo.Example ex
		ON ex.AccountID = motv.AccountID
		AND ex.ProductID = motv.ProductID
		AND ex.GroupID = motv.GroupID
	WHERE
		motv.value <> motv.new_value;
END;

In this case, where a given row may need either an UPDATE or an INSERT (but not both), and where we suspect the data may not be changing at all, we don’t necessarily have to run the UPDATE statement. First, we query the motv to see if any rows have a value that has changed. If not, we skip the UPDATE entirely.

And I think I should linger on the WHERE clause. The motv.value could be NULL; how does that comparison work? If you compare NULL to a real value, the result is neither true nor false; it is UNKNOWN. A row evaluating to UNKNOWN won’t cause us to run the UPDATE, which is the correct behavior; that row needs an INSERT instead.
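A quick way to see that behavior outside the procedure (a standalone snippet, not part of the upsert):

/* NULL <> 5 evaluates to UNKNOWN, so the CASE falls through to the ELSE branch,
   just as the row would fail the WHERE clause above */
SELECT CASE WHEN NULL <> 5 THEN 'row qualifies for the UPDATE'
	ELSE 'row is filtered out' END AS outcome;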

/* Manual Halloween; check, maybe INSERT the base table */
IF EXISTS(
	SELECT 1 
	FROM @motv motv
	WHERE
		motv.value IS NULL
)
BEGIN
	INSERT INTO dbo.Example
	SELECT
		motv.AccountID,
		motv.ProductID,
		motv.GroupID,
		motv.new_value
	FROM @motv motv
	WHERE
		motv.value IS NULL;
END;

But we only need to INSERT for the rows where that value is NULL.

Takeaways

So, why was this an improvement in this case? There are several points, some I only thought of recently.

  • Using Manual Halloween made it easy to reduce the writes to the underlying table for the UPDATE, saving a lot of effort.
  • The INSERT statement was also skipped the vast majority of the time.
  • Skipping a statement also meant we didn’t fire the associated trigger, and we skipped foreign key validation for that statement, which can be quite expensive.
  • Fewer writes means fewer X locks on the table, making it less likely we could have contention in a very busy object.
  • The separate INSERT…SELECT reads the data from the table using shared (S) locks, since we know we aren’t writing to the table in that statement.
  • The motv uses optimistic concurrency, and we aren’t writing data to disk for that operation.

In this case, the UPDATE statement only executed 4% of the time the procedure was called. For the INSERT, that number was less than 2%. I wasn’t surprised that the INSERT was unnecessary most of the time; you only insert a row once. I was surprised by how often the lack of an UPDATE indicated the data passed in was unchanged. But we knew this activity was customer-driven and had seen data passed in repeatedly in other places.

In other cases, the Manual Halloween approach may be very effective if much of the incoming data is redundant, though redundant data may be a very unusual circumstance. The reduced contention from fewer writes may also have been a big factor in the improvement.

It may also be helpful when there are multiple statements, as in an upsert procedure, and only one is necessary for a given row.

Summary

I have found the Halloween problem fascinating since I was introduced to it, but I’m done with the subject for now. I’ll likely be talking about Query Store next time.

You can follow me on Twitter (@sqljared) and contact me if you have questions. My other social media links are at the top of the page.

Update and Correction:

This blog was originally posted on February 20. Since then, I read other articles that suggested different behavior with the Halloween Problem. I contacted Paul White, who informed me that the WideWorldImporters database uses compatibility level 130 (SQL Server 2016) by default. So, although I tested on a SQL Server 2019 instance, I was probably seeing an issue addressed in later compatibility levels.

I tested again at compatibility level 150 and saw a different execution plan which led to different conclusions.

I’ve left the majority of the post unchanged, but I’m adding an addendum section, and updating the summary and its conclusions. So, make sure you read those sections for the corrections.

Original Post:

I find myself talking about the Halloween Problem a lot and wanted to fill in some more details on the subject. In short, the Halloween Problem is a case where an INSERT/UPDATE/DELETE/MERGE operates on a row more than once, or tries to and fails. In the first recorded case, an UPDATE changed multiple rows in the table more than once.

So let’s take a look at an example using a publicly available database, WideWorldImporters.

A Halloween Problem example

Here’s a simple update procedure. We’re going to update the quantity for an item in the Sales.OrderLines table:

CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateQuantity
	@OrderID INT,
	@StockItemID INT,
	@Quantity INT
	WITH EXECUTE AS OWNER
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.Quantity = @Quantity,
		sol.PickedQuantity = @Quantity
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderID = @OrderID
		AND sol.StockItemID = @StockItemID;
		-- AND sol.Quantity <> @Quantity;
END;
GO

You may notice the commented line. In one description of the Halloween Problem I heard or read, it was suggested that if we try to SET something that is in our WHERE clause, the problem is likely to occur. Or rather, SQL Server will see the possibility of the problem and add protections to our execution plan to prevent it.

First, let’s test without that line, and see what our execution plan tells us.

EXEC Sales.OrderLines_UpdateQuantity
	@OrderID = 5,
	@StockItemID = 155,
	@Quantity = 21;
GO
Note the eager spool

The eager spool between our index reads and clustered index update shows that SQL Server added Halloween protections to prevent the problem. The problem is prevented by separating the read phase of the query from the write phase.

This usually involves a blocking operator. Most often this is an eager spool, but if there is another blocking operator in the plan like a sort or hash match, that blocking operator may remove the need for a separate spool.

The Halloween Problem can occur when a query running in row mode is still reading rows while other rows are being updated and moved within an index. That movement allows the read operation to potentially read an updated row again and operate on it again. The index movement is key in this scenario.

But with a blocking operator between the read operation and the write, we force all the reads to complete first. This gives us a complete, distinct list of rows to be updated (in this example) before we get to the clustered index update, so it isn’t possible to update the same row twice.

So, how does index movement come into play here? We are updating the Quantity and PickedQuantity columns in our UPDATE statement. Both fields are key columns in the only columnstore index on the table, NCCX_Sales_OrderLines.

CREATE NONCLUSTERED COLUMNSTORE INDEX [NCCX_Sales_OrderLines] ON [Sales].[OrderLines]
(
	[OrderID],
	[StockItemID],
	[Description],
	[Quantity],
	[UnitPrice],
	[PickedQuantity]
) WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) ON [USERDATA];
GO

So when we update these columns, the affected rows will move in that index. If the row moves, that means a read operation could continue reading and find the same row again, returning it as a part of its result set a second time.

Interestingly, we aren’t reading from the columnstore index in the plan provided. Since that’s the only index containing these columns as key values, it’s the only index where the rows should move. In this case, our read operators shouldn’t encounter updated rows a second time, since they use the FK_Sales_OrderLines_OrderID (with a key lookup against PK_Sales_OrderLines).

I wonder if SQL Server decided the Halloween protections were needed before it decided which index it would use for the read.

Removing the index

Either way, if we dropped the NCCX_Sales_OrderLines index, we should see a plan without an eager spool between the read operators and the update operator.

IF EXISTS(
	SELECT 1
	FROM sys.indexes si
	WHERE
		si.name = 'NCCX_Sales_OrderLines'
)
BEGIN
	DROP INDEX [NCCX_Sales_OrderLines]
		ON Sales.OrderLines;
END;
GO

With the index removed, let’s look at the new plan.

Unspooled

We’ve lost the extra steps to the left of the clustered index update operator to update the columnstore index, and we have also lost the eager spool between the read operators and the update operator. This shows without the index movement, Halloween protections are no longer needed.

Performance impact of protections

Let’s look at the data from Query Store to see how big the difference is between the two execution plans.

To get the data into cache, I ran a simple query against the same OrderID in Sales.OrderLines before executing the procedure, both before and after the index change (cold cache issues were making a large difference). I also ran the procedure 10 times to try to average out our results in case any odd wait types were seen.
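The numbers below came from Query Store; a query along these lines is one way to pull them (a sketch using the standard Query Store views, with a LIKE filter as a simple way to locate the statement):

/* Sketch of the Query Store lookup; avg_duration and avg_cpu_time are in microseconds */
SELECT
	qt.query_sql_text,
	p.plan_id,
	rs.count_executions,
	rs.avg_duration,
	rs.avg_cpu_time
FROM sys.query_store_query_text qt
INNER JOIN sys.query_store_query q
	ON q.query_text_id = qt.query_text_id
INNER JOIN sys.query_store_plan p
	ON p.query_id = q.query_id
INNER JOIN sys.query_store_runtime_stats rs
	ON rs.plan_id = p.plan_id
WHERE
	qt.query_sql_text LIKE '%UPDATE sol%';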

80 microseconds versus 46 microseconds. Blazing fast in both cases with the data already cached, but the plan with Halloween protections took 74% longer. I’m not sure whether the update to a columnstore index is significantly more expensive than the update to a rowstore index, so perhaps we should test this again without columnstore complicating the issue.

Speaking in general, I would expect a bigger difference in a query affecting more rows. For a query that only returns 3 rows from the first index seek, the delay caused by the spool would be very small. But imagine if we have a query that reads tens or hundreds of thousands of rows before performing its write operation.

Normally such a query would be passing rows it has read up to the join and update operators while it is continuing to read. Those operations would be happening on different threads in parallel.1

If we are being protected from the Halloween Problem, the eager spool will not return any rows to the operations above it (like the clustered index update) until all rows have been read. So the writes cannot start until much later, and the more rows being read the more considerable the delay.

Nonclustered indexes?

If you noticed the “+3 non-clustered indexes” banner in one of the plans above, it indicates the nonclustered indexes that were updated when we updated the clustered index. This is more obvious in Plan Explorer than in the plans as shown in SQL Server Management Studio, so I wanted to point it out in case the visual was confusing to anyone.

But this raises another question. If we are updating those indexes, why don’t they cause the Halloween protections to be used?

That is because the quantity columns are present in those indexes only as included columns. Changes to those columns won’t affect where the row sorts, but the values still need to be updated.
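For illustration (this is not one of the actual WideWorldImporters indexes), an index like the following carries the quantity columns only via INCLUDE, so updating them never moves the row within the index:

/* Hypothetical index with the quantity columns as included columns only */
CREATE INDEX IX_Sales_OrderLines_Illustration
	ON Sales.OrderLines (OrderID)
	INCLUDE (Quantity, PickedQuantity);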

Rowstore testing

So, let’s see how this looks with a rowstore index. Here’s a second procedure, similar to the first but also updating PickingCompletedWhen.

CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateQuantityWhen
	@OrderID INT,
	@StockItemID INT,
	@Quantity INT
	WITH EXECUTE AS OWNER
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.Quantity = @Quantity,
		sol.PickedQuantity = @Quantity,
		sol.PickingCompletedWhen = GETUTCDATE()
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderID = @OrderID
		AND sol.StockItemID = @StockItemID
		AND sol.PickingCompletedWhen < GETUTCDATE();
END;
GO

Initially, no index uses PickingCompletedWhen. So if we execute the procedure as is, we shouldn’t see the tell-tale eager spool.
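If you want to verify that yourself, a quick check against the catalog views (a sketch; any query over sys.index_columns will do) shows which indexes reference the column and whether it is a key or included column:

/* Which indexes on Sales.OrderLines reference PickingCompletedWhen? */
SELECT
	si.name,
	sic.key_ordinal,
	sic.is_included_column
FROM sys.indexes si
INNER JOIN sys.index_columns sic
	ON sic.object_id = si.object_id
	AND sic.index_id = si.index_id
INNER JOIN sys.columns sc
	ON sc.object_id = sic.object_id
	AND sc.column_id = sic.column_id
WHERE
	si.object_id = OBJECT_ID('Sales.OrderLines')
	AND sc.name = 'PickingCompletedWhen';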

EXEC Sales.OrderLines_UpdateQuantityWhen
	@OrderID = 5,
	@StockItemID = 155,
	@Quantity = 21;
GO

This plan is what we’d expect. If we add an index, how does this change the plan, and how does it change the performance?

IF NOT EXISTS(
	SELECT 1
	FROM sys.indexes si
	WHERE
		si.name = 'IX_OrderLines_OrderID_StockItemID_PickingCompletedWhen'
)
BEGIN
	CREATE INDEX IX_OrderLines_OrderID_StockItemID_PickingCompletedWhen
		ON Sales.OrderLines (OrderID, StockItemID, PickingCompletedWhen);
END;
GO

Here, we see the eager spool implementing the Halloween protections again, but between the index seek and the key lookup. Note that the new index is the one we are using for the index seek. The clustered index update now indicates it is updating 4 nonclustered indexes, including the new index.

So, is the performance difference as stark as it was with the columnstore index?

So, 52 µs vs 118 µs. The query took about 127% longer when the Halloween protections were present. That’s more than we saw with the columnstore index, which is surprising. Perhaps it is relevant that we are updating a third column. It almost feels like the observer effect at this scale.

Addendum

So, to correct things here, let’s go back to the first procedure.

CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateQuantity
	@OrderID INT,
	@StockItemID INT,
	@Quantity INT
	WITH EXECUTE AS OWNER
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.Quantity = @Quantity,
		sol.PickedQuantity = @Quantity
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderID = @OrderID
		AND sol.StockItemID = @StockItemID;
		-- AND sol.Quantity <> @Quantity;
END;
GO

If I run this procedure again with the restored database and no other changes besides updating the compatibility level to 150, I see the following execution plan:

So, we have no eager spool, which means the Halloween Problem isn’t a problem now.

Previously, there was a spool between the index seek and lookup and the clustered index update. The only index using any of the updated fields as a key value was the columnstore index. This suggested that the optimizer will use Halloween protections if any index uses the updated fields as a key value because the rows would be moved in that index.

This new plan disproves that because the optimizer no longer uses the protections with the later compatibility level. And the columnstore index (NCCX_Sales_OrderLines) is still present (as you can see if you hover over the clustered index update operator).

As for the second procedure, I see the Halloween protections even without the index I added in my example. Without that index, the query originally used the FK_Sales_OrderLines_OrderID index to seek the rows in question. At the higher compatibility level, the IX_Sales_OrderLines_Perf_20160301_02 index is used, which is keyed on (StockItemID, PickingCompletedWhen).

So, the Halloween protections are used because we read from an index keyed on one of the updated fields, and rows being updated will potentially move in that index.

We’ve seen the Halloween protections when using nonclustered indexes so far, but what if we are using the clustered index for the read?

I wrote a quick procedure to change the OrderLineID, which is the only column in the clustered primary key for this table. And this matches expectations; we see the eager spool between the clustered index seek and the update operator.
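I haven’t reproduced that procedure here, but a sketch of that kind of test looks like this (the name and parameters are my assumptions, not the original code):

/* Sketch of a quick test procedure that updates the clustered key */
CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateOrderLineID
	@OrderLineID INT,
	@NewOrderLineID INT
	WITH EXECUTE AS OWNER
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.OrderLineID = @NewOrderLineID
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderLineID = @OrderLineID;
END;
GO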

Summary

Hopefully, the addendum corrects the matter while keeping things clear. I’m updating one of the bullet points below, as well.

It seems there are only two criteria for the protections against the Halloween Problem to be used for an UPDATE query:

  1. The object being updated must also be in the query.
  2. One of the updated columns must be a key column in the index used for the read portion of the query, so that the rows may move in that index.

For other statements, the setup is more complex. I find the UPDATE statement is the most straightforward example of the Halloween Problem. But you can see the protections in place if you query from a table as part of an INSERT or DELETE (or MERGE) where you change that same table.

And if we see Halloween protections in the plan for a query, we could change the offending index or the query to change the behavior.

Or we could use the manual Halloween technique, which I will discuss next time.

Thanks again to Paul White for pointing out the compatibility level; I doubt that would ever have occurred to me.

Please contact me if you have any questions or comments. I’ve updated my social media links above to include Counter.Social and Mastodon. We’ll see if there is more #sqlfamily activity on those platforms going forward.

Footnotes

1: Not the type of parallelism we typically think of with SQL Server. Parallelism is typically when a given operation, like an index scan, is expected to process many rows, and SQL Server dedicates multiple threads to that operator or group of operators. In this case, I say parallel because different operators (the index seek, nested loops join, and clustered index update) are all processing rows at the same time, one row at a time.

Tempdb contention has long been an issue in SQL Server, and there are many blogs on the subject already. But I wanted to add one more, mainly to highlight the improvements in recent versions of SQL Server.

Tempdb contention is most often discussed in relation to the creation of temp tables (and other objects) in tempdb. If you are experiencing this, you will see PAGELATCH_EX or PAGELATCH_SH waits, frequently with wait resources like 2:1:1 or 2:1:3. This indicates contention in database 2 (tempdb), file 1 (the first data file in tempdb), and one of the PFS, GAM, or SGAM pages (which are pages 1, 2, and 3, respectively). Tempdb files of sufficient size will have additional PFS, GAM, and SGAM pages at higher page numbers, but pages 1 and 3 are the ones most often referenced.
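If you want to check for this on a live system, something like the following is one way to spot it (a sketch using sys.dm_exec_requests; the wait_resource format is database:file:page):

/* Requests currently waiting on page latches in tempdb */
SELECT
	r.session_id,
	r.wait_type,
	r.wait_resource,
	r.wait_time
FROM sys.dm_exec_requests r
WHERE
	r.wait_type LIKE 'PAGELATCH%'
	AND r.wait_resource LIKE '2:%';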

Temp tables aren’t the only objects being created in tempdb. Table variables are as well unless they are memory-optimized. There are also worktables for sorts, spools, and cursors. Hash operations can spill to disk and are written into tempdb. Row versions are written into tempdb for things like read committed snapshot isolation, and triggers make use of row versioning as well. For more details, check out this excellent post by David Pless.

Before recent releases, there were three main suggestions for reducing tempdb contention.

  • Trace flags (1118 and 1117)
  • More tempdb files
  • Create fewer objects in tempdb

Honestly, I don’t think the third was even included in a lot of the blogs on the subject, and it is very important. Many of the actions that use tempdb can’t be avoided, but I tend to use memory-optimized table variables instead of temp tables the vast majority of the time.

In one case a few years ago, I replaced the memory-optimized table variables in one very frequently executed stored procedure with temp tables to see if using temp tables would result in better execution plans. This procedure was executed about 300 million times per day across several SQL Server instances using similar databases, and the procedure used 4 temp tables. The plans didn’t matter; creating 1.2 billion more temp tables per day added far too much tempdb contention.

But the main point of this post is to help everyone catch up on the topic, and see how more recent versions of SQL Server improve on this issue.

Improvements in SQL Server 2016

SQL Server 2016 introduced several improvements that help reduce tempdb contention.

The most obvious is that setup will create multiple tempdb data files by default, one per logical processor up to eight. That bakes in one of the main recommendations for reducing tempdb contention, so it’s a welcome improvement.

There are also behavior changes that build in the effects of trace flags 1117 and 1118. All tempdb data files grow at the same time and by the same amount by default, which removes the need for trace flag 1117. And all tempdb allocations use uniform extents instead of mixed extents, removing the need for trace flag 1118.
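You can confirm the grow-all-files behavior on an instance; the is_autogrow_all_files column (added in SQL Server 2016) shows it, as in this quick check:

/* Does tempdb's filegroup grow all files together? */
SELECT
	fg.name,
	fg.is_autogrow_all_files
FROM tempdb.sys.filegroups fg;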

So, that’s another recommendation for reducing tempdb contention already in place.

Several other changes also improve caching (reducing page latch and metadata contention), reduce the logging for tempdb operations, and reduce the usage of update locks.
For the full list, check here.

Improvements in SQL Server 2019

The big change here is the introduction of memory-optimized tempdb metadata. The documentation here says that this change (which is not enabled by default; you will need to run an ALTER SERVER CONFIGURATION statement and restart) “effectively removes” the bottleneck from tempdb metadata contention.
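Enabling it looks like this (the setting requires a service restart to take effect):

/* Enable memory-optimized tempdb metadata, then restart the instance */
ALTER SERVER CONFIGURATION
	SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

/* Confirm the setting after the restart; returns 1 when enabled */
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');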

However, this post by Marisa Mathews indicates that SQL Server 2019 also removed most of the contention on PFS pages caused by concurrent updates. This was done by allowing those updates to occur under a shared latch (see the “Concurrent PFS updates” entry here).

Tempdb contention seen in sp_WhoIsActive output

One thing I would point out is that it is the metadata being optimized here; the temp tables you create are not memory-optimized and will still be written to storage under tempdb as usual.

Improvements in SQL Server 2022

The post above also indicates that SQL Server 2022 reduces contention on the GAM and SGAM pages by allowing these pages to be updated under a shared latch rather than an exclusive latch.

The issue with the PFS, GAM, and SGAM pages has always been the need for an exclusive latch on those pages when an allocation takes place. If 20 threads are trying to create a temp table, 19 of them get to wait. The suggestion to add more data files to tempdb was a way to get around access to these pages being serialized; adding more files gives you more of these pages to spread the allocation operations across.
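If you do still need to add files, the statement is simple enough; a sketch is below (the file name, path, and sizes are placeholders you would adjust for your environment):

/* Add another tempdb data file to spread allocation pages across more files */
ALTER DATABASE tempdb
ADD FILE (
	NAME = tempdev2,
	FILENAME = 'T:\tempdb\tempdev2.ndf',
	SIZE = 8GB,
	FILEGROWTH = 1GB
);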

In Summary

The gist is that tempdb contention has been nearly eliminated in SQL Server 2022. There are still several other activities that use tempdb, and you may see contention if you have a niche workload or use a lot of worktables.

Hopefully, this post will help you decide if it’s time for an upgrade. If you have been seeing tempdb contention on these common pages, the latest release should be a major improvement.

Feel free to contact me with any questions or let me know of any suggestions you may have for a post.

I wanted to point out a few more good articles and a video on the subject that you may enjoy.