Parameter sniffing (for the uninitiated)

Parameter sniffing is a well-known issue within SQL Server. Imagine you have a query that can return a wildly varying number of rows. For example, a query that returns all orders in your system for a given customer. Most of your customers have a few orders, several have a lot, and one huge customer has tons of orders.

When the query is executed for the first time, SQL Server will estimate the number of rows the query will return for that customer_id and create a plan based on that estimate.

If you wanted the details for a small account, the plan will be optimized for a small number of rows (assuming your statistics are accurate) and will be more likely to have nested loops and key lookups. Those operations are fine for small result sets but less efficient for large result sets.

If you instead asked for the details of your largest account, you’d get a plan optimized for a huge number of rows, likely using an index\table scan or a hash match join instead of nested loops.

But the plan will be compiled for the first execution, and hopefully reused after that. If the plan is geared toward a small result set, it will perform poorly for the large account, and vice versa.

You can minimize the effect of parameter sniffing using WITH RECOMPILE or OPTIMIZE FOR hints (there’s a quick sketch of those after the example), but first let’s look at an example of parameter sniffing using WideWorldImporters.

USE WideWorldImporters
GO
CREATE OR ALTER PROCEDURE Sales.GetOrders
	@CustomerID INT
AS

	SELECT *
	FROM Sales.Orders so
	WHERE
		so.CustomerID = @CustomerID;
GO

I added some more data to the table, all under one CustomerID, to change its statistics. CustomerID 90 has been very busy.

If I run this simple procedure for CustomerID 13, we’ll get a plan well suited to it, and the estimates are accurate.

EXEC Sales.GetOrders @CustomerID = 13; -- 13 or 90
GO 

If we try again but with CustomerID 90, we use the same plan and estimate the same number of rows. But nested loops and a key lookup probably shouldn’t be the plan when we are returning over 100,000 rows.
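As an aside, you can see the flip side of the same problem by clearing the cached plan and running the large customer first. Here’s a hedged sketch using sp_recompile, which simply marks the procedure for recompilation on its next execution.

/* Hedged sketch: force a fresh compile, then run the big customer first.
   The small customer then inherits a plan built for 100,000+ rows. */
EXEC sp_recompile 'Sales.GetOrders';
GO
EXEC Sales.GetOrders @CustomerID = 90; -- compiles a plan suited to the large account
GO
EXEC Sales.GetOrders @CustomerID = 13; -- reuses that plan
GO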

This is an example of parameter sniffing. Our plan is going to be defined largely by the parameters we use when the procedure gets compiled.
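Before moving on, here’s a quick sketch of the two mitigations mentioned earlier, applied to the same procedure. These are alternatives with different tradeoffs: recompilation cost on every call versus a single one-size-fits-all plan.

/* Option 1: WITH RECOMPILE builds a fresh plan on every execution,
   trading compile time for a plan suited to each parameter value. */
CREATE OR ALTER PROCEDURE Sales.GetOrders
	@CustomerID INT
	WITH RECOMPILE
AS
	SELECT *
	FROM Sales.Orders so
	WHERE
		so.CustomerID = @CustomerID;
GO

/* Option 2: OPTIMIZE FOR UNKNOWN ignores the sniffed value and optimizes
   for the average density, giving one middle-of-the-road, reusable plan. */
CREATE OR ALTER PROCEDURE Sales.GetOrders
	@CustomerID INT
AS
	SELECT *
	FROM Sales.Orders so
	WHERE
		so.CustomerID = @CustomerID
	OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));
GO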

Variant Queries

Parameter Sensitive Plan Optimization attempts to solve the parameter sniffing issue by allowing a query to have different plans that are used based on the cardinality of one important parameter. The parent query has a plan that is really just a stub; there can be up to three variant queries that have full execution plans. When the query is executed, the optimizer chooses which variant query and plan to use based on the cardinality of the parameter’s value.

The plans for the parent and variant queries all indicate that PSPO is in use, and show the boundaries for the parameter. There is a low boundary and a high boundary. If the value provided for the parameter should return a small number of rows, less than the low boundary, the appropriate plan is used. If the cardinality for that value is between the two boundaries, another plan is used. If the cardinality is above the high boundary, the third plan is used.
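As an aside, PSPO is only considered at database compatibility level 160, and I believe it can also be toggled on its own with a database-scoped configuration. A hedged sketch; verify the option name against the documentation for your build:

ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 160;
GO
/* Assumed option name; disables or re-enables PSPO without changing the
   compatibility level. */
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = OFF;
GO
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON;
GO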

Example

I tried several times but was unable to get PSPO to kick in on this table in WideWorldImporters, so I shifted to using the StackOverflow2013 database.

USE StackOverflow2013;
GO
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 160; /* 2022 */
GO
ALTER DATABASE StackOverflow2013 SET QUERY_STORE = ON;
GO
ALTER DATABASE StackOverflow2013 SET QUERY_STORE (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = ALL, INTERVAL_LENGTH_MINUTES = 15);
GO
EXEC DropIndexes;
GO
IF NOT EXISTS(
	SELECT 1
	FROM sys.indexes si
	WHERE
		name = 'IX_User_Reputation'
)
BEGIN
	CREATE INDEX IX_User_Reputation ON dbo.Users(Reputation);
END;
GO
CREATE OR ALTER PROCEDURE dbo.User_GetByReputation
  @Reputation int
AS
	SELECT 
		u.AccountId,
		u.DisplayName,
		u.Views,
		u.CreationDate
	FROM dbo.Users u
	WHERE 
		u.Reputation=@Reputation;
GO

I’ve seen a few other blogs use Reputation as an example, and it seems its histogram is in a good state for the optimizer to choose PSPO.
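To produce the plan discussed below, execute the procedure. The Reputation values here are placeholders; any values whose estimates fall below, between, and above the boundaries will exercise the different variants.

/* The Reputation values are placeholders; try values with small, middling,
   and large row counts in your copy of the database. */
EXEC dbo.User_GetByReputation @Reputation = 1; -- typically a very large result set
GO
EXEC dbo.User_GetByReputation @Reputation = 2; -- a much smaller result set
GO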

You can see from the query text at the top of the plan that the system has added an OPTION hint for “PLAN PER VALUE”. This specifies which of the variant plans to use for this parameter value.

If you look in the XML, you will also see a block like this one when PSPO is in use:

<StmtSimple StatementCompId="4" StatementEstRows="7173" StatementId="1" StatementOptmLevel="FULL" CardinalityEstimationModelVersion="160" StatementSubTreeCost="21.8786" StatementText="SELECT &#xD;&#xA;		u.AccountId,&#xD;&#xA;		u.DisplayName,&#xD;&#xA;		u.Views,&#xD;&#xA;		u.CreationDate&#xD;&#xA;	FROM dbo.Users u&#xD;&#xA;	WHERE &#xD;&#xA;		u.Reputation=@Reputation option (PLAN PER VALUE(ObjectID = 1237579447, QueryVariantID = 2, predicate_range([StackOverflow2013].[dbo].[Users].[Reputation] = @Reputation, 100.0, 1000000.0)))" StatementType="SELECT" QueryHash="0x08FD84B17223204C" QueryPlanHash="0x2127C7766B9DDB3C" RetrievedFromCache="true" StatementSqlHandle="0x0900A6524A0ECC5A61EA55C6320D3963E1D20000000000000000000000000000000000000000000000000000" DatabaseContextSettingsId="1" ParentObjectId="0" StatementParameterizationType="1" SecurityPolicyApplied="false">
          <StatementSetOptions ANSI_NULLS="true" ANSI_PADDING="true" ANSI_WARNINGS="true" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="true" NUMERIC_ROUNDABORT="false" QUOTED_IDENTIFIER="true" />
          <Dispatcher>
            <ParameterSensitivePredicate LowBoundary="100" HighBoundary="1000000">
              <StatisticsInfo Database="[StackOverflow2013]" Schema="[dbo]" Table="[Users]" Statistics="[IX_User_Reputation]" ModificationCount="0" SamplingPercent="100" LastUpdate="2023-05-19T14:04:14.91" />
              <Predicate>
                <ScalarOperator ScalarString="[StackOverflow2013].[dbo].[Users].[Reputation] as [u].[Reputation]=[@Reputation]">
                  <Compare CompareOp="EQ">
                    <ScalarOperator>
                      <Identifier>
                        <ColumnReference Database="[StackOverflow2013]" Schema="[dbo]" Table="[Users]" Alias="[u]" Column="Reputation" />
                      </Identifier>
                    </ScalarOperator>
                    <ScalarOperator>
                      <Identifier>
                        <ColumnReference Column="@Reputation" />
                      </Identifier>
                    </ScalarOperator>
                  </Compare>
                </ScalarOperator>
              </Predicate>
            </ParameterSensitivePredicate>
          </Dispatcher>

The “ScalarOperator ScalarString” shows us the column and parameter the plan optimization is focusing on. The StatisticsInfo element a few lines above shows the index\statistic being used, along with the sampling rate for that statistic.

The LowBoundary and HighBoundary values define the three ranges for parameter-sensitive plan optimization. If the optimizer estimates rows below the LowBoundary (which has always been 100 in examples I have seen), we will use the first variant. If the estimate is between the boundaries, we use the second. If it’s above the HighBoundary (which has been 100,000 or 1,000,000 in every case I’ve seen), we use the third.

In this case, we used QueryVariantID 2 because we estimated 7,173 rows, which falls between the two boundaries.

One Problem

But as Brent Ozar points out here, there is still parameter sniffing going on. And if you are reading about the subject here, you should also read Brent’s post.

The low boundary I’ve always seen is 100, but there may be some variety there. So, if a query using PSPO is executed with a parameter whose cardinality is less than 100, SQL Server will use the plan for that smallest range. If the parameter leads to an estimate of 5 rows, we’ll compile and execute the query and reuse that plan for subsequent low-cardinality parameters. If the query is executed again with a parameter that should read 80 rows, we’ll use the same plan. But I would expect the optimal plan for those two parameters to be the same, or at least fairly similar; nested loops and a key lookup won’t hurt here, since we aren’t talking about many rows in either case.

I would expect the same for the high boundary, which has been 100,000 or 1,000,000 in every case I’ve seen so far. Let’s assume the high boundary is 100,000. If the query is executed and we estimate 250,000 rows to be returned, we’re likely to get a plan with a scan or a hash match in it; maybe we’ll see parallelism. If the query is executed again and 1.5 million rows are expected, the plan is likely to be similar, even if the memory grant is too low.

My concern is for the middle range. A plan compiled for an estimate of 200 rows should be very different from a plan expecting 90,000 rows, so it seems more likely that parameter sniffing will still bite in that middle range, where there is a large relative difference between the cardinality values. If parameter sniffing has a minimal effect on the high and low ranges, the improvement PSPO makes for a given query will depend on how many values fall into that middle range.

Summary

I wanted to cover the topic on its own before going into a potentially serious problem. I’ll cover that in the next post, where I will talk about the changes in Query Store related to Parameter Sensitive Plan Optimization. There are changes in other system tables as well, but I will focus on Query Store because I use it daily.

You can follow me on Twitter (@sqljared) and contact me if you have questions. My other social media links are at the top of the page. Also, let me know if you have any suggestions for a topic for a new blog post.

When I saw the Halloween problem for the first time, I was trying to optimize the UPDATE statement in an upsert proc. I suspected many of the changes to the table were redundant; we were setting the value equal to its current value, doing all the work of a write to change nothing. The original query was something like this:

/* Original update */
UPDATE ex
SET
	ex.value = CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END
FROM dbo.Example ex
INNER JOIN @tvp tvp
	ON tvp.AccountID = ex.AccountID
	AND tvp.ProductID = ex.ProductID
	AND tvp.GroupID = ex.GroupID
WHERE
	ex.AccountID = @AccountID
	AND ex.ProductID = @ProductID;

So, I added a predicate to the WHERE clause to prevent us from updating a row unless we were actually changing the data. Since this meant we would write fewer rows, I expected the query to run faster. Instead, the duration and CPU usage increased significantly.

/* WHERE clause changed */
UPDATE ex
SET
	ex.value = CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END
FROM dbo.Example ex
INNER JOIN @tvp tvp
	ON tvp.AccountID = ex.AccountID
	AND tvp.ProductID = ex.ProductID
	AND tvp.GroupID = ex.GroupID
WHERE
	ex.AccountID = @AccountID
	AND ex.ProductID = @ProductID
	AND ex.value <> CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END;

After discussing with a colleague, we suspected the Halloween problem. So, I read up on the subject and tried to come up with a different optimization for the statement.

I was interested in the idea that the SQL Server optimizer was trying to separate the read phase of the query from the write phase. The eager spool gives SQL Server a complete list of rows it needs to update, preventing the Halloween problem. But the protections made the query take longer.

Description

Well, if SQL Server is trying to separate the read from the write, why don’t I just do that myself? I had the idea to read the data I needed in an INSERT…SELECT statement, writing into a memory-optimized table variable (motv). I could make sure the read included all the columns I need to calculate the new value, including the CASE statement I had in the SET clause of the UPDATE.

I thought of this as Manual Halloween protections but later found Paul White had coined the term about 5 years earlier (look near the bottom of that article).

Why a motv and not a temp table? I found previously that using a temp table in this procedure, which ran hundreds of millions of times per day in our environment, caused a lot of tempdb contention. A table variable would have a similar effect, but not if it is memory-optimized.
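For context, here is a minimal sketch of what the memory-optimized table type behind @motv might look like. The type name, column types, and index are my assumptions, and the database needs a MEMORY_OPTIMIZED_DATA filegroup for this to work.

/* Assumed shape of the memory-optimized table type behind @motv. */
CREATE TYPE dbo.ExampleUpsert_motv AS TABLE
(
	AccountID	INT NOT NULL,
	ProductID	INT NOT NULL,
	GroupID		INT NOT NULL,
	value		INT NULL,		-- current value from dbo.Example; NULL if the row doesn't exist yet
	new_value	INT NOT NULL,	-- the value we intend to write
	INDEX IX_ExampleUpsert_motv NONCLUSTERED (AccountID, ProductID, GroupID)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

/* Inside the procedure, the variable is declared from that type. */
DECLARE @motv dbo.ExampleUpsert_motv;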

This was an upsert procedure, so we try to update all the rows that correspond to the TVP passed in and insert any rows that don’t exist. Originally, the procedure ran the UPDATE and the INSERT every time, but we had already found that we inserted no records the vast majority of the time.

Example

If I query the data into the motv, we can use the motv to decide whether we need to UPDATE or INSERT at all.

/* Manual Halloween; populating the motv */
INSERT INTO @motv
SELECT
	tvp.AccountID,
	tvp.ProductID,
	tvp.GroupID,
	ex.value,
	CASE WHEN @op = 1 THEN tvp.value
		ELSE ex.value + tvp.value END
		AS new_value
FROM @tvp tvp
LEFT LOOP JOIN dbo.Example ex
	ON ex.AccountID = tvp.AccountID
	AND ex.ProductID = tvp.ProductID
	AND ex.GroupID = tvp.GroupID
WHERE
	tvp.AccountID = @AccountID	/* filter on the TVP side to preserve the LEFT JOIN */
	AND tvp.ProductID = @ProductID;

I wrote this with a specific join order in mind and used the LOOP JOIN hint to fix the join order as I wrote it, while ensuring we didn’t use a different join type. The table-valued parameter (tvp) input is very likely to have only a few rows in it; less than 5 in almost all cases.

I used a LEFT join as well to account for the possibility that the row isn’t present in the underlying table. If that row doesn’t exist, the ex.value written into the motv will be NULL, and that will indicate we need to insert this row.

But first, let’s look at the update:

/* Manual Halloween; check, maybe UPDATE the base table */
IF EXISTS(
	SELECT 1 
	FROM @motv motv
	WHERE
		motv.value <> motv.new_value
)
BEGIN
	UPDATE ex
	SET
		ex.value = motv.new_value
	FROM @motv motv
	LEFT LOOP JOIN dbo.Example ex
		ON ex.AccountID = motv.AccountID
		AND ex.ProductID = motv.ProductID
		AND ex.GroupID = motv.GroupID
	WHERE
		motv.value <> motv.new_value;
END;

In this case, where we may need to UPDATE or INSERT (but not both for a given row), and where we suspect the data may not be changing at all, we don’t necessarily have to run the UPDATE statement. First, we query the motv to see if any rows have a value that has changed. If not, we skip the UPDATE.

And I think I should linger on the WHERE clause. The motv.value could be NULL; how does that comparison work? If you compare NULL to a real value, the result is neither true nor false; it is UNKNOWN. An UNKNOWN result for that row won’t cause us to run the UPDATE, which is the correct behavior; we need to INSERT that row instead.
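Here is a tiny, self-contained illustration of that three-valued logic; the table variable is just for demonstration.

/* Only the (4, 5) row is returned: NULL <> 5 evaluates to UNKNOWN,
   and 5 <> 5 evaluates to false. */
DECLARE @demo TABLE (value INT NULL, new_value INT NOT NULL);
INSERT INTO @demo (value, new_value) VALUES (NULL, 5), (5, 5), (4, 5);

SELECT *
FROM @demo d
WHERE d.value <> d.new_value;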

/* Manual Halloween; check, maybe INSERT the base table */
IF EXISTS(
	SELECT 1 
	FROM @motv motv
	WHERE
		motv.value IS NULL
)
BEGIN
	INSERT INTO dbo.Example (AccountID, ProductID, GroupID, value)
	SELECT
		motv.AccountID,
		motv.ProductID,
		motv.GroupID,
		motv.new_value
	FROM @motv motv
	WHERE
		motv.value IS NULL;
END;

But we only need to INSERT for the rows where that value is NULL.

Takeaways

So, why was this an improvement in this case? There are several points, some I only thought of recently.

  • Using Manual Halloween made it easy to reduce the writes to the underlying table for the UPDATE, saving a lot of effort.
  • The INSERT statement was also skipped the vast majority of the time.
  • Skipping a statement also meant we didn’t run the associated trigger, and we skipped foreign key validation for that statement, which can be quite expensive.
  • Fewer writes means fewer exclusive (X) locks on the table, making contention less likely on a very busy object.
  • The separate INSERT…SELECT reads the data from the table using shared (S) locks, since we know we aren’t writing to the table in that statement.
  • The motv uses optimistic concurrency, and we aren’t writing data to disk for that operation.

In this case, the UPDATE statement only executed 4% of the time the procedure was called. For the INSERT, that number was less than 2%. I wasn’t surprised that the INSERT was unnecessary most of the time; you only insert a row once. I was surprised by how often the lack of an UPDATE indicated the data passed in was unchanged. But we knew this activity was customer-driven and had seen data passed in repeatedly in other places.

In other cases, the Manual Halloween approach may be very effective when much of the incoming data is redundant, and the reduced contention from fewer writes may also have been a big factor in the improvement. That much redundant data may be an unusual circumstance, though.

It may also be helpful when there are multiple statements, as in an upsert procedure, where only one is necessary for a given row.

Summary

I have found the Halloween problem fascinating since I was introduced to it, but I’m done with the subject for now. I’ll likely be talking about Query Store next time.

You can follow me on Twitter (@sqljared) and contact me if you have questions. My other social media links are at the top of the page.

Update and Correction:

This blog was originally posted on February 20. Since then, I read other articles that suggested different behavior with the Halloween Problem. I contacted Paul White, who informed me that the WideWorldImporters database uses compatibility level 130 (SQL Server 2016) by default. So, although I tested on a SQL Server 2019 instance, I was probably seeing behavior that had been addressed at later compatibility levels.

I tested again at compatibility level 150 and saw a different execution plan, which led to different conclusions.

I’ve left the majority of the post unchanged, but I’m adding an addendum section, and updating the summary and its conclusions. So, make sure you read those sections for the corrections.

Original Post:

I find myself talking about the Halloween Problem a lot and wanted to fill in some more details on the subject. In short, the Halloween Problem is a case where an INSERT\UPDATE\DELETE\MERGE operates on a row more than once, or tries to and fails. In the first recorded case, an UPDATE changed multiple rows in the table more than once.

So let’s take a look at an example using a publicly available database, WideWorldImporters.

A Halloween Problem example

Here’s a simple update procedure. We’re going to update the quantity for an item in the Sales.OrderLines table:

CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateQuantity
	@OrderID INT,
	@StockItemID INT,
	@Quantity INT
	WITH EXECUTE AS OWNER
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.Quantity = @Quantity,
		sol.PickedQuantity = @Quantity
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderID = @OrderID
		AND sol.StockItemID = @StockItemID;
		-- AND sol.Quantity <> @Quantity;
END;
GO

You may notice the commented line. In one description of the Halloween Problem I heard\read, it was suggested that if we try to SET something that is in our WHERE clause, the problem is likely to occur. Or rather, SQL Server will see the possibility of the problem and add protections to our execution plan to prevent it.

First, let’s test without that line, and see what our execution plan tells us.

EXEC Sales.OrderLines_UpdateQuantity
	@OrderID = 5,
	@StockItemID = 155,
	@Quantity = 21;
GO

Note the eager spool

The eager spool between our index reads and clustered index update shows that SQL Server added Halloween protections to prevent the problem. The problem is prevented by separating the read phase of the query from the write phase.

This usually involves a blocking operator. Most often this is an eager spool, but if there is another blocking operator in the plan like a sort or hash match, that blocking operator may remove the need for a separate spool.

The Halloween Problem can occur when a query running in row mode is still reading rows while rows are being updated and moved within an index. The read operation could then find an updated row again and operate on it a second time. The index movement is key in this scenario.

But with a blocking operator between the read operation and the write, we force all the reads to complete first. This gives us a complete, distinct list of rows to be updated (in this example) before we get to the clustered index update, so it isn’t possible to update the same row twice.

So, how does index movement come into play here? We are updating the Quantity and PickedQuantity columns in our UPDATE statement. Both fields are key columns in the only columnstore index on the table, NCCX_Sales_OrderLines.

CREATE NONCLUSTERED COLUMNSTORE INDEX [NCCX_Sales_OrderLines] ON [Sales].[OrderLines]
(
	[OrderID],
	[StockItemID],
	[Description],
	[Quantity],
	[UnitPrice],
	[PickedQuantity]
) WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) ON [USERDATA];
GO

So when we update these columns, the affected rows will move in that index. If the row moves, that means a read operation could continue reading and find the same row again, returning it as a part of its result set a second time.

Interestingly, we aren’t reading from the columnstore index in the plan provided. Since that’s the only index containing these columns as key values, it’s the only index where the rows should move. In this case, our read operators shouldn’t encounter updated rows a second time, since they use the FK_Sales_OrderLines_OrderID (with a key lookup against PK_Sales_OrderLines).

I wonder if SQL Server decided the Halloween protections were needed before it decided which index it would use for the read.

Removing the index

Either way, if we dropped the NCCX_Sales_OrderLines index, we should see a plan without an eager spool between the read operators and the update operator.

IF EXISTS(
	SELECT 1
	FROM sys.indexes si
	WHERE
		si.name = 'NCCX_Sales_OrderLines'
)
BEGIN
	DROP INDEX [NCCX_Sales_OrderLines]
		ON Sales.OrderLines;
END;
GO

With the index removed, let’s look at the new plan.

Unspooled

We’ve lost the extra steps to the left of the clustered index update operator that updated the columnstore index, and we have also lost the eager spool between the read operators and the update operator. This shows that without the index movement, Halloween protections are no longer needed.

Performance impact of protections

Let’s look at the data from Query Store to see how big the difference is between the two execution plans.

Before running the procedure (both before and after the index change), I ran a simple query against the same OrderID in Sales.OrderLines to get the data into the cache, because cold-cache effects were making a large difference. I also ran the procedure 10 times to average out the results in case any odd wait types appeared.
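Since the numbers came from Query Store, a query along these lines against the standard Query Store views would pull the same figures. This is a sketch, not necessarily how the original comparison was run.

/* Average duration (in microseconds) and CPU per plan for the procedure. */
SELECT
	qsq.query_id,
	qsp.plan_id,
	qsrs.count_executions,
	qsrs.avg_duration,
	qsrs.avg_cpu_time
FROM sys.query_store_query qsq
JOIN sys.query_store_plan qsp
	ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats qsrs
	ON qsrs.plan_id = qsp.plan_id
WHERE
	qsq.object_id = OBJECT_ID('Sales.OrderLines_UpdateQuantity')
ORDER BY qsp.plan_id;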

80 microseconds versus 46 microseconds. Blazing fast in both cases with the data already cached, but the plan with Halloween protections took 74% longer. I’m unsure whether updating a columnstore index is significantly more expensive than updating a rowstore index, so perhaps we should test this again without columnstore complicating the issue.

Speaking in general, I would expect a bigger difference in a query affecting more rows. For a query that only returns 3 rows from the first index seek, the delay caused by the spool would be very small. But imagine if we have a query that reads tens or hundreds of thousands of rows before performing its write operation.

Normally, such a query would pass rows it has already read up to the join and update operators while it continues reading. Those operations would be happening in parallel.1

If we are being protected from the Halloween Problem, the eager spool will not return any rows to the operations above it (like the clustered index update) until all rows have been read. So the writes cannot start until much later, and the more rows being read the more considerable the delay.

Nonclustered indexes?

If you noticed the “+3 non-clustered indexes” banner in one of the plans above, it indicates the nonclustered indexes that were also updated when we updated the clustered index. This is more obvious in Plan Explorer than in the plans as shown in SQL Server Management Studio, so I wanted to point it out in case the visual was confusing.

But this raises another question. If we are updating those indexes, why don’t they cause the Halloween protections to be used?

That is because the quantity columns are present in those indexes only as included columns. Changes to those columns won’t affect where the row sorts, but the values still need to be updated.
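To make the distinction concrete, here is a generic sketch (not an actual WideWorldImporters index definition). Updating PickedQuantity here only rewrites the leaf row in place, because it is an included column; updating the key column StockItemID would move the row within the index.

CREATE INDEX IX_OrderLines_Demo_Include
	ON Sales.OrderLines (StockItemID)
	INCLUDE (PickedQuantity);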

Rowstore testing

So, let’s see how this looks with a rowstore index. Here’s a second procedure, similar to the first but also updating PickingCompletedWhen.

CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateQuantityWhen
	@OrderID INT,
	@StockItemID INT,
	@Quantity INT
	WITH EXECUTE AS OWNER
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.Quantity = @Quantity,
		sol.PickedQuantity = @Quantity,
		sol.PickingCompletedWhen = GETUTCDATE()
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderID = @OrderID
		AND sol.StockItemID = @StockItemID
		AND sol.PickingCompletedWhen < GETUTCDATE();
END;
GO

Initially, no index uses PickingCompletedWhen. So if we execute the procedure as is, we shouldn’t see the tell-tale eager spool.

EXEC Sales.OrderLines_UpdateQuantityWhen
	@OrderID = 5,
	@StockItemID = 155,
	@Quantity = 21;
GO

This plan is what we’d expect. If we add an index, how does the plan change, and how does that affect performance?

IF NOT EXISTS(
	SELECT 1
	FROM sys.indexes si
	WHERE
		si.name = 'IX_OrderLines_OrderID_StockItemID_PickingCompletedWhen'
)
BEGIN
	CREATE INDEX IX_OrderLines_OrderID_StockItemID_PickingCompletedWhen
		ON Sales.OrderLines (OrderID, StockItemID, PickingCompletedWhen);
END;
GO

Here, we see the eager spool implementing the Halloween protections again, but between the index seek and the key lookup. Note that the new index is the one we are using for the index seek. The clustered index update now indicates it is updating 4 nonclustered indexes, including the new index.

So, is the performance difference as stark as it was with the columnstore index?

So, 52 µs versus 118 µs. The query took about 126% longer when the Halloween protections were present; more than we saw with the columnstore index, which is surprising. Perhaps it is relevant that we are updating a third column. It almost feels like the observer effect at this scale.

Addendum

So, to correct things here, let’s go back to the first procedure.

CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateQuantity
	@OrderID INT,
	@StockItemID INT,
	@Quantity INT
	WITH EXECUTE AS OWNER
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.Quantity = @Quantity,
		sol.PickedQuantity = @Quantity
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderID = @OrderID
		AND sol.StockItemID = @StockItemID;
		-- AND sol.Quantity <> @Quantity;
END;
GO

If I run this procedure again with the restored database and no other changes besides updating the compatibility level to 150, I see the following execution plan:

So, we have no eager spool, which means the Halloween Problem isn’t a problem now.

Previously, there was a spool between the index seek and lookup and the clustered index update. The only index using any of the updated fields as a key value was the columnstore index. This suggested that the optimizer will use Halloween protections if any index uses the updated fields as a key value because the rows would be moved in that index.

This new plan disproves that because the optimizer no longer uses the protections with the later compatibility level. And the columnstore index (NCCX_Sales_OrderLines) is still present (as you can see if you hover over the clustered index update operator).

As for the second procedure, I see the Halloween protections even without the index I added in my example. Without that index, the query originally used the FK_Sales_OrderLines_OrderID index to seek the rows in question. At the higher compatibility level, the IX_Sales_OrderLines_Perf_20160301_02 index is used, which is keyed on (StockItemID, PickingCompletedWhen).

So, the Halloween protections are used because we read from an index keyed on one of the updated fields, and rows being updated will potentially move in that index.

We’ve seen the Halloween protections when using nonclustered indexes so far, but what if we are using the clustered index for the read?

I wrote a quick procedure to change the OrderLineID, which is the only column in the clustered primary key for this table. And this matches expectations; we see the eager spool between the clustered index seek and the update operator.
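That quick procedure looked something like the sketch below; the name and parameters are my assumptions, reconstructed from the description above.

/* OrderLineID is the only key column in the clustered primary key, so the
   clustered index seek reads the same index the update will move rows in. */
CREATE OR ALTER PROCEDURE Sales.OrderLines_UpdateOrderLineID
	@OrderLineID INT,
	@NewOrderLineID INT
AS
BEGIN
	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	UPDATE sol
	SET
		sol.OrderLineID = @NewOrderLineID
	FROM Sales.OrderLines sol
	WHERE
		sol.OrderLineID = @OrderLineID;
END;
GO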

Summary

Hopefully, the addendum corrects the matter while keeping things clear. I’m updating one of the bullet points below, as well.

It seems there are only two criteria for the protections against the Halloween Problem to be used for an UPDATE query:

  1. The object being updated must also be in the query.
  2. One of the updated columns must be a key column in the index used for the read portion of the query, so that the rows may move in that index. (Originally, I said a column being updated must be a key column in at least one index on the table, but the addendum shows that isn’t sufficient on its own.)

For other statements, the setup is more complex. I find the UPDATE statement is the most straightforward example of the Halloween Problem. But you can see the protections in place if you query from a table as part of an INSERT or DELETE (or MERGE) where you change that same table.
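For instance, here is a minimal, self-contained sketch; dbo.HalloweenDemo is a hypothetical table, not part of WideWorldImporters. Reading from the same table we insert into is enough for the optimizer to add the protections.

CREATE TABLE dbo.HalloweenDemo
(
	ID		INT IDENTITY PRIMARY KEY,
	Amount	INT NOT NULL,
	INDEX IX_HalloweenDemo_Amount (Amount)
);
GO
INSERT INTO dbo.HalloweenDemo (Amount)
VALUES (1), (2), (3);
GO
/* Expect a blocking operator (often an eager spool) between the read
   and the insert in this plan. */
INSERT INTO dbo.HalloweenDemo (Amount)
SELECT hd.Amount + 10
FROM dbo.HalloweenDemo hd
WHERE hd.Amount < 10;
GO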

And if we see Halloween protections in the plan for a query, we could change the offending index or the query to change the behavior.

Or we could use the manual Halloween technique, which I will discuss next time.

Thanks again to Paul White for pointing out the compatibility level; I doubt that would ever have occurred to me.

Please contact me if you have any questions or comments. I’ve updated my social media links above to include Counter.Social and Mastodon. We’ll see if there is more #sqlfamily activity on those platforms going forward.

Footnotes

1: Not the type of parallelism we typically think of with SQL Server. Parallelism is typically when a given operation, like an index scan, is expected to process many rows, and SQL Server dedicates multiple threads to that operator or group of operators. In this case, I say parallel because different operators (the index seek, nested loops join, and clustered index update) are all processing rows at the same time, one row at a time.