The query optimizer is a finicky thing, and sometimes it doesn’t understand exactly what you’re trying to do until you give it a bit more information. The situation I’m going to describe in this post is one such case. By providing the optimizer with ever-so-slightly more data, it’s possible to make some XML processing over 300 times faster.
Here’s the situation: The XML I was working with was stored in a table, but not typed as XML. Rather, it had been typed as VARCHAR(MAX). This presents an interesting conundrum for the optimizer: should the query be optimized as though operations are being done on a string, or on XML?
If you would like to follow along with the examples below, here’s some DDL and an INSERT statement to populate a test table using AdventureWorks data:
CREATE TABLE #myXML
(
    x VARCHAR(MAX)
)
GO

INSERT #myXML (x)
SELECT x
FROM
(
    SELECT *
    FROM AdventureWorks.Production.Product
    FOR XML PATH ('Product')
) AS y (x)
GO
Running this code will put one row into the temp table, with an XML document containing all of the product data from the AdventureWorks Production.Product table. And now, just for kicks, what if we want to pull all of the ProductIDs out of that document? Simple enough…
WITH theXML (x) AS
(
    SELECT CONVERT(XML, x)
    FROM #myXML
)
SELECT node.value('ProductID[1]', 'INT')
FROM theXML
CROSS APPLY x.nodes('/Product') AS nodes (node)
This code isn’t especially interesting or puzzling in and of itself. It converts the document in the table to XML, runs it through the .nodes() function to produce one row per Product node, and pulls the ProductID element out of each one. If you’ve actually run the code on your end at this point, you know why I was puzzled: This code takes a full 20 seconds to run on my end. Which is a bit extreme, considering that there are only 504 products in the AdventureWorks database. In the real situation, the documents were several times bigger, and the 20 seconds became over an hour in some cases. And that just wouldn’t do.
And so much head-scratching ensued. And teeth gnashing. And cursing of the SQL Server programmability team. You know, a typical day at the office.
I pulled apart my code, put it back together again, and considered writing a CLR UDF to do the processing. But then I tried something on a whim:
DECLARE @x XML =
(
    SELECT TOP(1) x
    FROM #myXML
)

SELECT node.value('ProductID[1]', 'INT')
FROM @x.nodes('/Product') AS nodes (node)
And–shocker–this code returns all 504 ProductIDs seemingly instantly. (Actually, it takes around 28 milliseconds on my end.)
So was a cursor and document-by-document processing the answer? At first, it seemed so. But after further messing around I noticed something: adding TOP(1) to the original query, so that only a single row was returned, made it run quickly. Could it be that the query processor was doing a lot more work than necessary, like converting the text to XML once per output row—504 times in all?
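For reference, the TOP(1) experiment looked roughly like this (a sketch—I didn't preserve the exact query I ran):

```sql
-- Hypothetical reconstruction of the TOP(1) experiment:
-- limiting the output to one row also limits how many times
-- the VARCHAR(MAX) column gets converted to XML.
WITH theXML (x) AS
(
    SELECT CONVERT(XML, x)
    FROM #myXML
)
SELECT TOP(1) node.value('ProductID[1]', 'INT')
FROM theXML
CROSS APPLY x.nodes('/Product') AS nodes (node)
```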
The TYPE directive can be used with FOR XML to make your query return the XML document typed as XML rather than typed as a string. Perhaps it would work here?
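To see the directive in isolation, independent of the puzzle, here's a minimal sketch. The first subquery returns its result as a string; the second, thanks to TYPE, returns a value typed as XML:

```sql
-- Without TYPE, FOR XML returns its result typed as a string;
-- with TYPE, it returns a value typed as XML.
SELECT
    (
        SELECT 1 AS a
        FOR XML PATH('row')
    ) AS string_result,
    (
        SELECT 1 AS a
        FOR XML PATH('row'), TYPE
    ) AS xml_result
```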
WITH theXML (x) AS
(
    SELECT CONVERT(XML, x)
    FROM #myXML
    FOR XML PATH(''), TYPE
)
SELECT node.value('ProductID[1]', 'INT')
FROM theXML
CROSS APPLY x.nodes('/Product') AS nodes (node)
The empty path expression is needed because the TYPE directive only works in conjunction with a valid FOR XML mode. Using an empty path has no net effect on the actual XML produced in this case. But the TYPE directive has quite a huge effect: query time drops to around 58 milliseconds on my end. That's the 300-fold speedup I promised you earlier.
So why does this work? A quick peek at the plans indicates that I was correct. Here’s the first part of the plan for the first version of the query:
|--Compute Scalar(DEFINE:([Expr1023]=[Expr1022]))
     |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1004], XML Reader with XPath filter.[id]))
          |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1004]))
          |    |--Compute Scalar(DEFINE:([Expr1004]=CONVERT(xml,[tempdb].[dbo].[#myXML].[x],0)))
          |    |    |--Table Scan(OBJECT:([tempdb].[dbo].[#myXML]))
          |    |--Filter(WHERE:(STARTUP EXPR([Expr1004] IS NOT NULL)))
          |         |--Table-valued function
The second version is almost exactly the same, but for one additional iterator:
|--Compute Scalar(DEFINE:([Expr1024]=[Expr1023]))
     |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1005], XML Reader with XPath filter.[id]))
          |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1005]))
          |    |--UDX(([Expr1004]))
          |    |    |--Compute Scalar(DEFINE:([Expr1004]=CONVERT(xml,[tempdb].[dbo].[#myXML].[x],0)))
          |    |         |--Table Scan(OBJECT:([tempdb].[dbo].[#myXML]))
          |    |--Filter(WHERE:(STARTUP EXPR([Expr1005] IS NOT NULL)))
          |         |--Table-valued function
Notice the “UDX” iterator? That’s an XML iterator that handles the conversion to typed XML, once, before the nested loops begin iterating. In the first plan we don’t get one, even though we’ve “technically” converted the string to XML at that point—so the conversion expression ends up being re-evaluated over and over as the loops run.
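Of course, if you control the schema, there's an even more direct way to avoid the repeated conversion: store the document typed as XML in the first place. A sketch, using a second temp table:

```sql
-- Sketch: storing the document typed as XML up front means
-- no CONVERT is needed at query time at all.
CREATE TABLE #myTypedXML
(
    x XML
)

INSERT #myTypedXML (x)
SELECT CONVERT(XML, x)
FROM #myXML

SELECT node.value('ProductID[1]', 'INT')
FROM #myTypedXML
CROSS APPLY x.nodes('/Product') AS nodes (node)
```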
This story was puzzling, and somewhat arcane, but it has a moral that stretches far beyond this simple example: Only by giving the query optimizer much more information than any rational person might think necessary did we get a plan that does the right thing. And that is quite often the case when working with SQL Server. CHECK constraints, foreign keys, UNIQUE constraints, the DISTINCT keyword, GROUP BY, APPLY, and various other constructs are more than just ways to define your requirements or the output you’re looking for. They can be used to provide information to the query optimizer to help it determine the best way to process your data. Information that can make your query return in a second instead of an hour. Information that will make your users happy and your project a success.
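As one small illustration of that principle, outside the XML scenario: a trusted CHECK constraint can let the optimizer prove that a predicate is unsatisfiable and skip touching the table entirely. A sketch, with a hypothetical table:

```sql
-- Sketch: a hypothetical table with a trusted CHECK constraint.
CREATE TABLE #sales
(
    amount INT NOT NULL,
    CONSTRAINT ck_amount_positive CHECK (amount > 0)
)

-- The optimizer can see that no row can ever satisfy this
-- predicate, so the plan needn't scan #sales at all.
SELECT amount
FROM #sales
WHERE amount <= 0
```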
The secret to writing high performance T-SQL? Step out of your human mind. Un-puzzle. Be the optimizer. And until next month, thank you for reading this entry in T-SQL Tuesday #002!