In-Memory Tables Cause Tons of Benign Entries in Error Log

If any of your databases use in-memory tables, you’ve likely seen messages in your error log similar to this:

[INFO] Database ID: [11]. Deleting unrecoverable checkpoint table row (id: 40568).

You probably have tons of these things filling up your logs. They are completely normal and refer to SQL’s process of cleaning up the list of checkpoints that the In-Memory OLTP engine uses to provide data consistency and durability. Paul Randal answered a question about these messages in his SQLskills Insider Newsletter (PDF link).

It’s nice that they are labeled with [INFO], so you can be fairly sure they aren’t a major issue, but they still annoyingly fill up the log with information that is of no use to anyone outside of Microsoft. It would be nice if you could disable these messages but, to my knowledge, you cannot.
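If you’re curious just how much of your log these entries are taking up, you can search for them with the undocumented (but widely used) xp_readerrorlog procedure. A minimal sketch, scanning the current error log for the message text shown above:

-- Search the current error log (log 0) for the checkpoint cleanup messages
-- Parameters: log number, log type (1 = SQL Server error log), search string
EXEC sys.xp_readerrorlog 0, 1, N'Deleting unrecoverable checkpoint table row';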

Occasionally, you may see another message in your error log, one that seems like it might be more serious:

[WARNING] === At least 4 extensions for file {0BAE5D9C-35C2-480C-BE9C-7D2AFEEE4EA0} (e:\b\s3\sources\sql\ntdbms\hekaton\engine\hadr\mtlog.cpp : 5946)

This one is identified as a [WARNING]. Uh-oh! That can’t be good.

From the path and filename, it’s pretty clear this message is also related to logging for in-memory tables, but how? Despite my efforts, I was unable to find much about this message online. I did find one post from someone at Microsoft saying they would investigate the warning, but there was never any follow-up.

Was this something I should be concerned about? Given that my system had been running fine for months and months, my guess was that it was probably unimportant and could be ignored. Still, I wanted to know what it meant.

When my internet sleuthing didn’t turn up any answers, I contacted the folks at SQLskills, and Erin Stellato was kind enough to ask her contacts at Microsoft.

Bottom line: it can be ignored.

The message is simply stating that the log file has been extended, i.e. grown in size. It reports at power-of-2 intervals.

I have no idea why this is labeled WARNING, as opposed to INFO, like the other message. It would also be nice if these informational-only messages were not written to the error log, or at least were written at a much lower frequency. Literally 85% of my error log is just these two messages! Perhaps in SQL 2019, these messages aren’t logged….

I hope that by putting this out there, someone else who might be bothered by this warning can relax a bit.

Thanks, Erin, for getting this info!

How Can Identical Servers Have Different CPU Utilization Under The Same Load?

I ran into an interesting case today. We’ve got a pair of SQL Servers running in an Availability Group. We’re set up so we can run with either server as the primary. We use Sentry One to monitor our SQL Servers, and after a recent planned failover, I noticed that when we run on Server 1, CPU utilization seems to be lower than when we run on Server 2. Not drastically lower, and not enough of a difference that our users would complain, but when I looked at the graphs, I could definitely see it.

This bothered me. The two servers have identical hardware and SQL Server is configured identically on both. I ran through all the checks I could think of: power management was set to High Performance on both, NIC drivers were the same version, both were using the same SAN and local drives for storage, memory was the same, anti-virus settings were identical. I just couldn’t explain why Server 1 was always running at about 10% lower CPU utilization.

Server 1 on the left, Server 2 on the right

Normally, I like seeing a low CPU utilization. I like my SQL Servers to have some headroom to handle an increase in load. We had been running on Server 1 for a long time in the past and, looking at the CPU utilization, I thought we had lots of room to handle an increased load. But when we were running on Server 2, I wasn’t so sure.

One morning, I decided to dig a little deeper. I had already confirmed Windows power management was set to High Performance on both servers. This time, however, I checked what those High Performance settings actually were. Turns out, that was where the difference was.

I discovered that Server 2 was set to a maximum processor state of 100%, but Server 1 was set to 75%. Aha! That was the reason Server 1 always had a lower CPU utilization! Windows was limiting processor use to 75%!

Digging deeper still, I found several other settings that were different: minimum processor state, hard drive shutoff time, PCI Express link state power management, and one or two others.
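As an aside, you can at least confirm which power plan Windows reports as active without leaving Management Studio, using the undocumented xp_regread procedure. The registry path below is the one commonly used in community diagnostic scripts, and the High Performance plan’s GUID is 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c; the individual sub-settings, like maximum processor state, still have to be checked in powercfg or the Power Options GUI. A minimal sketch:

-- Read the active Windows power scheme GUID from the registry
-- (undocumented xp_regread; requires appropriate permissions)
DECLARE @value VARCHAR(64);
EXEC master.dbo.xp_regread @rootkey = 'HKEY_LOCAL_MACHINE',
                           @key = 'SYSTEM\CurrentControlSet\Control\Power\User\PowerSchemes',
                           @value_name = 'ActivePowerScheme',
                           @value = @value OUTPUT;
SELECT ActivePowerScheme = @value;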

The moral: just because your CPU utilization looks low, make sure it isn’t because some other setting is limiting your processor utilization!

The user_updates Value From sys.dm_db_index_usage_stats Can Be Misleading

I was doing some investigating the other day and trying to find which tables in our environment were the most written to. This was because my buddies in the IT department had gotten a bunch of new storage that was optimized for a write-intensive workload. To identify which tables had the most writes, I used the following query:

DECLARE @dbid INT = DB_ID();
SELECT TableName = OBJECT_NAME(s.object_id),
       Reads = SUM(user_seeks + user_scans + user_lookups),
       Writes = SUM(user_updates)
FROM sys.dm_db_index_usage_stats AS s
    INNER JOIN sys.indexes AS i
        ON s.object_id = i.object_id
           AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
      AND s.database_id = @dbid
GROUP BY OBJECT_NAME(s.object_id)
ORDER BY Writes DESC;
GO

This pulls both reads and writes from the sys.dm_db_index_usage_stats dynamic management view. A read is defined as a seek, scan, or lookup, and a write is defined as an update. All seemed good until I noticed something strange. One of the most-written-to tables was, based on our naming convention, a lookup table. That seemed odd. A lookup table should have lots of reads but only a few writes. The query above showed my lookup table had almost twice as many writes as reads!

I dug around a bit and found two stored procedures that referenced that particular table. I checked them out, but nothing seemed out of the ordinary to me, so I dug a little deeper and discovered something strange: the user_updates value of sys.dm_db_index_usage_stats can get incremented even when there is no actual update to the table!!

Let’s check it out. First, create a table. I’m going to go with a lookup table, as that’s what I was dealing with. It’s a simple table – one identity column and one varchar column for a machine name.

CREATE TABLE [dbo].[Lkp_Machine]
(
    [IdMachine] [INT] IDENTITY(1, 1) NOT NULL,
    [MachineName] [VARCHAR](50) NULL,
    CONSTRAINT [PK__Lkp_Machine]
        PRIMARY KEY CLUSTERED ([IdMachine] ASC)
);

Now let’s insert a few rows of data to work with:

INSERT INTO dbo.Lkp_Machine (MachineName)
VALUES ('Machine01'),
       ('Machine02'),
       ('Machine03');

Finally, let’s see what our DMV reports (I’ve modified the query slightly to show only our table):

DECLARE @dbid INT = DB_ID();
SELECT TableName = OBJECT_NAME(s.object_id),
       Reads = SUM(user_seeks + user_scans + user_lookups),
       Writes = SUM(user_updates)
FROM sys.dm_db_index_usage_stats AS s
    INNER JOIN sys.indexes AS i
        ON s.object_id = i.object_id
           AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
      AND s.database_id = @dbid
      AND OBJECT_NAME(s.object_id) = 'Lkp_Machine'
GROUP BY OBJECT_NAME(s.object_id)
ORDER BY Writes DESC;
GO

Here’s the output:

OK. We inserted 3 rows, but got a write count of 1. That’s normal. Books Online says of the user_updates field: “Number of updates by user queries. This includes Insert, Delete, and Updates representing number of operations done not the actual rows affected. For example, if you delete 1000 rows in one statement, this count increments by 1”

Doesn’t really explain my situation though. It did, however, give me an idea. I decided to do another test using this code:

DECLARE @TestVariable CHAR(1);
UPDATE Lkp_Machine
SET @TestVariable = 'a';

This query does not actually update the table. It simply sets a variable. Now when we run the write count query again, what do we get?

Look at that! We got one read and one more write! The user_updates count was incremented even though there was no actual update to the table.

It would seem this DMV increments the update count whenever the table is referenced in an update command.

Why would someone do this? I admit, my test query is somewhat silly. However, you may very well encounter this situation disguised as a MERGE statement. This was where I found it.

Our developers wanted to check the lookup table for the existence of a certain machine name. If that name did not exist, they wanted to insert it into the table and get the id that was created for it, but if it did exist, they only needed to get the id value of that machine. Here is their code:

MERGE Lkp_Machine AS A
USING
(SELECT @machineName AS machineName) AS B
ON B.machineName = A.MachineName
WHEN MATCHED THEN
    UPDATE SET @idMachine = A.idMachine
WHEN NOT MATCHED THEN
    INSERT
    (
        machineName
    )
    VALUES
    (B.machineName)
OUTPUT Inserted.idMachine
INTO @inserts;

The syntax of the MERGE statement requires that the command after the WHEN MATCHED clause be either an UPDATE or a DELETE. We don’t really need to update the table here – we just want to set a variable – but the syntax forces the UPDATE command.

Granted, this probably isn’t the best way to perform this action. However, it is how this particular developer coded it and it’s what I, as the DBA, have to live with. Just be aware that some of your tables might not actually be getting updated as often as this DMV implies!
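For what it’s worth, the same lookup-or-insert logic can be written without MERGE, which avoids the phantom update entirely. Here’s a minimal sketch using the same hypothetical @machineName and @idMachine variables as the procedure above (and skipping the concurrency handling a production version would need):

-- Look the machine up first; only insert when it doesn't exist
SELECT @idMachine = IdMachine
FROM dbo.Lkp_Machine
WHERE MachineName = @machineName;

IF @idMachine IS NULL
BEGIN
    INSERT INTO dbo.Lkp_Machine (MachineName)
    VALUES (@machineName);

    SET @idMachine = SCOPE_IDENTITY();
END;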

 

Finding All Gaps In An Identity Column

Have you ever had random inserts into a large table fail? Most of the time, inserts work fine, but every so often one fails with a “primary key violation” error. If your primary key is an integer column with the identity property, you may be wondering how this is possible.

What is likely happening is that your table has grown very large or has been in use for a long time, and your identity column has run out of numbers. An integer column has a maximum value of 2,147,483,647. Now, an integer can start at -2,147,483,648, but most people start at 0 or 1, so that leaves you with roughly 2 billion numbers.
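If you’d rather see this coming than wait for failed inserts, sys.identity_columns will show you how far along each identity column in the current database is. A minimal sketch (last_value comes back as sql_variant, so compare it to the type’s maximum yourself):

-- Check how much room each identity column has left
SELECT TableName = t.name,
       ColumnName = c.name,
       c.seed_value,
       c.increment_value,
       c.last_value
FROM sys.identity_columns AS c
    INNER JOIN sys.tables AS t
        ON c.object_id = t.object_id
ORDER BY t.name;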

The first time this happens, someone (maybe you) will reseed the identity column to a new value (zero, in this case):

DBCC CHECKIDENT (YourTable,RESEED,0)

If your table has a lot of records that have been deleted, you might be able to continue on for quite some time before encountering another primary key violation. But this is not a permanent fix – you will encounter this error again eventually.

Another temporary fix, one that will likely buy you considerably more time before the primary key violations return, is to reseed the column down at the bottom of the data type’s range (assuming you have a positively incrementing identity value). For example, to reseed an integer column, you can run:

DBCC CHECKIDENT (YourTable,RESEED,-2147483647)

Assuming you started at zero or 1, this gives you another 2 billion values to use. However, I’ve found most programmers, for some reason, have an aversion to negative numbers as primary keys. There’s no reason for it, but it’s there, so the DBA has to deal with it.

Of course, the permanent fix is to change the data type from integer to BIGINT, but if your table is big enough to have this problem, that will present its own set of challenges – this is a “size of data” operation, meaning SQL has to modify every single row in the table, and the time it takes to do this is directly related to the size of the data in the table. The change will require the table to be locked, lots of transaction log will be generated, etc. See Kendra Little’s post about the issues you will face here.

The path of least resistance (and least (or no) downtime) is to reseed the identity column. This is virtually instantaneous. The problem becomes “to what value should I reseed it?”

To answer this question, you need to know where the gaps in your used identity values are. Ideally, you’d like to reseed to the start of the biggest gap.

I went searching for some code to do this and came across this solution from Microsoft:

-- Use a recursive CTE approach, which will provide
-- all values in all gap ranges
DECLARE @i INT;
SELECT @i = MAX(PK_ID)
FROM YourTable;
WITH tmp (gapId)
AS (SELECT DISTINCT
           a.PK_ID + 1
    FROM YourTable a
    WHERE NOT EXISTS
    (
        SELECT * FROM YourTable b WHERE b.PK_ID = a.PK_ID + 1
    )
          AND a.PK_ID < @i
    UNION ALL
    SELECT a.gapId + 1
    FROM tmp a
    WHERE NOT EXISTS
    (
        SELECT * FROM YourTable b WHERE b.PK_ID = a.gapId + 1
    )
          AND a.gapId < @i)
SELECT gapId
FROM tmp
ORDER BY gapId;
GO

However, when I ran this, I got the following error:

Looks like my table has too many gaps for this to be a valid solution. So back to Google I went, and I found this really nice solution from Henning Frettem. When I ran his code, I saw my table had 97,236 gaps! So which should I use? Ideally, the largest one! Henning’s code didn’t readily give that, so below is my modified version, which gives you that information and sorts from the largest gap to the smallest, since the largest gap is what you are really looking for.

;WITH cte
AS (SELECT PK_ID, /* replace with your identity column */
           RowNum = ROW_NUMBER() OVER (ORDER BY PK_ID)
    FROM dbo.YourTable), /* replace with your table name */
      cte2
AS (SELECT cte.PK_ID,
           cte.RowNum,
           DENSE_RANK() OVER (ORDER BY PK_ID - RowNum) AS Series /* replace with your identity column */
    FROM cte),
      cte3
AS (SELECT cte2.PK_ID,
           cte2.RowNum,
           cte2.Series,
           COUNT(*) OVER (PARTITION BY Series) AS SCount
    FROM cte2),
      cte4
AS (SELECT MinID = MIN(PK_ID), /* replace with your identity column */
           MaxID = MAX(PK_ID), /* replace with your identity column */
           Series
    FROM cte3
    GROUP BY Series)
SELECT GapStart = a.MaxID,
       GapEnd = b.MinID,
       SizeOfGap = b.MinID - a.MaxID
FROM cte4 a
    INNER JOIN cte4 b
        ON a.Series + 1 = b.Series
--ORDER BY GapStart
ORDER BY ABS(b.MinID - a.MaxID) DESC;

In the output screenshot below, I had already reseeded my table to the smallest integer value, so that’s why you see that first big 2 billion row gap. In my case, someone had already reseeded the table to negative numbers in the past, so my developers were OK with using negative numbers.

 

If you encounter this problem and are unable to change the datatype, you now have the code to find the largest gap to use when reseeding your identity value!
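Once you know where the largest gap starts, the reseed itself is the easy part. A quick sketch, using a made-up GapStart value of 1,000,000 (since the table already has rows, the next identity value generated will be the reseed value plus the increment, i.e. the first free number in the gap):

-- Reseed to the GapStart value reported by the gap query
-- (1000000 here is purely a placeholder)
DBCC CHECKIDENT (YourTable, RESEED, 1000000);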

 

Error 8623 And Entity Framework

My company recently purchased SQL Sentry (oops, they changed the name to Sentry One over two years ago, although the website still says SQL Sentry all over the place) and I’ve been spending the last several weeks setting it up and configuring all the alerts. There are a ton of items it monitors and, while the default thresholds are good, they definitely need tweaking to fit your environment. Unless you enjoy getting thousands of email alerts, that is.

I’ve finally gotten those alerts down to a manageable number and I can start paying attention to them. One alert caught my eye because it was one I hadn’t seen before. I was getting these errors in the Windows Application log:

Someone made SQL beg. “Please simplify the query. Please!” Don’t grovel, SQL. You’re better than that.

The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

Rare, huh? I had dozens of these in my logs every day, going back months. But I also had no complaints from people about queries failing. Strange.

So I wanted to track down what was causing these errors. I’ve got Query Store enabled, so my first thought was to start looking there. But the error in my thinking quickly became apparent. The error says SQL Server “could not produce a query plan.” I’m guessing that means I wouldn’t find anything in the Query Store.

So I turned to Extended Events and set up a session looking for error 8623. After running for a short while, I caught one of these errors. That gave me the host name of the computer that was submitting these complex queries. I narrowed my monitoring down to that server and that let me find a couple of queries causing the problems.
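If you want to set up something similar, here’s a minimal sketch of the kind of session I mean (the session name and file target are placeholders; adjust the target and options to suit your environment):

-- Capture error 8623 along with who sent it and what they sent
CREATE EVENT SESSION [Capture_Error_8623] ON SERVER
    ADD EVENT sqlserver.error_reported
    (
        ACTION (sqlserver.client_hostname,
                sqlserver.client_app_name,
                sqlserver.sql_text)
        WHERE ([error_number] = 8623)
    )
    ADD TARGET package0.event_file (SET filename = N'Capture_Error_8623.xel')
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

ALTER EVENT SESSION [Capture_Error_8623] ON SERVER STATE = START;
GO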

The machine in question was one that handles searches for our website. The server was submitting queries that looked like this:

SELECT [Extent1].[idPK] AS [idPK],
       [Extent1].[idIQ] AS [idIQ],
       [Extent1].[availableQuantity] AS [availableQuantity],
       [Extent1].[price] AS [price]
FROM [dbo].[INVENTORY] AS [Extent1]
WHERE [Extent1].[id] IN ( 391308, 391371, 392576, 392577, 394366, 395733, 395964, 396311, 397119, 397739, 398376,
                          398394, 398398, 401348, 401351, 401353, 403664, 404323, 404338, 404342, 404364, 404370,
                          404422, 404435, 404436, 404456, 404472, 404579, 405693, 405694, 405698, 405700, 405707,
                          405712, 405726, 405727, 407752, 407785, 408153, 409251, 411215, 411243, 412909, 414755,
                          415854, 417149, 419422, 419453, 419473, 420559, 420863, 420998, 421150, 421232, 421739,
                          421754, 421927, 424444, 424469, 424535, 424575, 424620, 424639, 426836, 430218, 430219,
                          431263, 431266, 450040, 463387, 463588, 463699, 463707, 463719, 463725, 463729, 464495,
                          465619, 473201, 474945, 480517, 481001, 486685, 509708, 516624, 519215, 519233, 520934,
                          520935, 520937, 520944, 520950, 520952, 520954, 520957, 532620, 592071, 601891, 604703,
                          605129, 605189, 605385, 605582, 610650, 610651, 610652, 611839, 611842, 611950, 612140,
                          628875, 641726, 641759, 641811, 641886, 641915, 641989, 642027, 652891, 652985, 652993,
                          653356, 661335, 663788, 664461, 664537, 665806, 667134, 672804, 681566, 681726, 681841,
                          682588, 682592, 682593, 683184, 688265, 691080, 691315, 691373, 692802, 693582, 693725,
                          695926, 695952, 696091, 697126, 697845, 697875, 701239, 701613, 701708, 703539, 704276,
                          704390, 704407, 705863, 705888, 706265, 706752, 706807, 708172, 708192, 708228, 708229,
                          711065, 712500, 715612, 715637, 715678, 715964, 717222, 717230, 717357, 717964, 720977,
                          720986, 721306, 724077, 725058, 729185, 747098, 754254, 757031, 764228, 764381, 765274,
                          765276, 765366, 772124, 773464, 775823, 784558, 785769, 819295, 959324, 972412, 1006602,
                          1009106, 1015577, 1026883, 1029304, 1032824, 1032845, 1038615, 1038765, 1040982, 1043104,
                          1048450, 1048977, 1050174, 1063084, 1063968, 1067818, 1078311, 1087729, 1088750, 1104660,
                          1105307, 1108714, 1110643, 1111224, 1113847, 1116331, 1118006, 1118471, 1120921, 1124497,
                          1125103, 1125208, 1126759, 1135328, 1141480, 1141524, 1142038, 1151645, 1154834, 1154913,
                          1157006, 1162516, 1166515, 1176365, 1183526, 1186453, 1189284, 1192596, 1195785, 1205383,
                          1208232, 1214972, 1218122, 1225694, 1228015, 1229616, 1243334, 1258896, 1262941, 1262964,
                          1278841, 1291791, 1293157, 1298797, 1299149, 1299612, 1312358, 1312951, 1314835, 1322091,
                          1329469, 1329818, 1331843, 1335039, 1341654, 1341745, 1345956, 1347852, 1349761, 1363893,
                          1374385, 1374476, 1375860, 1376962, 1388735, 1393231, 1397490, 1402082, 1418529, 1425704,
                          1426157, 1427110, 1428606, 1429839, 1430511, 1433470, 1437685, 1439823, 1448465, 1451089,
                          1453842, 1455186, 1457854, 1459572, 1460646, 1463557, 1464646, 1466004, 1466740, 1468101,
                          1473166, 1474601, 1474606, 1475110, 1475453, 1478397, 1481024, 1481029, 1481695, 1482749,
                          1491729, 1492923, 1500863, 1501570, 1503162, 1503626, 1512370, 1521924, 1522065, 1522511,
                          1523220, 1525082, 1528630, 1531812, 1536587, 1537296, 1544694, 1545792, 1546276, 1560557,
                          1561386, 1562338, 1567510, 1570314, 1570664, 1571741, 1571765, 1573523, 1574077, 1580062,
                          1580648, 1580916, 1581677, 1582604, 1586406, 1589158, 1589176, 1593653, 1596899, 1599841,
                          1600269, 1600481, 1609903, 1617622, 1618749, 1620205, 1620238, 1622516, 1625037, 1625509,
                          1625842, 1628935, 1634027, 1636732, 1642283, 1642316, 1644555, 1644606, 1647243, 1651084,
                          1651190, 1652564, 1654135, 1659224, 1659719, 1659893, 1660327, 1662242, 1662307, 1663213,
                          1666240, 1667633, 1670648, 1679404, 1686068, 1690715, 1692910, 1694468, 1697044, 1699094,
                          1703436, 1705988, 1707724, 1710776, 1714144, 1715699, 1719534, 1726452, 1726681, 1727647,
                          1730128, 1732335, 1736398, 1736816, 1738060, 1742014, 1743187, 1743529, 1744892, 1747927,
                          1749328, 1752241, 1753752, 1753908, 1753914, 1754476, 1756803, 1759139, 1759159, 1759353,
                          1759360, 1761304, 1761326, 1766008, 1770469, 1771114, 1771272, 1772550, 1772551, 1773708,
                          1777500, 1786995, 1788705, 1790271, 1790981, 1793503, 1794410, 1794619, 1794787, 1795140,
                          1796458, 1796501, 1796659, 1798274, 1799978, 1800112, 1801116, 1804306, 1804367, 1805786,
                          1806433, 1808932, 1809237, 1811156, 1812682, 1817337, 1818632, 1822850, 1826904, 1827000,
                          1827001, 1828199, 1828290, 1839029, 1842496, 1843931, 1843956, 1845295, 1846219, 1847225,
                          1848101, 1848114, 1848215, 1849248, 1850297, 1852811, 1853631, 1854481, 1854659, 1854666,
                          1855276, 1855422, 1856071, 1856432, 1857491, 1858675, 1859210, 1860003, 1860018, 1860665,
                          1862994, 1863835, 1864068, 1864072, 1865538, 1866025, 1866991, 1867057, 1867376, 1867469,
                          1867475, 1868157, 1868838, 1869159, 1869189, 1869227, 1869284, 1869847, 1869860, 1870039,
                          1870226, 1870986, 1871128, 1871486, 1872108, 1872285, 1872318, 1872769, 1873108, 1873518,
                          1874352, 1874677, 1875027, 1875037, 1875376, 1875407, 1875510, 1875580, 1875585, 1876261,
                          1876322, 1876327, 1879511, 1879708, 1880257, 1880361, 1880451, 1880769, 1881025, 1881423,
                          1881565, 1881759, 1881896, 1882572, 1882623, 1884178, 1884785, 1885227, 1885775, 1886433,
                          1886644, 1887153, 1889518, 1889685, 1890101, 1890757, 1890807, 1890829, 1891143, 1891981,
                          1892004, 1892333, 1892421, 1892739, 1894339, 1894500, 1894619, 1895542, 1895619, 1895960,
                          1897217, 1897565, 1897783, 1898146, 1898205, 1898332, 1898720, 1899227, 1899973, 1900038,
                          1900439, 1900550, 1900938, 1901014, 1901933, 1902038, 1902047, 1902531, 1902766, 1902813,
                          1902829, 1902891, 1902892, 1903046, 1903055, 1903156, 1904181, 1904717, 1904755, 1904756,
                          1904766, 1905037, 1906795, 1906817, 1907656, 1908415, 1908491, 1908554, 1908667, 1908793,
                          1909564, 1909582, 1909840, 1910197, 1910241, 1910451, 1910694, 1910717, 1910963, 1911442,
                          1911669, 1911816, 1912102, 1912134, 1912812, 1913387, 1913485, 1913491, 1913658, 1913759,
                          1914047, 1914073, 1914145, 1914237, 1914613, 1914719, 1914740, 1914758, 1915678, 1916007,
                          1916110, 1916249, 1917384, 1917674, 1919352, 1919641, 1920419, 1920915, 1921015, 1921540,
                          1922064, 1922289, 1922567, 1923200, 1923398, 1923420, 1923642, 1923874, 1924040, 1924429,
                          1924733, 1924763, 1925319, 1925920, 1926012, 1926451, 1926493, 1926507, 1926515, 1927144,
                          1927617, 1927783, 1927923, 1928249, 1928337, 1928484, 1928522, 1928596, 1928936, 1929041,
                          1929320, 1929599, 1930039, 1930093, 1930441, 1930500, 1930509, 1930536, 1930649, 1930702,
                          1930870, 1931687, 1931825, 1931939, 1932113, 1932399, 1932403, 1932592, 1932979, 1933032,
                          1933306, 1933728, 1934070, 1934446, 1935459, 1935738, 1936522, 1936949, 1937047, 1937052,
                          1937178, 1937510, 1937581, 1937677, 1937856, 1937883, 1937994, 1938011, 1939217, 1939465,
                          1940743, 1942151, 1943523, 1944004, 1944099, 1945309, 1945858, 1947621, 1947816, 1948876,
                          1948970, 1949112, 1950848, 1952185, 1952662, 1954575, 1954578, 1955864, 1956010, 1956472,
                          1956475, 1956476, 1956815, 1956915, 1957719, 1958372, 1958674, 1959158, 1959253, 1959334,
                          1959335, 1959338, 1959882, 1959952, 1959958, 1960200, 1960245, 1960350, 1960351, 1960752,
                          1960859, 1960864, 1961750, 1961767, 1961771, 1961774, 1961775, 1961784, 1970152, 1970536,
                          1972740, 1974605, 1974634, 1975043, 1975621, 1975630, 1975679, 1975810, 1975891, 1975922,
                          1976531, 1976544, 1976585, 1977476, 1978818, 1978819, 1982221, 1982263, 1982313, 1982612,
                          1982635, 1982636, 1982666, 1982879, 1982888, 1983450, 1984584, 1985790, 1989910, 1989944,
                          1989982, 1990000, 1990023, 1990030, 1990133, 1990134, 1990135, 1990141, 1990160, 1990169,
                          1990177, 1990183, 1990208, 1990307, 1990348, 1990662, 1990690, 1990695, 1990709, 1990713,
                          1990724, 1990754, 1990758, 1990761, 1992032, 1993099, 1993439, 1993455, 1993487, 1995308,
                          1995309, 1995658, 1995661, 1998992, 2003628, 2006044, 2006046, 2006048, 2008469, 2008837,
                          2009023, 2009038, 2009042, 2009489, 2009681, 2017101, 2017578, 2018721, 2019439, 2020043,
                          2020155, 2020183, 2021592, 2023170, 2023241, 2023373, 2027099, 2028133, 2029082, 2029223,
                          2029226, 2029227, 2029905, 2030177, 2030178, 2030179, 2030735, 2032671, 2038844, 2041663,
                          2042003, 2046848, 2049594, 2049627, 2054111, 2056018, 2056713, 2059877, 2060033, 2060439,
                          2060554, 2060652, 2060802, 2061445, 2062014, 2062052, 2062646, 2064385, 2064617, 2064618,
                          2064620, 2064623, 2064630, 2064631, 2064632, 2064643, 2064645, 2064657, 2064664, 2064675,
                          2064678, 2064684, 2064685, 2064686, 2064693, 2064700, 2064701, 2064709, 2064718, 2065426,
                          2065466, 2065469, 2065503, 2065529, 2065673, 2065813, 2066732, 2066739, 2066741, 2066751,
                          2067014, 2067020, 2067021, 2067022, 2067023, 2067082, 2067190, 2068000, 2071066, 2071201,
                          2071319, 2072145, 2072208, 2072222, 2072226, 2072230, 2072251, 2072268, 2072313, 2072403,
                          2072474, 2072520, 2072864, 2073416, 2073434, 2073777, 2073778, 2075653, 2075668, 2083610,
                          2083770, 2083772, 2083774, 2083849, 2083979, 2083980, 2084062, 2084065, 2084066, 2084624,
                          2085757, 2085769, 2086077, 2086477, 2087767, 2088012, 2089269, 2089394, 2092956, 2100825,
                          2101832, 2101992, 2102214, 2102215, 2102912, 2102914, 2103042, 2103044, 2103208, 2104399,
                          2104626, 2104997, 2105009, 2126494, 2127843, 2127844, 2128906, 2130128, 2135529, 2135619,
                          2136442, 2137093, 2137789, 2138014, 2138262, 2138296, 2140777, 2141795, 2141799, 2142053,
                          2143185, 2144350, 2144354, 2144392, 2144495, 2144900, 2145171, 2145996, 2146008, 2146025,
                          2146026, 2146030, 2146505, 2146514, 2146524, 2146535, 2146562, 2146565, 2146571, 2146585,
                          2148016, 2149444, 2149455, 2149488, 2149608, 2149625, 2149649, 2149920, 2150018, 2150521,
                          2150960, 2151352, 2151416, 2151930, 2151931, 2151939, 2151940, 2151945, 2152189, 2154269,
                          2155094, 2156155, 2159764, 2160220, 2160450, 2160652, 2160653, 2160654, 2160960, 2161061,
                          2161971, 2162051, 2162576, 2163056, 2166188, 2167164, 2171009, 2172086, 2172112, 2172142,
                          2172196, 2172325, 2172854, 2172963, 2173148, 2173149, 2173168, 2173228, 2173302, 2174302,
                          2174444, 2174575, 2177024, 2182884, 218288...

That’s just beautiful. The statement actually continued on for who knows how long, but it was too long to fit in the Extended Events sql_text buffer.

Some research turned up this warning from Microsoft BOL:

Explicitly including an extremely large number of values (many thousands of values separated by commas) within the parentheses, in an IN clause can consume resources and return errors 8623 or 8632. To work around this problem, store the items in the IN list in a table, and use a SELECT subquery within an IN clause.

Looks like I found the problem. I actually counted the number of items in that IN statement. 1,077, but don’t hold me to that. I sneezed when counting and may have lost my place. There’s definitely enough to qualify as “an extremely large number,” I do believe.
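The workaround BOL describes is straightforward if you control the query: put the values into a table and let the IN clause reference it with a subquery. A minimal sketch (the temp table and its column name are just placeholders):

-- Load the ID list into a table once...
CREATE TABLE #SearchIds (id INT PRIMARY KEY);

INSERT INTO #SearchIds (id)
VALUES (391308), (391371), (392576); -- ...and so on for the rest of the list

-- ...then let the IN clause use a subquery instead of thousands of literals
SELECT [Extent1].[idPK],
       [Extent1].[idIQ],
       [Extent1].[availableQuantity],
       [Extent1].[price]
FROM [dbo].[INVENTORY] AS [Extent1]
WHERE [Extent1].[id] IN (SELECT id FROM #SearchIds);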

So I contacted our devs and told them the problem. They said this code was generated by a third-party app that uses Entity Framework to update our search indexes. They’ve added a task to move this functionality to an in-house stored procedure, but it will take a while to implement that change because it requires some application architecture changes.

So I’m stuck with my error logs filling with these. Bummer.

I know computer-generated SQL is not the most well-written code, but this is the first time I’ve ever seen any that doesn’t just perform poorly but actually causes errors. Three cheers for AI!