SQL: How to find out who truncated your table in SQL Server

Someone truncated your table. You would like to get your data back, but mostly you really just want to know who did it.

Getting the data back can be as easy as restoring a backup. But having a backup does not tell you who the culprit is.

This information can however be gleaned from the SQL Server Transaction Log. If your database is in simple recovery model you’d better be quick about it, so that you can get to the information before the log gets reused.

The following query will show you who the culprit is based on data in the active transaction log:

SELECT [Current LSN], [Operation], [Context], [Transaction ID], [Transaction Name], SUSER_SNAME([Transaction SID]) AS [Culprit]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction Name] = 'TRUNCATE TABLE'

Unfortunately it only shows you that the user truncated a table, but not exactly which table. So how can we determine which table this particular action relates to?

We know that SQL Server can create multiple entries in the transaction log for even the simplest operation. So we can look at all the entries with the same Transaction ID to determine exactly what happened.

SELECT [Current LSN], [Operation], [Context], [Transaction ID], [Transaction Name], SUSER_SNAME([Transaction SID]) AS [Culprit], [Lock Information]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] IN (SELECT [Transaction ID] FROM fn_dblog(NULL, NULL) WHERE [Transaction Name] = 'TRUNCATE TABLE')


Here I am including the Lock Information column. Since we know that in order to truncate a table a Sch-M (Schema Modification) lock will be taken, we can use that to determine which object it was taken on.

In this case the object id is 885578193. We can then use the following to obtain the name of the table which was truncated in that transaction.
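A lookup along these lines (a sketch; 885578193 is the object id taken from the Sch-M lock entry above) resolves the object id to the table name:

```sql
-- Simple resolution of the object id captured from the lock information
SELECT OBJECT_NAME(885578193) AS TruncatedTable;

-- Or, to see the schema as well:
SELECT s.name AS SchemaName, o.name AS TableName
FROM sys.objects AS o
JOIN sys.schemas AS s ON o.schema_id = s.schema_id
WHERE o.object_id = 885578193;
```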


If the data is no longer available in the online transaction log, and you are using the FULL recovery model and have log backups, you can use the same method with the fn_dump_dblog function instead. It reads not only the online transaction log but also log backups.
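As a sketch (fn_dump_dblog is undocumented; the backup file path below is illustrative, and the function expects 63 trailing DEFAULT arguments after the file specification):

```sql
SELECT [Current LSN], [Operation], [Transaction ID], [Transaction Name],
       SUSER_SNAME([Transaction SID]) AS [Culprit]
FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'C:\Backups\MyDatabase_log.trn',
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
WHERE [Transaction Name] = N'TRUNCATE TABLE';
```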

Note: if the log contains more than one TRUNCATE TABLE transaction, the subquery returns multiple rows, so use IN instead of =: WHERE [Transaction ID] IN (SELECT [Transaction ID] FROM fn_dblog(NULL, NULL) WHERE [Transaction Name] = 'TRUNCATE TABLE')


SQL Query: How to retrieve the Table columns from a stored procedure

SELECT name AS ColumnName
,system_type_name AS DataType
,CASE WHEN is_nullable = 0 THEN 'NOT NULL'
      WHEN is_nullable = 1 THEN 'NULL'
 END AS Nullability
,column_ordinal AS OrdinalPosition
,('Col_' + CAST(column_ordinal AS varchar(4))) AS Alias
FROM sys.dm_exec_describe_first_result_set
(N'[dbo].[Registration]', NULL, 0);


SQL SERVER – How to retrieve the metadata of a stored procedure

In my earlier articles, I wrote about sys.dm_exec_describe_first_result_set, a dynamic management function shipped with SQL Server 2012. It is very handy when you need to know the metadata of the first possible result set of any Transact-SQL batch. Today, I came across an issue when I tried to get the metadata of a stored procedure in an earlier version of SQL Server and wanted to use the list of columns for further manipulation. However, I could not find an equivalent of sys.dm_exec_describe_first_result_set in those earlier versions, so I started developing a workaround.

Given below are the solutions.

SOLUTION 1 : For SQL Server 2012 and above using sys.dm_exec_describe_first_result_set

USE AdventureWorks2012;
SELECT * FROM sys.dm_exec_describe_first_result_set
(N'[dbo].[uspGetManagerEmployees] 16', null, 0) ;


SOLUTION 2 : For SQL Server 2005 and above using OPENROWSET

USE AdventureWorks2012
--DROP TABLE #temp1
SELECT * INTO  #temp1
FROM OPENROWSET('SQLNCLI', 'Server=(local);uid=sa;pwd=test',
'EXEC AdventureWorks2012.[dbo].[uspGetManagerEmployees] 16');
SELECT * FROM tempdb.sys.columns
WHERE object_id=object_id('tempdb..#temp1')
--View the column name / metadata of the
--above created temporary table.
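If OPENROWSET fails with an "ad hoc access" error, the instance may need the 'Ad Hoc Distributed Queries' option enabled first (a server-wide setting, requiring ALTER SETTINGS permission):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
```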


SQL Query: Find all indexes, fragmentation, and rebuild in the database

--Find all the indexes in the database, find the index fragmentation size in percentage, reorganize or rebuild indexes

-- Find the average fragmentation percentage of all indexes
-- in the HumanResources.Employee table.

--avg_fragmentation_in_percent = (number of records * record size) / (total pages used * page size)


USE [DBName];

SELECT f.object_id, s.name AS SchemaName, o.name AS TableName, i.index_id, i.name AS IndexName, p.rows AS TableRowCount, page_count, fragment_count, avg_fragment_size_in_pages, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(N'DBName'), NULL, NULL, NULL, NULL) AS f
INNER JOIN sys.indexes AS i
ON f.object_id = i.object_id
AND f.index_id = i.index_id
INNER JOIN sys.objects AS o
ON i.object_id = o.object_id
INNER JOIN sys.schemas AS s
ON o.schema_id = s.schema_id
INNER JOIN sys.partitions AS p
ON i.object_id = p.object_id
AND i.index_id = p.index_id
WHERE o.type = 'U' --include only user table indexes
AND i.index_id > 0 --exclude heaps (index_id = 0)
ORDER BY SchemaName, TableName, IndexName


-- Rebuild the AK_Employee_LoginID index on the HumanResources.Employee table.
ALTER INDEX AK_Employee_LoginID ON HumanResources.Employee
REBUILD WITH (ONLINE = ON);  --a rebuild defragments more thoroughly than a reorganize
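The measurement and the maintenance can be combined. A common rule-of-thumb pattern (the 5% and 30% thresholds follow the guidance in the SQL Server documentation and are only a starting point) generates a reorganize or rebuild statement per index based on measured fragmentation:

```sql
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
    + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
    + CASE WHEN f.avg_fragmentation_in_percent > 30
           THEN N' REBUILD;' ELSE N' REORGANIZE;' END + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS f
INNER JOIN sys.indexes AS i ON f.object_id = i.object_id AND f.index_id = i.index_id
INNER JOIN sys.objects AS o ON i.object_id = o.object_id
INNER JOIN sys.schemas AS s ON o.schema_id = s.schema_id
WHERE o.type = 'U'                          --user tables only
  AND i.index_id > 0                        --skip heaps
  AND f.avg_fragmentation_in_percent > 5;   --ignore light fragmentation

PRINT @sql;                  --inspect the generated statements first
--EXEC sys.sp_executesql @sql;
```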


Defragmenting compacts as well as changes physical order

Index maintenance compacts a structure *and* changes physical order: both can be important!

Editor's Note: this post was updated in June 2014 to link to an online index rebuild bug that can cause corruption.

Once upon a time, there was a database server with 500GB of data and a heavy read workload of dynamic queries. Data was updated frequently throughout the day and index tuning was a serious challenge. At the best of times, performance was dicey.

Then things went bad

Application performance plummeted. Lots of code changes had been released recently, data was growing rapidly, and the hardware wasn’t the absolute freshest. There was no single smoking gun– there were 20 smoking guns!

A team was formed of developers and IT staff to tackle the performance issue. Early in the process they reviewed maintenance on the database server. Someone asked about index fragmentation. The DBA manager said, “Of course we’re handling fragmentation!” But a few queries were run and some large, seriously fragmented indexes were discovered in production.

The DBA explained that fragmentation wasn’t the problem. She didn’t have automated index maintenance set up, but she periodically manually defragmented indexes that were more than 75% fragmented.

Bad, meet ugly

At this point the whole performance team flipped out. Trust disappeared. Managers squirmed. More managers were called in. The DBA tried to change the subject, but it was just too late. More than a week was wasted over Fragmentation-Gate. It was a huge, embarrassing distraction, and it solved nothing.

Here’s the deal– the DBA was right. Fragmentation wasn’t the root cause of the performance problem. The DBA was a super smart person and very talented at performance tuning, too! (And no, I’m not secretly talking about myself here– this is not the “royal she”.) But she made a strategic miscalculation: she should have set up occasional automated index maintenance to align with her team’s normal practices and standards.

Why you need automated index maintenance

When performance gets bad, one of the very first things people look at is whether systems involved are configured according to best practices. If you’re not following a best practice, you need to have a really good reason for it.

Regular index maintenance still has a lot of merit: even in Shangri-La, where your data all fits into memory and your storage system is a rockstar with random IO, index maintenance can help make sure that you don’t have a lot of empty space wasting loads of memory.

It’s still a good idea to automate index maintenance. Absolutely don’t go too crazy with it– monitor the runtime and IO use and run it only at low volume times to make sure it helps more than it hurts. Be careful, but don’t skip it.

How much downtime can you spare?

Before you implement index maintenance, find out how much time tables can be offline in each of your databases. Then, figure out what operations you want to use.

To Rebuild or Reorganize: That is the Question

First off: ‘Reorganize’ and ‘Rebuild’ are two different operations that each reduce fragmentation in an index. They work differently toward the same end. You don’t need to run both against the same index. (I sometimes find that people are doing both against every index in a maintenance plan. That’s just double the work and NOT double the fun.)

Rebuild: An index ‘rebuild’ creates a fresh, sparkling new structure for the index. If the index is disabled, rebuilding brings it back to life. You can apply a new fillfactor when you rebuild an index. If you cancel a rebuild operation midway, it must roll back (and if it’s being done offline, that can take a while).

Reorganize: This option is more lightweight. It runs through the leaf level of the index, and as it goes it fixes physical ordering of pages and also compacts pages to apply any previously set fillfactor settings. This operation is always online, and if you cancel it then it’s able to just stop where it is (it doesn’t have a giant operation to rollback).
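For instance, reorganizing the index from the earlier rebuild example is just:

```sql
ALTER INDEX AK_Employee_LoginID ON HumanResources.Employee
REORGANIZE;   --always online; safe to cancel midway without a big rollback
```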

Factors to consider:

  • Standard Edition rebuilds ain’t awesome. If you’ve got SQL Server Standard Edition, index rebuilds are always an offline operation. Bad news: they’re also single-threaded. (Ouch!)
  • Enterprise Edition rebuilds have gotchas. With SQL Server Enterprise Edition, you can specify an online rebuild — unless the index contains large object types. (This restriction is relaxed somewhat in SQL Server 2012). You can also use parallelism when creating or rebuilding an index— and that can save a whole lot of time. Even with an online rebuild, a schema modification lock (SCH-M) is needed at the time the fresh new index is put in place. This is an exclusive lock and in highly concurrent environments, getting it can be a big (blocking) problem.
  • There’s a bug in SQL Server 2012 Enterprise Edition Rebuilds that can cause corruption. If you’re running SQL Server 2012 SP1 – SP2, parallel online index rebuilds can cause corruption. Read about your options here.
  • Rebuilding partitioned tables is especially tricky. You can rebuild an entire partitioned index online– but nobody really wants to do that because they’re huge! The whole idea behind horizontal partitioning is to break data into more manageable chunks, right? Unfortunately, partition level rebuilds are offline until SQL Server 2014.
  • Reorganizing can be pretty cool. ‘Reorganizing’ an index is always an online op, no matter what edition of SQL Server you’re using. It doesn’t require a schema mod lock, so it can provide better concurrency. Reorganizing only defragments the leaf level of the index. On large tables it can take longer than a rebuild would take, too. But as I said above, it’s nice that you can reorganize for a while and then stop without facing a massive rollback.


I’m really excited about new index rebuild options that are shipping in SQL Server 2014. Check *this* out:

ALTER INDEX OhSoFragmented ON dbo.MyTable REBUILD
WITH (
    ONLINE = ON (
        WAIT_AT_LOW_PRIORITY (
            MAX_DURATION = 1 MINUTES,
            ABORT_AFTER_WAIT = BLOCKERS
        )
    )
);

So we’ve got new tools for those concurrency problems I was talking about with online rebuilds. Now, we can say how long we’re willing to wait to get that schema modification lock (in minutes), and if we have to keep waiting what should happen. (Options: kill those who are blocking us, kill ourself, or do nothing.) Obviously there are some situations where just killing the blockers could be a terrible problem, but I’m interested to experiment with this.

You Didn’t Answer the Question: Do I Use Rebuild or Reorganize?

Yeah, I totally dodged that question, didn’t I?

If you have a regularly scheduled downtime every weekend, you’re probably fine with straight up index rebuilds, even if you have Standard Edition. Single threaded offline index maintenance may not be the hottest thing in the world, but hey, if you’ve got time for it then embrace the simplicity.

If you have Enterprise Edition, embrace parallel index rebuilds– and use the ONLINE option for indexes that allow it if people need to access the database during your maintenance window.

If you have database mirroring or AlwaysOn Availability Groups, tread with caution– particularly with rebuilds. It’s easy to generate a ton of IO with index maintenance, and it could mean putting your secondaries or mirror so far behind that they can’t catch up.

Maintenance plans or custom scripts?

You can go the easy way and use SQL Server Maintenance Plans, but unfortunately they’re very simplistic: you can only say “rebuild all the indexes” or “reorganize all the indexes”. You cannot say, “If the index is 45% or more fragmented, rebuild it– otherwise do nothing.” If you don’t spend much time with SQL Server and you’ve got downtime available every weekend, this can be a decent option.

If you need to minimize downtime, custom index maintenance scripts are the way to go. Our favorite: Ola Hallengren’s maintenance scripts. These are super flexible, well documented, and … free! The scripts have all sorts of cool options like time boxing and statistics maintenance.

Some tips for using Ola Hallengren’s index maintenance scripts:

  1. Download and configure them on a test instance first. There are a lot of parameters and options, and you'll need to play with them.
  2. Get used to the 'CmdExec' job step type. When you install the scripts you'll see that the SQL Server Agent jobs run index maintenance using a call to sqlcmd.exe in an MSDOS style step. That's by design!
  3. Use the examples on the website. If you scroll to the bottom of the index maintenance page you’ll find all sorts of examples showing how to get the procedure to do different useful things.
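A typical call looks something like this (the pattern follows the examples documented on the site; the parameter values here are illustrative, not recommendations):

```sql
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL';
```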

Find out when maintenance fails

Don’t forget to make sure that your maintenance jobs are successfully logging their progress. Set up Database Mail and operators so jobs let you know if they fail.
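A minimal sketch of the Agent side (the operator name, e-mail address, and job name below are placeholders for your own):

```sql
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba-team@example.com';

EXEC msdb.dbo.sp_update_job
    @job_name = N'IndexOptimize - USER_DATABASES',
    @notify_level_email = 2,   --2 = notify on failure
    @notify_email_operator_name = N'DBA Team';
```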

Tell your boss you did a good thing

Finally, write up a quick summary of what you did, which approach you chose (custom scripts or maintenance plans), and why. Share it with your manager and explain that you've set up automated index maintenance as a proactive step.

Having your manager know you’re taking the time to follow best practices certainly won’t hurt– and one of these days, it just might help you out. (Even if it’s just by keeping everyone from following a red herring.)

SQL: Find the key identifier column in all the tables, in all the databases on the server

Scenario: Sometimes we use a key identifier column across many tables and across many databases. For example, to identify a customer we might use CustomerID in all the tables that hold customer-related data.

DECLARE @RETURN_VALUE INT
    ,@command1 VARCHAR(MAX)

IF OBJECT_ID('tempdb..#temp', 'U') IS NOT NULL DROP TABLE #temp
IF OBJECT_ID('tempdb..#temp2', 'U') IS NOT NULL DROP TABLE #temp2

SELECT TOP 0 CAST(NULL AS VARCHAR(100)) AS ServerName, Table_Catalog, Table_Schema, Table_Name, Column_Name
INTO #temp
FROM INFORMATION_SCHEMA.COLUMNS

SET @command1 = 'use [?]; INSERT INTO #temp SELECT @@ServerName, c.Table_Catalog, c.Table_Schema, c.Table_Name, c.Column_Name
FROM INFORMATION_SCHEMA.COLUMNS c
JOIN INFORMATION_SCHEMA.TABLES t ON c.Table_Name = t.Table_Name AND t.Table_Type <> ''VIEW''

/*include possible alternative column names*/
WHERE c.Column_Name IN (
''NHSNumber'', ''NHSNo'', ''NHSNum'', ''NHS Number'', ''NHS_Number'', ''NHS_No'', ''NHS_Num'', ''NHS No#'', ''xNHSNumber''
--, ''SK_PatientID''
/*,''DOB'', ''Dateofbirth'', ''BirthDate'', ''Date of Birth'', ''Birth Date'', ''Date_of_Birth'', ''Birth_Date'',
''Postcode'', ''PCode'', ''Post_Code'', ''Post Code'', ''Postcode_single_space_e_Gif'', ''Postcode_no_space'', ''Postcode_NoSpace'',
''PostCode(Home)'', ''PostCode(Location)'', ''Postcode_8'', ''Postcode_8_chars'', ''Postcode of Usual Address'', ''Post code at appointment date''*/
)
AND ''?'' IN (''SUS'')/*NOT IN (''master'',''msdb'',''tempdb'',''model'')*/'

EXEC @RETURN_VALUE = sp_MSforeachdb @command1
--SELECT * FROM #temp

CREATE TABLE #temp2 (
 ServerName VARCHAR(100)
,Table_Catalog NVARCHAR(256)
,Table_Schema NVARCHAR(256)
,Table_Name SYSNAME
,Column_Name SYSNAME
,SQLStatement VARCHAR(MAX)
,SK_PatientID_Statement VARCHAR(MAX)
)

INSERT INTO #temp2 (ServerName, Table_Catalog, Table_Schema, Table_Name, Column_Name, SQLStatement, SK_PatientID_Statement)
SELECT ServerName, Table_Catalog, Table_Schema, Table_Name, Column_Name
,'SELECT TOP 10 [' + Column_Name + '] FROM [' + Table_Catalog + '].[' + Table_Schema + '].[' + Table_Name + '] WHERE [' + Column_Name + '] IS NOT NULL' AS SQLStatement
,'BEGIN TRY ALTER TABLE [' + Table_Catalog + '].[' + Table_Schema + '].[' + Table_Name + '] ADD SK_PatientID int; RAISERROR(''SK_PatientID Column ADDED'', 0, 1); END TRY BEGIN CATCH RAISERROR(''SK_PatientID Column already exists'', 0, 1); END CATCH' AS SK_PatientID_Statement
FROM #temp

SELECT DISTINCT ServerName, Table_Catalog, Table_Schema, Table_Name, Column_Name, SQLStatement, SK_PatientID_Statement
FROM #temp2
ORDER BY 2, 3, 4, 5