Recover a database with a DAMAGED and/or LOST log file

In this procedure we’ll handle one of the worst situations a DBA can face: corrupted files and data loss. When this happens, the usual approach is a restore, but here we’ll use SQL Server features to reduce downtime (avoiding a complete restore) and data loss.

Possible starting situations:
– Corrupted log file
– Corrupted log file during a long transaction
– Log file volume corrupted or lost during transactions

Depending on the state of the database at crash time, there are different solutions:

SCENARIO 1: No transactions running during crash.
Solution:
If no transactions were running at the moment of the crash, the solution is easy, because SQL Server automatically rebuilds a lost log file during database startup. So:
1) Detach the corrupted database
2) Rename the old corrupted log file to *.OLD
3) Attach the database using:

CREATE DATABASE [MYDATABASE] ON
 ( FILENAME = N'D:\Microsoft SQL Server\YourDataPath\Data\Database.mdf' )
 FOR ATTACH_REBUILD_LOG
 GO
 Notes:
 - SQL Server will try to rebuild the log file in the ORIGINAL path.
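
Putting steps 1–3 together, this is a minimal sketch (the database name and path are placeholders to adapt):

```sql
USE [master];
GO
-- 1) detach the corrupted database
EXEC sp_detach_db @dbname = N'MYDATABASE';
GO
-- 2) rename the corrupted log file to *.OLD at the OS level, then:
-- 3) re-attach, letting SQL Server rebuild a fresh log file
CREATE DATABASE [MYDATABASE] ON
    ( FILENAME = N'D:\Microsoft SQL Server\YourDataPath\Data\Database.mdf' )
FOR ATTACH_REBUILD_LOG;
GO
```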

SCENARIO 2: Transactions running during crash
Solution:
ATTACH_REBUILD_LOG *IS NOT* allowed in this situation, because SQL Server finds open transactions in the database and pending rollback/roll-forward operations. So if you try, you’ll get the following error:

“File activation failure. The physical file name "D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf" may be incorrect.

The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.
Msg 1813, Level 16, State 2, Line 1
Could not open new database 'MYDATABASE'.
CREATE DATABASE is aborted.”

So, follow this procedure:
1) Detach the database (sp_detach_db 'MyDatabase')
2) Rename the datafile and logfile to *.MDF.OLD and *.LDF.OLD
3) Create a new database with THE SAME name and with datafile and logfile in the identical original positions
4) ALTER DATABASE MyDatabase SET OFFLINE
5) Now put the original datafile back in the original position, replacing the new one
6) ALTER DATABASE MyDatabase SET ONLINE. This will fail, but now we can rebuild the log file:
7) ALTER DATABASE [MyDatabase] REBUILD LOG ON (NAME='MyDatabaseLog', FILENAME='D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf')
At this point the database will be usable, but at the end SQL Server will show this warning:
Warning: The log for database 'MyDatabase' has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context of the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.
8) Final step: open the database to the users:
ALTER DATABASE [MyDatabase] SET MULTI_USER

Notes:
– In FULL recovery model, take a new FULL BACKUP as soon as possible, because the RESTORE chain is broken and you need a new baseline for log backups.
– *Ask the application team to double-check application consistency*, because the recovered data could be NOT consistent at the application level (we have done an incomplete recovery). If the application checks fail and nothing can be fixed rapidly at the application level, you have to consider, in the end, a complete restore.
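
The whole scenario-2 sequence can be sketched as a single script (names and paths are placeholders; the REBUILD LOG syntax is undocumented, so use it only in this emergency scenario):

```sql
USE [master];
GO
EXEC sp_detach_db @dbname = N'MyDatabase';     -- 1) detach
GO
-- 2) rename the original files to *.MDF.OLD / *.LDF.OLD at the OS level
-- 3) create a dummy database with the SAME name and file positions, then:
ALTER DATABASE MyDatabase SET OFFLINE;         -- 4)
GO
-- 5) copy the ORIGINAL datafile back over the dummy one, then:
ALTER DATABASE MyDatabase SET ONLINE;          -- 6) expected to fail
GO
ALTER DATABASE [MyDatabase] REBUILD LOG ON     -- 7) rebuild the log
    (NAME = 'MyDatabaseLog', FILENAME = 'D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf');
GO
ALTER DATABASE [MyDatabase] SET MULTI_USER;    -- 8) reopen to users
GO
```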

Monitoring Wait Events of a single Session or Query in SQL Server

Using sys.dm_os_wait_stats is not useful for deep troubleshooting, because this view contains wait events for ALL processes/queries that have run on your instance since the last restart. With the command DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) we can reset this view without a restart, but on big, stressed instances with many processes and applications running this isn’t a really good solution.

SQL Server 2008 introduced a new, powerful and flexible way to collect performance data: Extended Events.
With Extended Events we can collect performance data for a wide range of counters, with almost any filter we want.

The following procedure collects and manages, asynchronously, the data of a SINGLE session.
Using this solution you can collect wait event info for your own session while testing, or for a single user or application.

NOTE: stats are collected in a filesystem folder, so watch carefully how the used space grows while collecting.

--- 1) Drop the monitor session if it exists.
IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'WaitMon')
    DROP EVENT SESSION WaitMon ON SERVER
GO

--- 2) Create the new monitor session.
CREATE EVENT SESSION WaitMon ON SERVER
ADD EVENT sqlos.wait_info
    (WHERE sqlserver.session_id = 79 )              ---INSERT HERE the session id to monitor
ADD TARGET package0.asynchronous_file_target       
    (SET FILENAME = N'S:\temp\EE_WaitMonStats.xel', ---INSERT HERE the correct path for collected data stats
     METADATAFILE = N'S:\temp\EE_WaitMonStats.xem')
WITH (max_dispatch_latency = 1 seconds);
GO

--- 3) Start the Monitor session
ALTER EVENT SESSION WaitMon ON SERVER STATE = START;
GO

--- 4) >>>>>>...run your query or wait for data collection from spid session....<<<<<<

--- 5) Stop the Monitor session
ALTER EVENT SESSION WaitMon ON SERVER STATE = STOP;
GO

--- 6) Load collected data in a temp table
CREATE TABLE #RawEventData (
    Rowid  INT IDENTITY PRIMARY KEY,
    event_data XML);
GO

INSERT INTO #RawEventData(event_data)
SELECT
    CAST (event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file
        ('S:\temp\EE_WaitMonStats*.xel',          ---INSERT HERE the correct path for collected data stats 
         'S:\temp\EE_WaitMonStats*.xem', null, null);
GO

--- 7) Query data to analyze wait events
SELECT
    waits.[Wait Type],
    COUNT (*) AS [Wait Count],
    SUM (waits.[Duration]) AS [Total Wait Time (ms)],
    SUM (waits.[Duration]) - SUM (waits.[Signal Duration]) AS [Total Resource Wait Time (ms)],
    SUM (waits.[Signal Duration]) AS [Total Signal Wait Time (ms)]
FROM
    (SELECT
        event_data.value ('(/event/@timestamp)[1]', 'DATETIME') AS [Time],
        event_data.value ('(/event/data[@name=''wait_type'']/text)[1]', 'VARCHAR(100)') AS [Wait Type],
        event_data.value ('(/event/data[@name=''opcode'']/text)[1]', 'VARCHAR(100)') AS [Op],
        event_data.value ('(/event/data[@name=''duration'']/value)[1]', 'BIGINT') AS [Duration],
        event_data.value ('(/event/data[@name=''signal_duration'']/value)[1]', 'BIGINT') AS [Signal Duration]
     FROM #RawEventData
    ) AS waits
WHERE waits.[op] = 'End'
GROUP BY waits.[Wait Type]
ORDER BY [Total Wait Time (ms)] DESC;
GO 

--- 8) Cleanup
DROP TABLE #RawEventData;
GO 

Massive SQL Server Database Moving Using Detach – Attach: The Complete Procedure

This is a complete, configurable metascript that generates detach statements, file move statements and re-attach statements for every database in your instance.
You only have to configure the final destination for the datafiles and the list of databases you want to migrate.

The script generated for every selected database will look like this (remember to use CTRL+T in Management Studio to switch to text output mode):

============================================================
  DATABASE: myTestDB
============================================================
 
--  1) DETACH DATABASE
 
USE [master]
GO
ALTER DATABASE [myTestDB] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
EXEC master.dbo.sp_detach_db @dbname = N'myTestDB', @keepfulltextindexfile=N'false'
GO
 
--  2) DATAFILE OS MOVING
 
exec xp_cmdshell 'move "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\myTestDB_log.LDF" L:\LogPath'
exec xp_cmdshell 'move "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\myTestDB.mdf" D:\DataPath'
exec xp_cmdshell 'move "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\myTestDB_2.ndf" D:\DataPath'
exec xp_cmdshell 'move "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\myTestDB_3.ndf" D:\DataPath'
exec xp_cmdshell 'move "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\myTestDB_4.ndf" D:\DataPath'
exec xp_cmdshell 'move "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\myTestDB_5.ndf" D:\DataPath'
 
--  3) ATTACH DATABASE
 
CREATE DATABASE [myTestDB] ON 
( FILENAME = N'L:\LogPath\myTestDB_log.LDF' ),
( FILENAME = N'D:\DataPath\myTestDB.mdf' ),
( FILENAME = N'D:\DataPath\myTestDB_2.ndf' ),
( FILENAME = N'D:\DataPath\myTestDB_3.ndf' ),
( FILENAME = N'D:\DataPath\myTestDB_4.ndf' ),
( FILENAME = N'D:\DataPath\myTestDB_5.ndf' ) 
FOR ATTACH
GO

This is the complete metascript:

set nocount on
----------------------------------------------------
--CONFIGURE HERE
----------------------------------------------------
--     DESTINATION PATH
----------------------------------------------------
--data files destination 
    declare @pathdata nvarchar(max) 
    set @pathdata = 'D:\DataPath\'   --note the trailing backslash
--log files destination   
    declare @pathlog nvarchar(max) 
    set @pathlog = 'L:\LogPath\'
----------------------------------------------------
--     DATABASE LIST
----------------------------------------------------
    create table #dbs (dbname nvarchar(255))
    --insert into #dbs values('db1')
    --insert into #dbs values('db2')
    --insert into #dbs values('db3')
--Note: if you want to move ALL databases, simply insert all db names into the #dbs table using:
Insert into #dbs select name from master..sysdatabases where name not in ('master','model','msdb','tempdb')
---------------------------------------------------

declare @aDB nvarchar(255)
declare @aFile nvarchar(255)
declare @aType nvarchar(255)
declare @file_n integer
declare @count integer
declare @sep nvarchar(1)

declare db_cur CURSOR FOR select * from #dbs
OPEN db_cur
FETCH NEXT FROM db_cur INTO @aDB
WHILE @@FETCH_STATUS = 0
BEGIN
	print '============================================================'
    print '  DATABASE: ' + @aDB
    print '============================================================'
    print ''
	
	print '--  1) DETACH DATABASE'
	print ''
	print 'USE [master]'
	print 'GO'
	print 'ALTER DATABASE [' + @aDB + '] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE'
	print 'GO'
	print 'EXEC master.dbo.sp_detach_db @dbname = N''' + @aDB +''', @keepfulltextindexfile=N''false'''
	print 'GO'
	print ''
	
	print '--  2) DATAFILE OS MOVING'	
	print ''
		
	declare file_cur CURSOR FOR select type_desc,physical_name from sys.master_files where db_name(database_id) = @aDB order by type_desc,physical_name
	OPEN file_cur
		
    FETCH NEXT FROM file_cur INTO @aType,@aFile
    WHILE @@FETCH_STATUS = 0
    BEGIN
		if (@aType='ROWS')
		BEGIN			
			print 'exec xp_cmdshell ''move "' + @aFile + '" ' + @pathdata + ''''
		END
		ELSE
		BEGIN			
			print 'exec xp_cmdshell ''move "' + @aFile + '" ' + @pathlog + ''''
		END
		set @count=@count+1
	FETCH NEXT FROM file_cur INTO @aType,@aFile
	END
	CLOSE file_cur
	DEALLOCATE file_cur	
	print ''
	print '--  3) ATTACH DATABASE'	
	print ''
	--Create ATTACH Statement
	set @count=1
	set @sep=','	
	PRINT 'CREATE DATABASE [' + @aDB + '] ON '
	declare file_cur CURSOR FOR select type_desc,physical_name from sys.master_files where db_name(database_id) = @aDB order by type_desc,physical_name
	select @file_n=count(*) from sys.master_files where db_name(database_id) = @aDB
	OPEN file_cur
    FETCH NEXT FROM file_cur INTO @aType,@aFile
    WHILE @@FETCH_STATUS = 0
    BEGIN
		if (@count=@file_n)
		BEGIN
			set @sep=' '
		END
		
		if (@aType='ROWS')
		BEGIN
			print '( FILENAME = N''' + @pathdata + RIGHT(@aFile, CHARINDEX('\', REVERSE(@aFile))-1) +''' )' + @sep
		END
		ELSE
		BEGIN
			print '( FILENAME = N''' + @pathlog + RIGHT(@aFile, CHARINDEX('\', REVERSE(@aFile))-1) +''' )' + @sep
		END
		set @count=@count+1
	FETCH NEXT FROM file_cur INTO @aType,@aFile
	END
	CLOSE file_cur
	DEALLOCATE file_cur
	
	print 'FOR ATTACH'
	print 'GO'
	print ''
FETCH NEXT FROM db_cur INTO @aDB
END
CLOSE db_cur
DEALLOCATE db_cur
drop table #dbs

Analyze the SQL Server default trace to investigate instance events

Querying the default trace is the best way to investigate unusual or critical events that happened in SQL Server and are not logged in the errorlog files.
It’s not difficult to find useful information there, but the trace is full of codes that need translating to make it readable and useful. This is my query, based on the sys.fn_trace_gettable function and the sys.trace_events system table.

DECLARE @TraceFileName NVARCHAR(512)  --Find the trace file name
SELECT @TraceFileName = path
  FROM sys.traces
 WHERE id = 1
 
 SELECT StartTime,  --then we can query, translating the EventClass  
	TE.name as EventClass , 
    TextData,  
    HostName,  
    ApplicationName,  
    LoginName,  
    SPID,           --this is the SPID of the session:   
    ObjectName      --can be used for following analysis 	
   FROM sys.fn_trace_gettable(@TraceFileName,default) TG
   left join
   sys.trace_events TE on TG.EventClass=TE.trace_event_id
   where TE.name is not null

If you are looking for the query that generated an event (log file growth, deleted object and so on…) you can look for it by querying the input buffer of that SPID.
NOTE: a SPID can be re-used during the instance lifetime, and the input buffer is cleaned and re-used by subsequent queries

dbcc inputbuffer([SPID])

The following is the full list of events you can filter on in the previous query if you need to monitor a particular kind of event.
Not all of them are used in the default trace. If you need an event that is not traced, you have to create a custom trace.

select trace_event_id,name from sys.trace_events 

trace_event_id name
-------------- -------------------------------
10             RPC:Completed
11             RPC:Starting
12             SQL:BatchCompleted
13             SQL:BatchStarting
14             Audit Login
15             Audit Logout
16             Attention
17             ExistingConnection
18             Audit Server Starts And Stops
19             DTCTransaction
20             Audit Login Failed
21             EventLog
22             ErrorLog
23             Lock:Released
24             Lock:Acquired
25             Lock:Deadlock
26             Lock:Cancel
27             Lock:Timeout
28             Degree of Parallelism
33             Exception
34             SP:CacheMiss
35             SP:CacheInsert
36             SP:CacheRemove
37             SP:Recompile
38             SP:CacheHit
40             SQL:StmtStarting
41             SQL:StmtCompleted
42             SP:Starting
43             SP:Completed
44             SP:StmtStarting
45             SP:StmtCompleted
46             Object:Created
47             Object:Deleted
50             SQLTransaction
51             Scan:Started
52             Scan:Stopped
53             CursorOpen
54             TransactionLog
55             Hash Warning
58             Auto Stats
59             Lock:Deadlock Chain
60             Lock:Escalation
61             OLEDB Errors
67             Execution Warnings
68             Showplan Text (Unencoded)
69             Sort Warnings
70             CursorPrepare
71             Prepare SQL
72             Exec Prepared SQL
73             Unprepare SQL
74             CursorExecute
75             CursorRecompile
76             CursorImplicitConversion
77             CursorUnprepare
78             CursorClose
79             Missing Column Statistics
80             Missing Join Predicate
81             Server Memory Change
82             UserConfigurable:0
83             UserConfigurable:1
84             UserConfigurable:2
85             UserConfigurable:3
86             UserConfigurable:4
87             UserConfigurable:5
88             UserConfigurable:6
89             UserConfigurable:7
90             UserConfigurable:8
91             UserConfigurable:9
92             Data File Auto Grow
93             Log File Auto Grow
94             Data File Auto Shrink
95             Log File Auto Shrink
96             Showplan Text
97             Showplan All
98             Showplan Statistics Profile
100            RPC Output Parameter
102            Audit Database Scope GDR Event
103            Audit Schema Object GDR Event
104            Audit Addlogin Event
105            Audit Login GDR Event
106            Audit Login Change Property Event
107            Audit Login Change Password Event
108            Audit Add Login to Server Role Event
109            Audit Add DB User Event
110            Audit Add Member to DB Role Event
111            Audit Add Role Event
112            Audit App Role Change Password Event
113            Audit Statement Permission Event
114            Audit Schema Object Access Event
115            Audit Backup/Restore Event
116            Audit DBCC Event
117            Audit Change Audit Event
118            Audit Object Derived Permission Event
119            OLEDB Call Event
120            OLEDB QueryInterface Event
121            OLEDB DataRead Event
122            Showplan XML
123            SQL:FullTextQuery
124            Broker:Conversation
125            Deprecation Announcement
126            Deprecation Final Support
127            Exchange Spill Event
128            Audit Database Management Event
129            Audit Database Object Management Event
130            Audit Database Principal Management Event
131            Audit Schema Object Management Event
132            Audit Server Principal Impersonation Event
133            Audit Database Principal Impersonation Event
134            Audit Server Object Take Ownership Event
135            Audit Database Object Take Ownership Event
136            Broker:Conversation Group
137            Blocked process report
138            Broker:Connection
139            Broker:Forwarded Message Sent
140            Broker:Forwarded Message Dropped
141            Broker:Message Classify
142            Broker:Transmission
143            Broker:Queue Disabled
144            Broker:Mirrored Route State Changed
146            Showplan XML Statistics Profile
148            Deadlock graph
149            Broker:Remote Message Acknowledgement
150            Trace File Close
151            Database Mirroring Connection
152            Audit Change Database Owner
153            Audit Schema Object Take Ownership Event
154            Audit Database Mirroring Login
155            FT:Crawl Started
156            FT:Crawl Stopped
157            FT:Crawl Aborted
158            Audit Broker Conversation
159            Audit Broker Login
160            Broker:Message Undeliverable
161            Broker:Corrupted Message
162            User Error Message
163            Broker:Activation
164            Object:Altered
165            Performance statistics
166            SQL:StmtRecompile
167            Database Mirroring State Change
168            Showplan XML For Query Compile
169            Showplan All For Query Compile
170            Audit Server Scope GDR Event
171            Audit Server Object GDR Event
172            Audit Database Object GDR Event
173            Audit Server Operation Event
175            Audit Server Alter Trace Event
176            Audit Server Object Management Event
177            Audit Server Principal Management Event
178            Audit Database Operation Event
180            Audit Database Object Access Event
181            TM: Begin Tran starting
182            TM: Begin Tran completed
183            TM: Promote Tran starting
184            TM: Promote Tran completed
185            TM: Commit Tran starting
186            TM: Commit Tran completed
187            TM: Rollback Tran starting
188            TM: Rollback Tran completed
189            Lock:Timeout (timeout > 0)
190            Progress Report: Online Index Operation
191            TM: Save Tran starting
192            TM: Save Tran completed
193            Background Job Error
194            OLEDB Provider Information
195            Mount Tape
196            Assembly Load
198            XQuery Static Type
199            QN: Subscription
200            QN: Parameter table
201            QN: Template
202            QN: Dynamics
212            Bitmap Warning
213            Database Suspect Data Page
214            CPU threshold exceeded
215            PreConnect:Starting
216            PreConnect:Completed
217            Plan Guide Successful
218            Plan Guide Unsuccessful
235            Audit Fulltext
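
For example, to focus only on file growth events with the same query pattern, you can filter on the event names above (a sketch reusing the default trace; the column names come from sys.fn_trace_gettable):

```sql
--Show only file autogrow events from the default trace
DECLARE @TraceFileName NVARCHAR(512)
SELECT @TraceFileName = path
  FROM sys.traces
 WHERE id = 1

SELECT StartTime, TE.name AS EventClass, DatabaseName, FileName, SPID, ApplicationName
  FROM sys.fn_trace_gettable(@TraceFileName, DEFAULT) TG
  JOIN sys.trace_events TE ON TG.EventClass = TE.trace_event_id
 WHERE TE.name IN ('Data File Auto Grow', 'Log File Auto Grow')
 ORDER BY StartTime DESC
```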

SQL Jobs Monitoring: check last run datetime and duration

A simple query to rapidly check your jobs’ last run status and duration.
Useful for fast monitoring on many instances.

Nothing more to say: this is the code, based on msdb..sysjobs and msdb..sysjobhistory. If necessary, it’s easy to filter a single job_id or jobs with too-long durations.

select job_id,
job_name,
run_datetime,
SUBSTRING(run_duration, 1, 2) + ':' + SUBSTRING(run_duration, 3, 2) + ':' +
SUBSTRING(run_duration, 5, 2) AS run_duration,
convert(int,SUBSTRING(run_duration, 1, 2))*60 +
convert(int,SUBSTRING(run_duration, 3, 2)) as [min],
convert(float,SUBSTRING(run_duration, 5, 2)) as [sec]

from
(
select j.job_id,
job_name,
DATEADD(hh, -7, run_datetime) as run_datetime, --NOTE: hard-coded timezone offset; adjust or remove for your server
run_duration = RIGHT('000000' + CONVERT(varchar(6), h.run_duration), 6)
from
(
select
j.name as job_name,
run_datetime = max(CONVERT(DATETIME, RTRIM(run_date)) +
(run_time * 9 + run_time % 10000 * 6 + run_time % 100 * 10) / 216e4)
from msdb..sysjobhistory h
inner join msdb..sysjobs j
on h.job_id = j.job_id
group by j.name
) t
inner join msdb..sysjobs j
on t.job_name = j.name
inner join msdb..sysjobhistory h
on j.job_id = h.job_id and
t.run_datetime = CONVERT(DATETIME, RTRIM(h.run_date)) + (h.run_time * 9 + h.run_time % 10000 * 6 + h.run_time % 100 * 10) / 216e4
) dt
--where job_Id=@job_id

How to make your databases smaller and faster: find unused indexes

It’s a boring job but sometimes a good DBA has to do it.

Applications change, and you have to understand what has become useless in your databases: we are talking about unused indexes. In any SQL Server database indexes take up a lot of space and have to be updated every time an application updates an indexed table. You have to rebuild and reorganize them… and you have to back them up, every day.

SQL Server gives you a good instrument to understand which indexes are really used: sys.dm_db_index_usage_stats, a dynamic management view the SQL engine uses to collect exactly this information.
The first time an index is used, SQL Server adds a row for it to this view, with several different counters that are updated every time the index is used. On every instance restart SQL Server resets the counters.
Querying this view, it is easy to understand which indexes are really used:
– Indexes not listed here have never been used
– Indexes with blank counters have never been used since the last instance restart

These queries are ready for this kind of analysis. Remember that not every application always uses ALL of its indexes: some indexes are used only when specific application functions are turned on. For this reason, ALWAYS talk with the application support people before dropping anything.

--Indexes Never Used
DECLARE @dbid INT
SELECT @dbid = DB_ID(DB_NAME())

SELECT OBJECTNAME = OBJECT_NAME(I.OBJECT_ID),
INDEXNAME = I.NAME,
I.INDEX_ID
FROM SYS.INDEXES I
JOIN SYS.OBJECTS O
ON I.OBJECT_ID = O.OBJECT_ID
WHERE OBJECTPROPERTY(O.OBJECT_ID,'IsUserTable') = 1
AND I.INDEX_ID NOT IN (
	SELECT S.INDEX_ID
	FROM SYS.DM_DB_INDEX_USAGE_STATS S
	WHERE S.OBJECT_ID = I.OBJECT_ID
	AND I.INDEX_ID = S.INDEX_ID
	AND DATABASE_ID = @dbid)
	ORDER BY OBJECTNAME,
I.INDEX_ID,
INDEXNAME ASC
GO
--Indexes never used since the last restart
DECLARE @dbid INT
SELECT @dbid = DB_ID(DB_NAME())

SELECT u.*
FROM [sys].[indexes] i
JOIN [sys].[objects] o
ON (i.OBJECT_ID = o.OBJECT_ID)
LEFT JOIN [sys].[dm_db_index_usage_stats] u
ON (i.OBJECT_ID = u.OBJECT_ID)
AND i.[index_id] = u.[index_id]
AND u.[database_id] = @dbid 
WHERE o.[type] <> 'S'
AND i.[type_desc] <> 'HEAP'
AND u.[user_seeks] + u.[user_scans] + u.[user_lookups] = 0
ORDER BY i.[name] asc
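
A related check, with the same “talk to the application people first” caveat, is finding indexes that are written to but never read: they cost time on every update without ever helping a query. This is a sketch based on the same view:

```sql
--Indexes updated but never read since the last instance restart
DECLARE @dbid INT
SELECT @dbid = DB_ID(DB_NAME())

SELECT OBJECT_NAME(u.[object_id]) AS objectname,
       i.[name] AS indexname,
       u.[user_updates],
       u.[user_seeks] + u.[user_scans] + u.[user_lookups] AS total_reads
FROM sys.dm_db_index_usage_stats u
JOIN sys.indexes i
  ON i.[object_id] = u.[object_id] AND i.[index_id] = u.[index_id]
WHERE u.[database_id] = @dbid
  AND u.[user_updates] > 0
  AND u.[user_seeks] + u.[user_scans] + u.[user_lookups] = 0
ORDER BY u.[user_updates] DESC
```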

Analyze SQL Server database historical growth: MONTHLY size changes

The simplest way to analyze database historical growth is querying the database backup catalog.
SQL Server stores information about every single database backup in msdb..backupset. If you don’t have other instruments to collect historical database sizes, this is a good starting point for capacity planning.

This time we take the max size reached during each month to calculate the monthly size change.
This is the report query:

declare @dbname nvarchar(1024)  

--Configure HERE database name
set @dbname ='YourDatabaseName';

--Month Report
WITH TempTable(Row,database_name,backup_start_date,Mb) 
as 
(
select 
ROW_NUMBER() OVER(order by left(convert(nvarchar(20),backup_start_date,112),6)) as Row,
database_name,
left(convert(nvarchar(20),backup_start_date,112),6) month,
MAX(cast(backup_size/1024/1024 as decimal(10,2))) Mb 
from msdb..backupset
where 
type='D' and 
database_name=@dbname
group by database_name,left(convert(nvarchar(20),backup_start_date,112),6)
)
select 
A.database_name,
A.backup_start_date,
A.Mb as max_backup_size_mb,
A.Mb - B.Mb as delta_mb
from TempTable A left join TempTable B on A.Row=B.Row+1
order by database_name,backup_start_date

Analyze SQL Server database historical growth: DAILY size changes

The simplest way to analyze database historical growth is querying the database backup catalog.
SQL Server stores information about every single database backup in msdb..backupset. If you don’t have other instruments to collect historical database sizes, this is a good starting point for capacity planning. Just remember this:
– msdb..backupset stores historical information about backup size, NOT database file size. This is good if you need to understand how your real stored data grows day by day… but obviously the datafiles are typically larger: there is empty space inside for future data.
– every full database backup contains a small part of the log, used for recovery. For this reason the reported size is not always exactly your data size, but usually this is not relevant.

In this report you can see the daily changes in your database’s effective size.


declare @dbname nvarchar(1024)  
declare @days int            
         
--Configure HERE database name
set @dbname ='YourDBName'
--...and number of days to analyze
set @days   =365;

--Daily Report
WITH TempTable(Row,database_name,backup_start_date,Mb) 
as 
(
select 
ROW_NUMBER() OVER(order by backup_start_date) as Row,
database_name,
backup_start_date,
cast(backup_size/1024/1024 as decimal(10,2)) Mb 
from msdb..backupset
where 
type='D' and 
database_name=@dbname and
backup_start_date>getdate()-@days
)
select 
A.database_name,
A.backup_start_date,
A.Mb as daily_backup,
A.Mb - B.Mb as increment_mb
from TempTable A left join TempTable B on A.Row=B.Row+1
order by database_name,backup_start_date

Queries to see rapidly what your SQL Server is doing NOW

1) Blocked And Blocking queries.
If this query returns no rows, you have no blocked queries at this moment. Run it more than once to catch blocking that lasts only a few seconds. NOTE: this only detects problems with long-held locks from running queries. Cumulative short-term locking contention needs other kinds of debugging (see point 2).

SELECT 'BLOCKING STATUS' as [Check],
BlockedSPID=left(blocked.session_id,5) ,
BlockedQuery=convert(varchar(50),blockedsql.text),
BlockingSPID=convert(varchar(50),blocking.session_id),
BlockingQuery=convert(varchar(50),blockingsql.text)
FROM sys.dm_exec_requests blocked
JOIN sys.dm_exec_requests blocking
ON blocked.blocking_session_id = blocking.session_id
CROSS APPLY
(
SELECT *
FROM sys.dm_exec_sql_text(blocked.sql_handle)
) blockedsql
CROSS APPLY
(
SELECT *
FROM sys.dm_exec_sql_text(blocking.sql_handle)
) blockingsql
GO

2) Time-Wait analysis
SQL Server collects information about the wait events of every session on your instance. Every event (IO, CPU, locking and so on) is collected and shown in some dynamic management views, accumulating since instance start/restart. To see what’s happening now, you can reset one of these views and then collect event details over a short time window for debugging purposes. To understand the meaning of every SQL Server wait event see: http://msdn.microsoft.com/it-it/library/ms179984.aspx.
Below is a good wait-analysis script that cross-references this information for fast debugging (source: http://www.sqlskills.com/blogs/paul/advanced-performance-troubleshooting-waits-latches-spinlocks/)

DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR); --reset DM view
GO

SELECT
	[owt].[session_id],
	[owt].[exec_context_id],
	[owt].[wait_duration_ms],
	[owt].[wait_type],
	[owt].[blocking_session_id],
	[owt].[resource_description],
	[es].[program_name],
	[est].[text],
	[est].[dbid],
	[eqp].[query_plan],
	[es].[cpu_time],
	[es].[memory_usage]
FROM sys.dm_os_waiting_tasks [owt]
INNER JOIN sys.dm_exec_sessions [es] ON
	[owt].[session_id] = [es].[session_id]
INNER JOIN sys.dm_exec_requests [er] ON
	[es].[session_id] = [er].[session_id]
OUTER APPLY sys.dm_exec_sql_text ([er].[sql_handle]) [est]
OUTER APPLY sys.dm_exec_query_plan ([er].[plan_handle]) [eqp]
WHERE [es].[is_user_process] = 1
ORDER BY [owt].[session_id], [owt].[exec_context_id];
GO

3) Open transactions with plans and SQL text
It’s really simple to see information about the current sessions using the old and trusty exec sp_who2 or the dynamic management view sys.dm_exec_requests.
But if you need to know exactly which statements are running and which plans they are using, you need a more complicated query.
This is a good script from http://www.sqlskills.com/blogs/paul/script-open-transactions-with-text-and-plans/ useful to see the current transactions with detailed information about every running session.

 SELECT s_tst.[session_id],
   s_es.[login_name] AS [Login Name],
   DB_NAME (s_tdt.database_id) AS [Database],
   s_tdt.[database_transaction_begin_time] AS [Begin Time],
   s_tdt.[database_transaction_log_record_count] AS [Log Records],
   s_tdt.[database_transaction_log_bytes_used] AS [Log Bytes],
   s_tdt.[database_transaction_log_bytes_reserved] AS [Log Rsvd],
   s_est.[text] AS [Last T-SQL Text],
   s_eqp.[query_plan] AS [Last Plan]
FROM sys.dm_tran_database_transactions s_tdt
   JOIN sys.dm_tran_session_transactions s_tst
      ON s_tst.[transaction_id] = s_tdt.[transaction_id]
   JOIN sys.[dm_exec_sessions] s_es
      ON s_es.[session_id] = s_tst.[session_id]
   JOIN sys.dm_exec_connections s_ec
      ON s_ec.[session_id] = s_tst.[session_id]
   LEFT OUTER JOIN sys.dm_exec_requests s_er
      ON s_er.[session_id] = s_tst.[session_id]
   CROSS APPLY sys.dm_exec_sql_text (s_ec.[most_recent_sql_handle]) AS s_est
   OUTER APPLY sys.dm_exec_query_plan (s_er.[plan_handle]) AS s_eqp
ORDER BY [Begin Time] ASC;
GO 

Massive Database Migration between SQL Server instances: the complete procedure v.2.0 *UPDATED*

(05/04/2014) Procedure upgrades:
– Added compression to reduce bandwidth, space needed and transfer time
– Reduced the stats value for very large databases
– Added a backup type parameter to choose between FULL, FULL_COPYONLY or DIFFERENTIAL backup (for bigger database migrations)
– Added Maxtransfersize and Buffercount parameters to improve backup performance (warning: this needs more memory!)
– Fixed a bug so that error messages of any kind are shown correctly

Introduction:
This is my complete procedure to accomplish this task, completely based on T-SQL and metascripts. Nothing more is necessary.

There are many ways to migrate databases (backup-restore, detach-copy-attach, the SQL Server database copy tool and so on…). In this procedure we’ll follow the backup-restore way. Why? Simply because it is more flexible and requires less bandwidth and disk space than the others. Obviously every DBA has to choose the correct strategy for every scenario, but this way is in my opinion one of the most adaptable for complex instances with many big databases to transfer.

Those are the macro-steps we’ll follow to reach our goal:

  1. Migrate the instance logins, keeping the original SIDs and passwords.
  2. Back up the databases and transfer them to the destination site.
  3. Restore the databases, relocating the datafiles in the new filesystem.

Migration of logins with original sid and password hash

This step is necessary to transfer logins from the source instance. We want to keep the source user password and the source user SID. The source user SID is necessary to preserve alignment between instance login SIDs and database user SIDs. The correct procedure for this task changes between SQL Server versions, because different builds store login information in the system catalog in different ways.

The complete MS procedures are here:

http://support.microsoft.com/kb/246133/en-us (SQL 2000)

http://support.microsoft.com/kb/918992/en-us (SQL 2005, 2008, 2012*) *see note at page bottom

Here you can see the complete script for a common scenario (2008):

1) Run this script in the source instance and save the complete output as text (press Ctrl+T in SSMS before executing to switch to text output).

USE [master]
GO

/****** Object: UserDefinedFunction [dbo].[fn_hexadecimal] ****/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE FUNCTION [dbo].[fn_hexadecimal]
(
-- Add the parameters for the function here
@binvalue varbinary(256)
)
RETURNS VARCHAR(256)
AS
BEGIN

DECLARE @charvalue varchar(256)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
return @charvalue

END
GO
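A quick sanity check for the helper function above: fn_hexadecimal should echo a binary value back as its hexadecimal string representation, so the output must match the hex literal you pass in.

```sql
-- Sanity check for fn_hexadecimal: the returned string should
-- match the hex literal passed in.
SELECT [master].[dbo].[fn_hexadecimal](0x01AB) AS [HexString];
-- Expected result: 0x01AB
```

If this round-trip does not match, the generated CREATE LOGIN statements would carry corrupted hashes/SIDs, so it is worth verifying before running the full export.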

SET NOCOUNT ON
GO
--use MASTER
GO
PRINT '-----------------------------------------------------------------------------'
PRINT '-- Script created on ' + CAST(GETDATE() AS VARCHAR(100))
PRINT '-----------------------------------------------------------------------------'
PRINT ''
PRINT '-----------------------------------------------------------------------------'
PRINT '-- Create the windows logins'
PRINT '-----------------------------------------------------------------------------'
SELECT 'IF NOT EXISTS (SELECT * FROM master.sys.server_principals WHERE [name] = ''' + [name] + ''')
CREATE LOGIN [' + [name] + '] FROM WINDOWS WITH DEFAULT_DATABASE=[' +
default_database_name + '], DEFAULT_LANGUAGE=[us_english]
GO

'
FROM master.sys.server_principals
where type_desc In ('WINDOWS_GROUP', 'WINDOWS_LOGIN')
AND [name] not like 'BUILTIN%'
and [NAME] not like 'NT AUTHORITY%'
and [name] not like '%SQLServer%'
GO

PRINT '-----------------------------------------------------------------------------'
PRINT '-- Create the SQL Logins'
PRINT '-----------------------------------------------------------------------------'
select 'IF NOT EXISTS (SELECT * FROM master.sys.sql_logins WHERE [name] = ''' + [name] + ''')
CREATE LOGIN [' + [name] + ']
WITH PASSWORD=' + [master].[dbo].[fn_hexadecimal](password_hash) + ' HASHED,
SID = ' + [master].[dbo].[fn_hexadecimal]([sid]) + ',
DEFAULT_DATABASE=[' + default_database_name + '], DEFAULT_LANGUAGE=[us_english],
CHECK_EXPIRATION=' + CASE WHEN is_expiration_checked = 1 THEN 'ON' ELSE 'OFF' END + ', CHECK_POLICY=OFF
GO
IF EXISTS (SELECT * FROM master.sys.sql_logins WHERE [name] = ''' + [name] + ''')
ALTER LOGIN [' + [name] + ']
WITH CHECK_EXPIRATION=' +
CASE WHEN is_expiration_checked = 1 THEN 'ON' ELSE 'OFF' END + ', CHECK_POLICY=' +
CASE WHEN is_policy_checked = 1 THEN 'ON' ELSE 'OFF' END + '
GO

'
--[name], [sid] , password_hash
from master.sys.sql_logins
where type_desc = 'SQL_LOGIN'
and [name] not in ('sa', 'guest')

PRINT '-----------------------------------------------------------------------------'
PRINT '-- Disable any logins'
PRINT '-----------------------------------------------------------------------------'
SELECT '--ALTER LOGIN [' + [name] + '] DISABLE
GO
'
from master.sys.server_principals
where is_disabled = 1

PRINT '-----------------------------------------------------------------------------'
PRINT '-- Assign groups'
PRINT '-----------------------------------------------------------------------------'
select
'EXEC master..sp_addsrvrolemember @loginame = N''' + l.name + ''', @rolename = N''' + r.name + '''
GO

'
from master.sys.server_role_members rm
join master.sys.server_principals r on r.principal_id = rm.role_principal_id
join master.sys.server_principals l on l.principal_id = rm.member_principal_id
where l.[name] not in ('sa')
AND l.[name] not like 'BUILTIN%'
and l.[NAME] not like 'NT AUTHORITY%'
and l.[name] not like '%SQLServer%'

2) Run the result script in the destination instance
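For reference, each generated statement for a SQL login has the following shape. The login name, password hash and SID below are made-up placeholders for illustration, not real values – your script will contain the actual hashes and SIDs exported from the source instance.

```sql
-- Hypothetical example of one generated statement
-- (login name, hash and SID are illustrative placeholders):
IF NOT EXISTS (SELECT * FROM master.sys.sql_logins WHERE [name] = 'app_user')
CREATE LOGIN [app_user]
WITH PASSWORD = 0x01001234ABCD HASHED,
SID = 0x1234ABCD1234ABCD1234ABCD1234ABCD,
DEFAULT_DATABASE = [master], DEFAULT_LANGUAGE = [us_english],
CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF
GO
```

Because the password is supplied as HASHED and the SID is explicit, users keep their existing passwords and database users stay mapped to the restored databases without orphaned-user fixes.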

Backup databases and transfer to the destination site

This step is not difficult but can be very long and frustrating on big instances with many databases. This is a metascript that generates and runs the backup script in a smarter way, logging and skipping errors without halting.

----------------------------------------------------
--CONFIGURE HERE
----------------------------------------------------
-- DESTINATION PATH
----------------------------------------------------
    declare @destpath nvarchar(255) = 'G:\backup\'
----------------------------------------------------
-- BACKUP TYPE (use FULL_COPYONLY | FULL | DIFF)
----------------------------------------------------
    declare @bck_type nvarchar(25) = 'FULL_COPYONLY'  
----------------------------------------------------
-- DATABASE LIST
----------------------------------------------------
    create table #dbs (dbname nvarchar(255))
    insert into #dbs values('db1')
    insert into #dbs values('db2')
    insert into #dbs values('db3')

--Note: if you want to move ALL databases simply insert all db names into the #dbs table using:
--  Insert into #dbs select name from master..sysdatabases
--To back up every user db excluding the system databases use:
--  Insert into #dbs select name from master..sysdatabases where 
--	name not in ('master','model','msdb','tempdb')

----------------------------------------------------

declare @aDB nvarchar(255)
declare @aQry nvarchar(2048)

DECLARE em_cur CURSOR FOR select * from #dbs

OPEN em_cur
FETCH NEXT FROM em_cur INTO @aDB
WHILE @@FETCH_STATUS = 0
BEGIN
   set @aQry = 'BACKUP DATABASE [' + @aDB + '] TO  DISK = ''' + 
                 @destpath + @aDB + '.bak'' WITH  NOFORMAT, NOINIT,  NAME = N''' + @aDB + 
                 '-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD,  STATS = 3, COMPRESSION'

--ADD buffercount = 64, maxtransfersize = 4194304 to improve performance.
--WARNING!! This requires MORE memory and can lead to OUT OF MEMORY errors. Use and raise these parameters ONLY
--when you are sure they will not create any problem for your users (with stopped applications or on less-used instances)
   
   if (@bck_type='DIFF')
   BEGIN
		set @aQry = @aQry + ',DIFFERENTIAL'
   END
   ELSE
   BEGIN
		if (@bck_type='FULL_COPYONLY')
		BEGIN
			set @aQry = @aQry + ',COPY_ONLY'
		END		
   END
      
    begin try
        print '-------------'
        print @aQry
        exec (@aQry)
        print '-------------'        
    end try
    begin catch
        print 'ERROR during backup: ' + ERROR_MESSAGE()
    end catch

    FETCH NEXT FROM em_cur INTO @aDB
END

CLOSE em_cur
DEALLOCATE em_cur
drop table #dbs
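For example, with @destpath = 'G:\backup\', @bck_type = 'FULL_COPYONLY' and the optional performance parameters from the comment above added, the metascript would execute a statement shaped like this (the database name and path are illustrative):

```sql
-- Example of a statement the metascript generates and executes
-- (db name and path are illustrative; BUFFERCOUNT/MAXTRANSFERSIZE
-- are the optional performance parameters discussed above):
BACKUP DATABASE [db1] TO DISK = 'G:\backup\db1.bak'
WITH NOFORMAT, NOINIT, NAME = N'db1-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 3, COMPRESSION, COPY_ONLY,
BUFFERCOUNT = 64, MAXTRANSFERSIZE = 4194304
```

COPY_ONLY keeps the source backup chain intact (differential base and log chain are not affected), which is why it is the default choice here for migrations.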

Restore database relocating datafiles in the new fs

On the source instance use this metascript to generate the restore script. We’ll use the generated script to restore all the databases (on the destination instance), relocating the datafiles to the new file system.

set nocount on
----------------------------------------------------
--CONFIGURE HERE
----------------------------------------------------
--     DESTINATION PATH
----------------------------------------------------
--backup files position
    declare @restorepath nvarchar(max) = 'G:\backup\'
--data files destination 
    declare @pathdata nvarchar(max) = 'D:\DataPath\'
--log files destination   
    declare @pathlog nvarchar(max)  = 'L:\LogPath\'
----------------------------------------------------
--     DATABASE LIST
----------------------------------------------------
    create table #dbs (dbname nvarchar(255))
    insert into #dbs values('db1')
    insert into #dbs values('db2')
    insert into #dbs values('db3')
--Note: if you want to move ALL databases simply insert all db names into the #dbs table using:
--Insert into #dbs select name from master..sysdatabases
---------------------------------------------------

declare @aDB nvarchar(255)
declare em_cur CURSOR FOR select * from #dbs

OPEN em_cur
FETCH NEXT FROM em_cur INTO @aDB
WHILE @@FETCH_STATUS = 0
BEGIN
select 'RESTORE DATABASE [' + @aDB + '] from disk=N''' + @restorepath + @aDB + '.bak'' WITH '
union all
select
CASE
WHEN m.type_desc = 'ROWS' THEN 'MOVE N''' + m.name + ''' TO N''' +
@pathdata + RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name))-1) +''','
WHEN m.type_desc = 'LOG'  THEN 'MOVE N''' + m.name + ''' TO N''' +
@pathlog  + RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name))-1) +''','
END
from sys.master_files m
inner join sys.databases d
on (m.database_id = d.database_id)
where d.name=@aDB
union all
select 'NOUNLOAD,  STATS = 10'
print 'GO'

FETCH NEXT FROM em_cur INTO @aDB
END

CLOSE em_cur
DEALLOCATE em_cur
drop table #dbs

Now take the generated script and run it on the destination instance to restore every database in the correct position.
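Each restored database will be driven by a generated statement of this shape. The logical file names and physical paths below are illustrative placeholders; your script will carry the real logical names read from sys.master_files on the source instance.

```sql
-- Hypothetical example of one generated RESTORE statement
-- (logical file names and paths are illustrative placeholders):
RESTORE DATABASE [db1] FROM DISK = N'G:\backup\db1.bak' WITH
MOVE N'db1_data' TO N'D:\DataPath\db1.mdf',
MOVE N'db1_log' TO N'L:\LogPath\db1_log.ldf',
NOUNLOAD, STATS = 10
GO
```

The MOVE clauses are what make the relocation work: each logical file is mapped to its new physical path, so the destination instance does not need the source folder layout to exist.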

*NOTE: (Update 22-07-2012) In my current tests on SQL 2012 the official MS procedure to export login password hashes does not work. To export logins with the original password hash AND the original SID, use the following workaround:

select
  convert(varchar(50), 'CREATE LOGIN [' + name + '] with password='),
  CAST(password AS VARBINARY(26)),
  'HASHED, SID=',
  CAST(sid AS VARBINARY(26))
from sys.syslogins
where password is not null