Wednesday, 27 May 2015

Listing Stored Procedures used by SSRS

The following SQL will allow you to list all stored procedures used by Reporting Services reports. Run this against the ReportServer database on the SQL Server your Reporting Services installation uses:

;with xmlnamespaces
(
       default 'http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition',
       'http://schemas.microsoft.com/SQLServer/reporting/reportdesigner' AS rd
)
select
       name
       ,x.value('CommandType[1]', 'VARCHAR(50)') AS CommandType
       -- VARCHAR(MAX) here, as command text / procedure names can be longer than 50 characters
       ,x.value('CommandText[1]', 'VARCHAR(MAX)') AS CommandText
from (
       -- Report definitions are stored as binary in Catalog.Content, so cast them to XML first
       select
              name
              , cast(cast(content as varbinary(max)) as xml) as reportXML
       from
              ReportServer.dbo.Catalog
       where
              Name not like '%.gif'
              and Name not like '%.jpg'
              and Name not like '%.jpeg'
) a
cross apply reportXML.nodes('/Report/DataSets/DataSet/Query') r(x)
where
       x.value('CommandType[1]', 'VARCHAR(50)') = 'StoredProcedure'


Note: in the default report definition namespace URL, it may be necessary to change "2008" to "2003", "2005" or "2010", depending on the version(s) of Visual Studio / Report Builder / BIDS / SSDT you've used to create your reports.

Credit to Jacob Sebastian, I used his post as a basis here:
http://beyondrelational.com/modules/2/blogs/28/posts/10446/how-to-find-all-stored-procedures-used-by-report-server.aspx

Wednesday, 6 May 2015

RAID Disk Levels

Google definition: RAID (originally redundant array of inexpensive disks; now commonly redundant array of independent disks) is a data storage virtualisation technology that combines multiple disk drive components into a single logical unit for the purposes of data redundancy or performance improvement.

There are a number of common RAID array configurations:

RAID 0:

This is also known as "striping" or a stripe of disks. Data is distributed (or "striped") across a number of drives, without any copy of the data, or any parity information about the data.

Disks: Minimum 2 disks required. Disks are striped, no mirror, no parity
Good for: performance - it is very quick to read and write to
Bad for: redundancy - if a disk fails, you've lost your entire RAID array. No good for critical systems

RAID 1:

This is also known as "mirroring" or a mirror of disks. The same data is written to two disks. Should one disk fail, you still have a complete copy of the entire disk on the other mirrored disk.

Disks: Minimum 2 disks required. Disks are mirrored, no stripe, no parity
Good for: redundancy - if a disk fails, you still have a copy of the entire disk.
Bad for: useable disk space - 2 x 1TB disk drives configured as RAID 1 only gives you 1TB of useable disk space

RAID 5:

This is generally the most cost-effective option that offers both performance and redundancy.

Disks: Minimum 3 disks required. Disks are striped, with distributed parity
Good for: Read performance due to striped disks. Also good for redundancy - if a disk fails, the data can be recovered based on the parity information
Bad for: write performance - the parity calculation adds overhead to writes. One disk's worth of capacity is also lost to parity (e.g. 3 x 1TB disks give roughly 2TB of usable space)

RAID 10:

Also known as RAID 1 + 0, or a "stripe of mirrors", it combines both RAID 1 and RAID 0.

Disks: Minimum 4 disks required. Disks are striped, and mirrored
Good for: Performance due to striped disks. Good for redundancy due to mirrored disks. The best option for critical systems (especially databases)
Bad for: cost - the most expensive option

Resources:


PCWorld post on RAID: http://www.pcworld.com/article/194360/raid-made-easy.html
The Geek Stuff: http://www.thegeekstuff.com/2010/08/raid-levels-tutorial/
There are some additional less common RAID levels (2,3,4,6), details about these here: http://www.thegeekstuff.com/2011/11/raid2-raid3-raid4-raid6/

Monday, 4 May 2015

Transparent Data Encryption (TDE)

Transparent Data Encryption (or TDE) is a means of encrypting your entire database, transparently. It was introduced in SQL Server 2008. It is transparent to applications because SQL Server encrypts and decrypts the data on the fly as required; no additional configuration is needed for your applications to make use of a database encrypted using TDE. Once TDE is set up, the data is encrypted on disk, any backups of the database are encrypted too, and you'll be unable to restore the data without the certificate used to encrypt it.

Setting up TDE:
There are a few steps to set up TDE for your SQL Server database(s):
  1. Create a Master Key (on the master database)
  2. Create a Certificate with the Master Key (on the master database, and back this certificate up)
  3. Create Database Encryption Key (on the database you want to use TDE on)
  4. Turn on encryption on the required database
  5. Repeat steps 3 & 4 for any other databases you require TDE for
SQL for these steps (taken from MSDN):

use master;
go
create master key encryption by password = '<UseStrongPasswordHere>';
go
create certificate MyServerCert
with
       subject = 'My DEK Certificate';
go
use AdventureWorks2012;
go
create database encryption key
with algorithm = aes_128
encryption by server certificate MyServerCert;
go
alter database AdventureWorks2012 set encryption on;
go
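
Step 2 above also calls for backing the certificate up. A minimal sketch of that, assuming the MyServerCert certificate created above - the file paths and password are placeholders to replace with your own:

use master;
go
-- Back up the certificate and its private key; keep these files (and the password) safe,
-- as they are needed to restore TDE encrypted databases on another server
backup certificate MyServerCert
to file = 'C:\Backups\MyServerCert.cer'
with private key
(
       file = 'C:\Backups\MyServerCert_key.pvk',
       encryption by password = '<UseStrongPasswordHere>'
);
go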


Moving a TDE Encrypted database:
If a database using TDE is required to be restored to another server, the following steps must be followed:
  1. Via Windows Explorer, copy the TDE encrypted database (or its backup), plus the certificate backup and private key files, to the destination server
  2. Create a database master key on the destination server
  3. Recreate the server certificate on the destination server from the backup taken on the origin server
  4. Restore / attach the TDE encrypted database
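
A rough sketch of steps 2 - 4 on the destination server (the file paths, passwords and database name are placeholders; MyServerCert matches the certificate created earlier):

use master;
go
-- 2. Create a database master key on the destination server
create master key encryption by password = '<UseDifferentStrongPasswordHere>';
go
-- 3. Recreate the certificate from the backup files taken on the origin server
create certificate MyServerCert
from file = 'C:\Backups\MyServerCert.cer'
with private key
(
       file = 'C:\Backups\MyServerCert_key.pvk',
       decryption by password = '<UseStrongPasswordHere>'
);
go
-- 4. Restore the TDE encrypted database - this succeeds now the certificate is available
restore database AdventureWorks2012
from disk = 'C:\Backups\AdventureWorks2012.bak';
go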

Resources:
MSDN - set up / move TDE encrypted database: https://msdn.microsoft.com/en-gb/library/ff773063.aspx

SQLIO & SQLIOSIM

SQLIO is a lightweight command line application from Microsoft, to test your disk I/O and get some performance benchmarking figures.

Resources:
Brent Ozar blog post about SQLIO: http://www.brentozar.com/archive/2008/09/finding-your-san-bottlenecks-with-sqlio/
Download SQLIO from Microsoft: http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=20163

SQLIOSIM is not a performance benchmarking tool, but more of a stress testing tool, due to its use of random patterns. Unlike SQLIO, SQLIOSIM has a graphical user interface. SQLIOSIM will test what kind of I/O you'll get from a database based on where you put your data and log files, the auto grow configuration, whether or not you're using sparse files, etc.

Resources:
Download SQLIOSIM from Microsoft: https://support.microsoft.com/en-us/kb/231619

Sunday, 3 May 2015

Dynamic Management Views (DMVs)

Information about some of the DMVs available in SQL Server, and their function.

Sessions

sys.dm_exec_sessions
Information about each authenticated session on the instance (one row per session), including login, host and session settings

sys.dm_exec_connections
Provides information about connections established to the Database Engine instance

sys.dm_exec_requests
Information about each request executing on the SQL instance

sys.dm_exec_cursors
Information about open cursors
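
These session DMVs all share session_id, so they join together easily. A quick sketch (columns chosen for illustration) showing current user sessions and anything they are executing:

select
       s.session_id
       , s.login_name
       , s.host_name
       , s.program_name
       , c.client_net_address
       , r.status
       , r.command
       , r.wait_type
from
       sys.dm_exec_sessions s
       inner join sys.dm_exec_connections c on c.session_id = s.session_id
       left join sys.dm_exec_requests r on r.session_id = s.session_id
where
       s.is_user_process = 1;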


Audits

sys.dm_audit_actions
Information about every audit action that can be reported in the audit log as well as every audit action group that you are able to configure as part of SQL Server Audit

sys.dm_server_audit_status

Information about the current state of auditing

sys.dm_audit_class_type_map
Information about the class_type field in the audit output

fn_get_audit_file
Information from an audit output file that has already been generated by a server audit
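
As a sketch of reading audit records back out with fn_get_audit_file - the file path here is a placeholder, point it at wherever your server audit writes its .sqlaudit files:

select
       event_time
       , action_id
       , succeeded
       , session_server_principal_name
       , statement
from sys.fn_get_audit_file('C:\SQLAudit\*.sqlaudit', default, default);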

Events

sys.server_event_sessions 
Lists all the event session definitions configured for the Database Engine instance

sys.server_event_session_actions
View actions on each event on an event session

sys.server_event_session_events 
View each event in an event session

sys.server_event_session_fields 
View each customizable column set on events and targets

sys.server_event_session_targets
View each event target for a specific event session
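
These event session views also join on event_session_id. A small sketch listing each event session defined on the instance, along with the events it captures:

select
       es.name as session_name
       , ev.package as event_package
       , ev.name as event_name
from
       sys.server_event_sessions es
       inner join sys.server_event_session_events ev on ev.event_session_id = es.event_session_id
order by
       es.name, ev.name;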

I/O

sys.dm_io_pending_io_requests
Provides information on unfulfilled I/O requests

sys.dm_io_backup_tapes 
Information on tape devices and mount requests for backups

sys.dm_io_cluster_shared_drives
Information on shared drives if the host server is a member of a failover cluster

sys.dm_io_virtual_file_stats
Information on I/O statistics for data and log files
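
A quick sketch using sys.dm_io_virtual_file_stats (joined to sys.master_files for the file names) to see reads, writes and I/O stalls per data / log file:

select
       db_name(vfs.database_id) as database_name
       , mf.physical_name
       , vfs.num_of_reads
       , vfs.num_of_writes
       , vfs.io_stall_read_ms
       , vfs.io_stall_write_ms
from
       sys.dm_io_virtual_file_stats(null, null) vfs
       inner join sys.master_files mf on mf.database_id = vfs.database_id
                                     and mf.file_id = vfs.file_id;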

Deadlocks

sys.dm_tran_locks
Information on active locks

sys.dm_os_waiting_tasks
Information on tasks waiting on resources

sys.dm_exec_requests
Requests that are executing within SQL Server
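
A sketch combining these three DMVs to see who is blocked, and by whom (one row per waiting task; the lock join is just for the resource type being waited on):

select
       wt.session_id as waiting_session_id
       , wt.blocking_session_id
       , wt.wait_type
       , wt.wait_duration_ms
       , r.command
       , tl.resource_type
from
       sys.dm_os_waiting_tasks wt
       left join sys.dm_exec_requests r on r.session_id = wt.session_id
       left join sys.dm_tran_locks tl on tl.request_session_id = wt.session_id
                                     and tl.request_status = 'WAIT'
where
       wt.blocking_session_id is not null;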

Transaction Isolation Levels

Overview of transaction isolation levels, roughly from least to most restrictive (i.e. from least to most likely to cause blocking):

SNAPSHOT:
This transaction level takes a copy of the data being read and puts it into tempdb. This ensures that the results at the start and end of a transaction will be identical, as it refers to a copy of the data, rather than putting locks on the actual data. 
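
Snapshot isolation also has to be allowed at the database level before sessions can use it. A minimal sketch, assuming the AdventureWorks2012 sample database used elsewhere in these examples:

-- Allow snapshot isolation on the database (a one off configuration step)
alter database AdventureWorks2012 set allow_snapshot_isolation on;
go

use AdventureWorks2012;
go
-- Reads in this transaction see a consistent, versioned copy of the data held in tempdb
set transaction isolation level snapshot;
begin transaction;
       select FirstName, LastName from Person.Person where BusinessEntityID = 1;
commit transaction;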

READ UNCOMMITTED:
This is the least restrictive transaction isolation level and the least likely to result in blocking. Under READ UNCOMMITTED, the Database Engine does not issue shared locks when reading, so other transactions are not prevented from modifying data read by the current transaction, and the current transaction can read data that other transactions have modified but not yet committed (dirty reads). An illustration of a dirty read is sketched below.
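
A sketch of the dirty read this allows (again assuming the AdventureWorks2012 sample database), running the statements in two separate sessions:

-- Session 1: modify a row inside a transaction, but don't commit yet
use AdventureWorks2012;
begin transaction;
update Person.Person set LastName = 'Dirty' where BusinessEntityID = 1;

-- Session 2: under READ UNCOMMITTED this returns the uncommitted value - a dirty read
use AdventureWorks2012;
set transaction isolation level read uncommitted;
select LastName from Person.Person where BusinessEntityID = 1;

-- Session 1: roll back - the value session 2 just read never officially existed
rollback transaction;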

READ COMMITTED:
This transaction level blocks statements from reading data that has been modified but not yet committed by other transactions, which prevents dirty reads. However, data can still be changed by other transactions between the execution of individual statements within the current transaction, so non-repeatable reads and phantom reads are still possible. READ COMMITTED is the default transaction isolation level in SQL Server (including SQL Server 2012).

REPEATABLE READ:
This transaction level blocks statements from reading data that has been modified but not yet committed by other transactions, and in addition no other transaction can modify data that has been read by the current transaction until the current transaction completes. REPEATABLE READ achieves this by holding shared locks on all data read by each statement until the transaction completes. Phantom reads are still possible, as other transactions can insert new rows that match the current transaction's search criteria.

SERIALIZABLE:
The SERIALIZABLE transaction level is the most restrictive transaction isolation level, and the most likely to result in blocking. When this isolation level is set:

Statements cannot read data that has been modified but not yet committed by other transactions.
Other transactions are unable to modify data that has been read by the current transaction until the current transaction completes.
Other transactions are unable to insert new rows into the range of rows being read by the current transaction, so phantom reads are prevented.

Concurrency issues with each transaction isolation level:

This post gives a good overview of transaction isolation levels. The following table and the definitions of dirty / non-repeatable / phantom reads are taken from there:


Isolation level      Dirty read   Nonrepeatable read   Phantom read
Read uncommitted     Yes          Yes                  Yes
Read committed       No           Yes                  Yes
Repeatable read      No           No                   Yes
Serializable         No           No                   No
Snapshot             No           No                   No

  • Dirty read: Session 1 begins a transaction and modifies data. Session 2 reads the modified data before Session 1 commits the transaction. Session 2 is reading data that exists only in theory because the Session 1 transaction might be rolled back. If that occurs, the data read by Session 2 is no longer valid.
  • Nonrepeatable read: Session 1 begins a transaction and retrieves a row of data from a table. Session 2 updates that row. Session 1 tries to retrieve the row once more, still within the original transaction, but receives different results because of the Session 2 update.
  • Phantom read: Session 1 begins a transaction and retrieves several rows of data from a table. Session 2 inserts a row into the same table, and the insert coincides with the search criteria of the Session 1 statement. When Session 1 reruns the query, the new row added by Session 2 is now included in the results.

Trace Flags

Trace flags are used within SQL Server to provide information / logging about specific system events, or to change specific configuration of the SQL Server instance. 

Trace flags can be set on / off for the current connection only, using the following T-SQL code:
-- Turn on trace flag 1234
DBCC TRACEON (1234);
GO

-- Turn off trace flag 1234
DBCC TRACEOFF (1234);
GO
Traces can be set on / off globally (i.e. for ALL connections to the SQL Engine) with the addition of the -1 argument at the end:
 -- Turn on trace flag 1234, globally
DBCC TRACEON (1234, -1);
GO
Multiple traces can be switched on by listing them within the parenthesis:
-- Turn on trace flags 1234, 2345 and 3456
DBCC TRACEON (1234, 2345, 3456);
GO
You can see which trace flags are currently active as follows (if none are active, a message of "DBCC execution completed. If DBCC printed error messages, contact your system administrator" will appear): 
-- Check status of current trace flags
DBCC TRACESTATUS
Only members of the sysadmin fixed server role can switch trace flags on / off.

Useful Trace Flags:
1222: Write deadlock information to the SQL Server error log in an XML format
1204: Write deadlock information to the SQL Server error log, formatted by each node involved in the deadlock
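
For example, to switch both of these on globally, using the same DBCC TRACEON syntax shown above:

-- Turn on both deadlock trace flags, for all connections
DBCC TRACEON (1204, 1222, -1);
GO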

Trace flag resources:
MSDN: https://msdn.microsoft.com/en-us/library/ms188396.aspx
SQL Mag: http://sqlmag.com/sql-server/investigating-trace-flags
Stack Exchange: http://dba.stackexchange.com/questions/27704/sql-server-concise-list-of-trace-flags

Updating massive amount of rows whilst avoiding blocking

The following SQL is a good means to split an update on a massive table into smaller chunks, whilst reducing blocking. The method is to upda...