Enable “remote admin connections” on all your SQL Servers so that you can open a DAC (Dedicated Admin Connection) remotely.
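A minimal sketch of how to enable it with sp_configure (run from a sysadmin connection and verify the run_value afterwards):

-- Allow the Dedicated Admin Connection (DAC) to be opened from a remote machine
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;

-- Verify: run_value should show 1
EXEC sp_configure 'remote admin connections';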

SQL Tip-7 | Different ways to enable a Trace Flag

Trace flags are used to temporarily set specific server characteristics or to switch off a particular behavior. There are many documented and undocumented trace flags. For example, Trace Flag 1222 can be used to capture a deadlock graph. With Trace Flag 1222 you get detailed information about all the processes participating in the deadlock: which query each one was running, which resources were locked, what the isolation level was, and what type of lock request was being blocked and why. You can use all this information to troubleshoot the problem.

You can switch on trace flags either by using DBCC commands or by using startup parameters.

a. Enable Trace Flag using DBCC Commands

DBCC TraceON(1222)

Once the trace flag is enabled you can check its status by using the DBCC TRACESTATUS command, which will produce output similar to the screenshot below.
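For example, to check the status of trace flag 1222, or of every globally enabled trace flag, you can run:

DBCC TraceStatus(1222)      -- status of trace flag 1222
DBCC TraceStatus(-1)        -- all trace flags enabled at the global level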

[Screenshot: DBCC TRACESTATUS output showing trace flag 1222 enabled at the session level]

In the above screenshot you can see that trace flag 1222 has been enabled, but only at the session level, which means the trace flag will work only for the current session. If you want to enable it for the whole instance you need to pass one more parameter, -1, which enables the trace flag at the global level. To enable the trace flag globally, let’s first switch off the trace flag at the session level and then enable it at the global level.

DBCC TraceOFF(1222)

DBCC TraceON(1222,-1)

Now let’s check the status one more time.

[Screenshot: DBCC TRACESTATUS output showing trace flag 1222 enabled at the global level]

So the trace flag has now been enabled at the Global\Instance level.

b. Enable Trace Flag using -T Startup Option

SQL Server uses startup parameters during startup to find the master database data\log files and the error log file; along with these, you can also specify some server-wide options. Most users do not need anything apart from the data\log and error log file paths, unless you want to change the default startup behavior of the database engine.

Trace flags are used to start the server with non-standard behavior. When using this functionality, use an uppercase T to pass trace flags; a lowercase t enables other internal flags that are required only by SQL Server support engineers. The -T startup option enables a trace flag globally; you cannot enable a session-level trace flag by using a startup option.

You can do this in SQL Server Configuration Manager: choose the correct SQL Server service, right-click and open Properties, then follow the screenshot below to specify the trace flag and press Add. (The screenshots below are from SQL Server 2012.)

Note: Changing any startup parameter\option requires a restart of the SQL Server service.

[Screenshot: SQL Server Configuration Manager – SQL Server service Properties, Startup Parameters tab]

[Screenshot: adding the -T trace flag startup parameter and pressing Add]
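Once the service is back up, you can confirm what is registered and active. As a quick sketch (sys.dm_server_registry is available on recent versions; the SQLArg filter simply picks out the startup parameters):

-- List the startup parameters currently registered for this instance
SELECT value_name, value_data
FROM sys.dm_server_registry
WHERE value_name LIKE 'SQLArg%';

-- And confirm the trace flag is enabled globally
DBCC TraceStatus(-1)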

Happy Learning 🙂

Thanks,
Sarabpreet Singh Anand

Apply the password policy to all SQL Server logins (along with password expiration, if possible) to avoid any security compromise.
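A quick sketch of how you might review and fix this (MyLogin is just a placeholder name, adjust for your own logins):

-- SQL logins that do not enforce the Windows password policy or expiration
SELECT name, is_policy_checked, is_expiration_checked
FROM sys.sql_logins
WHERE is_policy_checked = 0 OR is_expiration_checked = 0;

-- Enable both for a given login (placeholder name)
ALTER LOGIN MyLogin WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;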

Understand the NOLOCK hint before using it in your transactions\applications: the data you are reading might not be consistent (dirty reads).
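As a small illustration using the Stock sample table from the covering-index post below: the second query does not take shared locks, so it can return rows from transactions that are still uncommitted and may later roll back.

-- Normal read under READ COMMITTED: waits for conflicting writers
select PID, Qty, calories from Stock where Qty >= 11

-- NOLOCK read: no shared locks, may return uncommitted (dirty) rows
select PID, Qty, calories from Stock with (nolock) where Qty >= 11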

Always change the Page_Verify database option to CHECKSUM after you migrate a database to SQL Server 2005 or higher. CHECKSUM is the default for databases created on SQL Server 2005+.
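A quick way to check and change it (AdventureWorks is only a placeholder database name):

-- Databases whose page verification option is not CHECKSUM
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

-- Switch a migrated database to CHECKSUM (placeholder name)
ALTER DATABASE AdventureWorks SET PAGE_VERIFY CHECKSUM;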

SQL Tip-6 | Use Covering Index to solve key lookup issues

Do you have a key lookup problem? Is a key lookup taking most of the query time? Then this is probably the best time to learn more about covering indexes. A covering index can solve key lookup problems. A non-clustered index which contains all the columns referenced by a query (in the SELECT, WHERE or JOIN) is called a covering index.

To know more, read Using Covering Index.

Happy Learning 🙂

Thanks,
Sarabpreet Singh Anand

Using Covering Index

A non-clustered index which contains all the columns referenced by a query (in the SELECT, WHERE or JOIN) is called a covering index.

In other words

When all the columns required to satisfy the query are included in a non-clustered index, so that the extra key lookup is avoided, that index is known as a covering index.

This type of index can effectively improve query performance. To understand why a covering index helps in gaining extra performance, let’s do some testing.

In this example, I’ll create two identical tables, Stock and Stock1, with exactly the same data and number of rows. To test the performance benefits, we’ll create a different type of index on each table: on the first table we’ll create an index with only the key columns, and on the second table we’ll create a covering index which also includes a non-key column. Let’s see whether it impacts performance or not.

This is the query we need to test (the full set-up scripts are given below):

select PID, Qty, calories, PName from Stock where Qty >= 11 and calories < 200

On table Stock we created a clustered index on PID and a non-clustered index on Qty and calories, whereas on table Stock1 we created a clustered index on PID and a non-clustered index on Qty and calories which also includes the non-key column PName, turning it into a covering index.

For the first table, the query uses the non-clustered index to get PID, Qty and calories, and once those values are available the SQL engine has to fetch PName from the leaf level of the clustered index (basically a lookup against the clustering key, which is PID). If we could avoid traversing the clustered index to get the relevant PName values, we could easily improve the query performance. To do so, we created another table, Stock1, where the non-clustered index was created on Qty and calories but PName (a non-key column) was included in the index. Since this non-clustered index already has the PName values, there is no need to traverse the clustered index, and we can save those resources.

When we executed both queries after turning STATISTICS IO on, we saw a decrease in logical reads from 43 to 39. The numbers are not large and there isn’t a major difference here, but there is definitely an improvement, and it depends heavily on the number of records and the selection criteria.

[Screenshot: STATISTICS IO output – logical reads drop from 43 (Stock) to 39 (Stock1)]

 

Then we compared the execution plans and saw that the first query, on the Stock table where the non-key column is not in the index, takes 93% of the batch cost, compared to just 7% for the query on the second table.

[Screenshot: execution plans – Stock query at 93% of batch cost vs. Stock1 query at 7%]

 

Moreover, if you compare the I/O and CPU costs there is a drastic improvement. Hence we can conclude that, performance-wise, covering indexes are much better than a normal non-clustered index.

[Screenshots: operator properties – I/O and CPU cost comparison for the two queries]

 

In case you want to test this yourself, you can use the script below. (Don’t execute these queries on your production box; they are provided as-is, without any warranty.)

create database Test_Sarab
go
use test_sarab
go

create table Stock
(
PID int Identity primary key,
Qty int,
calories int,
PName varchar(30)
)

create table Stock1
(
PID int Identity primary key,
Qty int,
calories int,
PName varchar(30)
)

insert into Stock values
(10, 267, 'Doughnut Custard'),
(11, 410, 'Doughnut Iced Glazed'),
(11, 256, 'Doughnut Jam'),
(11, 266, 'Muffin Blueberry'),
(11, 149, 'Breakfast Muffin'),
(13, 401, 'Muffin Chocolate Chip'),
(9, 141, 'Wholemeal Muffin'),
(13, 78, 'Brown Bread')
go 1001

insert into Stock1 values
(10, 267, 'Doughnut Custard'),
(11, 410, 'Doughnut Iced Glazed'),
(11, 256, 'Doughnut Jam'),
(11, 266, 'Muffin Blueberry'),
(11, 149, 'Breakfast Muffin'),
(13, 401, 'Muffin Chocolate Chip'),
(9, 141, 'Wholemeal Muffin'),
(13, 78, 'Brown Bread')
go 1001

create index Id2 on Stock (Qty, calories)

create index FinalIDx on Stock1 (Qty, calories) include (PName)

-- dbcc dropcleanbuffers

SET STATISTICS IO ON;
select PID, Qty, calories, PName from Stock where Qty >= 11 and calories < 200

select PID, Qty, calories, PName from Stock1 where Qty >= 11 and calories < 200

 

Thanks,

Sarabpreet Singh Anand

Make sure Auto_Shrink is disabled for all Databases on your Production Servers.
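A quick sketch to find and fix the offenders (AdventureWorks is only a placeholder database name):

-- Databases that currently have auto-shrink enabled
SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;

-- Turn it off for a given database (placeholder name)
ALTER DATABASE AdventureWorks SET AUTO_SHRINK OFF;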