21 Mar

Built-in Functions in SQL Server 2012

Microsoft SQL Server 2012 introduced 14 new built-in functions. These new functions are:

Conversion functions

  • PARSE
  • TRY_CONVERT
  • TRY_PARSE

Date and time functions

  • DATEFROMPARTS
  • DATETIME2FROMPARTS
  • DATETIMEFROMPARTS
  • DATETIMEOFFSETFROMPARTS
  • EOMONTH
  • SMALLDATETIMEFROMPARTS
  • TIMEFROMPARTS

Logical functions

  • CHOOSE
  • IIF

String functions

  • CONCAT
  • FORMAT

In addition to the 14 new functions, one existing function has changed: LOG now accepts an optional second parameter that specifies the base.
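
For a quick illustration of the new optional base parameter:

SELECT LOG(8, 2) AS LogBase2; -- returns 3; without the second argument, LOG still computes the natural logarithm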

Obviously all of these functions are useful in one way or another, but I'll be explaining five that I feel are common and useful across all environments.

 

1. EOMONTH

This function takes two parameters: start_date, which is mandatory, and month_to_add, which is optional. It returns the last day of the month (also termed EOM, or end of month) for the given date. If you also pass the second argument, it first adds that many months to start_date and then returns the last day of the month for the resulting date (start_date + month_to_add).

This can be easily understood with the help of an example:

DECLARE @date DATETIME;

SET @date = '12/06/2010';

SELECT EOMONTH (@date) AS Result;

-- or

DECLARE @date VARCHAR(255);

SET @date = '12/06/2010';

SELECT EOMONTH (@date) AS Result;

Both of these queries return the same output, i.e.,

-->   2010-12-31

 

In order to find the last day of the month for any future or previous month, you must use the second parameter. You can pass a positive or negative value as the second argument, depending on the requirement. The example below illustrates this.

DECLARE @date DATETIME;

SET @date = GETDATE();

SELECT EOMONTH ( @date ) as ThisMonth;

SELECT EOMONTH ( @date, 1 ) as NextMonth;

SELECT EOMONTH ( @date, -1 ) as LastMonth;

 

2. CHOOSE

This function returns a value from a list based on its index number (note: the index here starts from 1). It takes at least two arguments: the first must be an INT index, and the arguments from the second onwards form the list of values to choose from.

The following example returns the second item from the list of values provided.

SELECT CHOOSE (2, 'January', 'February', 'March');

You can further enhance this and make it dynamic: just declare a variable, set its value according to your logic, and pass the variable as the first parameter.

Here’s the example for this:

declare @a int

set @a = 2

SELECT CHOOSE (@a, 'January', 'February', 'March');
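
One behaviour worth knowing: if the index falls outside the bounds of the list, CHOOSE returns NULL rather than raising an error.

SELECT CHOOSE (4, 'January', 'February', 'March') AS Result; -- index 4 is beyond the three-item list, so this returns NULL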

 

3. IIF

If you've ever done some programming in Java or VB, you'll recognize this one. Yes, it's the same conditional function, which returns a value based on the condition you specify as the first argument. It takes three parameters: the first is the condition, and the other two are the results you want it to return when the condition evaluates to true or false, respectively.

A. Simple IIF example

DECLARE @a int = 45;

DECLARE @b int = 40;

SELECT IIF (@a > @b, 'TRUE', 'FALSE') AS Result;

You can also combine multiple conditions using the AND and OR keywords, which lets you evaluate the condition against more than one criterion.

 

B. Complex IIF example

DECLARE @a int = 45;

DECLARE @b int = 40;

SELECT IIF (@a > @b AND @b > 30, 'TRUE', 'FALSE') AS Result;

The AND keyword specifies that both conditions, @a > @b and @b > 30, must be satisfied for the result to be true, whereas if you replace AND with OR, the condition is accepted as true as long as at least one of them holds.
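
Under the hood, IIF is simply shorthand for a CASE expression, so the complex example above is equivalent to:

DECLARE @a int = 45;

DECLARE @b int = 40;

SELECT CASE WHEN @a > @b AND @b > 30 THEN 'TRUE' ELSE 'FALSE' END AS Result;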

 

4. CONCAT

It's the same concatenate function we use in Excel: it concatenates two or more strings into a single string, implicitly converting all arguments to string types. It accepts a minimum of 2 and a maximum of 254 arguments.

The return type depends on the types of the arguments; the following rules describe the mapping.

  • If any argument is a SQL-CLR system type, a SQL-CLR UDT, or nvarchar(max), the result is nvarchar(max).
  • Otherwise, if any argument is varbinary(max) or varchar(max), the result is varchar(max), unless one of the parameters is an nvarchar of any length, in which case the result is nvarchar(max).
  • Otherwise, if any argument is nvarchar(<= 4000), the result is nvarchar(<= 4000).
  • Otherwise, in all other cases the result is varchar(<= 8000), unless one of the parameters is an nvarchar of any length, in which case the result is nvarchar(max).

 

A. Using CONCAT

SELECT CONCAT ('Today ', 'is ', 6, 'th', '-', 'December') AS Result; -- the int 6 is implicitly converted, giving 'Today is 6th-December'
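
One more behaviour worth noting: CONCAT treats NULL arguments as empty strings, unlike the + string operator, which (under the default CONCAT_NULL_YIELDS_NULL setting) turns the whole result into NULL.

SELECT CONCAT ('Today ', NULL, 'is 6th-December') AS Result; -- the NULL simply disappears from the output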

 

5. TRY_CONVERT

This function seems to carry a try/catch block along with it. In earlier versions of SQL Server, whenever a cast was not successful we got an error message. Now there's no need to worry: this function tries to convert to the requested data type and returns the value if it succeeds; otherwise it returns NULL rather than that ugly error. But wait a minute, there is a catch: if you ask it to perform a conversion that is explicitly not permitted, it will definitely still end up raising an error.

 

A. TRY_CONVERT returns null

SELECT TRY_CONVERT(float, 'test');

You can decorate this with a custom message to make it clearer, like:

SELECT
CASE WHEN TRY_CONVERT(float, 'test') IS NULL
THEN 'Cast failed'
ELSE 'Cast succeeded'
END AS Result;

GO

B. TRY_CONVERT fails with an error

SELECT TRY_CONVERT(xml, 4);

The result of this statement is an error, because an explicit conversion from integer to the xml data type is not allowed.
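
Like CONVERT, TRY_CONVERT also accepts an optional style argument, which is handy for locale-sensitive date strings:

SELECT TRY_CONVERT(datetime2, '12/31/2010', 101) AS US_Format; -- style 101 is mm/dd/yyyy, so this succeeds

SELECT TRY_CONVERT(datetime2, '12/31/2010', 103) AS UK_Format; -- style 103 is dd/mm/yyyy; there is no month 31, so this returns NULL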

 

Hope this post helped you; do leave comments.

Regards

Sarabpreet Anand


17 Oct

Find Users with DBOwner Database role

Scenario: the auditors need a list of users who have the db_owner database role in any of the databases hosted on Tier 1 servers. Providing this information was very easy for the servers where the database count was around 10 to 20, but we also have 8 SharePoint servers, each hosting 200-350 databases. Extracting this information from all these servers was a pain, so to ease things I created a script that gathers all of it in a single execution.

This script extracts, from every database, the list of users who have the db_owner role. Once you execute it you'll get a three-column output: Database_Name, Role_Name and DBUser_Name. All the column names are self-explanatory.

You can download the script from this link: Find_DBOwners_from_All_Databases.sql
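
In case you just want the gist, here's a minimal sketch of the approach (the downloadable script may differ; sp_MSforeachdb is an undocumented but long-standing system procedure, and '?' is its placeholder for each database name):

DECLARE @Results TABLE (Database_Name sysname, Role_Name sysname, DBUser_Name sysname);

INSERT INTO @Results
EXEC sp_MSforeachdb N'USE [?];
SELECT DB_NAME(), r.name, m.name
FROM sys.database_role_members drm
JOIN sys.database_principals r ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id
WHERE r.name = N''db_owner'';';

SELECT * FROM @Results ORDER BY Database_Name;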

This is what the output looks like:

[Screenshot: Find_DBOwner_all_Dbs – sample output with the three columns]

 

Note: Tested successfully on SQL Server 2012.

Enjoy!
Happy Learning 🙂

Thanks,
Sarabpreet Singh Anand
Subscribe now to get latest Tips\blog posts in your Inbox


19 Aug

SQL Tip-8 | How to script out multiple SQL Server jobs at once

Migrating SQL Server jobs from one server to another is a critical task, and there are many ways to do it; the easiest one available is to script out every job and re-create the jobs on the destination server.

If there are only a few jobs you can very well script out each job individually, but once the number of jobs or the number of servers to be migrated increases, that approach becomes a challenge in itself.

For this reason the SQL Server product team provided a very convenient (but not so popular) option to script out all the jobs at once.

All you need to do is select the Jobs node in Object Explorer and open the Object Explorer Details tab (the shortcut key is F7). Once the Object Explorer Details tab is in view, you can select multiple jobs by holding the Ctrl (Control) key.

Once all the desired jobs are selected, right-click and choose Script Job as from the context menu. Please refer to the attached screenshot for a better idea.

[Screenshot: scripting multiple selected jobs from the Object Explorer Details tab]

Enjoy!

Hope you enjoyed the post, feel free to leave a comment. 🙂

Thanks,
Sarabpreet Singh Anand
Subscribe now to get latest Tips\blog posts in your Inbox


4 Aug

Using Covering Index

A non-clustered index that contains all the columns referenced by a query (in the SELECT list, the WHERE clause or the JOINs) is called a covering index.

In other words

When all the columns required to satisfy a query are included in a non-clustered index, so that the extra key lookup is avoided, that index is known as a covering index.

This type of index can effectively improve query performance. To understand why a covering index brings this extra performance, let's do some testing.

In this example, I'll create two identical tables, Stock and Stock1, with exactly the same data and number of rows. To test the performance benefit we'll create a different type of index on each table: on the first table an index with only the key columns, and on the second a covering index that also includes the non-key column. Let's see whether this affects performance.

This is the query we need to test (the full setup script is given at the end of the post):

select PID, Qty, calories, PName from Stock where Qty>=11 and calories< 200

On table Stock we created a clustered index on PID and a non-clustered index on (Qty, calories). On table Stock1 we created the same clustered index on PID and a non-clustered index on (Qty, calories), but alongside these two columns we also included the non-key column PName, turning the index into a covering index.

Against the first table, the query uses the non-clustered index to get PID, Qty and calories, and once those values are available the SQL engine has to fetch the PName column from the leaf level of the clustered index (basically a lookup against the clustering key, which is PID). If we could avoid traversing the clustered index to get the relevant PName values, we could easily improve query performance. That is why we created the second table, Stock1, where the non-clustered index on (Qty, calories) includes PName (a non-key column); since that index already holds the PName values, there is no need to traverse the clustered index, and those resources are saved.

When we executed both queries with STATISTICS IO enabled, we saw logical reads drop from 43 to 39. I know the numbers are small and the difference is not major, but there is definitely an improvement, and its size depends heavily on the number of records and the selection criteria.

[Screenshot: STATISTICS IO output for both queries]

 

Then we compared the execution plans and saw that the first query, on the Stock table whose index lacks the non-key column, takes 93% of the batch cost, compared to just 7% for the query on the second table.

[Screenshot: execution plan comparison for the two queries]

 

Moreover, if you compare the IO and CPU costs there is a drastic improvement. Hence we can conclude that, performance-wise, a covering index is much better than an ordinary non-clustered index for the queries it covers.

[Screenshots: operator IO and CPU cost details for the two plans]

 

In case you want to test this yourself, you can use the queries below. (Don't execute them on your production box; these queries are provided as-is, without any warranty.)

create database Test_Sarab
go
use test_sarab
go

create table Stock
(
PID int Identity primary key,
Qty int,
calories int,
PName varchar(30)
)

create table Stock1
(
PID int Identity primary key,
Qty int,
calories int,
PName varchar(30)
)

insert into Stock values
(10, 267, 'Doughnut Custard'),
(11, 410, 'Doughnut Iced Glazed'),
(11, 256, 'Doughnut Jam'),
(11, 266, 'Muffin Blueberry'),
(11, 149, 'Breakfast Muffin'),
(13, 401, 'Muffin Chocolate Chip'),
(9, 141, 'Wholemeal Muffin'),
(13, 78, 'Brown Bread')
go 1001

insert into Stock1 values
(10, 267, 'Doughnut Custard'),
(11, 410, 'Doughnut Iced Glazed'),
(11, 256, 'Doughnut Jam'),
(11, 266, 'Muffin Blueberry'),
(11, 149, 'Breakfast Muffin'),
(13, 401, 'Muffin Chocolate Chip'),
(9, 141, 'Wholemeal Muffin'),
(13, 78, 'Brown Bread')
go 1001

create index Id2 on Stock (Qty, calories)

create index FinalIDx on Stock1 (Qty, calories) include (PName)

-- dbcc dropcleanbuffers

SET STATISTICS IO ON;
select PID, Qty, calories, PName from Stock where Qty>=11 and calories< 200

select PID, Qty, calories, PName from Stock1 where Qty>=11 and calories< 200

 

Thanks,

Sarabpreet Singh Anand


26 Feb

Understanding, Analysing and Troubleshooting Deadlocks – Part2

In the first part, Understanding, Analysing and Troubleshooting Deadlocks – Part1, we discussed what a deadlock is, how the SQL Server engine behaves when a deadlock is encountered, how the victim is identified, why deadlocks occur, and so on.

After reading the error message, people often think that either this is the only information SQL Server has about the deadlock, or that the SQL engine doesn't share more because it is internal. That's not true: you can get plenty of the information you may need in order to resolve or avoid deadlocks in the future (yes, this is quite possible, in most cases). This information about a deadlock is also known as the deadlock graph.

There are many ways to get the deadlock graph; let's discuss a few in detail:

1. Trace Flags

Trace flags are used to temporarily set specific server characteristics or to switch off a particular behaviour. Prior to SQL Server 2005, using trace flags was the only option for getting deadlock information. In SQL Server 2000 you can use trace flags 1204 and 1205 to get information about the types of locks held, the commands being executed, and more details about the deadlock.

Starting with SQL Server 2005, Microsoft introduced trace flag 1222 and a Profiler trace event that can be used to capture the deadlock graph. With trace flag 1222 you get detailed information about all the processes participating in the deadlock: which query each was running, which resources were locked, what the isolation level was, and what type of lock request was being blocked and why. You can use all of this information to troubleshoot the problem.

You can switch trace flags on either by using DBCC commands or via startup parameters.

 

a. Enable Trace Flag using DBCC Commands

DBCC TRACEON(1222);

Once the trace flag is enabled you can check its status using the DBCC TRACESTATUS command; it provides output similar to the screenshot below.

[Screenshot: DBCC TRACESTATUS showing trace flag 1222 enabled at session level]

In the screenshot above you can see that trace flag 1222 has been enabled, but at session level, meaning it works only for the current session. If you want to enable it for the whole instance, you need to pass one more parameter, -1, which enables the trace flag at global level. To do that, let's first switch the trace flag off at session level and then enable it globally.

DBCC TRACEOFF(1222);

DBCC TRACEON(1222, -1);

Now let’s check the status one more time.

[Screenshot: DBCC TRACESTATUS showing trace flag 1222 enabled globally]

So the trace flag has now been enabled at the global (instance) level.

 

b. Enable Trace Flag using the -T Startup Option

SQL Server uses startup parameters to locate the master database data and log files and the error log files; you can also specify some server-wide conditions through them. Most users never need anything beyond the data, log and error log file paths, unless they want to change the default startup behaviour of the database engine.

Trace flags can also be supplied here to start the server with non-standard behaviour. When using this option, pass trace flags with an uppercase T; a lowercase t enables other internal flags that are required only by SQL Server support engineers. The -T startup option enables a trace flag globally; you cannot enable a session-level trace flag by using a startup option.

You can do this in SQL Server Configuration Manager: choose the right SQL Server service, right-click and open Properties. Follow the screenshots below to specify the trace flag (for example, -T1222) and press Add. (The screenshots below are from SQL Server 2012.)

Note: changing any startup parameter or option requires a restart of the SQL Server service.

[Screenshots: SQL Server Configuration Manager – adding the -T1222 startup parameter to the service properties]

 

The Deadlock Graph

Once the trace flag is enabled (no matter which way you enabled it), whenever a deadlock is caught by the deadlock monitor the relevant information is captured and written to the error log. If you read the relevant error log, you'll find the deadlock graph (the deadlock information about all participating processes, in XML format) captured there in clear text.

Refer to the screenshot below to see what it looks like:

[Screenshot: deadlock graph XML captured in the SQL Server error log]
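
If you'd rather not scroll through the log viewer, you can also search the current error log for these entries from T-SQL; a quick sketch (xp_readerrorlog is undocumented, so treat it as a convenience rather than a contract):

EXEC sys.xp_readerrorlog 0, 1, N'deadlock'; -- 0 = current log, 1 = SQL Server error log, filter on 'deadlock'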

In the next part we'll see how to get the same deadlock graph (both the text information and a GUI representation) using Profiler, and how to interpret it.

 

Happy Learning 🙂

Thanks,
Sarabpreet Singh Anand
Subscribe now to get latest Tips\blog posts in your Inbox


4 Nov

Free SQL Server Webcast | Tap is the only Solution Ver.2 | 08-Nov-2012

Tap is the only solution – Ver. 2

Scarcity is the fundamental economic problem: humans have unlimited wants and needs in a world of limited resources. Alternatively, scarcity implies that not all of society's goals can be pursued at the same time.
Just like water, server resources are also getting scarce, and we should find a way to put a tap on these resources so we can utilize them optimally as and when needed. In this session we'll see a way to control and manage server resources to achieve the desired performance and targets. (Level: 100 – 300)

Register: http://ssgwebcast08nov2012.eventbrite.com/

Note: You will get the LiveMeeting details a few hours before the event. Please make sure your laptop/PC is equipped with the LiveMeeting client.
