Category Archives: LearnSQL

Error while opening SQL Server Configuration Manager – Fixed

Hi Friends,

Today I again experienced the same old frustrating error while opening SQL Server Configuration Manager on one of the SQL Servers. Refer to the screenshot below.

1_Error_while_opening_SQL_Server_Configuration_Manager_Fixed

The error says:  Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager.

Invalid namespace [0x8004100]. (I'm adding the error text here so that you can find this post through a search engine when you face this issue.)

Now, as the error suggests, everybody will jump straight to the services.msc console to check whether the WMI service (Windows Management Instrumentation) is running. I checked the same thing; it was running. The next step you might take is to check whether the WMI service is working properly, and so on.

Please stop validating your WMI service: the issue is not related to WMI connectivity or the service itself. Having said that, it is related to WMI, but the error message does not point to the underlying issue.

The issue is with the WMI provider for your SQL Server instance. Basically, whenever you uninstall a SQL Server instance from a server, the uninstall also removes the WMI provider. This becomes a problem when you have multiple instances on a server and you uninstall one of them.

The 32-bit instance and the 64-bit instance of SQL Server 2008 share the same WMI configuration file. This file is located in the %programfiles(x86)% folder.

So what’s the solution\Fix?
The fix is to recompile a .mof file that contains all the configuration settings, which regenerates the WMI provider for SQL Server. All you need to do is locate the folder mentioned below on your server and issue the mofcomp command with the correct path:

mofcomp "C:\Program Files (x86)\Microsoft SQL Server\100\Shared\sqlmgmproviderxpsp2up.mof"

(The 100 folder corresponds to SQL Server 2008; adjust the version folder to match your installation, e.g. 90 for SQL Server 2005 or 110 for SQL Server 2012.)

This is how it looks once you issue the command in a Command Prompt.

2_Error_while_opening_SQL_Server_Configuration_Manager_Fixed

That's it. Once you see "Done!" in the Command Prompt, everything is fixed and back to normal.

Now you can open your SQL Server Configuration Manager and work as you wish.

Do let me know if there is any other way to fix this; I’ll add that to my knowledge.

Enjoy!

Regards

Sarabpreet Anand

Resolving a Missing Disk Issue for TempDB – with a Twist

A not-so-common but major issue for any SQL Server is a missing disk\drive. If user DB files were on that disk, chances are you'll lose some data if the same disk is not allocated back to you; moreover, you have to bear downtime. (There are some exceptions, which we'll discuss in another blog post.)

But if your TempDB files were on the missing drive\location, SQL Server will not even start, and it gives the error below:

CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file 'S:\TempDB_Date\tempdb.mdf'.

The only two options left are:

  1. Ask your storage admin to allocate a LUN (fresh or old), map the storage to the server, assign the same drive letter, create the folder if it's not already there, and then start your SQL Server. (You might also need to fix permissions if required.)
  2. If you have space available on any other drive, start your SQL Server in single-user mode, change the location of the TempDB files with an ALTER DATABASE command (see the sketch after this list), and then restart SQL Server in normal mode.
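
For reference, here is a minimal sketch of the ALTER DATABASE step from option 2. It assumes the default logical file names tempdev and templog and a hypothetical D:\TempDB_New folder; the new paths take effect on the next restart.

-- Move TempDB's data and log files to a new folder (placeholder path; applied at restart)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB_New\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB_New\templog.ldf');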

Both of the above solutions will work, but the problem is that you might not have enough time to implement either of them. If you ever encounter such an issue on a Tier 1 or Tier 2 server, a Sev 1 incident will be raised immediately, awaiting your response; on top of that, you need to represent your tower on a bridge call to answer multiple questions (what\where\when\why, etc.). All of this adds stress, and in between you still need to take multiple steps to solve the issue.

Your SLA will also take a hit if you are not able to resolve the issue in the given time. (SLAs vary, but starting SQL Server in single-user mode, or getting approval and allocating storage on the server, takes time.)

Quickest Solution

The quickest solution is learning just one DOS command. Yes, good old DOS exposes a highly powerful command which can resolve the issue in a few minutes, if not seconds.

In my example the S: drive was missing; here is a screenshot of the available drives.

Now all you need to do is find a drive\disk on the server with enough space and issue the command below:

SUBST S: D:\Sarabpreet\SSG_Example

– You need to replace the first parameter with the drive letter you need, and the second parameter with the location\drive where you have enough space to create the TempDB files.

By issuing the above command, the path given as the second parameter is mapped to the virtual drive letter given as the first parameter.

 2_Resolving_Disk_Missing_Issue_for_TempDB_with_a_Twist

You can also check where the drive is pointing – verify by issuing the SUBST command with no parameters:

 3_Resolving_Disk_Missing_Issue_for_TempDB_with_a_Twist

After that you can create the folder and restart the SQL Server services.

If you want to delete the created drive once the disk has been allocated back to you, you can use the /D switch, as shown below.
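
For example, to remove the virtual S: drive created above:

rem Remove the S: drive mapping created earlier by SUBST
SUBST S: /D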

 

Note: A reboot will clear the drives created by SUBST, but you can work around this by keeping the command in the autoexec.bat file.

Hope you find it helpful.

Regards

Sarabpreet Anand

SQL Tip-12 | Using SQL Server Backup to Windows Azure Tool – Part2

In the last part we discussed the installation, configuration and usage of the SQL Server Backup to Windows Azure Tool.

In this part we’ll discuss some internals of this tool and how to restore the backups which were moved to Azure storage.

As we discussed in the last part, this tool creates a login with the name SQLBackup2Azure; let's diagnose it further. The tool creates this login as a low-privilege account which is used to run its service. It also creates a group called "TempGroup", and the SQLBackup2Azure login is made a member of this group. The names of the group and the login are hardcoded and can't be changed; however, the login's password is generated automatically and randomly, and is not known to anyone, including Microsoft.

When you start adding rules in this tool, this group gets read\write access on the specified locations – that is how the tool is able to back up and restore.

The biggest disadvantage of this tool is that it provides no mechanism to track the uploaded files and no error logging of any kind. You don't even have control over the upload schedule, since the service keeps checking for files on its own.

 

Restore a Database from the Backup file which was uploaded using this tool

This tool creates a stub file with some metadata to use during restore. You can use this stub file just like a regular backup file when you want to restore a database: SQL Server automatically reads the metadata in the file and restores the database by fetching the required file from the Azure account. Internet connectivity is a must, since the stub file contains only metadata about the backup file; everything else is in the Azure Storage account.
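
As a rough sketch (the path below is just a placeholder for wherever the tool keeps its stub files), the restore itself is ordinary T-SQL pointed at the stub:

-- Point RESTORE at the stub file as if it were the backup itself;
-- SQL Server reads the stub's metadata and pulls the real backup from Azure.
RESTORE DATABASE azure_test
FROM DISK = 'C:\SQLBackups\azure_test.bak'
WITH RECOVERY;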

 

Stub File

This tool uploads the specified backups from the file path mentioned in the rule, but once a backup file has been uploaded it creates a stub file for it and leaves it in the local storage folder, with the metadata to be used during restore.

 

What if the stub file is inaccessible or deleted by mistake?

A copy of the stub file is uploaded along with the backup to your Azure Storage account. If for any reason the local stub file is inaccessible, you can download the copy and place it in the same location which is configured for this tool.

 

Some known issues:

1. In the event of a loss of network connectivity, SQL Server will surface the error as not being able to write to the device. You will need to clean up the files relating to this backup operation (if any) and retry the backup.

2. If you want to create a backup using the Tool, you must create a new backup file instead of overwriting/appending to an existing backup file.

 

Thanks,
Sarabpreet Singh

SQL Tip-11 | Using SQL Server Backup to Windows Azure Tool

As you know, there are many cool benefits to using a Windows Azure Storage account, and everybody wants to take advantage of it. Last month I blogged about the "SQL DB Backup to URL\Cloud" feature, which was first introduced in SQL Server 2012 and was later integrated completely, in every aspect, in SQL Server 2014.

But what if you are still running an older version of SQL Server and want to take advantage of the cloud?

Microsoft is being very kind and provides a workaround: they released a small tool which can help you create a single cloud backup strategy. That means you will be able to take backups to the cloud from earlier SQL Server versions like SQL Server 2005, 2008 and 2008 R2.

By using this tool you get the latest features – backup to the cloud, encryption and compression – on all these earlier versions. Natively, these features are available only in the following versions\editions:

Feature              Version   Edition
Backup Encryption    2014      Enterprise, Standard and Business Intelligence
Backup Compression   2008      Enterprise
Backup Compression   2008 R2   Enterprise and Standard


Isn't it cool that you get all the latest features even if you've not upgraded to a new version\edition?

The utility can be downloaded from this link and is less than 10 MB in size. Let's see how to install, configure and use this tool.

Installation of this tool is quite simple, just like any other regular software.

(Installation wizard screenshots)

The tool installs a service which takes responsibility for moving the backups from the local system to the cloud. The tool also creates a separate login named "SQLBackup2Azure".

service

 

To configure this tool you have to provide the local path of a folder where your backups will be placed – this location acts as the source for the tool. You also need to specify the file name\extension of the files to be picked up and backed up to the cloud. The best part is that you can use wildcards (for example, *.bak) to select the type of files.

config1 config2

 

Next you need to enter the details of your Windows Azure Storage account. This is similar to creating a credential: it provides the tool the access it needs to back up the files to Azure. You get an option to verify the account details right from the wizard.

config3

On the next page you also get an option to use encryption and compression for all these backups.

config4 config5

Read the next part here…

Thanks,
Sarabpreet Singh

SQL Tip-10 | SQL Server Database Backup to Cloud (Microsoft Azure)

This feature helps you leverage the power of the cloud with your SQL Server instance by taking your backups directly to the cloud.

Backup to URL is not new in SQL Server 2014; it was first introduced in SQL Server 2012 SP1 CU2, but you could only use the URL in a query. Starting with SQL Server 2014 the feature is fully integrated: you can specify a URL while taking a backup or doing a restore, whether you perform the task via query, the Backup\Restore GUI, PowerShell cmdlets or SMO.

To be able to take backups to, or restore from, Microsoft Azure (earlier named Windows Azure) you must create a credential. Before creating a credential you must already have a storage account on Azure. Once you create your storage account you'll be able to get the storage access key from the Azure portal; this key provides full admin control over your storage account.

Here is the script to create the Credential:

Create credential Storage_4Backup
with identity = 'dbbackup12',
SECRET = 'H61bUxq5ld+AUFGa3Zc30CKYGSOyMDGEb/MKcCjvssdrgF34PZt+TV/42/HoJpOgnrM82KQpuCYYHEudjg=='
go

 

You need to replace the secret with your own storage access key; the screenshot below shows where to copy it from.

clip_image002

Once the credential is created on your SQL Server instance, you are all set to take backups to, or restore from, your Windows Azure storage account.

Here is the T-SQL Script for the same:

Backup database azure_test
to URL = 'http://dbbackup12.blob.core.windows.net/sqlprod/azure_test.bak'
with credential = 'Storage_4Backup', format, compression, stats=10,
MediaName = 'azure_test_22_02_2014',
mediadescription = 'Backup_of_azure_test'
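
For completeness, the restore counterpart is symmetric. Here is a minimal sketch reusing the same credential and URL (add the REPLACE option if the database already exists on the instance):

Restore database azure_test
from URL = 'http://dbbackup12.blob.core.windows.net/sqlprod/azure_test.bak'
with credential = 'Storage_4Backup', stats=10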

 

Note: To get the complete script bundled with a step-by-step demo, follow this link.

You can also back up to Windows Azure using the Backup UI – but the credential is a must. Here is the screenshot.

If the credential already exists you can select it from the drop-down menu; otherwise the wizard gives you an option to create one.

clip_image004

Everything else is self-explanatory. 😉

 

Thanks,
Sarabpreet Singh

Find Users with DBOwner Database role

Scenario: Auditors need a list of users who have the DBOwner database role on any of the databases hosted on Tier 1 servers. Producing this information was very easy for the few servers where the database count was around 10 to 20, but we also have 8 SharePoint servers, each hosting 200-350 DBs. Extracting this information on all those servers one by one would have been a pain, so to ease things I created a script which gets all of this information in a single execution.

This script helps you extract the list of users who have the DBOwner role from all databases. Once you execute the query you'll get a three-column output: Database_Name, Role_Name and DBUser_Name. The column names are self-explanatory.

You can download the script from this link: Find_DBOwners_from_All_Databases.sql
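
If you just want the gist of the approach, here is a minimal sketch of the same idea (this is not the downloadable script itself); it relies on the undocumented but widely used sp_MSforeachdb procedure:

-- List db_owner role members in every database; columns mirror the script's output
EXEC sp_MSforeachdb 'USE [?];
SELECT DB_NAME() AS Database_Name, r.name AS Role_Name, m.name AS DBUser_Name
FROM sys.database_role_members drm
JOIN sys.database_principals r ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id
WHERE r.name = ''db_owner'';'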

This is how the output will look:

Find_DBOwner_all_Dbs

 

Note: Tested successfully on SQL Server 2012.

Enjoy!
Happy Learning 🙂

Thanks,
Sarabpreet Singh Anand
Subscribe now to get latest Tips\blog posts in your Inbox

What is Checkpoint in SQL Server?

A checkpoint is an internal process that writes all dirty pages (modified pages) from the buffer cache to physical disk; apart from this, it also writes log records from the log buffer to the physical log file. Writing dirty pages from the buffer cache to the data file is also known as hardening the dirty pages.

It is a dedicated process run automatically by SQL Server at specific intervals, and SQL Server runs the checkpoint process for each database individually.

Checkpoints help reduce SQL Server's recovery time in the event of an unexpected shutdown or system crash\failure.

 

In SQL Server 2012 there are four types of Checkpoints:

Automatic: This is the most common checkpoint, which runs as a background process to make sure a SQL Server database can be recovered within the time limit defined by the Recovery Interval server configuration option.

Indirect: This is new in SQL Server 2012. It also runs in the background, but to meet a user-specified target recovery time for the specific database where the option has been configured. Once TARGET_RECOVERY_TIME has been set for a given database, it overrides the Recovery Interval specified for the server and avoids automatic checkpoints on that database.

Manual: This one runs just like any other T-SQL statement; once you issue the CHECKPOINT command it runs to completion. A manual checkpoint runs for your current database only. You can also specify an optional checkpoint_duration, which is the time in seconds within which you want the checkpoint to complete. See the sketch below.
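
Here is a quick sketch of the knobs involved (MyDB is a placeholder database name, and the values are illustrative only):

-- Automatic: governed by the server-wide recovery interval (in minutes)
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'recovery interval (min)', 1; RECONFIGURE;

-- Indirect (SQL Server 2012+): per-database target recovery time; overrides the server setting
ALTER DATABASE MyDB SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- Manual: checkpoint the current database, requesting completion within 10 seconds
USE MyDB;
CHECKPOINT 10;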

Internal: As a user you can't control internal checkpoints; they are issued by specific operations such as:
1. Shutdown initiates a checkpoint operation on all databases, except when the shutdown is not clean (SHUTDOWN WITH NOWAIT).
2. The recovery model is changed from Full\Bulk-Logged to Simple.
3. A backup of the database is taken.
4. If your DB is in the Simple recovery model, the checkpoint process executes automatically either when the log becomes 70% full or based on the Recovery Interval server option.
5. An ALTER DATABASE command that adds or removes a data\log file also initiates a checkpoint.
6. A checkpoint also takes place when the recovery model of the DB is Bulk-Logged and a minimally logged operation is performed.
7. Creation of a database snapshot.

Happy Learning 🙂

Thanks,
Sarabpreet Singh Anand
Subscribe now to get latest Tips\blog posts in your Inbox

Using Covering Index

A non-clustered index which contains all the columns referred to by a query (in the SELECT, WHERE or JOIN clauses) is called a covering index.

In other words

When all the columns required to satisfy a query are included in a non-clustered index, so that the extra key lookup is avoided, that index is known as a covering index.

This type of index can effectively improve query performance. To understand why a covering index gains that extra performance, let's do some testing.

In this example, I'll create two identical tables, Stock and Stock1, with exactly the same data and number of rows. To test the performance benefit, we create a different type of index on each table: on the first table an index with the key columns only, and on the second table a covering index which also includes the non-key column. Let's see whether it impacts performance.

This is the query we need to test (the full setup script is given at the end of the post):

select PID, Qty, calories, PName from Stock where Qty>=11 and calories< 200

On table Stock we created a clustered index on PID and a non-clustered index on Qty and calories, whereas on table Stock1 we created a clustered index on PID and a non-clustered index on Qty and calories which also includes the non-key column PName, converting it into a covering index.

For the first table, the query uses the non-clustered index to get PID, Qty and calories, and once those values are available the engine looks up the PName column at the leaf level of the clustered index (basically a lookup against the clustering key, which is PID). If we could avoid traversing the clustered index to get the relevant PName values, we could easily improve query performance. To do so, we created the second table, Stock1, where the non-clustered index on Qty and calories includes PName: since that index already has the PName values, there is no need to traverse the clustered index, and we save those resources.

When we executed both queries with STATISTICS IO ON, we saw a decrease in logical reads from 43 to 39. I know the numbers are small and it is not a major difference, but there is definitely an improvement, and it depends heavily on the number of records and the selection criteria.

image

 

Then we compared the execution plans and saw that the first query, on the Stock table whose index does not include the non-key column, takes 93% of the batch cost, compared to just 7% for the query on the second table.

image

 

Moreover, if you compare the IO and CPU costs there is a drastic improvement. Hence we can conclude that, performance-wise, covering indexes are much better than a normal non-clustered index.

image    image

 

In case you want to test this, you can use the script below. (Don't execute these queries on your production box; they are provided as-is, without any warranty.)

create database Test_Sarab
go
use test_sarab
go

create table Stock
(
PID int Identity primary key,
Qty int,
calories int,
PName varchar(30)
)

create table Stock1
(
PID int Identity primary key,
Qty int,
calories int,
PName varchar(30)
)

insert into Stock values
(10, 267, 'Doughnut Custard'),
(11, 410, 'Doughnut Iced Glazed'),
(11, 256, 'Doughnut Jam'),
(11, 266, 'Muffin Blueberry'),
(11, 149, 'Breakfast Muffin'),
(13, 401, 'Muffin Chocolate Chip'),
(9, 141, 'Wholemeal Muffin'),
(13, 78, 'Brown Bread')
go 1001

insert into Stock1 values
(10, 267, 'Doughnut Custard'),
(11, 410, 'Doughnut Iced Glazed'),
(11, 256, 'Doughnut Jam'),
(11, 266, 'Muffin Blueberry'),
(11, 149, 'Breakfast Muffin'),
(13, 401, 'Muffin Chocolate Chip'),
(9, 141, 'Wholemeal Muffin'),
(13, 78, 'Brown Bread')
go 1001

create index Id2 on Stock (Qty, calories)

create index FinalIDx on Stock1 (Qty, calories) include (PName)

--dbcc dropcleanbuffers

SET STATISTICS IO ON;
select PID, Qty, calories, PName from Stock where Qty>=11 and calories< 200

select PID, Qty, calories, PName from Stock1 where Qty>=11 and calories< 200

 

Thanks,

Sarabpreet Singh Anand