17 Mar

Unable to open Maintenance Plans in SSMS

Hi Friends,

A few days back, I got a backup failure alert for one of our most critical SQL instances. I quickly connected to our jump-box and opened a session on the server to diagnose further. From the job history I concluded that I needed to modify the maintenance plan to fix it, so I located the respective maintenance plan (we have many maintenance plans on this instance), and when I double-clicked it I was welcomed with a strange, unique error.

I was using SQL Server 2005 SP2 on this server. Here is a screenshot of the error I received.


The error says: (reproducing the error as text so that search engines can index this entry and you can find it when needed)

Microsoft SQL Server Management Studio is unable to load this document:
SSIS object name is invalid. More specific errors would have been raised explaining the exact naming problem.

And I got a pop-up with the error:

Value cannot be null.
Parameter name: component (System.Design)

After clicking OK, I noticed the same first error repeated in the error list without any description.


Now that was pretty strange and unique (to me at least), so I started searching for similar problems. After a lot of Binging and Googling I came to know that there is a DTS DLL which sometimes stops working or gets unregistered (a reason was mentioned, but it was not correct in my case, so I don't want to mislead you all), and that re-registering the same DLL can fix this problem.

Steps I followed in order to resolve this:

  1. To register dts.dll, first locate the C:\Program Files\Microsoft SQL Server\100\DTS\Binn directory from a Command Prompt.

Note: The complete path may be different in your case. I am using the C:\ drive since the SQL instance is installed on the C drive for this server.



2. Issue the command: regsvr32 dts.dll

After pressing Enter, I got a pop-up confirming that DTS.DLL had been registered.



That's it; now open SQL Server Management Studio and it will let you edit\view the SSIS packages. The problem I was having with the backup was a different story altogether; I will tell you about it some other day.

In case you are still not able to open SSIS packages, you may also want to try the steps given below, based on the SQL engine you have. (The above solution worked for me, so I never tried the additional options.)

If your instances of SQL Server 2005 are at SP2 or later, run the following command at a command prompt to register the 32-bit Dts.dll file:

%windir%\syswow64\regsvr32 "%ProgramFiles(x86)%\Microsoft SQL Server\90\dts\binn\dts.dll"

You may also need to manually register MsDtsSrvrUtil.dll, using:

%windir%\syswow64\regsvr32 "%ProgramFiles(x86)%\Microsoft SQL Server\90\dts\binn\MsDtsSrvrUtil.dll"

If you are running 64-bit Integration Services, you may also need to manually register the 64-bit DLLs using these commands:

%windir%\system32\regsvr32 "%ProgramFiles%\Microsoft SQL Server\90\dts\binn\dts.dll"

%windir%\system32\regsvr32 "%ProgramFiles%\Microsoft SQL Server\90\dts\binn\MsDtsSrvrUtil.dll"



Ideally this should fix the issue; please check that you registered the DLLs for the correct build (32-bit or 64-bit).

You can also try:

regsvr32 oleaut32.dll

regsvr32 actxprxy.dll

Also check whether the client tools are updated on the box where SQL 2005 SP4 is installed.

The above issue should get fixed once the SQL 2005 client tools are also updated with SP4 and you have registered the correct set of DLLs.



Sarabpreet Anand

3 Mar

Memory Configuration Changes | SQL Server 2012

Hi Friends,

There are a few changes to the Memory Manager in SQL Server 2012, and a DBA should be aware of them before upgrading to or installing a SQL 2012 instance in production.

In earlier versions of SQL Server (before SQL 2012), SQL Server could also consume memory outside the limit defined by Max Server Memory. For those who don't know, earlier versions allocated memory mainly in two ways: single-page and multi-page. Single-page allocations consumed memory within the limit defined by the Min and Max Memory settings; the main consumer of this type of memory is the Buffer Pool. Multi-page allocations could consume memory outside that limit, from what is so-called available for the OS.

SQL Server 2012 has a new page allocator that manages both single-page and multi-page allocations (less than 8 KB and greater than 8 KB allocation requests). Therefore, there is no separate categorization that is called “Multi-Page allocations” in SQL Server 2012.


Changes to “max & min server memory (MB)”

In earlier versions of SQL Server (2005, 2008, 2008 R2), the following configuration options determined the limits of physical memory that the buffer pool consumed. Notice that we are talking about physical memory in this case, physical memory that is committed by the SQL Server database engine process:

  • max server memory (MB)
  • min server memory (MB)

This configuration option typically included only memory allocations that were less than or equal to 8 KB in the SQL Server process. These allocations were also referred to as "single-page allocations" because the SQL Server memory manager used a page size of 8 KB. This configuration did not include the following memory allocation requests:

  • Multi-Page allocations from SQL Server: These are allocations that request more than 8 KB.
  • CLR allocations: These allocations include the SQL CLR heaps and its global allocations that are created during CLR initialization.
  • Memory allocations for thread stacks in the SQL Server process.
  • Memory allocation requests made directly to Windows: These include Windows heap usage and direct virtual allocations made by modules that are loaded into the SQL Server process. Examples of such memory allocation requests include allocations from extended stored procedure DLLs, objects that are created by using Automation procedures (sp_OA calls), and allocations from linked server providers.


Starting with SQL Server 2012, Multi-Page allocations and CLR allocations are also included in memory limits that are controlled by max server memory (MB) and min server memory (MB).

Carefully review your current max server memory (MB) and min server memory (MB) values after you upgrade to SQL Server 2012. You should review such values because SQL Server 2012 now includes and accounts for more memory allocations compared to earlier versions. These changes apply to both 32-bit and 64-bit versions of SQL Server 2012.

The following table indicates whether a specific type of memory allocation is controlled by the max server memory (MB) and min server memory (MB) configuration options.


Type of memory allocation        | SQL Server 2005, 2008, 2008 R2 | SQL Server 2012
Single-page allocations          | Yes | Yes, consolidated into "any size" page allocations
Multi-page allocations           | No  | Yes, consolidated into "any size" page allocations
CLR allocations                  | No  | Yes
Thread stacks memory             | No  | No
Direct allocations from Windows  | No  | No
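To review or adjust these limits after an upgrade, a minimal T-SQL sketch looks like the following (the memory values are illustrative, not recommendations; size them for your own server):

```sql
-- Review and, if needed, adjust the memory limits.
-- 8192 / 2048 MB are illustrative values only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 8192;
EXEC sp_configure 'min server memory (MB)', 2048;
RECONFIGURE;
```

Since SQL Server 2012 now counts multi-page and CLR allocations inside this limit, re-check the value you carried over from an earlier version rather than assuming it still fits.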


Changes to “memory_to_reserve”

Because SQL Server 2012 has the new “any size” page allocator that handles allocations greater than 8 KB, the memory_to_reserve value does not include the multi-page allocations. Except for this change, everything else remains the same with this configuration option.

AWE Feature is not available in SQL Server 2012


Even though the “awe enabled” feature is not available in 32-bit SQL Server 2012, you can still use the “locked pages” feature by assigning the “lock pages in memory” user right for the SQL Server startup account.

Carefully review the memory requirements of the existing instance of SQL Server before you upgrade to SQL Server 2012.

Do let us know your thoughts on this by leaving a comment.


Sarabpreet Anand

29 Feb

Which trigger will be triggered first?

Hi Friends,

Yesterday I was working on a specific requirement of a client to create DDL trigger for a Database to log & validate certain things. While doing so I discovered there were already two DDL triggers created on the DB for the same event. Now I was not sure which one will be triggered first and after searching msdn I came across a stored procedure which can control this behaviour to certain extent.

sp_settriggerorder is a system stored procedure which can specify which AFTER triggers should be fired first or last. Unfortunately you can't control the complete sequence, but this much is more than enough to handle most requirements.

This system stored procedure accepts four parameters namely:

TriggerName: the trigger name; you should mention the schema name wherever applicable. Note that INSTEAD OF triggers are not accepted, since only AFTER triggers can be ordered.

Order: This parameter accepts one of three values ('First', 'Last', 'None'); you can specify 'None' if you don't wish to set any sequence for this trigger.

Stmttype (StatementType): here you specify the event type for which the trigger has been created. You can't specify an event group.

NameSpace: This parameter describes the scope of the trigger; the valid values are 'DATABASE', 'SERVER' and NULL, which denote database scope, server scope and a DML trigger respectively.

Here is an example where I am configuring the sales.zone_target trigger to be the last trigger fired for an INSERT statement:

Exec sp_settriggerorder

@triggername = 'sales.zone_target',

@order = 'Last',

@stmttype = 'INSERT';

Caveat: there are a few things you need to consider before using this SP.

  1. Replication automatically generates a first trigger for tables if an immediate updating or queued updating subscription is configured, so if you have defined a first trigger on a table you will not be able to add it to replication. To make it work, either alter the replication trigger or change the ordering of the other trigger on the object.
  2. There can be only one First and one Last trigger for each statement on a single table.
  3. If an ALTER TRIGGER statement changes a first or last trigger, the First or Last attribute originally set on the trigger is dropped, and the value is replaced by None. The order value must be reset by using sp_settriggerorder.
  4. If the same trigger must be designated as the first or last order for more than one statement type, sp_settriggerorder must be executed for each statement type. Also, the trigger must be first defined for a statement type before it can be designated as the First or Last trigger to fire for that statement type.
  5. The order of execution of DDL triggers that exist on the same event is as follows:
  • The server-level trigger marked First.
  • Other server-level triggers.
  • The server-level trigger marked Last.
  • The database-level trigger marked First.
  • Other database-level triggers.
  • The database-level trigger marked Last.
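Putting the parameters together for the DDL scenario that started this post, a minimal sketch would look like the following (the trigger name and event are hypothetical):

```sql
-- Make a (hypothetical) database-scoped DDL trigger fire first
-- for the CREATE_TABLE event. @namespace = 'DATABASE' marks it
-- as a database-level DDL trigger; use 'SERVER' for server scope
-- and NULL for a DML trigger.
EXEC sp_settriggerorder
    @triggername = 'ddl_audit_trigger',
    @order       = 'First',
    @stmttype    = 'CREATE_TABLE',
    @namespace   = 'DATABASE';
```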

Hope this will help you, do leave us comments & let us know how we are doing.




Sarabpreet Anand

18 Feb

Memory Shortage for all 32 Bit SQL Server 2012 Instances

Hi Friends,

As you all know, AWE has been deprecated and is not available from SQL Server 2012 onward (find details here); as a result, when you upgrade from SQL Server 2008 R2 to the next version of SQL Server, you cannot use more memory than the virtual address space limits in 32-bit instances of SQL Server. If you must have more memory for the instance, you have to migrate to a 64-bit instance of SQL Server.

After you upgrade to SQL Server 2012, the maximum amount of memory that can be used by the 32-bit instance of SQL Server is determined as follows.

SQL Server and operating system settings             | Maximum amount of memory used by SQL Server
32-bit SQL Server on 32-bit OS                       | 2 GB
32-bit SQL Server on 32-bit OS with /3GB boot option | 3 GB
32-bit SQL Server on 64-bit OS                       | 4 GB

For those who don't know much about AWE:

AWE was used in earlier versions of 32-bit SQL Server so that they could support more than 4 GB of physical memory. SQL Server can access up to 64 GB of memory on Microsoft Windows Server 2000 and 2003.

Standard 32-bit addresses can map a maximum of 4 GB of memory. By default, on 32-bit Microsoft Windows operating systems, 2 GB is reserved for the operating system, and 2 GB is made available to the application. If you specify the /3GB parameter in the Boot.ini file of Windows Server, the operating system reserves only 1 GB of the address space, and the application can access up to 3 GB.

AWE is a set of extensions to the memory management functions of Windows that allow applications to address more memory than the 2-3 GB that is available through standard 32-bit addressing. AWE lets applications acquire physical memory, and then dynamically map views of the non-paged memory to the 32-bit address space.

Although the 32-bit address space is limited to 4 GB, the non-paged memory can be much larger. This enables memory-intensive applications, such as large database systems, to address more memory than can be supported in a 32-bit address space.

Before you configure the operating system for AWE on 32-bit operating systems, you must add the /PAE parameter to the Boot.ini file and reboot the computer. In Windows Server 2003, PAE is automatically enabled only if the server is using hot-add memory devices.

The SQL Server buffer pool can fully utilize AWE-mapped memory; however, only database pages can be dynamically mapped to and unmapped from SQL Server's virtual address space and take full advantage of memory allocated through AWE. AWE does not directly help support additional users, databases, queries, or other objects that permanently reside in the virtual address space.

Lock pages in Memory

This policy determines which accounts can use a process to keep data in physical memory, preventing the system from paging the data to virtual memory on disk. The Lock Pages in Memory option is set to OFF by default in SQL Server. If you have system administrator permissions, you can enable the option manually by using the Windows Group Policy tool (gpedit.msc) and assign this permission to the account that SQL Server runs under.

Although it is not required, we recommend locking pages in memory when using 64-bit operating systems. For 32-bit operating systems, Lock pages in memory permission must be granted before AWE is configured for SQL Server.


Sarabpreet Anand

16 Feb

SQL Server AWE Enabled Option | Deprecated

As we already know, Windows-based applications can use the Windows AWE (Address Windowing Extensions) APIs to allocate and map physical memory into the process address space. AWE allows 32-bit operating systems to access large amounts of memory. Memory that is allocated by using this method is never paged out by the operating system, provided the "Lock Pages in Memory" user right (LPIM) has been granted to the application's service account.


The Lock Pages in Memory right is granted to the Local System service account by default.

SQL Server supports dynamic allocation of AWE memory on Windows Server. The SQL Server 64-bit version also uses “locked pages” to prevent the process working set (committed memory) from being paged out or trimmed by the operating system. When you enable “locked pages,” it is very important to set an appropriate value for “max server memory” and for “min server memory” configuration options for each instance of SQL Server to avoid system-wide problems.

During startup, AWE reserves only a small portion of AWE-mapped memory. As additional AWE-mapped memory is required, the operating system dynamically allocates it to SQL Server. Similarly, if fewer resources are required, SQL Server can return AWE-mapped memory to the operating system for use by other processes or applications.

AWE Feature is not available in SQL Server 2012.

Even though the “awe enabled” feature is not available in 32-bit SQL Server 2012, you can still use the “locked pages” feature by assigning the “lock pages in memory” user right for the SQL Server startup account.



The virtual address space that is reserved for these allocations is determined by the memory_to_reserve configuration option. The default value that SQL Server uses is 256 MB. To override the default value, use the SQL Server -g startup parameter. This part of the virtual address space is also known as “Memory-To-Leave” or “non-Buffer Pool region.”

Because SQL Server 2012 has the new "any size" page allocator that handles allocations greater than 8 KB, the memory_to_reserve value does not include the multi-page allocations. Except for this change, everything else remains the same with this configuration option.



Sarabpreet Anand

12 Feb

Calculate Number of Leaves – excluding weekends

Hi Friends,

Sometime back I was answering questions on TechNet Forum when I saw an interesting and genuine problem.

The user wanted to calculate the number of leaves taken by an employee, excluding weekends. Calculating the number of leaves wasn't a big deal; all we need to do is use the DATEDIFF function. But excluding the weekends was something that forced me to think harder.

Then I started digging into this, and finally I was able to build the logic.

Here is the script:  (Updated)

Direct Link: (Updated)

It works pretty well

For the sake of simplicity I’m creating the table with just three columns:

  1. leave_start
  2. leave_end
  3. leave_working_days
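The full script is linked above; as a minimal illustration of the underlying arithmetic, a common DATEDIFF-based sketch (the dates here are made up, and this is a generic approach rather than necessarily the exact logic in the linked script) looks like this:

```sql
-- Working days between two dates, inclusive, excluding Saturdays
-- and Sundays. Assumes DATENAME(WEEKDAY, ...) returns English names.
DECLARE @leave_start date = '2012-02-06';  -- a Monday
DECLARE @leave_end   date = '2012-02-17';  -- the Friday of the next week

SELECT (DATEDIFF(DAY,  @leave_start, @leave_end) + 1)
     - (DATEDIFF(WEEK, @leave_start, @leave_end) * 2)
     - CASE WHEN DATENAME(WEEKDAY, @leave_start) = 'Sunday'   THEN 1 ELSE 0 END
     - CASE WHEN DATENAME(WEEKDAY, @leave_end)   = 'Saturday' THEN 1 ELSE 0 END
       AS leave_working_days;  -- 10 for this range
```

The DATEDIFF(WEEK, ...) term counts the weekends crossed between the two dates, and the two CASE terms correct for a range that starts on a Sunday or ends on a Saturday.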

Here is the output:


Do let me know your way to tackle this problem – by way of leaving a comment.



Sarabpreet Anand

2 Feb

Trace Flag 2371 to Control Auto update Statistics threshold and behaviour in SQL Server

Hi Friends,

As you already know, the auto update statistics feature of SQL Server relies on the number of rows changed or updated to determine whether a statistics update is needed. The statistics on a table are only automatically updated if the number of rows changed exceeds a threshold.

AutoStat Threshold

The sysindexes.rowmodctr column maintains a running total of all modifications to a table that, over time, can adversely affect the query processor’s decision making process. This counter is updated each time any of the following events occurs:

  • A single row insert is made.
  • A single row delete is made.
  • An update to an indexed column is made.

NOTE: TRUNCATE TABLE does not update rowmodctr.

The basic algorithm for auto update statistics is:

  • If the cardinality of a table is less than six and the table is in the tempdb database, auto update with every six modifications to the table.
  • If the cardinality of a table is greater than 6 but less than or equal to 500, update statistics every 500 modifications.
  • If the cardinality of a table is greater than 500, update statistics when (500 + 20 percent of the table) changes have occurred.
  • For table variables, cardinality changes do not trigger auto update statistics.

NOTE: In the strictest sense, SQL Server counts cardinality as the number of rows in the table.

NOTE: In addition to cardinality, the selectivity of the predicate also affects AutoStats generation. This means that statistics may not be updated after every 500 modifications if cardinality were < 500 or for every 20% of changes if cardinality were > 500.

Problem with VLDBs \ Large Tables

When a table becomes very large, the old threshold (a fixed rate: 20% of rows changed) may be too high, and the autostat process may not be triggered frequently enough. This can lead to potential performance problems. SQL Server 2008 R2 Service Pack 1 and later versions introduce trace flag 2371, which you can enable to change this default behaviour. The higher the number of rows in a table, the lower the threshold becomes for triggering an update of the statistics.

For example, if the trace flag is activated, update statistics will be triggered on a table with 1 billion rows when 1 million changes occur. If the trace flag is not activated, then the same table with 1 billion records would need 200 million changes before an update statistics is triggered.

You can use the following guidance for enabling the new trace flag in your environment:

  1. If you have not observed performance issues due to outdated statistics, there is no need to enable this trace flag.
  2. If you are on SAP systems, enable this trace flag.  Refer to this blog
  3. If you have to rely on nightly job to update statistics because current automatic update is not triggered frequently enough, consider enabling SQL Server trace flag 2371 to reduce the threshold.
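If you do decide to enable it, the two usual routes are a run-time DBCC command or a startup parameter; a short sketch:

```sql
-- Enable trace flag 2371 globally on the running instance
-- (this does not persist across a restart):
DBCC TRACEON (2371, -1);

-- Check whether the flag is active:
DBCC TRACESTATUS (2371);

-- To make it permanent, add -T2371 to the SQL Server startup
-- parameters (via SQL Server Configuration Manager) and restart
-- the instance.
```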

sys.dm_db_stats_properties DMV

Starting with SQL Server 2008 R2 SP2 and SQL Server 2012 Service Pack 1, a new DMV, sys.dm_db_stats_properties, is introduced to more accurately track the number of rows changed in a table. You can use this DMV to detect the number of rows changed in a table and decide whether you wish to update statistics manually.

The DMV sys.dm_db_stats_properties (Transact-SQL) returns properties of statistics for the specified database object (table or indexed view) in the current SQL Server database.

The DMV takes two parameters, object_id and stats_id, both of type int. Syntax: sys.dm_db_stats_properties (object_id, stats_id)

object_id: the ID of the object in the current database for which properties of one of its statistics are requested.

stats_id: the ID of statistics for the specified object_id. The statistics ID can be obtained from the sys.stats catalog view.

The DMV returns a number of columns; the columns which require our attention are last_updated and modification_counter.

  1. last_updated tells us the last time the stats for the given table\index were updated.
  2. modification_counter reflects the total number of modifications for the leading statistics column (the column on which the histogram is built) since the last time statistics were updated.
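A sketch that puts the DMV together with sys.stats for a single table (the table name dbo.Sales is hypothetical):

```sql
-- List every statistics object on dbo.Sales (a hypothetical table),
-- with its last update time and modification counter.
SELECT  s.name AS stats_name,
        sp.last_updated,
        sp.modification_counter,
        sp.rows
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID('dbo.Sales');
```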

System Center Advisor checks for statistics information using this DMV and raises an alert if the column modification counter is more than 1000.

Hope this new trace flag and DMV help you resolve the performance issues related to obsolete stats.

Do let me know your take on this – just leave a comment.



Sarabpreet Anand

28 Jan

Find Jobs scheduled to run during a specific time range

Hi Friends,

Today I was going to raise a change related to a SAN migration for one of our production servers. The affected server would not be available for almost 3 hours. I prepared the plan and asked Mr. X to look into it and execute the change during the weekend. Mr. X happily agreed, and I asked him to go through the plan and clear up any doubts.

After reviewing the plan he asked multiple questions, which I answered promptly. His last question was: "Sarab, you've mentioned that we need to execute manually the daily jobs which are scheduled to run between 2:30 AM and 4:30 AM CST. But there are 48-odd jobs on this server; do you want me to check each and every job manually?"

This question by Mr. X forced me to complete and document this long-pending query (which I always wanted to document somewhere but kept putting off due to laziness).

So I wrote the complete query & documented it so that everyone can make use of this query.

Here is the link for the query: http://gallery.technet.microsoft.com/Find-Jobs-scheduled-to-run-d565fdfa/file/68524/1/Find%20Jobs%20scheduled%20to%20run%20during%20a%20specific%20time%20range.sql

The logic is pretty simple; the only challenge I faced was converting the time and date, which are stored in a non-standard format.

To convert the time, I created a function (also available with the script) which can change the time from 15500 to 01:55:00, which is much more readable.
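The conversion itself can be sketched with STUFF: pad the integer to six digits and insert the colons. (This is a generic approach, not necessarily the exact function from the linked script.)

```sql
-- Convert msdb's integer schedule time format (e.g. 15500,
-- meaning 01:55:00) into a readable hh:mm:ss string.
DECLARE @t int = 15500;

SELECT STUFF(STUFF(RIGHT('000000' + CAST(@t AS varchar(6)), 6),
                   5, 0, ':'),
             3, 0, ':') AS readable_time;  -- 01:55:00
```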

This query gives you the capability to filter\find the enabled jobs on the server which are scheduled to be executed within a certain time range.

You can also tweak the query to include the date parameter in where clause.

Here is the example output:


Do let me know your inputs on the same. – Just leave a quick comment.



Sarabpreet Anand