Tag Archives: SQLServer

Compare Data in Two SQL Server Tables without Any 3rd-Party Tool

To compare data in tables you don't need any 3rd-party tool; SQL Server ships with the tablediff utility, which can compare the data in two tables. It is also useful for troubleshooting replication. The utility can be run from the command prompt or in a batch file to perform the following tasks:

  • Perform a row-by-row comparison between a source table in an instance of Microsoft SQL Server acting as a replication Publisher and the destination table at one or more instances of SQL Server acting as replication Subscribers.
  • Perform a fast comparison by only comparing row counts and schema.
  • Perform column-level comparisons.
  • Generate a Transact-SQL script to fix discrepancies at the destination server to bring the source and destination tables into convergence.
  • Log results to an output file or into a table in the destination database.

Let's see how to use it…

For testing, I've created two tables, Table1 and Table2, in DatabaseA and DatabaseB respectively on my local instance, and then inserted a few records into each table.

--On ServerA

create database DatabaseA
go
use DatabaseA
go
create table Table1(
tid int primary key,
tname varchar(20))
go
insert into Table1 values(1,'Sarab')
insert into Table1 values(13,'Amit B')
insert into Table1 values(18,'Amit K')
insert into Table1 values(21,'Sachin')

--On ServerB (I've created this on my local instance; however, you can create it on another server\instance)

create database DatabaseB
go
use DatabaseB
go
create table Table2(
tid int primary key,
tname varchar(20))
go
insert into Table2 values(1,'Sarab')
insert into Table2 values(13,'Amit B')
insert into Table2 values(12,'Rishu')
insert into Table2 values(18,'MV Priyank')

The source table in the comparison must contain at least one primary key, identity, or ROWGUID column.

Now if you query both tables, you can see that I've purposely inserted two identical records in both tables (tid 1 and tid 13, marked blue in the screenshot) and used the same primary key with different tname values (tid 18, marked red in the screenshot) to show how the tablediff utility handles these scenarios.

1_SQL_Server_Compare_the_Data_in_Two_Tables_without_Any_3rd_Party_Tool

 

Location of the Utility File:
The TableDiff.exe Utility can be found at this location: C:\Program Files\Microsoft SQL Server\100\COM

Note: If you are using SQL Server 2005, replace 100 with 90.

To start, launch a command prompt, navigate to the directory containing the tablediff.exe utility, and execute the command below:

TableDiff.exe -SourceServer . -SourceDatabase DatabaseA -SourceTable Table1 -DestinationServer . -DestinationDatabase DatabaseB -DestinationTable Table2 -et difft1 -f d:\my_Diff_File.sql

2_SQL_Server_Compare_the_Data_in_Two_Tables_without_Any_3rd_Party_Tool

 

Parameters used:

Most of the parameters are self-explanatory; the less obvious ones are explained below:

-et table_name
Specifies the name of the result table to create. If this table already exists, -DT must be used or the operation will fail.

-f [ file_name ]
Generates a Transact-SQL script to bring the table at the destination server into convergence with the table at the source server. You can optionally specify a name and path for the generated Transact-SQL script file. If file_name is not specified, the Transact-SQL script file is generated in the directory where the utility runs.
To get the complete list of all supported parameters visit: http://msdn.microsoft.com/en-us/library/ms162843.aspx

When the -et parameter is used, the result table is created in the destination database. It lists every row where the data did not match based on the primary (or unique) key, together with the script to resolve the conflict, and it also lists the rows that exist only at the source or only at the destination.
Here's a screenshot of the output:

3_SQL_Server_Compare_the_Data_in_Two_Tables_without_Any_3rd_Party_Tool
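To inspect the result table yourself, you can query it directly in the destination database. A minimal sketch, assuming the result table is named difft1 as in the command above:

use DatabaseB
go
select * from difft1
go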

 

The -f parameter generates a T-SQL script file named my_Diff_File.sql containing all the commands needed to resolve the conflicts. The output looks like this:

-- Host: .
-- Database: [databaseB]
-- Table: [dbo].[table2]

DELETE FROM [dbo].[table2] WHERE [tid] = 12
UPDATE [dbo].[table2] SET [tname]=N'Amit K' WHERE [tid] = 18
INSERT INTO [dbo].[table2] ([tid],[tname]) VALUES (21,N'Sachin')
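Running this generated script against DatabaseB (for example in SSMS or via sqlcmd) brings Table2 into convergence with Table1.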

Points to consider:

  • The tablediff utility cannot be used with non-SQL Server servers.
  • Tables with sql_variant data type columns are not supported.

Permissions Required:

  • To compare tables, you need SELECT ALL permissions on the table objects being compared.
  • To use the -et option, you must be a member of the db_owner fixed database role, or at least have CREATE TABLE permission in the subscription database and ALTER permission on the destination owner schema at the destination server.
  • To use the -dt option, you must be a member of the db_owner fixed database role, or at least have ALTER permission on the destination owner schema at the destination server.
  • To use the -o or -f options, you must have write permissions to the specified file directory location.

Security Note: When possible, supply security credentials at runtime. If you must store credentials in a script file, you should secure the file to prevent unauthorized access.

 

Have Fun!

Regards

Sarabpreet Anand

Introduction to SQL Server on Azure VM (IaaS)

SQL Server on top of a VM running in Azure is another example of Infrastructure-as-a-Service (IaaS). It is just like running SQL Server on a VM in your own data center; the only difference is that you don't have to own and maintain the hardware, since it is owned, hosted and maintained by Microsoft. The best part is that you still have the option to bring your own SQL Server license if you have one, meaning that if you already own a SQL Server license you can migrate it to Azure and use it in the cloud. If you use this model, you'll be charged only for compute and storage costs.

Even if you don't have a SQL Server license, don't worry: you can select a pre-installed SQL Server VM image from the gallery while creating your VM, and you'll be charged for compute, storage and the SQL Server license. In other words, you pay only for what you use, and billing is on a per-minute basis.

Just like any other service, you get multiple service tiers, each with different capabilities and SLAs. I'll discuss this in more detail in later blogs.

When you start using SQL Server on an Azure VM, it is the same on-premises SQL Server product (the edition and version of your choice) with the same features and capabilities; you don't have to compromise on any feature.

You have multiple ways of migrating data from on-premises to SQL Server running on an Azure VM.

The icing on the cake is the option to configure an AlwaysOn Availability Group using multiple VMs in a virtual network.

Hope you enjoyed the post, feel free to leave a comment. 🙂

Thanks,
Sarabpreet Singh Anand
Subscribe now to get latest Tips\blog posts in your Inbox

Why does the size of the backup file change?

Hi Friends,

Have you ever noticed that while taking a compressed backup, the file that gets created initially has a different size than the final file when the backup completes? To be more precise, the file that gets created initially allocates more space than the final backup file size.

 

But the question is why?

The size of the backup file depends upon the compression settings applied while taking the backup. Since SQL Server is designed for maximum performance, it calculates an estimated size for the backup and initially allocates space based on an algorithm, to avoid frequent file growth; this is why the size changes.

 

Now the question arises: does this happen every time?

The answer is no; when no compression settings are used, SQL Server can estimate the size of the backup accurately based on the size of the database.

 

So what algorithm does the SQL engine use to decide the initial backup size?

The size is estimated from the number of allocated extents in the data files; this should be close to the reserved column in the output of sp_spaceused. In the case of compressed backups, the final size depends upon the compression ratio, which in turn depends on how well the data can be compressed. In this case SQL Server creates the initial file at roughly one third of the reserved size of the database. If at any point during the backup more space is required, the file is extended, and if the final size turns out to be smaller than the initial allocation (which will be the case most of the time), SQL Server trims the backup file down to the actual used size.

For example: the database size is 50 GB and the target backup size calculated by the algorithm comes out to be 17 GB, so the initial size of the backup file would be somewhere around 17 GB. Because of compression, the final size comes to only 10 GB, so the file gets trimmed to 10 GB before the lock is released.
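If you want to observe this yourself, you can compare the reserved size reported by sp_spaceused with the size of the finished compressed backup file. A minimal sketch, assuming a database named DatabaseA, a local backup folder and an edition that supports backup compression (all of these are just placeholders):

-- Check the reserved size of the database (the basis for the initial estimate)
use DatabaseA
go
exec sp_spaceused
go

-- Take a compressed backup, then compare the final file size on disk
backup database DatabaseA
to disk = 'D:\Backup\DatabaseA_Compressed.bak'
with compression, stats = 10
go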

 

Regards

Sarabpreet Anand

Memory Configuration Changes | SQL Server 2012

Hi Friends,

There are a few changes to the memory manager in SQL Server 2012, and a DBA should be aware of them before upgrading to or installing a SQL Server 2012 instance in production.

In earlier versions of SQL Server (before SQL Server 2012), SQL Server could also consume memory outside of the limit defined by max server memory. For those who don't know, earlier versions allocated memory mainly in two ways: single-page and multi-page. Single-page allocations were drawn from within the limit defined by the min and max server memory settings; the main consumer of this type of memory is the buffer pool. Multi-page allocations could consume memory outside that limit, from what is commonly thought of as memory left available for the OS.

SQL Server 2012 has a new page allocator that manages both single-page and multi-page allocations (less than 8 KB and greater than 8 KB allocation requests). Therefore, there is no separate categorization that is called “Multi-Page allocations” in SQL Server 2012.

 

Changes to “max & min server memory (MB)”

In earlier versions of SQL Server (2005, 2008, 2008 R2), the following configuration options determined the limits of physical memory that the buffer pool consumed. Notice that we are talking about physical memory in this case, physical memory that is committed by the SQL Server database engine process:

  • max server memory (MB)
  • min server memory (MB)

This configuration option typically included only memory allocations that were less than or equal to 8 KB in the SQL Server process. These allocations were also referred to as "single-page allocations" because the SQL Server memory manager used a page size of 8 KB. This configuration did not include the following memory allocation requests:

  • Multi-Page allocations from SQL Server: These are allocations that request more than 8 KB.
  • CLR allocations: These allocations include the SQL CLR heaps and its global allocations that are created during CLR initialization.
  • Memory allocations for thread stacks in the SQL Server process.
  • Memory allocation requests made directly to Windows: These include Windows heap usage and direct virtual allocations made by modules that are loaded into the SQL Server process. Examples of such memory allocation requests include allocations from extended stored procedure DLLs, objects that are created by using Automation procedures (sp_OA calls), and allocations from linked server providers.

 

Starting with SQL Server 2012, Multi-Page allocations and CLR allocations are also included in memory limits that are controlled by max server memory (MB) and min server memory (MB).

Carefully review your current max server memory (MB) and min server memory (MB) values after you upgrade to SQL Server 2012. You should review such values because SQL Server 2012 now includes and accounts for more memory allocations compared to earlier versions. These changes apply to both 32-bit and 64-bit versions of SQL Server 2012.
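To review (and, if required, adjust) these values after the upgrade, you can use sp_configure. A minimal sketch; the 16384 MB figure is only an illustrative value, not a recommendation:

-- make the memory settings visible
exec sp_configure 'show advanced options', 1
reconfigure
go

-- review the current values
exec sp_configure 'min server memory (MB)'
exec sp_configure 'max server memory (MB)'
go

-- example only: set max server memory to 16 GB after reviewing the workload
exec sp_configure 'max server memory (MB)', 16384
reconfigure
go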

The following table indicates whether a specific type of memory allocation is controlled by the max server memory (MB) and min server memory (MB) configuration options.

 

Type of memory allocation | SQL Server 2005, SQL Server 2008, SQL Server 2008 R2 | SQL Server 2012
Single-page allocations | Yes | Yes, consolidated into "any size" page allocations
Multi-page allocations | No | Yes, consolidated into "any size" page allocations
CLR allocations | No | Yes
Thread stacks memory | No | No
Direct allocations from Windows | No | No

 

Changes to “memory_to_reserve”

Because SQL Server 2012 has the new “any size” page allocator that handles allocations greater than 8 KB, the memory_to_reserve value does not include the multi-page allocations. Except for this change, everything else remains the same with this configuration option.

AWE Feature is not available in SQL Server 2012

http://www.sarabpreet.com/2016/02/sql-server-awe-enabled-option-deprecated/1509

Even though the “awe enabled” feature is not available in 32-bit SQL Server 2012, you can still use the “locked pages” feature by assigning the “lock pages in memory” user right for the SQL Server startup account.

Carefully review the memory requirements of the existing instance of SQL Server before you upgrade to SQL Server 2012.

Do let us know your thoughts on this by leaving a comment.

Regards

Sarabpreet Anand

Which trigger will be triggered first?

Hi Friends,

Yesterday I was working on a specific client requirement to create a DDL trigger on a database to log and validate certain things. While doing so I discovered there were already two DDL triggers created on the database for the same event. I wasn't sure which one would be fired first, and after searching MSDN I came across a stored procedure that can control this behaviour to a certain extent.

sp_settriggerorder is a system stored procedure that lets you specify which of the AFTER triggers should be fired first and which last. Unfortunately you can't control the complete sequence, but this much is more than enough to handle most requirements.

This system stored procedure accepts four parameters:

@triggername: the name of the trigger; include the schema name wherever applicable. As stated earlier, INSTEAD OF triggers are not accepted.

@order: accepts one of three values ('First', 'Last', 'None'); specify 'None' if you don't wish to set any order for this trigger.

@stmttype (statement type): the event type for which the trigger has been created. You can't specify an event group.

@namespace: describes the scope of the trigger; the valid values are 'DATABASE', 'SERVER' and NULL, for a database-scoped DDL trigger, a server-scoped DDL trigger and a DML trigger respectively.

Here is an example where I am configuring the sales.zone_target trigger to be the last trigger fired for the INSERT statement:

exec sp_settriggerorder
@triggername = 'sales.zone_target',
@order = 'Last',
@stmttype = 'INSERT';
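For DDL triggers the @namespace parameter comes into play as well. Here is a minimal sketch using a hypothetical database-scoped trigger name and event type:

-- hypothetical example: fire trg_log_ddl first for CREATE_TABLE events in this database
exec sp_settriggerorder
@triggername = 'trg_log_ddl',
@order = 'First',
@stmttype = 'CREATE_TABLE',
@namespace = 'DATABASE';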

Caveat: there are a few things you need to consider before using this stored procedure.

  1. Replication automatically generates a First trigger on a table when an immediate-updating or queued-updating subscription is configured, so if you have already defined a First trigger on a table, you will not be able to add that table to replication. To make it work, either alter the replication trigger or change the order of the other trigger on the object.
  2. There can be only one First and one Last trigger for each statement on a single table.
  3. If an ALTER TRIGGER statement changes a first or last trigger, the First or Last attribute originally set on the trigger is dropped, and the value is replaced by None. The order value must be reset by using sp_settriggerorder.
  4. If the same trigger must be designated as the first or last order for more than one statement type, sp_settriggerorder must be executed for each statement type. Also, the trigger must be first defined for a statement type before it can be designated as the First or Last trigger to fire for that statement type.
  5. The order of execution of DDL triggers that exist on the same event is as follows:
  • The server-level trigger marked First.
  • Other server-level triggers.
  • The server-level trigger marked Last.
  • The database-level trigger marked First.
  • Other database-level triggers.
  • The database-level trigger marked Last.

Hope this will help you, do leave us comments & let us know how we are doing.

 

 

Regards

Sarabpreet Anand

Calculate Number of Leaves – excluding weekends

Hi Friends,

Some time back I was answering questions on the TechNet forum when I saw an interesting and genuine problem.

The user wanted to calculate the number of leave days taken by an employee, excluding weekends. Calculating the number of days wasn't a big deal; all we need is the DATEDIFF function. Excluding the weekends, however, was something that forced me to think harder.

Then I started digging into this and finally worked out the logic.

Here is the script:  (Updated)

Direct Link: (Updated)

It works pretty well

For the sake of simplicity I’m creating the table with just three columns:

  1. leave_start
  2. leave_end
  3. leave_working_days

Here is the output:

1_SQL_Server_T-SQL_Calculate_Number_of_Leaves_excluding_weekends
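In case you just want the idea without opening the script, here is a minimal sketch of one possible approach (not necessarily the original script): generate every date in the leave range and count only those that don't fall on a Saturday or Sunday. The sample dates are placeholders, and the weekday names assume an English language setting:

declare @leave_start date = '2014-01-06',
        @leave_end   date = '2014-01-15'

-- count every date in the range whose weekday name is not Saturday or Sunday
select count(*) as leave_working_days
from (
    select dateadd(day, n.number, @leave_start) as d
    from master.dbo.spt_values as n
    where n.type = 'P'
      and n.number <= datediff(day, @leave_start, @leave_end)
) as all_days
where datename(weekday, d) not in ('Saturday', 'Sunday')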

Do let me know your way of tackling this problem by leaving a comment.

Enjoy!

Regards

Sarabpreet Anand

SQLServer Day 11th January 2014 | Gurgaon

Hi Friends,

SQLServerGeeks.com cordially invites you to SQL Server Day, your monthly dose of SQL Server knowledge and learning. This time we are covering both Dev and ITPro audiences with two different topics, so it is yet another opportunity to learn.

The sessions are "Optimizing from the Field: Parallelism" and "SQL Server Separation of Duties – Security Redefined". The event is hosted at the Microsoft premises in Gurgaon so that your experience is fabulous.

Register fast and block your seat; it is absolutely free. Also, forward this to your friends and colleagues to spread awareness.

Note: The event will start sharp at 10:00 am. Please arrive by 9:30 am for registration (due to security policies, registration takes time; please bring a valid photo ID and a print of your ticket). Refreshments will be served.

To Register visit: http://www.eventbrite.com/e/sql-server-day-january-11-2014-saturday-at-microsoft-gurgaon-tickets-10041425173

Sarabpreet Singh Anand SQLServerDay 11 January 2014 gurgaon

See you at the event!

MVP Community Day \ PowerShell Day 9th Nov’2013 | Gurgaon

Dear Friends,

Community_Day_9_nov_2013_Sarab

This Saturday (9th Nov 2013) the South East Asia MVP community is celebrating MVP Community Day, a day dedicated to knowledge sharing and learning.

 

On this very day, the New Delhi PowerShell User Group is organizing an event in Gurgaon, and luckily my session "Managing SQL Server using PowerShell" has been selected. Aman Dhally (PowerShell MVP) is also presenting a session on managing Windows event logs using PowerShell.

This event is for all those who want to learn PowerShell and know more about its integration with event logs and more. Please use the link below to register for the event.

Register here : https://www.eventbrite.ie/event/9156263631

Session Details
Time | Session Topic | Presenter
9:30 – 10:00 AM | Start of Registrations |
10:00 – 11:00 AM | Managing SQL Server using PowerShell | Sarabpreet Singh Anand (MVP SQL)
11:00 – 11:15 AM | Tea Break |
11:15 AM – 12:15 PM | Managing Windows Event Logs using PowerShell | Aman Dhally (MVP PowerShell)

 

See you at the event.

Regards
Sarabpreet Singh Anand & Aman Dhally
http://newdelhipowershellusergroup.blogspot.in/