Tag Archives: sql

Compare Data in Two SQL Server Tables without Any 3rd-Party Tool

You don't need any 3rd-party tool to compare data in tables: SQL Server ships with the tablediff utility, which can compare the data in two tables and is typically used for troubleshooting replication. The utility can be run from the command prompt or in a batch file to perform the following tasks:

  • Perform a row-by-row comparison between a source table in an instance of Microsoft SQL Server acting as a replication Publisher and the destination table at one or more instances of SQL Server acting as replication Subscribers.
  • Perform a fast comparison by only comparing row counts and schema.
  • Perform column-level comparisons.
  • Generate a Transact-SQL script to fix discrepancies at the destination server to bring the source and destination tables into convergence.
  • Log results to an output file or into a table in the destination database.

Let's see how to use it…

For testing, I've created two tables, Table1 and Table2, in DatabaseA and DatabaseB respectively on my local instance, then inserted a few records into each table.

--On ServerA
create database DatabaseA
go
use DatabaseA
go
create table Table1(
tid int primary key,
tname varchar(20))
go
insert into Table1 values(1,'Sarab')
insert into Table1 values(13,'Amit B')
insert into Table1 values(18,'Amit K')
insert into Table1 values(21,'Sachin')

--On ServerB (I've created this on my local instance; however, you can create it on another server\instance)
create database DatabaseB
go
use DatabaseB
go
create table Table2(
tid int primary key,
tname varchar(20))
go
insert into Table2 values(1,'Sarab')
insert into Table2 values(13,'Amit B')
insert into Table2 values(12,'Rishu')
insert into Table2 values(18,'MV Priyank')

The source table in the comparison must contain at least one primary key, identity, or ROWGUID column.

Now if you query both tables, you can see that I've purposely inserted two identical records in both tables (tid 1 and tid 13, marked blue in the screenshot) and used the same primary key with different tname values (tid 18, marked red in the screenshot), to show how the tablediff utility handles these scenarios.

[Screenshot: the data in Table1 and Table2]
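If you'd rather check the data yourself than rely on the screenshot, both databases live on the same local instance here, so one batch shows the overlap:

select * from DatabaseA.dbo.Table1
select * from DatabaseB.dbo.Table2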

 

Location of the Utility File:
The tablediff.exe utility can be found at this location: C:\Program Files\Microsoft SQL Server\100\COM

Note: in case you are using SQL Server 2005, replace 100 with 90.

To get started, launch a command prompt, navigate to the directory containing the tablediff.exe utility, and execute the command below:

TableDiff.exe -SourceServer . -SourceDatabase DatabaseA -SourceTable Table1 -DestinationServer . -DestinationDatabase DatabaseB -DestinationTable Table2 -et difft1 -f d:\my_Diff_File.sql

[Screenshot: executing tablediff from the command prompt]
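If you only need a fast sanity check rather than a full row-by-row diff, the -q switch performs the quick comparison mentioned earlier, comparing only row counts and schema:

TableDiff.exe -SourceServer . -SourceDatabase DatabaseA -SourceTable Table1 -DestinationServer . -DestinationDatabase DatabaseB -DestinationTable Table2 -q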

 

Parameters used:

Most of the parameters are self-explanatory; the less obvious ones are explained below:

-et table_name
Specifies the name of the result table to create. If this table already exists, -dt must be used or the operation will fail.

-f [ file_name ]
Generates a Transact-SQL script to bring the table at the destination server into convergence with the table at the source server. You can optionally specify a name and path for the generated Transact-SQL script file. If file_name is not specified, the Transact-SQL script file is generated in the directory where the utility runs.

To get the complete list of all supported parameters, visit: http://msdn.microsoft.com/en-us/library/ms162843.aspx

When the -et parameter is used, the result table is created in the destination database. It contains every row whose data is mismatched between the two tables based on the primary (or unique) key, along with the script needed to resolve the conflict, and it also lists the rows that are present only at the source or only at the destination.
Here’s a screenshot of the output:

[Screenshot: contents of the difft1 result table]
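You can also inspect the result table yourself by querying it in the destination database (difft1 is the name we passed to -et above):

use DatabaseB
go
select * from difft1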

 

The -f parameter generates a file named my_Diff_File.sql containing the T-SQL commands needed to resolve all the conflicts. The output looks like this:

-- Host: .
-- Database: [databaseB]
-- Table: [dbo].[table2]
DELETE FROM [dbo].[table2] WHERE [tid] = 12
UPDATE [dbo].[table2] SET [tname]=N'Amit K' WHERE [tid] = 18
INSERT INTO [dbo].[table2] ([tid],[tname]) VALUES (21,N'Sachin')

Points to consider:

  • The tablediff utility cannot be used with non-SQL Server servers.
  • Tables with sql_variant data type columns are not supported.

Permissions Required:

  • To compare tables, you need SELECT ALL permissions on the table objects being compared.
  • To use the -et option, you must be a member of the db_owner fixed database role, or at least have CREATE TABLE permission in the subscription database and ALTER permission on the destination owner schema at the destination server.
  • To use the -dt option, you must be a member of the db_owner fixed database role, or at least have ALTER permission on the destination owner schema at the destination server.
  • To use the -o or -f options, you must have write permissions to the specified file directory location.

Security Note: When possible, supply security credentials at runtime. If you must store credentials in a script file, you should secure the file to prevent unauthorized access.

 

Have Fun!

Regards

Sarabpreet Anand

What is SQLAzure (Azure SQL Database)

SQLAzure, or Azure SQL Database, is a cloud-based database-as-a-service offering on Microsoft Azure. It provides a fully managed relational database solution which is highly compatible with the current tools used to manage SQL Server, and it offers built-in high availability and scalability that can grow with your requirements. You no longer have to worry about the infrastructure or its management; just create a database and start using its features. That means rapid development, and at the same time it accelerates your time to market.

Let me walk you through a couple of features and offerings which will help you understand it better:

SQLAzure DB is available in Basic, Standard and Premium tiers. Each tier offers its own level of performance and capabilities, supporting everything from the tiniest workloads to highly transactional database workloads. The amazing part is that you can change service tiers as your database requirements grow, without any downtime; isn't it amazing! This makes the most sense in scenarios where you can predict peak and off-peak usage: based on the demand, you can turn the knobs and manage cost.

And if you can't predict the actual usage, or the usage pattern is not well defined, and you don't want to waste resources or increase cost by over-provisioning, then don't you worry: elastic database pools are there to help you out.

When it comes to measuring performance, you can very well do that in SQLAzure: the unit of measure is the Database Transaction Unit (DTU), which represents the relative power of a database to complete a database transaction.

You get all of this without compromising anything on security; yes, like on-premises SQL Server, SQLAzure provides solid database security, from limiting access to protecting data at a granular level. This gives you peace of mind, and you can put your most confidential data in SQLAzure without any doubt.

And how can we forget to mention the SLA (Service Level Agreement) and uptime? Like any other service vendor, Microsoft provides an SLA which describes its commitments for uptime and connectivity: Azure provides 99.9% availability for the Web/Business tiers and a 99.99% availability SLA for the Basic, Standard and Premium tiers, and each service tier has its own set of business continuity features to choose from.

To know more about SQLAzure stay tuned!

Feel free to leave a comment. 🙂

Thanks,
Sarabpreet Singh Anand
Subscribe now to get latest Tips\blog posts in your Inbox

Why the size of the backup file changes?

Hi Friends,

Have you ever noticed that while taking a compressed backup, the file which gets created initially has a different size than the final file when the backup completes? To be more precise, the file which gets created initially allocates more space than the final backup file size.

 

But the question is why?

The size of the backup file depends upon the compression settings applied while taking the backup. Since SQL Server is programmed for maximum performance, it calculates an estimated size for the backup and initially allocates space based on an algorithm, to avoid the frequent file growths that would otherwise accompany size changes.

 

Now the question arises: does this happen every time?

The answer is no; when we are not using any compression settings, SQL Server can estimate the size of the backup accurately, based on the size of the database.

 

Then what algorithm does the SQL engine use to decide the backup size?

The size is estimated from the number of allocated extents in the data files; this should be close to the reserved column in the output of sp_spaceused. In the case of compressed backups, the size depends upon the compression achieved, which in turn depends upon how compressible the data is. In this case SQL Server creates the initial file equal to one third of the reserved size of the database. If at any point during the backup more space is required, the file is extended; and if the final size turns out smaller than the initial size (which will be the case most of the time), SQL Server trims the backup file to the actual used size.
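You can look at the reserved figure yourself; a minimal sketch, assuming a database named MyDB (a placeholder name):

use MyDB
go
--The 'reserved' column approximates the allocated extents from which
--SQL Server estimates the initial backup file size.
exec sp_spaceused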

For example: the DB size is 50 GB, and the target backup size decided by the SQL algorithm comes out to be 17 GB, meaning the initial size of the backup file would be somewhere around 17 GB. But because of compression the final size comes to 10 GB, so the file gets trimmed before the lock is released, since the actual used size was just 10 GB.

 

Regards

Sarabpreet Anand

Which trigger will be triggered first?

Hi Friends,

Yesterday I was working on a specific client requirement to create a DDL trigger on a database to log and validate certain things. While doing so I discovered there were already two DDL triggers created on the DB for the same event. Now I was not sure which one would be triggered first, and after searching MSDN I came across a stored procedure which can control this behaviour to a certain extent.

sp_settriggerorder is a system stored procedure which can specify which AFTER triggers should be fired in first or last order. Unfortunately you can't control the complete sequence, but this much is more than enough to handle most requirements.

This system stored procedure accepts four parameters:

@triggername: the trigger name; you should mention the schema name wherever applicable. As noted above, INSTEAD OF triggers are not accepted.

@order: this parameter accepts one of three values ('First', 'Last', 'None'); specify None if you don't wish to set any sequence for this trigger.

@stmttype (statement type): specify here the event type for which the trigger has been created. You can't specify an event group.

@namespace: this parameter describes the scope of the trigger; the valid values are 'Database', 'Server' and NULL, defining database scope, server scope and a DML trigger respectively.

Here is an example where I am configuring the sales.zone_target trigger to be the last trigger fired for the INSERT statement:

Exec sp_settriggerorder
@triggername = 'sales.zone_target',
@order = 'Last',
@stmttype = 'Insert';
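To verify the setting took effect, you can check the trigger's object properties; a quick sanity check, assuming the sales.zone_target trigger from the example above exists:

--Returns 1 if the trigger is marked Last for INSERT statements on its table
select objectproperty(object_id('sales.zone_target'), 'ExecIsLastInsertTrigger');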

Caveat: there are a few things you need to consider before using this SP.

  1. Replication automatically generates a First trigger for a table if an immediate updating or queued updating subscription is configured, so if you've already defined a First trigger on a table, you won't be able to add that table to replication. To make it work, either alter the replication trigger or change the sequence of the other trigger on the object.
  2. There can be only one First and one Last trigger for each statement on a single table.
  3. If an ALTER TRIGGER statement changes a first or last trigger, the First or Last attribute originally set on the trigger is dropped, and the value is replaced by None. The order value must be reset by using sp_settriggerorder.
  4. If the same trigger must be designated as the first or last order for more than one statement type, sp_settriggerorder must be executed for each statement type. Also, the trigger must be first defined for a statement type before it can be designated as the First or Last trigger to fire for that statement type.
  5. The order of execution of DDL triggers that exist on the same event is as follows:
  • The server-level trigger marked First.
  • Other server-level triggers.
  • The server-level trigger marked Last.
  • The database-level trigger marked First.
  • Other database-level triggers.
  • The database-level trigger marked Last.

Hope this helps; do leave us a comment and let us know how we are doing.

 

 

Regards

Sarabpreet Anand

LepideAuditor Suite Review

 

Product: LepideAuditor Suite Version 15.0

LepideAuditor Suite offers a composite platform to audit and monitor changes to Active Directory, Group Policy Objects, Exchange Server, SQL Server, and SharePoint Server. The software ensures business continuity by monitoring all configuration changes within the IT environment. You get explicit answers to the Who, What, When and Where questions related to all changes, and it provides a single console to audit, report, review and alert on changes made to important network components. Let's review the SQL Server audit part of this tool.

It is a robust and amazing product which meets all your instance- and DB-level auditing requirements. Report scheduling, alerts and getting custom reports in your mailbox are a few of the out-of-the-box features which make this product outstanding.

 

1. Installation & Configuration

Installation is very easy, just like any other ordinary software. To start auditing you first have to register your SQL Server instance, configure a few settings, and install the monitoring agent.

Let's see how to register an instance and what parameters we can configure. After launching the product for the first time, go to the Settings tab and you'll get this dialogue box.

[Screenshot: the Settings tab]

Once you select SQL Server, you'll get a dialogue box to connect to the SQL instance you wish to monitor, and from the same console you'll be able to install the Lepide agent.

[Screenshots: connecting to the SQL instance and installing the Lepide agent]

Once the agent is installed properly you’ll get the configuration console to configure the monitoring scope of the tool. The screenshots below are self-explanatory.

[Screenshots: configuring the monitoring scope]

Once the agent is installed, you can configure the SQL instance where LepideAuditor Suite should keep all the audit logs, and you can provide the name of the DB.

[Screenshot: choosing the audit-log instance and database]

What I liked the most is the built-in feature to schedule the archival of audit data; this is optional, of course, but a great feature.

[Screenshot: scheduling archival of audit data]

 

What happens under the hood?

To make this kind of robust auditing possible, LepideAuditor Suite creates the following (a quick verification query follows the list):

a. A couple of stored procedures in your master database

b. A new database named "LepideSQLTracker" (customizable)

c. A SQL Agent job

d. A SQL Server login
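If you want to confirm these objects yourself, here's a minimal check (assuming the default LepideSQLTracker database name; the procedure names themselves will vary):

--Does the tracker database exist?
select name from sys.databases where name = 'LepideSQLTracker'

--List stored procedures in master; the ones added by the tool should appear here
select name from master.sys.procedures order by name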

 

2. Usage

Auditing is a basic need for every production environment, but in some cases it becomes mandatory due to regulatory and compliance requirements like SOX, HIPAA etc. LepideAuditor Suite for SQL Server not only handles all those compliance requirements, it also generates presentable reports for them.

 

3. Is it really required?

You must be wondering why we need a tool when you can do all this yourself with the help of in-built SQL features and some coding. Yes, you are correct, you can do so, but this tool gives you much more than the basic auditing features. There is virtually no way to skip the tracking; it gives you an option to generate reports in multiple formats; and you can schedule reports and get them right in your mailbox. You can even configure alerts on events. You'll realize the true power of this tool when you have more than just a couple of servers: in those situations it helps you manage auditing from a single console and empowers you to have a standard in all aspects, be it reporting, tracking and monitoring changes, auditing, or archiving of the logs.

This tool also provides a graphical representation of changes, and you get multiple filter options. Moreover, you get the actual T-SQL statements of the changes which were executed, along with the tracking and auditing records.

You can also create real-time monitoring alerts using the console. You don't even have to change anything or worry about having different versions of SQL Server in your environment, since the tool supports all SQL versions from SQL Server 2000 through SQL Server 2014.

Here are a couple of screenshots to show you how the console looks:

Dashboard: There is a default dashboard which shows 6 of the most common views, but you can also customize it based on your needs; moreover, you can add another dashboard.

[Screenshots: default and customized dashboards]

 

Alerts: You can schedule reports or create alerts to be delivered directly to your mailbox.

[Screenshot: alert configuration]

 

Audit Reports: There are tons of reports already available for you to extract.

[Screenshots: audit reports]

The UI is easy to use, and you can easily play with the columns and re-arrange them.

[Screenshot: rearranging report columns]

 

Live Feed: I am sure you must have noticed the Live Updates console by now; this is the coolest thing added in this version. In this console you get to see all changes in almost real time.

[Screenshot: the Live Updates console]

There are 6 different graphs visible at the main SQL instance level, and you can change the timeline for each graph individually to select whether you want to see changes made during the last day, week or month.

 

Nice and Presentable Reports

The tool lets you export the reports in different formats like CSV, PDF or HTML; apart from that, you can also export those nice-looking, presentable graphs as images.

 

Why is this a Robust Product?

Every tool has some bugs or loopholes, and malicious users make use of those loopholes to do the unexpected without being trapped; but with this tool there is no easy way out. I tested a couple of scenarios to see if I could temporarily stop the monitoring or find a way to hide my activities, but the tool was able to track each and every little activity. Here are the scenarios I tested:

a. Stopped the SQL Agent Service.

b. Disabled the default trace.

c. Stopped the monitoring itself from the Lepide console.

 

I personally recommend this tool.

You can download the trial version from here….

Thanks,
Sarabpreet Singh

Backup Encryption | Part-3

In Part-1 we discussed the backup encryption feature, and Part-2 walked us through the scripts to prepare the pre-requisites and take an encrypted backup; in Part-2 we took the backup of the Repro_FGDN database on the MBTEST server.

We took the encrypted backup using a certificate as the encryptor and used the AES_256 encryption algorithm.

In this part we’ll try and restore the same encrypted backup on another server – FGDNPROD.

 

--Before you restore the encrypted backup on the destination server, you must create the encryptor. In Part-2 we created a certificate to act as the encryptor; that certificate was encrypted with a DMK, so we must first restore the DMK before creating the certificate.
RESTORE MASTER KEY
FROM FILE = 'C:\backup\SQL_Prod_Repro_F.key'
DECRYPTION BY PASSWORD = 'alseImRtjSnuEiheYf8b'
ENCRYPTION BY PASSWORD = 'alseImRtjSnuEiheYf8b22e';

 

--You must open the DMK before using it; once the Database Master Key is open, it stays open for the rest of the session.
OPEN MASTER KEY
DECRYPTION BY PASSWORD = 'alseImRtjSnuEiheYf8b22e'

 

--Create the certificate
CREATE CERTIFICATE FGDN_DMK
FROM FILE = 'C:\backup\FGDN_DMK_certificate.cer'
WITH PRIVATE KEY
        (
                FILE = 'C:\backup\FGDN_DMK_certificate_private_key.key'
                , DECRYPTION BY PASSWORD = 'abcalseImRtjSnuEiheYf8b'
        )

--Close the DMK
CLOSE MASTER KEY
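To confirm the certificate was created on the destination server, a quick sanity check against sys.certificates:

--The certificate should now exist along with its private key
select name, pvt_key_encryption_type_desc from sys.certificates where name = 'FGDN_DMK'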

 

Restoring the service master key on the destination server is not required.

--Restore encrypted Database backup
Restore database repro_FGDN
from disk = 'C:\backup\Repro_FGDN_22.bak'

 

Finally, after creating the encryptor, you will be able to restore the encrypted backup; refer to the screenshot, where I was able to restore the encrypted backup on another server named FGDNPROD.

[Screenshot: restore completed on FGDNPROD without the service master key]

 

Thanks,
Sarabpreet Singh

Backup Encryption | Part-1

We'd been asking for this feature for a long time, and it was one of the main reasons why third-party tools were being used; they were minting money like anything. Thankfully Microsoft introduced this feature in SQL Server 2014: now SQL Server can encrypt the data at runtime while creating a backup, producing an encrypted backup file.

The best part is that all types of storage destination, be it on-premises or Windows Azure Storage, are fully supported. In addition to that, you can also use this feature if you are using SQL Server Managed Backup to Windows Azure. I know what you are thinking: "you are still using an old version and you have to rely on these third-party tools". The good news is that, if you want, you can use backup encryption with older SQL versions too (like SQL Server 2005, 2008 and 2008 R2), but only if you plan to take backups to Azure using the SQL Server Backup to Windows Azure Tool. Read more about this feature here…

 

Benefits

1. This helps in securing the data: SQL Server encrypts the data at the time of creating the backup, so there is no need to change anything on the application front.

2. You can also use backup encryption for TDE-enabled databases.

3. Since this is also supported with SQL Server Managed Backup to Windows Azure, it gives added security for off-site backups.

4. You've got multiple encryption algorithms to select from to meet your requirements.

5. You can integrate encryption keys with Extended Key Management (EKM) providers.

 

Backup

To make use of backup encryption, all you need to do is provide an encryptor (which can be a certificate or an asymmetric key) and choose an encryption algorithm. SQL Server provides four encryption algorithms to choose from: AES 128, AES 192, AES 256 and Triple DES.
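Here's what that looks like in T-SQL; a minimal sketch, assuming a database named MyDB and an existing certificate named MyBackupCert (both placeholder names):

--Take a compressed, encrypted backup using a certificate as the encryptor
BACKUP DATABASE MyDB
TO DISK = 'C:\backup\MyDB_encrypted.bak'
WITH COMPRESSION,
ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);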

 

Restoring

During restore you don't have to specify any encryption parameters. However, the certificate or asymmetric key used to encrypt the backup file must be available on the instance you are restoring to, and the user account performing the restore must have VIEW DEFINITION permission on that certificate or key. If the DB you are restoring is already configured for TDE, the TDE certificate must also be available on the instance you are restoring to.
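In other words, the restore statement itself is just a normal restore; a sketch using the same placeholder names as above:

--No encryption options are needed; SQL Server locates the certificate by its thumbprint
RESTORE DATABASE MyDB
FROM DISK = 'C:\backup\MyDB_encrypted.bak';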

 

Encryptor

You must create a Database Master Key (DMK) before creating a certificate or asymmetric key. A DMK is a symmetric key that is used to protect the private keys of the certificates and asymmetric keys that are present in the database.

Once a DMK is created you can create a certificate or asymmetric key to use for backup encryption.
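A minimal sketch of those two steps, run in the master database (the password and certificate name are placeholders):

USE master;
GO
--The DMK protects the private keys of certificates in this database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<a strong password>';
GO
--The certificate that will serve as the backup encryptor
CREATE CERTIFICATE MyBackupCert WITH SUBJECT = 'Backup encryption certificate';
GO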

 

Restrictions

1. When encrypting backups with an asymmetric key, remember that only asymmetric keys residing in an EKM provider are supported.

2. Two SQL Server editions (SQL Express and SQL Server Web) do not support encryption during backup. Having said that, restoring from an encrypted backup to these editions is fully supported.

3. Also note that earlier versions of SQL Server cannot read encrypted backups.

4. The option to append to an existing backup set is also not supported when you are encrypting the backup.

It is very important to back up the certificate or asymmetric key, preferably to a different location than the backup file it was used to encrypt. Without the certificate or asymmetric key, you cannot restore the backup, rendering the backup file unusable.

 

Permissions

To encrypt a backup, or restore from an encrypted one, you need VIEW DEFINITION permission on the certificate or asymmetric key that was used to encrypt the database backup. However, access to the TDE certificate is not required to back up or restore a TDE-protected database.

Thanks,
Sarabpreet Singh

SQL Tip-12 | Using SQL Server Backup to Windows Azure Tool – Part2

In the last part we discussed the installation, configuration and usage of the SQL Server Backup to Windows Azure Tool.

In this part we’ll discuss some internals of this tool and how to restore the backups which were moved to Azure storage.

As we discussed in the last part, this tool creates a login with the name SQLBackup2Azure; let's diagnose it further. The tool creates this login as a low-privilege account which is used to run the service. It also creates a group called "TempGroup", and the SQLBackup2Azure login gets membership of this group. The names of the group and the login are hardcoded and can't be changed; however, the password of the login is generated automatically and randomly, and it is not known to anyone, including Microsoft.

When you add rules in this tool, this group gets read/write access on the specified location; that is how the tool is able to take backups and restore them.

The biggest disadvantage of this tool is that it provides no mechanism to track the uploaded files and no error-logging information. You don't even have any control over the schedule of the backup uploads, since a service keeps checking the configured locations for new files.

 

Restore a Database from the Backup file which was uploaded using this tool

This tool creates a stub file with some metadata to use during restore. You can use the stub file just like a regular backup file when you want to restore a database: SQL Server automatically reads the metadata information in the file and restores the database by getting the required file from the Azure account. Internet connectivity is a must, since the stub file contains only the metadata information about the backup file; everything else is in the Azure Storage account.
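So the restore is just a normal RESTORE statement pointed at the stub file; a minimal sketch, with a hypothetical database name and stub path:

--The stub file stands in for the real backup, which lives in Azure Storage
RESTORE DATABASE MyDB
FROM DISK = 'C:\backup\MyDB.bak'  --path of the stub file left by the tool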

 

Stub File

This tool uploads the specified backups from the file path mentioned in the rule; once a backup file gets uploaded, the tool creates a stub file for it and leaves it in the local storage folder, with the metadata information to be used during restore.

 

What if the stub file is inaccessible or deleted by mistake?

A copy of the same stub file gets uploaded along with the backup to your Azure Storage account. If for any reason the local stub file is inaccessible, you can download the copy of the stub file and place it in the location which is configured for this tool.

 

Some known issues:

1. In the event of a loss of network connectivity, SQL Server will surface the error as not being able to write to the device. You will need to clean up the files relating to this backup operation (if any) and retry the backup.

2. If you want to create a backup using the tool, you must create a new backup file instead of overwriting or appending to an existing backup file.

 

Thanks,
Sarabpreet Singh