Tuesday, October 22, 2013

DEAD LOCK

What is a DEAD LOCK in SQL Server?
A deadlock occurs when two or more transactions conflict in such a way that each is waiting for the other to release a lock before it can proceed. For instance, Transaction A holds a lock on Record 1 while trying to write to Record 2, while Transaction B holds a lock on Record 2 while trying to write to Record 1. The two transactions would wait for each other forever unless the deadlock were resolved; SQL Server resolves it by choosing one transaction as the deadlock victim and rolling it back.
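A minimal way to reproduce this (a hypothetical example; the table, columns, and values are made up) is to run two sessions side by side:

-- Setup (hypothetical table)
CREATE TABLE dbo.Accounts (Id INT PRIMARY KEY, Balance MONEY);
INSERT INTO dbo.Accounts VALUES (1, 100), (2, 100);

-- Session 1
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE Id = 1;
-- wait a few seconds, then run:
UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE Id = 2;

-- Session 2 (run while Session 1's transaction is still open)
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE Id = 2;
UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE Id = 1;

-- One of the two sessions is chosen as the deadlock victim and receives
-- error 1205; the other session completes and can COMMIT.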
How to Keep track of DEADLOCK in SQL Server ?
How to TRACE DEADLOCK in SQL Server ?
By default, SQL Server does not keep track of deadlocks; as DBAs we need to ask SQL Server to record deadlock information in the SQL Server error log.
When placing such a request to SQL Server, we need to specify:
  • What to capture?
  • Where to capture it?
  • The scope: for which sessions should it be captured?
In this case, we want to capture deadlock information to the SQL Server error log.

To trace all deadlock information to the SQL Server error log with detailed information, run the following statement. This setting lasts only while SQL Server is running; once the service is restarted, you need to enable it again.

DBCC TRACEON (3605,1204,1222,-1)

/* 
-- where 
-- What to Capture?
--------------------
  -- (1204) = Capture Deadlock Events. 
  -- (1222) = Capture Deadlock Events with detailed information (works only with SQL Server 2005 and later versions) 
-- Where to capture ?
-------------------
   -- (3605) = Capture the selected Events in the SQL Server error log. 
-- What is the Scope of Capture 
--------------------------------
    -- (-1) = Switches on the specified trace flags globally, that is, for all users/sessions
*/
This deadlock tracking works only while SQL Server is running; once the service is restarted, you need to enable it again. If you want to enable it permanently, then either:
  • Create a SQL Agent job containing this code and schedule it to run when the Agent starts, or
  • Enable these trace flags as SQL Server startup parameters, by specifying -T1222; -T3605; in the startup parameters.
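You can verify which trace flags are currently active (for example, after a service restart) with DBCC TRACESTATUS:

DBCC TRACESTATUS (-1)
-- Lists all globally enabled trace flags; 1204, 1222 and 3605
-- should appear while deadlock tracing is switched on.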

Best Practice – SQL Server Error Log


  1. Ensure your error log directory is backed up regularly, with your Windows OS backup.
  2. Increase the number of SQL Server error logs from the default value of six.
  3. Schedule a job to recycle the error log on a regular basis, to keep the log size under control; large files take longer to read.
  4. Check the SQL Server error log on a daily basis, as all important/critical messages and warnings are logged there.
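The recycle job in point 3 can use the documented sp_cycle_errorlog procedure; a sketch of the step a SQL Agent job would run:

-- Closes the current error log file and starts a new one,
-- just as happens when the SQL Server service restarts.
EXEC sp_cycle_errorlog;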

TRUNCATE V/s DELETE

Truncate vs. Delete

  • TRUNCATE is a DDL command; DELETE is a DML command.
  • TRUNCATE TABLE always locks the table and its pages, but not each row; a DELETE statement is executed using row locks, so each row in the table is locked for deletion.
  • TRUNCATE cannot use a WHERE condition; with DELETE we can specify filters in a WHERE clause.
  • TRUNCATE removes all the data; DELETE removes only the specified data when a WHERE condition exists.
  • TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions; DELETE activates triggers because the row deletions are logged individually.
  • TRUNCATE is faster because it is minimally logged in the transaction log; DELETE is slower because it maintains a log entry for every record.
  • TRUNCATE drops the object's statistics, resets markers such as the high-water mark, frees the extents, and leaves the object truly empty with only the first extent (zero data pages are left in the table); DELETE keeps the object's statistics and all allocated space, so after a DELETE statement the table can still contain empty pages.
  • TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log; DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
  • If the table contains an identity column, TRUNCATE resets the counter for that column to the seed value defined for it; DELETE retains the identity counter value.
Restrictions on using the TRUNCATE statement
You cannot use TRUNCATE TABLE on tables that:
1. Are referenced by a FOREIGN KEY constraint.
2. Participate in an indexed view.
3. Are published by using transactional replication or merge replication.
DELETE works at row level, so row-level constraints apply.
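A small sketch of the identity-reset difference (the table name is made up):

CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1), Name VARCHAR(50));
INSERT INTO dbo.Demo (Name) VALUES ('a'), ('b');   -- rows get Id 1 and 2

DELETE FROM dbo.Demo;                              -- identity counter is kept
INSERT INTO dbo.Demo (Name) VALUES ('c');          -- this row gets Id = 3

TRUNCATE TABLE dbo.Demo;                           -- identity counter is reset
INSERT INTO dbo.Demo (Name) VALUES ('d');          -- this row gets Id = 1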

Monday, October 7, 2013

Database Mirroring Enhancements in SQL Server 2008 from 2005




Database mirroring is an alternative high-availability solution to failover clustering in SQL Server Enterprise. Database mirroring supports automatic failover, but does not require cluster-capable hardware, and can therefore provide a cost-effective alternative to failover clustering.
This article is not focused on making you understand the database mirroring concept; rather, it focuses on the enhancements made to database mirroring in SQL Server 2008.
·         Automatic Page Repair on Mirror Servers
o    If a page on the principal or mirror server is corrupt, it is automatically replaced with the corresponding copy from its partner
o    Some page types cannot be automatically repaired:
§  File header pages
§  Database boot page
§  Allocation pages
o    I/O errors on the principal server may be fixed during the mirroring session
o    I/O errors on the mirror server require the mirroring session to be suspended
·         Compressed Data flow
o    The data flow between the principal and mirror server is now compressed to improve performance
·         Manual Failover
o    Manual failover no longer requires a database restart
·         Log Performance
o    Write-ahead on the incoming log stream on the mirror server
o    Improved use of log-send buffers
o    Page read-ahead during the undo phase after a failover
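For reference, a manual failover is issued on the principal with the documented ALTER DATABASE option (the database name below is a placeholder):

-- Run on the principal server; the mirror becomes the new principal.
ALTER DATABASE YourMirroredDB SET PARTNER FAILOVER;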
Database mirroring enables you to maintain two copies of a database. One copy is the principal server that client computers access. The other copy acts as a standby server. In the case of a failure of the principal server, the client computers can use the failover capability to connect to the standby computer with no loss of data. Mirroring can therefore increase the availability of a database and provide data protection. SQL Server 2008 offers several enhancements to the database-mirroring environment, including the following:

·         Page-level mirroring. This replaces corrupt pages on one server with the same page on the partner server.
·         Compressed data flow. This provides improved performance and reduces the network bandwidth that database mirroring uses.
·         Manual failover. This no longer requires a restart of the database.
·         Log performance. This has the following improvements:
o    Write-ahead on the incoming log stream on the mirror server. This writes the incoming log records to disk asynchronously.
o    Improved use of log-send buffers. If the most recently used log cache contains enough free space for the current log records, they are appended to that log cache.
o    Page read-ahead during the undo phase after a failover. The new mirror server sends read-ahead hints to the principal server and the principal server puts those pages in its send buffer. This process improves the speed of the undo phase.
Before SQL Server 2008, data restore could occur at the file level only. A corrupt page may require a failover and then a restore of the file that contains the corrupted data. This procedure is expensive due to the resources and time that are used, and because more data is replaced than was actually corrupted.
SQL Server 2008 provides recovery at the page level. The database-mirroring environment automatically replaces corrupt pages on one server with the same pages from the partner server. The process does not require user intervention and does not interrupt the availability of the server. By using automatic page repair, the principal and mirror computers can recover from data page errors and from errors that prevent reading a data page.
If a mirror server finds a page with errors, it puts the mirroring session into the SUSPENDED state, logs the error, and then requests a copy of the page from the principal server. If the principal server can access the page, it sends a copy to the mirror server, which replaces the page and resumes the mirroring session. Otherwise, the mirroring session remains in a suspended state.
Automatic page repair is only available in SQL Server Enterprise. However, if you have one partner running on Enterprise and one on Standard, corrupt pages can be repaired on the Enterprise instance, but not on the Standard instance.
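Automatic page-repair attempts can be inspected through the sys.dm_db_mirroring_auto_page_repair dynamic management view, available from SQL Server 2008:

-- One row per automatic page-repair attempt on mirrored databases.
SELECT DB_NAME(database_id) AS database_name,
       file_id, page_id, error_type, page_status, modification_time
FROM sys.dm_db_mirroring_auto_page_repair;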


Monday, August 26, 2013

Log Shipping Fail over and Role Change


Fail Over to a Log Shipping Secondary (SQL Server)

Failing over to a log shipping secondary is useful if the primary server instance fails or requires maintenance.


Typically, the primary and secondary databases are unsynchronized, because the primary database continues to be updated after its latest backup job. Also, in some cases, recent transaction log backups have not been copied to the secondary server instances, or some copied log backups might still not have been applied to the secondary database. We recommend that you begin by synchronizing all of the secondary databases with the primary database, if possible.
For information about log shipping jobs, see About Log Shipping (SQL Server).




To fail over to a secondary database:
  1. Copy any uncopied backup files from the backup share to the copy destination folder of each secondary server.
  2. Apply any unapplied transaction log backups in sequence to each secondary database. For more information, see Apply Transaction Log Backups (SQL Server).
  3. If the primary database is accessible, back up the active transaction log and apply the log backup to the secondary databases.
If the original primary server instance is not damaged, back up the tail of the transaction log of the primary database using WITH NORECOVERY. This leaves the database in the restoring state and therefore unavailable to users. Eventually you will be able to roll this database forward by applying transaction log backups from the replacement primary database.
For more information, see Transaction Log Backups (SQL Server).
  4. After the secondary servers are synchronized, you can fail over to whichever one you prefer by recovering its secondary database and redirecting clients to that server instance. Recovering puts the database into a consistent state and brings it online. 
          Note:

When you make a secondary database available, you should ensure that its metadata is consistent with the metadata of the original primary database. For more information, see Manage Metadata When Making a Database Available on Another Server Instance (SQL Server)

  5. After you have recovered a secondary database, you can reconfigure it to act as a primary database for other secondary databases.
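As a sketch of steps 3 and 4 above (the database name and backup path are placeholders):

-- On the old primary (if accessible): back up the tail of the log,
-- leaving the database in the RESTORING state.
BACKUP LOG YourDB TO DISK = N'\\backupshare\YourDB_tail.trn' WITH NORECOVERY;

-- On the chosen secondary: apply the tail-log backup, then recover
-- the database to bring it online as the new primary.
RESTORE LOG YourDB FROM DISK = N'\\backupshare\YourDB_tail.trn' WITH NORECOVERY;
RESTORE DATABASE YourDB WITH RECOVERY;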


Login Management

The logins from the primary database must be defined on the secondary. The best time to accomplish login synchronization is not during an emergency; therefore, I recommend a regular Agent job to synchronize the logins between servers.
Failback
The secondary database has become the primary after failover. To fail back to the original primary database, the process is similar to the original log shipping configuration process:
1. Backup the database
2. Copy the backup to the instance
3. Restore the backup
4. Reconfigure Log Shipping

Change Roles Between Primary and Secondary Log Shipping Servers (SQL Server)


After you have failed over a SQL Server log shipping configuration to a secondary server, you can configure your secondary database to act as the primary database. Then, you will be able to swap primary and secondary databases as needed.




The first time you want to fail over to the secondary database and make it your new primary database, there is a series of steps you must take. After you have followed these initial steps, you will be able to swap roles between the primary database and the secondary database easily.
  1. Manually fail over from the primary database to a secondary database. Be sure to back up the active transaction log on your primary server with NORECOVERY. For more information, see Fail Over to a Log Shipping Secondary (SQL Server).
  2. Disable the log shipping backup job on the original primary server, and the copy and restore jobs on the original secondary server.
  3. On your secondary database (the database you want to be the new primary), configure log shipping using SQL Server Management Studio. For more information, see Configure Log Shipping (SQL Server). Include the following steps:
    1. Use the same share for creating backups that you created for the original primary server.
    2. When adding the secondary database, in the Secondary Database Settings dialog box, enter the name of the original primary database in the Secondary database box.
    3. In the Secondary Database Settings dialog box, select No, the secondary database is initialized.
  4. If log shipping monitoring was enabled on your former log shipping configuration, reconfigure log shipping monitoring to monitor the new log shipping configuration. Execute the following commands, replacing database_name with the name of your database:
    1. On the new primary server
Execute the following Transact-SQL statements:
-- Statement to execute on the new primary server
USE msdb
GO
EXEC master.dbo.sp_change_log_shipping_secondary_database @secondary_database = N'database_name', @threshold_alert_enabled = 0;
GO
    2. On the new secondary server
Execute the following Transact-SQL statements:
-- Statement to execute on the new secondary server
USE msdb
GO
EXEC master.dbo.sp_change_log_shipping_primary_database @database=N'database_name', @threshold_alert_enabled = 0;
GO


After you have completed the steps above for the initial role change, you can change roles between the primary database and the secondary database by following the steps in this section. To perform a role change, follow these general steps:
  1. Bring the secondary database online, backing up the transaction log on the primary server with NORECOVERY.
  2. Disable the log shipping backup job on the original primary server, and the copy and restore jobs on the original secondary server.
  3. Enable the log shipping backup job on the secondary server (the new primary server), and the copy and restore jobs on the primary server (the new secondary server).
Important:

When you change a secondary database to the primary database, to provide a consistent experience to users and applications, you might have to re-create some or all of the metadata for the database, such as logins and jobs, on the new primary server instance. For more information, see Manage Metadata When Making a Database Available on Another Server Instance (SQL Server)

Monday, August 5, 2013

Moving data between 32-bit and 64-bit SQL Server instances

Yes, you can move SQL Server data back and forth between x64, x86, and IA64 architectures. The data and log files themselves do not store anything that indicates the architecture, and they work the same on either 32-bit or 64-bit. The same applies to backup files. Given those facts, it becomes clear that we can easily move data between architectures. You can back up on x86 and restore to x64. Detach/attach works fine. Log shipping works because it is basically backup/restore with some scheduling. Mirroring and transactional replication take data from the transaction log and push it to another system, so again they work across architectures. Merge replication is basically just another application sitting on top of SQL Server; it moves data by reading tables in one location and modifying data in another. Again, this can all be done across architectures.
 
Hopefully you are not installing new x86 boxes; 64-bit handles memory so much better. If you have legacy x86 boxes, you can easily do a backup or detach from the old system and restore or attach on the new x64 instance. You can also reverse the process and copy data from x64 back to x86. The same logic applies to the other technologies listed above.
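As a sketch (the database name, paths, and logical file names below are placeholders), the cross-architecture move is a plain backup and restore:

-- On the old 32-bit (x86) instance
BACKUP DATABASE LegacyDB TO DISK = N'D:\Backups\LegacyDB.bak';

-- On the new 64-bit (x64) instance; the on-disk format is identical,
-- so no conversion step is needed.
RESTORE DATABASE LegacyDB FROM DISK = N'D:\Backups\LegacyDB.bak'
WITH MOVE N'LegacyDB'     TO N'E:\Data\LegacyDB.mdf',
     MOVE N'LegacyDB_log' TO N'E:\Data\LegacyDB_log.ldf';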
 
Per BOL (I used the SQL 2008 R2 version):
·         The SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments. Therefore, a database mirroring session can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.
·         Because the SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments, a replication topology can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.
·         The SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments. Therefore, a log shipping configuration can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.
 
If you're doing SAN level replication you'll need to talk to your SAN vendor about their support across platforms.
 
Some x64 info: