Sunday, June 12, 2011

Reindex job and update statistics

When the re-index job (weekly) and the update statistics job (daily) are on different schedules, make sure they do not end up running at the same time.

ALTER INDEX IX_test_CreatedOn ON dbo.test REORGANIZE
It will block the following:
UPDATE STATISTICS dbo.test IX_test_CreatedOn
Then the following statement will be blocked for a long period of time (for the duration of the re-indexing):
select top 1 * from test
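
To see this blocking chain while it happens, a query against the standard DMVs can help (illustrative, not part of the original post):

--sessions that are currently blocked and the session blocking them
select r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
from sys.dm_exec_requests as r
cross apply sys.dm_exec_sql_text(r.sql_handle) as t
where r.blocking_session_id <> 0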
 

Thursday, May 19, 2011

Error in SSMS when file size is greater than 2TB

When the size of a single database file is greater than 2TB, SSMS pops up an error when shrinking the file, and in the database properties dialog you cannot change the database file size. I think this is because SSMS stores the size in an integer data type, and the value is out of the integer range.
We can check the free space and change the database file size with a script instead:
select name,
    filename,
    convert(decimal(15,1), size * 8192.0 / 1048576) as FileSizeMB,
    convert(decimal(15,1), fileproperty(name, 'SpaceUsed') * 8192.0 / 1048576) as SpaceUsedMB,
    convert(decimal(15,1), (size - fileproperty(name, 'SpaceUsed')) * 8192.0 / 1048576) as FreeSpaceMB
from dbo.sysfiles

--change database size

ALTER DATABASE test MODIFY FILE ( NAME = N'test', SIZE = 2311100MB )
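
The shrink that fails in the SSMS dialog can also be issued from T-SQL; a minimal sketch, assuming the logical file name is test (the target size in MB is only an illustration):

--shrink the data file to a target size, specified in MB
DBCC SHRINKFILE (N'test', 2000000)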

Monday, May 16, 2011

Split partition function slow because of no index on partition column

In a sliding window setup, we want to move one month of data from the fast disk to a file group on the slow disk. If there is no index with the partition column as its leftmost column, splitting the partition function will be slow, and during the split it will lock the table. The solutions are:
1. Create an index on the partition column. But if the table is huge, it will take a long time to build, and it causes the transaction log file to grow very large.
2. The alternative is to switch out one more month of data into a staging table, so that two partitions (months) have no data. Merging and splitting these empty partitions is very fast (a quick check that they really are empty is shown below).
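
Before running the merge and split, it is worth confirming that the two partitions really are empty; an illustrative check (assuming the table is dbo.test):

select p.partition_number, p.rows
from sys.partitions as p
where p.object_id = OBJECT_ID('dbo.test')
    and p.index_id in (0, 1) --heap or clustered index
order by p.partition_number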

DECLARE @date DATETIME
SELECT @date = GETDATE()
DECLARE @partValue VARCHAR(10)
DECLARE @priorMonth VARCHAR(10)
SELECT @priorMonth = CONVERT(VARCHAR(25), DATEADD(mm, -7, DATEADD(dd, -(DAY(@date)) + 1 , @date)), 101)
SELECT @partValue = CONVERT(VARCHAR(25), DATEADD(mm, -6, DATEADD(dd, -(DAY(@date)) + 1 , @date)), 101)
--a separate script (not included here) moves the data from the fast disk into test_staging on the slow disk
DECLARE @partNum int
SELECT @partNum = $partition.testPF(@partValue)
DECLARE @priorPartNum int
SELECT @priorPartNum = $partition.testPF(@priorMonth)
--switch out data to temp table
ALTER TABLE dbo.test SWITCH PARTITION @partNum TO dbo.test_temp --fast disk
ALTER TABLE dbo.test SWITCH PARTITION @priorPartNum TO dbo.test_staging2 -- on slow disk

--merge and split these empty partitions
ALTER PARTITION FUNCTION [testPF] ()
MERGE RANGE (@partValue)

ALTER PARTITION SCHEME testPS
NEXT USED slowFG

ALTER PARTITION FUNCTION [testPF] ()
SPLIT RANGE (@partValue) --split the partition again by creating the new partition in the slow disk file group

--switch the data back
ALTER TABLE dbo.test_staging SWITCH TO dbo.test PARTITION @partNum --on slow disk, this is the actual partition that needs to be moved from the fast disk to the slow disk
ALTER TABLE dbo.test_staging2 SWITCH TO dbo.test PARTITION @priorPartNum --on slow disk, this one was switched out only to make the split fast

Move database file when the file location does not exist

When the database is moved from another location, attach the database as:
CREATE DATABASE [test] ON
( FILENAME = N'N:\Data\test.mdf' ),
( FILENAME = N'N:\Log\test.LDF' )
FOR ATTACH
GO

There is a secondary filegroup for this database, and its file path is still the original location, which does not exist on the new server. We cannot update or remove this secondary file:
ALTER DATABASE test MODIFY FILE (NAME = test_secondary, FILENAME = N'N:\data\test_secondary.ndf');

It will fail because the file group is pending recovery.

The workaround is to take the database offline first:
ALTER DATABASE test SET OFFLINE
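
A sketch of how the rest of the move could continue while the database is offline (this part is my assumption based on the usual file-move steps, not shown in the original post; it assumes test_secondary.ndf has been copied to N:\data\ first):

--while offline, repoint the secondary file, then bring the database back
ALTER DATABASE test MODIFY FILE (NAME = test_secondary, FILENAME = N'N:\data\test_secondary.ndf')
ALTER DATABASE test SET ONLINE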

Friday, May 13, 2011

Log space consumed in creating index

If the recovery model is FULL, creating an index will log the whole table (data + indexes) in the transaction log file, even if the new index is a non-clustered index. That surprised me; I expected it to log only the new index.
For example, we have a customer table with 30GB of data and three indexes totaling 10GB. For the following statement:
CREATE INDEX ix_date ON customer (create_date) WITH (ONLINE = ON)
The new index is 3GB, but the transaction log file will use 43GB.
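
While the CREATE INDEX runs, the log consumption can be watched from another session (illustrative checks, not necessarily how the numbers above were measured):

--log size and percent used for every database
DBCC SQLPERF(LOGSPACE)

--log bytes used by active transactions in the current database
select database_transaction_log_bytes_used
from sys.dm_tran_database_transactions
where database_id = DB_ID()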

Exec a command in hex

declare @x varchar(99)
set @x = 0x77616974666f722064656c61792027303a303a323027
exec (@x)

The command is: waitfor delay '0:0:20'
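
The hex can be produced and decoded with CONVERT (an illustrative round trip, not from the original post):

--string to hex
select convert(varbinary(99), 'waitfor delay ''0:0:20''')
--hex back to string
select convert(varchar(99), 0x77616974666f722064656c61792027303a303a323027)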

Sunday, May 1, 2011

DDL will block query having with (nolock)

DDL such as alter table T add a int not null default 0 will block the query select * from T with (nolock).
If table T has a large number of rows, the DDL will take a long time, and the query with (nolock) will be blocked waiting for a Sch-S lock.
So to reduce the lock time:
1. alter table T add a int null default 0
2. update T set a = 0 (in batches, as in the sketch below)
3. alter table T alter column a int not null --this step will still block the queries with (nolock)
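
A minimal sketch of step 2, assuming the new column is a on table T; the batch size of 10000 is only an illustration:

--update in small batches so each transaction stays short and locks are released quickly
while 1 = 1
begin
    update top (10000) T set a = 0 where a is null
    if @@rowcount = 0 break
end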

Monday, April 25, 2011

sp_executesql performance issue

declare @sql nvarchar(1024)
set @sql = N'select * from customer where lastName = @lastName'
exec sp_executesql @sql, N'@lastName nvarchar(80)', @lastName = N'Zz'

Suppose customer has an index on lastName and there are no rows with lastName = 'Zz'. SQL Server will choose an index seek on the table and cache the execution plan.

Then the value of lastName changes in the next execution, such as:
exec sp_executesql @sql, N'@lastName nvarchar(80)', @lastName = N'Smith'
Suppose customer has 1MM records with lastName = 'Smith'. The performance will be very bad because it reuses the cached plan with the index seek.

We can change it:
set @sql = @sql + N' option (recompile)'
exec sp_executesql @sql, N'@lastName nvarchar(80)', @lastName = N'Smith'
Then SQL Server will recompile the query and choose the primary key index scan, which is better for this value.
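
Putting the pieces together, a hedged end-to-end sketch (it assumes a customer table with a lastName column and a non-clustered index on lastName; the values and row counts are from the example above):

declare @sql nvarchar(1024)
set @sql = N'select * from customer where lastName = @lastName'

--first call: compiles and caches a plan for the rare value 'Zz' (index seek)
exec sp_executesql @sql, N'@lastName nvarchar(80)', @lastName = N'Zz'

--second call: reuses the seek plan even though 'Smith' matches about 1MM rows
exec sp_executesql @sql, N'@lastName nvarchar(80)', @lastName = N'Smith'

--workaround: request a fresh plan on every execution
set @sql = @sql + N' option (recompile)'
exec sp_executesql @sql, N'@lastName nvarchar(80)', @lastName = N'Smith'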

Thursday, April 21, 2011

distribution cleanup slow because of immediate_sync

The flag affects the following code:
if @has_immediate_sync = 0
    delete TOP(5000) MSrepl_transactions WITH (PAGLOCK)
    from MSrepl_transactions with (INDEX(ucMSrepl_transactions))
    where publisher_database_id = @publisher_database_id
        and xact_seqno <= @max_xact_seqno
        and xact_seqno <> @last_xact_seqno
        and xact_seqno <> @last_log_xact_seqno
        and xact_seqno <> @second_largest_log_xact_seqno --ensure at least two log entries are left, when there existed more than two log entries
    OPTION (MAXDOP 1)
else
    delete TOP(5000) MSrepl_transactions WITH (PAGLOCK)
    from MSrepl_transactions with (INDEX(ucMSrepl_transactions))
    where publisher_database_id = @publisher_database_id
        and xact_seqno <= @max_xact_seqno
        and xact_seqno <> @last_xact_seqno
        and xact_seqno <> @last_log_xact_seqno
        and xact_seqno <> @second_largest_log_xact_seqno --ensure at least two log entries are left, when there existed more than two log entries
        --use nolock to avoid deadlock
        and not exists (select * from MSrepl_commands c with (nolock)
                        where c.publisher_database_id = @publisher_database_id
                            and c.xact_seqno = MSrepl_transactions.xact_seqno
                            and c.xact_seqno <= @max_xact_seqno)
    OPTION (MAXDOP 1)

In [dbo].[sp_MSdelete_dodelete], when @has_immediate_sync is not 0 the delete also has to probe the table MSrepl_commands through the NOT EXISTS clause. That table had 160MM records, so the performance was very bad.

To check whether the flag is set to true:
select immediate_sync, * from distribution.dbo.MSpublications

If one publication in a database has the flag set to true, it affects all publications in the same database. To set the flag to false:
EXEC sp_changepublication @publication = 'Publication_name', @property = 'immediate_sync', @value = 'FALSE'

If the flag is true, the cleanup job only removes records older than the maximum retention, @max_distretention = 72:
EXEC dbo.sp_MSdistribution_cleanup @min_distretention = 0, @max_distretention = 72
So these two tables keep data for up to 72 hours.

This caused the table MSrepl_commands to have 160MM records, and the size of the distribution DB reached 70GB.
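
The size and row count of the table can be checked with sp_spaceused (an illustrative check, not from the original post):

use distribution
exec sp_spaceused N'MSrepl_commands'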

After setting the flag to false, the cleanup job removed replicated transactions and commands in distribution DB.
The "Distribution clean up: distribution" job takes from 1 hour to 20 seconds.
The records number in MSrepl_commands goes down to 2.6MM.
The size of distribution DB goes down to 2GB.

Wednesday, April 13, 2011

Replicate table with foreign key

Replicate a table with a foreign key that has "ON DELETE CASCADE":
ALTER TABLE T2 WITH CHECK ADD CONSTRAINT [FK_T2_PKT1] FOREIGN KEY (PKT1) REFERENCES T1 (PKT1) ON DELETE CASCADE

Replicate T1 and T2.

Delete from T1 where PKT1 = 1

On the distributor, the following commands will be distributed:
Delete from T1 where PKT1 = 1
Delete from T2 where PKT2 = 2 --possibly more commands, one for each record in T2 that references PKT1 = 1

It works perfectly.

But if I create a stored procedure (usp_delete_T1) to delete records from table T1 and replicate the execution of the SP, then running exec usp_delete_T1 @PKT1 = 1 on the publisher deletes records from both T1 and T2.
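
For illustration, usp_delete_T1 might look like this (a hypothetical definition; the original post does not show it):

CREATE PROCEDURE usp_delete_T1
    @PKT1 int
AS
BEGIN
    --the ON DELETE CASCADE on FK_T2_PKT1 removes the matching rows in T2 on the publisher
    DELETE FROM T1 WHERE PKT1 = @PKT1
END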

The distributor will send:
exec usp_delete_T1 @PKT1 = 1
to the subscriber.

The problem is that on the subscriber only the record in T1 is deleted; the records in T2 are not deleted.
So we cannot replicate the execution of usp_delete_T1.