UPDATED SQL Server 2005 Bug Alert: Data Loss in Merge Replication: Page 2

A previous DevX article chronicled the data loss that the SQL Server 2005 merge replication engine can cause. Microsoft has since opened an active, although unpublished, bug for it. Find out how this bug manifests itself in real-world scenarios.





Reproducing the Data-Loss Scenario
This section explains how to reproduce two data-loss scenarios: Repro 1 and Repro 2. Repro 1 addresses the scenario described in the previous section (bug number 442076). Repro 2 addresses the data loss that occurs on a table where the join filters are in the direct order of PK-FK (not an active bug).

Repro 1
The relevant partition parameters for Repro 1 are as follows:

@keep_partition_changes=N'false' @use_partition_groups = N'false' @compatibility_level=N'90RTM'
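These parameters are arguments of sp_addmergepublication in the publication setup script. A minimal sketch of how they might appear there (the publication name is a placeholder, and note that the compatibility level is set through the stored procedure's @publication_compatibility_level parameter):

```sql
-- Hypothetical excerpt of the publication setup script; run in the
-- publication database after replication has been enabled for it.
EXEC sp_addmergepublication
    @publication                     = N'TestMergePublication',  -- placeholder
    @keep_partition_changes          = N'false',
    @use_partition_groups            = N'false',
    @publication_compatibility_level = N'90RTM';
```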

Replication loses data in the Order table: rows that were filtered out of the subscriber's partition during one synchronization are lost at the publisher on the next.

Take the following steps to reproduce this data loss (download the scripts for Repro 1 here):

  1. Run the script Create Publication Database.sql.
  2. Run the script Insertions at the Publication Database.sql.
  3. Run the script Create Test Merge Publication.sql.
  4. Run the script Create Subscriber Database.sql.
  5. Run the Snapshot Agent. Note: If a distributor has not yet been configured for the publisher, configure one first.
  6. Create the subscription at the subscriber (Create Subscription.sql).
  7. Replicate (you can use Windows XP Synchronization Manager if you're using SQLE subscribers).
  8. Run the script First Batch Update (@Subscriber).sql.
  9. Replicate again. (This will make sure the replication works as expected.)
  10. Run the script Second Batch Update (@Subscriber).sql.
  11. Replicate again.

Data loss occurs: the records updated in Step 8 (First Batch Update (@Subscriber).sql) are gone from the publication database after Step 11, and no conflict is recorded.
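One way to confirm both symptoms at the publisher, sketched under the assumption that the publication database is the current database and the filtered article is the Order table from the repro:

```sql
-- Run at the publisher after Step 11. Comparing the row count taken
-- before Step 10 with the count now makes the lost rows visible:
SELECT COUNT(*) AS OrderRows FROM [Order];

-- Merge replication records conflicts in MSmerge_conflicts_info in the
-- publication database. For this bug the table stays empty: the rows
-- disappear silently instead of being reported as conflicts.
SELECT * FROM dbo.MSmerge_conflicts_info;
```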

Repro 2
The relevant partition parameters for Repro 2 are as follows:

@keep_partition_changes=N'false' @use_partition_groups = N'false' @compatibility_level=N'90RTM'

Some factors to note about this scenario are:

  • The enumeration of changes does not account for the deletions in the ApprProperty table at the publisher.
  • The filter joins in this case are in direct order of PK-FK (The Comment table has a FK to the Order table).
  • The hierarchy is as follows:
    1. The table User is filtered based on username. The PK on this table is UserId.
    2. The table Order has a FK to User (UserId). The PK on this table is OrderId.
    3. The table Comment has a FK to Order (OrderId).
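The hierarchy above can be sketched as DDL together with a join filter in the direct PK-FK order. This is a hypothetical minimal schema: column types, the Username column's role in the filter, and the publication name are assumptions; only the table names, keys, and FK relationships come from the article.

```sql
-- Hypothetical minimal schema matching the described hierarchy.
CREATE TABLE [User] (
    UserId   INT           NOT NULL PRIMARY KEY,
    Username NVARCHAR(128) NOT NULL  -- assumed filtering column
);

CREATE TABLE [Order] (
    OrderId INT NOT NULL PRIMARY KEY,
    UserId  INT NOT NULL REFERENCES [User] (UserId)
);

CREATE TABLE Comment (
    CommentId INT NOT NULL PRIMARY KEY,
    OrderId   INT NOT NULL REFERENCES [Order] (OrderId)
);

-- A join filter in direct PK-FK order between User and Order
-- (publication and filter names are placeholders):
EXEC sp_addmergefilter
    @publication       = N'TestPublication',
    @article           = N'Order',
    @filter_name       = N'User_Order',
    @join_articlename  = N'User',
    @join_filterclause = N'[User].[UserId] = [Order].[UserId]',
    @join_unique_key   = 1;
```

An analogous sp_addmergefilter call would join Comment to Order on OrderId, completing the PK-FK chain that triggers the second scenario.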

Take the following steps to reproduce the Repro 2 data loss (download the scripts for Repro 2 here):

  1. Run the script Create Publication Database.sql.
  2. Run the script Insertions (@Publication).sql.
  3. Run the script Create Publication.sql.
  4. Run the Snapshot Agent.
  5. Run the script Create Subscriber Database.sql.
  6. Create the subscription (Create Subscription.sql).
  7. Replicate.
  8. Run the script First Insertions in Comments (@Subscriber).sql.
  9. Replicate again. (This will make sure the replication works as expected.)
  10. Run the script First Update_Order (@Subscriber).sql. (This update changes the Order rows that the filter depends on.)
  11. Replicate. (The filter works properly and all comments are there.)
  12. Replicate again.

The Comment table loses data after Step 12: it should contain three comments, but the comments inserted in Step 8 (First Insertions in Comments (@Subscriber).sql) are lost.
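A quick way to see the loss, assuming the Comment table name from the repro schema:

```sql
-- Run at the publisher (and at the subscriber) after Step 12.
SELECT COUNT(*) AS CommentCount FROM Comment;
-- The count should be 3, but the comments inserted by
-- First Insertions in Comments (@Subscriber).sql are missing.
```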

This information should be useful to DBAs and database developers who use merge replication systems with data filtering in SQL Server 2005.

Lizet Pena de Sola has eight years of experience in software development. She's an MCAD and MCTS in SQL Server 2005. She currently works as technical lead at Computer Aid Canada and is involved in the development of SmartClient applications.