
Finding and Eliminating Duplicate Data


During my years as a database developer, I have often faced the task of finding and deleting duplicate data in a database. Duplicates impact data accuracy and credibility, and they can also hurt database performance if they're numerous relative to the total number of rows in a table. The solution I found on Oracle Support Services' MetaLink Web site (described later) is inefficient and works extremely slowly with big data sets.



How do you delete duplicate data in a database?



Use a PL/SQL solution (a custom stored procedure) or a SQL solution (using the new analytic function RANK() with nested subqueries) to eliminate duplicates in a single SQL statement and control which rows to keep in the database.
What Is Duplicate Data?
By “duplicate data” I mean rows in a table that contain identical information in a field, or combination of fields, that’s supposed to be unique. For example, it could be the Social Security Number field, or Last Name and First Name fields. We call this a duplicate key. Most tables in a database nowadays have a primary key constraint, which maintains a unique value for each row. From a database point of view each row is unique; however, from a user point of view these rows are duplicates because they contain identical duplicate key (First Name + Last Name) values, even though their IDs differ:

ID   Last Name  First Name  City       Phone
---- ---------- ----------- ---------- ----------
1005 Krieger    Jeff        San Ramon  9252997100
1012 Krieger    Jeff        San Ramon  9252997100
1017 Krieger    Jeff        San Ramon  9252997100
How is duplicate data commonly created? Usually there are two processes that may lead to this situation:
  • Loading and merging data from different sources.

  • Entering data in the system via a graphical user interface, where the system generates a unique number for each row and assigns it as a primary key.
In both cases, a unique constraint is missing, which opens the gate to duplicate data.

How to Find Duplicate Data
Let’s create a table called Customers and populate it with data that intentionally contains some duplicates (see Listing 1). As you can see, Listing 1 does not contain any code that prevents entering duplicates. The code below creates a unique constraint on the LastName and FirstName fields, which you could use during the initial database design before data gets loaded into the table, in order to prevent entering duplicates into the database:

ALTER TABLE Customers
  ADD CONSTRAINT Customers_LastFirst
  UNIQUE (LastName, FirstName);
The duplicate key in the Customers table is the combination of LastName and FirstName. Let’s group data by duplicate key and count rows within each group:
SELECT LastName, FirstName, COUNT(*)
  FROM Customers
  GROUP BY LastName, FirstName
  ORDER BY LastName, FirstName;
Listing 2 shows the output of the above code. Three rows in the output have a count greater than 1, which means there are three groups of duplicates.

Let's select them using the HAVING clause, thus filtering out all "non-duplicate" data:

SELECT LastName, FirstName, COUNT(*)
  FROM Customers
  GROUP BY LastName, FirstName
  HAVING COUNT(*) > 1;
Listing 3 shows the output of the above code. However, these query results do not show the IDs that identify each row. Using the last query as a subquery inside an IN clause will do just that:
SELECT ID, LastName, FirstName
  FROM Customers
  WHERE (LastName, FirstName) IN
  (SELECT LastName, FirstName
      FROM Customers
      GROUP BY LastName, FirstName
      HAVING COUNT(*) > 1)
  ORDER BY LastName, FirstName;
Listing 4 shows the output of the above code. This query shows that there are three groups of duplicates with ten rows total. We want to keep the first row in each group (IDs 1005, 1009, and 1001) and delete the seven remaining rows (IDs 1012, 1017, 1010, 1011, 1016, 1019, and 1014).

A Solution from Oracle Support
Does Oracle have a solution to our duplicates problem? I found an article called "Common SQL*Plus Questions and Answers" (Doc ID: 2194.1) on Oracle's Support Services MetaLink Web site. It uses the Oracle aggregate function MIN() (or MAX()) to solve the problem.

MIN() allows you to select one row per group, duplicates and non-duplicates alike, so that you get a list of all the rows you want to keep:

SELECT MIN(ID) AS ID, LastName, FirstName
  FROM Customers
  GROUP BY LastName, FirstName;
Listing 5 shows the output of the above code.

Now you just need to delete rows that are not in this list, using the last query as a subquery inside an antijoin (the NOT IN clause):

DELETE FROM Customers
  WHERE ID NOT IN
  (SELECT MIN(ID)
      FROM Customers
      GROUP BY LastName, FirstName);
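To sanity-check the MIN()/NOT IN approach on a small scale, here is a hedged sketch using Python's sqlite3 module. The sample data is hypothetical (the article's Listing 1 data isn't reproduced here), but the DELETE statement is the same shape:

```python
import sqlite3

# Hypothetical miniature version of the Customers table, in an in-memory DB.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Customers (
    ID INTEGER PRIMARY KEY, LastName TEXT, FirstName TEXT)""")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?)", [
    (1005, "Krieger", "Jeff"), (1012, "Krieger", "Jeff"),
    (1017, "Krieger", "Jeff"), (1009, "Chen", "Amy"),
])

# Keep the row with the smallest ID per (LastName, FirstName) group and
# delete everything else -- the MIN()/NOT IN antijoin from the article.
conn.execute("""DELETE FROM Customers
    WHERE ID NOT IN (SELECT MIN(ID) FROM Customers
                     GROUP BY LastName, FirstName)""")

rows = sorted(conn.execute("SELECT ID FROM Customers"))
print(rows)  # → [(1005,), (1009,)]
```

On a toy data set this runs instantly, of course; the performance problem the article describes only shows up at scale.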
However, an antijoin with the NOT IN clause is an inefficient way to do this: in our case, two (!) full table scans must be performed to resolve the SQL statement, which leads to substantial performance loss on big data sets. For performance testing, I created a Customers data set with 500,000 rows and 45,000 duplicates (9 percent of the total). The above command ran for more than an hour with no results (other than exhausting my patience), so I killed the process.

Another disadvantage of this syntax is that you can't control which row per group of duplicates you keep in the database.

A PL/SQL Solution: Deleting Duplicate Data with a Stored Procedure
Let me give you an example of a PL/SQL stored procedure, called DeleteDuplicates (see Listing 6), that cleans up duplicates. The algorithm for this procedure is pretty straightforward:

  1. It selects the duplicate data in the cursor, sorted by duplicate key (LastName, FirstName in our case), as shown in Listing 4.

  2. It opens the cursor and fetches each row, one by one, in a loop.

  3. It compares the duplicate key value with the previously fetched one.

  4. If this is the first fetch, or the key value differs from the previous one, then that's the first row in a new group, so the procedure skips it and fetches the next row. Otherwise, it's a duplicate row within the same group, so the procedure deletes it.
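The steps above can be sketched in Python as a rough stand-in for the PL/SQL cursor loop in Listing 6; the data is hypothetical and sqlite3 replaces the Oracle cursor:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Customers (
    ID INTEGER PRIMARY KEY, LastName TEXT, FirstName TEXT)""")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?)", [
    (1005, "Krieger", "Jeff"), (1012, "Krieger", "Jeff"),
    (1009, "Chen", "Amy"), (1014, "Chen", "Amy"),
])

# Step 1: select only rows whose key appears more than once, sorted by key.
cursor = conn.execute("""
    SELECT ID, LastName, FirstName FROM Customers
    WHERE (LastName, FirstName) IN
          (SELECT LastName, FirstName FROM Customers
           GROUP BY LastName, FirstName HAVING COUNT(*) > 1)
    ORDER BY LastName, FirstName, ID""")

# Steps 2-4: walk the rows; the first row of each key group is kept,
# every subsequent row with the same key is deleted.
prev_key = None
for row_id, last, first in cursor.fetchall():
    if (last, first) == prev_key:   # duplicate within the same group
        conn.execute("DELETE FROM Customers WHERE ID = ?", (row_id,))
    prev_key = (last, first)

remaining = sorted(r[0] for r in conn.execute("SELECT ID FROM Customers"))
print(remaining)  # → [1005, 1009]
```

Note that the rows are fetched up front (fetchall) so the deletes don't interfere with the iteration, mirroring how the PL/SQL cursor is opened before the loop begins.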
Let’s run the stored procedure and check it against the Customers data:
BEGIN
  DeleteDuplicates;
END;
/

SELECT LastName, FirstName, COUNT(*)
  FROM Customers
  GROUP BY LastName, FirstName
  HAVING COUNT(*) > 1;
The last SELECT statement returns no rows, so the duplicates are gone.

The main job of extracting duplicates in this procedure is done by a SQL statement, which is defined in the csr_Duplicates cursor. The PL/SQL procedural code is used only to implement the logic of deleting all rows in the group except the first one. Could it all be done by one SQL statement?

A SQL Solution: Deleting Duplicate Data with a Single SQL Statement Using RANK()
The Oracle 8i analytic function RANK() allows you to rank each item in a group. (For more information about RANK(), see my 10-Minute Solution, "Performing Top-N Queries in Oracle.") In our case, we use it to dynamically assign sequential numbers within each group of duplicates, sorted by the primary key. With RANK(), grouping is specified in the PARTITION BY clause and the sort order for ranking is specified in the ORDER BY clause:
SELECT ID, LastName, FirstName,
  RANK() OVER (PARTITION BY LastName,
      FirstName ORDER BY ID) SeqNumber
  FROM Customers
  ORDER BY LastName, FirstName;
Listing 7 shows the output of the above query.

Bingo! Now, values in the SeqNumber column, assigned by RANK(), allow you to separate all duplicate rows (SeqNumber > 1) from non-duplicates (SeqNumber = 1) and retrieve only those rows you want to delete:

SELECT ID, LastName, FirstName
  FROM
  (SELECT ID, LastName, FirstName,
     RANK() OVER (PARTITION BY LastName,
         FirstName ORDER BY ID) AS SeqNumber
     FROM Customers)
  WHERE SeqNumber > 1;
Listing 8 shows the output of the above code. It contains seven duplicate rows that have to be deleted. I tested this code on the Customers data set with 500,000 rows total and 45,000 duplicates, and it took only 77 seconds to count the duplicates.

Now you are ready to delete the duplicates by issuing the SQL DELETE command. Here is the first version of it, which executed (for me) in 135 seconds:

DELETE
  FROM Customers
  WHERE ID IN
  (SELECT ID
      FROM
      (SELECT ID, LastName, FirstName,
         RANK() OVER (PARTITION BY LastName,
             FirstName ORDER BY ID) AS SeqNumber
         FROM Customers)
      WHERE SeqNumber > 1);
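This RANK()-based DELETE can be reproduced end to end in SQLite (which supports window functions as of version 3.25) through Python's sqlite3 module. The data below is a hypothetical stand-in for Listing 1, with two duplicate groups and one unique row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("""CREATE TABLE Customers (
    ID INTEGER PRIMARY KEY, LastName TEXT, FirstName TEXT)""")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?)", [
    (1005, "Krieger", "Jeff"), (1012, "Krieger", "Jeff"),
    (1017, "Krieger", "Jeff"),
    (1009, "Chen", "Amy"), (1014, "Chen", "Amy"),
    (1001, "Smith", "Anna"),
])

# RANK() numbers the rows inside each (LastName, FirstName) partition;
# everything with SeqNumber > 1 is a duplicate and gets deleted.
conn.execute("""DELETE FROM Customers
    WHERE ID IN
      (SELECT ID FROM
        (SELECT ID, RANK() OVER (PARTITION BY LastName, FirstName
                                 ORDER BY ID) AS SeqNumber
         FROM Customers)
       WHERE SeqNumber > 1)""")

survivors = sorted(r[0] for r in conn.execute("SELECT ID FROM Customers"))
print(survivors)  # → [1001, 1005, 1009]
```

The lowest ID in each group survives, exactly as the article's Oracle statement intends.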
You may notice that the last two statements rank all the rows in the table, which is inefficient. Let’s improve the last SQL SELECT statement by applying RANK() only to the groups of duplicates instead of all rows.

The following syntax is much more efficient, even though it’s not as concise as the last SELECT ID above:

SELECT ID, LastName, FirstName
  FROM
  (SELECT ID, LastName, FirstName,
     RANK() OVER (PARTITION BY LastName,
         FirstName ORDER BY ID) AS SeqNumber
     FROM
     (SELECT ID, LastName, FirstName
        FROM Customers
        WHERE (LastName, FirstName) IN
        (SELECT LastName, FirstName
           FROM Customers
           GROUP BY LastName, FirstName
           HAVING COUNT(*) > 1)))
  WHERE SeqNumber > 1;
Counting the duplicates now took only 26 seconds, which amounted to a 67 percent performance gain.

Here is the improved SQL DELETE statement, which uses the improved SELECT statement as a subquery:

DELETE
  FROM Customers
  WHERE ID IN
  (SELECT ID
      FROM
      (SELECT ID, LastName, FirstName,
         RANK() OVER (PARTITION BY LastName,
             FirstName ORDER BY ID) AS SeqNumber
         FROM
         (SELECT ID, LastName, FirstName
            FROM Customers
            WHERE (LastName, FirstName) IN
            (SELECT LastName, FirstName
               FROM Customers
               GROUP BY LastName, FirstName
               HAVING COUNT(*) > 1)))
      WHERE SeqNumber > 1);
Now it took only 47 seconds to find and delete 45,000 duplicates from 500,000 rows, compared to the 135 seconds it took in my first version of DELETE. That’s a significant performance gain (65 percent).

By comparison, the DeleteDuplicates stored procedure clocked in at 56 seconds, which is a little slower (19 percent) than just the SQL statement.

Deleting Duplicate Rows When There’s No Primary Key

 
Although it's a sign of bad database design, you may encounter a table with no primary key. In that case, you can use a variation of this technique to delete duplicate rows.
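The sidebar itself isn't reproduced in this article, but a common approach (shown here as a hedged sketch, not necessarily the sidebar's exact method) substitutes the database's physical row address for the missing key: Oracle's ROWID pseudocolumn, or SQLite's implicit rowid in this Python demo with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A table with no primary key at all; the duplicates are fully identical rows.
conn.execute("CREATE TABLE Customers (LastName TEXT, FirstName TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?, ?)", [
    ("Krieger", "Jeff"), ("Krieger", "Jeff"), ("Chen", "Amy"),
])

# With no ID column, the engine's row address stands in for it: ROWID in
# Oracle, or SQLite's implicit rowid used here. Keep the lowest per group.
conn.execute("""DELETE FROM Customers
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM Customers
                        GROUP BY LastName, FirstName)""")

remaining = sorted(conn.execute("SELECT LastName, FirstName FROM Customers"))
print(remaining)  # → [('Chen', 'Amy'), ('Krieger', 'Jeff')]
```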

Replacing the PL/SQL stored procedure with the single SQL statement gives you much more concise code and may improve performance, because there is no overhead from the PL/SQL-to-SQL context switch in the stored procedure. However, the performance comparison between the SQL statement and the PL/SQL procedure may vary, depending on the data set size and the percentage of duplicates. I would expect the PL/SQL procedure to catch up or even outperform the SQL statement when the number of duplicates is relatively small, say 1 to 3 percent of all rows in the table.

What if your table doesn’t have a primary key? You can use another technique as well (see sidebar).

RANK()’s Additional Capabilities
The RANK() function allows you to select the row per group of duplicates you want to keep. Let’s say you need to keep the latest (or earliest) record determined by the value in the RecDate field. In this case you just need to include RecDate in the ORDER BY clause of RANK() in order to sort duplicates within each group by RecDate in DESCending (or ASCending) order, and then by ID.

Here is the syntax for keeping the latest record per group:

DELETE
  FROM Customers
  WHERE ID IN
  (SELECT ID
      FROM
      (SELECT ID, LastName, FirstName,
         RANK() OVER (PARTITION BY LastName,
             FirstName ORDER BY RecDate DESC, ID) AS SeqNumber
         FROM
         (SELECT ID, LastName, FirstName, RecDate
            FROM Customers
            WHERE (LastName, FirstName) IN
            (SELECT LastName, FirstName
               FROM Customers
               GROUP BY LastName, FirstName
               HAVING COUNT(*) > 1)))
      WHERE SeqNumber > 1);
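A quick check of the keep-the-latest variant in SQLite via Python (hypothetical rows; the RecDate column and values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("""CREATE TABLE Customers (
    ID INTEGER PRIMARY KEY, LastName TEXT, FirstName TEXT, RecDate TEXT)""")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?, ?)", [
    (1005, "Krieger", "Jeff", "2003-01-15"),
    (1012, "Krieger", "Jeff", "2003-06-20"),  # latest record -> kept
    (1017, "Krieger", "Jeff", "2003-03-02"),
])

# Ranking by RecDate DESC (with ID as a tie-breaker) makes the newest
# record SeqNumber 1 in its group; everything else is deleted.
conn.execute("""DELETE FROM Customers
    WHERE ID IN
      (SELECT ID FROM
        (SELECT ID, RANK() OVER (PARTITION BY LastName, FirstName
                                 ORDER BY RecDate DESC, ID) AS SeqNumber
         FROM Customers)
       WHERE SeqNumber > 1)""")

kept = [r[0] for r in conn.execute("SELECT ID FROM Customers")]
print(kept)  # → [1012]
```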
The flexibility of this technique also allows you to control how many rows per group you want to keep in the table. Let's say you have a database with promotional or rebate list information and you have common business conditions to enforce, such as "limit five entries per household" or "limit three rebates per person." By specifying the number of rows to keep (3) in the WHERE and HAVING clauses, your DELETE statement will do the job again and remove all excess (more than three) rebate entries per person:
DELETE
  FROM Customers
  WHERE ID IN
  (SELECT ID
      FROM
      (SELECT ID, LastName, FirstName,
         RANK() OVER (PARTITION BY LastName,
             FirstName ORDER BY ID) AS SeqNumber
         FROM
         (SELECT ID, LastName, FirstName
            FROM Customers
            WHERE (LastName, FirstName) IN
            (SELECT LastName, FirstName
               FROM Customers
               GROUP BY LastName, FirstName
               HAVING COUNT(*) > 3)))
      WHERE SeqNumber > 3);
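The keep-N rule can be verified the same way in SQLite via Python. The table name (Rebates) and rows are hypothetical, and the inner HAVING COUNT(*) > 3 prefilter (a performance optimization in the article's version) is omitted for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("""CREATE TABLE Rebates (
    ID INTEGER PRIMARY KEY, LastName TEXT, FirstName TEXT)""")
# Hypothetical rebate entries: five for one person, two for another.
conn.executemany("INSERT INTO Rebates VALUES (?, ?, ?)", [
    (1, "Krieger", "Jeff"), (2, "Krieger", "Jeff"), (3, "Krieger", "Jeff"),
    (4, "Krieger", "Jeff"), (5, "Krieger", "Jeff"),
    (6, "Chen", "Amy"), (7, "Chen", "Amy"),
])

# Keep at most three entries per person: rank rows within each group by ID
# and delete everything ranked past 3.
conn.execute("""DELETE FROM Rebates
    WHERE ID IN
      (SELECT ID FROM
        (SELECT ID, RANK() OVER (PARTITION BY LastName, FirstName
                                 ORDER BY ID) AS SeqNumber
         FROM Rebates)
       WHERE SeqNumber > 3)""")

kept = sorted(r[0] for r in conn.execute("SELECT ID FROM Rebates"))
print(kept)  # → [1, 2, 3, 6, 7]
```

Only the two extra entries for the five-entry person are removed; groups already at or under the limit are untouched.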
As you can see, using the RANK() function allows you to eliminate duplicates in a single SQL statement and gives you more capabilities by extending the power of your queries.
