
#### Create Your Own Aggregate Functions

Using the Y-to-X transformation, you can easily replace a standard SQL aggregate function or create your own. For example, you can calculate the number of rows in the table and the total of the discount column using a regular `SELECT` statement:

```
SELECT COUNT(*), SUM(discount) FROM pubs..discounts
```

Or you can get the same result without the `COUNT()` or `SUM()` aggregate functions:

```
DECLARE @cnt int, @totalDiscount dec(5,2)
SELECT @cnt = 0, @totalDiscount = 0
SELECT @cnt = @cnt + 1, @totalDiscount = @totalDiscount + discount
FROM pubs..discounts
SELECT @cnt, @totalDiscount
```

I tested both solutions on a table with nearly 15,000,000 rows and found the solution without aggregate functions approximately 15 percent slower than the standard one. But when I implemented slightly more complicated logic (if the current quantity value is divisible by 10 without a remainder, that value has to be subtracted from the total), both solutions showed almost the same execution times on a large table:

```
-- 1) Using the Y-to-X transformation technique
DECLARE @sum int
SELECT @sum = 0
SELECT @sum = CASE
WHEN qty%10 = 0 THEN @sum - qty
ELSE @sum + qty END
FROM pubs..sales
SELECT @sum

-- 2) Using a standard aggregate function
SELECT SUM(CASE
WHEN qty%10 = 0 THEN -qty
ELSE qty END)
FROM pubs..sales
```

You're probably wondering why you need to use a Y-to-X transformation technique for a task that can be done just as quickly or even faster using the traditional approach with aggregate functions. Well, because the two previous examples are very simple, they don't let you feel the full power and flexibility of Y-to-X transformations. The next example will allow you to implement much more sophisticated logic.

Suppose you want to know the date and time when the total qty amount in the pubs..sales table exceeded 185. In addition, you want to know how many orders contributed to that amount. The following listing presents the solution:

```
DECLARE @cnt int, @sum int, @date datetime
SELECT @sum = 0, @cnt = 0, @date = 0
SELECT * INTO #sales
FROM pubs..sales
ORDER BY ord_date
SELECT
@date = CASE
WHEN @sum < 185 THEN ord_date
ELSE @date END,
@sum = @sum + qty,
@cnt = CASE
WHEN @sum < 185 THEN @cnt + 1
ELSE @cnt END
FROM #sales
SELECT @cnt, @date
DROP TABLE #sales
-- Result:
-- ----------- -----------------------
-- 6           1993-05-24 00:00:00.000
```

The second task makes sense only when the data set is sorted by date. For that reason, I created the #sales table. Using the Y-to-X transformation, the second (main) query calculates three values: the order date, the total quantity amount, and the row counter. The first and third members of the select list stop updating their values once the value in `@sum` exceeds 185.

Note that the result of the query depends on the order of the variables in the select list. If you place the variables in the order (`@sum`, `@cnt`, `@date`), you will get a different result: *6 and 1993-05-22 00:00:00.000*. The `@cnt` position did not change relative to the `@sum` position, so the `@cnt` value is still calculated after the `@sum` calculation. The result *@cnt = 6* means the first six orders have a total quantity amount of less than 185.

On the other hand, the position of the `@date` variable did change relative to the `@sum` position, so the update of `@date` now occurs after the calculation of `@sum`. This is why you get the date of the last order within the limit of 185 instead of the date of the first order that exceeds the limit.
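To see the ordering effect for yourself, the main query can be rerun with its select list reordered as described above. This is a sketch, not a guaranteed behavior: assigning variables row by row in a multi-assignment `SELECT` relies on undocumented evaluation order, and the temporary table name `#sales2` is introduced here only to keep the example self-contained.

```sql
-- Same query as before, but with the select list reordered to (@sum, @cnt, @date)
DECLARE @cnt int, @sum int, @date datetime
SELECT @sum = 0, @cnt = 0, @date = 0
SELECT * INTO #sales2
FROM pubs..sales
ORDER BY ord_date
SELECT
@sum = @sum + qty,               -- @sum is now updated first
@cnt = CASE
WHEN @sum < 185 THEN @cnt + 1
ELSE @cnt END,
@date = CASE
WHEN @sum < 185 THEN ord_date    -- @date now sees the already-updated @sum
ELSE @date END
FROM #sales2
SELECT @cnt, @date               -- the article reports 6 and 1993-05-22 00:00:00.000
DROP TABLE #sales2
```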

#### Row-by-Row Processing Without Cursors or While Loops

Using the Y-to-X transformation, you can avoid cursors or loops when you need row-by-row data processing. Consider the following example: you need to drop all the nonclustered indexes from a table. You can do it in many different ways, but all of them require some kind of loop operation. Using the Y-to-X transformation, you can do it in one shot:

```
-- 1) Create the testDropIndexes table and indexes
IF EXISTS (SELECT * FROM sysobjects
WHERE id = OBJECT_ID('testDropIndexes')
AND OBJECTPROPERTY(id, 'IsUserTable') = 1)
DROP TABLE testDropIndexes
GO
CREATE TABLE testDropIndexes (
col1 int NOT NULL,
col2 int NOT NULL,
col3 int NOT NULL,
col4 int NOT NULL,
col5 int NOT NULL
) ON [PRIMARY]
GO
CREATE INDEX nci_col1 ON testDropIndexes(col1)
CREATE INDEX nci_col2 ON testDropIndexes(col2)
CREATE INDEX nci_col3 ON testDropIndexes(col3)
CREATE INDEX nci_col4 ON testDropIndexes(col4)
CREATE INDEX nci_col5 ON testDropIndexes(col5)
GO

-- 2) Drop the indexes
DECLARE @str nvarchar(4000), @tblName varchar(100)
SELECT @str = '', @tblName = 'testDropIndexes'
SELECT @str = @str +
'DROP INDEX ' + @tblName + '.' + name + ';'
FROM sysindexes
WHERE object_name(id) = @tblName
AND indid BETWEEN 2 AND 254
AND name NOT LIKE '_WA_SYS%'
EXECUTE sp_executesql @str
```

Using the sysindexes table and the Y-to-X transformation, the code accumulates the `DROP INDEX` statements in the variable `@str` and then executes them with `sp_executesql`.
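On newer SQL Server versions, sysindexes is kept only for backward compatibility. The same one-shot accumulation can be sketched against the sys.indexes catalog view instead; this is an adaptation, not the article's original code, and it assumes the table lives in the default schema:

```sql
-- Same Y-to-X accumulation, but over the sys.indexes catalog view
DECLARE @str nvarchar(4000), @tblName sysname
SELECT @str = N'', @tblName = N'testDropIndexes'
SELECT @str = @str +
N'DROP INDEX ' + @tblName + N'.' + name + N';'
FROM sys.indexes
WHERE object_id = OBJECT_ID(@tblName)
AND type = 2                     -- nonclustered indexes only
AND is_primary_key = 0           -- skip constraint-backed indexes,
AND is_unique_constraint = 0     -- which cannot be dropped with DROP INDEX
EXECUTE sp_executesql @str
```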