Monday, February 19, 2018

Filter predicates with NVL on mandatory columns using the index

Last Friday at work, I overheard a conversation between two DBAs about a performance problem: a job that usually ran in 10 minutes was now taking more than 4 hours. I helped them pinpoint the problem to a simple SQL query that took more than 95% of the time. Clearly this query was the culprit. The query selects from just one table, but with 10 filter predicates of this form:

nvl(column,'@') = nvl(:bind_variable,'@')

And all 10 columns were part of a single unique constraint. The trace file showed the query being executed 1500 times, and the unique index belonging to the unique constraint was not being used, resulting in 1500 full table scans. Your index won't be used if you wrap a function around your column, as shown here. That is what I learned in the past and had experienced many times. The query was part of a function generated by a TAPI generator, so I checked the table definition, and to my surprise I saw that the columns were all defined as NOT NULL, meaning the NVL constructs were simply unnecessary. So my advice to the developer responsible for this code was to manually alter the generated code and remove the unnecessary NVL functions, which would allow the query to use the unique index again.
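
To make the suggested fix concrete, here is a hedged before/after sketch with hypothetical table, column and bind names (the real query had ten such predicates). Note that the two forms are only interchangeable when the columns are NOT NULL and the bind variables are always supplied, which is the case for a unique-key lookup:

-- generated TAPI-style lookup: the NVL calls defeat the unique index
select *
  from some_table
 where nvl(code1,'@') = nvl(:b_code1,'@')
   and nvl(code2,'@') = nvl(:b_code2,'@');

-- equivalent lookup for NOT NULL columns and non-null binds: can use the unique index
select *
  from some_table
 where code1 = :b_code1
   and code2 = :b_code2;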

This morning at the coffee machine, I asked the developer how things were going with the performance problem, and he mentioned it had been solved by rebuilding the index, without altering the code. I could not believe this, because a query with all those NVL predicates would never be able to use the index, right? So I built a little test case to see it for myself (or not), ran it on my 12.1.0.2 database, and subsequently ran the script on a very old 9.2.0.5 database, and I saw a difference. I then narrowed the versions down to a 10.2.0.3 and an 11.2.0.4 database showing the same difference.

Here is the output of my testcase on a 10.2.0.3 database:

SQL> create table robtest
  2  as
  3   select level id
  4        , ceil(level/10000)  code1
  5        , ceil(level/1000)   code2
  6        , ceil(level/100)    code3
  7        , mod(level/100,100) code4
  8        , lpad('*',1000,'*') description
  9     from dual
10  connect by level <= 100000
11  /
 
Table created.
 
SQL> alter table robtest add constraint robtest_uk unique (code1,code2,code3,code4)
  2  /
 
Table altered.

SQL> exec dbms_stats.gather_table_stats(user,'robtest',cascade=>true)
 
PL/SQL procedure successfully completed.

So we have an analyzed table with 100,000 rows, in which the four columns code1, code2, code3 and code4 make up the functional unique key. The four columns don't have a NOT NULL constraint, although they contain only non-null values. In this particular 10.2.0.3 database, setting autotrace on did not fully succeed, but we still have enough information to draw conclusions:

SQL> set autotrace on
SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled.
SP2-0611: Error enabling STATISTICS report.
SQL> select *
  2    from robtest
  3   where code1 = 1
  4     and code2 = 1
  5     and code3 = 1
  6     and code4 = 1
  7  /
 
        ID      CODE1      CODE2      CODE3      CODE4 DESCRIPTION
---------- ---------- ---------- ---------- ---------- ----------------------------------------
       100          1          1          1          1 ****************************************
 
1 row selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1745344684
 
------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |     1 |  1019 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| ROBTEST    |     1 |  1019 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | ROBTEST_UK |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("CODE1"=1 AND "CODE2"=1 AND "CODE3"=1 AND "CODE4"=1)

Of course, four regular filter predicates lead to a nice access predicate and an INDEX UNIQUE SCAN, as expected.

SQL> select *
  2    from robtest
  3   where nvl(code1,-1) = 1
  4     and nvl(code2,-1) = 1
  5     and nvl(code3,-1) = 1
  6     and nvl(code4,-1) = 1
  7  /
 
        ID      CODE1      CODE2      CODE3      CODE4 DESCRIPTION
---------- ---------- ---------- ---------- ---------- ----------------------------------------
       100          1          1          1          1 ****************************************
 
1 row selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1915841528
 
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |  1019 |  3172   (1)| 00:00:39 |
|*  1 |  TABLE ACCESS FULL| ROBTEST |     1 |  1019 |  3172   (1)| 00:00:39 |
-----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NVL("CODE4",(-1))=1 AND NVL("CODE3",(-1))=1 AND
              NVL("CODE2",(-1))=1 AND NVL("CODE1",(-1))=1)

And using NVL around the columns disables the use of the index. Even adding NOT NULL constraints on those four columns doesn't change the execution plan:

SQL> alter table robtest modify code1 constraint nn1 not null
  2  /
 
Table altered.
 
SQL> alter table robtest modify code2 constraint nn2 not null
  2  /
 
Table altered.
 
SQL> alter table robtest modify code3 constraint nn3 not null
  2  /
 
Table altered.
 
SQL> alter table robtest modify code4 constraint nn4 not null
  2  /
 
Table altered.
 
SQL> select *
  2    from robtest
  3   where nvl(code1,-1) = 1
  4     and nvl(code2,-1) = 1
  5     and nvl(code3,-1) = 1
  6     and nvl(code4,-1) = 1
  7  /
 
        ID      CODE1      CODE2      CODE3      CODE4 DESCRIPTION
---------- ---------- ---------- ---------- ---------- ----------------------------------------
       100          1          1          1          1 ****************************************
 
1 row selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1915841528
 
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |  1019 |  3172   (1)| 00:00:39 |
|*  1 |  TABLE ACCESS FULL| ROBTEST |     1 |  1019 |  3172   (1)| 00:00:39 |
-----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NVL("CODE4",(-1))=1 AND NVL("CODE3",(-1))=1 AND
              NVL("CODE2",(-1))=1 AND NVL("CODE1",(-1))=1)

So far, everything is as expected. Now watch what happens when executing the same scenario on an 11.2.0.4 database:

SQL> create table robtest
  2  as
  3   select level id
  4        , ceil(level/10000)  code1
  5        , ceil(level/1000)   code2
  6        , ceil(level/100)    code3
  7        , mod(level/100,100) code4
  8        , lpad('*',1000,'*') description
  9     from dual
10  connect by level <= 100000
11  /
 
Table created.
 
SQL> alter table robtest add constraint robtest_uk unique (code1,code2,code3,code4)
  2  /
 
Table altered.
 
SQL> exec dbms_stats.gather_table_stats(user,'robtest',cascade=>true)
 
PL/SQL procedure successfully completed.
 
SQL> set autotrace on
SQL> select *
  2    from robtest
  3   where code1 = 1
  4     and code2 = 1
  5     and code3 = 1
  6     and code4 = 1
  7  /
 
        ID      CODE1      CODE2      CODE3      CODE4 DESCRIPTION
---------- ---------- ---------- ---------- ---------- ----------------------------------------
       100          1          1          1          1 ****************************************
 
1 row selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1291870759
 
------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |     1 |  1020 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| ROBTEST    |     1 |  1020 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | ROBTEST_UK |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("CODE1"=1 AND "CODE2"=1 AND "CODE3"=1 AND "CODE4"=1)
 
 
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       1590  bytes sent via SQL*Net to client
        348  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL> select *
  2    from robtest
  3   where nvl(code1,-1) = 1
  4     and nvl(code2,-1) = 1
  5     and nvl(code3,-1) = 1
  6     and nvl(code4,-1) = 1
  7  /
 
        ID      CODE1      CODE2      CODE3      CODE4 DESCRIPTION
---------- ---------- ---------- ---------- ---------- ----------------------------------------
       100          1          1          1          1 ****************************************
 
1 row selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 83448843
 
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |  1020 |  3875   (1)| 00:00:47 |
|*  1 |  TABLE ACCESS FULL| ROBTEST |     1 |  1020 |  3875   (1)| 00:00:47 |
-----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NVL("CODE4",(-1))=1 AND NVL("CODE3",(-1))=1 AND
              NVL("CODE2",(-1))=1 AND NVL("CODE1",(-1))=1)
 
 
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      14297  consistent gets
          0  physical reads
          0  redo size
       1649  bytes sent via SQL*Net to client
        359  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL> alter table robtest modify code1 constraint nn1 not null
  2  /
 
Table altered.
 
SQL> alter table robtest modify code2 constraint nn2 not null
  2  /
 
Table altered.
 
SQL> alter table robtest modify code3 constraint nn3 not null
  2  /
 
Table altered.
 
SQL> alter table robtest modify code4 constraint nn4 not null
  2  /
 
Table altered.
 
SQL> select *
  2    from robtest
  3   where nvl(code1,-1) = 1
  4     and nvl(code2,-1) = 1
  5     and nvl(code3,-1) = 1
  6     and nvl(code4,-1) = 1
  7  /
 
        ID      CODE1      CODE2      CODE3      CODE4 DESCRIPTION
---------- ---------- ---------- ---------- ---------- ----------------------------------------
       100          1          1          1          1 ****************************************
 
1 row selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1291870759
 
------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |     1 |  1020 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| ROBTEST    |     1 |  1020 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | ROBTEST_UK |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("CODE1"=1 AND "CODE2"=1 AND "CODE3"=1 AND "CODE4"=1)
 
 
Statistics
----------------------------------------------------------
         38  recursive calls
          0  db block gets
         65  consistent gets
          0  physical reads
          0  redo size
       1590  bytes sent via SQL*Net to client
        348  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed

As you can see in the execution plan, an INDEX UNIQUE SCAN is used. The predicate information section tells us that the optimizer was able to transform the predicates "NVL(column_with_not_null_constraint,-1)" into simply "column_with_not_null_constraint". The extra recursive calls and consistent gets disappear for subsequent executions. So somewhere between versions 10.2.0.3 and 11.2.0.4, the optimizer introduced this nice little optimization.

If you have a database at hand with a version in between the two versions mentioned and you would like to try it, here is the script you can copy, paste and run:

set echo on
set serveroutput off
column description format a40 truncate
select * from v$version
/
create table robtest
as
select level id
      , ceil(level/10000)  code1
      , ceil(level/1000)   code2
      , ceil(level/100)    code3
      , mod(level/100,100) code4
      , lpad('*',1000,'*') description
   from dual
connect by level <= 100000
/
alter table robtest add constraint robtest_uk unique (code1,code2,code3,code4)
/
exec dbms_stats.gather_table_stats(user,'robtest',cascade=>true)
set autotrace on
select *
  from robtest
where code1 = 1
   and code2 = 1
   and code3 = 1
   and code4 = 1
/
select *
  from robtest
where nvl(code1,-1) = 1
   and nvl(code2,-1) = 1
   and nvl(code3,-1) = 1
   and nvl(code4,-1) = 1
/
alter table robtest modify code1 constraint nn1 not null
/
alter table robtest modify code2 constraint nn2 not null
/
alter table robtest modify code3 constraint nn3 not null
/
alter table robtest modify code4 constraint nn4 not null
/
select *
  from robtest
where nvl(code1,-1) = 1
   and nvl(code2,-1) = 1
   and nvl(code3,-1) = 1
   and nvl(code4,-1) = 1
/
set autotrace off
set echo off
drop table robtest purge
/
Please paste your results in the comments section along with the version used.

Wednesday, November 23, 2016

Performance aspects of APEX reports

As this post appears, I'm presenting on this subject for the Dutch Oracle User Group OGh. This blog post won't contain as much detail as the presentation itself, although it's pretty close. If you're interested in seeing and replaying everything for yourself, you can find the material of the presentation in the "Presentations and papers" section and here directly. There are installation instructions in the readme.txt file to help you set up a special database schema and an APEX application on top of that schema. The running example is a query joining two tables, mytable and mytable_details, and a slow PL/SQL function which takes 0.01 seconds to execute due to a dbms_lock.sleep call. The objects are created like this:

demopaar@ORCL> create function my_slow_function return number
  2  is
  3  begin
  4    -- Wait 1/100 sec
  5    dbms_lock.sleep(.01);
  6    return 42;
  7  end;
  8  /

Function created.

demopaar@ORCL> create table mytable (id,name,status,mydate,filler)
  2  as
  3   select level
  4        , 'Name ' || to_char(level)
  5        , mod(level,20) + 1
  6        , trunc(sysdate,'yyyy') + numtodsinterval(dbms_random.value * (trunc(sysdate)+1-trunc(sysdate,'yyyy')),'day')
  7        , lpad('*',1000,'*')
  8     from dual
  9  connect by level <= 50000
 10  /

Table created.

demopaar@ORCL> create table mytable_details (id,mytable_id,name)
  2  as
  3   select level
  4        , level
  5        , 'Name ' || to_char(level)
  6     from dual
  7  connect by level <= 50000
  8  /

Table created.

demopaar@ORCL> alter table mytable add constraint mytable_pk primary key (id)
  2  /

Table altered.

demopaar@ORCL> alter table mytable_details add constraint mytable_details_pk primary key (id)
  2  /

Table altered.

demopaar@ORCL> alter table mytable_details add constraint mytable_details_mytable_fk foreign key (mytable_id) references mytable(id)
  2  /

Table altered.

demopaar@ORCL> create index myindex on mytable(status,mydate)
  2  /

Index created.

demopaar@ORCL> create index mytable_details_fk_i on mytable_details(mytable_id)
  2  /

Index created.

demopaar@ORCL> exec dbms_stats.gather_table_stats(user,'mytable',cascade=>true)

PL/SQL procedure successfully completed.

demopaar@ORCL> exec dbms_stats.gather_table_stats(user,'mytable_details',cascade=>true)

PL/SQL procedure successfully completed.

So both tables contain 50,000 rows. Table mytable contains 20 different statuses, each status occurring 2500 times, so the query below, with predicate "status = 3", retrieves 2500 rows. The query will therefore take at least 25 seconds (2500 rows times 0.01 seconds), because of the my_slow_function() call in its select list.

select t.id
     , t.name
     , t.mydate
     , td.name detail_name
     , my_slow_function()
  from            demopaar.mytable         t
       inner join demopaar.mytable_details td on (t.id = td.mytable_id)  
 where t.status = 3
 order by t.mydate desc

And the execution plan of this query (with the database parameter optimizer_mode at its default value all_rows) uses a nice hash join:

---------------------------------------------------------------------------------------
| Id  | Operation           | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                 |  2500 |   104K|  2288   (1)| 00:00:01 |
|   1 |  SORT ORDER BY      |                 |  2500 |   104K|  2288   (1)| 00:00:01 |
|*  2 |   HASH JOIN         |                 |  2500 |   104K|  2287   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| MYTABLE         |  2500 | 67500 |  2234   (1)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| MYTABLE_DETAILS | 50000 |   781K|    53   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T"."ID"="TD"."MYTABLE_ID")
   3 - filter("T"."STATUS"=3)

Typically, in a web report, you only want the first few rows (say 15) out of a sorted set to appear on the page. And the page contains buttons to navigate to the next/previous set of 15 rows. To select only the first 15 rows, wrap the query above and use the pseudocolumn rownum twice, for the lower and upper bound, like this:

select *
  from ( select x.*
              , rownum rn
           from ( select t.id
                       , t.name
                       , t.mydate
                       , td.name detail_name
                       , my_slow_function
                    from            demopaar.mytable         t
                         inner join demopaar.mytable_details td on (t.id = td.mytable_id)
                   where t.status = 3
                   order by t.mydate desc
                ) x
          where rownum <= 15
       )
 where rn >= 1

Looking at the nesting, it seems this query first needs to identify the 2500 rows, then sort them, execute the slow function, and only then limit the number of rows to 15. However, Oracle is smarter than that and uses an optimization technique for pagination queries, as you can see here:

rwijk@ORCL> set serveroutput off
rwijk@ORCL> alter session set statistics_level = all
  2  /

Session altered.

rwijk@ORCL> select *
  2    from ( select x.*
  3                , rownum rn
  4             from ( select t.id
  5                         , t.name
  6                         , t.mydate
  7                         , td.name detail_name
  8                         , my_slow_function
  9                      from            demopaar.mytable         t
 10                           inner join demopaar.mytable_details td on (t.id = td.mytable_id)
 11                     where t.status = 3
 12                     order by t.mydate desc
 13                  ) x
 14            where rownum <= 15
 15         )
 16   where rn >= 1
 17  /

     ID NAME       MYDATE              DETAIL_NAME MY_SLOW_FUNCTION   RN
------- ---------- ------------------- ----------- ---------------- ----
  32102 Name 32102 04-11-2016 16:38:57 Name 32102                42    1
   8842 Name 8842  04-11-2016 14:18:24 Name 8842                 42    2
  44942 Name 44942 04-11-2016 13:47:10 Name 44942                42    3
   5662 Name 5662  04-11-2016 13:34:54 Name 5662                 42    4
  12422 Name 12422 04-11-2016 12:46:42 Name 12422                42    5
  41562 Name 41562 04-11-2016 11:35:30 Name 41562                42    6
  38002 Name 38002 04-11-2016 09:36:56 Name 38002                42    7
  26922 Name 26922 04-11-2016 09:03:07 Name 26922                42    8
   3442 Name 3442  04-11-2016 08:38:25 Name 3442                 42    9
  13842 Name 13842 04-11-2016 08:10:28 Name 13842                42   10
   8042 Name 8042  04-11-2016 06:13:13 Name 8042                 42   11
  14522 Name 14522 04-11-2016 06:11:02 Name 14522                42   12
   7642 Name 7642  04-11-2016 01:45:08 Name 7642                 42   13
  31122 Name 31122 04-11-2016 01:35:38 Name 31122                42   14
  36322 Name 36322 04-11-2016 00:57:18 Name 36322                42   15

15 rows selected.

rwijk@ORCL> pause

rwijk@ORCL> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'))
  2  /

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  6p620zht2qhp8, child number 1
-------------------------------------
select *   from ( select x.*               , rownum rn            from
( select t.id                        , t.name                        ,
t.mydate                        , td.name detail_name
     , my_slow_function                     from
demopaar.mytable         t                          inner join
demopaar.mytable_details td on (t.id = td.mytable_id)
 where t.status = 3                    order by t.mydate desc
      ) x           where rownum <= 15        )  where rn >= 1

Plan hash value: 2786083199

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                 | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                      |      1 |        |     15 |00:00:00.17 |      52 |
|*  1 |  VIEW                             |                      |      1 |     15 |     15 |00:00:00.17 |      52 |
|*  2 |   COUNT STOPKEY                   |                      |      1 |        |     15 |00:00:00.17 |      52 |
|   3 |    VIEW                           |                      |      1 |     15 |     15 |00:00:00.17 |      52 |
|   4 |     NESTED LOOPS                  |                      |      1 |     15 |     15 |00:00:00.01 |      52 |
|   5 |      NESTED LOOPS                 |                      |      1 |     15 |     15 |00:00:00.01 |      37 |
|   6 |       TABLE ACCESS BY INDEX ROWID | MYTABLE              |      1 |   2500 |     15 |00:00:00.01 |      19 |
|*  7 |        INDEX RANGE SCAN DESCENDING| MYINDEX              |      1 |     15 |     15 |00:00:00.01 |       4 |
|*  8 |       INDEX RANGE SCAN            | MYTABLE_DETAILS_FK_I |     15 |      1 |     15 |00:00:00.01 |      18 |
|   9 |      TABLE ACCESS BY INDEX ROWID  | MYTABLE_DETAILS      |     15 |      1 |     15 |00:00:00.01 |      15 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN">=1)
   2 - filter(ROWNUM<=15)
   7 - access("T"."STATUS"=3)
   8 - access("T"."ID"="TD"."MYTABLE_ID")


36 rows selected.

This query doesn't take 25+ seconds, but only 0.17 seconds, as it fetches only 15 rows. The my_slow_function call is executed only 15 times. This happens because of the COUNT STOPKEY [filter(ROWNUM<=15)] line. It basically tells the database to execute the steps below it (lines 3-9), but stop once 15 rows have been retrieved. And as you can see from the A-Rows column, it works perfectly here. The execution plan starts with an INDEX RANGE SCAN DESCENDING of myindex (status,mydate). Index entries with status 3 are read in descending mydate order and, since the index itself is sorted, just 4 consistent gets (see the Buffers column of line 7) are needed to identify those 15 rows. Then the columns of mytable are retrieved and the mytable_details rows are joined using two (!) NESTED LOOPS. So this is an effective optimization technique, which prevents having to execute the function call 2500 times.

Lots of people have written about pagination in the past. If you want to read more about the technique itself or its side effects, here you can find a few of the better articles:

APEX

Now let's have a look at what APEX does with our query. In APEX you enter your query via the GUI into the APEX repository, and at runtime APEX constructs and executes dynamic SQL around the query you entered. The exact SQL constructed depends on numerous APEX settings. By SQL tracing your APEX application you can see exactly what query APEX constructed and executed.
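
If you want to obtain such a trace yourself, one way (an assumption about your setup, not something from the original post) is to enable SQL trace for a single page view via the URL and then format the resulting trace file with tkprof; the file names below are hypothetical:
  • append p_trace=YES to the APEX page URL, for example: f?p=100:20:1234567890123::NO&p_trace=YES
  • run tkprof on the trace file written to the database trace directory, for example: tkprof orcl_ora_12345.trc apex_page20.txt sys=no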

We'll start by looking at an LOV query, then classic reports and finally interactive reports. At a client site I tested this on version 4.2.4, and privately, as the basis for this post, I used 5.0.3 (in a prebuilt App Development VM). I saw no differences between the two versions for this story.


APEX List of Values

This is the one with the fewest variations. In an LOV query, we see that APEX uses two variants of the pagination query. One for the very first set of rows, with just an upper bound bind variable:
select a.* from (
select t.name || ' | ' ||
       to_char(t.mydate,'dd-mm-yyyy hh24:mi:ss') || ' | ' ||
       td.name || ' | ' ||
       my_slow_function description
     , t.id
  from            mytable         t
       inner join mytable_details td on (t.id = td.mytable_id)
 where t.status = 3
 order by t.mydate desc
)a
where ROWNUM <= :p$_max_rows

and one for the subsequent sets, also containing an offset:
select * from (select a.*, ROWNUM as apx$rownum from (select t.name || ' | ' ||
       to_char(t.mydate,'dd-mm-yyyy hh24:mi:ss') || ' | ' ||
       td.name || ' | ' ||
       my_slow_function description
     , t.id
  from            mytable         t
       inner join mytable_details td on (t.id = td.mytable_id)
 where t.status = 3
 order by t.mydate desc
)a
where ROWNUM <= :p$_max_rows) where apx$rownum >= :p$_first_row

So, APEX LOV queries use the pagination optimization technique and are efficient. The user experience is that the first set of rows takes 0.15+ seconds, the next set 0.3+ seconds, then 0.45+, et cetera, since each page executes the slow function for all rows up to its upper bound. Below is a tkprof snippet from clicking the "Next" button on the page containing rows 31-45, which needs to retrieve rows 46-60:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.03       0.64       8096       8953          4          16
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.03       0.64       8096       8953          4          16

We see the elapsed time of the fetch is 0.6+ seconds, as expected. Also worth noting is that APEX uses a single fetch to retrieve 16 rows. A nice bulk fetch. It fetches 16, not 15 rows, presumably because APEX needs to know whether to display the "Next" button when rendering the new page. If only 15 rows are present, the 16th row would be missing, and the "Next" button wouldn't be displayed.


APEX Classic reports

The queries issued from classic reports are very different from LOVs, since they don't use the pagination technique under the covers. If they did, the "Download" functionality wouldn't retrieve all rows, but just the rows currently being displayed, which is probably not what you want. Also, user defined sorting would break, since only the retrieved rows (the small set) would be sorted. A classic report query therefore sets out to retrieve all rows, using the hash join plan in my example, but it just doesn't fetch all rows from the cursor, only the number of rows displayed, plus 2. Our query in a classic report, with an extra search box as indicated in the wizard, looks like this:

select * from (
select t.id
     , t.name
     , t.mydate
     , td.name detail_name
     , my_slow_function
  from            mytable         t
       inner join mytable_details td on (t.id = td.mytable_id)
 where t.status = 3
 order by t.mydate desc
) where (
 instr(upper("NAME"),upper(nvl(:P20_REPORT_SEARCH,"NAME"))) > 0 
)

And executing this query takes the full 25 seconds. A tkprof snippet from pressing "Next" on the page with rows 31-45, to get to rows 46-60:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       62      0.47      25.18       8096       8953          4          62
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       64      0.47      25.18       8096       8953          4          62

Three things to note here:
  • Even though we need only 15 rows, the first 45 rows are also fetched, because no pagination (offset) is used.
  • A mysterious extra fetch is done. For 60 rows, we expect 61 fetches: 60 rows and one more to know whether APEX should render a "Next" button. So why is the number of fetches +2?
  • Classic reports don't bulk fetch like LOVs do. We need 62 single-row fetches here instead of a single bulk fetch.
If we specify a pagination type with a Z (for example Row Ranges X to Y of Z), APEX fetches all 2500 rows for the classic report. And if we specify a Maximum Row Count, APEX fetches ((Maximum Row Count) + 1) rows. In all cases, the query stays the same; only the number of fetches changes.

If performance of a classic report is a problem, there are a couple of ways to address this:
  • Use an additional functional default filter, if applicable. For example, only select rows where mydate is more recent than a user provided value, default sysdate minus five days. This will limit the number of rows retrieved and will possibly flip the execution plan to use a nested loop.
  • Write your own pagination into the query, without using an offset. Internally, APEX keeps track of where you are in your pagination navigation. If you hover over the "Next" button, you see "javascript:apex.widget.report.paginate('', {min:16,max:15,fetched:15});". And if you follow the numbers in the second argument through the JavaScript, you can see the first two end up in the Oracle package variables wwv_flow.g_flow_current_min_row and wwv_flow.g_flow_current_max_rows. You can't use Oracle package variables directly in a SQL query, but by writing a wrapper package around these variables, you can use them, like this (a sketch of such a wrapper package follows the query below):
select * from (
select * from (
select t.id
     , t.name
     , t.mydate
     , td.name detail_name
     , my_slow_function
  from            mytable         t
       inner join mytable_details td on (t.id = td.mytable_id)
 where t.status = 3
 order by t.mydate desc
) where (
 instr(upper("NAME"),upper(nvl(:P20_REPORT_SEARCH,"NAME"))) > 0 
)) x where rownum <= pagination_hack.min_row + pagination_hack.max_rows
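
For completeness, here is a minimal sketch of what the pagination_hack wrapper package could look like. The package and function names are taken from the query above, and the bodies simply return the two wwv_flow package variables mentioned earlier; this is my assumption of how such a wrapper would be written, not code from the original post:

create or replace package pagination_hack
as
  function min_row return number;
  function max_rows return number;
end pagination_hack;
/
create or replace package body pagination_hack
as
  -- expose the APEX engine's current pagination window to SQL
  function min_row return number
  is
  begin
    return wwv_flow.g_flow_current_min_row;
  end min_row;

  function max_rows return number
  is
  begin
    return wwv_flow.g_flow_current_max_rows;
  end max_rows;
end pagination_hack;
/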

For example, for the second batch of 15 rows, by adding the 16 and 15, the predicate becomes "rownum <= 31". That is just enough to fetch all visible rows and let APEX know that a "Next" button is needed. APEX takes care of discarding the first set of 15 rows. Note (again) that this practice is not recommended, since it'll break your download functionality and give unexpected sorting when you click on a sortable column. And you cannot sensibly use a pagination type with a Z.

The best way of addressing performance issues on classic reports is the same as for interactive reports. So hang on.


APEX Interactive reports (IR)

Interactive reports are great for end users, giving them LOTS of options to sort, filter, aggregate and more. APEX handles the user defined settings by dynamically adjusting the query fired at the database. This is very different from a classic report, for which the query stays the same. It also makes it non-trivial to demonstrate what the query arriving at the database will look like. Below is what I can come up with after tracing lots of variants. It's obviously still far from complete and leaves many options untold, but it gives you an idea of what an IR-generated query looks like:

select null as apxws_row_pk,
       [ all visible columns ,]
       [ in case Maximum Row Count specified or Row Ranges X to Y of Z: count(*) over () as apxws_row_cnt]
  from ( select  *
           from ( select [ all columns (b.*), including LOV columns (Ln.descriptor)]
                    from ( select *
                             from ( [ your APEX-report query ] )
                         ) b,
                         [ outer join LOV tables ] 
                ) r
          where [ your IR filters ]
       ) r
 [ in case Max Row Count specified: where rownum <= to_number(:APXWS_MAX_ROW_CNT) ]
 order by [ IR user sort ]

Basically, the query is a non-pagination query that sets out to retrieve all rows. You can trim the result set by specifying the Maximum Row Count property.

There are two settings and one characteristic of the query execution plan that are important to the performance of IR queries:
  • Pagination Type
  • Maximum Row Count
  • Whether the base query can be fetched from directly
Pagination Type
The three possible options for an IR pagination type are None, Row Ranges X to Y of Z, and Row Ranges X to Y. If you specify "Row Ranges X to Y of Z", APEX needs to calculate the Z, and it does so by adding the analytic function "count(*) over ()" to the outermost block. To calculate this count, all rows must be visited, and the query will therefore never be able to fetch directly. If your IR query will retrieve lots of rows and performance is important, you should choose the Pagination Type "Row Ranges X to Y", without the Z.

Maximum Row Count
If your base query retrieves too many rows, you might want to reduce this number by specifying the Maximum Row Count. If you set it to, let's say, 1000, an end user probably won't hit the "Next" button so many times that he/she will hit that number. There are some issues with this. The first is that the download functionality will be deemed broken, as only 1000 rows will appear in the file. The second is that the user might be unpleasantly surprised when sorting the 1000 rows and expecting rows at the top that have in fact been filtered out. But if these disadvantages don't mean much to you, you can use this property for tuning purposes.

If you specify a non-null value for Maximum Row Count, APEX will add a "where rownum <= to_number(:APXWS_MAX_ROW_CNT)" predicate to your query. This sounds nice, as it will introduce a COUNT STOPKEY in the query execution plan and get us pagination-query-type performance. However, specifying the Maximum Row Count also introduces the "count(*) over ()" expression, regardless of the Pagination Type! This effectively kills the possibility to fetch directly, as the database still needs to visit all rows. By unwrapping some APEX packages for learning purposes, I learned that the "count(*) over ()" expression is generated when the Maximum Row Count is specified OR the pagination type is "Row Ranges X to Y of Z". This doesn't make sense to me and I'd love to hear why this is the case. I'd expect the "count(*) over ()" to be generated only when calculating the Z in pagination type "Row Ranges X to Y of Z". Don't try this at work, but in my development VM I hacked the APEX package and changed the OR into an AND, and witnessed a spectacular performance improvement because the optimizer could now successfully transform the execution plan into a plan similar to a pagination query.

Whether the base query can be fetched from directly
Here I'm talking about making it possible for the database to immediately start returning rows, right after issuing the query. If we have a simple query retrieving rows from just one table, this is possible. If we join a second table with a nested loop join, it's still possible. If we join the second table using a hash join, then it's not possible. The hash join needs to be complete before we start returning rows. There are all kinds of operations which prohibit the database from immediately returning rows, too many to list here. Aggregating, grouping and analytic functions generally prohibit the immediate returning of rows. For sorting you might need an index on the column(s) in the order by clause. Et cetera.

There is, however, a generally good idea you can apply here. In our example we tell the database to give us 2500 rows, and the CBO generates an execution plan to make that happen as fast as possible. But we didn't tell the CBO our intention of only wanting the first 15 rows. The purpose of the FIRST_ROWS(n) hint is to tell exactly that to the database. In our example, adding a FIRST_ROWS(10) hint immediately switches the plan from a hash join to a nested loop join after an index range scan descending, to identify the first 15 rows quickly. A FIRST_ROWS(n) hint is suboptimal for the download functionality, but that's a disadvantage which is usually outweighed by a much quicker response time for regular IR querying. Sometimes, however, a FIRST_ROWS(n) hint is not enough. A customer I visited had the database parameter OPTIMIZER_MODE set to FIRST_ROWS_10, but nevertheless didn't see their queries switching to direct fetch plans. I reluctantly had to add optimizer hints to the queries, forcing nested loop joins and sometimes even an index_desc hint, which made most queries better. The trouble with adding hints, especially with interactive reports, is that they are not always the right fix for every query that can be generated.
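
For illustration, here is a sketch of where such a hint would go in the running example's report source query; the value 10 is just the one mentioned above, and you would normally pick a value close to your page size:

select /*+ first_rows(10) */
       t.id
     , t.name
     , t.mydate
     , td.name detail_name
     , my_slow_function
  from            demopaar.mytable         t
       inner join demopaar.mytable_details td on (t.id = td.mytable_id)
 where t.status = 3
 order by t.mydate desc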


Conclusion

For both classic reports and interactive reports, the recommended solution is the same: make your query able to fetch directly, preferably without hints or with just the FIRST_ROWS(n) hint. If that doesn't work, use other hints (like USE_NL or INDEX_DESC) to force the execution plan you want. Needless to say, a sound data model helps tremendously with writing straightforward queries that allow direct fetching.

Leave the Maximum Row Count property null, so classic reports won't fetch all the way to this number and interactive reports won't introduce the analytic function count(*) over ().

Don't use a Pagination Type with a Z, so classic reports won't fetch all rows and interactive reports again won't introduce count(*) over ().


Recommendations for the APEX development team

Maybe some of them have already been evaluated and rejected. Or maybe I'm as yet unaware of certain disadvantages. If so, I'd appreciate it if you would mention this in the comments section.

1) Use bulk fetch
In classic reports and interactive reports, there are as many fetch operations as there are rows being retrieved, which indicates single-row fetching. In a report you'll typically retrieve 10 to 100 rows, so it makes sense to use bulk fetching here, which lowers the number of context switches between PL/SQL and SQL. When I tweeted this finding recently, Joel Kallman tweeted back with "IRs don't use array fetch, probably should. Legacy decision. Will evaluate using array fetch for IG in 5.1.", where IG stands for the new Interactive Grid. It's still worth implementing this for classic and interactive reports as well, in my opinion.
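
For reference, here is a generic sketch of what array fetching looks like in PL/SQL, written against the example table from this post (plain PL/SQL, not APEX internals):

declare
  cursor c is
    select id, name
      from mytable
     order by mydate desc;
  type t_tab is table of c%rowtype;
  l_tab t_tab;
begin
  open c;
  -- a single fetch call retrieves up to 100 rows in one round trip
  fetch c bulk collect into l_tab limit 100;
  close c;
  dbms_output.put_line(l_tab.count || ' rows fetched');
end;
/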

2) In interactive reports, use "count(*) over () apxws_row_cnt" if and only if Pagination Type is set to "Row Ranges X to Y of Z"
As it is now, this analytic function is also generated when Maximum Row Count is specified, needlessly preventing pagination optimizations. If you cannot make your query fetch immediately, setting the Maximum Row Count property (and giving up a useful download functionality) is a good option to lessen the performance impact. However, this works best when the analytic function is not there.

3) Default top level hint FIRST_ROWS(10), and checkbox to not generate this hint
It makes sense for APEX to always include a FIRST_ROWS(10) hint in the query for interactive reports, and also for classic reports when Maximum Row Count is null and the Pagination Type doesn't contain a Z. This clearly states APEX's intent to the cost based optimizer of only fetching the first few rows and will generally lead to better execution plans. To still give the developer full control over the generated queries, APEX may need an extra checkbox to not include the hint in a rare case, even though I can't think of a reason to not want this hint.

4) Default top level hint ALL_ROWS behind download functionality and classic reports with a pagination type containing a Z, and checkbox to not generate this hint.
The counterpart of recommendation 3 is that the download functionality in both classic reports and interactive reports will be better off with an ALL_ROWS hint, clearly stating to the cost based optimizer the intent to fetch all rows. The hint is also useful for a classic report with a pagination type containing a Z, because such a report will also fetch all rows.


Update 29 December 2016

In APEX 5.1 (5.1.0.00.45 to be more precise), all of the above conclusions and recommendations still stand.

Update 1 June 2018

In APEX 18.1, the inner workings of classic reports and interactive reports have changed, and the story above no longer applies. It applies up until version 5.1.4. Also, bulk fetch has been implemented in 18.1. Recommendations 2, 3 & 4 still stand and would be nice to have.

Thursday, April 30, 2015

Painting the Mona Lisa with SQL

Seven years after the first Obfuscated SQL Code Contest, a new one has been organised on the PL/SQL Challenge website. I had fun writing my first entry back then, but in the back of my mind I regretted not doing something with ASCII art. So the new contest was a good pretext for me to finally fill in that gap. Here is my entry for the second Obfuscated SQL Contest:

SQL> select listagg
  2   (chr(nvl2(x,y,
  3   32*nvl2(a,1,2)
  4    )))within group
  5    (order    by l)
  6     mona_lisa  from
  7     (select level"L"
  8      from user_users
  9      connect by 2*3*4
 10       *8*16+64>=rownum
 11       )l left join  (
 12        select ascii('@'
 13        )*(ascii(substr(
 14         x,rownum*3-2,1))
 15         -65)-65+ascii  (
 16          substr(x,rownum*
 17          3-1,1))"A",ascii(
 18           '@')*(ascii(substr
 19           (x,rownum*3-2,1))-
 20            65)  -131+ascii(
 21            substr(x,rownum*3
 22            -1,1))+ascii(substr
 23             (x,rownum*3,1))"B"
 24             from(select'HUKI'||
 25             'RLJRPKOULOUMNWN'||
 26              'NVONUPNVQBDQNYR'||
 27              'BDRMZSNOTUGUVEV'||
 28               'WDW]EXVEXgEYLOY'||
 29               ']NZLNZ\N[MN[^K\'||
 30                'MO\]L]NM]^J^OG^'||
 31                'VD^^I_OF_^H`QF`'||
 32                 '^EdSEeTDfUHgUHh'||
 33                 'WDaSF'x from dual
 34                  where 1=1/*voor@
 35                   patch72*/group by
 36                     cube('m','o',
 37                     'n','a','l',
 38                   'i','s','a') having
 39                grouping_id('m','o','n','a',
 40           'l','i','s','a')between 11 and 58))x on
 41           (l between a and b) left join (with x(v)
 42            as(select 1 from dual union   all select
 43            x.v+1 from dual join x on(28>=x.v))search
 44             depth first by v set a select(16+v+decode
 45             (sign(round((v-6)/5)),-1,0,0,1,4))*64"X",
 46              91"Y"from x union all select p,x from (
 47               select x,y,sum(nvl2(nullif(z,1),z-1,1))
 48                over (order by i) p from (select i,x,y
 49                 ,nvl(nullif(dense_rank()over(order by
 50                  decode(y,'louvre',x)   nulls first
 51                    ),28),29)z from(select i,x,case
 52                     when x in(34,35,36,37,38,48,
 53                      49,51,54,57,58,59,60,62,63
 54                       ,64,66,67,69,70,72,73,79,
 55                         81,82,83,85)then'louvre'
 56                           end y from(select i,
 57                             158-ascii(regexp_substr
 58                               (x,'((.)?)',i,1))x from
 59                                  user_users model return      updated
 60                                    rows dimension by(1 i) measures
 61                                      (rpad('leonardo da vinci',7
 62  *6*5*4*2-1)x)(x[for i from 1 to 1679 increment 1]='5>5*u("CCCCvoo'||
 63  '+su>vqBBoTT7<::m:1SSv;}voqClao;;*o}q(B}?RBu"Aq}ov}quBRBu(";iv}r}'||
 64  'R::Y:SoA};BBB("5Bo;JB(vv;qA}>o(Boovq"*BH(;R}1xQX<R*BB"u;olq((uo5'||
 65  '*BpqqoqrBrB(Br"uo(o1nSO<S}(}qv~uq}pCo;J~oopBqqwqq"qo;v9mZZK+BBBB'||
 66  'q""o(};Bpqrq"B>""BqBo5:kPI<pur>qrqo;q>5}rB}q}q}w}q}:mNCs tt]]]O<'||
 67  'opo>;qopBgqqqwpqq~qor5:ztN}>eBwDfV5pBBwpBp;~wr>r>BwqrqT{i>qdqq~>'||
 68  'q5E]e<h?Bq"qB;q~wp~qpr>opW{Sq~>_q~ugSQY;qqqqqo~q~>q~~qq5{]5Xq}DD'||
 69  'Y<}~B>"q>~>qrwq~WzCrXwpZfSYCoqrqopq>~pqo(z]uUquD:Q:[Bw~?B~q~>~r5'||
 70  'HTz}Vwq}vSiS[Cvo+Cp~~q~5BWzDqXw~}~q5iiDD\uvju>~r~q"<y}BVw~~>~"q*'||
 71  'EW^FPEDqz**y]qrMw~~>uv:^Zuf]yufxBLrqwqoS:\SZ{~~AADy:"q`r55p;rr~q'||
 72  'p5;RDSSnf|Shv|f|~~SPfxv~p5}(WW?nTjS<fDS<|<<S|:|ffm<e:S|q~uofSzS|'||
 73  '<|:N   JjvyT:{g>???+|QyQQf:z:cRQf|B}jf|Syf{:tz<H5z<]SRE  nt88oBP'||
 74  'Qf_:<{rBBS<z:|p *>~}t }>p~~qQ]uyw wpup>~>rwS\]S<|r>fQn~q>rr7(D>>'||
 75  'y]ooq~~pp;BsB>y5A:nQdf|5B|f|SzZ`B}wb}5Dfzf]xf:|<zfQ|9f:iYw}qbwqu'||
 76  'f[Qf5f:Df|Qyqb(~~rerB*Sc]nDl"|<|]Sz<|C`ood;;*<]z]S{:{:zSz<15{f|Q'||
 77  '|fzNqcqaopho5Zf:`]z:yfzQ]yBqnpz?u7l5kqvv:SSx:y:{SnSy<]S{S{u5xw;r'||
 78  'rS<{<SmuFuS<|Q{:zQ<{]|:nfzS|<y]|]]o5qx QyQ;yprB;auZf:{<z:ef:{]xQ'||
 79  '<nvw;B>xNNSvvr5A("qBq(ufFP]:QSm<{QS|]kf{QS|:|]|R}qq;u+?r~rvD+<R<'||
 80  '|B>q~pqPAoSQm:z]k:yS<zS|<zoq~~}ww  H88tPi8~q>~qrBuuSSmQS{:zQf:|<'||
 81  'x<zS]Snlqqyrr?TTo5uoq~"o(uPS|f:efkQ|Szf:|Sz<x:}rzwHHBuBBuBgvqujT'||
 82  '<|f|]zS|:mfhQyf{S|S<z:|(pm>qBBo(uff<|Q|]|S{Qz]{Q|f{]|:|f|Q|D<|:|'||
 83  'Q|Se1mqu}o+*<<|<{]|<xS<m<zSyS{:x:|Q|fxS+zrqo(TZ{f:|SnfeQSz<{Q{*P'||
 84  '|<{:|:|QeRRjTS:|]{:{SyQSyfd<nuPQ|f<{]z]`]{]{f{S|:yQ{S{S|]zf{Qxv*'||
 85  'QQz:{Qfy]y]z]h]yf<{f:^Q|*PD{Sz:y]xfzojP{f|<{:{Q|f|QS|Qf|Sn:hQoS]'||
 86  'fzQ]zQeC"*tD|Pyf{QfD]DD|Qz]|:xSz:{<ETjQn]cZpBwvESSDZ|f:S|i]fjED|'||
 87  ':Qc:zSi:Qf|f:|:kfnppq}ouuDSi]Sju]EvoFEoDz]z:zQxEf:P]{]S{:z<hRrqr'||
 88  'B}AABFviuDog;BB*ioSk<mfQfz]z]{fhSZR*}~}rq"*wvqBB}rBorB}DTWz:|Q:zW'
 89  )))))where y is null)y on(l.l=y.x)group by ceil(l.l/(2*ascii(' ')))
 90  /

MONA_LISA
-----------------------------------------------------------------
i`it)v|[[[[(//s+)`(-\\/JJgbdd@@@@@@@dmKK(c!(/-[2=/cct/!-v\!_L\)|
]-!/(!-)\L\)v|c5(!,!Ldd@@@@@@@@@@@@@@@@@@@dK/]!c\\\v|i\/cT\v((c-
]!`/v\//(-|t\VvcL!m@@@@@M@@@@@@@@@@@@@@@@@@@@bLt\\|)c/2-vv)/it\.
--/-,\,\v\,|)/v/m@@@@@@K@@@@@@@@@@@@@@@@@@@@@@@bK!v!-( )-!.[/cT
//.\--'--|-/c(e@@@@@@@DD@@@@@@@@@@@@@@@@@@@@@@@@@@s\\\\-||/v!c\.
-,-|\`||\-\/id@@@@@@@@N@@@@@@@@@@@@@@@@@@@@@@@@@@@@b.),`-,-/c-`i
!,\!-!-!'!-!d@@@@@@@P[+~**AAA@@@@@@@@@@@@@@@@@@@@@@@b/./`c-/.\7-
--'.-- -/,id@@@*P!`          \'Z8@@@@@@@@@@@@@@@@@@@@@i.\\'.\.c
',`,`\'-,-J@@5`-           -- `-iYA@@@@@@@@@@b@@@@@@@@@_\-|-\c-
'. -.,`/.G@@K- `               - )7KM@@@@@@@@@@@@@@@@@@@c-----/
- `-  --i@@Ai                    -!ZZ@@@@@@@@@@@@@@@@@@@b! \`|-`
 `-,'- G@@@[,                    '.D8K@@@@@@@@@@@@@@@@@@@[/-,-/.
-` .-/v@@@A)                      -)ZdMd@@@@@@@@@@@@@@@@@@\' _\
- ` ,iVJ@@@!                     '-!(K5K@@@@@@@@@@@@@@@@@@[(/s[.
  - i\G@@@Z-                    ' ! -i55ZZ@@@@@@@@@@@@@@@@@)(4)`
 , -|b@@@@!\                     '  ` |-tYG@@@@@@@@@@@@@@@@XNYZ-
   tt@@@@A-,                        '  `)(d@@@@@@@@@@@@@@@@D)8A[
   )8@@@@@\                         ,-'-/Kd@@@@@@@@@@@@@@@@@KD@[
  ]]Z@@@@d|-              ,ii.c,, -.icLZKK@@@@@@8@K@@@@@@@@@(@8[
  KN8@@@@@( .i!vGG_      J4Kb8ZKb@bbK@d@88@@@@@@@b@@@@@@@@@@dK@-
 )/8K@@@K@b@dP~~~T4(    Jd@@7`___s@M@@@@MM8d@@@d@@@@@@@@@@@@LM8[
\!48@K@@@@8@@d*@@@bVi   bAKLY~~@@@@@@*ff/\NM8@@@@@@@@@@@@@@@db@[
,\\Kb@@@d@.~t` !*~!`.  -MA)    '~'.).` `,'K@@@@@@@@@@@@@@@@@AKb[
,`8M@@@@@@ -`,,gvZ``    A//-  ..c\+\`    i]d@@@@@@M@@@@@@@@@@@8[
i\@8@K@@@D              \!'             !iZ8@@@8A@@@@@8d@b@@@8M[
e8d5@@@@@@             '!-             '-)8@@@@@@@@@@@@@@@@@@M8i
8dZ8@M@@@@-             v  ,          ,\tK@@@@@@@@@@@@A@@@@@@Z2|
@b@AK@@@b@[              //           cctbA@@@AK@@d@@d@@@K@@@bmi
@@8@M@8@@@P-            -=/.         /iD8d@@@@@@@@@@@@@@A@@@d@@[
@8@@@MA@@@@\-      .   _)g2i        -((dKK@@@@@d@@@@d@@K@@@@@@K[
@@@bAK@@K@@)i     'c,,Kb@@bK       )X)Kb@M@@d@@@Mb@@A@d@@@@@@8@[
@K@b@@@@A@AA/i-     ~M@@@@Mc    .,\c=)D8d@@b@@@d@@@@@@@@@@8d@@A[
@@@@Mb@@@@@@('c\`     PPK((,i]v|-\-v)8XNAdMK@@@@@@@b@@MK@A@@@@@[
@@8@@MK@d@A@L!--c)s_, ,(ZsbLb@\`- .-N]/KM@@@@@@@d@@@A@@@@@@@@d@[
@@Kb@@@K@b@@@/-  !''~~Vff*N5f -` -,\))KK@@@@@@@MK@@d@@@M8d@b@@@[
@b@@@KAK@@@@@@2--    ,,_JJ/i)/- |/v)NK@8d@@@@@@@@@@8@@@@@@@@M@K[
@@8d@K@@@b@@@@@d!,   'VV\)\\)\7(-)4Jb@8@A@@@K@d@@@@@@@8@@@@@@@@[
M@@@@8@@K@Kb@@@d@v.       `-\\/v)88b@M@A@K@@M@@@A@@M@8@@A@d@8@M[
Zb@d@M@K@@@@@@@@@@m       -)!/stbb@b@@A@b@@@@@Kb@@@@@@@b@@@K@@@[
K@@d@@@@@d@M@8@@@@@Ks   ,-/vJD@@8d@K@@@@@@8@@@@@@@@@@MK@@@b@@M@[
tN@b@@d@d@M@@@@@@@@@@LL4JKd@A@@d@@K@@@@MK@@@@8@@@@@@@@@@@b@@@@@[
)NM@8b@@A@@@A@@@@@@@@@@@@@@A@@A@@8@@K@d@@@@M@@K@@K@A@@@8@@M@@@@[
(tMM@@@d@@M8@@@@A@@@@A@@@A@@@@@@@@@A@@@@8b@@8d@@@@@@@@@@@@@@@@M[
tNZ@@K@@@d@@@@A@@@@@8@@@/4N@@8@b@@d@@M@8@MK@M8@K@@@@@@d@@@@@@@@[
M/KA8@@@MA@@@M@@@@@@@@@@[|t*Z@N@@@@8@@M8ZAZZ@M@@@A@d@@@@@K@@@d@[
bYJ4M@@@@@@A@@@@@@@@@@@@D.\'(YKKZD@8dK@5A84YZ@dM@@@@@@@@@@@@d@@[
K5dM8@8d@d@@@@@@@@8@@@@@@..-!/))ZK5AK4)AY(/XY/Z@@@A@@@d@@@M@@@@[
Y8dNA@@AK@@d@@@b@@@@@@@@@L,-,\!]]\X(5)Z/7c\\t5/K@@@@@@@@b@@@@@@[
8M8@@@A@@@A@@8@@@@@@@@@KDLt! !,-|t'(-\\!,\/,\!ZJG@@@d@Md@@@G@@@[

49 rows selected.

Note: I seem to have misunderstood the new contest. It's totally different from the first one. This only became clear to me after I tried to submit my finished entry on the PL/SQL Challenge website. Oh well, it sure was fun!

Wednesday, June 18, 2014

9th Planboard DBA Symposium

A couple of months ago, Nienke Gijsen invited me to speak about materialized views at the upcoming Planboard DBA Symposium. Because I had the pleasure of presenting there before, I knew the conference is always well organized and a pleasure to visit. So of course I accepted the invitation. We agreed I'd talk on "just" incremental refreshes of materialized views using materialized view logs, and about my experience at client sites with MV logs. Since the central theme of all presentations is sharing real-world Oracle database related stories to help fellow DBAs, this fitted nicely.

During the day I visited four other interesting presentations:
- Edwin Biemond, about using Puppet to get to a fully working database and Fusion Middleware environment in minutes,
- Eric Zonneveld, about using I/O Resource Management on Exadata in a huge database environment at KPN,
- Yuri van Buren, about using AWR reports and ADDM reports together to get the best view on what might be wrong within the database,
- Bernhard de Cock Buning, who used a poll to talk about the four most voted for topics related to 12c: Multiple indexes on same column list, Adaptive SQL Plans, SQL row limits and Flex ASM.

My own presentation about materialized view logs can be downloaded from the "Presentations and Papers" section, and via these two links: powerpoint (pptx) and scripts (zip).

Thanks Nienke, Eric and Corne for organizing such a fine conference again.

Monday, March 31, 2014

APEX World 2014

The fifth edition of OGh APEX World took place last Tuesday at Hotel Figi, Zeist in the Netherlands. Again it was a beautiful day full of great APEX sessions. Every year I think we've reached the maximum number of people interested in APEX and we'll never attract more participants. But, after welcoming 300 attendees last year, 347 people showed up this year. Michel van Zoest and Denes Kubicek also wrote about the event. Below is a short summary of my experiences.

Each year, Joel Kallman, head of the APEX development team, sends one of his team members to our conference. This year Joel Kallman himself agreed to come and deliver the keynote. The title was Oracle Application Express 5.0 - By APEX Developers, for APEX Developers. This summarizes the essence of the next major release very well. All new features seem to focus on making the life of an APEX developer even easier than it already was. The interactive page designer, one universal theme, uploading zipped files, built-in syntax highlighting and much more. Also, the design of the tool itself has improved with all the fresh new flat icons in iOS7 style. Can't wait to download the new version.

The first parallel session I visited was Dimitri Gielis' session "Things I do in every APEX project". I was eager to hear tips from someone as experienced with APEX as Dimitri. I think all people in the room, from beginner to expert, will have heard a few good tips they can use for themselves. The only unfortunate thing was the session's length: only 50 minutes. It looked like Dimitri had lots more tips to share. Maybe a part two in the future?

Peter Raganitsch talked about "Going Public - What to consider when the world has access to your APEX Application." After a great introduction of some of the quirks around the APEX URL, authentication and the session identifier, he showed several possibilities for having human-readable URLs, applicable only to public (non-authenticated) applications. I've read about several alternatives in the past, but his method of using a RESTful webservice in the APEX listener (nowadays: Oracle REST Data Services) seems the best I've encountered yet. If you're interested, you should check out his blog for a 3-part series on this subject.

The next speaker came all the way from Canada to the Netherlands to talk to us about APEX & HTML5: Martin Giffy D'Souza. After a brief introduction about how the web evolved, and speed skating (the Dutch sure made an impression at the other side of the ocean), Martin demonstrated a few of the new native HTML5 tags. These tags make the life of a developer much easier. Fewer plugins are needed and fewer compatibility issues should arise. Martin is a great presenter and his demonstrations were impressive, especially those of the canvas and video tags. He left out some of the technical details of his solutions, but you can check them out on the OGh site, once the demonstrations and Martin's demo app are uploaded. On Twitter, Martin asked if he could bring anything from Canada for someone and received two reactions: maple syrup and a moose. The two respondents were in the room and received their presents on stage. The moose was a bit difficult to get through customs, so he brought a nice cuddly moose instead.

Dan McGhan (from the USA) gave a session about "Using AngularJS in Oracle Application Express". He decided to explore a change in approach and use the MEAN stack (MongoDB, Express, Angular & Node). In this session he showed how to use AngularJS for his planning task application, gradually expanding it, so we learned step by step what to do. The main thought behind the process was to describe the state and let the UI "just" reflect that state, without having to do roundtrips to the database, as we would normally do with AJAX in APEX. It is all done at the client side. This session won the prize for most difficult session, but I liked it very much for the brave effort of presenting us something so refreshingly different.

Nick Buytaert and Christel van Aerschot talked about single-click deployment in APEX development. Even though I had already listened to Nick's ODTUG webinar about the same subject, I had to attend this session, since it is a subject close to my heart. Fortunately, the session differed from the webinar in quite a number of ways. Jenkins was replaced by the CI server Bamboo and Git took the place of Subversion. Both are somewhat new to me and I was especially impressed by the user interface of Bamboo. Nick and Christel told their story very well and the demos went fine, except for a brief "oops, I now installed it in production" moment.

I decided to skip the 6th slot to be able to chat with several people. I talked with Nick, Christel and David from Contribute about some of their challenges and experiences with single-click deployment and continuous delivery. After the conference a group of almost 20 people wrapped up with a dinner at the neighboring restaurant.

If you attended the event and are a member of OGh, you have received an email with a poll about the future of APEX World. Please make sure you fill it in, if you haven't already.

Monday, January 6, 2014

Tabibitosan

I answered a few SQL questions on Stack Overflow recently, which I could solve easily by using the Tabibitosan method. It's such an elegant and efficient technique that I think it's worth giving it some extra attention in this post. I'm certainly not the first to write about it: it was introduced to me by Japanese Oracle ACE Aketi Jyuuzou on OTN. He wrote a special forum post explaining his technique here, which also contains lots of examples. I have also included it in my SQL Masterclass. And Boneist and bluefrog have written about Tabibitosan in the past as well.

In its simplest form, the Tabibitosan technique lets you group consecutive rows easily, using just one level of analytic functions. And with a bit of imagination, you can also do some primitive SQL pattern matching avant la lettre. I'll show examples of both in this post.

The key to the technique is to map rows belonging to the same pattern to the same number, which can then be used for grouping or partitioning. To illustrate, let's start with a simple example to group consecutive numbers.

The table:

SQL> create table mytable (nr)
  2  as
  3  select 1 from dual union all
  4  select 2 from dual union all
  5  select 3 from dual union all
  6  select 6 from dual union all
  7  select 7 from dual union all
  8  select 11 from dual union all
  9  select 18 from dual union all
 10  select 19 from dual union all
 11  select 20 from dual union all
 12  select 21 from dual union all
 13  select 22 from dual union all
 14  select 25 from dual
 15  /

Table created.

The question: show all groups of integer values that are in sequence without gaps. For each group, show the starting number and the ending number. So the expected result set is this:

   MIN(NR)    MAX(NR)
---------- ----------
         1          3
         6          7
        11         11
        18         22
        25         25

Tabibitosan works by calculating an extra column (grp in my case) by subtracting row_number() from the value that defines the sequence (here: nr). If the interval of the values in sequence is 1, then subtracting row_number() will result in a constant value for the group members. This query shows you the core of the technique:

SQL> select nr
  2       , row_number() over (order by nr) rn
  3       , nr - row_number() over (order by nr) grp
  4    from mytable
  5  /

        NR         RN        GRP
---------- ---------- ----------
         1          1          0
         2          2          0
         3          3          0
         6          4          2
         7          5          2
        11          6          5
        18          7         11
        19          8         11
        20          9         11
        21         10         11
        22         11         11
        25         12         13

12 rows selected.

It doesn't matter what the grp value is exactly. What matters is that it's the same constant value for all group members. This then allows for easy partitioning or grouping:

SQL> with tabibitosan as
  2  ( select nr
  3         , nr - row_number() over (order by nr) grp
  4      from mytable
  5  )
  6  select min(nr)
  7       , max(nr)
  8    from tabibitosan
  9   group by grp
 10   order by grp
 11  /

   MIN(NR)    MAX(NR)
---------- ----------
         1          3
         6          7
        11         11
        18         22
        25         25

5 rows selected.

You can see another example of this simple form of Tabibitosan in this Stack Overflow question.

Tabibitosan only works if the difference between adjacent column values in the first operand of the minus operator (here: nr) equals 1. If it's not 1, you'll have to come up with an expression to make it 1. For example, let's see how a similar example works if we'd now like to group rows for consecutive months. Here's the setup:

SQL> create table mytable (startdate)
  2  as
  3  select date '2013-01-01' from dual union all
  4  select date '2013-02-01' from dual union all
  5  select date '2013-03-01' from dual union all
  6  select date '2013-06-01' from dual union all
  7  select date '2013-07-01' from dual union all
  8  select date '2013-10-01' from dual union all
  9  select date '2013-12-01' from dual union all
 10  select date '2014-01-01' from dual
 11  /

Table created.

And this is the expected result set:

MIN(STARTD MAX(STARTD
---------- ----------
01-01-2013 01-03-2013
01-06-2013 01-07-2013
01-10-2013 01-10-2013
01-12-2013 01-01-2014

We can use the same technique, provided the interval between two consecutive months can be mapped to 1, which is exactly what the MONTHS_BETWEEN function gives us. So let's use that function to calculate the grp value:

SQL> select startdate
  2       , months_between(startdate,date '2010-01-01') mb
  3       , months_between(startdate,date '2010-01-01')
  4         - row_number() over (order by startdate) grp
  5    from mytable
  6  /

STARTDATE          MB        GRP
---------- ---------- ----------
01-01-2013         36         35
01-02-2013         37         35
01-03-2013         38         35
01-06-2013         41         37
01-07-2013         42         37
01-10-2013         45         39
01-12-2013         47         40
01-01-2014         48         40

8 rows selected.

I used an arbitrary date here (January 1, 2010). Any date on the first day of the month would be good. With the grp value calculated, the Tabibitosan query is again easy:

SQL> with tabibitosan as
  2  ( select startdate
  3         , months_between(startdate,date '2010-01-01')
  4           - row_number() over (order by startdate) grp
  5      from mytable
  6  )
  7  select min(startdate)
  8       , max(startdate)
  9    from tabibitosan
 10   group by grp
 11   order by grp
 12  /

MIN(STARTD MAX(STARTD
---------- ----------
01-01-2013 01-03-2013
01-06-2013 01-07-2013
01-10-2013 01-10-2013
01-12-2013 01-01-2014

4 rows selected.

When the column that defines the order doesn't always increase by a constant amount, a row_number() analytic function as the first operand of the minus operator comes in handy. Since I've found this case to be quite common, I'm also showing it here. The example below is taken from this Stack Overflow question, where I'm comparing dates but excluding Saturdays and Sundays. The table and its contents look like this:

SQL> create table mytable (date_worked,country)
  2  as
  3  select to_date('1-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
  4  select to_date('4-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
  5  select to_date('5-Nov-13','dd-Mon-yy'), 'India' from dual union all
  6  select to_date('6-Nov-13','dd-Mon-yy'), 'India' from dual union all
  7  select to_date('7-Nov-13','dd-Mon-yy'), 'India' from dual union all
  8  select to_date('8-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
  9  select to_date('11-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
 10  select to_date('12-Nov-13','dd-Mon-yy'), 'India' from dual union all
 11  select to_date('13-Nov-13','dd-Mon-yy'), 'India' from dual union all
 12  select to_date('14-Nov-13','dd-Mon-yy'), 'India' from dual union all
 13  select to_date('15-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
 14  select to_date('18-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
 15  select to_date('19-Nov-13','dd-Mon-yy'), 'India' from dual union all
 16  select to_date('20-Nov-13','dd-Mon-yy'), 'India' from dual union all
 17  select to_date('21-Nov-13','dd-Mon-yy'), 'India' from dual union all
 18  select to_date('22-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
 19  select to_date('25-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all
 20  select to_date('26-Nov-13','dd-Mon-yy'), 'India' from dual union all
 21  select to_date('27-Nov-13','dd-Mon-yy'), 'India' from dual union all
 22  select to_date('28-Nov-13','dd-Mon-yy'), 'India' from dual union all
 23  select to_date('29-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual
 24  /

Table created.

The query needs to return the start date and end date of each stay in a country. So the expected result set is this:

COUNTRY        START_DATE END_DATE
-------------- ---------- ----------
United Kingdom 01-11-2013 04-11-2013
India          05-11-2013 07-11-2013
United Kingdom 08-11-2013 11-11-2013
India          12-11-2013 14-11-2013
United Kingdom 15-11-2013 18-11-2013
India          19-11-2013 21-11-2013
United Kingdom 22-11-2013 25-11-2013
India          26-11-2013 28-11-2013
United Kingdom 29-11-2013 29-11-2013

9 rows selected.

By subtracting a partitioned row_number() from a regular full-set row_number() we can calculate the grp value:

SQL> select date_worked
  2       , country
  3       , row_number() over (order by date_worked) x
  4       , row_number() over (partition by country order by date_worked) y
  5       , row_number() over (order by date_worked)
  6         - row_number() over (partition by country order by date_worked) grp
  7    from mytable
  8  /

DATE_WORKE COUNTRY                 X          Y        GRP
---------- -------------- ---------- ---------- ----------
01-11-2013 United Kingdom          1          1          0
04-11-2013 United Kingdom          2          2          0
05-11-2013 India                   3          1          2
06-11-2013 India                   4          2          2
07-11-2013 India                   5          3          2
08-11-2013 United Kingdom          6          3          3
11-11-2013 United Kingdom          7          4          3
12-11-2013 India                   8          4          4
13-11-2013 India                   9          5          4
14-11-2013 India                  10          6          4
15-11-2013 United Kingdom         11          5          6
18-11-2013 United Kingdom         12          6          6
19-11-2013 India                  13          7          6
20-11-2013 India                  14          8          6
21-11-2013 India                  15          9          6
22-11-2013 United Kingdom         16          7          9
25-11-2013 United Kingdom         17          8          9
26-11-2013 India                  18         10          8
27-11-2013 India                  19         11          8
28-11-2013 India                  20         12          8
29-11-2013 United Kingdom         21          9         12

21 rows selected.

Note that just using the grp value for the final grouping could lead to overlap between groups from different countries: in the output above, grp 6 covers both a United Kingdom stay (15 and 18 November) and an India stay (19 through 21 November). So we need the country in the final grouping as well, as the sketch below illustrates.
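A minimal sketch of what goes wrong when you group by grp alone (same mytable as above; this query is only an illustration, not part of the solution):

with tabibitosan as
( select date_worked
       , country
       , row_number() over (order by date_worked)
         - row_number() over (partition by country order by date_worked) grp
    from mytable
)
select min(date_worked) start_date
     , max(date_worked) end_date
     , count(*) cnt
  from tabibitosan
 group by grp
 order by start_date
/

On the data above this returns 8 groups instead of 9: the group for grp 6 runs from 15-11-2013 through 21-11-2013 and mixes the United Kingdom and India rows. Adding country to the grouping fixes this. The full query becomes: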

SQL> with tabibitosan as
  2  ( select date_worked
  3         , country
  4         , row_number() over (order by date_worked)
  5           - row_number() over (partition by country order by date_worked) grp
  6      from mytable
  7  )
  8  select country
  9       , min(date_worked) start_date
 10       , max(date_worked) end_date
 11    from tabibitosan
 12   group by country
 13       , grp
 14   order by start_date
 15  /

COUNTRY        START_DATE END_DATE
-------------- ---------- ----------
United Kingdom 01-11-2013 04-11-2013
India          05-11-2013 07-11-2013
United Kingdom 08-11-2013 11-11-2013
India          12-11-2013 14-11-2013
United Kingdom 15-11-2013 18-11-2013
India          19-11-2013 21-11-2013
United Kingdom 22-11-2013 25-11-2013
India          26-11-2013 28-11-2013
United Kingdom 29-11-2013 29-11-2013

9 rows selected.

Please note that these Tabibitosan queries contain only two levels and look pretty clean. Also, the performance is nice: just one full table scan and two sorts for the two analytic functions, followed by the unavoidable HASH GROUP BY and SORT ORDER BY for the grouping and the sorted display:
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |         |      1 |        |      9 |00:00:00.01 |       3 |       |       |          |
|   1 |  SORT ORDER BY         |         |      1 |      2 |      9 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   2 |   HASH GROUP BY        |         |      1 |      2 |      9 |00:00:00.01 |       3 |  1088K|  1088K| 1233K (0)|
|   3 |    VIEW                |         |      1 |     21 |     21 |00:00:00.01 |       3 |       |       |          |
|   4 |     WINDOW SORT        |         |      1 |     21 |     21 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      WINDOW SORT       |         |      1 |     21 |     21 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   6 |       TABLE ACCESS FULL| MYTABLE |      1 |     21 |     21 |00:00:00.01 |       3 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------

And here is another example from Stack Overflow using Tabibitosan with two row_numbers.

If it's becoming too hard to come up with a function that maps the differences between group members to the number 1, then you have an alternative which I'll call the max-on-case-row-number technique, for lack of a better name. I first saw this technique on AskTom. I'm going to repeat the same example as in the link to allow easy comparison.

The table:

SQL> create table mytable (time,quantity)
  2  as
  3  select trunc(sysdate) + to_dsinterval('0 12:22:01'), 100 from dual union all
  4  select trunc(sysdate) + to_dsinterval('0 12:22:03'), 200 from dual union all
  5  select trunc(sysdate) + to_dsinterval('0 12:22:04'), 300 from dual union all
  6  select trunc(sysdate) + to_dsinterval('0 12:22:06'), 200 from dual union all
  7  select trunc(sysdate) + to_dsinterval('0 12:22:45'), 100 from dual union all
  8  select trunc(sysdate) + to_dsinterval('0 12:22:46'), 200 from dual union all
  9  select trunc(sysdate) + to_dsinterval('0 12:23:12'), 100 from dual union all
 10  select trunc(sysdate) + to_dsinterval('0 12:23:12'), 200 from dual
 11  /

Table created.

The goal is to sum the quantities of rows whose times are within 3 seconds of each other. So, the expected result set is this:

MIN(TIME)           MAX(TIME)           SUM(QUANTITY)
------------------- ------------------- -------------
05-01-2014 12:22:01 05-01-2014 12:22:06           800
05-01-2014 12:22:45 05-01-2014 12:22:46           300
05-01-2014 12:23:12 05-01-2014 12:23:12           300

First (the case-row-number part), we'll compute a new column rn and assign a row_number to the rows that mark a new group: the first row, and every row that starts more than 3 seconds after the previous one. All other rows don't get an rn value:

SQL> select time
  2       , quantity
  3       , case
  4         when time - lag(time,1,date '0001-01-01') over (order by time) > 3/24/60/60 then
  5           row_number() over (order by time)
  6         end rn
  7    from mytable
  8  /

TIME                  QUANTITY         RN
------------------- ---------- ----------
05-01-2014 12:22:01        100          1
05-01-2014 12:22:03        200
05-01-2014 12:22:04        300
05-01-2014 12:22:06        200
05-01-2014 12:22:45        100          5
05-01-2014 12:22:46        200
05-01-2014 12:23:12        100          7
05-01-2014 12:23:12        200

8 rows selected.

Second (the max-on part), we'll use the analytic function MAX to compute the grp value, which gives the rows where "rn is null" the same value as the first row of their group, as you can see in this query:

SQL> with case_row_number as
  2  ( select time
  3         , quantity
  4         , case
  5           when time - lag(time,1,date '0001-01-01') over (order by time) > 3/24/60/60 then
  6             row_number() over (order by time)
  7           end rn
  8      from mytable
  9  )
 10  select time
 11       , quantity
 12       , max(rn) over (order by time) grp
 13    from case_row_number
 14  /

TIME                  QUANTITY        GRP
------------------- ---------- ----------
05-01-2014 12:22:01        100          1
05-01-2014 12:22:03        200          1
05-01-2014 12:22:04        300          1
05-01-2014 12:22:06        200          1
05-01-2014 12:22:45        100          5
05-01-2014 12:22:46        200          5
05-01-2014 12:23:12        100          7
05-01-2014 12:23:12        200          7

8 rows selected.

Now that we have a suitable grp value calculated, the last part, the grouping, is easy:

SQL> with case_row_number as
  2  ( select time
  3         , quantity
  4         , case
  5           when time - lag(time,1,date '0001-01-01') over (order by time) > 3/24/60/60 then
  6             row_number() over (order by time)
  7           end rn
  8      from mytable
  9  )
 10  , max_on_case_row_number as
 11  ( select time
 12         , quantity
 13         , max(rn) over (order by time) grp
 14      from case_row_number
 15  )
 16  select min(time)
 17       , max(time)
 18       , sum(quantity)
 19    from max_on_case_row_number
 20   group by grp
 21   order by min(time)
 22  /

MIN(TIME)           MAX(TIME)           SUM(QUANTITY)
------------------- ------------------- -------------
05-01-2014 12:22:01 05-01-2014 12:22:06           800
05-01-2014 12:22:45 05-01-2014 12:22:46           300
05-01-2014 12:23:12 05-01-2014 12:23:12           300

3 rows selected.

The max-on-case-row-number technique has performance characteristics similar to Tabibitosan's: only one full table scan and a few sort operations on top. To compare the two, you need to look at the sort operations. This is the plan for the query above:

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |      1 |        |      3 |00:00:00.01 |       3 |       |       |          |
|   1 |  SORT ORDER BY          |         |      1 |      8 |      3 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   2 |   HASH GROUP BY         |         |      1 |      8 |      3 |00:00:00.01 |       3 |  1034K|  1034K|  737K (0)|
|   3 |    VIEW                 |         |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
|   4 |     WINDOW BUFFER       |         |      1 |      8 |      8 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      VIEW               |         |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
|   6 |       WINDOW SORT       |         |      1 |      8 |      8 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   7 |        TABLE ACCESS FULL| MYTABLE |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
------------------------------------------------------------------------------------------------------------------------

There are three analytic functions in the query (lag, row_number and max), but all three order by time, so effectively there is only one WINDOW SORT operation. For calculating the max, Oracle discovers the intermediate result set is already sorted by time, so it does a WINDOW BUFFER instead of a WINDOW SORT. The outer HASH GROUP BY and SORT ORDER BY are because of the "group by grp order by min(time)".

As an alternative, the Tabibitosan solution would need to map the rows within 3 seconds of each other to consecutive numbers, and leave a larger gap for the other rows. This is the best I could come up with:

SQL> with x as
  2  ( select time
  3         , quantity
  4         , case
  5           when time - lag(time,1,date '0001-01-01') over (order by time,rowid) > 3/24/60/60 then
  6             'Y'
  7           end gap_indicator
  8      from mytable
  9  )
 10  , tabibitosan as
 11  ( select time
 12         , quantity
 13         , count(*) over (order by time,rowid)
 14           + count(gap_indicator) over (order by time,rowid)
 15           - row_number() over (order by time,rowid) grp
 16      from x
 17  )
 18  select min(time)
 19       , max(time)
 20       , sum(quantity)
 21    from tabibitosan
 22   group by grp
 23   order by min(time)
 24  /

MIN(TIME)           MAX(TIME)           SUM(QUANTITY)
------------------- ------------------- -------------
05-01-2014 12:22:01 05-01-2014 12:22:06           800
05-01-2014 12:22:45 05-01-2014 12:22:46           300
05-01-2014 12:23:12 05-01-2014 12:23:12           300

3 rows selected.

You can see the similarities with the max-on-case-row-number. Note that I had to include rowid in the ordering, because time alone is not unique. The plan is exactly the same:

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |      1 |        |      3 |00:00:00.01 |       3 |       |       |          |
|   1 |  SORT ORDER BY          |         |      1 |      8 |      3 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   2 |   HASH GROUP BY         |         |      1 |      8 |      3 |00:00:00.01 |       3 |  1034K|  1034K|  735K (0)|
|   3 |    VIEW                 |         |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
|   4 |     WINDOW BUFFER       |         |      1 |      8 |      8 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      VIEW               |         |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
|   6 |       WINDOW SORT       |         |      1 |      8 |      8 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   7 |        TABLE ACCESS FULL| MYTABLE |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
------------------------------------------------------------------------------------------------------------------------

If you look closely at the expression with the three analytic functions, you'll notice that "count(*) over (order by time,rowid)" equals "row_number() over (order by time,rowid)", so the two cancel each other out and can be eliminated, which leads to a simpler non-Tabibitosan query:

SQL> with x as
  2  ( select time
  3         , quantity
  4         , case
  5           when time - lag(time,1,date '0001-01-01') over (order by time,rowid) > 3/24/60/60 then
  6             'Y'
  7           end gap_indicator
  8      from mytable
  9  )
 10  , y as
 11  ( select time
 12         , quantity
 13         , count(gap_indicator) over (order by time,rowid) grp
 14      from x
 15  )
 16  select min(time)
 17       , max(time)
 18       , sum(quantity)
 19    from y
 20   group by grp
 21   order by min(time)
 22  /

MIN(TIME)           MAX(TIME)           SUM(QUANTITY)
------------------- ------------------- -------------
05-01-2014 12:22:01 05-01-2014 12:22:06           800
05-01-2014 12:22:45 05-01-2014 12:22:46           300
05-01-2014 12:23:12 05-01-2014 12:23:12           300

3 rows selected.

but with the same performance characteristics again:

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |      1 |        |      3 |00:00:00.01 |       3 |       |       |          |
|   1 |  SORT ORDER BY          |         |      1 |      8 |      3 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   2 |   HASH GROUP BY         |         |      1 |      8 |      3 |00:00:00.01 |       3 |  1034K|  1034K|  735K (0)|
|   3 |    VIEW                 |         |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
|   4 |     WINDOW BUFFER       |         |      1 |      8 |      8 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      VIEW               |         |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
|   6 |       WINDOW SORT       |         |      1 |      8 |      8 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   7 |        TABLE ACCESS FULL| MYTABLE |      1 |      8 |      8 |00:00:00.01 |       3 |       |       |          |
------------------------------------------------------------------------------------------------------------------------

If you can find an expression that maps the distance between members of the same group to a constant number, then grouping or partitioning your data using Tabibitosan will lead to a simple-looking two-level query. Since alternatives will be more complex and sometimes require an extra sort, it definitely deserves a place in your toolbox.
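As a recap, here is a minimal, self-contained sketch of that idea. The inline data set below is hypothetical (it is not one of the tables used earlier): the values within a group are 10 apart, so dividing by 10 maps the step to exactly 1.

with events as
( select 10 seq_val from dual union all
  select 20 from dual union all
  select 30 from dual union all
  select 60 from dual union all
  select 70 from dual union all
  select 110 from dual
)
, tabibitosan as
( select seq_val
       , seq_val / 10 - row_number() over (order by seq_val) grp  -- constant per group
    from events
)
select min(seq_val) first_in_group
     , max(seq_val) last_in_group
  from tabibitosan
 group by grp
 order by grp
/

This returns the groups 10-30, 60-70 and 110-110.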

Thursday, November 28, 2013

INSERT ALL

During a performance inspection of some E-Business Suite batch programs running on 11.2.0.3, I noticed that almost 99% of the consistent reads were missing from the tkprof file for a particular insert statement. It was a giant INSERT ALL statement where the accompanying select statement contained several subqueries in a "case when exists (query)" construct, and it was exactly those subqueries in the select clause that were missing from the tkprof file. I had seen this behaviour before, in version 9 databases with regular SELECT statements, but since version 10 that anomaly had been resolved.
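For illustration, think of a statement of roughly this shape; all names below are made up for the sketch and are not the actual E-Business Suite objects:

insert all
  when status_flag = 'Y' then into open_orders
  when status_flag = 'N' then into closed_orders
select s.order_id
     , case
         when exists ( select null
                         from order_lines l
                        where l.order_id = s.order_id
                          and l.status = 'OPEN' )
         then 'Y'
         else 'N'
       end status_flag
  from orders_staging s
/

The work done by the exists subquery in the select list is exactly the part that did not show up in the tkprof file.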

This is my reproduction of the scenario on 11.2.0.3: a small table containing 100 rows, a larger table containing 100,000 rows, and two new tables into which the rows are copied.

SQL> create table small_table (id,name)
  2  as
  3   select level
  4        , 'Name ' || to_char(level)
  5     from dual
  6  connect by level <= 100
  7  /
 
Table created.
 
SQL> create table large_table (id,name)
  2  as
  3   select level
  4        , 'Name ' || to_char(level)
  5     from dual
  6  connect by level <= 100000
  7  /
 
Table created.
 
SQL> create table new_table_1
  2  ( id         number
  3  , name       varchar2(11)
  4  , average_id number
  5  )
  6  /
 
Table created.
 
SQL> create table new_table_2
  2  ( id         number
  3  , name       varchar2(11)
  4  , average_id number
  5  )
  6  /
 
Table created.
 
SQL> begin
  2    dbms_stats.gather_table_stats(user,'small_table');
  3    dbms_stats.gather_table_stats(user,'large_table');
  4    dbms_stats.gather_table_stats(user,'new_table_1');
  5    dbms_stats.gather_table_stats(user,'new_table_2');
  6  end;
  7  /
 
PL/SQL procedure successfully completed.

For each of the 100 rows of the small_table, I'm selecting the average ID of 1000 rows in the large_table, as a scalar subquery.
The execution plan shows 100 starts of the scalar subquery on large_table. And we can see that it took 3.24 seconds.

SQL> set serveroutput off
SQL> alter session set statistics_level = all
  2  /
 
Session altered.
 
SQL> select s.id
  2       , s.name
  3       , ( select avg(l.id)
  4             from large_table l
  5            where ceil(l.id / 1000) = s.id
  6         ) avg_id
  7    from small_table s
  8  /
 
        ID NAME                                              AVG_ID
---------- --------------------------------------------- ----------
         1 Name 1                                             500,5
         2 Name 2                                            1500,5
         3 Name 3                                            2500,5
...[95 lines removed]...
        99 Name 99                                          98500,5
       100 Name 100                                         99500,5
 
100 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  2  /
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7167wkgunj23z, child number 0
-------------------------------------
select s.id      , s.name      , ( select avg(l.id)            from
large_table l           where ceil(l.id / 1000) = s.id        ) avg_id
from small_table s
 
Plan hash value: 1885542926
 
--------------------------------------------------------------------------------------------
| Id  | Operation          | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |      1 |        |    100 |00:00:00.01 |       9 |
|   1 |  SORT AGGREGATE    |             |    100 |      1 |    100 |00:00:03.28 |   58900 |
|*  2 |   TABLE ACCESS FULL| LARGE_TABLE |    100 |   1000 |    100K|00:00:03.24 |   58900 |
|   3 |  TABLE ACCESS FULL | SMALL_TABLE |      1 |    100 |    100 |00:00:00.01 |       9 |
--------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter(CEIL("L"."ID"/1000)=:B1)
 
 
22 rows selected.

A full table scan against small_table and for each row of small_table (100) a full table scan against large_table. Now, let's see what the plan looks like when the same query is wrapped up in an INSERT SELECT statement.

SQL> insert into new_table_1
  2  select s.id
  3       , s.name
  4       , ( select avg(l.id)
  5             from large_table l
  6            where ceil(l.id / 1000) = s.id
  7         ) avg_id
  8    from small_table s
  9  /
 
100 rows created.
 
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  2  /
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  dfg734y8ym7vk, child number 0
-------------------------------------
insert into new_table_1 select s.id      , s.name      , ( select
avg(l.id)            from large_table l           where ceil(l.id /
1000) = s.id        ) avg_id   from small_table s
 
Plan hash value: 1885542926
 
--------------------------------------------------------------------------------------------------
| Id  | Operation                | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |             |      1 |        |      0 |00:00:03.25 |   58942 |
|   1 |  LOAD TABLE CONVENTIONAL |             |      1 |        |      0 |00:00:03.25 |   58942 |
|   2 |   SORT AGGREGATE         |             |    100 |      1 |    100 |00:00:03.25 |   58900 |
|*  3 |    TABLE ACCESS FULL     | LARGE_TABLE |    100 |   1000 |    100K|00:00:03.21 |   58900 |
|   4 |   TABLE ACCESS FULL      | SMALL_TABLE |      1 |    100 |    100 |00:00:00.01 |       2 |
--------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - filter(CEIL("L"."ID"/1000)=:B1)
 
 
23 rows selected.

An additional LOAD TABLE CONVENTIONAL step, and the selection against large_table still visible. Next, I'm going to use the same query with an INSERT ALL statement.

SQL> rollback
  2  /
 
Rollback complete.
 
SQL> insert all
  2    when mod(id,2) = 0 then into new_table_1
  3    when mod(id,2) = 1 then into new_table_2
  4  select s.id
  5       , s.name
  6       , ( select avg(l.id)
  7             from large_table l
  8            where ceil(l.id / 1000) = s.id
  9         ) avg_id
10    from small_table s
11  /
 
100 rows created.
 
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  2  /
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  2m5kj5cvf1jx1, child number 0
-------------------------------------
insert all   when mod(id,2) = 0 then into new_table_1   when mod(id,2)
= 1 then into new_table_2 select s.id      , s.name      , ( select
avg(l.id)            from large_table l           where ceil(l.id /
1000) = s.id        ) avg_id   from small_table s
 
Plan hash value: 1492625026
 
---------------------------------------------------------------------------------------------
| Id  | Operation           | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT    |             |      1 |        |      0 |00:00:03.27 |   58947 |
|   1 |  MULTI-TABLE INSERT |             |      1 |        |      0 |00:00:03.27 |   58947 |
|   2 |   TABLE ACCESS FULL | SMALL_TABLE |      1 |    100 |    100 |00:00:00.01 |       2 |
|   3 |   INTO              | NEW_TABLE_1 |      0 |        |      0 |00:00:00.01 |       0 |
|   4 |   INTO              | NEW_TABLE_2 |      0 |        |      0 |00:00:00.01 |       0 |
---------------------------------------------------------------------------------------------
 
 
19 rows selected.

And look: the selections against large_table have disappeared, and it looks like the insertion itself is what takes 3.27 seconds. But the MULTI-TABLE INSERT step is just accumulating the time and buffers from the steps below it, both visible and invisible.
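One way to confirm that the work is merely hidden, and not gone, is to compare session statistics before and after running the statement. A minimal sketch (the statistic names are standard; the approach is mine, not from the original inspection):

select n.name
     , s.value
  from v$mystat s
  join v$statname n on n.statistic# = s.statistic#
 where n.name in ('consistent gets', 'session logical reads');

-- Run this query, execute the INSERT ALL, run it again and subtract:
-- the difference should include the roughly 58,900 buffer gets that the
-- plan statistics no longer attribute to the scans of LARGE_TABLE.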

I repeated this test on 12.1.0.1. The optimizer in this version has improved, as can be seen by this plan for the exact same statement:

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT      |             |      1 |        |      0 |00:00:00.09 |     304 |       |       |          |
|   1 |  MULTI-TABLE INSERT   |             |      1 |        |      0 |00:00:00.09 |     304 |       |       |          |
|*  2 |   HASH JOIN OUTER     |             |      1 |    100K|    100 |00:00:00.09 |     294 |  1696K|  1696K| 1491K (0)|
|   3 |    TABLE ACCESS FULL  | SMALL_TABLE |      1 |    100 |    100 |00:00:00.01 |       3 |       |       |          |
|   4 |    VIEW               | VW_SSQ_1    |      1 |    100K|    100 |00:00:00.09 |     291 |       |       |          |
|   5 |     SORT GROUP BY     |             |      1 |    100K|    100 |00:00:00.09 |     291 | 73728 | 73728 |          |
|   6 |      TABLE ACCESS FULL| LARGE_TABLE |      1 |    100K|    100K|00:00:00.03 |     291 |       |       |          |
|   7 |   INTO                | NEW_TABLE_1 |      0 |        |      0 |00:00:00.01 |       0 |       |       |          |
|   8 |   INTO                | NEW_TABLE_2 |      0 |        |      0 |00:00:00.01 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------

A much better plan: no more 100 starts of the query against large_table, but one nice HASH JOIN OUTER. However, the goal here is not to have the most efficient execution plan, but a plan equal to the 11.2.0.3 plan, which is why I added the optimizer hint NO_QUERY_TRANSFORMATION. And this is what the INSERT ALL looks like in 12.1.0.1:

SQL> insert all
  2    when mod(id,2) = 0 then into new_table_1
  3    when mod(id,2) = 1 then into new_table_2
  4  select /*+ no_query_transformation */ s.id
  5       , s.name
  6       , ( select avg(l.id)
  7             from large_table l
  8            where ceil(l.id / 1000) = s.id
  9         ) avg_id
 10    from small_table s
 11  /

100 rows created.

SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  2  /

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  atb16ym82sygz, child number 0
-------------------------------------
insert all   when mod(id,2) = 0 then into new_table_1   when mod(id,2)
= 1 then into new_table_2 select /*+ no_query_transformation */ s.id
  , s.name      , ( select avg(l.id)            from large_table l
     where ceil(l.id / 1000) = s.id        ) avg_id   from small_table s

Plan hash value: 220198846

---------------------------------------------------------------------------------------------
| Id  | Operation           | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT    |             |      1 |        |      0 |00:00:02.03 |   29113 |
|   1 |  MULTI-TABLE INSERT |             |      1 |        |      0 |00:00:02.03 |   29113 |
|   2 |   SORT AGGREGATE    |             |    100 |      1 |    100 |00:00:02.03 |   29100 |
|*  3 |    TABLE ACCESS FULL| LARGE_TABLE |    100 |   1000 |    100K|00:00:02.00 |   29100 |
|   4 |   VIEW              |             |      1 |    100 |    100 |00:00:02.03 |   29103 |
|   5 |    TABLE ACCESS FULL| SMALL_TABLE |      1 |    100 |    100 |00:00:00.01 |       3 |
|   6 |   INTO              | NEW_TABLE_1 |      0 |        |      0 |00:00:00.01 |       0 |
|   7 |   INTO              | NEW_TABLE_2 |      0 |        |      0 |00:00:00.01 |       0 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(CEIL("L"."ID"/1000)=:B1)

27 rows selected.

So in 12.1.0.1 the scalar subqueries against large_table are visible again. Problem solved, unless you're not on 12c yet.
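If you are still on 11g, you can get both the visibility and the better plan by rewriting the scalar subquery by hand, along the lines of the transformation 12c applies automatically. A sketch under that assumption (this is my rewrite, not part of the original batch program):

insert all
  when mod(id,2) = 0 then into new_table_1
  when mod(id,2) = 1 then into new_table_2
select s.id
     , s.name
     , l.avg_id
  from small_table s
  left outer join
       ( select ceil(id / 1000) grp_id
              , avg(id) avg_id
           from large_table
          group by ceil(id / 1000)
       ) l
    on l.grp_id = s.id
/

This way large_table is scanned and aggregated once and then joined to small_table, similar to the HASH JOIN OUTER plan shown above, and the work against large_table shows up in the plan statistics.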