Wednesday, May 19, 2010

It's probably quantum

Today I'm talking uncertainty. Not the "I don't know" uncertainty, but the "if I look at it, I get a different answer than if I don't look at it" type. And that becomes important when trying to put together automated tests, which work by looking at things.

Here's a simple table and a procedure to insert into it.

create table test_1 (val number(2,0));   -- two digits, no decimals: -99 to 99

create or replace procedure proc_ins (p_val in number) is
begin
  insert into test_1 (val) values (p_val);
  -- for p_val = 5 this tries to insert 500, which won't fit in NUMBER(2,0)
  insert into test_1 (val) values (p_val * 100);
end proc_ins;
/

What is the state of test_1 after I execute proc_ins with a value of 5?
(Assuming no other session activity, hidden triggers or magic doves.)

The answer is, it depends.

From SQL*Plus

call proc_ins(5);
select * from test_1;

Empty.

The first insert into test_1 (which would have succeeded on its own) was rolled back. This is called a statement-level rollback. The documentation states that "An anonymous block is a SQL statement." and that an SQL statement which causes an error during execution triggers a statement-level rollback, where "The effect of the rollback is as if the statement had never been run."
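
As a side note, this isn't special to PL/SQL calls. Here's a sketch of my own (not part of the SQL*Plus session above) showing the same statement-level rollback with a plain multi-row INSERT: when one of the generated rows overflows the column, the rows already inserted by that same statement disappear with it.

truncate table test_1;

insert into test_1 (val)
select level * 30 from dual connect by level <= 5;
-- level 4 gives 120, which overflows NUMBER(2,0) and raises ORA-01438,
-- so the rows for 30, 60 and 90 are rolled back along with the statement

select count(*) from test_1;
-- 0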



An automated test

So what happens if I want to create an automated test to confirm the results?

declare
  e_too_big exception;
  PRAGMA EXCEPTION_INIT(e_too_big, -1438);   -- give ORA-01438 a name we can catch
begin
  proc_ins(1);
exception
  when e_too_big then
    -- show whatever is left in the table after the failure
    for c_rec in (select val from test_1) loop
      dbms_output.put_line(c_rec.val);
    end loop;
end;
/

Because the statement (the anonymous block) now includes the exception handler, the statement itself doesn't fail, and the first insert is still there. By testing for the exception, I seem to change the behaviour. I say "seem" because the statement-level rollback is not explicitly part of the code I am testing, but an artifact of the way the statement is called.

If proc_ins were part of a larger PL/SQL program, the tested behaviour would represent the actual run-time behaviour. But if it is to be called as a top-level procedure, for example directly from a client application, the tested behaviour would be inaccurate.
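
To illustrate the first case, here's a hypothetical wrapper of my own (not part of the original example): if proc_ins is called from another procedure that handles the exception itself, the first insert survives, exactly as the anonymous-block test above suggests.

create or replace procedure proc_caller (p_val in number) is
  e_too_big exception;
  PRAGMA EXCEPTION_INIT(e_too_big, -1438);
begin
  proc_ins(p_val);
exception
  when e_too_big then
    null;  -- swallow the error; the row for p_val is still in test_1 (uncommitted)
end proc_caller;
/

truncate table test_1;
call proc_caller(4);
select * from test_1;
-- one row, with the value 4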


Changing the results

I can change the unit test so that it represents an 'independent statement' rather than an 'embedded call' by using dynamic SQL. A dynamic SQL call sets up a new level of SQL statement scope.

truncate table test_1;

declare
  e_too_big exception;
  PRAGMA EXCEPTION_INIT(e_too_big, -1438);
begin
  execute immediate 'call proc_ins(2)';
exception
  when e_too_big then
    for c_rec in (select val from test_1) loop
      dbms_output.put_line(c_rec.val);
    end loop;
end;
/

Within the scope of the SQL statement (the CALL) in which the exception occurs, there is no exception handler, so that statement is rolled back as if it had never happened. By the time control returns to the outer scope, where the exception is handled, the statement-level rollback has already taken place.

It doesn't make a difference whether we use the CALL statement (which is SQL) or a BEGIN ... END; PL/SQL block.

truncate table test_1;
declare
  e_too_big exception;
  PRAGMA EXCEPTION_INIT(e_too_big, -1438);
begin
  execute immediate 'begin proc_ins(3); end;';
exception
  when e_too_big then
    for c_rec in (select val from test_1) loop
      dbms_output.put_line(c_rec.val);
    end loop;
end;
/
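
Wrapped up as a simple pass/fail check, an automated test along these lines might look something like the sketch below (my own illustration of the idea, not a recommendation for any particular testing framework).

truncate table test_1;

declare
  e_too_big exception;
  PRAGMA EXCEPTION_INIT(e_too_big, -1438);
  l_cnt pls_integer;
begin
  execute immediate 'begin proc_ins(4); end;';
  dbms_output.put_line('FAIL: no exception raised');
exception
  when e_too_big then
    select count(*) into l_cnt from test_1;
    if l_cnt = 0 then
      dbms_output.put_line('PASS: statement-level rollback removed both rows');
    else
      dbms_output.put_line('FAIL: ' || l_cnt || ' row(s) survived');
    end if;
end;
/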

I will own up and say this revisits a post from a few months back, but I felt the automated testing aspect of it needed some emphasis. I was also prompted by the PL/SQL Challenge of May 13th.

The answer states that "Exceptions do not roll back DML statements unless the exception goes unhandled to the host environment, in which case most such environments (like SQL*Plus) will automatically perform a rollback in your session."

I didn't like the statement about host environments, as it is the database engine which performs the statement-level rollback whenever an SQL call raises an exception. The host can do whatever it likes (though it is generally safer to roll back when you get an exception you don't expect).
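
One way to see the distinction (my own sketch, and it assumes default SQL*Plus settings with no WHENEVER SQLERROR directive in effect): do some other uncommitted work in the same transaction, then let proc_ins fail at the top level. Only the failed CALL is undone by the engine; the earlier insert is still sitting there.

truncate table test_1;

insert into test_1 (val) values (7);
call proc_ins(5);
-- ORA-01438 is raised and the CALL statement is rolled back

select * from test_1;
-- the row with 7 is still there, uncommitted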
