Valentin Nikotin has posted some valuable examples on the PL/SQL Challenge Blog regarding the 11g enhancements for column-level dependency tracking being used to invalidate stored program units.
In his first example, he uses a function-based index to create a hidden column on a table, plus a stored procedure referring to that hidden column. When the index is dropped, the hidden column is dropped too, but the procedure is not invalidated. Instead it errors upon execution, and is only marked invalid when an attempt is made to recompile it. Generally I wouldn't expect a stored procedure (or anything else) to refer to a hidden column with a system-generated name, and the failure is pretty much what you'd expect anyway, so I don't class this as an important issue.
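A minimal sketch of that first scenario. The hidden column's name is system-generated, so "SYS_NC00002$" below is only illustrative and will differ between databases; the object names are my own, not Valentin's:

```sql
create table fbi_test (val varchar2(10));
create index fbi_idx on fbi_test (upper(val));  -- creates a hidden virtual column

-- Find the system-generated name of the hidden column:
select column_name from user_tab_cols
where  table_name = 'FBI_TEST' and hidden_column = 'YES';

-- A procedure referencing the hidden column directly (name is illustrative):
create or replace procedure fbi_test_proc is
  v_val varchar2(10);
begin
  select "SYS_NC00002$" into v_val from fbi_test where rownum = 1;
end;
/

drop index fbi_idx;  -- the hidden column is dropped along with the index
-- fbi_test_proc still shows as VALID, but fails at runtime (ORA-00904)
-- and only becomes INVALID when a recompile is attempted.
```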
The next two examples are nastier though.
In the second example, he has a procedure called "MU" which contains a query that uses "MU" as a table alias. He then uses qualified naming so that a variable in the query is represented by "MU.Z". When a column "Z" is added to the table used in the query, the procedure should be invalidated, since "MU.Z" should now refer to the table column (through the "MU" alias) rather than the procedure variable. This invalidation doesn't occur, so the program behaviour only changes when the procedure is subsequently compiled (either manually or by some later change that does cause an invalidation).
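A sketch of that second scenario, with table and variable names of my own choosing (the essential point is that the procedure name and the table alias are both "MU"):

```sql
create table mu_test (id number);
insert into mu_test values (1);

create or replace procedure mu is
  z     number := 1;
  v_cnt number;
begin
  -- "MU" is both the table alias and, via qualified naming, the procedure
  -- name, so while the table has no Z column, MU.Z resolves to the local
  -- variable Z.
  select count(*) into v_cnt from mu_test mu where mu.id = mu.z;
  dbms_output.put_line('Count is:'||v_cnt);
end;
/

alter table mu_test add z number;
-- MU.Z should now resolve to the new table column through the alias,
-- but MU is not invalidated; the behaviour only changes on recompilation.
```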
In the third example, a PL/SQL record variable is used that has the same name as a table. The query in the procedure initially refers to a field in the record. When a column of the same name is added to the table, the procedure should need recompilation so that the table column overrides the PL/SQL record variable. However, again the invalidation isn't automatically performed, and the procedure's behaviour would change unaccountably at some later time.
The effects are similar either way, so I'll just include a script for the latter (as record variables are more widely used than qualified variable names).
Setup:
create table inval_test (id number, val varchar2(10));
insert into inval_test values (1,'Blue');
create or replace procedure inval_test_proc is
  type t_rec is record
    (insert_id number, insert_val varchar2(10));
  inval_test t_rec;
  v_cnt number;
begin
  inval_test.insert_id  := 1;
  inval_test.insert_val := 'Blue';
  select count(*) into v_cnt
  from   inval_test
  where  id  = inval_test.insert_id
  and    val = inval_test.insert_val;
  dbms_output.put_line('Count is:'||v_cnt);
end;
/
Demonstration:
set serveroutput on

begin inval_test_proc; end;
/
-- Count is:1

alter table inval_test add insert_val varchar2(10);

begin inval_test_proc; end;
/
-- Still Count is:1 - the new column did NOT invalidate the procedure

alter procedure inval_test_proc compile;

begin inval_test_proc; end;
/
-- Now Count is:0 - inval_test.insert_val resolves to the (null) table column
The behaviour changes when the procedure is manually compiled, rather than when the column is added to the table. That compilation may happen weeks or months after the table change, making it difficult to determine the cause of the new bug.
The fundamental problem is that SQL embedded in PL/SQL doesn't syntactically differentiate between references to schema objects (table columns, functions, etc.) and references to PL/SQL variables, and that the globally defined schema object takes precedence over the locally scoped variable.
The only reasonable workaround is to use a naming convention that distinguishes PL/SQL variables from schema objects.
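For instance, in the script above, simply renaming the record variable with a prefix ("r_" here, purely as a convention of my choosing) removes any possibility of the qualified reference being captured by a current or future table column:

```sql
create or replace procedure inval_test_proc is
  type t_rec is record
    (insert_id number, insert_val varchar2(10));
  r_inval_test t_rec;   -- "r_" prefix: cannot collide with the table name
  v_cnt number;
begin
  r_inval_test.insert_id  := 1;
  r_inval_test.insert_val := 'Blue';
  select count(*) into v_cnt
  from   inval_test
  where  id  = r_inval_test.insert_id
  and    val = r_inval_test.insert_val;
  dbms_output.put_line('Count is:'||v_cnt);
end;
/
-- Adding an insert_val column to inval_test no longer changes the meaning
-- of the query, since r_inval_test is not a table name or alias in scope.
```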
Just to blow my own trumpet a bit (and it's my blog, so I'm allowed), I pointed out potential pitfalls in this functionality when it was announced. For my money, the problem is too big to fix without changing some fundamentals. With editions in the frame for high-availability upgrades, I'd prefer at least an option to choose between 'optimistic' and 'pessimistic' invalidation models.
1 comment:
Hi Gary, interesting. Especially since recently I had a couple of situations where nothing was invalid, yet clients complained about ORA-06508 errors (PL/SQL: could not find program unit being called). And after recompilation of these apparently already valid packages, the problem was solved. At the time I also got the feeling that the refined invalidation in 11g has some flaws, but so far I cannot reproduce it...