Here I create a test table with the usual CREATED_ON column populated with SYSDATE. I have another column, CREATE_COMMIT_ON, which is left null by the insert; a batch process runs in the background to pick up the null entries and set the column to SYSDATE. Using DBMS_JOB, the job won't capture the exact commit time, but it will normally be close enough.
An index is put on the CREATE_COMMIT_ON column, using DECODE so that only the rows where the column is still null have an index entry. This allows the records requiring update to be identified quickly (i.e. without a full scan of the table). [Not that full scans are evil, but if a large proportion of the table needed this update at one time, this probably isn't a practical technique for you unless the table is very small.]
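As a sketch of why this works (using the table and index defined below): Oracle does not store b-tree index entries whose indexed values are all NULL, so only the pending rows ever appear in the index, and the query's WHERE clause must repeat the index expression exactly to use it.

```sql
-- decode(create_commit_on, null, 1, null) returns 1 when the column is NULL
-- and NULL otherwise; entirely-NULL keys are not stored in a b-tree index,
-- so the index holds only the rows still awaiting a commit timestamp.
select id
from   test
where  decode(create_commit_on, null, 1, null) = 1;
```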
To demonstrate, we create the table and a procedure to insert into it with a pause of a few minutes between insert and commit. We'll run this a few times.
Another procedure is created to set the commit time, and is run as a frequently repeating background job (the requested interval below works out to roughly every 24 seconds, though DBMS_JOB does not honour intervals exactly). For this you will need job processing enabled, and the demo procedures also require access to DBMS_LOCK.
In my example output below (the random waits mean yours would be slightly different) we can see that item 3 was created prior to item 4 but waited a minute less to commit, and so was actually committed earlier. Similarly with items 8 and 9/10.

drop table test;
create table test
(id number, col_a varchar2(100),
created_on date, create_commit_on date);
create index pend_ix on test (decode(create_commit_on,null,1,null));
drop sequence test_seq;
create sequence test_seq;
drop procedure set_commit_time;
create procedure set_commit_time is
begin
  update test set create_commit_on = sysdate
  where decode(create_commit_on,null,1,null) = 1;
end;
/
drop procedure add_test;
create procedure add_test is
  v_pause number;
begin
  v_pause := round(dbms_random.value(120,300));
  insert into test
  (id, col_a, created_on)
  values (test_seq.nextval, 'Waiting '||round(v_pause/60,1)||' minutes', sysdate);
  dbms_lock.sleep(v_pause);
end;
/
set serveroutput on size 100000

declare
  v_num number;
begin
  dbms_job.submit(v_num, 'begin set_commit_time; commit; end;',
    sysdate, 'sysdate + (1/3600)');
  for i in 1..10 loop
    dbms_job.submit(v_num, 'declare n number; begin add_test; commit; end;');
  end loop;
  commit;
end;
/
column col_a format a30
alter session set nls_date_format ='DD/Mon/YY HH24:MI:SS';
Select * from test order by 1;
        ID COL_A                          CREATED_ON         CREATE_COMMIT_ON
---------- ------------------------------ ------------------ ------------------
         1 Waiting 3.2 minutes            05/Oct/05 11:50:58 05/Oct/05 11:54:11
         2 Waiting 2.3 minutes            05/Oct/05 11:51:08 05/Oct/05 11:54:11
         3 Waiting 4.6 minutes            05/Oct/05 11:51:18 05/Oct/05 11:56:22
         4 Waiting 3.4 minutes            05/Oct/05 11:51:29 05/Oct/05 11:55:05
         5 Waiting 3 minutes              05/Oct/05 11:51:39 05/Oct/05 11:55:05
         6 Waiting 3.6 minutes            05/Oct/05 11:51:50 05/Oct/05 11:55:56
         7 Waiting 4 minutes              05/Oct/05 11:52:00 05/Oct/05 11:56:22
         8 Waiting 2.6 minutes            05/Oct/05 11:52:10 05/Oct/05 11:55:05
         9 Waiting 4.4 minutes            05/Oct/05 11:52:21 05/Oct/05 11:57:14
        10 Waiting 2.5 minutes            05/Oct/05 11:53:26 05/Oct/05 11:56:22
The same technique could be applied to updates, as long as the updates all set the CREATE_COMMIT_ON column back to null (possibly through a trigger).
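A minimal sketch of such a trigger, assuming the demo table above (the trigger name is illustrative):

```sql
-- Clear the commit-time column on every update so the background job
-- re-stamps it when the updated row becomes visible.
create or replace trigger test_set_pending
before update on test
for each row
begin
  :new.create_commit_on := null;
end;
/
```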
Normally, it is difficult to pick out records added to or updated in a table since a particular time, because the delay between change and commit means that changes do not necessarily become visible in the order they are made. Even allowing, for example, a five minute window may miss long running transactions, and requires processing to cater for duplicates.
By using this technique, we can instead use the time the record became visible. This is not a universal solution. The update would be onerous if a large number of rows had been changed at one time, and applying the technique to lots of tables would also have an impact, with a risk of deadlock unless each update is committed separately.
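A sketch of the consuming side (the :last_sweep bind variable is illustrative; it would hold the high-water mark recorded on the previous run):

```sql
-- Pick up rows that became visible since the last sweep. Because
-- create_commit_on reflects visibility rather than insert time, a row from
-- a long-running transaction cannot slip in behind the high-water mark.
select id, col_a, created_on, create_commit_on
from   test
where  create_commit_on > :last_sweep;
```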