Monday, October 24, 2005

Mangling my PC partitions

Saturday was a BAD day. I had a chunk of unused space on my hard drive which I wanted to be able to use, and a magazine with Partition Manager 2005 as a freebie on its disk. My baby daughter had dozed off in her cot and my wife had taken my son to the library. I decided to go for it. Now, given that this was a small window of opportunity, there wasn't time for a full backup. Somewhere, there is a mischievous spirit who jumps at opportunities like this, so with half an hour of partition mangling done and five minutes remaining on the clock, the power went off...

Once the power was back, the PC needed to be booted from the Partition Manager recovery floppy. My PC simply wasn't interested, possibly because my floppies are at least five years old and have spent a lot of time in the laundry room. Given that a couple of weeks ago I had dumped Ubuntu Linux as a dual-boot option and replaced it with Fedora 4, the Linux area was all fresh and, to my mind, expendable. So I re-installed Fedora in the hope that I'd at least be able to boot into it, see the Windows partitions and copy off a few of the more desirable files. Fedora went in fine, and I could see the main XP partition, but not the one with all my data (because, of course, that was the one I'd been trying to merge with Partition Manager).

So I did a recovery install of Windows XP, which sort of worked. When I originally installed XP I had an old hard drive connected on C:, plus a DVD-writer on D:, a CD-writer on E: and a USB Flash drive on F:. So the main XP OS partition on the new hard drive named itself G:. After install, I created a new partition for all my data, which got called F: since in the meantime I'd removed the flash drive.

When I did the recovery install, I didn't have the USB drive in. The XP OS partition renamed itself from G: to F:. I fired up Partition Manager and it managed to recover the partition I had been mangling, but it was now G: instead of F:. So half my PC thinks all the stuff on my G: drive is on F: and vice versa, while the other half is quite content.

I SHOULD bite the bullet, copy out all the data files, clear off the entire drive and start everything from scratch. But my time comes in the small chunks that toddler and baby permit, so I know I'll end up checking everything on the PC to see what breaks and why, and trying to fix it bit by bit.

But as a consolation, I've bought a new 1GB flash drive for AU$70 (I think about US$50). 1GB. That old C: drive in my PC is about six years old and is just 4GB, and I'd spent those six years filling it up. Now I'm just carrying around 1GB of storage. Amazing.

Friday, October 21, 2005

J2EE and ADF for Forms Developers workshop in Sydney

Yesterday I attended the "J2EE and ADF for Forms and Designer Developers workshop" by Sue Harper (http://www.groundside.com/blog/content/SueHarper/) in Sydney (or at least the talk portion, though not the hands-on component).

Firstly, I'll declare my bias. I like Forms Developer (mostly). I like PL/SQL. I hate Reports Developer, but that's neither here nor there. However, there seems to be a push away from Forms towards some form of Java-derived front end. Forms doesn't get the glamour press; the "I had all these systems and solved my integration problems by running Java/XML/whatever over the lot" type press. That probably makes it harder to sell, and I don't think Oracle are really trying to sell it. I think they should, on the basis that if your application is developed by your database (PL/SQL) developers, you'll get an application that runs better against the database. And you'll tie yourself into that database... sorry, make that 'leverage your investment'.

That's what I don't like about the Java push. IT shops will look at it as "we need Oracle developers for the database and Java developers for the front end". There aren't enough good Java/Oracle developers to cover both bases. There probably aren't even enough competent Oracle developers, and they are still 'expensive'. But Java developers are being churned out of colleges, universities and so on, and they produce something pretty.

At a recent project, the PM wanted to show the end users the progress being made by development. It wasn't a matter of waving around a big bunch of code printouts or telling them how many batch routines had been written. They were shown the screens, and just the screens. When push comes to shove, pretty wins. And when the Java developers are saying the database is just a data dump and that everything can be written in Java, the database developers are going to get the pointy end of the stick.

And that's my beef with JDeveloper. I don't want to be told that when I need a tricky bit of record validation I should go and find a Java developer to write it, gradually picking up enough so that I don't have to ask as often. I've got a lot of PL/SQL experience, and some in C, Perl, COBOL and a couple of other languages. What I want is a PL/SQL-to-Java Rosetta Stone: this is how you translate PL/SQL collections to Java, this is how you call database procedures or SQL through Java, and so forth. That would give PL/SQL developers the knowledge to write Java.

Because my experience of Forms is that, while some Forms are simple base-table selects/inserts/updates, the ones that aren't simple are the ones you spend the most time working on, and they contain lots of PL/SQL. And honestly, if I can't develop that in JDeveloper when I can in Forms, then I can't sell myself as a practical JDeveloper resource.

I wasn't able to stay around for the actual hands-on portion. That wasn't a great disappointment to me, as it isn't the way I'm comfortable learning. I distrust a rehearsed 'press here, drag this and it's done' walkthrough. It reminds me of my first Oracle training for Forms 4.5 (or maybe 5.0), developing the usual DEPT/EMP style Forms. I want to try something tricky, with validations, defaults that change depending on this or that, and so on. So I'll try JDeveloper on something, time permitting.

This has probably come across as very negative, and is probably also unfair. I don't see that the J2EE approach is technically better than the Forms approach for the environments I've worked in or can see myself working in. But I feel it may well get used there, and I can't afford to be shut out. I hope I'm wrong and can stay with Forms. [My last experience of a Developer Day was about four years back. It was WAP this, WAP that, with them telling us that we'd be developing everything to run on a mobile phone. I heard more WAP on that day than I'd heard before or in the four years since. Though I'll admit that today, I can at least see that sometime in the next five to ten years I will probably work on something that can be used on some type of mobile device.]

We'll see in six months or a year's time whether I've changed my tune about J2EE and JDeveloper. Convincing your friends about an issue is easy enough; they'll give you the benefit of the doubt. Convincing the enemy is always more of a challenge.

PS.

Reading back on this, I haven't done justice to the presentation's emphasis on using the appropriate technology, and on the potential for integrating Forms and J2EE. That's an area I need to look into more.

Wednesday, October 05, 2005

Recording the commit time on a record

It is often useful to record the time at which a record was inserted or updated, usually achieved with an appropriate column with a SYSDATE default or via triggers. What is harder to capture is the time the record was committed. But rephrasing the question as "the time the record became visible to the rest of the database/application" suggests a solution: use a background job to identify and stamp records which have just become 'visible'.

Here I create a test table with the normal 'created_on' column set to sysdate on insert. There is another column, 'create_commit_on', which is left null by the insert; a batch process runs in the background to pick up the null entries and set the column to sysdate. Strictly speaking, using DBMS_JOB the job won't pick up the exact commit time, but it will normally be close enough.

An index is put on the create_commit_on column, using a DECODE so that ONLY the null values appear in the index. This allows the records requiring update to be identified quickly (i.e. without a full scan of the table). [Not that full scans are evil, but if a large proportion of the table would require this update at any one time, this probably isn't a practical technique for you unless it is a very small table.]


To demonstrate, we create the table and a procedure to insert into it with a pause of a few minutes between insert and commit. We'll run this a few times.
Another procedure is created to set the commit time, and is run as a frequently repeating background job (here every 1/3600 of a day, about 24 seconds). For this you will need job queue processing enabled (JOB_QUEUE_PROCESSES greater than zero), and the demo procedures also require execute access to DBMS_LOCK.
drop table test;

create table test
(id number, col_a varchar2(100),
created_on date, create_commit_on date);

-- Function-based index holding entries ONLY for rows where create_commit_on is null
create index pend_ix on test (decode(create_commit_on,null,1,null));

drop sequence test_seq;

create sequence test_seq;

drop procedure set_commit_time;

create procedure set_commit_time is
begin
  -- Stamp all rows whose commit time is still pending; the predicate
  -- matches the function-based index expression exactly so pend_ix is used
  update test set create_commit_on = sysdate
  where decode(create_commit_on,null,1,null) = 1;
end;
/

drop procedure add_test;

create procedure add_test is
  v_pause number;
begin
  -- Random pause of two to five minutes between insert and commit
  v_pause := round(dbms_random.value(120,300));
  insert into test
  (id, col_a, created_on)
  values (test_seq.nextval, 'Waiting '||round(v_pause/60,1)||' minutes', sysdate);
  dbms_lock.sleep(v_pause);
end;
/

set serveroutput on size 100000

declare
  v_num number;
begin
  -- Background job to stamp commit times, repeating every
  -- 1/3600 of a day (about 24 seconds)
  dbms_job.submit(v_num, 'begin set_commit_time; commit; end;',
    sysdate, 'sysdate + (1/3600)');
  commit;
  -- Ten inserting jobs, submitted ten seconds apart, each holding
  -- its transaction open for a few minutes before committing
  for i in 1..10 loop
    dbms_job.submit(v_num, 'begin add_test; commit; end;');
    dbms_lock.sleep(10);
    commit;
  end loop;
  commit;
  dbms_output.put_line(v_num);
end;
/

column col_a format a30
alter session set nls_date_format ='DD/Mon/YY HH24:MI:SS';
Select * from test order by 1;

ID COL_A CREATED_ON CREATE_COMMIT_ON
---------- ------------------------------ ------------------ ------------------
1 Waiting 3.2 minutes 05/Oct/05 11:50:58 05/Oct/05 11:54:11
2 Waiting 2.3 minutes 05/Oct/05 11:51:08 05/Oct/05 11:54:11
3 Waiting 4.6 minutes 05/Oct/05 11:51:18 05/Oct/05 11:56:22
4 Waiting 3.4 minutes 05/Oct/05 11:51:29 05/Oct/05 11:55:05
5 Waiting 3 minutes 05/Oct/05 11:51:39 05/Oct/05 11:55:05
6 Waiting 3.6 minutes 05/Oct/05 11:51:50 05/Oct/05 11:55:56
7 Waiting 4 minutes 05/Oct/05 11:52:00 05/Oct/05 11:56:22
8 Waiting 2.6 minutes 05/Oct/05 11:52:10 05/Oct/05 11:55:05
9 Waiting 4.4 minutes 05/Oct/05 11:52:21 05/Oct/05 11:57:14
10 Waiting 2.5 minutes 05/Oct/05 11:53:26 05/Oct/05 11:56:22

In my example (the random waits mean yours would be slightly different) we can see that item 3 was created before item 4, but item 4 waited about a minute less and so was committed, and became visible, first. Similarly, item 8 overtook item 7, and item 10 overtook item 9.

The same technique could be applied to updates, as long as the updates all set the CREATE_COMMIT_ON column back to null (possibly through a trigger).
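As a sketch of that variation (the trigger name and the guard condition are my own, assuming the same TEST table as above; the guard lets the stamping job's own update through untouched, since that update is the only one that changes create_commit_on from null to a value):

```sql
-- Hypothetical trigger: any ordinary update clears create_commit_on,
-- so the background job will re-stamp the row once the change commits.
-- The WHEN clause skips the update made by set_commit_time itself
-- (which changes the column from null to sysdate).
create or replace trigger test_clear_commit
before update on test
for each row
when (new.create_commit_on is null
      or new.create_commit_on = old.create_commit_on)
begin
  :new.create_commit_on := null;
end;
/
```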

Normally, it is difficult to pick out records added or updated in a table since a particular time, because the delay between change and commit means that changes do not necessarily become visible in the order they are made. Even allowing, say, a five-minute window may miss long-running transactions, and requires processing to cater for duplicates.

With this technique, we can instead filter on the time the record became visible. It is not a universal solution. The update would be onerous if a large number of rows had been changed, and applying the technique to lots of tables would also have an impact, and a risk of deadlock, unless each update is committed separately.
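For example, a delta extract then becomes a simple range query on the commit-time column (:last_run is a hypothetical bind variable holding the time of the previous extract):

```sql
-- Rows that became visible since the last extract. Rows still pending
-- (create_commit_on is null) are simply picked up by a later run,
-- so no window or duplicate handling is needed.
select *
from   test
where  create_commit_on > :last_run
order  by create_commit_on;
```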