Thursday, December 29, 2011

Newsflash - the replacement for the Oracle Support HTML interface will be....HTML

Like many other Oracle peeps, I got an email telling me that the current Flash and HTML interfaces for My Oracle Support were to be replaced by a new snazzy ADF based UI.

What has irked me is the way that some people have implied that the new interface won't be HTML. One example is the Oracle Infogram ("There's a new version of MOS in the works, not Flash and not HTML"), though I've seen similar claims elsewhere (especially on Twitter, which is admittedly constrained by the 140-character limit).

Let's clarify this. Browsers mostly render HTML (though they also deal with plain text, images, video and some other content). Through plugins, they can run Flash or Java.

But the new Oracle UI will be HTML. HTML spewed by a bunch of App Servers running Java and ADF, true, but still HTML. Going further, if you actually do a 'View Source' in the current HTML support application, right at the top it has meta name="generator" content="Oracle ADF Faces"

I don't know how much, if any, of the ADF layer of the current HTML application is being re-used for the new UI. They may well have done a complete throw-away and rewrite. But it isn't a novel technology stack being used here.

Maybe this is nit-picky. I just had a horrible feeling that there might be some people out there in Oracle land (and, if I'm being blunt, I don't mean developers) who don't realise that Oracle Forms is the exception, and that the main web development paradigm consists of mid-tier code generating HTML (probably with lots of JavaScript too, but that's a separate issue).

Saturday, November 26, 2011

DBMS_METADATA and hashed passwords

In the wonderfully named "ORAganism" blog (don't try oragasm - it is used for something QUITE different) Neil Johnson remarks on the privilege requirements for DBMS_METADATA.

I like DBMS_METADATA. In the old days, you had scripts that tried to reverse-engineer CREATE scripts from DBA_TABLES and so on. They were complicated and only coped with the common cases. DBMS_METADATA is much more powerful.

One feature worth noting about DBMS_METADATA is that it can also extract details about users which can be handy in the post 11g world of Oracle.

In 11g the PASSWORD column in DBA_USERS returns null, rather than the hashed version of the user's password as it did in 10g and earlier. That made it a lot harder to extract those password hashes from the database. The hashes are still visible in an underlying SYS object (USER$), but SYS objects are really tricky to access.

DBMS_METADATA gives a backdoor to that information. Not to everyone, but it should be usable by 'regular' DBAs without jumping through hoops.

select dbms_metadata.get_ddl('USER','GARY') a from dual;

   DEFAULT TABLESPACE "APEX_4875120311438442"

Before you try to crack my password, don't bother. In my little home environment, it is the same as the username.

Friday, November 18, 2011

Wish List IV - Read only and mean it

Did you know that the SELECT privilege actually allows you to do a SELECT ... FOR UPDATE ? Yes, that means a user with just that privilege could lock an entire table.

I want a SELECT_NOLOCK privilege. The user can still do a SELECT, but they cannot take a lock on the table. Okay, while a SQL statement is executing, I guess they'd still get the shared table lock. But that's it.
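To see the problem, here's a quick sketch (the table and user names are made up for illustration):

```sql
-- As the schema owner: grant read-only access (or so we think)
GRANT SELECT ON accounts TO readonly_user;

-- As READONLY_USER: this is legal with just the SELECT privilege,
-- and it takes row locks on every row in the table until
-- READONLY_USER commits or rolls back.
SELECT * FROM owner.accounts FOR UPDATE;
```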

Thursday, November 17, 2011

Wish List IV - The Whole Truth

This is another little annoyance that, I hope, would be a simple change.

I want a column in USER_TAB_COLUMNS that tells me whether the object is a table or a view. Surely linking off to USER_TABLES to exclude views is an unnecessary burden.
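The workaround today is that extra join, sketched below. USER_TAB_COLUMNS covers both tables and views, so you have to filter:

```sql
-- Columns of tables only, with views excluded via the join
SELECT c.table_name, c.column_name, c.data_type
FROM   user_tab_columns c
       JOIN user_tables t ON t.table_name = c.table_name
ORDER  BY c.table_name, c.column_id;
```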

Wednesday, November 16, 2011

Wish List III - The Sequence Cometh

This is another of those 'chores' for which every DBA and his dog has a script.

Copy a new version of a table from Test to Dev and then you have to recreate or adjust the associated sequence. My proposed syntax would be

ALTER SEQUENCE blah_seq SYNCHRONIZE WITH blah_table (blah_column);

That would lock the table, get the highest value of 'blah_column' and reset the 'last_number' of the sequence to that value. Even better, retain the fact that the sequence has been synchronized with that table/column so there is some metadata about the relationship.
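Until such syntax exists, the usual workaround is a script along these lines (a sketch only; it assumes the sequence increments by 1 and that nobody else is using it at the time):

```sql
DECLARE
  v_max  NUMBER;
  v_curr NUMBER;
BEGIN
  SELECT NVL(MAX(blah_column), 0) INTO v_max FROM blah_table;
  SELECT blah_seq.NEXTVAL INTO v_curr FROM dual;
  IF v_max > v_curr THEN
    -- temporarily widen the increment to jump the gap in one NEXTVAL
    EXECUTE IMMEDIATE
      'ALTER SEQUENCE blah_seq INCREMENT BY ' || (v_max - v_curr);
    SELECT blah_seq.NEXTVAL INTO v_curr FROM dual;
    EXECUTE IMMEDIATE 'ALTER SEQUENCE blah_seq INCREMENT BY 1';
  END IF;
END;
/
```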

Tuesday, November 15, 2011

Wish List II - Ultimate Destruction

I'm continuing on my theme of a wish list for 12c (or beyond), and this time I'm taking aim at destroying stuff.

Juggling development and test areas, every so often it is tempting to wipe the slate clean and start with a fresh schema. The simple way to do that is with a DROP USER blah CASCADE

It would be nice to have something one step below the 'nuclear' option. I'm thinking along the lines of a DROP ALL TABLES command, plus similar options for dropping sequences, views, procedures, triggers and so on. The user/schema continues to exist, with all their privileges and defaults. But you get to clear out the objects the schema owns.

Yes, you can script this. The trick is to disable all the referential integrity constraints first, so that you can drop the tables without worrying about the dependencies.

Cross-schema constraints could still be a problem, as could firing DDL triggers. I'm open to alternative mechanisms. Perhaps DBMS_METADATA could generate a script for dropping objects.
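As a rough sketch of the scripted approach (drop the referential constraints first, then the objects; the object types and ordering are simplified here):

```sql
BEGIN
  -- 1. Drop FK constraints so table drops don't trip over dependencies
  FOR c IN (SELECT table_name, constraint_name
            FROM   user_constraints
            WHERE  constraint_type = 'R') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE "' || c.table_name ||
                      '" DROP CONSTRAINT "' || c.constraint_name || '"';
  END LOOP;
  -- 2. Drop the objects themselves
  FOR o IN (SELECT object_name, object_type
            FROM   user_objects
            WHERE  object_type IN
              ('TABLE','SEQUENCE','VIEW','PROCEDURE','FUNCTION','PACKAGE')) LOOP
    EXECUTE IMMEDIATE 'DROP ' || o.object_type || ' "' || o.object_name || '"' ||
      CASE o.object_type WHEN 'TABLE' THEN ' CASCADE CONSTRAINTS' END;
  END LOOP;
END;
/
```

As the post notes, cross-schema constraints and DDL triggers can still get in the way of a script like this.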

Monday, November 14, 2011

12c Wishlist - Part 1

Last week Nuno posted his 'wish list' for 12c. Here's the "Number 1" on my list:

Oracle Personal Edition for Linux.

A long, long time ago...I can still remember
when Windows was the OS that ran on the PC on your desk. It made sense that someone running their 'personal' database would just install it natively on that desktop machine, and that meant a Windows install.

Too often that would just be an install on top of a Standard Operating Environment pre-configured with Office, a virus checker and the other bloat that hangs around. Yuck.

What I'd like to recommend is that you start with an OS that is stable and decidedly unglamorous, such as RedHat or Oracle Linux. That could run on a commodity x86 under your desk. It could run in a VirtualBox virtual machine on your SOE machine, or in a VMWare or Oracle VM environment managed by real server admins. It could migrate happily between any of them.

But when I go to the documentation I see "Personal Edition is available on Windows platforms only". So Oracle are saying: go and buy software from Microsoft before you talk to us about running our database?

I want Oracle Personal Edition on Linux. Why not a bundle price for the database and Oracle Linux ? You could even sell it as a pre-configured VirtualBox appliance - just add disk. You could 'give away' a one-year Personal Edition licence (cost around $100) with every OCP certification. But push Linux as the preferred platform.

I'm not sure of the production use cases for Personal Edition. I heard, many years ago, that it had market share in Life Sciences, but it may have since been eclipsed by more specialised solutions, open source RDBMS or NoSQL data stores. Maybe it won't even be offered in 12c, squashed out between Express Edition and MySQL.


I've got a few more items on my wish list which I'll post over the coming days. I figured I'd do them as individual posts. In theory that means Google Plus +1s and retweets might give a gauge of the support for each idea.

Saturday, November 12, 2011

What's the difference

I enjoy working with Apex. However one challenge is that it doesn't sit too well with conventional source control systems. Plus the fact that not all organisations use source control systems, especially for the small departmental applications which are a particular sweet spot for Apex.

One item in my little library of scripts is based on the ApexExport utility. This is part of the main Apex install which you'll probably find on the server. If you don't have access to the server (and some DBAs are understandably cautious about that) ApexExport is also available as part of the Apex Listener. That's a much more compact download, and I've based my script on that.

The Apex Listener download is a zip file. When you unzip it you get a WAR file. If you unzip the WAR file, you find the "apex.jar" file, which is what you need. If you like playing Russian dolls, you can even unzip the JAR file into the class files, but I don't see the point.

My Apex diff script simply extracts the same application (based on the id) from two Apex workspaces and does a compare of the results. This is handy to see if the version in a development workspace is the same as that in Testing/Production (eg has another developer done some work on it). An alternative use case is to ensure that no-one has been sneaking unapproved amendments into Prod.

My script is a zip because downloading BAT files is often blocked. It is trivially small, so the code is shown below too.

I've been lazy and hard-coded the username and password here. Feel free to replace them with more parameters. Similarly with the directory names, and you can replace the MS-DOS "fc" call with one to your favourite diff utility. I've used ExamDiff for a long time, and my preferred editor, PsPad, has one built in as well.

REM Usage "diff JDBC_conn1 JDBC_conn2 nnnn"
REM Note: Use JDBC connection format

cd /d G:\oracleXE_11\apex_listener\WEB-INF\lib
set CLASSPATH=%CLASSPATH%;.\;.\ojdbc6.jar;.\apex.jar

REM Clear out any exports left over from a previous run
del f%3_1.sql 2>nul
del f%3_2.sql 2>nul

java oracle.dbtools.apex.utilities.APEXExport -db %1 -user gary -password gary -application %3
rename f%3.sql f%3_1.sql

java oracle.dbtools.apex.utilities.APEXExport -db %2 -user gary -password gary -application %3
rename f%3.sql f%3_2.sql

fc /c /l /n /w f%3_1.sql f%3_2.sql

Saturday, November 05, 2011

How to prevent your SQL Developer sessions from being killed

A while back I worked on a site where the network would terminate connections that were idle too long (around 20-30 minutes). It is pretty frustrating when you've been working on something for a while, then you go to grab a coffee or have a meeting, and come back to find the connection has been killed. It was even more frustrating when the session was still alive in the database and had to be killed there too, so that the work could be rolled back and repeated all over again.

What I'd like is a SQL Developer feature that would periodically send a lightweight request over to the database so the connection is kept alive. Alas, the powers that be seem averse to providing such a function.

I also accept that there are barriers in trying to get people to change their opinions.

The good news is that SQL Developer has the concept of extensions. If something you want is too specialised to be included by Oracle themselves, or they haven't got around to it, or simply don't want to do it, then an extension can help.

In this case, a bloke called MinChen Chai has published an extension to add KeepAlive functionality to SQL Developer.

Just download it and add the .jar to the directory sqldeveloper\sqldeveloper\extensions and the Right-click menu on a connection will include a Keep-Alive item. 

If you want to be on the cautious side, you can use unzip or similar to extract the class files from the JAR file. Then you can use the Java Decompiler (or similar) and view the source from the class files. The SQL used is in the Pinger.class and is a pretty inoffensive SELECT SYSDATE FROM DUAL. 

Warning : I haven't tested to see if it works if you spawn out an unshared worksheet.

If your environment is locked down to prevent extensions being added, then all I can suggest is a User-Defined Report with a refresh set to the maximum delay. Unfortunately this grabs focus in Windows, and the maximum delay is hard-coded at just two minutes, which is much too frequent in my opinion. As an aside, the 'reports' component also sits in the extensions directory.

Should we be doing this ?

I accept there are situations where it is appropriate for a DBA to terminate sessions. And this won't stop an explicit ALTER SYSTEM KILL SESSION.

However I think terminating sessions automatically based on 'being idle' is something that is abnormal, and is inappropriate for most environments.  
Before terminating a session, several factors should be considered. These include 

  • Is the session actually doing something important - like the monthly pay cheques
  • Is the session blocking other sessions (and which session is more vital to the business)
  • Is there a need to preserve session state (PL/SQL global variables, temporary tables etc)
  • Is there an active transaction
  • Can the transaction be (safely) committed
  • How much rollback would need to be applied if it is killed
  • Is the session consuming lots of memory
  • Can the user/client cope gracefully with a failure

If your DBA doesn't consider such factors and is happy to terminate sessions without worrying about the effect, then you have a problem. You'll just have to grin and bear it, and hope that there's some form of karma.

It may help to read a DBA's blog post about a hosting provider admin deleting some of his log files. I love the comments that "anything with 'log' in the name is by convention disposable but useful for diagnosis" and "Log files do build up and unless there is meaningful data in them you can remove them."

Monday, October 31, 2011

A short story about death

Let me tell you about an application I worked with some time ago.

It is an 'intranet' application, used only within the organisation, though the database it runs against also serves public web applications.

Unusually for a web application, it doesn't use a connection pool. Users log in and a dedicated connection is spawned. And if they do certain operations, those also get a dedicated connection. 

None of those connections is allowed to be idle for more than 25 minutes. Every five minutes, a housekeeping job comes along and kills off any connections that have been idle for more than 20 minutes.

On the face of it, that is a resource saving measure. Otherwise all those idle connections would pile up and kill the server. But actually it isn't. If you look at the 'life' of a session, it spends a few minutes being active, then sits around for 20 minutes doing nothing before being killed. In fact most of the sessions on the database are idle, waiting for their death sentence to be actioned.

If it wasn't for that house-keeping job, someone would have come along and actually fixed the source of the problem. That is, after the operation is complete, the session would be disconnected by the application. Or the logout button on the application would actually do something - like log out of the application.

It is like keeping the shower running after you get out, then coming back ten minutes later to turn it off so as to not waste water. 

The best time to end a session is immediately after it stops doing useful work. If you choose to implement a 'kill if idle' option, you are saying it is okay for resources to be wasted for the period up until you decide it has been 'too idle'. Sometimes it is - it may be too difficult or expensive to fix the source of the problem.

But don't pretend this is the most efficient mechanism to manage resources.

Tuesday, October 11, 2011

An underhand compilation

Oracle cheats. Or, more generously, it sometimes plays by different rules. Normally a DDL will do an implicit commit of any outstanding transaction. Actually it does a couple of implicit commits, one before it tries the DDL and a second if it succeeds.

Furthermore, an 'ALTER PACKAGE .. COMPILE' is a DDL statement. And it does an implicit commit.

However, if the compilation occurs implicitly, because you are trying to execute a package that is invalid, then your outstanding transaction ISN'T committed. That is what you want, as your application can't cope with having partial transactions committed just because it needs to compile a package.

The question really is, if the commit isn't actually necessary, is it being performed by the ALTER PACKAGE simply to be consistent ?

Demo script follows:


XE> CREATE TABLE plch_data
  2    (id NUMBER, col_a NUMBER, col_b NUMBER);

XE> CREATE OR REPLACE PACKAGE plch_pkg
  2  IS
  3     FUNCTION ret_count RETURN NUMBER;
  4  END plch_pkg;
  5  /

XE> CREATE OR REPLACE PACKAGE BODY plch_pkg
  2  IS
  3     FUNCTION ret_count RETURN NUMBER IS
  4     v_num NUMBER;
  5   BEGIN
  6     SELECT COUNT(id) INTO v_num
  7     FROM plch_data;
  8     --
  9     RETURN v_num;
 10   END ret_count;
 11  END plch_pkg;
 12  /

XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';

---------- ------------------- -------

XE> ALTER TABLE plch_data RENAME COLUMN col_a TO id;

XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';

---------- ------------------- -------

XE> INSERT INTO plch_data (id) VALUES (10);

XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);


XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);

XE> REM =================================================================
XE> ALTER TABLE plch_data RENAME COLUMN col_b TO id;

XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';

---------- ------------------- -------

XE> INSERT INTO plch_data (id) VALUES (10);

XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';

---------- ------------------- -------

XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);


XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);

Saturday, October 08, 2011

It's Enterprise and won't be cheap....

Most of OOW seemed to be about Boxes, the bigger the better. 

I'm not a DBA. The people at Pythian are, and they've said nice things about the Appliance. I think it suits their business model, which is external DBA support. If a company already has good DBAs and listens to them, it shouldn't fare too badly on getting the correct hardware setup. If it doesn't have good DBAs (or it doesn't listen to them), then it probably won't hear about the appliance, and they'll end up with some commodity kit (maybe running MySQL or even SQL Server).

The Exalytics machine is way out of my field. It is probably good. It is probably expensive. It is probably something I won't get my hands on. The same goes for the rest of this Big Data stuff. Clusters of scores of machines are expensive.

We do have a nice NoSQL solution. I'll probably download that to play with, but it is something that is really for Java developers (or some of these newer languages that get fashionable every couple of years). I think I'll have to re-title myself as a 'Relational Database Developer'. 

So the 'cloud' bit was the part that appealed most. Not the Fusion Apps stuff. It's nice that it is finally out there, but that isn't my field either.

Guy Harrison's tweets have included some nuggets "DB cloud service will not support direct SQL from outside oracle public cloud - stateless REST calls only." and "DB cloud service gives you a schema in a shared DB, not a whole instance like Amazon RDS. More like SQL azure than RDS"

Pricing won't be cheap. This is Enterprise Edition folks. It's not for startups trying to run a business on pizza, coke and little white pills. It is for people with big pockets. 

Roel Hartman indicates that SQL Developer 3.1 will have some special sauce that will allow you to connect to these cloudy databases. I think the twitter-stream has a screen shot, but I've lost it. [Twitter was in turbo mode the past few days - unfortunate for us down under where a lot of the action happens while we are in bed.]

Other IDEs may need to do some catching up to work with those databases. I'd also suspect it might prove a barrier to applications not running on Oracle's cloud. While Java (and other JVM languages?) might be doable, I wonder whether Forms will make the cut.

Application Express seems to be making its presence felt. Though after Larry aired his views on Salesforce, I wonder whether the "apex" name will become a battleground as a "cloud language". Hey folks, the domain name appears vacant at the moment.

Oh yes, the Social Network thing. My last company used Yammer, which was pretty similar to what Salesforce offers with Chatter, and the company I'm working at now has something similar. Yes they have a Facebook 'feel', but the focus is 'employer' related. People will have both and won't have a problem keeping them separate, mostly because your employer 'owns' one and all your friends who work elsewhere won't be there.

Friday, October 07, 2011

One head, plus one body (slightly damaged)

We all know that two heads are better than one. Apparently a head without a body can be more useful than one with a broken body.

This train of thought was prompted by an interesting question on the PL/SQL Challenge earlier this week (October 6, 2011 - to be precise). The question posed the puzzle about what happens when you reference a constant defined in the specification of a PL/SQL package if there was an invalid body.

My initial thought was that it is quite valid to have a package specification without a body, and you can safely reference types and constants in that specification. I incorrectly assumed that an invalid body wouldn't be a problem.

One reason why it fails is that a body can contain an initialisation section. That is, you can have:

CREATE OR REPLACE PACKAGE plch_pkg IS
   g_stuff   VARCHAR2 (10) := 'Stuff';

   PROCEDURE show_dummy;
END plch_pkg;
/

CREATE OR REPLACE PACKAGE BODY plch_pkg IS
   PROCEDURE show_dummy IS
   BEGIN
      DBMS_OUTPUT.put_line ('Dummy');
   END show_dummy;
BEGIN
   dbms_output.put_line('Start it up');
END plch_pkg;
/

exec dbms_output.put_line(plch_pkg.g_stuff);

This would output both "Start it up" and "Stuff". But of course it can only do that if the body is valid and can be executed by Oracle, which is why Oracle has to try to recompile an invalid package body and must throw an error if it fails.

Sunday, September 18, 2011

Laying down the law over warnings

Tim commented on yesterday's post by saying "I would not be overly disturbed if Oracle made this clause [DEFINER/INVOKER RIGHTS] mandatory in future."

With such a broad range of software that runs against Oracle, some of it from third party vendors, I don't see Oracle making changes that would break existing applications.

But they could, and have, allowed you to make it mandatory yourself. Perhaps they even make it so in their internal development.

Start with a simple procedure:

create or replace procedure do_it is
begin
  null;
end do_it;
/

If we compile it with all the warnings enabled, we get the "Missing AUTHID clause" warning.

XE> alter session set plsql_warnings='ENABLE:ALL';
XE> alter procedure do_it compile;

SP2-0805: Procedure altered with compilation warnings

XE> select text, attribute from user_errors where name = 'DO_IT';

TEXT                                                                             ATTRIBUTE
-------------------------------------------------------------------------------- ---------
PLW-05018: unit DO_IT omitted optional AUTHID clause; default value DEFINER used WARNING
XE> exec do_it

But we can use the PLSQL_WARNINGS setting to make enforcement more "Judge Dredd".

XE> alter procedure do_it compile plsql_warnings='ENABLE:ALL','ERROR:05018';
Warning: Procedure altered with compilation errors.

XE> select text, attribute from user_errors where name = 'DO_IT';
TEXT                                                                             ATTRIBUTE
-------------------------------------------------------------------------------- ---------
PLS-05018: unit DO_IT omitted optional AUTHID clause; default value DEFINER used ERROR

XE> exec do_it
BEGIN do_it; END;
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00905: object GARY.DO_IT is invalid
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

Rather than just enabling the warning, we can treat it as an error, effectively making the clause mandatory in our environment. Obviously it doesn't make sense for this particular warning to be set at the program unit level. It might make sense for other warnings to be set as errors at the schema or database level, and then individual overrides can be set for exceptional program units.

Saturday, September 17, 2011


Yes, the catch-cry from the "Lost in Space" Robot. But I'm more interested in PL/SQL Warnings.

My new contract is working with an Oracle 11gR2 database. [Aside: It's great using up-to-date versions and with the new version of XE, it's almost like Christmas.]

With new versions come new features and several new PL/SQL compiler warnings have been added to 11gR2. One of those is "PLW-05018: unit <program unit> omitted optional AUTHID clause; default value DEFINER used". This message was described by Steven Feuerstein as "one of those "nuisance" warnings that will pop up on almost every one of our compiles, because so few developers use AUTHID (for invoker rights) and legacy code certainly does not include this clause"

This description was in the answer to one of the Championship playoff questions which asked players to look at some code and give the number of warnings that it would generate. Most players underestimated the answer, partly because the number of warnings is going up with each version.

For most developers this warning is pretty much superfluous; it is rare for the choice of DEFINER rights to matter in regular application code. However utilities and some of the built-in schemas (eg MDSYS) can have elevated privileges, and exposing that power through a procedure is something that needs careful consideration.

It is possible to enable and disable specific warnings. Personally I use the fine-grained option available on the command line through ALTER SESSION. 
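For example, a session-level sketch (the specific warning number disabled here is just one mentioned in this post):

```sql
-- Enable everything, then switch off the NOCOPY nag for this session
ALTER SESSION SET plsql_warnings = 'ENABLE:ALL', 'DISABLE:07203';
```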


The DBAs can set these using ALTER SYSTEM, or might set up LOGON triggers on specific schemas to set appropriate values. You may also want a different level of warnings in your development databases than in your testing and production databases.
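A logon trigger along these lines could do it (a sketch only; the schema name and the chosen warning level are illustrative):

```sql
-- Force full warnings for sessions connecting as DEV_USER
CREATE OR REPLACE TRIGGER dev_user_warnings_trg
AFTER LOGON ON dev_user.SCHEMA
BEGIN
   EXECUTE IMMEDIATE
      q'[ALTER SESSION SET plsql_warnings = 'ENABLE:ALL']';
END;
/
```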

It may also be possible to set your preferred options in your IDE. However looking at "SQL Developer", the options in the GUI to enable/disable these warnings are pretty coarse-grained, so I wouldn't recommend this solution.

SQL Developer options for setting PL/SQL Warnings

Another location for defining the PLSQL_WARNINGS level is on the object itself. 

XE> create or replace procedure do_it (i_text IN OUT VARCHAR) is
  2  begin
  3    dbms_output.put_line(i_text);
  4  end do_it;
  5  /

XE> alter session set plsql_warnings='ENABLE:ALL';
Session altered.
XE> alter procedure do_it compile;
SP2-0805: Procedure altered with compilation warnings
XE> select text from user_errors where name = 'DO_IT';
PLW-07203: parameter 'I_TEXT' may benefit from use of the NOCOPY compiler hint
PLW-05018: unit DO_IT omitted optional AUTHID clause; default value DEFINER used

This simple procedure gives two warnings when all warnings are enabled. If I then recompile it with those warnings disabled, it comes out clean.

XE> alter procedure do_it compile plsql_warnings='ENABLE:ALL','DISABLE:07203','DISABLE:05018';
Procedure altered.
XE> select text from user_errors where name = 'DO_IT';
no rows selected

If I do a straight compile on that procedure afterwards, it will use the current session settings (which will be inherited from the system settings if not set specifically).

XE> alter procedure do_it compile;
SP2-0805: Procedure altered with compilation warnings
XE> select text from user_errors where name = 'DO_IT';
PLW-07203: parameter 'I_TEXT' may benefit from use of the NOCOPY compiler hint
PLW-05018: unit DO_IT omitted optional AUTHID clause; default value DEFINER used

However, you can specify REUSE SETTINGS so that the explicitly defined settings are retained.

XE> alter procedure do_it compile plsql_warnings='ENABLE:ALL','DISABLE:07203','DISABLE:05018';
Procedure altered.
XE> alter procedure do_it compile reuse settings;
Procedure altered.
XE> select text from user_errors where name = 'DO_IT';
no rows selected

My recommendations would be for the development environment to have at least some warnings enabled, either at the system level or for specific schemas. Then, if individual modules need differing settings, these can be specified and retained (assuming your IDE is up to the job).

For testing environments, having warnings enabled might be a useful step as this may be the first 'independent' compilation of the code. A second set of eyes reviewing the warnings could be a basic 'code review'. 

I don't see a benefit in warnings in production environments. But I don't see a real downside either.

Friday, September 02, 2011

Oracle 11gR2 Express Edition is FREE

I've just wiped my Oracle XE 10g Edition on my home PC and replaced it with a fresh clean 11g Express Edition.

For those who haven't met XE before, it is the free edition of Oracle.

It allows for up to 11GB of data (up from 4GB in the old XE) and will use one CPU and 1GB of RAM. The server can have more power, but it won't get used by the database.

You can use it for free (developing, testing, running production applications and even give training using it).

You don't get some of the snazzy Enterprise Edition features. No FLASHBACK DATABASE or FLASHBACK TABLE. No PL/SQL Function Result Cache. No Diagnostic or Tuning packs. Or table compression.

The big 'Standard Edition' feature I can see that is missing is Java in the database.

It comes with Apex 4.0 pre-installed. 

Wednesday, August 31, 2011

Swapping the white shirt for a towel ?

For almost three years I have been a consultant. Dark suit, white shirt, tie. The official uniform of the 'You can trust us' brigade. [Aside: I was reading Superfreakonomics recently and learned that ties are often germ-ridden, so the sooner we all stop wearing them, the better!]

Next week I start a new job. Back to contract work, with a role doing Application Express development. Yeah ! I do like Apex. I'd previously remarked about the feeling of accomplishment I get when building code. It is nice to see something you've built being used.

I'm contracting through an agency (BSIPeople) who supplied me with a snazzy Sports Bag on sign up. The bag contained the ubiquitous pen, a hat, a drink bottle and a towel. My wife's reaction to the towel was an immediate reference to the Hitchhikers Guide to the Galaxy.

"any man who can hitch the length and breadth of the galaxy, rough it, slum it, struggle against terrible odds, win through, and still knows where his towel is, is clearly a man to be reckoned with"

There's something appropriate about going back to contracting armed with a towel. I'm looking forward to it.

Sunday, August 28, 2011

Canadian dogs on drugs

This is what happens when you get your data model wrong.

I subscribe to RISKS digests, and the latest issue has an interesting post on a poor Canadian bloke who got confused with his dog.

The dog was on some prescription medicine. Dogs can't go into the local pharmacy and buy this stuff themselves, so the owner performed this duty. The drug purchase/pickup got entered into the nationwide recording system against the purchaser's name.

The problem comes when this guy goes into hospital and that purchase shows up on his records. There was no indication that the drugs were actually intended for a different recipient (Cooper the dog), so the doctors didn't know this. Fortunately the owner was conscious and lucid, and able to tell the doctor that he wasn't on those drugs, so the mixup got sorted.

It is perfectly reasonable to have a pharmacy record that doctors in hospitals can easily refer to so that they can find out what medications a patient is on. My dad takes various pills for things, and I'd be pretty sure he won't remember the individual names. Plus they deal with accidents and cases where the patient may be unconscious or unable to talk.

I suspect they've also started to use the same pharmacy records to pick up people who buy too much of particular drugs. Here in Oz they've gone as far as to stop selling various cold remedies whose ingredients were used in making illegal drugs. So drug purchases for pets get thrown into the same bit bucket.

The problem is that corruption of the data model, of the meaning of the data, has impacted the primary purpose of the system. Keep your data clean, folks.

Sunday, August 21, 2011

Redo log buffer or redo log file ?

There's a very interesting post on Jonathan Lewis' blog under the unassuming title of REDO.

Oracle treats a transaction as committed when the change and the commit have been written to the redo log BUFFER and doesn't require it to be written to the redo log FILE.

This can cause issues if the instance fails before that log data is flushed to the file. If the disk write fails BECAUSE the instance fails, it is very unlikely that anything would have had a chance to look at the data in that tiny gap (but even rare risks WILL happen somewhere, sometime). If the disk write fails but the instance continues for a time (which is simulated in Jonathan's post) then the risk gets higher.

I think RAC throws a few more variables in there. Conceivably, a transaction in one instance may have committed data (not written to redo log file) shunted to another instance where it can be amended again and written to that instance's redo log file. If that first instance fails without a persistent copy of that initial transaction, then it can never be re-applied in its entirety.

But then I'm a developer and maybe I'm missing something in the way instance recoveries are managed in RAC.

So is Oracle 'broken' ?

One problem is the notion of 'durability' is vague. A committed action should last beyond 'system failure', but in a 'cheap' system the CPU and disks can be in the same rack (or the same server) and a failure, such as a fire, could destroy both. Does that mean a transaction shouldn't be viewed as 'committed' until the log file is archived and shipped elsewhere ? Conversely, an expensive 'Data Guard' architecture with a maximum protection mode/level might not be impacted by the failure of a redo log write on a single node.

I suspect this one will run on for a while, and it is worth keeping an eye on Jonathan's post as the experts weigh in.

Friday, August 19, 2011

The casino approach to security

We can learn a lot from casinos. At least that's one view on application security. The rules have changed from a clear-cut case of what is allowed versus what isn't, to something a bit fuzzier.

As an example, on my way to the Sydney Oracle Meetup 'Expert' event I had a phone call from my bank. Apparently my credit card had just been rejected in an Australia Post office as a suspect transaction. Since I'd been on the train it wasn't me. The card was in my wallet too, so it hadn't been nicked. The card had been skimmed somewhere.

The suspect transaction wasn't above my credit limit. It's not like I've never used it in an Australia Post either (though never for the amount involved). It may have been an odd location, but I hadn't had any rejections on a recent holiday. In short, it probably wasn't any single factor that tripped the security wire, rather a whole set of criteria that marked it as suspect. Okay, maybe it came down to a suspicious employee of Australia Post, but this is a database blog not an episode of Law and Order.

There's a careful balancing act the bank has to perform (and it did it very well in this case). It has to, in the words of the article, "let the people play". It can't stop every credit card transaction, but has to look for rogue behavior. It's not something I've directly worked in, but I've done fuzzy algorithms for data matching. You add some points for one fact, a few more for another, and at the end you get a resulting numeric value for 'how likely'.
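That points-based idea can be sketched in a few lines. Everything here is invented for illustration (the rule names, weights and the 50-point threshold are assumptions, not how any bank actually scores transactions): each factor that fires contributes a weight, and the sum is compared to a threshold.

```python
# Hypothetical fraud-scoring rules: (factor name, points if it fires)
RULES = [
    ("amount_unusual_for_merchant", 40),
    ("new_location",                25),
    ("rapid_repeat_purchases",      20),
    ("card_not_present",            15),
]

def fraud_score(txn_flags):
    """Sum the weights of the rules that fired for this transaction."""
    return sum(weight for name, weight in RULES if txn_flags.get(name))

def is_suspect(txn_flags, threshold=50):
    """No single factor decides; the combined score trips the wire."""
    return fraud_score(txn_flags) >= threshold

txn = {"amount_unusual_for_merchant": True, "new_location": True}
print(fraud_score(txn), is_suspect(txn))  # 65 True
```

Note that neither factor alone crosses the threshold; it's the combination that marks the transaction as suspect, which matches the "no single factor tripped the wire" experience above.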

That requires lots of logging of activity, and there's an increasing requirement to analyse these logs, sometimes in real time (or near real time) for those behavioral factors. Hadoop seems to get some mentions for this form of analysis, though I'm sure there are other solutions in the NoSQL world. I'm sure VoltDB would offer themselves up here too. My gut feel for this style of application would be some form of distributed data store.

There are rumors of an Oracle Hadoop Appliance which may or may not be confirmed in a few weeks at OOW. That said, there have long been rumors of an Oracle Games Console, and I'm thinking of applying for a trademark for ExaPhone before Larry gets in there. Oracle do have a pedigree of dealing with vast amounts of data (anyone mention CERN ?) so it will be interesting to see how they play out in this area.