Monday, October 31, 2011

A short story about death

Let me tell you about an application I worked with some time ago.

It is an 'intranet' application. It is only used within the organisation, though the database it runs against is also used by public web applications.

Unusually for a web application, it doesn't use a connection pool. When a user logs in, a dedicated connection is spawned for them, and certain operations get their own dedicated connections as well.

None of those connections is allowed to be idle for more than 25 minutes. Every five minutes, a housekeeping job comes along and kills off any connections that have been idle for more than 20 minutes.
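
For reference, that sort of housekeeping job usually boils down to something like the sketch below, scheduled every few minutes with DBMS_SCHEDULER or DBMS_JOB. The account name and the exact threshold here are assumptions - the real job isn't shown - and it needs the ALTER SYSTEM privilege plus access to v$session.

BEGIN
   FOR r IN (SELECT sid, serial#
               FROM v$session
              WHERE username = 'APP_USER'        -- assumed application account
                AND status = 'INACTIVE'
                AND last_call_et > 20 * 60)      -- idle for more than 20 minutes
   LOOP
      EXECUTE IMMEDIATE
         'ALTER SYSTEM KILL SESSION ''' || r.sid || ',' || r.serial# || '''';
   END LOOP;
END;
/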

On the face of it, that is a resource-saving measure: otherwise all those idle connections would pile up and kill the server. But actually it isn't. If you look at the 'life' of a session, it spends a few minutes being active, then sits around for 20 minutes doing nothing before being killed. In fact most of the sessions on the database are idle, waiting for their death sentence to be actioned.

If it wasn't for that housekeeping job, someone would have come along and actually fixed the source of the problem. That is, the application would disconnect the session once the operation is complete. Or the logout button on the application would actually do something - like log out of the application.

It is like keeping the shower running after you get out, then coming back ten minutes later to turn it off so as not to waste water.

The best time to end a session is immediately after it stops doing useful work. If you choose to implement a 'kill if idle' option, you are saying it is okay for resources to be wasted for the period up until you decide it has been 'too idle'. Sometimes it is - it may be too difficult or expensive to fix the source of the problem.

But don't pretend this is the most efficient mechanism to manage resources.

Tuesday, October 11, 2011

An underhand compilation

Oracle cheats. Or, more generously, it sometimes plays by different rules. Normally, a DDL statement does an implicit commit of any outstanding transaction. In fact it does a couple of implicit commits: one before it attempts the DDL, and a second if the DDL succeeds.
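
That first commit is easy to see with a throwaway table (plch_commit_demo is my own invention for this sketch, and the results in the comments are what I'd expect rather than a captured run):

CREATE TABLE plch_commit_demo (id NUMBER);

INSERT INTO plch_commit_demo VALUES (1);

-- This DDL fails with ORA-00955 because the table already exists ...
CREATE TABLE plch_commit_demo (id NUMBER);

ROLLBACK;

-- ... but the INSERT was committed before the DDL was attempted,
-- so this should still return 1.
SELECT COUNT(*) FROM plch_commit_demo;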


Furthermore, an 'ALTER PACKAGE .. COMPILE' is a DDL statement. And it does an implicit commit.


However, if the compilation occurs implicitly, because you are trying to execute a package that is invalid, then your outstanding transaction ISN'T committed. That is what you want, as your application can't cope with having partial transactions committed just because a package needed to be recompiled.


The real question is: if the commit isn't actually necessary, is the ALTER PACKAGE performing it simply to be consistent?


Demo script follows:





XE> DROP TABLE plch_data PURGE;


XE> CREATE TABLE plch_data
  2    (id NUMBER, col_a NUMBER, col_b NUMBER);


XE> CREATE OR REPLACE PACKAGE plch_pkg
  2  IS
  3     FUNCTION ret_count RETURN NUMBER;
  4  END plch_pkg;
  5  /


XE> CREATE OR REPLACE PACKAGE BODY plch_pkg
  2  IS
  3   FUNCTION ret_count RETURN NUMBER IS
  4     v_num NUMBER;
  5   BEGIN
  6     SELECT COUNT(id) INTO v_num
  7     FROM plch_data;
  8     --
  9     RETURN v_num;
 10   END ret_count;
 11  END plch_pkg;
 12  /


XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';


OBJECT_NAM OBJECT_TYPE         STATUS
---------- ------------------- -------
PLCH_PKG   PACKAGE             VALID
PLCH_PKG   PACKAGE BODY        VALID


XE> ALTER TABLE plch_data DROP COLUMN id;
XE> ALTER TABLE plch_data RENAME COLUMN col_a TO id;


XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';


OBJECT_NAM OBJECT_TYPE         STATUS
---------- ------------------- -------
PLCH_PKG   PACKAGE             VALID
PLCH_PKG   PACKAGE BODY        INVALID


XE> INSERT INTO plch_data (id) VALUES (10);


XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);
1


XE> ROLLBACK;


XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);
0
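
XE> REM The count is back to 0: the implicit recompile triggered by calling the invalid body did not commit the INSERT.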




XE> REM =================================================================
XE> ALTER TABLE plch_data DROP COLUMN id;
XE> ALTER TABLE plch_data RENAME COLUMN col_b TO id;


XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';


OBJECT_NAM OBJECT_TYPE         STATUS
---------- ------------------- -------
PLCH_PKG   PACKAGE             VALID
PLCH_PKG   PACKAGE BODY        INVALID


XE> INSERT INTO plch_data (id) VALUES (10);


XE> ALTER PACKAGE PLCH_PKG COMPILE BODY;
XE> SELECT object_name, object_type, status
  2  FROM user_objects
  3  WHERE object_name = 'PLCH_PKG';


OBJECT_NAM OBJECT_TYPE         STATUS
---------- ------------------- -------
PLCH_PKG   PACKAGE             VALID
PLCH_PKG   PACKAGE BODY        VALID


XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);
1


XE> ROLLBACK;


XE> EXEC dbms_output.PUT_LINE( plch_pkg.ret_count);
1
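
XE> REM The count is still 1 after the ROLLBACK: the explicit ALTER PACKAGE ... COMPILE committed the INSERT.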


Saturday, October 08, 2011

It's Enterprise and won't be cheap....

Most of OOW seemed to be about Boxes, the bigger the better. 


I'm not a DBA. The people at Pythian are, and they've said nice things about the Appliance. I think it suits their business model, which is external DBA support. If a company already has good DBAs and listens to them, it shouldn't fare too badly on getting the correct hardware setup. If it doesn't have good DBAs (or doesn't listen to them), then it probably won't hear about the appliance and will end up with some commodity kit (maybe running MySQL or even SQL Server).


The Exalytics machine is way out of my field. It is probably good. It is probably expensive. It is probably something I won't get my hands on. The same goes for the rest of this Big Data stuff. Clusters of scores of machines are expensive.


We do have a nice NoSQL solution. I'll probably download that to play with, but it is something that is really for Java developers (or some of these newer languages that get fashionable every couple of years). I think I'll have to re-title myself as a 'Relational Database Developer'. 


So the 'cloud' bit was the part that appealed most. Not the Fusion Apps stuff. It's nice that it is finally out there, but that isn't my field either.


Guy Harrison's tweets have included some nuggets: "DB cloud service will not support direct SQL from outside oracle public cloud - stateless REST calls only." and "DB cloud service gives you a schema in a shared DB, not a whole instance like Amazon RDS. More like SQL azure than RDS".


It won't be cheap. This is Enterprise Edition, folks. It's not for startups trying to run a business on pizza, coke and little white pills. It is for people with big pockets.


Roel Hartman indicates that SQL Developer 3.1 will have some special sauce that will allow you to connect to these cloudy databases. I think the twitter-stream has a screen shot, but I've lost it. [Twitter was in turbo mode the past few days - unfortunate for us down under where a lot of the action happens while we are in bed.]


Other IDEs may need to do some catching up to work with those databases. I'd also suspect that it might prove a barrier to any applications not running on Oracle's cloud. While Java (and other JVM languages?) might be doable, I wonder whether Forms will make the cut.


Application Express seems to be making its presence felt. Though after Larry aired his views on Salesforce, I wonder whether the "apex" name will become a battleground as a "cloud language". Hey folks, the appex.com domain name appears vacant at the moment.


Oh yes, the Social Network thing. My last company used Yammer, which was pretty similar to what Salesforce offers with Chatter, and the company I'm working at now has something similar. Yes, they have a Facebook 'feel', but the focus is employer-related. People will have both and won't have a problem keeping them separate, mostly because your employer 'owns' one and all your friends who work elsewhere won't be there.



Friday, October 07, 2011

One head, plus one body (slightly damaged)

We all know that two heads are better than one. Apparently a head without a body can be more useful than one with a broken body.

This train of thought was prompted by an interesting question on the PL/SQL Challenge earlier this week (October 6, 2011, to be precise). The question posed the puzzle of what happens when you reference a constant defined in the specification of a PL/SQL package when the body is invalid.

My initial thought was that it is quite valid to have a package specification without a body, and that you can safely reference types and constants in that specification. I incorrectly assumed that an invalid body wouldn't be a problem.

One reason it fails is that a body can contain an initialisation section. That is, you can have:

CREATE OR REPLACE PACKAGE plch_pkg
IS
   g_stuff   VARCHAR2 (10) := 'Stuff';

   PROCEDURE show_dummy;
END plch_pkg;
/

CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE show_dummy
   IS
   BEGIN
      DBMS_OUTPUT.put_line ('Dummy');
   END;
   
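-- initialisation section: runs once per session, when the package is first instantiated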
BEGIN
  dbms_output.put_line('Start it up');
END plch_pkg;
/

set serveroutput on
exec dbms_output.put_line(plch_pkg.g_stuff);

This would output both "Start it up" and "Stuff". But of course it can only do that if the body is valid and can be executed, which is why Oracle has to try to recompile an invalid package body when the package is first referenced, and must throw an error if that recompilation fails.
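
To round it out, here is one way to see that failure. The broken dependency (a throwaway table plch_t that the body selects from before the table gets dropped) is my own invention for the sketch, not part of the original question, and the error in the comment is the sort of thing to expect rather than a captured run.

CREATE TABLE plch_t (id NUMBER);

CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE show_dummy
   IS
      v_num   NUMBER;
   BEGIN
      SELECT COUNT(*) INTO v_num FROM plch_t;
      DBMS_OUTPUT.put_line ('Dummy ' || v_num);
   END;

BEGIN
  dbms_output.put_line('Start it up');
END plch_pkg;
/

DROP TABLE plch_t;

-- The body is now invalid and cannot be recompiled, so even a reference
-- to a variable that lives only in the (still valid) specification fails,
-- with something like ORA-04063: package body "PLCH_PKG" has errors.
exec dbms_output.put_line(plch_pkg.g_stuff);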