Thursday, October 12, 2023

Whatever happened to SydOracle

 

So I haven't posted in mumble years. What happened?

 

Firstly there was this thing called Twitter. There seemed to be less blogging and more tweeting, and the twitter thing never suited me. I'd consume my RSS feed collection at times that were convenient to me (mostly on the train), and the tweets all seemed to come overnight or while I was actually working, and by the time I saw them, the sequencing and lack of nuance in the content made it all too hard.

 

Secondly, back in 2014 I switched from gigs, either contracting or consulting, into a permanent, full-time job. Gigging meant never being entirely sure what was going to come next, so I consumed and played with stuff pretty widely. While I've still been doing a range of stuff in my job, there has been greater direction and focus, and a bunch of the rabbit holes I've followed have been deep and of little interest to a wider community.

Also (and this is probably 2b, rather than 3), it is quite an atypical workplace, especially from an Oracle perspective. I've worked in large telcos, utilities, government, logistics, insurance, defense... And this isn't a large organization with thousands or tens of thousands of users. It isn't enterprise-y, and some of the challenges would be almost incomprehensible to people used to larger organizations. And unlike the consulting organizations I worked for, it isn't pushing for exposure and I would be limited in what details I could include to explain the context of some of the work.

 

And of course, there's been family and similar grown-up stuff. Kids, house, medical things (a detached retina and resulting eye surgeries - recommend avoiding if possible), those ugly Trump years. Oh and that pandemic. 

 

Which brings me to now-ish. I've been doing the work-from-home thing since the early days of the pandemic with every sign of that continuing. The youngest of my children finished school almost a year ago. And the end of some of those ties to location has led to the end of the 'Sydney' part of Syd-Oracle. Moved a bit further down south, close enough that the train will still get me to Sydney in 90 minutes, but the car will get me to the beach in less than 15 and the lake in half that. Plus I'm no longer working out of a corner of the bedroom, but have a room that I can call my 'Office'. 

 

I don't plan to start blogging again. The permie job continues. Maybe next year, I'll do a post to explain Long Service Leave. Yes, Twitter may be sinking slowly into the swamp, but there's still Trump.


Monday, February 29, 2016

Client support for WITH using PL/SQL

My employer has been using 12c for about a year now, migrating away from 11gR2. It's fun working out the new functionality, including wider use of PL/SQL.

In the 'old' world, you had SQL statements that had to include PL/SQL (such as CREATE TRIGGER, CREATE PROCEDURE and so on). And you had statements that could never include PL/SQL, such as CREATE SYNONYM and CREATE SEQUENCE. DML statements (SELECT, INSERT, UPDATE, DELETE and MERGE) were in the latter category.

One of the snazzy new 12c features is the use of PL/SQL in SELECTs, so we have a new category of statements which may include PL/SQL. In some cases that confuses clients that try to interpret the semi-colons in PL/SQL as SQL statement terminators.

SQL Plus

The good news is that the 12c SQL Plus client works great (or at least I haven't got it confused yet), so it gets a grade A pass. However, if you're stuck with an older 11g client, you have to make accommodations to use this 12c stuff.

Fortunately, even the older sqlplus clients have a SET SQLTERMINATOR statement. By setting the value to OFF, the client will ignore the semi-colons. That means you'll be using the slash character on a new line to execute your SQL statements. Given the necessary workaround, I'll give it a B grade, but that's not bad for a superseded version of the client.

SET SQLTERMINATOR OFF

WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
/

SQLCL

If you grab the latest version of SQLcl (mentioned by Jeff Smith here), you'll be fine with the WITH...SELECT option. It also seemed to work fine for the other DML statements. Note that, as per the docs, "If the top-level statement is a DELETE, MERGE, INSERT, or UPDATE statement, then it must have the WITH_PLSQL hint."

INSERT /*+WITH_PLSQL */ INTO t123 
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123
  FROM dual
/

It does fall down on the CREATE statements. The CREATE TABLE, CREATE VIEW and CREATE MATERIALIZED VIEW statements all allow WITH PL/SQL, and do not require the hint. The following works fine in SQL Plus (or if you send it straight to the SQL engine via JDBC or OCI, or through dynamic SQL).

CREATE TABLE t123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123  val
  FROM dual
/

Again, there's a workaround, and sqlcl will process the statement if it does contain the WITH_PLSQL hint. However that hint isn't genuine as far as the database is concerned (ie it isn't stored in the data dictionary and won't be pulled out via DBMS_METADATA.GET_DDL). Also sqlcl doesn't support the SQL Plus SET SQLTERMINATOR command, so we can't use that workaround. Still, I'll give it a B grade.

CREATE /*+WITH_PLSQL */ TABLE t123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123  val
  FROM dual
/

SQL Developer

As of 4.1.3, SQL Developer offers the weakest support for this 12c functionality. 
[Note: Scott in Perth noted the problems back in 2014.]

Currently the plain WITH...SELECT works correctly, but DML and CREATE statements all fail when it hits the semi-colon and tries to run the statement as two or more separate SQLs. The only workaround is to execute the statement as dynamic SQL through PL/SQL.
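As a rough sketch of that workaround (reusing the throwaway t123/r123 names from the examples above), wrapping the statement in an anonymous block with EXECUTE IMMEDIATE keeps the worksheet happy, because the only terminator it has to find is the slash that ends the block:

BEGIN
  EXECUTE IMMEDIATE q'[
    CREATE TABLE t123 AS
    WITH
     FUNCTION r123 RETURN NUMBER IS
     BEGIN
       RETURN 123;
     END;
    SELECT r123 val
      FROM dual]';
END;
/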

Since it seems to share most of the parsing logic with sqlcl, I'd expect it to catch up with its younger sibling on the next release. Hopefully they'll be quicker supporting any 12cR2 enhancements.

I'll give it a 'D' until the next release. In the meantime, pair it up with SQL Plus.

TOAD 11

While I very rarely use it, I do have access to TOAD at work. TOAD recognizes blank lines as the separator between statements, so doesn't have an issue with semi-colons in the middle of SQL statements. Grade A for this functionality.

Sunday, January 31, 2016

Multisessioning with Python

I'll admit that I pretty constantly have at least one window either open into SQL*Plus or at the command line ready to run a deployment script through it. But there are times when it is worth taking a step beyond.

One problem with the architecture of most SQL clients is they connect to a database, send off a SQL statement and do nothing until the database responds back with an answer. That's a great model when it takes no more than a second or two to get the response. It is cumbersome when the statement can take minutes to complete. Complex clients, like SQL Developer, allow the user to have multiple sessions open, even against a single schema if you use "unshared" worksheets. But they don't co-ordinate those sessions in any way.

Recently I needed to run a task in a number of schemas. We're all nicely packaged up and all I needed to do was execute a procedure in each of the schemas and we can do that from a master schema with appropriate grants. However the tasks would take several minutes for each schema, and we had dozens of schemas to process. Running them consecutively in a single stream would have taken many hours and we also didn't want to set them all off at once through the job scheduler due to the workload. Ideally we wanted a few running concurrently, and when one finished another would start. I haven't found an easy way to do that in the database scheduler.

Python, on the other hand, makes it so darn simple.
[Credit to Stackoverflow, of course]

proc connects to the database, executes the procedure (in this demo just setting the client info with a delay so you can see it), and returns.
strs is a collection of parameters.
pool tells it how many concurrent operations to run. And then it maps the strings to the pool, so A, B and C will start, then as they finish, D, E, F and G will be processed as threads become available.

In my case, the collection was a list of the schema names, and the statement was more like 'begin ' + arg + '.task; end;'

#!/usr/bin/python

import cx_Oracle, time
from multiprocessing.dummy import Pool as ThreadPool

# Global variables - connection details
db   = 'host:port/service'
user = 'scott'
pwd  = 'tiger'

def proc(arg):
    # Each call gets its own connection, runs the work (here just setting
    # the client info so you can spot the session), waits, then cleans up.
    con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
    cur = con.cursor()
    cur.execute('begin sys.dbms_application_info.set_client_info(:info); end;',
                {'info': arg})
    time.sleep(10)
    cur.close()
    con.close()
    return

# The collection of parameters - one entry per task
strs = ['A', 'B', 'C', 'D', 'E', 'F', 'G']

# Make the pool of workers - at most three tasks run at any one time
pool = ThreadPool(3)
# Pass the elements of the array to the procedure using the pool.
# In this case no values are returned, so results is just a list of Nones.
results = pool.map(proc, strs)
# Close the pool and wait for the work to finish
pool.close()
pool.join()

PS. In this case, I used cx_Oracle as the glue between Python and the database.
The pyOraGeek blog is a good starting point for that.

If/when I get around to blogging again, I'll discuss jaydebeapi / jpype as an alternative. In short, cx_Oracle goes through the OCI client (eg Instant Client) and jaydebeapi takes the JVM / JDBC route.



Sunday, May 17, 2015

With PL/SQL and LONGs (and PRODUCT_USER_PROFILE)

One use for the 12.1.0.2 addition of PL/SQL functions in the WITH clause is to get the HIGH_VALUE of a partition in a usable column format.

with
 FUNCTION char2000(i_tab in varchar2, i_part in varchar2) 
 RETURN VARCHAR2 IS
   v_char varchar2(2000);
 BEGIN
   select high_value into v_char
   from user_tab_partitions a
   where a.table_name = i_tab
   and a.partition_name = i_part;
   --
   if v_char like 
     'TO_DATE(''%'', ''SYYYY-MM-DD HH24:MI:SS'', ''NLS_CALENDAR=GREGORIAN'')'
   then
      v_char := regexp_substr(v_char,q'{'[^']+'}');
   end if;
   --
   RETURN v_char;
 END;
select table_name, partition_name, 
       char2000(table_name, partition_name) high_val,
       partition_position, tablespace_name, 
       segment_created, num_rows, last_analyzed, 
       global_stats, user_stats
from user_tab_partitions ut
where segment_created='YES'
order by table_name, high_val
/

Oracle have spent well over a decade telling us that LONG is deprecated, but still persist in using it in their data dictionary. PL/SQL is the only practical way of getting the values into a more usable data type.

You will want the latest version of the SQL Plus client. For SQL, sqlplus treats the semi-colon as a "go off and execute this". PL/SQL has traditionally needed a period on an otherwise empty line to switch from the statement editor to the command prompt.

For example, having PL/SQL embedded in the SQL statement confuses the older clients, and we get a bout of premature execution.


In the 12.1.0.2 client, a WITH statement is treated as a PL/SQL statement if it contains PL/SQL (ie needing the period statement terminator). If it doesn't contain PL/SQL then it doesn't (so there's no change required for older scripts). That said, I'd recommend consistently using the period terminator for PL/SQL and SQL.  
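For illustration, a sketch of that style with a trivial function (the names are just for illustration; the "." on its own line ends the buffer, and the "/" then executes it):

WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
.
/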


The SQLcl client (still beta/early adopter) currently manages the straight select okay, but fails if it is part of a CREATE VIEW. 


Tim Hall has already noted that the WITH PL/SQL doesn't currently work when embedded in a PL/SQL block (such as a procedure), but that is expected in a future release. 

Oh, and while it isn't documented in the manual, WITH is its own statement for the purposes of PRODUCT_USER_PROFILE. I can't imagine anyone on the planet is still using PRODUCT_USER_PROFILE for security. If they are, they need to rethink in light of WITH statements and result sets being returned by PL/SQL.
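For anyone who does still rely on it, this is roughly what the profile entry would look like, modelled on the way other statements are disabled (SCOTT is just a placeholder user; run it as SYSTEM):

INSERT INTO product_profile (product, userid, attribute, char_value)
VALUES ('SQL*Plus', 'SCOTT', 'WITH', 'DISABLED');
COMMIT;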



Sunday, May 10, 2015

Oracle things that piss me off (pt 2) - No Direction

The SQL Developer team has been chugging forward with its SQL Command Line (sqlcl) tool.

As a developer, I understand where they are coming from. SQL Developer benefited from being able to run scripts built for the SQL*Plus command line tool. Then there's the temptation to add a few more useful titbits to the tool. And if it is built 'properly', then it would be relatively easy to decouple it from the GUI and have it as a stand-alone.

BUT.....

where's the big picture?

I'm pretty sure (but happy to be corrected) that "SQL Developer" is part of the 12.1 database installation. It is certainly referenced in the guides. So I'd assume that the next 12.2 release will have "SQL Developer", the "sqlcl" command line tool and SQL Plus. I couldn't guess whether sqlplus will be offered as a last gasp, "to be deprecated" option or whether the long term plan is to supply two SQL command line tools.

Unix/Linux users are probably used to something similar, as they generally have the options of different shells, such as bash, ksh, csh etc. But to remedy any confusion, scripts are generally written with a shebang so it can automatically work out which of the available shells it should use.

What DBAs are most likely to end up with is a script for which they'll have to guess whether it is aimed at sqlplus or sqlcl (or, if they are lucky, a comment at the start of the code).

Having the clients "sort of" compatible makes it worse. It is harder to tell what it is aimed at, and what might go wrong if the incorrect client is used. Plus opting for compatibility perpetuates some of the dumb crud that has accumulated in sqlplus over the decades.

For example:
This is an SQL statement:
SET ROLE ALL;
This is a directive to the SQLPlus client:
SET TIMING ON
You could tell the subtle difference between the SET as SQL statement and SET as sqlplus directive by the semi-colon at the end. Except that both sqlplus and sqlcl will happily accept a semicolon on the end of a 'local' SET command.

If you think it is hard keeping track of what commands are processed by the database, and what are processed by the client, we also have commands that do both.



16:01:49 SQL> select sysdate, to_char(sysdate), cast(sys_context('USERENV','NLS_DATE_FORMAT') as varchar2(20)) dt_fmt,
  2          cast(sys_context('USERENV','NLS_CALENDAR') as varchar2(20)) cal
  3          from dual;

SYSDATE                 TO_CHAR(SYSDATE)   DT_FMT               CAL
----------------------- ------------------ -------------------- --------------------
10/MAY/15               10/MAY/15          DD/MON/RR            GREGORIAN


16:02:35 SQL> alter session set nls_date_format = 'DD/Mon/YYYY';

Session altered.

16:02:40 SQL> select sysdate, to_char(sysdate), cast(sys_context('USERENV','NLS_DATE_FORMAT') as varchar2(20)) dt_fmt,
  2          cast(sys_context('USERENV','NLS_CALENDAR') as varchar2(20)) cal
  3          from dual;

SYSDATE            TO_CHAR(SYSDATE)     DT_FMT               CAL
------------------ -------------------- -------------------- --------------------
10/May/2015        10/May/2015          DD/Mon/YYYY          GREGORIAN

To clarify this, the statement returns one column as a DATE, which will be converted to a string by the client according to its set of rules, and one column as a string converted from a DATE by the database's set of rules.

The ALTER SESSION has been interpreted by both the client AND the server.

This becomes obvious when we do this:

16:02:44 SQL> alter session set nls_calendar='Persian';

Session altered.

16:06:22 SQL> select sysdate, to_char(sysdate),
  2       cast(sys_context('USERENV','NLS_DATE_FORMAT') as varchar2(20)) dt_fmt,
  3       cast(sys_context('USERENV','NLS_CALENDAR') as varchar2(20)) cal
  4       from dual;

SYSDATE                 TO_CHAR(SYSDATE)       DT_FMT               CAL
----------------------- ---------------------- -------------------- ----------
10 May       2015       20 Ordibehesht 1394    DD Month YYYY        Persian

The database knows what to do with the Persian calendar, but the sqlcl client didn't bother. SQLPlus copes with this without a problem, and can also detect when the NLS_DATE_FORMAT is changed in a stored procedure in the database rather than via ALTER SESSION. I assume some NLS values are available/fed back to the client via OCI.

If I was going for a brand-new SQL client, I'd draw a VERY strong line between commands meant for the client and commands intended for the database (maybe a : prefix, reminiscent of vi). I'd also consider that some years down the track, I might be using the same client to extract data from the regular Oracle RDBMS, their mySQL database, a cloud service.... 

To be honest, I'd want one tool that is aimed at deploying DDL to databases (procedures, new columns etc) and maybe data changes (perhaps through creating and executing a procedure). A lot of the rest would be better off supplied as a collection of libraries to be used with a programming language, rather than as a client tool. That way you'd get first class support for error/exception handling, looping, conditions....

PS.
When it comes to naming this tool, bear in mind this is how the XE install refers to the SQL Plus client:



Sunday, April 26, 2015

Oracle things that piss me off (pt 1)


This annoys me.
The fact that Oracle thinks it is appropriate to sell me to 'Ask' whenever I update my Oracle JRE. 


On my home machines, I've ditched the Oracle route for the JRE. Java runtime is a requirement for running Minecraft (now owned by Microsoft) and they've now incorporated keeping the JRE updated as part of their updates. No attempts to install some crappy piece of spyware on my machine.

And it is at the stage where I trust Microsoft over Oracle any day of the week.



Saturday, April 04, 2015

A world of confusion

It has got to the stage where I often don't even know what day it is. No, not premature senility (although some may disagree). But time zones.

Mostly I've had it fairly easy in my career. When I worked in the UK, I just had the one time zone to work with. The only time things got complicated was when I was working at one of the power generation companies, and we had to make provision for the 23-hour and 25-hour days that go with Daylight Savings.

And in Australia we only have a handful of timezones, and when I start and finish work, it is the same day for any part of Australia. I did work on one system where the database clock was set to UTC, but dates weren't important on that application.

Now it is different. I'm dealing with events that happen all over the world. Again the database clock is UTC, with the odd effect that TRUNC(SYSDATE) 'flips over' around lunchtime. Now when I want to look at 'recent' entries (eg a log table) I've got into the habit of asking WHERE LOG_DATE > SYSDATE - INTERVAL '9' HOUR

And we also have columns that are TIMESTAMP WITH TIME ZONE. So I'm getting into the habit of selecting COL_TS AT TIME ZONE DBTIMEZONE. I could use SESSIONTIMEZONE, but then the time component of DATE columns would be inconsistent. This becomes just a little more confusing at this time of year as various places slip in and out of Daylight Savings.
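Something like the following is the shape of those queries - LOG_TAB, LOG_DATE and LOG_TS are made-up names for a log table with a DATE column and a TIMESTAMP WITH TIME ZONE column:

SELECT log_date,
       log_ts AT TIME ZONE DBTIMEZONE log_ts_db
  FROM log_tab
 WHERE log_date > SYSDATE - INTERVAL '9' HOUR
 ORDER BY log_date;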

Now things are getting even more complicated for me.

Again, during my career, I've been lucky enough to be pretty oblivious to character set issues. Most things have squeezed in to my databases without any significant trouble. Occasionally I've had to look for some accented characters in people's names, but that's been it.

In the past few months, I've been working with some European data where the issues have been more pronounced. Aside from a few issues in emails, I've been coping quite well (with a lot of help from Google Translate). 

Now I get to work with some Japanese data. And things get complicated.

"The modern Japanese writing system is a combination of two character types: logographic kanji, which are adopted Chinese characters, and syllabic kana. Kana itself consists of a pair of syllabarieshiragana, used for native or naturalised Japanese words and grammatical elements, and katakana, used for foreign words and names, loanwordsonomatopoeia, scientific names, and sometimes for emphasis. Almost all Japanese sentences contain a mixture of kanji and kana. Because of this mixture of scripts, in addition to a large inventory of kanji characters, the Japanese writing system is often considered to be the most complicated in use anywhere in the world.[1][2]"Japanese writing system

Firstly I hit katakana. With some tables, I can get syllables corresponding to the characters and work out something that I can eyeball and match up to some English data. As an extra complication, there are also half-width characters which are semantically equivalent but occupy different codepoints in Unicode. That has parallels to upper/lower case in English, but is a modern development that came about from trying to fit the previously squarish forms into print, typewriters and computer screens.
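A quick way to see that split is to compare the codepoints directly. This sketch uses the katakana 'KA' as an example - U+30AB is the full-width form and U+FF76 is the half-width form:

SELECT unistr('\30AB') full_width,
       unistr('\FF76') half_width,
       dump(unistr('\30AB'), 1016) full_width_dump,
       dump(unistr('\FF76'), 1016) half_width_dump
  FROM dual;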

Kanji is a different order of shock. Primary school children in Japan learn the first 1000 or so characters. Another thousand plus get taught in high school. The character set is significantly larger in total.

I will have to see if the next few months cause my head to explode. In the meantime, I can recommend reading this article about the politics involved in getting characters (glyphs? letters?) into Unicode: I Can Text You A Pile of Poo, But I Can’t Write My Name

Oh, and I'm still trying to find the most useful character/font set I can have on my PC and  use practically in SQL Developer. My current choice shows the Japanese characters when I click in the field in the dataset, but only little rectangles when I'm not in the field. The only one I've found that does show up all the time is really UGLY.