I will attend this year’s event by invitation from the Oracle ACE Program. Prior to the start of the conference, I will be attending a two-day product briefing with product teams at Oracle HQ. It’s like a mini OpenWorld, but only for Oracle ACE Directors.
During the briefing, Oracle product managers talk about the latest and greatest product news. They also share super-secret information that has not yet been made public. I will report this information to you here on awads.net and via Twitter, unless of course it is protected by a non-disclosure agreement.
See you there!
A recent addition to my Oracle PL/SQL library is the book Oracle PL/SQL Performance Tuning Tips & Techniques by Michael Rosenblum and Dr. Paul Dorsey.
I agree with Steven Feuerstein’s review that “if you write PL/SQL or are responsible for tuning the PL/SQL code written by someone else, this book will give you a broader, deeper set of tools with which to achieve PL/SQL success”.
In the foreword of the book, Bryn Llewellyn writes:
The database module should be exposed by a PL/SQL API. And the details of the names and structures of the tables, and the SQL that manipulates them, should be securely hidden from the application server module. This paradigm is sometimes known as “thick database.” It sets the context for the discussion of when to use SQL and when to use PL/SQL. The only kind of SQL statement that the application server may issue is a PL/SQL anonymous block that invokes one of the API’s subprograms.
I subscribe to the thick database paradigm. The implementation details of how a transaction is processed and where the data is stored in the database should be hidden behind PL/SQL APIs. Java developers do not have to know how the data is manipulated or which tables the data is persisted in; they just have to call the API.
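As a minimal sketch of what such an API might look like (the package, table, and column names here are hypothetical, invented for illustration):

```sql
-- Hypothetical API package: callers never see the underlying tables.
CREATE OR REPLACE PACKAGE order_api AS
  PROCEDURE place_order (
    p_customer_id IN  NUMBER,
    p_product_id  IN  NUMBER,
    p_quantity    IN  NUMBER,
    p_order_id    OUT NUMBER
  );
END order_api;
/

CREATE OR REPLACE PACKAGE BODY order_api AS
  PROCEDURE place_order (
    p_customer_id IN  NUMBER,
    p_product_id  IN  NUMBER,
    p_quantity    IN  NUMBER,
    p_order_id    OUT NUMBER
  ) IS
  BEGIN
    -- All SQL lives here; table names and transaction logic stay hidden
    -- from the application server.
    INSERT INTO orders (customer_id, product_id, quantity)
    VALUES (p_customer_id, p_product_id, p_quantity)
    RETURNING order_id INTO p_order_id;
  END place_order;
END order_api;
/
```

The application server then issues only an anonymous block such as `BEGIN order_api.place_order(:1, :2, :3, :4); END;`, exactly the kind of call Bryn describes.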
However, like Bryn, I have seen many projects where all calls to the database are implemented as SQL statements that directly manipulate the application’s database tables. The manipulation is usually done via an ORM framework such as Hibernate.
In the book, the authors share a particularly bad example of this design. A single request from a client machine generated 60,000 round-trips from the application server to the database. They explain the reason behind this large number:
Java developers who think of the database as nothing more than a place to store persistent copies of their classes use Getters and Setters to retrieve and/or update individual attributes of objects. This type of development can generate a round-trip for every attribute of every object in the database. This means that inserting a row into a table with 100 columns results in a single INSERT followed by 99 UPDATE statements. Retrieving this record from the database then requires 100 independent queries. In the application server.
Wow! That’s bad. Multiply this by 100 concurrent requests and users will start complaining about a “slow database”. NoSQL to the rescue!
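To make the contrast concrete (the table and column names below are invented for illustration), a single parameterized INSERT carries every attribute in one round-trip:

```sql
-- One round-trip: all columns bound in a single INSERT.
INSERT INTO customers (id, name, email /* ... remaining columns ... */)
VALUES (:id, :name, :email /* ... remaining bind variables ... */);

-- Versus the getter/setter pattern's 1 INSERT + 99 UPDATEs:
-- INSERT INTO customers (id) VALUES (:id);
-- UPDATE customers SET name  = :name  WHERE id = :id;
-- UPDATE customers SET email = :email WHERE id = :id;
-- ... 97 more single-column UPDATEs, each a separate round-trip ...
```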
The UTL_FILE database package is used to read from and write to operating system directories and files. By default, PUBLIC is granted execute permission on UTL_FILE. Therefore, any database account may read from and write to files in the directories specified in the UTL_FILE_DIR database initialization parameter [...] Security considerations with UTL_FILE can be mitigated by removing all directories from UTL_FILE_DIR and using the Directory functionality instead.
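The directory-object alternative the note recommends looks like this (the directory path, object name, and grantee are examples):

```sql
-- Create a directory object and grant access only to accounts that need it,
-- instead of exposing every UTL_FILE_DIR path to PUBLIC.
CREATE DIRECTORY app_logs AS '/u01/app/logs';
GRANT READ, WRITE ON DIRECTORY app_logs TO app_user;

-- PL/SQL then opens files via the directory object, not a raw path.
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('APP_LOGS', 'app.log', 'w');
  UTL_FILE.PUT_LINE(l_file, 'log entry');
  UTL_FILE.FCLOSE(l_file);
END;
/
```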
Steven Feuerstein was dismayed when he found in a PL/SQL procedure a cursor FOR loop that contained an INSERT and an UPDATE statement.
That is a classic anti-pattern, a general pattern of coding that should be avoided. It should be avoided because the inserts and updates are changing the tables on a row-by-row basis, which maximizes the number of context switches (between SQL and PL/SQL) and consequently greatly slows the performance of the code. Fortunately, this classic anti-pattern has a classic, well-defined solution: use BULK COLLECT and FORALL to switch from row-by-row processing to bulk processing.
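A sketch of that refactoring (the cursor and table names are hypothetical):

```sql
DECLARE
  CURSOR c_src IS SELECT * FROM staging_orders;
  TYPE t_rows IS TABLE OF c_src%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- Fetch a batch of rows at a time instead of one row per fetch.
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;

    -- FORALL sends the whole batch to SQL in one context switch;
    -- the same technique works for the UPDATE statement.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO orders VALUES l_rows(i);
  END LOOP;
  CLOSE c_src;
END;
/
```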
However, as Marco reported from Oracle OpenWorld, native JSON support may be an upcoming new feature in Oracle Database 12c.
This new feature allows the storage of JSON documents in table columns with existing data types like VARCHAR2, CLOB, RAW, BLOB and BFILE.
A new check constraint makes sure only valid JSON is inserted.
For example: CHECK (column IS JSON).
New built-in operators allow you to work with stored JSON documents. For example, JSON_VALUE enables you to query JSON data and return the result as a SQL value. Other operators include JSON_QUERY, JSON_EXISTS and JSON_TABLE.
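Based on the syntax Marco describes, storing and querying a JSON document might look like this (the table and column names are examples, and the feature itself is not yet released):

```sql
-- Store JSON in an ordinary CLOB column; the check constraint
-- rejects anything that is not well-formed JSON.
CREATE TABLE purchase_orders (
  id  NUMBER PRIMARY KEY,
  doc CLOB CONSTRAINT po_is_json CHECK (doc IS JSON)
);

INSERT INTO purchase_orders VALUES (1, '{"customer":"Acme","total":99.50}');

-- JSON_VALUE extracts a scalar from the stored document as a SQL value.
SELECT JSON_VALUE(doc, '$.customer') AS customer
FROM   purchase_orders
WHERE  id = 1;
```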
Cool stuff!
Head to the Content Catalog and start downloading your favorite sessions. No registration needed. Sessions will be available for download until March 2014.
Note that some presenters chose not to make their sessions available.
Via the Oracle OpenWorld Blog.
The in-memory component duplicates data (specified tables – perhaps with a restriction to a subset of columns) in columnar format in a dedicated area of the SGA. The data is kept up to date in real time, but Oracle doesn’t use undo or redo to maintain this copy of the data because it’s never persisted to disc in this form, it’s recreated in-memory (by a background process) if the instance restarts. The optimizer can then decide whether it would be faster to use a columnar or row-based approach to address a query.
The intent is to help systems which are mixed OLTP and DSS – which sometimes have many “extra” indexes to optimise DSS queries that affect the performance of the OLTP updates. With the in-memory columnar copy you should be able to drop many “DSS indexes”, thus improving OLTP response times – in effect the in-memory stuff behaves a bit like non-persistent bitmap indexing.
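Assuming the eventual syntax follows the announced INMEMORY clause (this is speculative for a feature only just described at OpenWorld, and the table and index names are invented), opting a table into the column store might look like:

```sql
-- Copy the table into the in-memory column store.
ALTER TABLE sales INMEMORY;

-- Optionally exclude columns that analytic queries never touch.
ALTER TABLE sales INMEMORY NO INMEMORY (notes, audit_blob);

-- A "DSS index" kept only for reporting queries can then be dropped,
-- removing its maintenance cost from OLTP updates.
DROP INDEX sales_dss_ix;
```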
Speaking of new features, here is what’s new in Oracle Database, SQL and PL/SQL from 9iR1 until 12cR1.
To reverse engineer an existing database into a relational model, I used SQL Developer Data Modeler, a free data modeling and database design tool from Oracle.
I had a problem with the tool: I could not save the model. It appeared to be saved, but when I reopened the .dmd file, the relational model was nowhere to be found.
I tried all kinds of combinations on my Windows 7 64-bit laptop, like using JDK 6 vs. JDK 7, 32-bit vs. 64-bit versions, etc. No luck.
Then I stumbled upon this Oracle Forum thread while searching for a solution online. The poster suggested that enabling support for version control in the tool solved the issue.
I had versioning support disabled in SQLdev Data Modeler.
Following the hints in the forum post, I enabled it (Tools > Preferences > Extensions > toggle Versioning Support), restarted SQLdev Data Modeler, and voilà! I can now save my relational models!
There was no way I could have guessed that versioning support was interfering with saving relational models. I am guessing this is a bug.
Since a lack of histograms or freezing CBO statistics does not guarantee plan stability, do not rely on these two myths. If what you are looking for is plan stability, then use SQL Plan Management, available since 11g, or SQL Profiles, available from 10g.
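As a sketch of putting SQL Plan Management to work, a plan already in the cursor cache can be captured as an accepted baseline (the sql_id below is a placeholder, not a real identifier):

```sql
-- Load the current cursor-cache plan for one statement as an
-- accepted SQL plan baseline; the optimizer will then prefer it.
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id => '4b5r2xq0w9abc');  -- placeholder sql_id
  DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
END;
/
```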