News, views, tips and tricks on Oracle and other fun stuff

Database as a Storage (DBaaS) vs. Thick Database

A recent addition to my Oracle PL/SQL library is the book Oracle PL/SQL Performance Tuning Tips & Techniques by Michael Rosenblum and Dr. Paul Dorsey.

I agree with Steven Feuerstein’s review that “if you write PL/SQL or are responsible for tuning the PL/SQL code written by someone else, this book will give you a broader, deeper set of tools with which to achieve PL/SQL success”.

In the foreword of the book, Bryn Llewellyn writes:

The database module should be exposed by a PL/SQL API. And the details of the names and structures of the tables, and the SQL that manipulates them, should be securely hidden from the application server module. This paradigm is sometimes known as “thick database.” It sets the context for the discussion of when to use SQL and when to use PL/SQL. The only kind of SQL statement that the application server may issue is a PL/SQL anonymous block that invokes one of the API’s subprograms.

I subscribe to the thick database paradigm. The implementation details of how a transaction is processed and where the data is stored in the database should be hidden behind PL/SQL APIs. Java developers do not have to know how the data is manipulated or which tables it is persisted in; they just have to call the API.
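
As a minimal sketch of what that looks like (the package, table, and column names are mine, purely illustrative), the application layer sees only a package specification, while the tables stay private to the database schema:

-- Hypothetical API package: the only object the app-server schema can see.
CREATE OR REPLACE PACKAGE order_api AS
  PROCEDURE place_order (
    p_customer_id IN  NUMBER,
    p_product_id  IN  NUMBER,
    p_quantity    IN  NUMBER,
    p_order_id    OUT NUMBER);
END order_api;
/

CREATE OR REPLACE PACKAGE BODY order_api AS
  PROCEDURE place_order (
    p_customer_id IN  NUMBER,
    p_product_id  IN  NUMBER,
    p_quantity    IN  NUMBER,
    p_order_id    OUT NUMBER)
  IS
  BEGIN
    -- Table names and transaction logic are hidden in here;
    -- assumes orders.order_id is an identity column.
    INSERT INTO orders (customer_id, product_id, quantity)
    VALUES (p_customer_id, p_product_id, p_quantity)
    RETURNING order_id INTO p_order_id;
  END place_order;
END order_api;
/

The application server then issues nothing but the anonymous block Bryn describes:

BEGIN
  order_api.place_order(:cust_id, :prod_id, :qty, :order_id);
END;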

However, like Bryn, I have seen many projects where all calls to the database are implemented as SQL statements that directly manipulate the application’s database tables. The manipulation is usually done via an ORM framework such as Hibernate.

In the book, the authors share a particularly bad example of this design. A single request from a client machine generated 60,000 round-trips from the application server to the database. They explain the reason behind this large number:

Java developers who think of the database as nothing more than a place to store persistent copies of their classes use Getters and Setters to retrieve and/or update individual attributes of objects. This type of development can generate a round-trip for every attribute of every object in the database. This means that inserting a row into a table with 100 columns results in a single INSERT followed by 99 UPDATE statements. Retrieving this record from the database then requires 100 independent queries in the application server.

Wow! That’s bad. Multiply this by 100 concurrent requests and users will start complaining about a “slow database”. NoSQL to the rescue!
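
In SQL terms, the attribute-at-a-time pattern the authors describe looks roughly like this (a hypothetical persons table with invented columns):

-- Attribute-at-a-time persistence: one round-trip per statement.
INSERT INTO persons (id) VALUES (101);
UPDATE persons SET first_name = 'Ada' WHERE id = 101;
UPDATE persons SET last_name = 'Lovelace' WHERE id = 101;
-- ... and so on, one UPDATE per remaining column ...

-- The same work in a single round-trip:
INSERT INTO persons (id, first_name, last_name /* , ... */)
VALUES (101, 'Ada', 'Lovelace' /* , ... */);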



Avoid UTL_FILE_DIR Security Weakness – Use Oracle Directories Instead

Integrigy:

The UTL_FILE database package is used to read from and write to operating system directories and files. By default, PUBLIC is granted execute permission on UTL_FILE. Therefore, any database account may read from and write to files in the directories specified in the UTL_FILE_DIR database initialization parameter [...] Security considerations with UTL_FILE can be mitigated by removing all directories from UTL_FILE_DIR and using the Directory functionality instead.
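
A hedged sketch of that mitigation (the path, directory object, and user names are illustrative):

-- As a DBA: clear the parameter (it is static; takes effect after a restart).
ALTER SYSTEM SET UTL_FILE_DIR = '' SCOPE = SPFILE;

-- Create a directory object and grant it only to the accounts that need it.
CREATE DIRECTORY app_log_dir AS '/u01/app/logs';
GRANT READ, WRITE ON DIRECTORY app_log_dir TO batch_user;

-- In batch_user's code, UTL_FILE references the directory object by name:
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('APP_LOG_DIR', 'batch.log', 'w');
  UTL_FILE.PUT_LINE(l_file, 'Job started');
  UTL_FILE.FCLOSE(l_file);
END;
/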


Are You Using BULK COLLECT and FORALL for Bulk Processing Yet?

Steven Feuerstein was dismayed when he found, in a PL/SQL procedure, a cursor FOR loop that contained INSERT and UPDATE statements.

That is a classic anti-pattern: a general pattern of coding that should be avoided. It should be avoided because the INSERTs and UPDATEs change the tables row by row, which maximizes the number of context switches between SQL and PL/SQL and consequently slows the code dramatically. Fortunately, this classic anti-pattern has a classic, well-defined solution: use BULK COLLECT and FORALL to switch from row-by-row processing to bulk processing, as sketched below.
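
Here is a minimal before-and-after sketch (the employees table and the 10% raise are invented for illustration):

-- The anti-pattern: one context switch per row.
BEGIN
  FOR r IN (SELECT employee_id FROM employees) LOOP
    UPDATE employees
       SET salary = salary * 1.1
     WHERE employee_id = r.employee_id;
  END LOOP;
END;
/

-- Bulk processing: two context switches per batch of 100 rows.
DECLARE
  CURSOR c IS SELECT employee_id FROM employees;
  TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 100;
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT
      UPDATE employees
         SET salary = salary * 1.1
       WHERE employee_id = l_ids(i);
  END LOOP;
  CLOSE c;
END;
/

Of course, when the logic is this simple a single UPDATE statement beats both; BULK COLLECT and FORALL earn their keep when the per-row logic cannot be expressed in one SQL statement.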



Logger: a PL/SQL Logging and Debugging Framework

Martin D’Souza:

Logger is a PL/SQL logging and debugging framework. It’s used in many organizations to instrument code in their Oracle applications. Tyler Muth created Logger a few years ago and has since released several upgrades, the last being 1.4.0. After some great feedback, I’m pleased to announce that we’ve just launched Logger 2.0.0 Beta.

Instrumentation, logging and debugging utilities are essential in any PL/SQL development project.
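
For a flavor of the API, a typical instrumented block looks roughly like this (the scope string and parameter names are mine; check the Logger documentation for the exact signatures):

DECLARE
  l_scope  VARCHAR2(100) := 'demo.process_order';
  l_params logger.tab_param;
BEGIN
  logger.append_param(l_params, 'p_order_id', 42);
  logger.log('Start', l_scope, NULL, l_params);
  -- ... business logic ...
  logger.log('Done', l_scope);
END;
/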



Oracle Magazine PL/SQL Columns by Steven Feuerstein, the Whole List

Dating back to January 2003.



See How Easily You Can Improve Performance by Using These Five Data Caching Techniques

I will be presenting a free webinar, brought to you by ODTUG, on Tuesday, July 17, 9:00 AM – 10:00 AM PDT. I will talk about and demonstrate SQL and PL/SQL caching techniques that can considerably improve the performance of your queries and programs. The webinar will cover the following five features (a brief sketch of two of them follows the list):

  • SQL Query Result Cache
  • PL/SQL Function Result Cache
  • Package-Based Cache
  • Deterministic Function Cache
  • Scalar Subquery Cache
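
As a taste of two of them, here is a hedged sketch (table and function names invented): a RESULT_CACHE function whose results are shared across sessions, and a scalar subquery wrapper that lets the SQL engine cache calls within a single query.

-- PL/SQL Function Result Cache (11g+): results are cached in the SGA
-- and shared across sessions until the underlying data changes.
CREATE OR REPLACE FUNCTION get_country_name (p_code IN VARCHAR2)
  RETURN VARCHAR2
  RESULT_CACHE
IS
  l_name countries.country_name%TYPE;
BEGIN
  SELECT country_name INTO l_name
    FROM countries
   WHERE country_code = p_code;
  RETURN l_name;
END;
/

-- Scalar Subquery Cache: wrapping the call in (SELECT ... FROM dual)
-- lets the SQL engine reuse results for repeated inputs within this query.
SELECT o.order_id,
       (SELECT get_country_name(o.country_code) FROM dual) AS country
  FROM orders o;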

Click here to register.

Update:
The webinar went very well. I thank all of you who attended and ODTUG for hosting it.
Click here to download a zip file containing the following files:

  • PowerPoint presentation
  • Corresponding white paper
  • All the demo scripts


Row versus Set Processing, Surprise!

Craig Shallahamer:

1. Set based processing will likely be much faster than row based processing. Our experiment of processing 100K rows showed row based processing was 3700 times slower than set based processing. Not twice as slow or even 10 times slower… 3700 times slower!

2. Compared to set based processing, row based processing times degrade much quicker than set based processing. That is, row based processing does not scale nearly as well as set based processing. We saw this in that the linear trend line for row based processing was 0.00259 compared to 0.00000 for set based processing.
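
The shape of the two approaches, as a hedged sketch (the staging and target tables are invented):

-- Row-based: 100K separate INSERT executions, one per row.
BEGIN
  FOR r IN (SELECT id, amount FROM sales_staging) LOOP
    INSERT INTO sales (id, amount) VALUES (r.id, r.amount);
  END LOOP;
  COMMIT;
END;
/

-- Set-based: one statement, all rows processed inside the SQL engine.
INSERT INTO sales (id, amount)
SELECT id, amount FROM sales_staging;
COMMIT;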


Bind, don’t concatenate, to optimize performance and simplify dynamic string construction

Steven Feuerstein gives us a great, simple example of how to use bind variables to avoid concatenating variable values into dynamic SQL strings and, at the same time, improve performance by an order of magnitude:

When you use bind variables, you greatly simplify the task of writing the dynamic SQL string. You don’t have to write all that concatenating code and you don’t have to perform datatype conversions. The USING clause automatically performs native binding of the appropriate types.

He also notes:

You can bind only variable values. You can’t bind in the names of tables or columns, nor can you bind in parts of a SQL statement structure, such as the entire WHERE clause. In these cases, you must use concatenation.
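
Putting both notes together in one hedged sketch (table and column names are mine):

DECLARE
  l_count PLS_INTEGER;
  l_dept  VARCHAR2(30) := 'SALES';
BEGIN
  -- The value is bound via USING: no quoting, no datatype conversion code.
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM employees WHERE department = :dept'
    INTO l_count
    USING l_dept;

  -- An identifier (here, the table name) cannot be bound; it must be
  -- concatenated, ideally after validation with DBMS_ASSERT.
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM '
      || DBMS_ASSERT.simple_sql_name('employees')
      || ' WHERE department = :dept'
    INTO l_count
    USING l_dept;
END;
/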


User Exceptions Gotcha, When an Error Appears as a Success

This is a guest post by David Clement, a veteran database engineer, a coworker and a friend of mine. You can find him online at davidclement.org.

Here is an interesting feature that’s worth knowing about because of the potential for strange bugs. When you run a SQL script in the Unix shell, if you use a user-defined exception, you have to take care to prevent failure and success from returning the same result.

The following code snippet shows a normal way to return an error code to the Unix shell from a SQL script. I saved this as return_error_code.sql:
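
The snippet itself is behind the “Continue reading” link below; as a rough reconstruction of the usual pattern (mine, not necessarily David’s exact file):

-- return_error_code.sql (sketch)
WHENEVER SQLERROR EXIT SQL.SQLCODE

DECLARE
  e_batch_failed EXCEPTION;
BEGIN
  RAISE e_batch_failed;  -- simulate the failure path
END;
/
EXIT 0

Running it with sqlplus -s ... @return_error_code.sql and then echo $? shows what the shell actually sees: only the low-order byte of the exit value, which is one way a failure can come back looking exactly like success.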

Continue reading…



Working with Microsoft Office 2007 (OOXML) files using Oracle PL/SQL

Morten Braten:

I have written a package for working with OOXML documents. It’s called OOXML_UTIL_PKG and you can download it as part of (you guessed it) the Alexandria utility library for PL/SQL.