Knowledge increases exponentially. Today, you probably own more books than the great universities of times past—Cambridge University owned fewer than two hundred books in the fifteenth century. First came the invention of writing, then alphabets, then paper, then the printing press, then mechanization. Each step caused an exponential increase in the collective human knowledge. In our generation, the internet has swept away the last barriers to the spread of knowledge. Today, everybody has the ability to contribute, communicate, and collaborate. We are all caught up in a tsunami, an avalanche, a conflagration, a veritable explosion of knowledge for the betterment of humankind. This is the blog of the good folks at Database Specialists, a brave band of Oracle database administrators from the great state of California. We bid you greeting, traveler. We hope you find something of value on these pages and we wish you good fortune in your journey.

White Paper on Oracle 11g Enterprise User Security now available!

Managing database user accounts in a large enterprise can be very challenging.  Creating individual accounts for every person in every database is time-consuming for DBAs, and managing different passwords in every database is time-consuming for both DBAs and users.  When people leave the enterprise, it is critical to remove their access immediately, but this can be difficult if you have many databases.  At the same time, the cost of not properly managing these accounts is that your databases are more vulnerable to security breaches. Oracle offers a solution to this in the form of Enterprise User Security (EUS), which is part of the Oracle Identity Management product. With EUS, user accounts are created and managed in an LDAP repository called Oracle Internet Directory (OID).  OID is responsible for authenticating the users as well as managing the access rights of each user.

I recently wrote a white paper called “Implementing Oracle 11g Enterprise User Security” that explains how to install and configure EUS. In it, I give step-by-step instructions on how to install the software components required to run Oracle Identity Management, and also how to implement Enterprise User Security to handle a typical real-world scenario in which different people need different levels of access to a number of databases.

You can find it here: http://www.dbspecialists.com/presentations.html#implementing_oracle_11g_enterprise_user_security

I hope you enjoy it.  Please feel free to give feedback.

Quick Reference Guide for Managing Physical Standby Databases

Using a Physical Standby database (aka Data Guard) is one of the most effective ways to protect your database and improve its availability.  My colleague Jay Stanley wrote a great blog on this subject: http://www.dbspecialists.com/blog/database-backups/need-uptime-use-and-oracle-physical-standby-database/.

The Standby database can be activated in the event of a disaster, or the roles can be swapped for maintenance reasons, such as upgrading the host OS.  In that case, you can do a switchover to the Standby, upgrade the Primary host and then switch back to the Primary, all with potentially just a few minutes of downtime and zero data loss.  A failover occurs when the Primary becomes unavailable and the Standby needs to be used for production operations.  This is not typically a planned exercise and is done after some sort of disaster renders the Primary unavailable.  A failover could result in data loss, depending on the protection mode that has been configured.

In this blog, I will demonstrate how to accomplish several common tasks for maintaining Physical Standby databases, including how to do both a switchover (for maintenance) and a failover (for disaster).  There are actually several different ways to accomplish these tasks: Grid Control, the Data Guard command line (DGMGRL) and SQL*Plus.  Because of the nature of the remote DBA work that I do, I like to use SQL*Plus as it is the one tool that I can count on at any client site.

I highly recommend that you practice all of these commands in a non-production environment.  All of this was done using 11.2.0.2 Enterprise Edition on 64-bit Linux.

Start a Standby Database in Redo Apply Mode with no delay

startup mount;
alter database recover managed standby database disconnect;

Start the Standby Database in Redo Apply Mode with a 120 minute delay

startup mount;
alter database recover managed standby database DELAY 120 disconnect;

Start the Standby Database in Real-Time Apply Mode

In order to use Real-Time Apply, the Standby database must have Standby Redo Logs.  For more details, see Oracle Support Note: 828274.1 Data Guard Real-Time Apply FAQ.

startup mount;
alter database recover managed standby database using current logfile disconnect;
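
If the Standby does not yet have Standby Redo Logs, here is a hedged sketch of adding them (the group numbers, file path and size are illustrative; size them the same as your online redo logs and create one more standby group per thread than you have online groups):

alter database add standby logfile group 4 ('/u01/oradata/PROD/srl04.log') size 512m;
alter database add standby logfile group 5 ('/u01/oradata/PROD/srl05.log') size 512m;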

Stop Redo Apply and open the Standby Database for Read Only access

alter database recover managed standby database cancel;
alter database open read only;
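
When you are done with read-only access, you can resume Redo Apply; a minimal sketch (restart from the mount state unless you are licensed for Active Data Guard, which allows Redo Apply while the database is open read only):

shutdown immediate;
startup mount;
alter database recover managed standby database disconnect;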

Perform a Switchover to a Physical Standby Database

Step 1: Make sure everything is ready

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary database.

Select switchover_status from v$database;
SWITCHOVER_STATUS
-----------------
 TO STANDBY
 1 row selected

A value of TO STANDBY or SESSIONS ACTIVE indicates that the primary database can be switched to the standby role. If neither of these values is returned, a switchover is not possible because redo transport is either misconfigured or not functioning properly.  Shut down the application, disconnect all sessions and stop job processing.  You want to ensure that there are no active transactions occurring in order to minimize the time it takes to switch over.  If you have a delay configured, cancel and restart Redo Apply without it.

select delay_mins from v$managed_standby;

Note:  According to the docs, this should show the delay but in my testing, this value was always zero.  However, this message in the alert log indicates that there is a delay:

Media Recovery Delayed for 120 minute(s) (thread 1 sequence 23)

alter database recover managed standby database cancel;
alter database recover managed standby database NODELAY disconnect;

Check to see that the Standby has received and processed the latest archive log from the Primary.  Run this query on the Standby and compare against the latest sequence generated from the Primary.

select al.thrd "Thread",
almax "Last Seq Received",
lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
     from v$archived_log
     where resetlogs_change#=(select resetlogs_change# from v$database)
     group by thread#) al,
    (select thread# thrd, max(sequence#) lhmax
     from v$log_history
     where first_time=(select max(first_time) from v$log_history)
     group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1                74               74

Run this on the Primary to see the latest sequence generated.  The last sequence applied to the Standby should be within 1 or 2 of this number:

SQL> select thread#, sequence# from v$thread;
THREAD#  SEQUENCE#
---------- ----------
1           75

If there is a big difference between the last log generated and the last one applied, it could be because the Standby is not processing fast enough or there could be a gap in the archive logs.  You can check for this by querying v$archive_gap.  You should resolve any such issues prior to doing the switchover.
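
A hedged example of the gap check on the Standby (no rows returned means no gap is currently detected):

select thread#, low_sequence#, high_sequence# from v$archive_gap;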

Step 2: Convert the Primary to Standby:

alter database commit to switchover to physical standby with session shutdown;
shutdown abort;
startup mount;

Step 3: Convert the standby to Primary:

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the standby database.

SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
-----------------
TO PRIMARY

A value of TO PRIMARY or SESSIONS ACTIVE indicates that the standby database is ready to be switched to the primary role. If neither of these values is returned, verify that Redo Apply is active and that redo transport is configured and working properly. Continue to query this column until the value returned is either TO PRIMARY or SESSIONS ACTIVE.

alter database commit to switchover to primary with session shutdown;
alter database open;

Step 4: Start Redo Apply on the new physical standby database and any others that exist.

alter database recover managed standby database using current logfile disconnect;

At this point, the databases have switched roles.  Confirm that the new Standby is receiving and applying logs from the new Primary.  You can do the same steps again to reverse them back to their original roles.  You should take a backup of the new Primary now.
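
One hedged way to confirm this on the new Standby is to watch the recovery processes; the MRP0 process should show a status of APPLYING_LOG and the sequence numbers should keep advancing:

select process, status, thread#, sequence# from v$managed_standby;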

Perform a Failover to a Physical Standby Database

Step 1:  Flush any unsent redo from the primary database to the target standby database.

If the primary database can be mounted, it may be possible to flush any unsent archived and current redo from the primary database to the standby database. If this operation is successful, a zero data loss failover is possible even if the primary database is not in a zero data loss data protection mode. Mount, but do not open the primary database. If the primary database cannot be mounted, go to Step 2.

Issue the following SQL statement at the primary database:

alter system flush redo to <Standby DB Name>;

For <Standby DB Name>, specify the DB_UNIQUE_NAME of the standby database that is to receive the redo flushed from the primary database. This statement flushes any unsent redo from the primary database to the standby database, and waits for that redo to be applied to the standby database. If this statement completes without any errors, go to Step 3. If the statement completes with any error, or if it must be stopped because you cannot wait any longer for the statement to complete, continue with Step 2.

Step 2:   Verify that the standby database has the most recently archived redo log file for each primary database redo thread and that there are no gaps. 

Query the V$ARCHIVED_LOG and V$ARCHIVE_GAP views on the target standby database to obtain the highest log sequence number for each redo thread and to check for gaps.  If necessary, manually copy the missing archive logs to the Standby and register them with the command shown below.
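
A hedged example of the sequence check on the standby (the v$archive_gap query shown in the switchover section applies here as well):

select thread#, max(sequence#) "Highest Seq Received" from v$archived_log group by thread#;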

 alter database register physical logfile '<full path and file name>';

 Step 3:  Stop Redo Apply and finish applying all received redo data.

alter database recover managed standby database cancel;
alter database recover managed standby database finish;

Step 4: Open the new Primary database

4a) If the “alter database…finish” statement in Step 3 fails, try to resolve the problem.  If it cannot be resolved, a failover can still be performed (with some data loss) by issuing the following SQL statement on the target standby database:

alter database activate physical standby database;
alter database open; 

4b) If the “alter database…finish” statement completes without errors, then you can complete the failover with no data loss.  This gives you the option of switching back to the original primary after you resolve whatever caused it to become unavailable.

alter database commit to switchover to primary with session shutdown;
alter database open;

At this point, you should take a backup of the new Primary database.  If you have other Physical Standby databases, then you can restart the redo apply now.

Flashback a Failed Primary Database into a Physical Standby Database

In the event of a failover, the original Primary database will need to be rebuilt or flashed back before it can become a Standby.  The following steps assume that a failover has been performed to a physical standby database and that Flashback Database was enabled on the old primary database at the time of the failover. This procedure brings the old primary database back into the Data Guard configuration as a physical standby database.

Step 1: On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:

Select to_char(standby_became_primary_scn) from v$database;

Step 2:   Flashback and convert the database to a physical standby database.

Issue the following statements on the old primary database:

shutdown immediate;
startup mount;
flashback database to scn <SCN from Step 1>;
alter database convert to physical standby;
shutdown immediate;
startup mount;
alter database recover managed standby database using current logfile disconnect;

 Step 3:   Start transporting redo to the new physical standby database.

Configure the log_archive_dest_2 parameter and enable it:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;
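
A hedged sketch of what this might look like on the new primary (the destination number 2, service name and DB_UNIQUE_NAME are illustrative; use the values from your own configuration):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=oldprim ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oldprim' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;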

On the new standby database, you may also need to change the LOG_ARCHIVE_DEST_n initialization parameters so that redo transport services do not transmit redo data to other databases.  Confirm that the new Standby is receiving and applying logs from the new Primary.

Here at Database Specialists, we all have many years of experience working with Standby databases.  If you have questions or problems with your database, feel free to contact us and we will give you a hand.

An introduction to emacs SQLi mode

Introduction

Early in my career, when I was first introduced to Unix, my mentor at the time recommended that I use a text editor called emacs, but also learn how to use vi as well. I personally like both editors, but I prefer emacs for coding. Over the decades, I’ve continued to use emacs; these days I use it for nearly everything work-related that doesn’t fit in a web browser. I use it for email (GNUS), writing presentations and papers (org-mode), and coding in every possible language and system imaginable (perl-mode, SQLi mode, gnuplot-mode, various lisp-modes, and more). I’ve found that I’m usually much more productive using emacs than using a full-featured integrated development environment like Eclipse, SQL Developer, or Toad, except in a very few special cases. Over the years, there have been many changes and improvements to the original SQL mode and related pl-sql mode; these have all been superseded by a new mode called SQLi mode. SQLi mode has support for most databases today, including Oracle and MySQL among many others.

In this quick tutorial, I’ll show how to set up emacs version 24.2.1 to work with an Oracle database, and what you can do with this setup.

What does SQLi mode do?

SQLi mode makes it easier to interact with SQL*Plus.

Developing and administering Oracle ends up going something like this:

  1. Write a script or SQL statement
  2. Start a sqlplus session in another window
  3. Test your script in the new window by invoking @scriptname
  4. Look at the output to see what went wrong
  5. Fix script
  6. Switch back to your editing buffer, and repeat from step 3 above.

It’s not very fun, because of the constant switching between the terminal in which you’re running sqlplus, and the terminal you’re editing in. It’s also not fun having to retype the SQL file name over & over again. SQLi mode automates this so that you can send either the latest statement, or a highlighted region, or an entire buffer to sqlplus with just a few quick keystrokes. It gives you a very useful ability to re-execute previous bits of SQL you’ve used. Finally, it highlights and color-codes your SQL and pl/sql code which makes it very nice to code in – for example, it becomes very easy to see comments, as they show up in red.

SQLi mode comes automatically with Emacs v24, so there’s nothing extra that you need to install.

Finding more in-depth help about SQLi mode

Just enter M-x sql-help[return] to open a buffer with documentation about SQLi mode.

Configuring sql-mode to access an Oracle database

The easiest way to set it up is to run customize-group for the SQL group: [escape] x customize-group [return] SQL [return], or M-x customize-group [return] SQL [return]. A buffer will be created with some options you’ll probably want to change as follows:

Option              Change/description
Sql Database        change to your target database name
SQL Oracle Program  full path to 'sqlplus'
SQL Password        oracle account password
SQL Product         set to 'Oracle'
SQL Server          database host (or localhost if using port forwarding)
SQL User            oracle username

Note that I’ve been able to get this working even on Windows 7, under emacs running under cygwin/X, with sqlplus installed under Windows. In that environment, the cygwin path to sqlplus will need to be configured for the SQL Oracle Program; for me it is /cygdrive/c/product/11.1.0/client_2/SQLPLUS.EXE.

If you do not wish to be prompted for login credentials, you can optionally add (defalias 'sql-get-login 'ignore) to your .emacs initialization file.

If you do not have global-font-lock-mode set, you may wish to enable it so that whenever you visit some SQL, it will automatically be displayed in color.

Starting it up

First, start the *SQL* buffer, which actually runs sqlplus. Just type M-x sql-oracle[return] – unless you have set sql-get-login, it will prompt for the username, password, and database; if you have set the variables above they will be the defaults.

Normally, this *SQL* buffer is not directly manipulated; instead, start up a new buffer/file: M-x find-file[return] /home/jstanley/test.sql. You’ll notice that it will automatically be put into SQL[Oracle] mode.

Then, start typing some SQL that you’d like to work with:

select object_name from user_objects;

You can then use C-c C-b to send this to your sqlplus session!

You can use the following key bindings to interact with your SQL process:

Key binding  What it does
C-c C-b      sql-send-buffer
C-c C-c      sql-send-paragraph
C-c C-r      sql-send-region
C-c C-s      sql-send-string
C-M-q        prog-indent-sexp
C-c C-l a    sql-list-all
C-c C-l t    sql-list-table
C-c TAB      sql-product-interactive

If you get an error that says: No sql process started, then in your test.sql buffer, enter: M-x sql-set-sqli-buffer RET *SQL* RET

The sql-list-all command basically lists all of the objects in the USER_OBJECTS view.

The sql-list-table command lists only the tables owned by the current user.

Conclusion

You can use emacs SQLi mode to really make development much easier than doing it with vi and sqlplus.

For a more in-depth tutorial (for IBM DB2, but very similar), check out http://www.ibm.com/developerworks/data/library/techarticle/0206mathew/0206mathew.html

Author: Jay Stanley <jstanley@dbspecialists.com>

Date: 2012-10-26 Fri


How To Install and Configure Oracle Enterprise Manager Cloud Control 12c

In this blog I will take a look at the newest version of Oracle Enterprise Manager, officially known as Oracle Enterprise Manager Cloud Control 12c and will give step-by-step instructions on how to install and configure it.  I wrote a similar blog for Grid Control which can be found here: http://www.dbspecialists.com/blog/database-monitoring/installing-11g-grid-control-on-linux.

The basic architecture of Cloud Control is the same as Grid Control.  There is a central repository and one or more Oracle Management Servers (OMS) that coordinate data received from the hosts.  There are agents on each monitored host that collect data about the targets on the host and send that data to the OMS.  The agents are also responsible for executing any commands that are initiated from Cloud Control.

The concept of plug-ins has been around since before Cloud Control, but they seemingly take on a more important role in the Cloud Control architecture.  The core Enterprise Manager Cloud Control features for managing and monitoring Oracle technologies, such as Oracle Database, Oracle Fusion Middleware, and Oracle Fusion Applications, are now provided through plug-ins that can be downloaded and deployed using the new Self Update feature. This new “pluggable” framework enables Cloud Control to be updated with management support for the latest Oracle product releases, without having to wait for the next Cloud Control release to provide such functionality. For example, when a new version of Oracle Database is released, you can download and deploy the latest Oracle Database plug-in, which will include management support for the latest release.  The four plug-ins that get installed by default are: Oracle Database, Oracle Fusion Middleware, My Oracle Support and Oracle Exadata.  There are quite a few other plug-ins available as well, some for Oracle products and some not.  For example, there are plug-ins for IBM DB2, Microsoft SQL Server and several EMC storage devices.  You can see the complete list of available plug-ins at http://www.oracle.com/technetwork/oem/extensions/index.html#OEM12c

There are some new features in Cloud Control that may be nice to have, but nothing that stands out as something I need to have.  It is really similar in functionality to Grid Control, which is not a bad thing because I have always liked Grid Control.  You can see a complete list of the new features here: http://docs.oracle.com/cd/E24628_01/doc.121/e25353/whats_new.htm#CEGIFFGA

Step 1) Install Oracle Database 11.2.0.2 and create a database to be used for the Cloud Control repository.

Step 2) Download the Cloud Control software

Go to technet.oracle.com, click Download, then Enterprise Management, and then Oracle Enterprise Manager.  Select Linux x86-64, accept the license agreement and download all 3 files for Cloud Control 12 Release 2 (12.1.0.2).  Unzip all 3 files into a staging location.

Step 3) Install Cloud Control on the OMS host

From the staging location where you unzipped the files, execute ./runInstaller to bring up the Cloud Control Installation GUI.  The first screen prompts you to enter your Oracle Support credentials so you can download and install the updates. If you have a valid Oracle Support contract, I suggest you take this option as it saves you the trouble of separately installing any recommended/mandatory patches.  After selecting this option, the Installer connected to Oracle Support, downloaded patch number 14145094 and proceeded with the main installation.

The installation of Cloud Control is slightly different than Grid Control in that you don’t need to separately install WebLogic Server as Cloud Control now does that for you.  In my case, however, I had WebLogic already installed for other reasons and I assumed that Cloud Control would use my existing installation.  No such luck.  Cloud Control requires its own installation of WebLogic.  According to the installation guide:

You must ensure that the Oracle WebLogic Server 11g Release 1 (10.3.5) installed by the Enterprise Manager Cloud Control Installation Wizard or by you is dedicated for Enterprise Manager Cloud Control. You must not have any other Oracle Fusion Middleware product installed in that Middleware home.  Enterprise Manager Cloud Control cannot coexist with any Oracle Fusion Middleware product in the same Middleware home because the ORACLE_COMMON property is used by both the products.

During the Cloud Control installation, it will also install the Cloud Control Agent and will prompt you to specify the install location.  In my case, I installed Cloud Control at /opt/oracle/product/Cloud_Control/Middleware and the Agent at /opt/oracle/product/Cloud_Control/agent.  It will also ask if you want to install any plug-ins in addition to the 4 default ones.

After the installation is complete, the following URLs are available:

Cloud Control: https://mdlinux:7799/em

WebLogic Admin server: https://mdlinux:7101/console

In order to stop or start Cloud Control on the OMS host, run /etc/init.d/gcstartup stop | start.  You will be prompted for the Unix password of the user that installed the software, usually oracle.

Step 4) Install the agent on any additional nodes you want to manage

The Agent installation files come with the Cloud Control software.  Use the “emcli” command to get the proper file for the platform, transfer that file to the new node and then install using the agentDeploy.sh script.  There are other ways to install the agent as well, but this is the way I did it.  Note that “mdlinux” is the OMS host and “mdlinux-vm2” is the host that I am adding.

[oracle@mdlinux ]$ emcli login -username=sysman -password=oracle123
Login successful

[oracle@mdlinux ]$ emcli sync
Synchronized successfully
[oracle@mdlinux]$emcli get_agentimage -destination=/tmp -platform="Linux x86-64" -version=12.1.0.2.0
Platform:Linux x86-64
Destination:/tmp
=== Partition Detail ===
Space free : 387 GB
Space required : 1 GB
.
.
.

This created the file /tmp/12.1.0.2.0_AgentCore_226.zip on the OMS host, which I transferred to the new host.  After transferring it, I unzipped it, edited the agent.rsp file and then ran agentDeploy.sh.

[oracle@mdlinux-vm2]$ unzip 12.1.0.2.0_AgentCore_226.zip
Archive:  12.1.0.2.0_AgentCore_226.zip
inflating: unzip
inflating: agentDeploy.sh
inflating: agentimage.properties
inflating: agent.rsp
extracting: agentcoreimage.zip
.
.
.

Edit the agent.rsp file and specify values for:

OMS_HOST=mdlinux
EM_UPLOAD_PORT=4900
AGENT_REGISTRATION_PASSWORD=oracle123

You can find the upload port by running “emctl status agent” from $AGENT_HOME/bin (in my case /opt/oracle/product/Cloud_Control/agent/agent_inst) on the OMS server and looking for the port in the REPOSITORY_URL:

Repository URL    : https://mdlinux.localdomain:4900/empbs/upload
[oracle@mdlinux-vm2 ]$ ./agentDeploy.sh AGENT_BASE_DIR=/u01/app/oracle/product
RESPONSE_FILE=/tmp/agent.rsp

The last step failed so I manually secured the agent and added the targets:

Performing the agent configuration...
Executing command: /u01/app/oracle/product/core/12.1.0.2.0/oui/bin/runConfig.sh
ORACLE_HOME=/u01/app/oracle/product/core/12.1.0.2.0
RESPONSE_FILE=/u01/app/oracle/product/core/12.1.0.2.0/agent.rsp ACTION=configure
MODE=perform COMPONENT_XML={oracle.sysman.top.agent.11_1_0_1_0.xml}
RERUN=true
Configuration Log Location:
/u01/app/oracle/product/core/12.1.0.2.0/cfgtoollogs/cfgfw/CfmLogger<timestamp>.log
ERROR: Agent Configuration Failed SEVERE:emctl secure agent command has failed with status=1

I followed the steps below to manually secure and add the host and agent targets:

/u01/app/oracle/product/agent_inst/bin/emctl secure agent
/u01/app/oracle/product/agent_inst/bin/emctl start agent
/u01/app/oracle/product/agent_inst/bin/emctl config agent addinternaltargets

The agent is now installed on mdlinux-vm2 and the host can be monitored by Cloud Control.  In order to stop, start or check the status of the Agent on either the OMS host or a monitored host, run

$AGENT_HOME/bin/emctl status | stop | start agent
[oracle@mdlinux-vm2 ]$ /u01/app/oracle/product/agent_inst/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 12.1.0.2.0
OMS Version       : 12.1.0.2.0
Protocol Version  : 12.1.0.1.0
Agent Home        : /u01/app/oracle/product/agent_inst
Agent Binaries    : /u01/app/oracle/product/core/12.1.0.2.0
Agent Process ID  : 14366
Parent Process ID : 14322
Agent URL         : https://mdlinux-vm2.localdomain:3872/emd/main/
Repository URL    : https://mdlinux.localdomain:4900/empbs/upload
Started at        : 2012-10-12 09:20:48
Started by user   : oracle
Last Reload       : (none)
Last successful upload                       : 2012-10-12 10:57:53
Last attempted upload                        : 2012-10-12 10:57:53
Total Megabytes of XML files uploaded so far : 0.51
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 36.81%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2012-10-12 10:57:11
Last successful heartbeat to OMS             : 2012-10-12 10:57:11
Next scheduled heartbeat to OMS              : 2012-10-12 10:58:11
---------------------------------------------------------------
Agent is Running and Ready

Step 5) Use Cloud Control to discover the targets
Log into Cloud Control as sysman
Click on Targets, then Hosts and you should see the new host listed
Click on Setup, Add Target, Add Targets Manually
Add Non-Host Targets using Guided Process
Target Type=Oracle Database, Listener and Automatic Storage Management
Add using Guided Discovery
Specify the host and click Continue

After it finishes with the Auto Discovery process, it will show you the database(s), listener and ASM instances on that host. Select whichever ones you want to add, specify the password for the dbsnmp user (make sure the account is not locked) and then click Test Connection.  If successful, click Next, then Next again, then Save.

You should now see the new targets in Cloud Control.

Deferred Segment Creation

Recently I have come across a few problems with a new 11gR2 feature known as ‘deferred segment creation’ that are worth sharing. Deferred Segment Creation is a new feature added in 11gR2 that cleverly avoids creating an empty segment until it is actually used. The documentation explains that this feature, which is the default in 11gR2, (potentially) saves on disk space and allows quicker installations. Certainly some large third-party products, Peoplesoft HR springs to mind for me, create lots of tables, many of which may not be required by a particular client, so by deferring the creation of the segments until they are actually used the database can certainly save space and installation time.

However, deferring the creation of the segment until it is used can cause issues; here are three that I have encountered recently.

1.
It is now possible to get quota errors at DML time that would usually be generated at create (DDL) time. I’ll illustrate this below but essentially instead of getting errors during a ‘build deployment’ they happen later when the users are back on the system.

2.
In 11.2.0.1 conventional export (i.e. ‘exp’ not ‘expdp’) silently skips these tables, this is a bug that is fixed in 11.2.0.2 but is something to be aware of in the base release. Conventional export is “desupported for general use” in 11g so this may not affect many people these days although clearly bugs are still getting fixed.

3.
It is now possible to drop tablespaces that *will* contain segments. This seems to affect all versions of 11g currently.

If you encounter these (or other) issues with this feature, you can disable it for the database by setting DEFERRED_SEGMENT_CREATION=FALSE in the init.ora (spfile), or you can use the ‘SEGMENT CREATION IMMEDIATE’ clause during table creation.
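
A hedged sketch of both approaches (the table definition is illustrative):

-- Disable deferred segment creation database-wide
alter system set deferred_segment_creation=FALSE scope=both;

-- Or force the segment to be created for a specific table
create table test2 ( test_id integer not null ) segment creation immediate;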

Here is some sample code to illustrate issues 1 & 3.

-- Create two tablespaces
CREATE TABLESPACE DSC_TABLES DATAFILE … size 1024M;
CREATE TABLESPACE DSC_INDEXES DATAFILE … size 1024M;
(customize the datafile paths depending upon your environment)

-- Now create a user but fail to properly assign the quota on the DSC_INDEXES tablespace
CREATE USER DSC IDENTIFIED BY dsc DEFAULT TABLESPACE DSC_TABLES TEMPORARY TABLESPACE TEMP;
GRANT RESOURCE, CONNECT TO DSC;
REVOKE UNLIMITED TABLESPACE FROM DSC;
ALTER USER DSC QUOTA UNLIMITED ON DSC_TABLES;
REM ALTER USER DSC QUOTA UNLIMITED ON DSC_INDEXES

-- Now as user DSC, create our table
create table test ( test_id integer not null, test_text varchar2(400),
constraint test_pk primary key (test_id) using index tablespace dsc_indexes);

At this point we have a table but no segment:

SQL> select table_name from user_tables where table_name = 'TEST';

returns a row but

SQL> select segment_name from user_segments where segment_name = 'TEST';

does not.

When we attempt to insert a row

SQL> insert into dsc.test values (1,'test');

Oracle realizes that the quotas are not right, generating the following error: "ORA-01950: no privileges on tablespace 'DSC_INDEXES'". Note that you get this error even if the user running the insert has a quota on the DSC_INDEXES tablespace - even SYS gets the error - because the error relates back to the segment owner rather than the user attempting the insert.

Of course, you can say “well your code was screwed up” and that is true but wouldn’t it be safer to find that out during a deployment or build rather than when people start using the system?

3.
Here is how to drop a tablespace that *will* contain a segment. The setup is basically the same as above, except this time we correctly assign the quotas (it doesn't matter).

-- Create two tablespaces
CREATE TABLESPACE DSC_TABLES DATAFILE … size 1024M;
CREATE TABLESPACE DSC_INDEXES DATAFILE … size 1024M;
(customize the datafile paths depending upon your environment)

-- Now create a user and assign tablespace quotas
CREATE USER DSC IDENTIFIED BY dsc DEFAULT TABLESPACE DSC_TABLES TEMPORARY TABLESPACE TEMP;
GRANT RESOURCE, CONNECT TO DSC;
REVOKE UNLIMITED TABLESPACE FROM DSC;
ALTER USER DSC QUOTA UNLIMITED ON DSC_TABLES;
ALTER USER DSC QUOTA UNLIMITED ON DSC_INDEXES;

-- Now as user DSC, create our table
create table test ( test_id integer not null, test_text varchar2(400),
constraint test_pk primary key (test_id) using index tablespace dsc_indexes);

Same as before, at this point we have a table with no segments. However, we can now drop both tablespaces:

SQL> drop tablespace DSC_TABLES;
SQL> drop tablespace DSC_INDEXES;

Note that these commands would error out if the tablespaces actually contained segments, but because they do not yet, they happily go away. An attempt to insert a row into the test table now generates the following error:

SQL> INSERT INTO DSC.TEST VALUES (1,'THIS IS A TEST');
ORA-00959: tablespace 'DSC_TABLES' does not exist

And of course this makes sense - the tablespace doesn't exist - but as with the previous example, it may not be initially obvious what is causing the error. Unfortunately, attempts to move the table to a tablespace that does exist also throw the same error, although you can re-create the missing tablespace to fix it.

Grid Control or Database Control - which one is right for your enterprise?

Grid Control and Database Control are both web-based tools designed to manage Oracle databases. Starting with 10g, Database Control is installed and available with every Oracle Database installation. From Database Control, you can monitor and administer a single Oracle Database instance or a clustered database. With Grid Control, you can manage all of your databases from a single console. This ability to manage multiple databases is what makes Grid Control different from Database Control.

 As you may expect, there is a bit more setup and maintenance needed for Grid Control over Database Control. Grid Control is made up of several components:

 Oracle Management Service (OMS): OMS is a J2EE Web application that coordinates with Management Agents to discover targets, monitor and manage them, and store the collected information in a repository for future reference and analysis. OMS also renders the user interface for the Grid Control console. OMS is deployed to the application server that is installed along with other core components of Enterprise Manager Grid Control. In Oracle 10, the application server was Oracle Application Server but in 11g, they are using WebLogic server. For performance and availability, it is possible to have multiple Management Services running on separate servers behind a load balancer. The OMS is installed in its own location separate from any database $ORACLE_HOME and preferably on its own dedicated server.

 Management Agent – This runs on each monitored host and is responsible for discovering targets on the host (usually databases and listeners but can be others), collecting information about them and sending that information via XML files to the OMS. The Agent also collects data about the host itself and the network which can be very valuable when troubleshooting problems. This is installed in its own location separate from the database $ORACLE_HOME on each server. You can even install the agent on a server with no database and use Grid Control to monitor just the server.

 Management Repository – The Management Repository is the database where all the information collected by the Management Agent gets stored. It consists of objects such as database jobs, packages, procedures, views, and tablespaces. OMS uploads the monitoring data it receives from the Management Agents to the Management Repository. The Management Repository then organizes the data so that it can be retrieved by OMS and displayed in the Grid Control console. Since data is stored in the Management Repository, it can be shared between any number of administrators accessing Grid Control.

Grid Control Console - This is the user interface you see after you install Grid Control. From the Grid Control console, you can monitor and administer your entire enterprise from one location on the network. All the services within your enterprise, including hosts, databases, listeners, application servers, and so on, are easily managed from one central location.

Database Control has a similar architecture with the exception of the OMS:

Management Agent – This works like the Grid Control agent does but it is installed in the database $ORACLE_HOME and is not started/stopped independently of Database Control.

Management Repository – It performs the same function as it does for Grid Control but stores information for only one database. It is a schema within the Database Control database.

Database Control Console - This is the user interface you see after you install Database Control. From the Database Control console, you can monitor and administer a single database.

The big advantage of Grid Control over Database Control is, obviously, the ability to manage multiple databases with one tool. With Grid Control it is much easier to manage all of your databases uniformly. You can apply consistent monitoring rules to all databases.  If you have certain DBAs responsible for only certain databases you can create Groups of databases. If you have a rotating on-call schedule you can configure the Notification Rules to alert the proper person in case of a problem. You can submit jobs to multiple databases at once and even apply patches to multiple databases at once. If you have scheduled maintenance coming up, you can create a Blackout for all the databases on a host at one time. There are reports available that provide a wealth of information about the databases across your enterprise. There are many other examples that I could give on why Grid Control is better than Database Control. Of course, this all comes with a price in terms of complexity.

In my experience as a remote DBA, the biggest problem you run into when using Grid Control to manage lots of databases on many hosts is that the Management Agent is not always stable. I found myself sometimes having to manually restart and even reinstall the agent in order to get it working.

In choosing whether to implement Grid Control or Database Control, I think it simply comes down to how many databases you need to manage. If you have more than just a couple, I would suggest taking the extra time to implement Grid Control. On the other hand, if you have just 1 or 2 databases, then Database Control is probably all you need. It is really simple to get working, requires no separate installation and gives you some great functionality.

 Grid Control commands:

OMS in 10g: $OMS_HOME/opmn/bin/opmnctl [start | stop | status]
OMS in 11g: $OMS_HOME/bin/emctl [start | stop | status] oms
Management Agent: $AGENT_HOME/bin/emctl [start | stop | status] agent

Database Control commands

Database Control: $ORACLE_HOME/bin/emctl [start | stop | status] dbconsole
Install Database Control repository: $ORACLE_HOME/bin/emca -config dbcontrol db -repos create

For complete details on how to set up Grid Control, you can see my blog on the subject at http://www.dbspecialists.com/blog/database-monitoring/installing-11g-grid-control-on-linux.

postgreSQL, mysql, and Oracle

Although my main interest over the past 17 years or so has been the venerable Oracle database, as a database administrator and as a programmer, I have also been following the progress of the various open-source databases with quite a keen interest. The two that I’ve been mainly following are mysql and postgreSQL. You’re probably aware that mysql started as a pure open-source database and was later acquired by Sun, which was then of course acquired by Oracle itself. postgreSQL has always been open-source; both databases have been around a very long time, at least as long as I’ve been mastering Oracle. I’ve installed them myself for home use several times over the years, and I’ve even run a number of mysql databases for various production functions.

I’ve done a fair amount of work understanding their various limitations. After working with them, it’s easy to conclude that mysql, postgreSQL and the Oracle RDBMS are very, very different databases on a very fundamental level, but it’s hard to put my finger on exactly how they are different.

Recently, I stumbled upon a blog entry that explained the differences between these databases: I would encourage you to go there & check it out: http://ledgersmbdev.blogspot.co.uk/2012/09/or-modelling-interlude-postgresql-vs.html

His claim that discussing these databases very often ends in an all-caps flame-war, with unassailable opinions from proponents of each, is quite true. It’s too bad that this is the case.

I completely agree with the author’s conclusions, and I give kudos to Chris Travers for coming up with such a clear explanation of the differences, especially between mySQL and postgreSQL.

As primarily an Oracle DBA/developer, it’s interesting that I’ve never come into contact with anyone using Oracle’s object-relational features, among the many, many clients’ databases that I’ve worked with. Those features, as I remember, began in Oracle 8i, and they have been in OCP (Oracle Certified Professional) tests since then, probably 13 years. It’s been part of the feature set for a very long time; it isn’t that these features don’t work; it’s just that I don’t see them in use. The blog entry doesn’t go into Oracle that much. I think that Oracle is much closer to postgreSQL, where you use the database to serve multiple applications, and you use the database to do quite a lot of the data validation and consistency-checking.

I’ll be very interested in the follow-up posts from Chris’s blog regarding the differences in object-relational use of PostgreSQL and Oracle.

-jay

Author: Jay Stanley <jstanley@dbspecialists.com>

Date: 2012-09-12 Wed


Not licensed for AWR? Use Statspack instead!

StatsPack (short for Statistics Package) is a set of Oracle-provided packages that will take snapshots of performance-related data and then report on the activity that occurred between any two snapshots.  This can be very useful for troubleshooting many kinds of performance problems as it will show you the top activity in the database during a period of time.  StatsPack has been around for many years, but since the release of 10g, Oracle has been promoting the use of the Automatic Workload Repository (AWR) in place of Statspack.  AWR is included with the Diagnostics and Tuning Pack options, which cost extra, sometimes a lot extra.  AWR is very feature-rich compared with StatsPack and is integrated with other performance tools, like Active Session History and the Automatic Database Diagnostic Monitor (ADDM).  Given the choice, I will use AWR over Statspack.  But if you have found yourself dealing with a performance problem and are not licensed to use AWR, read on and find out how to get started with StatsPack - at no additional charge!

Statspack is simple to install.  All of the scripts are located in $ORACLE_HOME/rdbms/admin:

[oracle@mdlinux admin]$ sqlplus / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @?/rdbms/admin/spcreate
SQL> @?/rdbms/admin/spauto

Follow the prompts for the password, default tablespace and temporary tablespace.  The spcreate.sql script creates the PERFSTAT user along with all of the Statspack objects, and spauto.sql creates a dbms_job that executes a snapshot every hour.  You can modify the job to execute more frequently if needed (see the example below).  To manually create a snapshot:

SQL> exec perfstat.statspack.snap;
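
If hourly snapshots are not frequent enough, here is a hedged sketch of changing the spauto job to run every 15 minutes, connected as PERFSTAT (the job number 42 is illustrative; look up the real one in DBA_JOBS first):

SQL> select job, what from dba_jobs where what like '%statspack.snap%';
SQL> exec dbms_job.interval(42, 'sysdate + 15/1440');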

Once you have at least two snapshots created, you can run the performance reports.  There are two reports available - an Instance report and a SQL report.  The Instance report (spreport.sql) is a general instance health report, covering all aspects of instance performance.  It will show the top wait events, the top SQL queries, the busiest segments, the busiest datafiles, the database activity level and many, many other things.  The SQL report (sprepsql.sql) is a report for a specific SQL statement.  The SQL report is usually run after examining the high-load SQL sections of the instance health report.  The SQL report provides detailed statistics and data for a single SQL statement (as identified by the Hash Value).  This report will also show detailed execution plan information, including whether it has changed or not, but only if you change the snap level to 6.  The default value of 5 does not collect execution plan information.  In my opinion, the default value should be 6.

 SQL> exec statspack.modify_statspack_parameter(i_snap_level=>6);

 There are other parameters that you can change if you wish but in my experience there isn’t much need to do so.  They are more fully documented in the $ORACLE_HOME/rdbms/admin/spdoc.txt README for StatsPack.  To run the Instance report:

SQL> @?/rdbms/admin/spreport 

It will show you a list of snapshots and prompt you for the two you are interested in.  The results will be written to a text file.  The first thing I look at in the StatsPack report is the Load Profile, which gives some key statistics that let you know how much work the database is doing.  It is not necessarily the most important thing in the report, but it is close to the top.

 Load Profile                            Per Second       Per Transaction
~~~~~~~~~~~~                       ---------------       ---------------
                  Redo size:            296,369.66             18,862.06
              Logical reads:             20,614.15              1,311.96
              Block changes:              1,313.78                 83.61
             Physical reads:                761.11                 48.44
            Physical writes:                 61.87                  3.94
                 User calls:                350.98                 22.34
                     Parses:                120.54                  7.67
                Hard parses:                  1.61                  0.10
                      Sorts:                 33.22                  2.11
                     Logons:                  1.22                  0.08
                   Executes:                504.59                 32.11
               Transactions:                 15.71

 

So in this case, the database was doing 15.71 transactions per second, 20,614 Logical Reads per second and 761 Physical Reads per second.  Is that a lot or a little?  Well, it all depends on what this database “normally” does.  Compare this report with a report from the same time period 1 week earlier and see if there is a significant change. Just below the Load Profile is the Instance Efficiency Percentages section.

 Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   99.07    In-memory Sort %:  100.00
            Library Hit   %:   99.68        Soft Parse %:   98.66
         Execute to Parse %:   76.11         Latch Hit %:   99.90
Parse CPU to Parse Elapsd %:   17.75     % Non-Parse CPU:   97.72

The most famous Hit Ratio of all time, the Buffer Hit %, is of course shown here and “should” be close to 100.  This obviously depends on your application.  For an OLTP app, it definitely should be close to 100, but for a Data Warehousing app perhaps it will be much lower.  If you look at previous reports and the Hit Ratio is usually in the upper 90’s and now it is in the 70’s, then you can bet something changed for the worse.  I also look at the Soft Parse % and the % Non-Parse CPU.  These indicate how many hard parses are being done and how much CPU time is spent parsing.  Parsing in general, and hard parsing in particular, is a very resource-intensive operation and should be minimized.  A little further down is the Top 5 Timed Events section.  In this case, “db file sequential read” is taking up about 50% of all database call time.  This is a pretty normal event as it indicates the database is reading from indexes.  If you see “db file scattered read” at the top, that probably means you are doing lots of full table scans or fast full index scans.

 Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                        108,422         889      8   50.2
CPU time                                                       243          13.7
direct path read                               311,108         147      0    8.3
log file parallel write                            867         138    159    7.8
latch: library cache                               511         137    268    7.8

Now keep going a bit further and you get to the really interesting stuff: the top SQL statements.  They are ordered in several different ways: by CPU, by Elapsed Time, by Logical Reads, Physical Reads, Executions and Parses.  Depending on what kind of issue you are having, you may need to look at different sections.  For example, suppose your database suddenly starts waiting on latches and the number of parses per second goes up.  The logical thing to look for is SQL statements that are doing lots of parsing.  Or you see that “db file scattered read” is your top wait event.  You would want to look at the SQL statements doing the most Physical Reads.

 SQL ordered by CPU  DB/Inst: ORAPROD1/oraprod1  Snaps: 43266-43267
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total DB CPU (s):             238
-> Captured SQL accounts for  178.8% of Total DB CPU
-> SQL reported below exceeded  1.0% of Total DB CPU
     CPU                  CPU per             Elapsd                     Old
  Time (s)   Executions  Exec (s)  %Total   Time (s)    Buffer Gets  Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
     80.14        9,306       0.01   33.7     129.20         541,158 1525535925
begin zx_hostengine.runtask (:v1, :v2, :v3, :v4, :v5); end;
      79.77        1,461       0.05   33.6     261.86       2,080,995 1189966800
Module: JDBC Thin Client
call zx_client.sendMessage(:1)
      45.76          193       0.24   19.3     155.81       1,240,393 3937937216
Module: JDBC Thin Client
BEGIN zx_client.submitJob (:1,:2); END;

After identifying the high-resource SQL, you can run the StatsPack SQL report for that particular statement.  It will prompt you for the snapshots and the hash value:

SQL>@?/rdbms/admin/sprepsql.sql

Another useful section of the report is the Segment Activity:

Segments by Physical Reads  DB/Inst: ORAPROD1/oraprod1  Snaps: 43266-43267
-> End Segment Physical Reads Threshold:      1000
                                           Subobject    Obj.      Physical   Pct
Owner      Tablespace Object Name          Name         Type         Reads Total
---------- ---------- -------------------- ------------ ----- ------------ -----
HOSTTASK USERS      SYS_LOB0000064624C00            LOB        132,065  48.6
HOSTTASK HOSTTASK SYS_LOB0000064668C00              LOB         86,055  31.6
HOSTTASK HOSTTASK SYS_LOB0000064593C00              LOB          8,511   3.1
HOSTTASK HOSTTASK TS_JOB                            TABLE        2,258    .8
OWF_MGR    OWF_DATA   WF_ITEM_ACTIVTY_STA           INDEX        2,248    .8

This will show you which segments account for the most activity.  This can sometimes really help to pinpoint a problem if you see one particular segment accounts for a majority of the physical reads, for example.  In this case, there are 3 LOB segments that account for over 80% of all physical reads in the database. 

 This is by no means a complete guide to Statspack.  There are many sections of the report that I didn’t mention and there are many sections of the report that I have never even looked at.  If you are on a pre-10g version of Oracle or are not licensed for AWR, I encourage you to install Statspack and start using it.  It is a great tool and can be invaluable in solving performance problems.

Note:  After I initially wrote this, I discovered that Statspack will break if you upgrade the database from 10.2 to 11.2.  In order to upgrade Statspack from 10.2 to 11.2, do the following after the database upgrade is complete.

SQL> @?/rdbms/admin/spup102.sql
SQL> @?/rdbms/admin/spup1101.sql
SQL> @?/rdbms/admin/spup11201.sql

If you want complete details on upgrading from 10.2 to 11.2, check out the blog I wrote on that: http://www.dbspecialists.com/blog/uncategorized/upgrade-from-10204-to-112032-including-time-zone-files/

Why doesn’t a database role work for PL/SQL procedures?

Stacy, who is a DBA, has an interesting problem: the company that she works for just acquired another company. The newly-acquired company needs read-only SQL access to a table owned by the main application schema, called ‘HQ’. The newly-acquired company will be writing stored procedures in their own schema, which is called ‘FIELDOFFICE’, and those procedures will be reading data from the table owned by the ‘HQ’ owner.

This appears to be a pretty simple issue. Stacy tries this:

First, she creates the two users, and grants them basic permissions:

SQL> create user hq
identified by hq
default tablespace users
temporary tablespace temp;
User created.

SQL> grant connect,resource to hq;

Grant succeeded.

SQL> create user fieldoffice
identified by fieldoffice
default tablespace users
temporary tablespace temp;
User created.

SQL> grant connect,resource to fieldoffice;

Grant succeeded.

Then, she creates a role that she wants to use to control access to the tables in HQ:

SQL> create role accesshq;

Role created.

SQL> grant accesshq to fieldoffice;

Grant succeeded.

Now, she’d like to test it. First, she connects to the HQ user, and creates a test table:

SQL> connect hq/hq
Connected.
SQL> create table mydata(i number);

Table created.

SQL> insert into mydata(i) values (1);

1 row created.

SQL> commit;

Commit complete.

Next, she grants the ACCESSHQ role the ability to SELECT from the HQ.MYDATA table.

SQL> grant select on mydata to accesshq;

Grant succeeded.

Now, she wants to test the FIELDOFFICE user, to make sure that it can indeed select from that table:

SQL> connect fieldoffice/fieldoffice;
Connected.
SQL> select * from session_roles;
ROLE
------------------------------
CONNECT
RESOURCE
ACCESSHQ
SQL> select * from hq.mydata;
I
----------
1

Well! That appears to work. Next, she creates a small PL/SQL procedure, to make sure that it, too, can access that table:

SQL> create or replace procedure showmydata
  2  is
  3    v_i  number;
  4  begin
  5    select i
  6    into  v_i
  7    from hq.mydata
  8    where rownum = 1;
  9    dbms_output.put_line('First number is '||v_i);
 10  end;
 11  /

Warning: Procedure created with compilation errors.

SQL>
SQL> show err
Errors for PROCEDURE SHOWMYDATA:

LINE/COL ERROR
-------- -----------------------------------------------------------------
5/3      PL/SQL: SQL Statement ignored
7/11     PL/SQL: ORA-00942: table or view does not exist
SQL>

What??? It sure appears that even though she can do a direct SQL select from that table, when PL/SQL is involved, it doesn’t work!

PL/SQL will NOT use roles in this way! Privileges granted through roles are disabled inside definer’s rights stored procedures (the default), so in order to allow access to the HQ.MYDATA table from within PL/SQL, a direct grant is required.

SQL> connect hq/hq
Connected.
SQL> grant select on mydata to fieldoffice;

Grant succeeded.

Then, the PL/SQL procedure can be recompiled and executed:

SQL> connect fieldoffice/fieldoffice
Connected.
SQL> alter procedure showmydata compile;

Procedure altered.

SQL> execute showmydata;
First number is 1

PL/SQL procedure successfully completed.

This is a very, very common problem that crops up, especially among application developers who haven’t had a lot of experience with Oracle’s use of roles. For more information, do a search on ‘definer’s rights vs. invoker’s rights’.
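
As a hedged illustration of that distinction: if Stacy instead creates the procedure as an invoker's rights routine and moves the query into dynamic SQL, the role-based grant is honored when FIELDOFFICE runs it directly from SQL*Plus, because roles remain enabled for top-level invoker's rights calls (this sketch assumes no direct grant on HQ.MYDATA exists):

create or replace procedure showmydata authid current_user
is
  v_i  number;
begin
  -- dynamic SQL defers the privilege check to run time, where the ACCESSHQ role is enabled
  execute immediate 'select i from hq.mydata where rownum = 1' into v_i;
  dbms_output.put_line('First number is '||v_i);
end;
/

When FIELDOFFICE then runs "execute showmydata" directly, it should print the row without any direct grant on HQ.MYDATA.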

Author: Jay Stanley <jstanley@dbspecialists.com>

Date: 2012-09-10 Mon


Patching your Oracle database – Critical Patch Update (CPU) or Patch Set Update (PSU)?

Keeping your Oracle database software up to date is a critical and time-consuming task for DBAs.  For many years now, Oracle has been releasing Critical Patch Updates on a quarterly basis.  These patches, as the name implies, contain critical updates to the software, often released in response to a newly found security vulnerability.  More recently, Oracle has also been releasing Patch Set Updates on a quarterly basis.  These also contain important fixes to the Oracle software.  However, there is confusion about the difference between the two and, more importantly, confusion about which one needs to be applied.  So what’s the difference and which one should you apply?

 According to Oracle Support article ID 1446582.1: Frequently Asked Questions (FAQ) Patching Oracle Database Server:

“A PSU is a collection of proactive, stabilizing cumulative patches for a particular product version (base release or patch set).  PSUs are cumulative and include all of the security fixes from CPU patches, plus additional fixes.  Critical Patch Updates are the primary means of releasing security fixes for Oracle products. CPUs are cumulative with respect to prior CPUs and generally contain only security fixes.”

So, there you have it.  CPUs are smaller and more focused than PSUs and mostly deal with security issues.  PSUs contain bug fixes AND they contain the security fixes from the CPU.  When you download a PSU, it will tell you which CPU it contains.  PSUs are on the same quarterly schedule as the Critical Patch Updates (CPU), specifically the Tuesday closest to the 17th of January, April, July, and October.  One thing to keep in mind, however, is that once a PSU has been installed, the recommended way to get future security content is to apply subsequent PSUs.  Reverting from PSU back to CPU, while possible, would require significant effort and so is not advised.  So with this in mind, why would someone choose to apply a CPU rather than a PSU?  I suppose for folks who are concerned only with security fixes and not functionality fixes, a CPU-only approach may be best.  It does seem to be the more conservative approach, as a CPU is (in theory) less likely to cause trouble than a PSU, simply because it has fewer code changes in it.

My personal preference is to apply PSUs and not worry about CPUs.
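
A hedged way to see which PSUs and CPUs have already been applied to a given database is to query the registry history:

SQL> select action_time, action, version, id, comments from dba_registry_history order by action_time;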

If you would like to be notified when Oracle releases Security Alerts, you can sign up on the Oracle Technology Network website at http://www.oracle.com/technetwork/topics/security/securityemail-090378.html.  You will need to have an account, and then you can subscribe to Oracle Security Alerts.