Knowledge increases exponentially. Today, you probably own more books than the great universities of times past; Cambridge University owned fewer than two hundred books in the fifteenth century. First came the invention of writing, then alphabets, then paper, then the printing press, then mechanization. Each step caused an exponential increase in collective human knowledge. In our generation, the internet has broken down the last barriers to the spread of knowledge. Today, everybody has the ability to contribute, communicate, and collaborate. We are all caught up in a tsunami, an avalanche, a conflagration, a veritable explosion of knowledge for the betterment of humankind. This is the blog of the good folks at Database Specialists, a brave band of Oracle database administrators from the great state of California. We bid you greeting, traveler. We hope you find something of value on these pages and we wish you good fortune in your journey.

How To Install and Configure Oracle Enterprise Manager Cloud Control 12c

In this blog I will take a look at the newest version of Oracle Enterprise Manager, officially known as Oracle Enterprise Manager Cloud Control 12c, and will give step-by-step instructions on how to install and configure it.  I wrote a similar blog for Grid Control, which can be found here: http://www.dbspecialists.com/blog/database-monitoring/installing-11g-grid-control-on-linux.

The basic architecture of Cloud Control is the same as Grid Control.  There is a central repository and one or more Oracle Management Servers (OMS) that coordinate data received from the hosts.  There are agents on each monitored host that collect data about the targets on the host and send that data to the OMS.  The agents are also responsible for executing any commands that are initiated from Cloud Control.

The concept of plug-ins has been around since before Cloud Control, but they take on a more important role in the Cloud Control architecture.  The core Enterprise Manager Cloud Control features for managing and monitoring Oracle technologies, such as Oracle Database, Oracle Fusion Middleware, and Oracle Fusion Applications, are now provided through plug-ins that can be downloaded and deployed using the new Self Update feature.  This new "pluggable" framework enables Cloud Control to be updated with management support for the latest Oracle product releases, without having to wait for the next Cloud Control release to provide such functionality.  For example, when a new version of Oracle Database is released, you can download and deploy the latest Oracle Database plug-in, which will include management support for the latest release.  The four plug-ins that get installed by default are Oracle Database, Oracle Fusion Middleware, My Oracle Support and Oracle Exadata.  There are quite a few other plug-ins available as well, some for Oracle products and some not; for example, there are plug-ins for IBM DB2, Microsoft SQL Server and several EMC storage devices.  You can see the complete list of available plug-ins at http://www.oracle.com/technetwork/oem/extensions/index.html#OEM12c

There are some new features in Cloud Control that may be nice to have, but nothing that stands out as something I need to have.  It is really similar in functionality to Grid Control, which is not a bad thing because I have always liked Grid Control.  You can see a complete list of the new features here: http://docs.oracle.com/cd/E24628_01/doc.121/e25353/whats_new.htm#CEGIFFGA

Step 1) Install Oracle Database 11.2.0.2 and create a database to be used for the Cloud Control repository.

Step 2) Download the Cloud Control software

Go to technet.oracle.com, click Download, then Enterprise Management, and then Oracle Enterprise Manager.  Select Linux x86-64, accept the license agreement, and download all three files for Cloud Control 12c Release 2 (12.1.0.2).  Unzip all three files into a staging location.

Step 3) Install Cloud Control on the OMS host

From the staging location where you unzipped the files, execute ./runInstaller to bring up the Cloud Control installation GUI.  The first screen prompts you to enter your Oracle Support credentials so you can download and install the latest updates.  If you have a valid Oracle Support contract, I suggest you take this option as it saves you the trouble of separately installing any recommended/mandatory patches.  In my case, after selecting this option, the installer connected to Oracle Support, downloaded patch 14145094 and proceeded with the main installation.

The installation of Cloud Control is slightly different from Grid Control in that you don't need to separately install WebLogic Server; Cloud Control now does that for you.  In my case, however, I had WebLogic already installed for other reasons and I assumed that Cloud Control would use my existing installation.  No such luck.  Cloud Control requires its own installation of WebLogic.  According to the installation guide:

You must ensure that the Oracle WebLogic Server 11g Release 1 (10.3.5) installed by the Enterprise Manager Cloud Control Installation Wizard or by you is dedicated for Enterprise Manager Cloud Control. You must not have any other Oracle Fusion Middleware product installed in that Middleware home.  Enterprise Manager Cloud Control cannot coexist with any Oracle Fusion Middleware product in the same Middleware home because the ORACLE_COMMON property is used by both the products.

During the Cloud Control installation, it will also install the Cloud Control Agent and will prompt you to specify the install location.  In my case, I installed Cloud Control at /opt/oracle/product/Cloud_Control/Middleware and the Agent at /opt/oracle/product/Cloud_Control/agent.  It will also ask if you want to install any plug-ins in addition to the four default ones.

After the installation is complete, the following URLs are available:

Cloud Control: https://mdlinux:7799/em

WebLogic Admin server: https://mdlinux:7101/console

To stop or start Cloud Control on the OMS host, run /etc/init.d/gcstartup stop | start.  You will be prompted for the Unix password of the user that installed the software, usually oracle.

Step 4) Install the agent on any additional nodes you want to manage

The Agent installation files come with the Cloud Control software.  Use the "emcli" command to get the proper file for the platform, transfer that file to the new node, and then install using the agentDeploy.sh script.  There are other ways to install the agent as well, but this is the way I did it.  Note that "mdlinux" is the OMS host and "mdlinux-vm2" is the host that I am adding.

[oracle@mdlinux ]$ emcli login -username=sysman -password=oracle123
Login successful

[oracle@mdlinux ]$ emcli sync
Synchronized successfully
[oracle@mdlinux]$emcli get_agentimage -destination=/tmp -platform="Linux x86-64" -version=12.1.0.2.0
Platform:Linux x86-64
Destination:/tmp
=== Partition Detail ===
Space free : 387 GB
Space required : 1 GB
.
.
.

This created the file /tmp/12.1.0.2.0_AgentCore_226.zip on the OMS host, which I transferred to the new host.  After transferring, I unzipped it, edited the agent.rsp file and then ran agentDeploy.sh:

[oracle@mdlinux-vm2]$ unzip 12.1.0.2.0_AgentCore_226.zip
Archive:  12.1.0.2.0_AgentCore_226.zip
inflating: unzip
inflating: agentDeploy.sh
inflating: agentimage.properties
inflating: agent.rsp
extracting: agentcoreimage.zip
.
.
.

Edit the agent.rsp and specify values for

OMS_HOST=mdlinux
EM_UPLOAD_PORT=4900
AGENT_REGISTRATION_PASSWORD=oracle123

You can find the upload port by running "emctl status agent" from the $AGENT_HOME/bin directory (in my case /opt/oracle/product/Cloud_Control/agent/agent_inst) on the OMS server and looking for the port in the REPOSITORY_URL:

Repository URL    : https://mdlinux.localdomain:4900/empbs/upload

[oracle@mdlinux-vm2 ]$ ./agentDeploy.sh AGENT_BASE_DIR=/u01/app/oracle/product RESPONSE_FILE=/tmp/agent.rsp

The last step failed so I manually secured the agent and added the targets:

Performing the agent configuration...
Executing command: /u01/app/oracle/product/core/12.1.0.2.0/oui/bin/runConfig.sh
ORACLE_HOME=/u01/app/oracle/product/core/12.1.0.2.0
RESPONSE_FILE=/u01/app/oracle/product/core/12.1.0.2.0/agent.rsp ACTION=configure
MODE=perform COMPONENT_XML={oracle.sysman.top.agent.11_1_0_1_0.xml}
RERUN=true
Configuration Log Location:
/u01/app/oracle/product/core/12.1.0.2.0/cfgtoollogs/cfgfw/CfmLogger<timestamp>.log
ERROR: Agent Configuration Failed SEVERE:emctl secure agent command has failed with status=1

I followed the steps below to manually secure the agent and add the host and agent targets:

/u01/app/oracle/product/agent_inst/bin/emctl secure agent
/u01/app/oracle/product/agent_inst/bin/emctl start agent
/u01/app/oracle/product/agent_inst/bin/emctl config agent addinternaltargets

The agent is now installed on mdlinux-vm2 and the host can be monitored by Cloud Control.  To stop, start, or check the status of the Agent on either the OMS host or a monitored host, run:

$AGENT_HOME/bin/emctl status | stop | start agent
[oracle@mdlinux-vm2 ]$ /u01/app/oracle/product/agent_inst/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 12.1.0.2.0
OMS Version       : 12.1.0.2.0
Protocol Version  : 12.1.0.1.0
Agent Home        : /u01/app/oracle/product/agent_inst
Agent Binaries    : /u01/app/oracle/product/core/12.1.0.2.0
Agent Process ID  : 14366
Parent Process ID : 14322
Agent URL         : https://mdlinux-vm2.localdomain:3872/emd/main/
Repository URL    : https://mdlinux.localdomain:4900/empbs/upload
Started at        : 2012-10-12 09:20:48
Started by user   : oracle
Last Reload       : (none)
Last successful upload                       : 2012-10-12 10:57:53
Last attempted upload                        : 2012-10-12 10:57:53
Total Megabytes of XML files uploaded so far : 0.51
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 36.81%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2012-10-12 10:57:11
Last successful heartbeat to OMS             : 2012-10-12 10:57:11
Next scheduled heartbeat to OMS              : 2012-10-12 10:58:11
---------------------------------------------------------------
Agent is Running and Ready

Step 5) Use Cloud Control to discover the targets
Log into Cloud Control as sysman
Click on Targets, then Hosts and you should see the new host listed
Click on Setup, Add Target, Add Targets Manually
Add Non-Host Targets using Guided Process
Target Type=Oracle Database, Listener and Automatic Storage Management
Add using Guided Discovery
Specify the host and click Continue

After it finishes with the Auto Discovery process, it will show you the database(s), listeners and ASM instances on that host.  Select whichever ones you want to add, specify the password for the dbsnmp user (make sure the account is not locked) and then click Test Connection.  If successful, click Next, then Next, then Save.
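
One note on that dbsnmp step: if the monitoring account is locked or its password is unknown on the target database, something along these lines (run as a privileged user on the target; the password shown is just a placeholder) should get Test Connection to succeed:

-- unlock the DBSNMP monitoring account and set a known password (placeholder)
ALTER USER dbsnmp IDENTIFIED BY change_me ACCOUNT UNLOCK;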

You should now see the new targets in Cloud Control.

Deferred Segment Creation

Recently I have come across a few problems with the 11gR2 feature known as 'deferred segment creation' that are worth sharing. Deferred segment creation, which is the default in 11gR2, cleverly avoids creating an empty segment until the table is actually used. The documentation explains that this (potentially) saves disk space and allows quicker installations. Some large third-party products (PeopleSoft HR springs to mind) create lots of tables, many of which may not be required by a particular client, so by deferring the creation of the segments until they are actually used the database can certainly save space and installation time.

However, deferring the creation of the segment until it is used can cause issues; here are three that I have encountered recently.

1.
It is now possible to get quota errors at DML time that would usually be generated at create (DDL) time. I’ll illustrate this below but essentially instead of getting errors during a ‘build deployment’ they happen later when the users are back on the system.

2.
In 11.2.0.1, conventional export (i.e. 'exp', not 'expdp') silently skips these tables; this is a bug that is fixed in 11.2.0.2 but is something to be aware of in the base release. Conventional export is "desupported for general use" in 11g, so this may not affect many people these days, although clearly bugs are still getting fixed.

3.
It is now possible to drop tablespaces that *will* contain segments. This seems to affect all versions of 11g currently.

If you encounter these (or other) issues with this feature, you can disable it for the whole database by setting DEFERRED_SEGMENT_CREATION=FALSE in the init.ora/spfile, or per table by using the 'SEGMENT CREATION IMMEDIATE' clause at creation time.
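
As a rough sketch of both options, reusing the TEST table and DSC_INDEXES tablespace from the examples below:

-- disable deferred segment creation for the whole database (dynamic parameter)
ALTER SYSTEM SET deferred_segment_creation = FALSE SCOPE = BOTH;

-- or force immediate segment creation for an individual table
CREATE TABLE test
( test_id   INTEGER NOT NULL,
  test_text VARCHAR2(400),
  CONSTRAINT test_pk PRIMARY KEY (test_id) USING INDEX TABLESPACE dsc_indexes )
SEGMENT CREATION IMMEDIATE;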

Here is some sample code to illustrate issues 1 & 3.

-- Create two tablespaces
CREATE TABLESPACE DSC_TABLES DATAFILE … SIZE 1024M;
CREATE TABLESPACE DSC_INDEXES DATAFILE … SIZE 1024M;
(in the above, obviously customize the datafile paths depending upon your environment)

-- Now create a user but fail to properly assign the quota on the DSC_INDEXES tablespace
CREATE USER DSC IDENTIFIED BY dsc DEFAULT TABLESPACE DSC_TABLES TEMPORARY TABLESPACE TEMP;
GRANT RESOURCE, CONNECT TO DSC;
REVOKE UNLIMITED TABLESPACE FROM DSC;
ALTER USER DSC QUOTA UNLIMITED ON DSC_TABLES;
REM ALTER USER DSC QUOTA UNLIMITED ON DSC_INDEXES

-- Now as user DSC, create our table
create table test ( test_id integer not null, test_text varchar2(400),
constraint test_pk primary key (test_id) using index tablespace dsc_indexes);

At this point we have a table but no segment:

SQL> select table_name from user_tables where table_name = 'TEST';

returns a row but

SQL> select segment_name from user_segments where segment_name = 'TEST';

does not.

When we attempt to insert a row

SQL> insert into dsc.test values (1,'test');

Oracle realizes that the quotas are not right and generates the following error: "ORA-01950: no privileges on tablespace 'DSC_INDEXES'". Note that you get this error even if the user running the insert has a quota on the DSC_INDEXES tablespace - even SYS gets the error - because the error relates back to the segment owner rather than the user attempting the insert.

Of course, you can say "well, your code was screwed up" and that is true, but wouldn't it be safer to find that out during a deployment or build rather than when people start using the system?
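
For completeness, a minimal sketch of the fix: grant the missing quota to the segment owner and the insert then succeeds, creating the deferred table and index segments.

-- grant the quota that was missed during the build, then retry the insert
ALTER USER DSC QUOTA UNLIMITED ON DSC_INDEXES;
INSERT INTO dsc.test VALUES (1, 'test');
COMMIT;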

3.
Here is how to drop a tablespace that *will* contain a segment. The setup is basically the same as above, except this time we correctly assign the quotas (it doesn't matter).

-- Create two tablespaces
CREATE TABLESPACE DSC_TABLES DATAFILE … SIZE 1024M;
CREATE TABLESPACE DSC_INDEXES DATAFILE … SIZE 1024M;
(in the above, obviously customize the datafile paths depending upon your environment)

-- Now create a user and assign tablespace quotas
CREATE USER DSC IDENTIFIED BY dsc DEFAULT TABLESPACE DSC_TABLES TEMPORARY TABLESPACE TEMP;
GRANT RESOURCE, CONNECT TO DSC;
REVOKE UNLIMITED TABLESPACE FROM DSC;
ALTER USER DSC QUOTA UNLIMITED ON DSC_TABLES;
ALTER USER DSC QUOTA UNLIMITED ON DSC_INDEXES;

-- Now as user DSC, create our table
create table test ( test_id integer not null, test_text varchar2(400),
constraint test_pk primary key (test_id) using index tablespace dsc_indexes);

Same as before, at this point we have a table with no segments. However, we can now drop both tablespaces:

SQL> drop tablespace DSC_TABLES;
SQL> drop tablespace DSC_INDEXES;

Note that these commands would error out if the tablespaces actually contained segments, but because they do not yet, they happily go away. An attempt to insert a row into the test table now generates the following error:

SQL> INSERT INTO DSC.TEST VALUES (1,'THIS IS A TEST');
ORA-00959: tablespace 'DSC_TABLES' does not exist

and of course this makes sense - the tablespace doesn't exist - but as with the previous example, it may not be immediately obvious what is causing the error. Unfortunately, attempts to move the table to a tablespace that does exist also throw the same error, although you can re-create the missing tablespace to fix it.
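
A minimal sketch of that last fix; the datafile paths are placeholders for whatever your environment uses, and the quotas are re-granted to be safe since they do not carry over to the re-created tablespaces:

-- re-create the dropped tablespaces, re-grant the quotas, then retry the insert
CREATE TABLESPACE DSC_TABLES  DATAFILE '/u01/oradata/dsc_tables01.dbf'  SIZE 1024M;
CREATE TABLESPACE DSC_INDEXES DATAFILE '/u01/oradata/dsc_indexes01.dbf' SIZE 1024M;
ALTER USER DSC QUOTA UNLIMITED ON DSC_TABLES;
ALTER USER DSC QUOTA UNLIMITED ON DSC_INDEXES;
INSERT INTO dsc.test VALUES (1, 'this is a test');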

Grid Control or Database Control - which one is right for your enterprise?

Grid Control and Database Control are both web-based tools designed to manage Oracle databases. Starting with 10g, Database Control is installed and available with every Oracle Database installation. From Database Control, you can monitor and administer a single Oracle Database instance or a clustered database. With Grid Control, you can manage all of your databases from a single console. This ability to manage multiple databases is what makes Grid Control different from Database Control.

As you may expect, there is a bit more setup and maintenance needed for Grid Control than for Database Control. Grid Control is made up of several components:

Oracle Management Service (OMS): OMS is a J2EE Web application that coordinates with Management Agents to discover targets, monitor and manage them, and store the collected information in a repository for future reference and analysis. OMS also renders the user interface for the Grid Control console. OMS is deployed to the application server that is installed along with other core components of Enterprise Manager Grid Control. In 10g Grid Control, the application server was Oracle Application Server; in 11g, it is WebLogic Server. For performance and availability, it is possible to have multiple Management Services running on separate servers behind a load balancer. The OMS is installed in its own location separate from any database $ORACLE_HOME and preferably on its own dedicated server.

 Management Agent – This runs on each monitored host and is responsible for discovering targets on the host (usually databases and listeners but can be others), collecting information about them and sending that information via XML files to the OMS. The Agent also collects data about the host itself and the network which can be very valuable when troubleshooting problems. This is installed in its own location separate from the database $ORACLE_HOME on each server. You can even install the agent on a server with no database and use Grid Control to monitor just the server.

 Management Repository – The Management Repository is the database where all the information collected by the Management Agent gets stored. It consists of objects such as database jobs, packages, procedures, views, and tablespaces. OMS uploads the monitoring data it receives from the Management Agents to the Management Repository. The Management Repository then organizes the data so that it can be retrieved by OMS and displayed in the Grid Control console. Since data is stored in the Management Repository, it can be shared between any number of administrators accessing Grid Control.

Grid Control Console - This is the user interface you see after you install Grid Control. From the Grid Control console, you can monitor and administer your entire enterprise from one location on the network. All the services within your enterprise, including hosts, databases, listeners, application servers, and so on, are easily managed from one central location.

Database Control has a similar architecture with the exception of the OMS:

Management Agent – This works like the Grid Control agent does but it is installed in the database $ORACLE_HOME and is not started/stopped independently of Database Control.

Management Repository – It performs the same function as it does for Grid Control but stores information for only one database. It is a schema within the database being managed.

Database Control Console - This is the user interface you see after you install Database Control. From the Database Control console, you can monitor and administer a single database.

The big advantage of Grid Control over Database Control is, obviously, the ability to manage multiple databases with one tool. With Grid Control it is much easier to manage all of your databases uniformly. You can apply consistent monitoring rules to all databases.  If you have certain DBAs responsible for only certain databases you can create Groups of databases. If you have a rotating on-call schedule you can configure the Notification Rules to alert the proper person in case of a problem. You can submit jobs to multiple databases at once and even apply patches to multiple databases at once. If you have scheduled maintenance coming up, you can create a Blackout for all the databases on a host at one time. There are reports available that provide a wealth of information about the databases across your enterprise. There are many other examples that I could give on why Grid Control is better than Database Control. Of course, this all comes with a price in terms of complexity.

In my experience as a remote DBA, the biggest problem you run into when using Grid Control to manage lots of databases on many hosts is that the Management Agent is not always stable. I found myself sometimes having to manually restart and even reinstall the agent in order to get it working.

In choosing whether to implement Grid Control or Database Control, I think it simply comes down to how many databases you need to manage. If you have more than just a couple, I would suggest taking the extra time to implement Grid Control. On the other hand, if you have just one or two databases, then Database Control is probably all you need. It is really simple to get working, requires no separate installation and gives you some great functionality.

 Grid Control commands:

OMS in 10g: $OMS_HOME/opmn/bin/opmnctl [start | stop | status]
OMS in 11g: $OMS_HOME/bin/emctl [start | stop | status] oms
Management Agent: $AGENT_HOME/bin/emctl [start | stop | status] agent

Database Control commands

Database Control: $ORACLE_HOME/bin/emctl [start | stop | status] dbconsole
Install Database Control repository: $ORACLE_HOME/bin/emca -config dbcontrol db -repos create

 For complete details on how to set up Grid Control, you can see my blog on the subject at http://www.dbspecialists.com/blog/database-monitoring/installing-11g-grid-control-on-linux.

PostgreSQL, MySQL, and Oracle

Although my main interest over the past 17 years or so has been the venerable Oracle database, as a database administrator and as a programmer, I have also been following the progress of the various open-source databases with quite a keen interest. The two that I've been mainly following are MySQL and PostgreSQL. You're probably aware that MySQL started as a pure open-source database and was later acquired by Sun, which was then of course acquired by Oracle itself. PostgreSQL has always been open source; both databases have been around a very long time, at least as long as I've been mastering Oracle. I've installed them myself for home use several times over the years, and I've even run a number of MySQL databases for various production functions.

I've done a fair amount of work understanding their various limitations over the years. After working with them, it's easy to conclude that MySQL, PostgreSQL and the Oracle RDBMS are very, very different databases on a very fundamental level, but it's hard to put my finger on exactly how they are different.

Recently, I stumbled upon a blog entry that explains the differences between these databases; I would encourage you to go check it out: http://ledgersmbdev.blogspot.co.uk/2012/09/or-modelling-interlude-postgresql-vs.html

His claim that discussing these databases very often ends in an all-caps flame war, with unassailable opinions from proponents of each, is quite true. It's too bad that this is the case.

I completely agree with the author's conclusions, and I give kudos to Chris Travers for coming up with such a clear explanation of the differences, especially between MySQL and PostgreSQL.

As primarily an Oracle DBA/developer, it's interesting that I've never come into contact with anyone using Oracle's object-relational features among the many, many clients' databases that I've worked with. Those features, as I remember, began in Oracle 8i, and they have been in OCP (Oracle Certified Professional) tests since then, probably 13 years. They have been part of the feature set for a very long time; it isn't that these features don't work, it's just that I don't see them in use. The blog entry doesn't go into Oracle that much. I think that Oracle is much closer to PostgreSQL, where you use the database to serve multiple applications and to do quite a lot of the data validation and consistency checking.

I’ll be very interested in the follow-up posts from Chris’s blog regarding the differences in object-relational use of PostgreSQL and Oracle.

-jay

Author: Jay Stanley <jstanley@dbspecialists.com>

Date: 2012-09-12 Wed


Not licensed for AWR? Use Statspack instead!

StatsPack (short for Statistics Package) is a set of Oracle-provided packages that take snapshots of performance-related data and then report on the activity that occurred between any two snapshots.  This can be very useful for troubleshooting many kinds of performance problems as it will show you the top activity in the database during a period of time.  StatsPack has been around for many years, but since the release of 10g, Oracle has been promoting the use of the Automatic Workload Repository (AWR) in place of Statspack.  AWR is part of the separately licensed Diagnostics and Tuning Pack options and costs extra, sometimes a lot extra.  AWR is very feature-rich compared with StatsPack and is integrated with other performance tools, like Active Session History and the Automatic Database Diagnostic Monitor (ADDM).  Given the choice, I will use AWR over Statspack.  But if you have found yourself dealing with a performance problem and are not licensed to use AWR, read on and find out how to get started with StatsPack - at no additional charge!

Statspack is simple to install.  All of the scripts are located in $ORACLE_HOME/rdbms/admin:

[oracle@mdlinux admin]$ sqlplus / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @?/rdbms/admin/spcreate
SQL> @?/rdbms/admin/spauto

Follow the prompts for the password, default tablespace, and temporary tablespace.  spcreate.sql creates the PERFSTAT user along with all of the Statspack objects, and spauto.sql creates a dbms_job that executes a snapshot every hour.  You can modify the job to execute more frequently if needed.  To manually create a snapshot:

SQL> exec perfstat.statspack.snap;
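
If you need snapshots more often than every hour, the dbms_job that spauto.sql created can be given a new interval.  A minimal sketch, assuming the job number returned by the first query turns out to be 42 (yours will be different); this switches to a snapshot every 30 minutes:

SQL> select job, what, interval from dba_jobs where what like '%statspack.snap%';
SQL> exec dbms_job.interval(42, 'trunc(SYSDATE+1/48,''MI'')');
SQL> commit;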

Once you have at least two snapshots created, you can run the performance reports.  There are two reports available: an Instance report and a SQL report.  The Instance report (spreport.sql) is a general instance health report, covering all aspects of instance performance.  It will show the top wait events, the top SQL queries, the busiest segments, the busiest datafiles, the database activity level and many, many other things.  The SQL report (sprepsql.sql) is a report for a specific SQL statement.  The SQL report is usually run after examining the high-load SQL sections of the instance health report.  The SQL report provides detailed statistics and data for a single SQL statement (as identified by its hash value).  This report will also show detailed execution plan information, including whether the plan has changed or not, but only if you change the snap level to 6.  The default value of 5 does not collect execution plan information.  In my opinion, the default value should be 6.

 SQL> exec statspack.modify_statspack_parameter(i_snap_level=>6);

 There are other parameters that you can change if you wish but in my experience there isn’t much need to do so.  They are more fully documented in the $ORACLE_HOME/rdbms/admin/spdoc.txt README for StatsPack.  To run the Instance report:

SQL> @?/rdbms/admin/spreport 

It will show you a list of snapshots and prompt you for the two you are interested in.  The results will be written to a text file.  The first thing I look at in the StatsPack report is the Load Profile, which gives some key statistics that let you know how much work the database is doing.  It is not necessarily the most important thing in the report, but it is close to the top:

 Load Profile                            Per Second       Per Transaction
~~~~~~~~~~~~                       ---------------       ---------------
                  Redo size:            296,369.66             18,862.06
              Logical reads:             20,614.15              1,311.96
              Block changes:              1,313.78                 83.61
             Physical reads:                761.11                 48.44
            Physical writes:                 61.87                  3.94
                 User calls:                350.98                 22.34
                     Parses:                120.54                  7.67
                Hard parses:                  1.61                  0.10
                      Sorts:                 33.22                  2.11
                     Logons:                  1.22                  0.08
                   Executes:                504.59                 32.11
               Transactions:                 15.71

So in this case, the database was doing 15.71 transactions per second, 20,614 logical reads per second and 761 physical reads per second.  Is that a lot or a little?  Well, it all depends on what this database "normally" does.  Compare this report with a report from the same time period one week earlier and see if there is a significant change.  Just below the Load Profile are the Instance Efficiency Percentages:

 Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   99.07    In-memory Sort %:  100.00
            Library Hit   %:   99.68        Soft Parse %:   98.66
         Execute to Parse %:   76.11         Latch Hit %:   99.90
Parse CPU to Parse Elapsd %:   17.75     % Non-Parse CPU:   97.72

 The most famous Hit Ratio of all time, the Buffer Hit %, is of course shown here and "should" be close to 100.  This obviously depends on your application.  For an OLTP app, it definitely should be close to 100, but for a data warehousing app it may be much lower.  If you look at previous reports and the Hit Ratio is usually in the upper 90's and now it is in the 70's, then you can bet something changed for the worse.  I also look at the Soft Parse % and the % Non-Parse CPU.  These indicate how many hard parses are being done and how much CPU time is spent parsing.  Parsing in general, and hard parsing in particular, is a very resource-intensive operation and should be minimized.  A little further down is the Top 5 Timed Events.  In this case, "db file sequential read" is taking up about 50% of all database call time.  This is a pretty normal event as it indicates the database is reading from indexes.  If you see "db file scattered read" at the top, that probably means you are doing lots of full table scans or fast full index scans.

 Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                        108,422         889      8   50.2
CPU time                                                       243          13.7
direct path read                               311,108         147      0    8.3
log file parallel write                            867         138    159    7.8
latch: library cache                               511         137    268    7.8

 Now keep going a bit further and you get to the really interesting stuff: the top SQL statements.  They are ordered in several different ways: by CPU, by elapsed time, by logical reads, physical reads, executions and parses.  Depending on what kind of issue you are having, you may need to look at different sections.  For example, suppose your database suddenly starts waiting on latches and the number of parses per second goes up; the logical thing to look for is SQL statements that are doing lots of parsing.  Or suppose you see that "db file scattered read" is your top wait event; then you would want to look at the SQL statements doing the most physical reads.

 SQL ordered by CPU  DB/Inst: ORAPROD1/oraprod1  Snaps: 43266-43267
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total DB CPU (s):             238
-> Captured SQL accounts for  178.8% of Total DB CPU
-> SQL reported below exceeded  1.0% of Total DB CPU
     CPU                  CPU per             Elapsd                     Old
  Time (s)   Executions  Exec (s)  %Total   Time (s)    Buffer Gets  Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
     80.14        9,306       0.01   33.7     129.20         541,158 1525535925
begin zx_hostengine.runtask (:v1, :v2, :v3, :v4, :v5); end;
      79.77        1,461       0.05   33.6     261.86       2,080,995 1189966800
Module: JDBC Thin Client
call zx_client.sendMessage(:1)
      45.76          193       0.24   19.3     155.81       1,240,393 3937937216
Module: JDBC Thin Client
BEGIN zx_client.submitJob (:1,:2); END;

After identifying the high-resource SQL, you can run the StatsPack SQL report for that particular statement.  It will prompt you for the snapshots and the hash value:

SQL>@?/rdbms/admin/sprepsql.sql

 Another useful section of the report is the Segment Activity:

Segments by Physical Reads  DB/Inst: ORAPROD1/oraprod1  Snaps: 43266-43267
-> End Segment Physical Reads Threshold:      1000
                                           Subobject    Obj.      Physical   Pct
Owner      Tablespace Object Name          Name         Type         Reads Total
---------- ---------- -------------------- ------------ ----- ------------ -----
HOSTTASK   USERS      SYS_LOB0000064624C00              LOB        132,065  48.6
HOSTTASK   HOSTTASK   SYS_LOB0000064668C00              LOB         86,055  31.6
HOSTTASK   HOSTTASK   SYS_LOB0000064593C00              LOB          8,511   3.1
HOSTTASK   HOSTTASK   TS_JOB                            TABLE        2,258    .8
OWF_MGR    OWF_DATA   WF_ITEM_ACTIVTY_STA               INDEX        2,248    .8

This will show you which segments account for the most activity.  This can sometimes really help to pinpoint a problem if you see one particular segment accounts for a majority of the physical reads, for example.  In this case, there are 3 LOB segments that account for over 80% of all physical reads in the database. 
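
When the top segments are SYS_LOB segments, as they are here, you can map them back to their owning table and column with dba_lobs.  A quick sketch; the segment name below is only the prefix taken from the (truncated) report output, so a LIKE match is used:

SQL> select owner, table_name, column_name
     from dba_lobs
     where segment_name like 'SYS_LOB0000064624C00%';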

 This is by no means a complete guide to Statspack.  There are many sections of the report that I didn’t mention and there are many sections of the report that I have never even looked at.  If you are on a pre-10g version of Oracle or are not licensed for AWR, I encourage you to install Statspack and start using it.  It is a great tool and can be invaluable in solving performance problems.

Note:  After I initially wrote this, I discovered that Statspack will break if you upgrade the database from 10.2 to 11.2.  In order to upgrade Statspack from 10.2 to 11.2, do the following after the database upgrade is complete.

SQL> @?/rdbms/admin/spup102.sql
SQL> @?/rdbms/admin/spup1101.sql
SQL> @?/rdbms/admin/spup11201.sql

If you want complete details on upgrading from 10.2 to 11.2, check out the blog I wrote on that: http://www.dbspecialists.com/blog/uncategorized/upgrade-from-10204-to-112032-including-time-zone-files/

Why doesn’t a database role work for PL/SQL procedures?

Stacy, who is a DBA, has an interesting problem: the company that she works for just acquired another company. The newly-acquired company needs read-only SQL access to a table owned by the main application schema, called 'HQ'. The newly-acquired company will be writing stored procedures in their own schema, which is called 'FIELDOFFICE', and these procedures will read data from the table owned by 'HQ'.

This appears to be a pretty simple issue, so Stacy tries this:

First, she creates the two users, and grants them basic permissions:

SQL> create user hq
identified by hq
default tablespace users
temporary tablespace temp;
User created.

SQL> grant connect,resource to hq;

Grant succeeded.

SQL> create user fieldoffice
identified by fieldoffice
default tablespace users
temporary tablespace temp;
User created.

SQL> grant connect,resource to fieldoffice;

Grant succeeded.

Then, she creates a role that she wants to use to control access to the tables in HQ:

SQL> create role accesshq;

Role created.

SQL> grant accesshq to fieldoffice;

Grant succeeded.

Now, she’d like to test it. First, she connects to the HQ user, and creates a test table:

SQL> connect hq/hq
Connected.
SQL> create table mydata(i number);

Table created.

SQL> insert into mydata(i) values (1);

1 row created.

SQL> commit;

Commit complete.

Next, she grants the ACCESSHQ role the ability to SELECT from the HQ.MYDATA table.

SQL> grant select on mydata to accesshq;

Grant succeeded.

Now, she wants to test the FIELDOFFICE user, to make sure that it can indeed select from that table:

SQL> connect fieldoffice/fieldoffice;
Connected.
SQL> select * from session_roles;
ROLE
------------------------------
CONNECT
RESOURCE
ACCESSHQ
SQL> select * from hq.mydata;
I
----------
1

Well! That appears to work. Next, she creates a small PL/SQL procedure, to make sure that it, too, can access that table:

SQL> create or replace procedure showmydata
  2  is
  3    v_i  number;
  4  begin
  5    select i
  6    into  v_i
  7    from hq.mydata
  8    where rownum = 1;
  9    dbms_output.put_line('First number is '||v_i);
 10  end;
 11  /

Warning: Procedure created with compilation errors.

SQL>
SQL> show err
Errors for PROCEDURE SHOWMYDATA:

LINE/COL ERROR
-------- -----------------------------------------------------------------
5/3      PL/SQL: SQL Statement ignored
7/11     PL/SQL: ORA-00942: table or view does not exist
SQL>

What??? It sure appears that even though she can do a direct SQL select from that table, when PL/SQL is involved, it doesn’t work!

PL/SQL will NOT use roles in this way! Privileges received through roles are disabled inside definer's-rights stored procedures (the default), so in order to allow access to the HQ.MYDATA table from within PL/SQL, a direct grant is required.

SQL> connect hq/hq
Connected.
SQL> grant select on mydata to fieldoffice;

Grant succeeded.

Then, the PL/SQL procedure can be recompiled and executed:

SQL> connect fieldoffice/fieldoffice
Connected.
SQL> alter procedure showmydata compile;

Procedure altered.

SQL> execute showmydata;
First number is 1

PL/SQL procedure successfully completed.

This is a very, very common problem, especially among application developers who haven't had a lot of experience with Oracle's use of roles. For more information, do a search on 'definer's rights vs. invoker's rights'.
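
As a footnote, roles are only disabled inside definer's-rights program units; they are honored at run time inside invoker's-rights units. So an alternative sketch, reusing the schemas above, is an AUTHID CURRENT_USER procedure that reaches the table through dynamic SQL, which lets the ACCESSHQ role grant do the work without a direct grant. Whether that is a good design is another question; the direct grant is usually the simpler and more common fix.

create or replace procedure showmydata_ir
  authid current_user
is
  v_i  number;
begin
  -- dynamic SQL is resolved at run time, when roles are enabled for
  -- invoker's-rights units, so the ACCESSHQ role grant is sufficient here
  execute immediate 'select i from hq.mydata where rownum = 1' into v_i;
  dbms_output.put_line('First number is '||v_i);
end;
/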

Author: Jay Stanley <jstanley@dbspecialists.com>

Date: 2012-09-10 Mon


Patching your Oracle database – Critical Patch Update (CPU) or Patch Set Update (PSU)?

Keeping your Oracle database software up to date is a critical and time-consuming task for DBAs.  For many years now, Oracle has been releasing Critical Patch Updates on a quarterly basis.  These patches, as the name implies, contain critical updates to the software, often released in response to a newly found security vulnerability.  More recently, Oracle has also been releasing Patch Set Updates on a quarterly basis.  These also contain important fixes to the Oracle software.  However, there is confusion about the difference between the two and, more importantly, about which one needs to be applied.  So what's the difference, and which one should you apply?

 According to Oracle Support article ID 1446582.1: Frequently Asked Questions (FAQ) Patching Oracle Database Server:

“A PSU is a collection of proactive, stabilizing cumulative patches for a particular product version (base release or patch set).  PSUs are cumulative and include all of the security fixes from CPU patches, plus additional fixes.  Critical Patch Updates are the primary means of releasing security fixes for Oracle products. CPUs are cumulative with respect to prior CPUs and generally contain only security fixes.”

So, there you have it.  CPUs are smaller and more focused than PSUs and mostly deal with security issues.  PSUs contain bug fixes AND they contain the security fixes from the CPU.  When you download a PSU, it will tell you which CPU it contains.  PSUs are on the same quarterly schedule as the Critical Patch Updates (CPUs), specifically the Tuesday closest to the 17th of January, April, July, and October.  One thing to keep in mind, however, is that once a PSU has been installed, the recommended way to get future security content is to apply subsequent PSUs.  Reverting from PSU back to CPU, while possible, would require significant effort and so is not advised.  So with this in mind, why would someone choose to apply a CPU rather than a PSU?  I suppose for folks who are concerned only with security fixes and not functionality fixes, a CPU-only approach may be best.  It does seem to be the more conservative approach, as a CPU is (in theory) less likely to cause trouble than a PSU, simply because it contains fewer code changes.

My personal preference is to apply PSUs and not worry about CPUs.
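
If you are ever unsure what has already been applied to a particular database, one place to look (assuming the patch's post-install SQL, catbundle.sql, was run against the database, which is what records the entry) is the DBA_REGISTRY_HISTORY view; the exact column list varies a little between releases:

SQL> select action_time, action, version, id, comments
     from dba_registry_history
     order by action_time;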

If you would like to be notified when Oracle releases Security Alerts, you can sign up on the Oracle Technology Network website at http://www.oracle.com/technetwork/topics/security/securityemail-090378.html.  You will need to have an account, and then you can subscribe to Oracle Security Alerts.

Unindexed foreign keys & enq:TM - contention

I was recently reminded of the chaos that can be caused by an unindexed foreign key column when there are corresponding updates or deletes on the parent table. To illustrate this particular situation, here is some example DDL:

CREATE TABLE parent_t
(
parent_id	INTEGER NOT NULL,
parent_name	VARCHAR2(30) NULL,
CONSTRAINT parent_t_pk PRIMARY KEY (parent_id)
);

INSERT INTO parent_t SELECT object_id, object_name FROM dba_objects;

CREATE TABLE child_t
(
child_id	INTEGER NOT NULL,
parent_id	INTEGER NULL,
child_name	VARCHAR2(30) NULL,
CONSTRAINT child_t_pk PRIMARY KEY (child_id),
CONSTRAINT parent_id_fk FOREIGN KEY (parent_id) REFERENCES parent_t (parent_id)
);

In short, we have two tables, a parent (parent_t) and a child (child_t), with a foreign key constraint declared between them on the parent_id column but no index on the foreign key column in the child table, i.e. we have no index on child_t(parent_id). This is a relatively common situation and by itself does not cause any problems; the problems arise when we update the primary key column(s) or delete parent rows.

For example, along comes a session that inserts a row into the child table:

SQL> select distinct sid from v$mystat;

       SID
----------
       135

SQL> insert into child_t values (1,20,'child 1');

1 row created.

and along comes a second session that attempts an update on any parent row e.g.

SQL> select distinct sid from v$mystat;

       SID
----------
        71

SQL> update parent_t set parent_id=10, parent_name='par 10' where parent_id=10;

This parent update blocks, waiting on enq: TM - contention. However, the situation now rapidly gets worse; other operations against the parent table also stack up behind the first one, e.g. a regular insert:

SQL> select distinct sid from v$mystat;
       SID
----------
       132

SQL> insert into parent_t values (1000000,'par 1000000');

This parent insert also blocks, waiting on enq: TM - contention. With a busy parent table, it is easy to see that the situation will rapidly turn into a problem. At the moment v$session shows the blockers as:

SQL> select sid, sql_id, blocking_session, event, p2 from v$session where blocking_session is not null
     union
     select sid, sql_id, blocking_session, event, p2 from v$session where sid in (select blocking_session from v$session);

       SID SQL_ID        BLOCKING_SESSION EVENT                                P2
---------- ------------- ---------------- ---------------------------- ----------
        71 25nmv2v6ht56k              135 enq: TM - contention             178452
       132 f0f74a4jzdnpd               71 enq: TM - contention             178452
       135                                SQL*Net message from client           1

while utllockt.sql shows the following output

WAITING_SESSION   LOCK_TYPE         MODE_REQUESTED MODE_HELD      LOCK_ID1          LOCK_ID2
----------------- ----------------- -------------- -------------- ----------------- -----------------
135               None
   71             DML               Share          Row-X (SX)     178452            0
   132            DML               Row-X (SX)     Row-X (SX)     178452            0

Here is the dba_locks output:

SQL> select session_id, lock_type, mode_held, mode_requested, lock_id1, blocking_others
from dba_locks where session_id in (71,135,132) order by last_convert

SESSION_ID LOCK_TYPE         MODE_HELD       MODE_REQUESTED  LOCK_ID1          BLOCKING_OTHERS
---------- ----------------- --------------- --------------- ----------------- ---------------
       132 DML               Row-X (SX)      None            178450            Not Blocking
       132 DML               None            Row-X (SX)      178452            Not Blocking
        71 DML               Row-X (SX)      None            178450            Not Blocking
        71 DML               None            Share           178452            Not Blocking
       135 Transaction       Exclusive       None            65559             Not Blocking
       135 DML               Row-X (SX)      None            178450            Not Blocking
       135 DML               Row-X (SX)      None            178452            Blocking
       132 AE                Share           None            100               Not Blocking
        71 AE                Share           None            100               Not Blocking
       135 AE                Share           None            100               Not Blocking

What is happening in this situation is that SID=71 is blocked waiting to obtain a share lock on the CHILD_T table, but this is not available because of the Row-X (SX) lock held by the uncommitted insert from session SID=135. For enq: TM - contention waits, the 'p2' value corresponds to the dba_objects.object_id value, i.e.

SQL> select object_id, object_name from dba_objects where object_id = 178452

 OBJECT_ID OBJECT_NAME
---------- ------------------------------
    178452 CHILD_T

If we remove the ‘dummy’ primary key update from the SQL issued by SID=71 i.e. if we change the SQL to

SQL> select distinct sid from v$mystat;

       SID
----------
        71

SQL> update parent_t set parent_name='par 10' where parent_id=10;

the problem goes away because the referential integrity (RI) check no longer requires the child table lock. It is worth noting that the child table is empty during all of this; of course there is an uncommitted row, which is what causes the problem, but the size of the child table is clearly not a factor here. Besides fixing the SQL to remove the unnecessary update, we could also index the foreign key column ('create index child_t_parent_id on child_t(parent_id)') or remove the constraint. In general, if you join from the parent to the child, which is often the case (e.g. 'get me all the order lines for this order'), then having the index in place is desirable from a performance perspective. In terms of troubleshooting, if you see lots of sessions blocking on enq: TM - contention, take the p2 value and check the corresponding object for foreign keys with missing indexes; that will usually point you in the right direction.
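
If you want to hunt for these proactively rather than waiting for the locking to show up, a query along the following lines will list candidate unindexed foreign keys.  It is a simplified sketch: it only checks that the leading column of each foreign key is also the leading column of some index on the child table, so multi-column keys deserve a closer look.

select c.owner, c.table_name, cc.column_name, c.constraint_name
from   dba_constraints c
       join dba_cons_columns cc
         on  cc.owner = c.owner
         and cc.constraint_name = c.constraint_name
         and cc.position = 1
where  c.constraint_type = 'R'
and    not exists (select 1
                   from   dba_ind_columns ic
                   where  ic.table_owner     = c.owner
                   and    ic.table_name      = c.table_name
                   and    ic.column_name     = cc.column_name
                   and    ic.column_position = 1)
order by c.owner, c.table_name;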

The Importance of Cleaning Up Historical Data

One of my first programming jobs was at a company which was quite large – it had a large number of programming teams, and very separate teams that did system, network and storage administration. As a programmer, especially a 'green' one, a database was pretty much a big black box – accessed not via SQL but via a pretty well-defined API. When a program we were working on needed to store something, we'd call the 'store' API; when we needed to update it, we'd call the 'update' API; and when we needed the data back, we'd use a 'retrieve' call. Simple!

Skip ahead several years, and the database had grown to be huge for that time. Retrievals were taking longer and longer, and talking to some of the sysadmins late at night around the coffee machine, I learned that the database was getting too large to even back up within a week. The application was starting to hit pretty much a 'brick wall'; it started to fall further and further behind, and something really needed to be done.

Management recognized that there was indeed a big problem, and created a team of people from sysadmin, storage, DBAs, the COO, and programmers, to find a solution – and the big question from the top was “Is it really required to spend a considerable amount of money on hardware to make it work?”. I was of course one of the lucky people assigned to that team.

It actually only took us a week to find the problem; there was no 'cleanup' or 'delete' call in the API, and there were no business rules in place concerning how long data needed to be retained – both from a physical standpoint (what is needed to keep the business functioning) as well as from a legal standpoint. These business concerns and legal requirements can be quite different, I later learned.

If all that a program can do is add and update data, then it is pretty inevitable that a datastore will eventually become just too big. After figuring out the business requirements for data retention, and creating a new API that would flag things for deletion, export them, and then remove them, the row count in the database in question shrank to about 1/10th of its original size, and the good news was that everything started to work much better, except of course the backups.

I should add that this was on Oracle v7, which didn’t have the ability to shrink and relocate segments, rebuild indexes online, or shrink datafiles.

The system administrators were at first confused. They asked “if we deleted almost 90% of the data, why was the database still so large?”.

Of course, the answer was 'Well, when you delete rows in a table, Oracle does not shrink the table (or associated index) segments; there will be more 'empty blocks' or 'non-full blocks' available, but these are still owned by the table (or index) segment; they cannot be used for any other segments.'

So, even after deleting a majority of the data, we still had a pretty big problem concerning backups. And, fixing that took a pretty huge amount of effort – over several months and many man-hours involving people from both the business side & the technical side to fix it.

I learned a few very good lessons over the course of that problem;

  • Databases that get too large can be very difficult to shrink without affecting uptime/availability,
  • If someone had understood the problem before it became a critical issue, and if cleanup were done regularly from the beginning, a lot of work (and downtime) could be avoided,
  • The amount of effort required to create and program those business rules was eventually required anyway, so the work done on shrinking the database was extra work/expense on the company's bottom line,
  • Understanding what can be deleted, what needs to be backed-up/archived before getting deleted, and what must not be deleted, will very likely require someone with business expertise – for example, the COO and the legal team. And often, this is not a simple question at all, and it can take considerable effort to even understand what rules need to be followed.
  • Testing the delete/delete+archive processes is very, very important. Further, if archived data MAY be required in the future, it is important to do the work beforehand so that retrieving it in a suitable form does not take much effort or require programming resources,
  • If the policies are clearly figured out in the early stages of a product, and suitable programming done so that it works reliably, the problem will likely never become an issue.
  • Unless it’s a pretty trivial database, it’s very, very important not to forget the above lessons!

Now that we have Oracle 11g, it is possible to shrink segments online, rebuild indexes online, migrate segments between tablespaces, and even shrink datafiles, if the Enterprise Edition is in use. If the database edition is not Enterprise Edition, then reorganization will likely not be possible online while the database is up – it may even require a full export (exp/expdp) and import (imp/impdp), which can take a considerable amount of downtime.

Even if Enterprise Edition is in use, if there is a segment occupying the upper end of a datafile, then it is not possible to shrink the datafile past that point, so you can still run into problems; see Mike Dean's blog post on resizing datafiles and understanding the high water mark.

TL;DR - it is quite important to ensure that suitable data cleanup processes are in place early in a database's life, to ensure that performance doesn't hit a 'brick wall' later in its life.

Overview of Oracle Auditing

As a remote DBA over the years, I have implemented auditing at a number of customer sites and have found it to be easy to implement, low in overhead, and very useful.  In my opinion, every production database should have at least basic auditing enabled.  In this blog, I will give a quick overview of the different types of auditing and how to implement them.

Oracle has basically three flavors of auditing: Mandatory, Standard and Fine-Grained.

Mandatory auditing happens automatically and records database startup/shutdown as well as SYSDBA and SYSOPER logins to the location specified by the AUDIT_FILE_DEST parameter.  On Windows, it writes these records to the Event Viewer.  On Unix, you can optionally set the AUDIT_SYSLOG_LEVEL parameter and have these events go to the Unix syslog. 

Standard auditing is enabled by setting the AUDIT_TRAIL parameter; depending on the setting, it writes its records to the SYS.AUD$ table or to files in the operating system.  The following are valid values for this parameter:

NONE = disables standard auditing

OS = writes audit records to an OS file

DB = writes audit records to the SYS.AUD$ table in the database 

DB, EXTENDED = writes audit records to the SYS.AUD$ table and includes the complete SQL statement that was executed along with any bind values

XML = writes audit records to an OS file in XML format

XML, EXTENDED = writes audit records to an OS file in XML format and includes the complete SQL statement and bind values

The database will need to be restarted for the change to take effect.  Once you have enabled auditing, you have to tell Oracle which statements you want to audit.  At a minimum, you should audit logon/logoff, user privileges, the use of system privileges and changes to the database (ALTER DATABASE or ALTER SYSTEM).  It would be a big mistake to think you can audit every statement against every table without a performance impact and an overload of useless data.  You should think about your data and decide what needs to be audited.  For example, you may have an EMPLOYEE table that contains salary information.  You can keep track of any changes to this table by running AUDIT INSERT, UPDATE, DELETE ON EMPLOYEE BY ACCESS.
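
Putting those pieces together, here is a minimal sketch assuming a hypothetical HR.EMPLOYEE table; the parameter change is static, so it only takes effect after the restart mentioned above:

-- send audit records to the database audit trail (use DB, EXTENDED to also capture SQL text)
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;

-- after the restart, audit DML against the salary table
AUDIT INSERT, UPDATE, DELETE ON hr.employee BY ACCESS;

-- review what was captured
SELECT os_username, username, obj_name, action_name, returncode, timestamp
FROM   dba_audit_trail
WHERE  owner = 'HR' AND obj_name = 'EMPLOYEE'
ORDER  BY timestamp DESC;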

In addition to your specific needs, there are some commands that I think should be audited in every database:

AUDIT ALTER ANY PROCEDURE BY ACCESS;
AUDIT ALTER ANY TABLE BY ACCESS;
AUDIT ALTER DATABASE BY ACCESS;
AUDIT ALTER SYSTEM BY ACCESS;
AUDIT CREATE ANY EDITION;
AUDIT CREATE ANY JOB BY ACCESS;
AUDIT CREATE ANY LIBRARY BY ACCESS;
AUDIT CREATE ANY PROCEDURE BY ACCESS;
AUDIT CREATE ANY TABLE BY ACCESS;
AUDIT CREATE EXTERNAL JOB BY ACCESS;
AUDIT DROP ANY EDITION;
AUDIT DROP ANY PROCEDURE BY ACCESS;
AUDIT DROP ANY TABLE BY ACCESS;
AUDIT ALTER PROFILE BY ACCESS;
AUDIT ALTER USER BY ACCESS;
AUDIT AUDIT SYSTEM BY ACCESS;
AUDIT CREATE PUBLIC DATABASE LINK BY ACCESS;
AUDIT CREATE SESSION BY ACCESS;
AUDIT CREATE USER BY ACCESS;
AUDIT DROP PROFILE BY ACCESS;
AUDIT DROP USER BY ACCESS;
AUDIT EXEMPT ACCESS POLICY BY ACCESS;
AUDIT GRANT ANY OBJECT PRIVILEGE BY ACCESS;
AUDIT GRANT ANY PRIVILEGE BY ACCESS;
AUDIT GRANT ANY ROLE BY ACCESS;
AUDIT ROLE BY ACCESS;

One word of caution about auditing “create session”.  This will record every logon/logoff that occurs and is extremely important from a security perspective, but it can impose a performance overhead if you have a lot of connection activity. If this is a problem for you, rather than not auditing “create session”, it would be better to examine your application to see if you can reduce the amount of logon/logoff activity.  The best way to minimize connection activity is through a connection pool, where a set of database connections is created once and the application borrows and releases them as needed.
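
If you do audit “create session”, the logon/logoff history can be reviewed with a query along these lines (a sketch only; a RETURNCODE of 1017, for instance, corresponds to ORA-01017, a failed login):

-- Connection activity recorded by AUDIT CREATE SESSION over the last day
SELECT os_username, username, userhost, timestamp, logoff_time, returncode
  FROM dba_audit_session
 WHERE timestamp > SYSDATE - 1
 ORDER BY timestamp;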

Fine-grained auditing enables you to create policies that define specific conditions that must take place for the audit to occur. This enables you to monitor data access based on content. It provides granular auditing of queries, and INSERT, UPDATE, and DELETE operations. For example, a CFO must track access to financial records to guard against employee snooping, with enough detail to determine what data was accessed. It is not enough to know that SELECT privilege was used by a specific user on a particular table. Fine-grained auditing provides this deeper functionality.

In general, fine-grained audit policies are based on simple, user-defined SQL predicates on table objects as conditions for selective auditing. During fetching, whenever policy conditions are met for a row, the query is audited.

For example, you can use fine-grained auditing to audit the following types of actions:

- Accessing a table outside of normal working hours
- Logging in from a particular IP address
- Selecting or updating a particular table column
- Modifying a value in a table column

Fine-grained auditing records are stored in the SYS.FGA_LOG$ table. To find the records that have been generated for the audit policies that are in effect, you can query the DBA_FGA_AUDIT_TRAIL view. The DBA_COMMON_AUDIT_TRAIL view combines both standard and fine-grained audit records, while the DBA_AUDIT_TRAIL view contains only standard auditing records.
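
For example, to see what a particular FGA policy has captured, a query like the following would do (a sketch using the salary_change policy created in the examples below; SQL_TEXT and SQL_BIND are populated only when the policy writes to DB + EXTENDED):

-- Fine-grained audit records for one policy
SELECT db_user, object_schema, object_name, policy_name, timestamp, sql_text
  FROM dba_fga_audit_trail
 WHERE UPPER(policy_name) = 'SALARY_CHANGE'
 ORDER BY timestamp;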

To create a fine-grained audit policy, use the DBMS_FGA.ADD_POLICY procedure. This procedure creates an audit policy using the supplied predicate as the audit condition, and Oracle Database executes that predicate with the privileges of the user who created the policy. The maximum number of fine-grained policies on any one table or view is 256. You can create a policy on any table or view that is not in the SYS schema; the policy does not reside in any particular schema, although its definition is stored in the SYS.FGA$ data dictionary table.  You cannot modify a fine-grained audit policy after you have created it. If you need to change it, drop it and then recreate it, as in the sketch below.
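
For example, dropping a policy before recreating it is just a call to DBMS_FGA.DROP_POLICY, shown here against the same hypothetical MIKE.EMPLOYEE table and salary_change policy used in the examples later in this post:

begin
DBMS_FGA.DROP_POLICY(
object_schema => 'MIKE',
object_name   => 'EMPLOYEE',
policy_name   => 'salary_change');
end;
/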


The syntax for the ADD_POLICY procedure is:

DBMS_FGA.ADD_POLICY(
   object_schema      VARCHAR2,
   object_name        VARCHAR2,
   policy_name        VARCHAR2,
   audit_condition    VARCHAR2,
   audit_column       VARCHAR2,
   handler_schema     VARCHAR2,
   handler_module     VARCHAR2,
   enable             BOOLEAN,
   statement_types    VARCHAR2,
   audit_trail        BINARY_INTEGER IN DEFAULT,
   audit_column_opts  BINARY_INTEGER IN DEFAULT);

In this specification:

- object_schema: Specifies the schema of the object to be audited. (If NULL, the current logon user’s schema is assumed.)
- object_name: Specifies the name of the object to be audited.
- policy_name: Specifies the name of the policy to be created. Ensure that this name is unique.
- audit_condition: Specifies a Boolean condition on a row. NULL is allowed and acts as TRUE; if you specify NULL or no audit condition, then any action on a table with that policy creates an audit record, whether or not rows are returned.
- audit_column: Specifies one or more columns to audit, including hidden columns (such as Oracle Label Security hidden columns) and object type columns. The default, NULL, audits the statement if any column is accessed or affected.
- handler_schema: If an alert is used to trigger a response when the policy is violated, specifies the name of the schema that contains the event handler.
- handler_module: Specifies the name of the event handler, including the package it is in. The handler is invoked only after the first row that matches the audit condition in the query is processed. If the procedure fails with an exception, then the user’s SQL statement fails as well.
- enable: Enables or disables the policy using TRUE or FALSE. The default is TRUE, so if omitted, the policy is enabled.
- statement_types: Specifies the SQL statements to be audited: INSERT, UPDATE, DELETE, or SELECT only.
- audit_trail: Specifies the destination (DB or XML) of fine-grained audit records, and whether to populate LSQLTEXT and LSQLBIND in SYS.FGA_LOG$.
- audit_column_opts: If you specify more than one column in the audit_column parameter, determines whether auditing requires all of those columns or any one of them to be referenced.

For example, you can audit all updates to the SALARY column:

begin
DBMS_FGA.ADD_POLICY(
object_schema      => 'MIKE',
object_name        => 'EMPLOYEE',
policy_name        => 'salary_change',
audit_column       => 'SALARY',
enable             => TRUE,
statement_types    => 'UPDATE',
audit_trail        => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
end;
/

Or, you can get more specific by supplying the AUDIT_CONDITION.  The following creates an FGA policy that records all updates to the SALARY column unless they are made by the table owner, MIKE.  Here the SYS_CONTEXT function is used to look up information about the session and decide whether auditing should occur.

begin
DBMS_FGA.ADD_POLICY(
object_schema      => 'MIKE',
object_name        => 'EMPLOYEE',
policy_name        => 'salary_change',
audit_column       => 'SALARY',
audit_condition    => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') <> ''MIKE''',
enable             => TRUE,
statement_types    => 'UPDATE',
audit_trail        => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
end;
/


Or, you may want to audit any changes made from outside of your application (as identified by the module name set with the DBMS_APPLICATION_INFO package):


begin
DBMS_FGA.ADD_POLICY(
object_schema      => 'MIKE',
object_name        => 'EMPLOYEE',
policy_name        => 'salary_change',
audit_column       => 'SALARY',
audit_condition    => 'SYS_CONTEXT(''USERENV'',''MODULE'') <> ''MY_APP''',
enable             => TRUE,
statement_types    => 'UPDATE,INSERT,DELETE',
audit_trail        => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
end;
/


One advantage of FGA over Standard Auditing is that it does not depend on the AUDIT_TRAIL parameter, which means you can enable it without restarting the database.  So with the combination of Mandatory, Standard and Fine-Grained Auditing, along with some careful consideration of your own auditing requirements, you can record as much or as little information as necessary.