Knowledge increases exponentially. Today, you probably own more books than the great universities of times past: Cambridge University owned fewer than two hundred books in the fifteenth century. First came the invention of writing, then alphabets, then paper, then the printing press, then mechanization, and each step caused an exponential increase in collective human knowledge. In our generation, the internet (which Al Gore so famously invented) has broken the last barriers to the spread of knowledge. Today, everybody has the ability to contribute, communicate, and collaborate. We are all caught up in a tsunami, an avalanche, a conflagration, a veritable explosion of knowledge for the betterment of humankind. This is the blog of the good folks at Database Specialists, a brave band of Oracle database administrators from the great state of California. We bid you greeting, traveler. We hope you find something of value on these pages, and we wish you good fortune on your journey.

PostgreSQL, MySQL, and Oracle

Although my main interest over the past 17 years or so has been the venerable Oracle database, both as a database administrator and as a programmer, I have also been following the progress of the various open-source databases with keen interest. The two I’ve mainly followed are MySQL and PostgreSQL. You’re probably aware that MySQL started as a pure open-source database, was later acquired by Sun, which was then of course acquired by Oracle itself. PostgreSQL has always been open-source. Both databases have been around a very long time, at least as long as I’ve been working with Oracle. I’ve installed them myself for home use several times over the years, and I’ve even run a number of MySQL databases for various production functions.

I’ve done a fair amount of work understanding their various limitations over the years. After working with them, it’s easy to conclude that MySQL, PostgreSQL, and the Oracle RDBMS are very, very different databases at a fundamental level, but it’s hard to put my finger on exactly how they are different.

Recently, I stumbled upon a blog entry that explains the differences between these databases. I would encourage you to go check it out: http://ledgersmbdev.blogspot.co.uk/2012/09/or-modelling-interlude-postgresql-vs.html

His claim that discussions of these databases very often end in an all-caps flame war, with unassailable opinions from proponents of each, is quite true. It’s too bad that this is the case.

I completely agree with the author’s conclusions, and I give kudos to Chris Travers for coming up with such a clear explanation of the differences, especially between MySQL and PostgreSQL.

As primarily an Oracle DBA/developer, I find it interesting that I’ve never come into contact with anyone using Oracle’s object-relational features among the many, many clients’ databases that I’ve worked with. As I remember, those features appeared in Oracle 8i, and they have been on the OCP (Oracle Certified Professional) exams since then, probably 13 years. They have been part of the feature set for a very long time; it isn’t that these features don’t work, it’s just that I don’t see them in use. The blog entry doesn’t go into Oracle that much, but I think Oracle is much closer to PostgreSQL: you use the database to serve multiple applications, and you use the database to do quite a lot of the data validation and consistency checking.

I’ll be very interested in the follow-up posts from Chris’s blog regarding the differences in object-relational use of PostgreSQL and Oracle.

-jay

Author: Jay Stanley <jstanley@dbspecialists.com>

Date: 2012-09-12 Wed


Not licensed for AWR? Use Statspack instead!

StatsPack (short for Statistics Package) is a set of Oracle-provided packages that take snapshots of performance-related data and then report on the activity that occurred between any two snapshots.  This can be very useful for troubleshooting many kinds of performance problems, as it shows you the top activity in the database during a period of time.  StatsPack has been around for many years, but since the release of 10g, Oracle has been promoting the use of the Automatic Workload Repository (AWR) in place of Statspack.  AWR is part of the Diagnostics Pack option (often sold together with the Tuning Pack) and costs extra, sometimes a lot extra.  AWR is very feature-rich compared with StatsPack and is integrated with other performance tools, like Active Session History and the Automatic Database Diagnostic Monitor (ADDM).  Given the choice, I will use AWR over Statspack.  But if you have found yourself dealing with a performance problem and are not licensed to use AWR, read on and find out how to get started with StatsPack - at no additional charge!

Statspack is simple to install.  All of the scripts are located in $ORACLE_HOME/rdbms/admin:

[oracle@mdlinux admin]$ sqlplus / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @?/rdbms/admin/spcreate
SQL> @?/rdbms/admin/spauto

Follow the prompts for the password, the default tablespace, and the temporary tablespace.  The spcreate.sql script creates the PERFSTAT user along with all of the Statspack objects, and spauto.sql creates a dbms_job that executes a snapshot every hour.  You can modify the job to execute more frequently if needed (a sketch of changing the interval follows the next example).  To manually create a snapshot:

SQL> exec perfstat.statspack.snap;
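If hourly snapshots are not frequent enough, you can change the interval of the job that spauto.sql created. Here is a hedged sketch; the job number 21 is hypothetical, so use whatever the first query returns for your database:

SQL> select job, interval from dba_jobs where what like '%statspack.snap%';
SQL> exec dbms_job.interval(21, 'trunc(SYSDATE+1/48,''MI'')');   -- every 30 minutes
SQL> commit;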

Once you have at least two snapshots, you can run the performance reports.  There are two reports available: an Instance report and a SQL report.  The Instance report (spreport.sql) is a general instance health report covering all aspects of instance performance.  It shows the top wait events, the top SQL queries, the busiest segments, the busiest datafiles, the database activity level and many, many other things.  The SQL report (sprepsql.sql) reports on a specific SQL statement (identified by its Hash Value) and is usually run after examining the high-load SQL sections of the Instance report.  It provides detailed statistics and data for that single statement, including detailed execution plan information and whether the plan has changed, but only if you change the snap level to 6; the default level of 5 does not collect execution plan information.  In my opinion, the default should be 6.

 SQL> exec statspack.modify_statspack_parameter(i_snap_level=>6);

There are other parameters that you can change if you wish, but in my experience there isn’t much need to do so.  They are more fully documented in the $ORACLE_HOME/rdbms/admin/spdoc.txt README for StatsPack.  To run the Instance report:

SQL> @?/rdbms/admin/spreport 

It will show you a list of snapshots and prompt you for the two you are interested in.  The results are written to a text file.  The first thing I look at in the StatsPack report is the Load Profile, which gives some key statistics that tell you how much work the database is doing.  It is not necessarily the most important thing in the report, but it is close to the top.

 Load Profile                            Per Second       Per Transaction
~~~~~~~~~~~~                       ---------------       ---------------
                  Redo size:            296,369.66             18,862.06
              Logical reads:             20,614.15              1,311.96
              Block changes:              1,313.78                 83.61
             Physical reads:                761.11                 48.44
            Physical writes:                 61.87                  3.94
                 User calls:                350.98                 22.34
                     Parses:                120.54                  7.67
                Hard parses:                  1.61                  0.10
                      Sorts:                 33.22                  2.11
                     Logons:                  1.22                  0.08
                   Executes:                504.59                 32.11
               Transactions:                 15.71

 

So in this case, the database was doing 15.71 transactions per second, 20,614 logical reads per second and 761 physical reads per second.  Is that a lot or a little?  It all depends on what this database “normally” does.  Compare this report with a report from the same time period one week earlier and see if there is a significant change.  Just below the Load Profile are the Instance Efficiency Percentages:

 Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   99.07    In-memory Sort %:  100.00
            Library Hit   %:   99.68        Soft Parse %:   98.66
         Execute to Parse %:   76.11         Latch Hit %:   99.90
Parse CPU to Parse Elapsd %:   17.75     % Non-Parse CPU:   97.72

The most famous hit ratio of all time, the Buffer Hit %, is of course shown here and “should” be close to 100.  This obviously depends on your application: for an OLTP application it should be close to 100, but for a data warehousing application it may be much lower.  If you look at previous reports and the hit ratio is usually in the upper 90s and now it is in the 70s, then you can bet something changed for the worse.  I also look at the Soft Parse % and the % Non-Parse CPU.  These indicate how many hard parses are being done and how much CPU time is spent parsing.  Parsing in general, and hard parsing in particular, is a very resource-intensive operation and should be minimized.  A little further down is the Top 5 Timed Events section.  In this case, “db file sequential read” is taking up about 50% of all database call time.  This is a pretty normal event, as it indicates the database is reading from indexes.  If you see “db file scattered read” at the top, that probably means you are doing lots of full table scans or fast full index scans.

 Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                        108,422         889      8   50.2
CPU time                                                       243          13.7
direct path read                               311,108         147      0    8.3
log file parallel write                            867         138    159    7.8
latch: library cache                               511         137    268    7.8

Now keep going a bit further and you get to the really interesting stuff: the top SQL statements.  They are ordered in several different ways: by CPU, by elapsed time, by logical reads, by physical reads, by executions and by parses.  Depending on what kind of issue you are having, you may need to look at different sections.  For example, suppose your database suddenly starts waiting on latches and the number of parses per second goes up: the logical thing to look for is SQL statements that are doing lots of parsing.  Or, if you see that “db file scattered read” is your top wait event, you would want to look at the SQL statements doing the most physical reads.

 SQL ordered by CPU  DB/Inst: ORAPROD1/oraprod1  Snaps: 43266-43267
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total DB CPU (s):             238
-> Captured SQL accounts for  178.8% of Total DB CPU
-> SQL reported below exceeded  1.0% of Total DB CPU
     CPU                  CPU per             Elapsd                     Old
  Time (s)   Executions  Exec (s)  %Total   Time (s)    Buffer Gets  Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
     80.14        9,306       0.01   33.7     129.20         541,158 1525535925
begin zx_hostengine.runtask (:v1, :v2, :v3, :v4, :v5); end;
      79.77        1,461       0.05   33.6     261.86       2,080,995 1189966800
Module: JDBC Thin Client
call zx_client.sendMessage(:1)
      45.76          193       0.24   19.3     155.81       1,240,393 3937937216
Module: JDBC Thin Client
BEGIN zx_client.submitJob (:1,:2); END;

After identifying the high-resource SQL, you can run the StatsPack SQL report for that particular statement.  It will prompt you for the snapshots and the hash value:

SQL>@?/rdbms/admin/sprepsql.sql

Another useful section of the report is the Segment Activity:

Segments by Physical Reads  DB/Inst: ORAPROD1/oraprod1  Snaps: 43266-43267
-> End Segment Physical Reads Threshold:      1000
                                           Subobject    Obj.      Physical   Pct
Owner      Tablespace Object Name          Name         Type         Reads Total
---------- ---------- -------------------- ------------ ----- ------------ -----
HOSTTASK   USERS      SYS_LOB0000064624C00                            LOB        132,065  48.6
HOSTTASK   HOSTTASK   SYS_LOB0000064668C00                            LOB         86,055  31.6
HOSTTASK   HOSTTASK   SYS_LOB0000064593C00                            LOB          8,511   3.1
HOSTTASK   HOSTTASK   TS_JOB                                          TABLE        2,258    .8
OWF_MGR    OWF_DATA   WF_ITEM_ACTIVTY_STA                             INDEX        2,248    .8

This shows which segments account for the most activity, which can really help to pinpoint a problem, for example when one particular segment accounts for a majority of the physical reads.  In this case, three LOB segments account for over 80% of all physical reads in the database.

This is by no means a complete guide to Statspack.  There are many sections of the report that I didn’t mention, and some that I have never even looked at.  If you are on a pre-10g version of Oracle or are not licensed for AWR, I encourage you to install Statspack and start using it.  It is a great tool and can be invaluable in solving performance problems.

Note:  After I initially wrote this, I discovered that Statspack will break if you upgrade the database from 10.2 to 11.2.  To upgrade Statspack along with the database, run the following after the database upgrade is complete:

SQL> @?/rdbms/admin/spup102.sql
SQL> @?/rdbms/admin/spup1101.sql
SQL> @?/rdbms/admin/spup11201.sql

If you want complete details on upgrading from 10.2 to 11.2, check out the blog I wrote on that: http://www.dbspecialists.com/blog/uncategorized/upgrade-from-10204-to-112032-including-time-zone-files/

Why doesn’t a database role work for PL/SQL procedures?

Stacy, who is a DBA, has an interesting problem: the company she works for just acquired another company. The newly acquired company needs read-only SQL access to a table owned by the main application schema, called HQ. The newly acquired company will be writing stored procedures in its own schema, called FIELDOFFICE, and those procedures will read data from the table owned by HQ.

This appears to be a pretty simple request, so Stacy tries the following.

First, she creates the two users, and grants them basic permissions:

SQL> create user hq
identified by hq
default tablespace users
temporary tablespace temp;
User created.

SQL> grant connect,resource to hq;

Grant succeeded.

SQL> create user fieldoffice
identified by fieldoffice
default tablespace users
temporary tablespace temp;
User created.

SQL> grant connect,resource to fieldoffice;

Grant succeeded.

Then, she creates a role that she will use to control access to the tables in HQ:

SQL> create role accesshq;

Role created.

SQL> grant accesshq to fieldoffice;

Grant succeeded.

Now, she’d like to test it. First, she connects to the HQ user, and creates a test table:

SQL> connect hq/hq
Connected.
SQL> create table mydata(i number);

Table created.

SQL> insert into mydata(i) values (1);

1 row created.

SQL> commit;

Commit complete.

Next, she grants the ACCESSHQ role the ability to SELECT from the HQ.MYDATA table.

SQL> grant select on mydata to accesshq;

Grant succeeded.

Now she wants to test the FIELDOFFICE user, to make sure that it can indeed select from that table:

SQL> connect fieldoffice/fieldoffice;
Connected.
SQL> select * from session_roles;
ROLE
------------------------------
CONNECT
RESOURCE
ACCESSHQ
SQL> select * from hq.mydata;
I
----------
1

Well! That appears to work. Next, she creates a small PL/SQL procedure to make sure that it, too, can access the table:

SQL> create or replace procedure showmydata
  2  is
  3    v_i  number;
  4  begin
  5    select i
  6    into  v_i
  7    from hq.mydata
  8    where rownum = 1;
  9    dbms_output.put_line('First number is '||v_i);
 10  end;
 11  /

Warning: Procedure created with compilation errors.

SQL>
SQL> show err
Errors for PROCEDURE SHOWMYDATA:

LINE/COL ERROR
-------- -----------------------------------------------------------------
5/3      PL/SQL: SQL Statement ignored
7/11     PL/SQL: ORA-00942: table or view does not exist
SQL>

What??? Even though she can select directly from the table in SQL, when PL/SQL is involved it doesn’t work!

PL/SQL will NOT use roles in this way! Privileges granted through a role are disabled inside a definer’s rights stored procedure, so to access the HQ.MYDATA table from within PL/SQL, a direct grant is required:

SQL> connect hq/hq
Connected.
SQL> grant select on mydata to fieldoffice;

Grant succeeded.

Then, the pl/sql procedure can be recompiled and executed:

SQL> connect fieldoffice/fieldoffice
Connected.
SQL> alter procedure showmydata compile;

Procedure altered.

SQL> execute showmydata;
First number is 1

PL/SQL procedure successfully completed.

This is a very, very common problem that crops up, especially among application developers who haven’t had a lot of experience with Oracle’s use of roles. For more information, do a search on ‘definer’s rights vs. invoker’s rights’.
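To illustrate the other side of that distinction, here is a rough sketch (not part of Stacy’s fix above) of an invoker’s rights version of the same procedure. Because it uses AUTHID CURRENT_USER and dynamic SQL, the reference to HQ.MYDATA is resolved at run time, where role-granted privileges are visible to the invoking user (as long as the procedure is not called from a definer’s rights unit), so the ACCESSHQ role alone would be enough:

create or replace procedure showmydata_ir
authid current_user
is
  v_i  number;
begin
  -- dynamic SQL is checked at run time, so the SELECT privilege
  -- granted through the ACCESSHQ role is honored here
  execute immediate 'select i from hq.mydata where rownum = 1' into v_i;
  dbms_output.put_line('First number is '||v_i);
end;
/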

Author: Jay Stanley <jstanley@dbspecialists.com>

Date: 2012-09-10 Mon


Patching your Oracle database – Critical Patch Update (CPU) or Patch Set Update (PSU)?

Keeping your Oracle database software up to date is a critical and time-consuming task for DBAs.  For many years now, Oracle has been releasing Critical Patch Updates (CPUs) on a quarterly basis.  These patches, as the name implies, contain critical updates to the software, often released in response to a newly found security vulnerability.  More recently, Oracle has also been releasing Patch Set Updates (PSUs) on a quarterly basis.  These also contain important fixes to the Oracle software.  However, there is confusion about the difference between the two and, more importantly, about which one needs to be applied.  So what’s the difference, and which one should you apply?

 According to Oracle Support article ID 1446582.1: Frequently Asked Questions (FAQ) Patching Oracle Database Server:

“A PSU is a collection of proactive, stabilizing cumulative patches for a particular product version (base release or patch set).  PSUs are cumulative and include all of the security fixes from CPU patches, plus additional fixes.  Critical Patch Updates are the primary means of releasing security fixes for Oracle products. CPUs are cumulative with respect to prior CPUs and generally contain only security fixes.”

So, there you have it.  CPUs are smaller and more focused than PSUs and mostly deal with security issues.  PSUs contain bug fixes AND the security fixes from the corresponding CPU.  When you download a PSU, it will tell you which CPU it contains.  PSUs are on the same quarterly schedule as the Critical Patch Updates, specifically the Tuesday closest to the 17th of January, April, July, and October.  One thing to keep in mind, however, is that once a PSU has been installed, the recommended way to get future security content is to apply subsequent PSUs.  Reverting from PSUs back to CPUs, while possible, would require significant effort and so is not advised.  With this in mind, why would someone choose to apply a CPU rather than a PSU?  I suppose for folks who are concerned only with security fixes and not functionality fixes, a CPU-only approach may be best.  It does seem to be the more conservative approach, as a CPU is (in theory) less likely to cause trouble than a PSU, simply because it contains fewer code changes.

My personal preference is to apply PSUs and not worry about CPUs.
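Whichever route you take, it helps to know what has already been applied to a given database. A quick sketch of checking the patch history recorded in the data dictionary:

select action_time, action, version, id, comments
from dba_registry_history
order by action_time;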

If you would like to be notified when Oracle releases Security Alerts, you can sign up on the Oracle Technology Network website at http://www.oracle.com/technetwork/topics/security/securityemail-090378.html.  You will need to have an account, and then you can subscribe to Oracle Security Alerts.

Unindexed foreign keys & enq: TM - contention

I was recently reminded of the chaos that can be caused by an unindexed foreign key column when there are corresponding updates to or deletes from the parent table. To illustrate the situation, here is some example DDL:

CREATE TABLE parent_t
(
parent_id	INTEGER NOT NULL,
parent_name	VARCHAR2(30) NULL,
CONSTRAINT parent_t_pk PRIMARY KEY (parent_id)
);

INSERT INTO parent_t SELECT object_id, object_name FROM dba_objects;

CREATE TABLE child_t
(
child_id	INTEGER NOT NULL,
parent_id	INTEGER NULL,
child_name	VARCHAR2(30) NULL,
CONSTRAINT child_t_pk PRIMARY KEY (child_id),
CONSTRAINT parent_id_fk FOREIGN KEY (parent_id) REFERENCES parent_t (parent_id)
);

In short, we have two tables, a parent (parent_t) and a child (child_t), with a foreign key constraint declared between them on the parent_id column but no index on the foreign key column in the child table, i.e. no index on child_t(parent_id). This is a relatively common situation and by itself does not cause any problems; the problems arise when we update the parent's primary key column(s) or delete parent rows.

For example, along comes a session that inserts a row into the child table:

SQL> select distinct sid from v$mystat;

       SID
----------
       135

SQL> insert into child_t values (1,20,'child 1');

1 row created.

Then along comes a second session that attempts an update on any parent row:

SQL> select distinct sid from v$mystat;

       SID
----------
        71

SQL> update parent_t set parent_id=10, parent_name='par 10' where parent_id=10;

This parent update blocks, waiting on enq: TM - contention. The situation now rapidly gets worse: other operations against the parent table also stack up behind the first one, e.g. a regular insert.

SQL> select distinct sid from v$mystat;
       SID
----------
       132

SQL> insert into parent_t values (1000000,'par 1000000');

This parent insert also blocks, waiting on enq: TM - contention. With a busy parent table, it is easy to see that the situation will rapidly turn into a problem. At the moment, v$session shows the blockers as:

SQL> select sid, sql_id, blocking_session, event, p2 from v$session where blocking_session is not null
     union
     select sid, sql_id, blocking_session, event, p2 from v$session where sid in (select blocking_session from v$session);

       SID SQL_ID        BLOCKING_SESSION EVENT                                P2
---------- ------------- ---------------- ---------------------------- ----------
        71 25nmv2v6ht56k              135 enq: TM - contention             178452
       132 f0f74a4jzdnpd               71 enq: TM - contention             178452
       135                                SQL*Net message from client           1

while utllockt.sql shows the following output:

WAITING_SESSION   LOCK_TYPE         MODE_REQUESTED MODE_HELD      LOCK_ID1          LOCK_ID2
----------------- ----------------- -------------- -------------- ----------------- -----------------
135               None
   71             DML               Share          Row-X (SX)     178452            0
   132            DML               Row-X (SX)     Row-X (SX)     178452            0

and here is the dba_locks output:

SQL> select session_id, lock_type, mode_held, mode_requested, lock_id1, blocking_others
from dba_locks where session_id in (71,135,132) order by last_convert;

SESSION_ID LOCK_TYPE         MODE_HELD       MODE_REQUESTED  LOCK_ID1          BLOCKING_OTHERS
---------- ----------------- --------------- --------------- ----------------- ---------------
       132 DML               Row-X (SX)      None            178450            Not Blocking
       132 DML               None            Row-X (SX)      178452            Not Blocking
        71 DML               Row-X (SX)      None            178450            Not Blocking
        71 DML               None            Share           178452            Not Blocking
       135 Transaction       Exclusive       None            65559             Not Blocking
       135 DML               Row-X (SX)      None            178450            Not Blocking
       135 DML               Row-X (SX)      None            178452            Blocking
       132 AE                Share           None            100               Not Blocking
        71 AE                Share           None            100               Not Blocking
       135 AE                Share           None            100               Not Blocking

What is happening in this situation is that SID=71 is blocked waiting to obtain a Share lock on the CHILD_T table, but that lock is not available because of the Row-X (SX) lock held by the uncommitted insert from session SID=135. For enq: TM - contention waits, the 'p2' value corresponds to the dba_objects.object_id value, i.e.

SQL> select object_id, object_name from dba_objects where object_id = 178452

 OBJECT_ID OBJECT_NAME
---------- ------------------------------
    178452 CHILD_T

If we remove the 'dummy' primary key update from the SQL issued by SID=71, i.e. if we change the SQL to:

SQL> select distinct sid from v$mystat;

       SID
----------
        71

SQL> update parent_t set parent_name='par 10' where parent_id=10;

the problem goes away, because the referential integrity check no longer requires the child table lock. It is worth noting that the child table is essentially empty during all of this; there is one uncommitted row, which is what causes the problem, but the size of the child table is not a factor. Besides fixing the SQL to remove the unnecessary primary key update, we could also index the foreign key column ('create index child_t_parent_id on child_t(parent_id)') or remove the constraint. In general, if you join from the parent to the child, which is often the case (e.g. 'get me all the order lines for this order'), then having the index in place is desirable from a performance perspective anyway. In terms of troubleshooting: if you see lots of sessions blocking on enq: TM - contention, take the p2 value and check the corresponding object for foreign keys with missing indexes; that will usually point you in the right direction.
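To hunt for these proactively, here is a rough sketch of a query that lists foreign key constraints whose leading column has no matching index on the child table. It only checks the leading column of each key, and the SCOTT owner filter is just a placeholder for your own schemas:

select c.owner, c.table_name, c.constraint_name, cc.column_name
from   dba_constraints c, dba_cons_columns cc
where  c.constraint_type = 'R'
and    c.owner = 'SCOTT'                    -- placeholder schema; adjust as needed
and    cc.owner = c.owner
and    cc.constraint_name = c.constraint_name
and    cc.position = 1
and    not exists (select 1
                   from   dba_ind_columns ic
                   where  ic.table_owner = c.owner
                   and    ic.table_name = c.table_name
                   and    ic.column_name = cc.column_name
                   and    ic.column_position = 1);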

The Importance of Cleaning Up Historical Data

One of my first programming jobs was at a company which was quite large: it had many programming teams, and entirely separate teams that did system, network, and storage administration. To a programmer, especially a ‘green’ one, the database was pretty much a big black box, accessed not via SQL but via a pretty well-defined API. When a program we were working on needed to store something, we’d call the ‘store’ API; when we needed to update it, we’d call the ‘update’ API; and when we needed it back, we’d use a ‘retrieve’ call. Simple!

Skip ahead several years, and the database had grown to be huge for its time. Retrievals were taking longer and longer, and talking to some of the sysadmins late at night around the coffee machine, I learned that the database was getting too large to even back up within a week. The application was starting to hit a ‘brick wall’: it was falling further and further behind, and something really needed to be done.

Management recognized that there was indeed a big problem and created a team of people from sysadmin, storage, DBAs, the COO, and programmers to find a solution. The big question from the top was: “Is it really necessary to spend a considerable amount of money on hardware to make it work?” I was, of course, one of the lucky people assigned to that team.

It actually only took us a week to find the problem: there was no ‘cleanup’ or ‘delete’ call in the API, and there were no business rules in place concerning how long data needed to be retained, either from an operational standpoint (what is needed to keep the business functioning) or from a legal standpoint. Those operational and legal requirements can be quite different, I later learned.

If all a program can do is add and update data, then it is pretty inevitable that the datastore will eventually become too big. After figuring out the business requirements for data retention, and creating a new API that would flag things for deletion, export them, and then remove them, the number of rows in the database in question shrank to about one tenth of the original, and the good news was that everything started to work much better - everything, of course, except the backups.

I should add that this was on Oracle v7, which didn’t have the ability to shrink and relocate segments, rebuild indexes online, or shrink datafiles.

The system administrators were at first confused. They asked, “If we deleted almost 90% of the data, why is the database still so large?”

Of course, the answer was: when you delete rows from a table, Oracle does not shrink the table (or associated index) segments. There will be more empty or partially full blocks available for reuse, but those blocks are still owned by the table (or index) segment and cannot be used by any other segment.

So, even after deleting a majority of the data, we still had a pretty big problem concerning backups. Fixing that took a huge amount of effort: several months and many man-hours involving people from both the business side and the technical side.

I learned a few very good lessons over the course of that problem:

  • Databases that get too large can be very difficult to shrink without affecting uptime/availability.
  • If someone had understood the problem before it became a critical issue, and if cleanup had been done regularly from the beginning, a lot of work (and downtime) could have been avoided.
  • The effort required to define and program those business rules was eventually required anyway, so the work done on shrinking the database was extra work and expense against the company’s bottom line.
  • Understanding what can be deleted, what needs to be backed up or archived before being deleted, and what must not be deleted will very likely require someone with business expertise - for example, the COO and the legal team. Often this is not a simple question at all, and it can take considerable effort just to understand what rules need to be followed.
  • Testing the delete/delete+archive processes is very, very important. Further, if archived data may be required in the future, it is important to do the work beforehand so that retrieving it in a usable form does not take much effort or require programming resources.
  • If the policies are clearly figured out in the early stages of a product, and suitable programming is done so that cleanup works reliably, the problem will likely never become an issue.
  • Unless it’s a pretty trivial database, it’s very, very important not to forget the above lessons!

Now that we have Oracle 11g, it is possible to shrink segments online, rebuild indexes online, move segments between tablespaces, and even shrink datafiles, provided the Enterprise Edition is in use. If you are not on Enterprise Edition, then reorganization will likely not be possible online while the database is up; it may even require a full export (expdp) and import (impdp), which can mean a considerable amount of downtime.
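For example, here is a minimal sketch of that kind of online reorganization. The ORDER_HISTORY table, its ORDER_HISTORY_PK index and the datafile path are all hypothetical, and the shrink assumes the segment lives in an ASSM (automatic segment space management) tablespace:

SQL> alter table order_history enable row movement;
SQL> alter table order_history shrink space cascade;
SQL> alter index order_history_pk rebuild online;
SQL> alter database datafile '/oradata/PROD/users01.dbf' resize 10g;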

Even if Enterprise Edition is in use, if a segment occupies the top part of a datafile, then it is not possible to shrink the datafile past that point, so you can still run into problems - see Mike Dean’s blog post on resizing datafiles and understanding the high water mark.

TL;DR - it is quite important to ensure that suitable data cleanup processes are in place early in a database’s life, to ensure that performance doesn’t hit a ‘brick wall’ later on.

Overview of Oracle Auditing

As a Remote DBA over the years, I have implemented auditing at a number of customers and have found it to be easy to implement, low overhead and very useful.  In my opinion, every production database should have at least basic auditing enabled.  In this blog, I will give a quick overview of the different types of Auditing and how to implement them.

 

Oracle has basically three flavors of auditing: Mandatory, Standard and Fine-Grained.

 

Mandatory auditing happens automatically and records database startup/shutdown as well as SYSDBA and SYSOPER logins to the location specified by the AUDIT_FILE_DEST parameter.  On Windows, it writes these records to the Event Viewer.  On Unix, you can optionally set the AUDIT_SYSLOG_LEVEL parameter and have these events go to the Unix syslog. 
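For example, a hedged sketch of routing those records to syslog on Unix (the LOCAL1.WARNING facility/priority is just an example, and it must also be configured in your syslog configuration; the parameter is static, so a restart is needed):

SQL> alter system set audit_syslog_level='LOCAL1.WARNING' scope=spfile;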

 

Standard auditing is enabled by setting the AUDIT_TRAIL parameter and writes its records to the SYS.AUD$ table.  The following are valid values for this parameter:

 

NONE = disables standard auditing

OS = writes audit records to an OS file

DB = writes audit records to the SYS.AUD$ table in the database 

DB, EXTENDED = writes audit records to the SYS.AUD$ table and includes the complete SQL statement that was executed along with any bind values

XML = writes audit records to an OS file in XML format

XML, EXTENDED = writes audit records to an OS file in XML format and includes the complete SQL statement and bind values

 

The database will need to be restarted for the change to take effect.  Once you have enabled auditing, you have to tell Oracle which statements you want to audit.  At a minimum, you should audit logon/logoff, changes to user privileges, the use of system privileges, and changes to the database (alter database or alter system).  It would be a big mistake to think you can audit every statement against every table without a performance impact and an overload of useless data, so think about your data and decide what actually needs to be audited.  For example, you may have an EMPLOYEE table that contains salary information; you can keep track of any changes to it by running AUDIT INSERT, UPDATE, DELETE ON EMPLOYEE BY ACCESS, as sketched below.
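Putting those steps together, here is a minimal sketch, assuming (as in the fine-grained examples later in this post) that the EMPLOYEE table lives in the MIKE schema:

SQL> alter system set audit_trail=DB, EXTENDED scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> audit create session by access;
SQL> audit insert, update, delete on mike.employee by access;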

 

 

In addition to your specific needs, there are some commands that I think should be audited in every database:

·         AUDIT ALTER ANY PROCEDURE BY ACCESS;
·         AUDIT ALTER ANY TABLE BY ACCESS;
·         AUDIT ALTER DATABASE BY ACCESS;
·         AUDIT ALTER SYSTEM BY ACCESS;
·         AUDIT CREATE ANY EDITION;
·         AUDIT CREATE ANY JOB BY ACCESS;
·         AUDIT CREATE ANY LIBRARY BY ACCESS;
·         AUDIT CREATE ANY PROCEDURE BY ACCESS;
·         AUDIT CREATE ANY TABLE BY ACCESS;
·         AUDIT CREATE EXTERNAL JOB BY ACCESS;
·         AUDIT DROP ANY EDITION;
·         AUDIT DROP ANY PROCEDURE BY ACCESS;
·         AUDIT DROP ANY TABLE BY ACCESS;
·         AUDIT ALTER PROFILE BY ACCESS;
·         AUDIT ALTER USER BY ACCESS;
·         AUDIT AUDIT SYSTEM BY ACCESS;
·         AUDIT CREATE PUBLIC DATABASE LINK BY ACCESS;
·         AUDIT CREATE SESSION BY ACCESS;
·         AUDIT CREATE USER BY ACCESS;
·         AUDIT DROP PROFILE BY ACCESS;
·         AUDIT DROP USER BY ACCESS;
·         AUDIT EXEMPT ACCESS POLICY BY ACCESS;
·         AUDIT GRANT ANY OBJECT PRIVILEGE BY ACCESS;
·         AUDIT GRANT ANY PRIVILEGE BY ACCESS;
·         AUDIT GRANT ANY ROLE BY ACCESS;
·         AUDIT ROLE BY ACCESS;

One word of caution about auditing “create session”.  This will record every logon/logoff that occurs and is extremely important from a security perspective, but can impose a performance overhead if you have a lot of connection activity. If this is a problem for you, rather than not auditing “create session”, it would be better to examine your application to see if you can reduce the amount of logon/logoff activity.  The best way to minimize connection activity is through the use of a connection pool, where you create a bunch of database connections and the application uses them and then releases them as needed. 

Fine-grained auditing enables you to create policies that define specific conditions that must take place for the audit to occur. This enables you to monitor data access based on content. It provides granular auditing of queries, and INSERT, UPDATE, and DELETE operations. For example, a CFO must track access to financial records to guard against employee snooping, with enough detail to determine what data was accessed. It is not enough to know that SELECT privilege was used by a specific user on a particular table. Fine-grained auditing provides this deeper functionality.

In general, fine-grained audit policies are based on simple, user-defined SQL predicates on table objects as conditions for selective auditing. During fetching, whenever policy conditions are met for a row, the query is audited.

For example, you can use fine-grained auditing to audit the following types of actions:

        ·         Accessing a table outside of normal working hours
·         Logging in from a particular IP address
·         Selecting or updating a particular table column
·         Modifying a value in a table column

Fine-grained auditing records are stored in the SYS.FGA_LOG$ table. To find the records that have been generated for the audit policies in effect, you can query the DBA_FGA_AUDIT_TRAIL view. The DBA_COMMON_AUDIT_TRAIL view combines both standard and fine-grained audit records, while the DBA_AUDIT_TRAIL view contains only standard auditing records.
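For example, a quick sketch of pulling the last day of fine-grained audit records (the one-day window is arbitrary):

select timestamp, db_user, object_schema, object_name, policy_name, sql_text
from dba_fga_audit_trail
where timestamp > sysdate - 1
order by timestamp;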

To create a fine-grained audit policy, use the DBMS_FGA.ADD_POLICY procedure. This procedure creates an audit policy using the supplied predicate as the audit condition. Oracle Database executes the policy predicate with the privileges of the user who created the policy. The maximum number of fine-grained policies on any table or view object is 256. Oracle Database stores the policy in the data dictionary table, but you can create the policy on any table or view that is not in the SYS schema. After you create the fine-grained audit policy, it does not reside in any specific schema, although the definition for the policy is stored in the SYS.FGA$ data dictionary table.  You cannot modify a fine-grained audit policy after you have created it. If you need to modify the policy, drop it and then recreate it.
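For example, dropping a policy looks like the following minimal sketch, using the salary_change policy that is created in the examples below:

begin
  DBMS_FGA.DROP_POLICY(
    object_schema => 'MIKE',
    object_name   => 'EMPLOYEE',
    policy_name   => 'salary_change');
end;
/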

 

The syntax for the ADD_POLICY procedure is:

DBMS_FGA.ADD_POLICY(
   object_schema      VARCHAR2,
   object_name        VARCHAR2,
   policy_name        VARCHAR2,
   audit_condition    VARCHAR2,
   audit_column       VARCHAR2,
   handler_schema     VARCHAR2,
   handler_module     VARCHAR2,
   enable             BOOLEAN,
   statement_types    VARCHAR2,
   audit_trail        BINARY_INTEGER IN DEFAULT,
   audit_column_opts  BINARY_INTEGER IN DEFAULT);

In this specification:

        ·         object_schema: Specifies the schema of the object to be audited. (If NULL, the current log-on user schema is assumed.)
·         object_name: Specifies the name of the object to be audited.
·         policy_name: Specifies the name of the policy to be created. Ensure that this name is unique.
·         audit_condition: Specifies a Boolean condition in a row. NULL is allowed and acts as TRUE. If you specify NULL or no audit condition, then any action on a table with that policy creates an audit record, whether or not rows are returned
·         audit_column: Specifies one or more columns to audit, including hidden columns. If set to NULL or omitted, all columns are audited. These can include Oracle Label Security hidden columns or object type columns. The default, NULL, causes audit if any column is accessed or affected.
·         handler_schema: If an alert is used to trigger a response when the policy is violated, specifies the name of the schema that contains the event handler.
·         handler_module: Specifies the name of the event handler. Include the package the event handler is in. This function is invoked only after the first row that matches the audit condition in the query is processed. If the procedure fails with an exception, then the user SQL statement fails as well.
·         enable: Enables or disables the policy using true or false. If omitted, the policy is enabled. The default is TRUE.
·         statement_types: Specifies the SQL statements to be audited: INSERT, UPDATE, DELETE, or SELECT only.
·         audit_trail: Specifies the destination (DB or XML) of fine-grained audit records. Also specifies whether to populate LSQLTEXT and LSQLBIND in FGA_LOG$.
·         audit_column_opts: If you specify more than one column in the audit_column parameter, then this parameter determines whether to audit all or specific columns.

For example, you can audit all updates to the SALARY column:

begin
  DBMS_FGA.ADD_POLICY(
    object_schema      => 'MIKE',
    object_name        => 'EMPLOYEE',
    policy_name        => 'salary_change',
    audit_column       => 'SALARY',
    enable             => TRUE,
    statement_types    => 'UPDATE',
    audit_trail        => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
end;
/

Or, you can get more specific by specifying the AUDIT_CONDITION.  This creates an FGA policy that records all updates to the SALARY column unless they are made by MIKE, the owner of the table.  I am using the SYS_CONTEXT function to find information about the current user and using that to determine whether auditing should occur:

begin
  DBMS_FGA.ADD_POLICY(
    object_schema      => 'MIKE',
    object_name        => 'EMPLOYEE',
    policy_name        => 'salary_change',
    audit_column       => 'SALARY',
    audit_condition    => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') <> ''MIKE''',
    enable             => TRUE,
    statement_types    => 'UPDATE',
    audit_trail        => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
end;
/

 

Or, you may want to audit any changes that occur outside of your application, as identified by the module name set with the DBMS_APPLICATION_INFO package:

 

begin
  DBMS_FGA.ADD_POLICY(
    object_schema      => 'MIKE',
    object_name        => 'EMPLOYEE',
    policy_name        => 'salary_change',
    audit_column       => 'SALARY',
    audit_condition    => 'SYS_CONTEXT(''USERENV'',''MODULE'') <> ''MY_APP''',
    enable             => TRUE,
    statement_types    => 'UPDATE,INSERT,DELETE',
    audit_trail        => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
end;
/

 

One advantage of FGA over standard auditing is that you don’t need to set the AUDIT_TRAIL parameter for it to work, which means you can enable it without restarting the database.  So with the combination of mandatory, standard, and fine-grained auditing, along with some careful consideration of your own auditing requirements, you can record as much or as little information as necessary.

Upgrade from 10.2.0.4 to 11.2.0.3.2 including Time Zone files

I recently upgraded a customer database from 10.2.0.4 to 11.2.0.3.2 and wanted to share the procedure that I followed.  It is a pretty straightforward upgrade, similar to many others that I have done.  The one thing that was new to me was upgrading the Time Zone files with the DBMS_DST package. 

 

I used the following Metalink Notes to come up with this procedure.  While most of the following steps would apply to any database, it is important that you review these documents and make sure that your particular situation is covered.  For example, if you are running Database Vault, there are some required steps that are not mentioned in this document.  Also, if your application uses UTL_TCP, UTL_SMTP, UTL_MAIL, UTL_HTTP, or UTL_INADDR packages, then you will need to perform certain operations to allow continued access.

Complete Checklist for Manual Upgrades to 11gR2 [ID 837570.1]

 

Actions For DST Updates When Upgrading To Or Applying The 11.2.0.3 Patchset [ID 1358166.1]

 

Updating the RDBMS DST version in 11gR2 (11.2.0.1 and up) using DBMS_DST [ID 977512.1]

 

 

Pre-Upgrade:  These steps can occur prior to the actual database upgrade

 

Step 1) Install 11.2.0.3 (patch 10404530) into /opt/app/oracle/product/11.2.0/db_1

Download this patch from Oracle Support.  There are 7 zip files associated with this patch but the database requires just the *1of7.zip and *2of7.zip files.  Unzip them into the same directory and install using runInstaller.  The current version of Oracle is installed in /opt/app/oracle/product/10.2.0/db_1.  Oracle 11 will be installed into a new Oracle Home at /opt/app/oracle/product/11.2.0/db_1.

 

Step 2) Install latest version of OPatch (patch 6880880)

Download the 11.2 version of Patch 6880880 for your platform.  Unzip it into the 11g $ORACLE_HOME/OPatch location.  Modify your $PATH variable to include this directory

 

Step 3) Install 11.2.0.3.2 PSU (patch 13696216)

Download patch 13696216 to a staging location

cd <staging location>/13696216

opatch apply

 

Step 4) Run utlu112i.sql and resolve any issues. 

Download this script from Note 884522.1 and run it as SYSDBA.  It will list issues that need to be resolved before the upgrade can succeed.  Create a modified init.ora based on the output of this script: all of the *_dump_dest parameters are replaced by the diagnostic_dest parameter, and tablespace and SGA sizes should be increased as recommended.  Copy the modified init.ora to the new $ORACLE_HOME/dbs.

 

Note: if you don’t run this script prior to the upgrade, the upgrade will fail.

 

Step 5) Run dbupgdiag.sql and resolve any issues

You can get this script from Note 556610.1.  Look through the results of this script to see if any anomalies pop out.  One thing in particular you should look for is how many records are in the SYS.AUD$ table.  If you have a lot of Audit Trail records, it could significantly increase the amount of time for the upgrade.  For more information about this issue, you can look at Oracle Support Note How to Pre-Process SYS.AUD$ Records Pre-Upgrade From 10.1 or later to 11gR1 or later. [ID 1329590.1].  Also, check the section listing duplicate objects owned by SYS and follow the recommendations.
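A quick way to gauge the size of the audit trail before the upgrade (just a sketch; follow the note above before archiving or purging anything):

SQL> select count(*) from sys.aud$;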

 

Step 6) Check for database dictionary corruption

This script came from Note 837570.1 and will detect any logical corruption in data dictionary objects.

Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql

SELECT 'Analyze cluster "'||cluster_name||'" validate structure cascade;'
FROM dba_clusters
WHERE owner='SYS'
UNION
SELECT 'Analyze table "'||table_name||'" validate structure cascade;'
FROM dba_tables
WHERE owner='SYS'
AND partitioned='NO'
AND (iot_type='IOT' OR iot_type is NULL)
UNION
SELECT 'Analyze table "'||table_name||'" validate structure cascade into invalid_rows;'
FROM dba_tables
WHERE owner='SYS'
AND partitioned='YES';

spool off

@$ORACLE_HOME/rdbms/admin/utlvalid.sql

@analyze.sql

 

You should get no output; any errors returned indicate data dictionary corruption that must be resolved before the upgrade.

 

Database Upgrade:

 

Step 1)  Shutdown processes and jobs.  Disable dbms_jobs and dba_scheduler jobs.

lsnrctl stop

emctl stop dbconsole

sqlplus / as sysdba

@?/rdbms/admin/wkremov.sql   (Note: this script is only required if you have Ultra Search installed)

spool /tmp/before_upgrade_values.txt

show parameters job_queue_process

alter system set job_queue_processes=0;

select job_name, enabled from dba_scheduler_jobs;

select username, account_status from dba_users order by 1 asc;

spool off

exec dbms_scheduler.disable(<name>);

purge dba_recyclebin;

exec dbms_stats.gather_dictionary_stats;

shutdown immediate

exit;

 

Step 2) Take a cold backup

Seriously, take a full, cold backup of your database.  It probably won’t be needed but you don’t want to bet your job on it, do you?

 

Step 3) Change /etc/oratab and reset environment variables with oraenv

 

Step 4) Upgrade the Database

cd $ORACLE_HOME/rdbms/admin

SQL> startup UPGRADE

SQL> set echo on

SQL> SPOOL upgrade.log

SQL> @catupgrd.sql

 

$ sqlplus "/ as sysdba"

SQL> STARTUP

SQL> @utlu112s.sql

SQL> @utlrp.sql

SQL> @catuppst.sql   (Note: this script runs "@catbundle psu apply", which applies the PSU)

SQL> @utlrp.sql

SQL> @dbupgdiag.sql

SQL> alter system set job_queue_processes=<value>

SQL> exec dbms_scheduler.enable(<name>);

–  the next three scripts upgrade the Statspack schema

SQL> @spup102.sql

SQL> @spup1101.sql

SQL> @spup11201.sql

SQL> shutdown

SQL> startup

SQL> create spfile from pfile;

SQL> exit

 

Step 5) Upgrade Time Zone files

conn / as sysdba

shutdown immediate;

startup upgrade;

set serveroutput on

 

– check if previous prepare window is ended

 

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value

FROM DATABASE_PROPERTIES

WHERE PROPERTY_NAME LIKE 'DST_%'

ORDER BY PROPERTY_NAME;

 

– output should be

– PROPERTY_NAME VALUE

– —————————- ——————————

– DST_PRIMARY_TT_VERSION <the old DST version number>

– DST_SECONDARY_TT_VERSION 0

– DST_UPGRADE_STATE NONE

 

purge dba_recyclebin;

 

TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;

TRUNCATE TABLE sys.dst$affected_tables;

TRUNCATE TABLE sys.dst$error_table;

 

alter session set "_with_subquery"=materialize;

alter session set "_simple_view_merging"=TRUE;

 

 

EXEC DBMS_DST.BEGIN_UPGRADE(14);

– check that this select

 

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value

FROM DATABASE_PROPERTIES

WHERE PROPERTY_NAME LIKE 'DST_%'

ORDER BY PROPERTY_NAME;

 

– gives this output:

– PROPERTY_NAME VALUE

– ————————— ——————————

– DST_PRIMARY_TT_VERSION <the new DST version number>

– DST_SECONDARY_TT_VERSION <the old DST version number>

– DST_UPGRADE_STATE UPGRADE

 

shutdown immediate

startup

 

alter session set "_with_subquery"=materialize;

alter session set "_simple_view_merging"=TRUE;

 

set serveroutput on

VAR numfail number

BEGIN

DBMS_DST.UPGRADE_DATABASE(:numfail,

parallel => TRUE,

log_errors => TRUE,

log_errors_table => 'SYS.DST$ERROR_TABLE',

log_triggers_table => 'SYS.DST$TRIGGER_TABLE',

error_on_overlap_time => FALSE,

error_on_nonexisting_time => FALSE);

DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);

END;

/

 

– output of this will be a list of tables like:

 

– Table list: SYSMAN.AQ$_MGMT_NOTIFY_QTABLE_S

– Number of failures: 0

 

– if there were no failures, then end the upgrade.

 

VAR fail number

BEGIN

DBMS_DST.END_UPGRADE(:fail);

DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);

END;

/

 

 

– output that will be seen:

– An upgrade window has been successfully ended.

– Failures:0

 

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value

FROM DATABASE_PROPERTIES

WHERE PROPERTY_NAME LIKE 'DST_%'

ORDER BY PROPERTY_NAME;

 

– needed output:

– PROPERTY_NAME VALUE

– —————————- ——————————

– DST_PRIMARY_TT_VERSION <the new DST version number>

– DST_SECONDARY_TT_VERSION 0

– DST_UPGRADE_STATE NONE

 

 

SELECT * FROM v$timezone_file;

 

FILENAME                VERSION

——————– ———-

timezlrg_14.dat              14

select TZ_VERSION from registry$database;

 

update registry$database set TZ_VERSION = (select version FROM v$timezone_file);

commit;

 

Step 6) Update Network config files

Change ORACLE_HOME in listener.ora and copy files to new $ORACLE_HOME/network/admin.  Start the listener and check connectivity

 

Step 7) Modify cron jobs, profiles, etc that may have $ORACLE_HOME hard-coded.

Step 8) Move any Media Management Libraries to the new $ORACLE_HOME/lib

If you do RMAN backups directly to tape, you likely have a symbolic link called libobk.so in $ORACLE_HOME/lib pointing to Media Management software (Netbackup, Legato, etc).  This needs to be recreated in the new $ORACLE_HOME

Step 9) Check for any new accounts that should be locked.

Depending on what options are installed, Oracle may create new users.  Unless you are going to use that functionality, these accounts should be locked.

SQL> select username, account_status from dba_users order by 1 asc;

Resizing datafiles and understanding the High Water Mark

Resizing datafiles is something that I seem to do a lot of.  Usually, I am increasing them to allow for data growth but sometimes I need to shrink them to reclaim unused space.  When shrinking a datafile, you cannot make it smaller than the highest allocated extent in the datafile, aka the High Water Mark (HWM).  If you do, you will get a message like this:

alter database datafile '/oradata/DB10201/test.dbf' resize 120m;
alter database datafile '/oradata/DB10201/test.dbf' resize 120m
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

Below is a quick demo that shows how this works and how to determine exactly how small you can make your datafile, eliminating the trial-and-error approach.  You can also determine exactly which objects are preventing you from shrinking the datafile. First, I will create a 500MB tablespace and a couple of tables.

 create tablespace test2 datafile '/oradata/DB10201/test.dbf' size 500m uniform size 1m;
 create table test (
COL1 NUMBER,
COL2 VARCHAR2(2000),
COL3 VARCHAR2(2000),
COL4 VARCHAR2(2000)) tablespace test2;
declare
v_string varchar2(2000);
begin
for i in 1..1000 loop
v_string := v_string||'XX';
end loop;
for i in 1..10000 loop
      insert into test values (i,v_string,v_string,v_string);     
end loop;
commit;
end;
/

How big is the TEST table?

select sum(bytes)/1024/1024 from dba_segments where segment_name='TEST'
and tablespace_name='TEST2'; 
SUM(BYTES)/1024/1024
--------------------
                  80

What is the HWM for TEST?

select (max((block_id + blocks-1)*8192))/1024/1024 "HWM (MB)"
from dba_extents where owner='SYS' and file_id=6
and segment_name='TEST';
HWM (MB)
----------
   81.0625

(Multiply by 8192 because that is the block size; divide by 1024 twice to convert bytes to MB.)

 This makes sense.  The table is 80MB and the HWM is 81MB (with a bit of overhead).  Now I will create another table in the same tablespace, moving the HWM towards the end of the datafile.

 create table TEST2 tablespace TEST2 as select * from test;

 Check the HWM again

select (max((block_id + blocks-1)*8192))/1024/1024 "HWM (MB)"
from dba_extents where file_id=6;
  HWM (MB)
----------
  160.0625

 Again, this makes sense.  Now I will drop the first table and check the HWM and the total segment size in the tablespace.

drop table test purge;
Table dropped.
select sum(bytes)/1024/1024 "Segment Size (MB)" from dba_segments
where tablespace_name='TEST2';
Segment Size (MB)
-----------------
               80
select (max((block_id + blocks-1)*8192))/1024/1024 "HWM (MB)"
from dba_extents
where file_id=6;
  HWM (MB)
----------
  160.0625

So, at this point there is only 80MB worth of data in the tablespace, but the HWM is 160MB.  If you came into this situation not knowing the history, it would be reasonable to think you could shrink the datafile to something a little more than 80MB.  You would be incorrect.  Because the HWM is 160MB, you won’t be able to shrink the file below that unless you re-org the table.

alter database datafile '/oradata/DB10201/test.dbf' resize 159m;
alter database datafile '/oradata/DB10201/test.dbf' resize 159m
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value
alter database datafile '/oradata/DB10201/test.dbf' resize 161m;
Database altered.

Now let’s look at it from a different angle.  Suppose I try to shrink the datafile to 100MB and get the ORA-03297 error.  You can use this query to find out exactly which objects are preventing the resize (104857600 is 100MB expressed in bytes):

select owner, segment_name from dba_extents
where file_id=6
and (block_id + blocks-1)*8192 > 104857600;

Of course, we already know the answer because this is just a silly demo.  There are various ways to reset the HWM for a datafile but I will save that for another blog.

Use Database Profiles to help secure your database

I have seen many times over the years where a production database experiences performance problems caused by ad hoc queries consuming too many resources.  These often come from developers trying to debug a problem, someone in Operations trying to get a custom report, or some other source.  In extreme cases, this can be, in effect, a classic denial-of-service attack.

One way to solve the problem is to simply restrict access to the production database, particularly if you have a copy of production available for debugging/reporting purposes.  Alternatively, you can implement Database Profiles to restrict the amount of resources any one user is allowed to consume.  They are very easy to implement and can go a long way towards securing your database against an unintentional (or possibly intentional) Denial of Service attack. 

 In order for profiles to work, you must first set the init.ora parameter resource_limit to TRUE.  You can then create profiles and assign them to users.

SQL> alter system set resource_limit=TRUE;
SQL> create profile LIMITED_PROFILE limit sessions_per_user 2;
SQL> create user PROFILE_TEST identified by abcd profile LIMITED_PROFILE;
SQL> grant create session to profile_test;

Now when I connect as the PROFILE_TEST user, the first two connections are fine, but the third gets:

[oracle@mdlinux ~]$ sqlplus profile_test/abcd
ERROR:
ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit

Now that the user can only have two sessions at a time, I will further crank down the screws by limiting the number of logical reads allowed per session.  In a real-world scenario, you will need to come up with a reasonable limit, but for testing I will set it to 10000.

SQL> alter profile LIMITED_PROFILE limit logical_reads_per_session 10000;

And when they run some crazy query, it will stop them in their tracks:

SQL> select count(*) from dba_objects a, dba_objects b;
select count(*) from dba_objects a, dba_objects b
                     *
ERROR at line 1:
ORA-02394: exceeded session limit on IO usage, you are being logged off

To see what you can limit with a profile, run this query:

SQL> select unique RESOURCE_TYPE, RESOURCE_NAME from dba_profiles order by 1,2;
 RESOURCE RESOURCE_NAME
------- --------------------------------
KERNEL   COMPOSITE_LIMIT
KERNEL   CONNECT_TIME
KERNEL   CPU_PER_CALL
KERNEL   CPU_PER_SESSION
KERNEL   IDLE_TIME
KERNEL   LOGICAL_READS_PER_CALL
KERNEL   LOGICAL_READS_PER_SESSION
KERNEL   PRIVATE_SGA
KERNEL   SESSIONS_PER_USER
PASSWORD FAILED_LOGIN_ATTEMPTS
PASSWORD PASSWORD_GRACE_TIME
PASSWORD PASSWORD_LIFE_TIME
PASSWORD PASSWORD_LOCK_TIME
PASSWORD PASSWORD_REUSE_MAX
PASSWORD PASSWORD_REUSE_TIME
PASSWORD PASSWORD_VERIFY_FUNCTION
16 rows selected.

The PASSWORD resources are used to manage passwords, and the KERNEL resources are used to manage system resources.  For security reasons, you may want to set the IDLE_TIME limit; this will prevent users from staying logged into the database for extended periods with no activity (a sketch follows the query below).  To see which users have been assigned which profiles, you can query the dba_users view:

select username, profile from dba_users;
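And, following up on the IDLE_TIME suggestion above, here is a minimal sketch that limits idle sessions to 30 minutes (the value is only an example):

SQL> alter profile LIMITED_PROFILE limit idle_time 30;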

So, if you get a chance, check out Database Profiles.  They are quick, easy and can possibly help protect your database from an unpleasant situation.