Here’s a quick quiz: of all of the problems that can beset an Oracle database, which one happens most often? Do you think it’s concurrency or locking issues? The CPU or the disk I/O rate being overwhelmed? Oracle internal bugs? Disk space issues? Problems with backup and recovery? Account security? Memory that performs poorly or is not large enough?
Think about that for a minute…
What’s your answer? If you answered disk space issues, collect your prize!
Indeed, in my experience with literally hundreds of Oracle databases over many years, the most common issue I encounter is in the management of disk space. This presentation will cover a few ideas on why this happens, as well as how to approach it.
Why do Oracle databases run out of disk space?
Here are the main reasons that this problem occurs so frequently:
- Most Oracle database schemas are not designed around the lifecycle of data; from input, to updating, to archival.
- Most Oracle databases only grow over time, never shrinking in terms of data volume retained.
- When rows are deleted from a table, neither the table segment nor its index segments shrink, so these segments only get bigger over time by default.
- When segments are removed from a tablespace, there is no automatic shrinking of the datafiles involved – they only get bigger over time.
- Oracle database performance is most often tied to disk I/O performance, and the IOPS that a disk subsystem can handle. Using cheaper, lower-performing disk subsystems just won’t work. This means that it usually takes quite a bit of effort to order, purchase, install and configure the disk subsystem.
- The amount of overhead space that Oracle requires is quite large. Extreme example: we want to store data in a 1-row, 1-column table with an index. Overhead is over 80%. Oracle stores data in blocks (not bits); Oracle doesn’t use the entire block by default but keeps 10% free for future use; it allocates ITLs and other data structures in each block; indexes can effectively double the amount of space a row requires; and there’s overhead for Oracle internal uses like TEMPORARY space, REDO/UNDO, and the system data dictionary. This all amounts to each row stored requiring a LOT more space (+50% to +95%) than the raw data.
- It’s the nature of a database to grow over time as data is added.
- Unfortunately, disk space is often not monitored closely.
- It’s the nature of business to grow in chaotic spurts. This can completely obliterate any former planning that has been done.
- Disk subsystems have been improving and getting cheaper every year since their invention. It doesn’t make economic sense to buy too much more disk space than what you’ll require today, if you won’t need it say in a year.
For the reasons above, and more, running out of disk space is a reality that needs to be handled – and one that needs to be understood by those signing the purchase orders for more storage.
Tips on managing and maximizing disk space
- For most applications, using lots of different tablespaces is counter-productive in restraining disk growth. Tablespaces should be used for specific reasons (using transportable tablespaces, for recovery of part of a database, block-size factors). http://www.dbspecialists.com/blog/uncategorized/simplifying-storage-management-using-fewer-tablespaces/
- Be sure to use pctfree/pctused if appropriate to allow more data to fit in blocks if rowsize doesn’t grow with time. http://www.dbspecialists.com/blog/uncategorized/the-often-overlooked-pctfree-property-for-tables-and-indexes/
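As a sketch, for a table whose rows are inserted once and never grow, PCTFREE can be lowered so each block packs more rows (the table and index names here are hypothetical):

```sql
-- rows in this log-style table are never updated, so reserve no free space
create table audit_log (
  id        number,
  logged_at date,
  detail    varchar2(200)
) pctfree 0;

-- an existing table/index can be changed for blocks formatted in the future:
alter table audit_log pctfree 0;
alter index audit_log_ix rebuild pctfree 2;
```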
- Use schema designs which are 3rd or 4th normal form if performance can handle it; typically these will require a lot less space, especially over time.
- In the application design, try not to store a lot of data that is basically log data that will never be reviewed. If nobody will ever read the data in a database, perhaps it should be archived? If it’s really not used, why is it being collected? This definitely applies to log tables used for debugging purposes.
- Design the lifecycle of the data when the schema is designed. This can yield built-in procedures for archiving, un-archiving, and deleting old data that can have a profound effect on how large the database is. http://www.dbspecialists.com/blog/uncategorized/the-importance-of-cleaning-up-historical-data/
- Consider sectioning or partition data by date - for some business applications, this can make it a bit easier to archive/delete it without affecting performance on the currently-active dataset.
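A sketch of date-based range partitioning (the names are illustrative; note that Partitioning is a separately licensed option on Enterprise Edition). Dropping an old partition frees its space without touching the currently-active data:

```sql
create table orders (
  order_id   number,
  order_date date,
  amount     number
)
partition by range (order_date) (
  partition p2013q4 values less than (to_date('2014-01-01','YYYY-MM-DD')),
  partition p2014q1 values less than (to_date('2014-04-01','YYYY-MM-DD'))
);

-- removing an archived quarter is then a fast DDL operation:
alter table orders drop partition p2013q4;
```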
- Be sure that storage space is effectively monitored, for example with Database Specialists’ Database Rx®.
- Study how changing backup retention and frequency can dramatically affect how much space is required (assuming you’re using a flash-recovery-area for backups).
- If you have the Enterprise Edition, consider periodic shrinking of segments and datafiles online; this can have a pretty large effect if a lot of data has been deleted. Unfortunately, this isn’t possible with the cheaper Oracle editions (Standard Edition & Standard Edition One). http://www.dbspecialists.com/blog/database-maintenance/resizing-datafiles-and-understanding-the-high-water-mark/
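A sketch of reclaiming that space (segment and datafile names are illustrative; the table needs row movement enabled, and the shrink requires an ASSM tablespace):

```sql
-- reclaim space below the high-water mark of a table
alter table orders enable row movement;
alter table orders shrink space cascade;   -- cascade shrinks dependent indexes too

-- then shrink the datafile itself if there is free space at its end
alter database datafile '/u01/oradata/PROD/users01.dbf' resize 2g;
```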
- If you are using v11 or greater, consider using the Deferred Segment Creation feature http://www.dbspecialists.com/blog/uncategorized/deferred-segment-creation/
- Consider turning on monitoring for indexes, and remove those that are not used.
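Index usage monitoring can be turned on per index and checked later (the index name is illustrative):

```sql
alter index orders_cust_ix monitoring usage;

-- after a representative workload period, check whether the index was used:
select index_name, monitoring, used
  from v$object_usage
 where index_name = 'ORDERS_CUST_IX';

-- drop it only if USED stays 'NO' over a full business cycle
```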
- Be sure your auditing is correct, and that jobs exist to archive/clean out old monitoring data, and that you aren’t monitoring anything that isn’t important. http://www.dbspecialists.com/blog/best-practices/overview-of-oracle-auditing/
Tips on managing the growth itself
- Regularly estimate how fast things are growing, and have regular reports to management re: how much time is left before there is no room assuming no change in the business requirements. This is done by tablespace.
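A starting point for such a per-tablespace report (sizes in MB); note it does not account for autoextensible datafiles or free space at the OS level, which also matter:

```sql
select ts.tablespace_name,
       round(df.total_mb)                       total_mb,
       round(df.total_mb - nvl(fs.free_mb, 0))  used_mb,
       round(nvl(fs.free_mb, 0))                free_mb
  from dba_tablespaces ts
  join (select tablespace_name, sum(bytes)/1048576 total_mb
          from dba_data_files group by tablespace_name) df
    on df.tablespace_name = ts.tablespace_name
  left join (select tablespace_name, sum(bytes)/1048576 free_mb
               from dba_free_space group by tablespace_name) fs
    on fs.tablespace_name = ts.tablespace_name
 order by free_mb;
```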
- If there are a lot of tablespaces spread among many mount points or LUNs, it can be difficult to figure out which one(s) will fill up first. Fewer, larger LUNs can help quite a bit – and for this reason ASM usually makes this easier, because most ASM setups don’t have many different disk groups: typically a SYSTEM diskgroup, a DATA diskgroup, and a flash recovery area.
- Have a monitor program in place that will escalate warnings as a tablespace gets full, like Database Rx ®.
The main point of this presentation is this: even though disk space usage is often chaotic in nature, it needs to be closely monitored so that the database can continue to grow. Failing this responsibility will usually result in the application crashing and being unable to support users, and the problem can take considerable time to solve (days), because it often requires new hardware.
Date: 2014-03-24 Mon
Author: Jay Stanley
Org version 7.8.11 with Emacs version 24
In Oracle v10, a new feature was introduced to replace the existing DBMS_JOB job-scheduling API: DBMS_SCHEDULER. Up until that point, the only way to schedule recurring jobs inside the database was to use a very simplistic system, DBMS_JOB. Its limitations included:
- No logging of individual runs, whether successful or failed
- Could only call stored procedures
- No method of ‘chaining’ jobs together so that if one failed, special action could be taken
- No method of notification if a job failed
- No concept of execution windows of time, or of using Oracle Services or RAC in scheduling jobs
- Many other missing features, too numerous to list
The original DBMS_JOB API can be thought of as similar to the Unix crond package.
In this short paper, I’ll show how you can enable the new DBMS_SCHEDULER package to send email to a set of recipients if a job does not complete successfully.
Using the new DBMS_SCHEDULER
The upside to using DBMS_SCHEDULER is that it is far more flexible, and it contains many new features over DBMS_JOB.
The disadvantage is that it is more complicated to work with.
Let’s start by creating a test stored procedure, and then enable the scheduler to send email when it fails.
Oracle considers a PLSQL_BLOCK type job to have failed when it exits with an unhandled exception. If the code of the called PL/SQL block ends with an exception handler like this:

EXCEPTION
  WHEN OTHERS THEN
    NULL;

then it will never fail according to how Oracle measures things: any exception the procedure raises will be silently swallowed by the exception block.
Here’s a test stored procedure that will fail every time (the body is a minimal sketch that raises NO_DATA_FOUND by selecting from an empty result set):

create or replace procedure showme as
  v_dummy varchar2(10);
begin
  -- this query returns no rows, so it raises NO_DATA_FOUND
  select dummy into v_dummy from dual where 1 = 0;
end;
/
Every time this procedure is called, it will exit with an unhandled exception – that being NO_DATA_FOUND, which is a pre-defined Oracle exception.
Let’s try running this in SQL*Plus:
SQL> execute showme;
BEGIN showme; END;
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "JSTANLEY.SHOWME", line 5
ORA-06512: at line 1
Let’s create a job that will run this hourly at 50 minutes after each hour. In this example, the schema name is ‘JSTANLEY’.
begin
  dbms_scheduler.create_job(
    job_name        => 'JSTANLEY.JOB_SHOW_ME',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN showme; END;',
    repeat_interval => 'FREQ=hourly;byminute=50',
    start_date      => sysdate,
    enabled         => TRUE,
    comments        => 'test');
end;
/
We can check this by querying the DBA_SCHEDULER_JOBS view.
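For example (column list abbreviated):

```sql
select job_name, enabled, state, repeat_interval
  from dba_scheduler_jobs
 where job_name = 'JOB_SHOW_ME';
```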
Now that we have the job in there, let’s enable full logging for that job, as well as email if it fails.
SQL> execute dbms_scheduler.set_attribute('JSTANLEY.JOB_SHOW_ME','LOGGING_LEVEL', DBMS_SCHEDULER.LOGGING_FULL);
First, we need to configure DBMS_SCHEDULER so that it knows how to send mail: where the mail server is, and what the sender’s email address is. You can find out whether these are configured with the following script; note that if they are NULL, email will not work. The script prints the relevant attributes – in this example, we are really only checking EMAIL_SENDER, EMAIL_SERVER, and MAX_JOB_SLAVE_PROCESSES.
set serveroutput on size 100000
set lines 200 pages 10000
declare
  v_value varchar2(4000);
begin
  for attr in (select 'email_sender' name from dual
               union all select 'email_server' from dual
               union all select 'max_job_slave_processes' from dual
               union all select 'current_open_window' from dual) loop
    dbms_scheduler.get_scheduler_attribute(attr.name, v_value);
    dbms_output.put_line(rpad(attr.name, 30)||' = '||nvl(v_value, 'NULL'));
  end loop;
end;
/
To set them, run the DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE() call as follows:
SQL> execute dbms_scheduler.set_scheduler_attribute('email_sender','email@example.com');
SQL> execute dbms_scheduler.set_scheduler_attribute('email_server','localhost');
Now that it is configured, we need to enable emailing for the job in question.
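On 11gR2 and later, one way to do this is the DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION procedure; the recipient address below is a placeholder:

```sql
begin
  dbms_scheduler.add_job_email_notification(
    job_name   => 'JSTANLEY.JOB_SHOW_ME',
    recipients => 'email@example.com',
    events     => 'JOB_FAILED');
end;
/
```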
Now, assuming that the SMTP host is correct, and that it doesn’t refuse email sent from the IP address of the Oracle database or object to the sender’s email address, when the job fails, it will send an email!
Below is a sample email in full, including headers:
X-From-Line: firstname.lastname@example.org Wed Mar 19 13:50:59 2014
Received: from localhost (localhost [127.0.0.1])
by dbspecialists.com (8.13.1/8.13.1) with SMTP id s2JKoxOC010760
for email@example.com; Wed, 19 Mar 2014 13:50:59 -0700
Date: Wed, 19 Mar 2014 13:50:59 -0700
From: Jay Stanley <firstname.lastname@example.org>
Subject: Oracle Scheduler Job Notification - JSTANLEY.JOB_SHOW_ME JOB_FAILED
X-IMAPbase: 1371854655 571
Xref: emacs1 Mail.Other:10705
Date: 19-MAR-14 01.50.57.838644 PM -07:00
Log id: 13776
Job class: DEFAULT_JOB_CLASS
Run count: 25
Failure count: 22
Retry count: 0
Error code: 1403
ORA-01403: no data found
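Job outcomes can also be checked inside the database; with full logging enabled, a query like this (columns abbreviated) shows each run:

```sql
select log_date, status, error#
  from dba_scheduler_job_run_details
 where job_name = 'JOB_SHOW_ME'
 order by log_date;
```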
This shows a relatively easy way to enable the DBMS_SCHEDULER API to inform users when a scheduled job exits with an unhandled exception.
Note that setting up email for DBMS_SCHEDULER is completely unrelated to setting it up for Enterprise Manager, or to the UTL_MAIL package – it is self-contained.
Date: 2014-03-19 Wed
Author: Jay Stanley
This is the first installment of a series of shorter whitepapers that just cover the basics of a particular aspect of the Oracle RDBMS. The first one is on the subject of upgrading Oracle. After doing literally hundreds of Oracle RDBMS upgrades and patches over the past 18 years, these are my recommendations.
The first question that you need to ask is: why? Most often, it is to patch a particular bug that is actively causing problems in production. Less often, it’s to gain extra features that the old version lacks. Sometimes, it’s to ensure that the database’s security is as good as it can be. There’s one other reason to upgrade: if Oracle is going to stop supporting and patching the version you are on.
All of these are good reasons to perform an upgrade. If it’s for another reason, you might ask why – if something is working well and is stable, why change it?
The next question is: what version are you going to upgrade to? If at all possible, stay away from upgrading to the first release of a particular version of Oracle. Oracle 7.0 was terrible, while 7.3.4 was arguably one of the most stable versions ever; 8.0.1 was terrible, but the later 8.0 patch sets were quite stable; 9.0.1 was extremely buggy, but fine by the later 9i patch sets; the first release of 10g was horrible, but things were fixed by 10.2.0.5; and I will not be surprised if the first release of 12c is terrible in terms of the number of open bugs and stability problems. Those early versions are like fire; mess with them and you WILL get burned.
Recommendations for the upgrade itself
- Definitely do a trial upgrade of a copy of your target database. Yes, it is a lot of work, but there is no better way to find out whether you will run into any issues, and to learn how long it will take.
- Also, do a trial rollback after the upgrade. It’s surprising how often, when problems do develop in a production upgrade, nobody has tried the rollback first.
- Be sure to keep your COMPATIBLE setting at the old version until the upgrade is successful and you are sure that you will not be rolling it back.
- Upgrade into a different $ORACLE_HOME if possible. Since v11.2, Oracle has changed the way that most upgrades happen: a new, patched $ORACLE_HOME is created, and then the database is shut down, restarted with the new $ORACLE_HOME, and upgraded. This is a much, much better method, as it allows you to fail back to the old version if the upgrade fails, and there is no downtime while the patch is applied to the binaries, making the maintenance window shorter.
- Shut down the listener if possible during the upgrade – this keeps application sessions from connecting. A client connection starting work in the middle of an upgrade can really mess things up.
- Thoroughly read the latest updated My Oracle Support (MOS) document on the exact upgrade(s) that are happening, and make notes. This can be a long read, but any part of it can become immediately important if an upgrade has trouble. Create a separate full plan document that has each step, what to expect, how long each should take, and any difficulties encountered.
- Be very sure that you have a good backup before starting the upgrade. Another option if using Enterprise Edition is to create a Guaranteed Restore Point, so that it’s relatively trivial and fast to undo the upgrade.
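Creating a guaranteed restore point is a one-liner (the restore point name is arbitrary); flashing back to it undoes the upgrade at the database level:

```sql
create restore point before_upgrade guarantee flashback database;

-- and, if the upgrade must be undone (run against a MOUNTed instance):
-- flashback database to restore point before_upgrade;
-- alter database open resetlogs;
```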
- Be sure that there is enough tablespace (especially SYSTEM, SYSAUX and TEMP) for the upgrade. If these fill up, it can again cause a lot of problems.
- Take extra care to review memory pools as well – some upgrades really require a JAVA_POOL memory pool for example, and will fail if there isn’t enough room.
- Be sure to de-schedule all DBMS_SCHEDULER and DBMS_JOB database jobs for the upgrade, possibly by setting JOB_QUEUE_PROCESSES=0. Note that Data Pump export (and import) requires a non-zero JOB_QUEUE_PROCESSES to work. It is likely a good idea to make a copy of the DBA_JOBS and DBA_SCHEDULER_JOBS views before starting, for reference. Also, de-schedule any jobs scheduled through crontab, like backups, as well.
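A minimal sketch of that preparation (the names of the copy tables are arbitrary):

```sql
-- keep a reference copy of the job definitions
create table upgrade_dba_jobs_copy  as select * from dba_jobs;
create table upgrade_sched_jobs_copy as select * from dba_scheduler_jobs;

-- stop DBMS_JOB jobs from running during the upgrade
alter system set job_queue_processes = 0;
```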
- Communicate to operations management very clearly regarding how long it will take for the upgrade, including how long it will take to back it out worst-case, and how long it will take to restore the database from backup if everything goes horribly wrong.
- Be sure that either a QA team, or someone who does quality control, can check the database & application after upgrade to make sure there are no problems. This should be done after the ‘trial’ upgrade, as well as the production upgrade.
- Take a backup of any parameter files (Oracle RDBMS, ASM) before starting – upgrades can change parameter values in spfiles, so if a database is rolled back it can come in quite handy.
- If you have separate development/QA-test/production databases, start by upgrading your development first, then QA, and finally production. This should make it more unlikely that new issues are encountered when the most critical database(s) (production) are upgraded, and it gives time to the developers and QA folks to find any potential issues before the upgrade happens.
- Schedule the upgrade to minimize impact. If the busiest day of the week for the database is over a weekend, don’t do an upgrade on Friday evening. If it’s least busy at 2:00 am on a Tuesday, consider doing it then.
Following my recommendations above will very likely make any Oracle RDBMS upgrade go smoother.
Terry Sutton and I recently had the pleasure of volunteering at a 12cR1 “RAC Attack” lab session held at the November conference of the Northern California Oracle Users Group (NoCOUG). Jimmy Brock from NoCOUG also volunteered, and we had several other experts drop by and help out. For those of you unfamiliar with a “RAC Attack” lab, it is a hands-on session where attendees follow the instructions from the RAC Attack Wikibook, either
http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home for 11gR2 or
http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c for 12cR1
and install a 2-node Oracle RAC database on their laptops. This was the first time that NoCOUG had held such a lab, and our first involvement with one, but it was well attended, with over 30 attendees – and since they had to turn out the lights to get us to leave, it appeared to be a great success. A more concise description of the material is available at the 12c URL above.
RAC Attack is a free curriculum and platform for hands-on learning labs related to Oracle RAC (cluster database), motivated by the belief that the best way to learn RAC is through lots of hands-on experience. The original contributors were Jeremy Schneider, Dan Norris and Parto Jalili. This curriculum has been used since 2008 by organizers of events, by instructors in classes and by individuals at home. Its goal is to help students learn about Oracle RAC cluster databases through guided examples.
RAC Attack differs in depth from other tutorials currently available.
Every keystroke and mouse click is carefully documented here.
The process is covered from the very beginning to the very end - from the very first installation of the Virtual Hypervisor on your laptop to various experiments on your running cluster database… with everything in between.
The labs in the main workbook have been tested thoroughly and repeatedly.
Of course the 12c material is relatively new and “Part II: Exploring RAC” is still under development but it is already excellent and can be used without any major issues to complete the install.
Limitations: Of course this virtualization setup is purely for education and it necessarily omits a few features that a production RAC configuration would typically have for example jumbo frames on the interconnect, bonded network interfaces and multi-path I/O.
We came across a few issues in the lab most of them unrelated to the material as such but more about the preparation and environment. Here is a set of tips for anybody wanting to complete the material by themselves (or in a future lab).
We used the latest version of VirtualBox, v4.3.2, available from http://www.virtualbox.org. Jimmy was our “Mac Guy” and confirmed the lab works fine with this version on Macs.
As with all downloads it is highly desirable to verify the downloads by comparing the digest against the source to ensure the download is correct. One promising lab attempt was brought to an end by an apparently corrupt Oracle Linux .iso.
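On Linux, for example, the digest can be computed with sha256sum (the .iso filename here is illustrative; use whichever digest type the download page publishes):

```shell
# compute the SHA-256 digest of the downloaded image (filename illustrative)
if [ -f OracleLinux-R6-U4-Server-x86_64-dvd.iso ]; then
  sha256sum OracleLinux-R6-U4-Server-x86_64-dvd.iso
fi
# compare the printed digest against the checksum published on the download
# page; any mismatch means the download is corrupt and must be re-fetched
```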
When unzipping the four Oracle Grid Infrastructure and Database 12c Release 1 downloads, you want to end up with only two directories: one called ‘grid’ and one called ‘database’. Some unzipping alternatives leave you with four directories, in which case you should merge them back into the above two.
One annoying problem several attendees encountered was due to their “hardware virtualization extensions”. These appeared as a few different errors, but the most common was “kernel requires an x86_64 cpu, but only detected an i686 cpu unable to boot - please use a kernel appropriate for your cpu” when Oracle Linux attempts to start towards the end of the install. This issue is discussed in detail in the VirtualBox FAQ, but a summary is “…a 64bit GUEST NEEDS VT-X/AMD-V to be active and enabled….”. If you get the above error, you need to enable these extensions by changing your BIOS settings – usually in the Chipset or Processor menus, but possibly in Security Settings or other non-standard menus. If anybody knows of a simple test to ensure you have these extensions enabled, please let us know. We know of the VMware Processor Check for 64-Bit Compatibility, but it unfortunately appears to require registration.
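A rough partial check on Linux hosts (this is our suggestion, not from the RAC Attack material): the CPU flags reveal whether the processor supports the extensions at all, though not whether the BIOS has them enabled.

```shell
# does the CPU advertise VT-x (vmx) or AMD-V (svm)?
grep -m1 -woE 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualization flags found"
# note: the flag shows CPU capability; VT-x can still be disabled in the BIOS
```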
In a lab environment with people working on their laptops there is an ergonomic consideration. It is tricky on a relatively small single screen to constantly shuffle backwards and forwards between the RAC Attack Wikibook and the Virtual Machine windows. If you are following the material in an environment with multiple large monitors this is not a concern but in a lab, on a laptop, it is a hassle. The NoCOUG folks were kind enough to provide attendees with a nice printed manual which helps greatly. However some of the commands themselves are complicated enough that they need to be pasted from an electronic source rather than manually typed and ‘bind’ is very fussy about indentation! In short when the instructions say ‘cut and paste the following command’ we found it most reliable to cut and paste from the wikibook. If you do not have access to the web page ensure you have a suitable electronic source available.
If you only have 8G of RAM available on your laptop, we suggest allocating 1536M to each VM. The RAC Attack material is currently a little inconsistent on this point, e.g. some screens show 3072MB as the “Memory Size” while others show “Base Memory: 1500MB”.
Before step 11 on http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c/Configure_Second_Linux_VM (i.e. before rebooting the collabn2 VM), check your /etc/sysconfig/network-scripts/ifcfg-eth2 file and, if it contains the line

PEERDNS=yes

then change it to

PEERDNS=no

This step is mentioned in the collabn1 setup but not when cloning collabn2. Without it, I found that my /etc/resolv.conf file was getting overwritten on collabn2, which messes up the DNS lookups.
In summary, the RAC Attack material is a powerful resource for anybody wanting to learn about RAC. Its creators and contributors have done a great job of documenting the process so thoroughly that anybody can complete it in a few hours.
Many thanks to Iggy Fernandez, Jimmy Brock and the rest of the NoCOUG Board for their invaluable assistance and support during the preparation and running of the lab.
One thing about working with PL/SQL is the fact that it runs inside the database (Oracle Forms notwithstanding), rather than in a normal OS process. This means that any interaction with the database server OS itself isn’t easily possible. You can create a library in ‘C’ and call that from PL/SQL, but this isn’t usually a trivial exercise.
So, if you have a need for your PL/SQL program to communicate with an outside process, there is one really simple way to go about it: an API called dbms_pipe. This API is available in all editions of Oracle Server, including Standard Edition One; it has been around for a very long time, since Oracle v7, and in my experience it is very stable.
The dbms_pipe API is not related to Unix (or POSIX) pipes – it is its own thing, which only communicates between SQL sessions. It’s a very quick method that you can use to inform an external process immediately when attention is needed. For example, one way to tell if something is needed in a database is to poll a table or query every few seconds; if the query is resource-intensive, this can really eat up a lot of CPU, I/O and memory. Instead of polling, the dbms_pipe API can be used to reduce this load considerably.
dbms_pipe vs. AQ
Oracle also has a product that can be licensed for Enterprise Edition called AQ, or Advanced Queuing. This allows you to create message queues in the database, where basically you have one or more publisher processes which add things to the queue, and one or more subscriber processes which take messages from the queue, though it also supports a point-to-point model. AQ is a very advanced and complicated package; messages can queue within or between systems, you can have rules and transformations for these messages in the database, and it can interface with other systems using HTTP, HTTPS or SMTP. However, AQ is an extra license on top of the Enterprise Edition, so it’s not cheap and it’s not possible to use with a Standard Edition database.
dbms_pipe is not nearly that sophisticated. Using dbms_pipe, one process creates a message for a named database pipe, and other processes can retrieve it, very much like a FIFO (first-in, first-out) queue. The process that creates the message is never blocked. When the process reading the pipe asks for a message and there isn’t one available, it blocks and waits there until the timeout period expires.
To add a message to a dbms_pipe, the schema or owner of the procedure using it will need execute access:

SQL> grant execute on dbms_pipe to myuser;
The API requires that you name the pipe over which messages will flow. I believe the name can be anything, but the processes that put messages into the pipe, and take messages out, will need to use the same pipe name.
To send a message, dbms_pipe.pack_message() is called, followed by the dbms_pipe.send_message() function, like this:

create or replace procedure send_pipe(p_message in varchar2) as
  v_status integer;
begin
  -- two messages are packed: the length first, then the message itself
  dbms_pipe.pack_message(length(p_message));
  dbms_pipe.pack_message(p_message);
  v_status := dbms_pipe.send_message('BIGWIDGET');
  if (v_status != 0) then
    raise_application_error(-20001, 'send_message failed with status '||v_status);
  end if;
end;
/
Note that in the above code, the pipe name BIGWIDGET is given as the first argument to send_message. There are actually two messages sent: the first being the length of the message (numeric), and the second being the message itself. This makes it easy for the routine which retrieves the message(s) to know that it got the entire message – obviously this could become more sophisticated still (maybe a CRC of the message as well?), but it isn’t strictly necessary.
When the above code is run, a message is added to the pipe named BIGWIDGET, and it will stay there.
Retrieving a message from dbms_pipe
The important thing to remember with regards to retrieving messages from the pipe is that by default, the routine will not return until a message is sent to the same named pipe.
Here’s some code to retrieve the message:

create or replace procedure receive_pipe(p_message out varchar2) as
  v_status  integer;
  v_length  integer;
  v_message varchar2(4000);
begin
  -- blocks until a message arrives on the pipe (or the default timeout expires)
  v_status := dbms_pipe.receive_message('BIGWIDGET');
  if (v_status = 0) then
    dbms_pipe.unpack_message(v_length);
    dbms_pipe.unpack_message(v_message);
    p_message := v_message;
  end if;
end;
/
Note that if there is a message for that pipe, this will return immediately. If there are no messages, it will just sit there and wait until there is one.
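If blocking indefinitely is not desirable, receive_message accepts a timeout in seconds (the 5 here is arbitrary); a return status of 1 indicates the call timed out:

```sql
declare
  v_status integer;
begin
  -- wait at most 5 seconds for a message on the pipe
  v_status := dbms_pipe.receive_message('BIGWIDGET', timeout => 5);
  if (v_status = 1) then
    dbms_output.put_line('timed out - no message');
  end if;
end;
/
```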
Example in real-life
Your company, Big and Small Widgets Inc make really big widgets, and also some small ones. The big ones are so large and expensive that it is a major accomplishment when your company sells one. The company has an IRC server where a great bulk of the interoffice business gets taken care of.
The Big Boss would like a message to appear in your IRC channel whenever there is a sale of a Big Widget. The problem is that it is a very expensive query to determine if a Big Widget has been sold – the query typically takes a minute or more, and it consumes a lot of database resources. It is important to the Big Boss that the message be sent immediately when the sale closes.
It’s easy enough to create an IRC ‘bot’ that will do exactly what you need in terms of connecting to the IRC server, polling messages in the database, and sending them – perhaps using the Perl POE::Component::IRC module.
The problem here is that the IRC Bot won’t be able to just execute that really big query over and over without affecting other things that the database is being used for.
This is where using dbms_pipe comes in really handy.
Step 1: in the IRC bot, call the receive_pipe() procedure above. The bot will execute this call, and it will basically pause until a message is received. Following the call to receive_pipe(), it will perform the large query to figure out the details of the sale, and then broadcast it on the desired IRC channel(s).
Step 2: change the sales logic so that as soon as a big widget is sold, the send_pipe() procedure is called.
In this way, when a sale of a Big Widget happens, the IRC bot will immediately know, and will inform the IRC channel as soon as it completes the (resource-intensive) query. The query will not run unless a sale has just taken place, reducing the load on the database.
Although the dbms_pipe API is relatively simplistic, it is definitely very useful when one SQL session needs to be informed by another SQL session that something has happened.
If you have an 11gR2 database and you get this error when trying to run a Statspack snap:
ORA-00001: unique constraint (PERFSTAT.STATS$FILE_HISTOGRAM_PK) violated
ORA-06512: at “PERFSTAT.STATSPACK”, line 3813
ORA-06512: at “PERFSTAT.STATSPACK”, line 5079
ORA-06512: at “PERFSTAT.STATSPACK”, line 105
ORA-06512: at line 1
It is likely due to Bug 11927239, which stems from duplicate rows in the v$filestat and/or v$file_histogram views. The unofficial workaround is to simply create a view in the PERFSTAT schema that eliminates the duplicates; your Statspack snaps should then work okay until you get the bug patch applied or upgrade to a release containing the fix. You will first need to grant explicit access on the view to PERFSTAT:
SQL> connect / as sysdba
SQL> grant select on v_$file_histogram to perfstat;
SQL> connect perfstat
SQL> create view v$file_histogram as select distinct * from sys.v_$file_histogram;
SQL> alter package statspack compile;
If we can assist with this or any other Oracle Database issues you might be having including remote database monitoring and administration, please give us a call at 415-344-0500 ext 501.
One of the new Backup and Recovery features of Oracle 11g Release 2 (11gR2) is the ability to duplicate a database without connecting to the target database. In RMAN terminology, the “target” database is the one you wish to duplicate. In this case, the only requirement is that you provide a disk backup location where RMAN can find all the backups, data file copies, archived logs, and control file copies for database duplication. This database duplication enhancement is helpful when it is not possible to connect to the target database and the recovery catalog.
This came in handy for me recently on a customer project. They wanted me to test a disaster recovery scenario in which the production database server was completely gone and we needed to restore it from tape onto another server. This is a fairly typical DR situation, but in this case it was made more difficult because the directory structure on the test server didn’t match that of the production server. Specifically, on the production server, everything was located on the E: drive while on the Test server, the only drive that exists is C:. This means that all file locations need to be changed as part of the duplicate procedure. This can all be done in one single RMAN command.
Note: One thing that makes this process easier is to enable controlfile autobackups on the production database. This is a good practice in general and should be configured for all databases.
rman target / nocatalog
configure controlfile autobackup on;
Here are the steps that I followed:
Step 1) Copy the most recent backup files and archivelog files to the test server. In this case, we restored from tape to the location c:\prod backup files, gathering in that one directory the backup pieces, archived logs, and controlfile autobackups that RMAN would need for the duplication.
Step 2) Create a file called INITPROD2.ORA in %ORACLE_HOME%\database containing one line: db_name=PROD2
Step 3) At a command prompt:
oradim –new –sid prod2
sqlplus / as sysdba
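At the SQL prompt, the auxiliary instance must be started in NOMOUNT mode before RMAN can connect to it (an implied step here; with the ORACLE_SID set to prod2 and the INITPROD2.ORA pfile from Step 2 in the default location, a bare startup finds it automatically):

SQL> startup nomount
SQL> exit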
rman auxiliary / msglog=c:\restore.log
Step 4) At the RMAN prompt:
duplicate database to prod2 spfile
backup location “c:\prod backup files\”;
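The command I actually ran also set the file location parameters for the new server. A sketch of that fuller form (the paths and parameter values here are illustrative, not the exact ones from the engagement) looks like this:

duplicate database to prod2 spfile
  parameter_value_convert 'E:\', 'C:\'
  set db_file_name_convert 'E:\oradata\prod\', 'C:\oradata\prod2\'
  set log_file_name_convert 'E:\oradata\prod\', 'C:\oradata\prod2\'
  set job_queue_processes '0'
  backup location 'c:\prod backup files\';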
That’s all it takes! As you can see, I changed all the file location parameters and converted the datafile and logfile names with the _CONVERT parameters. You can set any initialization parameters for the new database this way. For example, you may want to disable all database jobs in the duplicate database by setting job_queue_processes=0, or shrink the SGA for a smaller server. RMAN will create the PROD2 database, perform as much recovery as possible given the available archived logs, and open the database with the resetlogs option. This is a really nice feature of RMAN.
I arrived home from Denver, CO around 2am this morning after attending the Rocky Mountain Oracle Users Group Training Days 2013 conference (aka RMOUG TD2013). What a great event! It was two full days (with an optional 3rd half-day) of intense Oracle training – enough to make my brain hurt – given by some of the best-known experts in the field.
Interestingly, the conference was not strictly about Oracle. There were quite a few sessions on MySQL, SQLServer, Hadoop and various development topics. However, since I have limited mental capacity, I decided to stick with what I already know and attended mostly the Oracle-specific sessions. Because we have a few customers running Oracle Exadata, I attended several sessions devoted to this, hoping to improve my understanding in this area. I have to say that I definitely have a better understanding of some of the key aspects of this technology now.
The highlight of the conference for me was on Tuesday at 4pm. This is when I did my own presentation: “All About Oracle Auditing – Everything You Need to Know”. I have been working on this for quite a while and was very happy to have the opportunity to do my presentation in front of other Oracle professionals. Database auditing is an important part of any security strategy and I tried to present the audience with technical facts along with my own observations and experience on the subject. It was both well attended and (I think) well received by the audience. If you weren’t lucky enough to attend the conference, you can download my presentation and the accompanying white paper from our website. You can find them here: http://www.dbspecialists.com/presentations.html#Dean_paper
Have you found that daily full database backups are becoming impractical for your growing databases? Incremental backups are no longer the exception in many shops, but with the increased complexity comes a greater responsibility to thoroughly plan and test your backup solution. A more complicated backup solution makes for a more complicated validation procedure. This article isn’t going to get into every aspect of backup testing, but we will show one example of how an inadequate test of incremental backups can have disastrous results in a database recovery scenario. For this example we used Oracle 11gR2 Enterprise Edition on CentOS 5.5; I tried the same test on Oracle EE 10.2.0.4 and on other releases with similar results.
While auditing a backup solution for a new client recently, we noted that they were using the RMAN syntax for improving backup efficiency:
RMAN> backup database incremental level 0 not backed up since time “blah blah blah”;
The NOT BACKED UP SINCE TIME clause can be handy for resuming backups that were interrupted for some reason. Suppose your strategy calls for weekly level-0 incrementals and daily level-1 incrementals. The level-0 is akin to a full backup, and the level-1 picks up changes made since the level-0 (or since the last level-1, unless you specify that it should be cumulative). If a backup of a pretty large database goes south when the level-0 has covered only half of your data files, you don’t really want to back those files up again on the next attempt; why not just do the level-0 on the ones that remain? But there’s potential for misuse of the NOT BACKED UP SINCE TIME clause here. Let’s illustrate through an example.
RMAN> report need backup;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/test2/system01.dbf
2 0 /u01/oradata/test2/sysaux01.dbf
3 0 /u01/oradata/test2/undotbs01.dbf
4 0 /u01/oradata/test2/users01.dbf
Okay, all of the data files need to be backed up so let’s do a level-0 incremental backup:
RMAN> backup incremental level 0 database;
Starting backup at 02-07-2013 11:52:02
Finished backup at 02-07-2013 11:53:40
Our strategy calls for weekly level-0 incrementals and daily level-1 incrementals so let’s simulate that by taking a level-1 backup of the database now.
RMAN> backup incremental level 1 database;
Starting backup at 02-07-2013 12:08:16
Finished backup at 02-07-2013 12:08:54
So we have a level-0 backup to serve as our baseline and a level-1 backup with changes made since the level-0 was performed. Now let’s try the NOT BACKED UP SINCE TIME clause, but first we’ll introduce a “what-if” condition: what if the level-0 backup has gotten lost or deleted? We’ll simulate that in the file system by manually deleting those backup set pieces, and then crosscheck in RMAN to synchronize the control file and make RMAN aware that they’re missing:
RMAN> crosscheck backup;
RMAN> delete expired backup;
Okay, RMAN now knows that we only have a level-1 incremental backup – not good obviously because you can’t restore your database from a level-1 backup alone. We’ll confirm that RMAN is informed of the situation:
RMAN> report need backup;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/test2/system01.dbf
2 0 /u01/oradata/test2/sysaux01.dbf
3 0 /u01/oradata/test2/undotbs01.dbf
4 0 /u01/oradata/test2/users01.dbf
Now let’s test how our what-if condition causes confusion when combined with an improperly implemented NOT BACKED UP SINCE TIME usage. Let’s try doing a level-0 backup but use a time after the last level-0 backup.
RMAN> backup incremental level 0 database not backed up since time "to_date('02-07-2013 12:00:00','mm-dd-yyyy hh24:mi:ss')";
Starting backup at 02-07-2013 12:15:53
using channel ORA_DISK_1
skipping datafile 1; already backed up on 02-07-2013 12:08:51
skipping datafile 2; already backed up on 02-07-2013 12:08:51
skipping datafile 3; already backed up on 02-07-2013 12:08:51
skipping datafile 4; already backed up on 02-07-2013 12:08:51
Finished backup at 02-07-2013 12:15:54
RMAN is not going to back up the files even though we stated that we want a level-0 backup. The trouble is that RMAN assumes the time we specify is directly related to the incremental level we specified. Even though it knows the files need to be backed up in order to obey the backup retention policy, it blindly skips the files because of the level-1 backup we performed.
Now let’s try a database restore:
RMAN> restore database;
Starting restore at 02-07-2013 12:18:49
using channel ORA_DISK_1
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-03002: failure of restore command at 02/07/2013 12:18:49
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 5 found to restore
RMAN-06023: no backup or copy of datafile 4 found to restore
RMAN-06023: no backup or copy of datafile 3 found to restore
RMAN-06023: no backup or copy of datafile 2 found to restore
Our backups are useless!
What is the lesson here? The lesson is that even though RMAN is a great tool and frees the DBA from a lot of responsibility with regard to backup retention, it can also present the occasional pitfall if you’re not careful. If you use the NOT BACKED UP SINCE TIME clause in conjunction with an incremental backup strategy, make sure the time you use corresponds to the specific level of incremental backup that you performed. As this example shows, if you make the mistake of using the wrong time, you might wind up with egg on your face.
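In our scenario, a safer pattern (an illustrative sketch, not a drop-in script) is to tie the time expression to the level-0 schedule itself, so a resumed weekly level-0 only skips files that already have a backup from within the current weekly cycle:

RMAN> backup incremental level 0 database not backed up since time 'sysdate-7';

With the window matched to the level-0 frequency, a level-1 taken yesterday can no longer satisfy the “already backed up” test for this week’s level-0.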
Finally, we should note here that testing your backups and disaster recovery strategy is not to be taken lightly. If there is one thing a DBA can get fired over, it is losing data. If you are not totally comfortable with your backup solution and need help with testing or if you would just feel better about getting another pair of eyes on your strategy, please give us a call at 415-344-0500 (ext 501) or email email@example.com. We would be happy to help.
Recently, I was doing some work with OpenOffice (OO), and I noticed a part of OO that I wasn’t aware of - the Base database. Base is similar to Microsoft Access, though it of course lacks some features and polish. It does come with the ability to interact with several different databases, the Oracle RDBMS among them. It has a Tables section, where you can design and work with tables; a Queries section, which has several ways to interactively build queries; a Forms section, where you can create a front-end form for a database table or set of tables; and a Reports section for creating reports.
I was curious – just how good is Base, what is it good at, and is it worth bothering with when using it with Oracle?
Installation of Oracle JDBC library
Before Base can connect to your Oracle database, OO needs to be configured to use the Oracle JDBC driver. To do this, first figure out which version of Oracle you are using. Then, go to the Oracle website to retrieve the proper JDBC thin client - at this time it can be found at http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html - though this may change in the future. In my case, this file was called ojdbc6.jar, and it was already installed as part of a thick client. In OO, go to Tools > Options, then under the OpenOffice.org tree find the ‘Java’ node. Click on the Class Path... button on the right, and then click on ‘Add Archive’. Select your ‘ojdbc6.jar’ (or other Oracle JDBC library), and then click ‘Open’ to add it.
You will need to shut down & restart OO in order for this to work. If it is not installed correctly, you will get error messages when trying to add a connection to your target database.
Setting up a connection to your Oracle database
You will require network access, with proper credentials, in order to connect to the Oracle database. While testing, I would suggest using a test account in a database that has no write access to your ‘main’ schemas, for safety.
As this is using Oracle JDBC, you will need to know the:
- service-name, and
- IPV4 network address
for your target database.
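Behind the scenes, these are the same pieces that make up a standard Oracle thin-driver JDBC URL; the host, port, and service name below are made-up examples:

jdbc:oracle:thin:@//10.0.0.25:1521/ORCL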
I’ve found that working with OO connections to Oracle goes much faster if the account you connect with does not have DBA or ’select any table’ privileges. When you first connect, and periodically after that, OO queries the data dictionary to see which tables can be accessed. If this includes most or all tables in the database, this step will be quite slow, loading information for hundreds of tables.
When you first run Base, a Database Wizard dialog box is invoked so that the connection information can be configured. Click on Connect to an Existing Database, change the connection type to Oracle JDBC, then click Next. Enter the Database Name, the Server (where you can enter the IP address if you like), and the Port. Click on Test class; this tests that the Oracle JDBC driver can be loaded. If this returns an error, something is wrong with the installation of the JDBC library above. Then click on Next. At this stage you’ll need to enter your username/password. You can click on Test connection; if you get “The connection was established successfully” then it has been configured correctly. For future connections, just use the Open an existing database file option on the dialog box that pops up at startup to connect to your previously configured database.
Once you have connected, if you have used Microsoft Access before, things should be pretty familiar. Just like MS Access, you can use a wizard to create new tables. It has quite a few tables pre-defined - for example, Customers and TimeBilled could be quite handy, especially if what you are storing is common. However, I ran into an error when trying to create these pre-defined tables (”Missing closing parentheses”), so the wizard may not work for you.
Using the design view, though, worked fine for a test table I created.
There are three ways to define queries - using the wizard, using Create query in Design View, and using Create query in SQL View. These options let you create queries pretty easily - Create query in Design View allows you to visually add tables, then define join conditions, and finally choose which fields are needed. Obviously, if you are using more advanced SQL (including nonstandard Oracle syntax), you’ll need to use Create query in SQL view. Note: I did get some errors when working with queries (”Data cannot be loaded”), so you may encounter some issues here.
Forms allows you to create data entry/view forms. From my testing, it is not nearly as polished or capable as MS Access, but it does work. I was able to create a data-entry form for a simple table using the wizard alone in just a few clicks. I was surprised to see that it does support major/minor forms, or forms/sub-forms.
Reports allows you to define reports visually. The wizards have a few predefined looks that personally I didn’t like much, but luckily you can also define reports in the design view without the wizard.
Other ways of interacting with OO
Similar to Microsoft Office products, OO has a well-documented API. You can control OO from a variety of languages, including C, Perl, and Python among others. In fact, you can programmatically create OO documents, so at least in theory you could programmatically run Base reports from a batch job.
OO Base is an interesting project - it definitely has some rough edges (bugs), at least when interacting with Oracle databases using the JDBC thin driver. My testing was done with version 3.3; I plan to check back in a bit to see if these bugs have been fixed. It may have a use as a query builder and for querying the database, though these features are in Oracle Developer and Tora, as well as most other modern IDEs.