Terry Sutton and I recently had the pleasure of volunteering at a 12cR1 “RAC Attack” lab held at the November conference of the Northern California Oracle Users Group (NoCOUG). Jimmy Brock from NoCOUG also volunteered, and several other experts dropped by to help out. For those of you unfamiliar with a “RAC Attack” lab, it is a hands-on session where attendees follow the instructions from the RAC Attack Wikibook, either
http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home for 11gR2 or
http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c for 12cR1,
and install a two-node Oracle RAC database on their laptops. This was the first time NoCOUG had held such a lab, and our first involvement with one, but it was well attended with over 30 attendees, and since the organizers had to turn out the lights to get us to leave, it appeared to be a great success. A more concise description of the material is available at the above (12c) URL:
RAC Attack is a free curriculum and platform for hands-on learning labs related to Oracle RAC (cluster database), motivated by the belief that the best way to learn RAC is through lots of hands-on experience. The original contributors were Jeremy Schneider, Dan Norris and Parto Jalili. This curriculum has been used since 2008 by organizers of events, by instructors in classes and by individuals at home. Its goal is to help students learn about Oracle RAC cluster databases through guided examples.
RAC Attack differs in depth from other tutorials currently available.
Every keystroke and mouse click is carefully documented here.
The process is covered from the very beginning to the very end - from the very first installation of the Virtual Hypervisor on your laptop to various experiments on your running cluster database… with everything in between.
The labs in the main workbook have been tested thoroughly and repeatedly.
Of course, the 12c material is relatively new and “Part II: Exploring RAC” is still under development, but it is already excellent and can be used to complete the install without any major issues.
Limitations: Of course this virtualization setup is purely for education, and it necessarily omits a few features that a production RAC configuration would typically have, for example jumbo frames on the interconnect, bonded network interfaces and multipath I/O.
We came across a few issues in the lab, most of them unrelated to the material as such and more about the preparation and environment. Here is a set of tips for anybody wanting to complete the material on their own (or in a future lab).
We used the latest available version of VirtualBox, v4.3.2, downloaded from http://www.virtualbox.org. Jimmy was our “Mac guy” and confirmed that the lab works fine with this version on Macs.
As with all downloads, it is highly desirable to verify each file by comparing its checksum against the digest published at the download source. One promising lab attempt was brought to an end by an apparently corrupt Oracle Linux .iso.
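As a concrete illustration (the file names here are stand-ins, not the actual lab downloads), comparing a computed SHA-256 digest against the published one looks like this:

```shell
# Stand-in for the downloaded .iso; in the lab this would be the real file
printf 'demo contents' > download.iso

# Normally the vendor publishes this digest; here we generate it for the demo
sha256sum download.iso | awk '{print $1}' > published.sha256

# Compare the computed digest against the published one before using the file
computed=$(sha256sum download.iso | awk '{print $1}')
expected=$(cat published.sha256)
if [ "$computed" = "$expected" ]; then
    echo "download verified"
else
    echo "digest mismatch - re-download the file"
fi
```

A mismatch means the download is corrupt and should be fetched again before spending any lab time on it.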
When unzipping the four Oracle Grid Infrastructure and Database 12cR1 downloads, you want to end up with only two directories, one called ‘grid’ and one called ‘database’. Some unzipping alternatives leave you with four directories, in which case you should merge them back into the above two.
One annoying problem several attendees encountered was due to their “hardware virtualization extensions”. These appeared as a few different errors, but the most common was “kernel requires an x86_64 cpu, but only detected an i686 cpu. unable to boot - please use a kernel appropriate for your cpu” when Oracle Linux attempts to start towards the end of the install. This issue is discussed in detail in the VirtualBox FAQ, but the summary is that “…a 64bit GUEST NEEDS VT-X/AMD-V to be active and enabled….” If you get the above error, you need to enable these extensions by changing your BIOS settings, usually in the Chipset or Processor menus, but possibly in Security Settings or other non-standard menus. If anybody knows of a simple test to confirm that these extensions are enabled, please let us know. We know of the VMware Processor Check for 64-Bit Compatibility, but it unfortunately appears to require registration.
In a lab environment with people working on their laptops there is an ergonomic consideration. It is tricky on a relatively small single screen to constantly shuffle backwards and forwards between the RAC Attack Wikibook and the Virtual Machine windows. If you are following the material in an environment with multiple large monitors this is not a concern but in a lab, on a laptop, it is a hassle. The NoCOUG folks were kind enough to provide attendees with a nice printed manual which helps greatly. However some of the commands themselves are complicated enough that they need to be pasted from an electronic source rather than manually typed and ‘bind’ is very fussy about indentation! In short when the instructions say ‘cut and paste the following command’ we found it most reliable to cut and paste from the wikibook. If you do not have access to the web page ensure you have a suitable electronic source available.
If you only have 8G of RAM available on your laptop, we suggest allocating 1536MB to each VM. The RAC Attack material is currently a little inconsistent on this point, e.g. some screens show 3072MB as the “Memory Size” while others show “Base Memory: 1500MB”.
Before step 11 on http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c/Configure_Second_Linux_VM (i.e. before rebooting the collabn2 VM), check your /etc/sysconfig/network-scripts/ifcfg-eth2 file, and if it contains the line
PEERDNS=yes
then change it to
PEERDNS=no
This step is mentioned in the collabn1 setup but not when cloning collabn2. Without it, I found that my /etc/resolv.conf file was getting overwritten on collabn2, which messes up the DNS lookups.
In summary, the RAC Attack material is a powerful resource for anybody wanting to learn about RAC. Its creators and contributors have done a great job of documenting the process so thoroughly that anybody can complete it in a few hours.
Many thanks to Iggy Fernandez, Jimmy Brock and the rest of the NoCOUG Board for their invaluable assistance and support during the preparation and running of the lab.
One thing about working with PL/SQL is the fact that it runs inside the database (Oracle Forms notwithstanding), rather than in a normal OS process. This means that any interaction with the database server OS itself isn’t easily possible. You can create a library in C and call it from PL/SQL, but this isn’t usually a trivial exercise.
So, if you need your PL/SQL program to communicate with an outside process, there is one really simple way to go about it: an API called dbms_pipe. This API is available in all editions of Oracle Server, including Standard Edition One; it has been around for a very long time, since Oracle v7, and in my experience it is very stable.
The dbms_pipe API is not related to Unix (or POSIX) pipes; it is its own mechanism, which only communicates between SQL sessions. It’s a very quick method that you can use to inform an external process immediately when attention is needed. For example, one way to tell if something is needed in a database is to poll a table or query every few seconds; if the query is resource-intensive, this can really eat up a lot of CPU, I/O and memory. Instead of polling, the dbms_pipe API can be used to reduce this load considerably.
dbms_pipe vs. AQ
Oracle also has a product that can be licensed for Enterprise Edition called AQ, or Advanced Queuing. This allows you to create message queues in the database, where basically you have one or more publisher processes which add things to the queue, and one or more subscriber processes which take messages from the queue, though it also supports a point-to-point model. AQ is a very advanced and complicated package; messages can queue within or between systems, you can have rules and transformations for these messages in the database, and it can interface with other systems using HTTP, HTTPS or SMTP. However, AQ is an extra license on top of the Enterprise Edition, so it’s not cheap and it’s not possible to use with a Standard Edition database.
dbms_pipe is not that sophisticated at all. Using dbms_pipe, one process creates a message on a named database pipe, and other processes can retrieve it, very much like a FIFO (first-in first-out) queue. The process that creates the message is never blocked. When a process asks to read a message and there isn’t one available, it blocks and waits there for up to a timeout period.
To add a message to a dbms_pipe, the schema that owns the procedure using it will need execute access on the package:
SQL> grant execute on dbms_pipe to myuser;
The API requires that you name the pipe over which messages will flow. I believe the name can be anything, but the processes that put messages into the pipe, and take messages out, will need to use the same pipe name.
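If the pipe does not already exist, send_message will create a public pipe implicitly. If you prefer a private pipe (accessible only to sessions of the creating user), you can create it explicitly first. A minimal sketch, using the same BIGWIDGET pipe name as the examples below:

```sql
declare
  v_status integer;
begin
  -- private => true restricts the pipe to sessions of the creating user
  v_status := dbms_pipe.create_pipe(pipename    => 'BIGWIDGET',
                                    maxpipesize => 8192,
                                    private     => true);
end;
/
```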
To send a message, dbms_pipe.pack_message() is called, followed by the dbms_pipe.send_message() function, like this:
create or replace procedure send_pipe(p_message in varchar2) as
  v_status integer;
begin
  dbms_pipe.pack_message(length(p_message)); -- first item: the message length
  dbms_pipe.pack_message(p_message);         -- second item: the message itself
  v_status := dbms_pipe.send_message('BIGWIDGET');
  if (v_status != 0) then
    raise_application_error(-20001, 'send_message returned status ' || v_status);
  end if;
end;
/
Note that in the above code, the pipe name BIGWIDGET is given as the first argument to send_message. There are actually two items packed into the message: the first is the length of the message (numeric), and the second is the message itself. This makes it easy for the routine which retrieves the message(s) to know that it got the entire message. Obviously this could become more sophisticated still (maybe a CRC for the message as well?), but it isn’t strictly necessary.
When the above code is run, a message is added to the pipe named BIGWIDGET, and it will stay there until it is read.
Retrieving a message from dbms_pipe
The important thing to remember with regards to retrieving messages from the pipe is that by default, the routine will not return until a message is sent to the same named pipe.
Here’s some code to retrieve the message:
create or replace procedure receive_pipe(p_message out varchar2) as
  v_status  integer;
  v_length  number;
  v_message varchar2(4000);
begin
  v_status := dbms_pipe.receive_message('BIGWIDGET');
  if (v_status = 0) then
    dbms_pipe.unpack_message(v_length);  -- first item: the message length
    dbms_pipe.unpack_message(v_message); -- second item: the message itself
    p_message := v_message;
  end if;
end;
/
Note that if there is a message for that pipe, this will return immediately. If there are no messages, it will just sit there and wait until there is one.
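If blocking indefinitely is not acceptable, receive_message also takes a timeout argument in seconds and returns status 1 when the timeout expires. A small sketch:

```sql
declare
  v_status integer;
begin
  -- Wait at most 5 seconds; status 0 = message received, 1 = timed out
  v_status := dbms_pipe.receive_message('BIGWIDGET', timeout => 5);
  if (v_status = 1) then
    dbms_output.put_line('no message within 5 seconds');
  end if;
end;
/
```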
Example in real-life
Your company, Big and Small Widgets Inc., makes really big widgets, and also some small ones. The big ones are so large and expensive that it is a major accomplishment when your company sells one. The company has an IRC server where a great bulk of the interoffice business gets taken care of.
The Big Boss would like a message to appear in your IRC channel whenever there is a sale of a Big Widget. The problem is that it is a very expensive query to determine if a Big Widget has been sold – the query typically takes a minute or more, and it consumes a lot of database resources. It is important to the Big Boss that the message be sent immediately when the sale closes.
It’s easy enough to create an IRC ‘bot’ that will do exactly what you need in terms of connecting to the IRC server, polling messages in the database, and sending them, perhaps using the Perl POE::Component::IRC module.
The problem here is that the IRC Bot won’t be able to just execute that really big query over and over without affecting other things that the database is being used for.
This is where using dbms_pipe comes in really handy.
Step 1: In the IRC bot, add a call to the receive_pipe() procedure above. The bot will execute this and pause until a message is received. Once receive_pipe() returns, the bot performs the large query to figure out the details of the sale, and then broadcasts it on the desired IRC channel(s).
Step 2: Change the sales logic so that as soon as a big widget is sold, the send_pipe() procedure is called.
In this way, when a sale of a Big Widget happens, the IRC bot will know immediately, and will inform the IRC channel as soon as it completes the (resource-intensive) query. The query will not run unless a sale has just taken place, reducing the load on the database.
While the dbms_pipe API is relatively simplistic, it is definitely very useful when one SQL session needs to be informed by another SQL session that something has happened.
If you have an 11gR2 database (we have seen this on some 11gR2 patch levels, and it may be present in others as well) and you get this error when trying to run a Statspack snap:
ORA-00001: unique constraint (PERFSTAT.STATS$FILE_HISTOGRAM_PK) violated
ORA-06512: at “PERFSTAT.STATSPACK”, line 3813
ORA-06512: at “PERFSTAT.STATSPACK”, line 5079
ORA-06512: at “PERFSTAT.STATSPACK”, line 105
ORA-06512: at line 1
It is likely due to Bug 11927239, which stems from duplicate rows in the v$filestat and/or v$file_histogram views. The unofficial workaround is to simply create a view in the PERFSTAT schema that eliminates the duplicates, and your Statspack snaps should work okay until you get the bug patch applied or upgrade to a patchset containing the fix. You will first need to grant explicit access on the view to PERFSTAT:
SQL> connect / as sysdba
SQL> grant select on v_$file_histogram to perfstat;
SQL> connect perfstat
SQL> create view v$file_histogram as select distinct * from sys.v_$file_histogram;
SQL> alter package statspack compile;
If we can assist with this or any other Oracle Database issues you might be having including remote database monitoring and administration, please give us a call at 415-344-0500 ext 501.
One of the new Backup and Recovery features of Oracle 11g Release 2 (11gR2) is the ability to duplicate a database without connecting to the target database. In RMAN terminology, the “target” database is the one you wish to duplicate. In this case, the only requirement is that you provide a disk backup location where RMAN can find all the backups, data file copies, archived logs, and control file copies for database duplication. This database duplication enhancement is helpful when it is not possible to connect to the target database and the recovery catalog.
This came in handy for me recently on a customer project. They wanted me to test a disaster recovery scenario in which the production database server was completely gone and we needed to restore it from tape onto another server. This is a fairly typical DR situation, but in this case it was made more difficult because the directory structure on the test server didn’t match that of the production server. Specifically, on the production server, everything was located on the E: drive while on the Test server, the only drive that exists is C:. This means that all file locations need to be changed as part of the duplicate procedure. This can all be done in one single RMAN command.
Note: One thing that makes this process easier is to enable controlfile autobackups on the production database. This is a good practice in general and should be configured for all databases.
rman target / nocatalog
configure controlfile autobackup on;
Here are the steps that I followed:
Step 1) Copy the most recent backup files and archivelog files to the test server. In this case, we restored from tape to the location c:\prod backup files. The backup files and archivelogs come from the following locations:
Step 2) Create a file called INITPROD2.ORA in %ORACLE_HOME%\database containing one line: db_name=PROD2
Step 3) At a command prompt:
oradim –new –sid prod2
sqlplus / as sysdba
rman auxiliary / msglog=c:\restore.log
Step 4) At the RMAN prompt:
duplicate database to prod2 spfile
backup location “c:\prod backup files\”;
That’s all it takes! As you can see, I changed all file location parameters and converted datafile and logfile names with the _CONVERT parameters. You can set any parameters for the new database in this way. For example you may want to disable all database jobs in the duplicate database by setting job_queue_processes=0. Or, change the size of the SGA for a smaller server. RMAN will create the PROD2 database, perform as much recovery as possible given the available archive logs and open the database with the resetlogs option. This is a really nice feature of RMAN.
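The exact conversion clauses were trimmed from the command shown above, but a duplicate that remaps drive letters might look something like this (the paths and names here are illustrative, not the customer’s actual layout):

```sql
RMAN> duplicate database to prod2
        spfile
          parameter_value_convert 'E:\', 'C:\'
          set db_file_name_convert 'E:\oradata\prod', 'C:\oradata\prod2'
          set log_file_name_convert 'E:\oradata\prod', 'C:\oradata\prod2'
          set job_queue_processes '0'
        backup location 'c:\prod backup files\';
```

The SPFILE clause lets you rewrite any initialization parameter on the fly, which is how both the file-location remapping and the job_queue_processes change can ride along in one command.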
I arrived home from Denver, CO around 2am this morning after attending the Rocky Mountain Oracle Users Group Training Days 2013 conference (aka RMOUG TD2013). What a great event! It was two full days (with an optional 3rd half-day) of intense Oracle training – enough to make my brain hurt – given by some of the best-known experts in the field.
Interestingly, the conference was not strictly about Oracle. There were quite a few sessions on MySQL, SQLServer, Hadoop and various development topics. However, since I have limited mental capacity, I decided to stick with what I already know and attended mostly the Oracle-specific sessions. Because we have a few customers running Oracle Exadata, I attended several sessions devoted to this, hoping to improve my understanding in this area. I have to say that I definitely have a better understanding of some of the key aspects of this technology now.
The highlight of the conference for me was on Tuesday at 4pm. This is when I did my own presentation: “All About Oracle Auditing – Everything You Need to Know”. I have been working on this for quite a while and was very happy to have the opportunity to do my presentation in front of other Oracle professionals. Database auditing is an important part of any security strategy and I tried to present the audience with technical facts along with my own observations and experience on the subject. It was both well attended and (I think) well received by the audience. If you weren’t lucky enough to attend the conference, you can download my presentation and the accompanying white paper from our website. You can find them here: http://www.dbspecialists.com/presentations.html#Dean_paper
Have you found that daily full database backups are becoming impractical for your growing databases? Incremental backups are no longer the exception in many shops. But with the increased complexity comes a greater responsibility for thoroughly planning and testing your backup solution. A more complicated backup solution makes for a more complicated validation procedure. This article isn’t going to get into every aspect of backup testing, but we will show one example of how an inadequate test of incremental backups can have disastrous results in a database recovery scenario. For this example we used Oracle 11gR2 Enterprise Edition on CentOS 5.5. I tried the same test on Oracle EE 10.2.0.4 and another 11gR2 patch level with similar results.
While auditing a backup solution for a new client recently, we noted that they were using the RMAN syntax for improving backup efficiency:
RMAN> backup database incremental level 0 not backed up since time “blah blah blah”;
The NOT BACKED UP SINCE TIME clause can be handy for resuming backups that were interrupted for some reason. Suppose your strategy calls for weekly level-0 incrementals and daily level-1 incrementals. The level-0 is akin to a full backup, and the level-1 picks up changes made since the level-0 (or since the last level-1, unless you specify that it should be cumulative). If the backup goes south after completing a level-0 of only half of your data files, and you’ve got a pretty large database, then on the next backup attempt you don’t really want to back those files up again; why not just do the level-0 on the ones that remain? But there’s potential for misuse of the NOT BACKED UP SINCE TIME clause here. Let’s illustrate through an example.
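For reference, the weekly/daily strategy described above boils down to two scheduled commands (channel and format clauses omitted for brevity):

```sql
# Weekly, e.g. Sunday: the level-0 baseline
RMAN> backup incremental level 0 database;

# Daily, Monday through Saturday: changes since the last incremental
RMAN> backup incremental level 1 database;
```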
RMAN> report need backup;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/test2/system01.dbf
2 0 /u01/oradata/test2/sysaux01.dbf
3 0 /u01/oradata/test2/undotbs01.dbf
4 0 /u01/oradata/test2/users01.dbf
Okay, all of the data files need to be backed up so let’s do a level-0 incremental backup:
RMAN> backup incremental level 0 database;
Starting backup at 02-07-2013 11:52:02
Finished backup at 02-07-2013 11:53:40
Our strategy calls for weekly level-0 incrementals and daily level-1 incrementals so let’s simulate that by taking a level-1 backup of the database now.
RMAN> backup incremental level 1 database;
Starting backup at 02-07-2013 12:08:16
Finished backup at 02-07-2013 12:08:54
So we have a level-0 backup to serve as our baseline and a level-1 backup with changes made since the level-0 was performed. Now let’s try the NOT BACKED UP SINCE TIME clause, but first we’ll introduce a “what-if” condition: what if the level-0 backup has gotten lost or deleted? We’ll simulate that in the file system by manually deleting those backup set pieces and then crosscheck in RMAN to synchronize the control file and make RMAN aware that they’re missing:
RMAN> crosscheck backup;
RMAN> delete expired backup;
Okay, RMAN now knows that we only have a level-1 incremental backup – not good obviously because you can’t restore your database from a level-1 backup alone. We’ll confirm that RMAN is informed of the situation:
RMAN> report need backup;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/test2/system01.dbf
2 0 /u01/oradata/test2/sysaux01.dbf
3 0 /u01/oradata/test2/undotbs01.dbf
4 0 /u01/oradata/test2/users01.dbf
Now let’s test how our what-if condition causes confusion when combined with an improperly implemented NOT BACKED UP SINCE TIME usage. Let’s try doing a level-0 backup but use a time after the last level-0 backup.
RMAN> backup incremental level 0 database not backed up since time "to_date('02-07-2013 12:00:00','mm-dd-yyyy hh24:mi:ss')";
Starting backup at 02-07-2013 12:15:53
using channel ORA_DISK_1
skipping datafile 1; already backed up on 02-07-2013 12:08:51
skipping datafile 2; already backed up on 02-07-2013 12:08:51
skipping datafile 3; already backed up on 02-07-2013 12:08:51
skipping datafile 4; already backed up on 02-07-2013 12:08:51
Finished backup at 02-07-2013 12:15:54
RMAN is not going to back up the files even though we stated that we want a level-0 backup. The trouble is that RMAN assumes the time we specify is directly related to the incremental level we specified. Even though it knows the files need to be backed up in order to obey the backup retention policy, it blindly skips the files because of the level-1 backup we performed.
Now let’s try a database restore:
RMAN> restore database;
Starting restore at 02-07-2013 12:18:49
using channel ORA_DISK_1
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-03002: failure of restore command at 02/07/2013 12:18:49
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 5 found to restore
RMAN-06023: no backup or copy of datafile 4 found to restore
RMAN-06023: no backup or copy of datafile 3 found to restore
RMAN-06023: no backup or copy of datafile 2 found to restore
Our backups are useless!
What is the lesson here? The lesson is that even though RMAN is a great tool and frees the DBA from a lot of responsibility with regard to backup retention, it can also present the occasional pitfall if you’re not careful. If you use the NOT BACKED UP SINCE TIME clause in conjunction with an incremental backup strategy, make sure the time you use corresponds to the specific level of incremental backup that you performed. As this example shows, if you make the mistake of using the wrong time, you might wind up with egg on your face.
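To be concrete, the safe pattern when resuming an interrupted level-0 is to supply the start time of that same level-0 run, before any later level-1 has been taken. Using the timestamps from the example above:

```sql
RMAN> backup incremental level 0 database
        not backed up since time "to_date('02-07-2013 11:52:02','mm-dd-yyyy hh24:mi:ss')";
```

Files completed by the interrupted 11:52 run are skipped, and everything else gets its level-0. If a level-1 has run since that time, as in the what-if above, the clause is no longer safe, because RMAN treats the level-1 as “backed up” regardless of the level you requested.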
Finally, we should note here that testing your backups and disaster recovery strategy is not to be taken lightly. If there is one thing a DBA can get fired over, it is losing data. If you are not totally comfortable with your backup solution and need help with testing or if you would just feel better about getting another pair of eyes on your strategy, please give us a call at 415-344-0500 (ext 501) or email email@example.com. We would be happy to help.
Recently, I was doing some work with OpenOffice (OO), and I noticed a part of OO that I wasn’t aware of: the Base database. Base is similar to Microsoft Access, though it of course lacks some features and polish. It comes with the ability to interact with several different databases, the Oracle RDBMS among them. It has a Tables section, where you can design and work with tables; a Queries section with several ways to interactively build queries; a Forms section where you can create a front-end form for a database table or set of tables; and a Reports section for creating reports.
I was curious – just how good is Base, what is it good at, and is it worth bothering with when using it with Oracle?
Installation of Oracle JDBC library
Before Base can connect to your Oracle database, OO needs to be configured to use the Oracle JDBC driver. To do this, first determine which version of Oracle you are using. Then go to the Oracle website to retrieve the proper JDBC thin client; at this time it can be found at http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html, though this may change in the future. In my case, the file was called ojdbc6.jar; for me, this was already installed as part of a thick client. In OO, go to Tools > Options, then under the OpenOffice.org tree find the ‘Java’ node. Click the Class Path... button on the right, then click ‘Add Archive’. Select your ‘ojdbc6.jar’ (or other Oracle JDBC library), and then click ‘Open’ to add it.
You will need to shut down & restart OO in order for this to work. If it is not installed correctly, you will get error messages when trying to add a connection to your target database.
Setting up a connection to your Oracle database
You will require network access, with proper credentials, in order to connect to the Oracle database. I would suggest that while testing, you use a test account with no write access to your ‘main’ schemas, for safety.
As this is using Oracle JDBC, you will need to know the:
- service name,
- listener port, and
- IPv4 network address
for your target database.
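These details map onto a standard Oracle thin-driver connection URL; a sketch with placeholder values (the host, port and service name here are examples only):

```text
jdbc:oracle:thin:@//192.0.2.10:1521/orcl.example.com
```

Base’s wizard assembles this URL for you from the Server, Port and Database Name fields.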
I’ve found that working with OO connections to Oracle goes much faster if the account being connected to does not have any DBA or ‘select any table’ privileges. When you first connect, and periodically after that, OO queries the data dictionary to see what tables can be accessed. If this includes most or all tables in the database, this step will be quite slow, loading information from hundreds of tables.
When you first run Base, a Database Wizard dialog box is invoked so that the connection information can be configured. Click on Connect to an existing database, change the connection type to Oracle JDBC, then click Next. Enter the Database Name, the Server (where you can enter the IP address if you like), and the Port. Click on Test class; this tests to ensure that the Oracle JDBC driver can be loaded. If this returns an error, something is wrong with the installation of the JDBC library above. Then click on Next. At this stage you’ll need to enter your username/password. You can click on Test connection; if you get “The connection was established successfully” then it has been configured correctly. For future connections, just use the Open an existing database file option on the dialog box that pops up at startup to connect to your previously configured database.
Once you have connected, if you have used Microsoft Access before, things should be pretty familiar. Just like MS Access, you can use a wizard to create new tables. It has quite a few tables pre-defined; for example, Customers and TimeBilled could be quite handy, especially if what you are storing is common. However, I ran into an error when trying to create these pre-defined tables (“Missing closing parentheses”), so the wizard may not work for you.
Using the design view, though, worked fine for a test table I created.
There are three ways to define queries – using the wizard, using Create query in Design View, and Create query in SQL View. These options allow you to create queries pretty easily – the Create query in Design View allows you to visually add tables, then define join conditions, and finally to figure out what fields are needed. Obviously, if you are using more advanced SQL (including nonstandard Oracle syntax) then you’ll need to use the Create query in SQL view. Note: I did get some errors when working with queries (”Data cannot be loaded”), so you may encounter some issues here.
Forms allows you to create data entry/view forms. From my testing, it is not nearly as polished or capable as MS Access, but it does work. I was able to create a data-entry form for a simple table using the wizard alone in just a few clicks. I was surprised to see that it does support major/minor forms, i.e. forms/sub-forms.
Reports allows you to define reports visually. The wizards have a few predefined looks that I personally didn’t like that much, but luckily you can define reports using the design view without the wizard.
Other ways of interacting with OO
Similar to Microsoft Office products, OO has a well-documented API. You can control OO from a variety of languages, including C, Perl, and Python among others. In fact, you can programmatically create OO documents. So, at least in theory, you could programmatically run Base reports from a batch job.
OO Base is an interesting project. It definitely has some rough edges (bugs), at least when interacting with Oracle databases using the JDBC thin driver. My testing was done with version 3.3; I plan to check back in a bit to see if these bugs have been fixed. It may have a use as a query builder and for querying the database, though these features are in Oracle Developer and Tora, as well as most other modern IDEs.
Every DBA should include Datapump, Oracle’s cross-platform data migration utility, in their toolbox, especially since the legacy export utility is slated for obsolescence. Datapump expdp has a variety of uses, from database upgrades to QA refreshes and logical backups. A great place to start is with this script, which has everything you need to do a full, consistent database export, including data or just metadata. Of course, we’ve taken care of the part of scripting that everyone loves: error handling. If we can advise on the best ways to use Datapump, or take care of implementation for you, please contact us.
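For a sense of what such a script wraps, here is a hedged sketch of a full, consistent export invocation; the connect string, directory object and file names are placeholders, not the script’s actual values:

```text
# Full, consistent export (reads as of a single point in time)
expdp system@proddb full=y directory=DATA_PUMP_DIR \
      flashback_time=systimestamp \
      dumpfile=full_%U.dmp logfile=full_export.log

# Metadata-only variant
expdp system@proddb full=y directory=DATA_PUMP_DIR \
      content=metadata_only dumpfile=meta.dmp logfile=meta_export.log
```

flashback_time=systimestamp is what makes the export consistent, and content=metadata_only switches it to structure without data.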
About 10 years ago, Oracle Support released a completely new version of their support website. The previous one was based on straight HTML, served by what appear to be PL/SQL server pages (aka Oracle HTMLDB, aka the ‘htp’ package). It was quite functional, but it did look a bit dated, and I’m sure there were features that product management wished for but simply couldn’t build because of the technology.
At the time, I was pleased that Oracle decided to release a new version, but this sense of well-being was turned on its head when I realized that it exclusively used Macromedia Flash technology. The first release required Flash and would not work without it. Frankly, I also didn’t like their UI design; to me, it was cluttered, confusing at 3:00 in the morning, and the workflow, say for filing an SR, could have been better done.
Oracle database administrators tend to be very concerned about security in general, for obvious reasons. We also tend to have workstations with at least partial access to the databases we administer, often full access, so we need to be extra careful with regard to workstation security. Macromedia Flash was riddled with extremely serious security bugs from the time it was released, throughout its history. Requiring the workstations used by DBAs to have Flash installed really made no sense to me.
I actually doubt if there was ever a day in history, in which there were no known high-level vulnerabilities to Flash. It was for this reason that I really didn’t approve of Oracle using Flash as part of the technology stack – especially for their support website.
At the time, I did complain to them, and I’m sure others did as well. Soon after, they came out with an HTML-only website that, though not as functional as the Flash version, did well enough. I still didn’t really like the UI, though.
I’ve learned that Oracle has released a new version of their support website recently for mobile clients, like cellphones/tablets. If you go to https://support.oracle.mobi, you’ll be directed to that website.
What’s nice is that it actually works quite well even for full-sized browsers! It turns out that I’m using the mobile version far more often than the normal version; the layout is extremely simple and easy to use, it’s very fast, and when you find an interesting article it displays it more clearly. It also doesn’t clutter up your display with a lot of options that are rarely if ever needed, and you can shrink the window quite small on a laptop screen to occupy less space while keeping it legible.
The new site isn’t using HTML5 responsive web design; it’s a completely different website. However, it does work quite well.
I’d suggest that if you do have a support contract with Oracle, to try out their new mobile site!
The address is: https://support.oracle.mobi
Managing database user accounts in a large enterprise can be very challenging. Creating individual accounts for every person in every database can be very time-consuming for DBAs and managing different passwords in every database is time-consuming for both DBAs and users. When people leave the enterprise, it is critical to remove their access immediately but this can be difficult if you have many databases. However, the cost of not properly managing these accounts is that your databases are more vulnerable to security breaches. Oracle offers a solution to this in the form of Enterprise User Security (EUS) which is part of the Oracle Identity Management product. With EUS, user accounts are created and managed in an LDAP repository called Oracle Internet Directory (OID). OID is responsible for authenticating the users as well as managing the access rights of each user.
I recently wrote a white paper called “Implementing Oracle 11g Enterprise User Security” that explains how to install and configure EUS. In this paper, I will give step-by-step instructions on how to install the software components required to run Oracle Identity Management and also how to implement Enterprise User Security to handle a typical real-world scenario in which different people need different levels of access to a number of databases.
You can find it here: http://www.dbspecialists.com/presentations.html#implementing_oracle_11g_enterprise_user_security
I hope you enjoy it. Please feel free to give us feedback.