One of the many useful features of rman is its ability to create compressed backup sets. Prior to the widespread adoption of rman, most backups would be compressed using OS utilities (gzip, compress, zip, winzip). If you compress rman backup pieces in this manner, then you will need to uncompress them manually before they can be used for recovery. This leaves room for human error and increases recovery time.
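To make the manual step concrete, here is a sketch of the old OS-utility workflow (the backup piece name here is made up for illustration):

```shell
# Hypothetical backup piece; any RMAN backup piece compressed this way
# behaves the same. gzip replaces the file with a .gz that RMAN cannot read:
echo "fake backup piece contents" > db_0abc123.dbf
gzip db_0abc123.dbf              # produces db_0abc123.dbf.gz

# Before the piece can be used for recovery, you must decompress it yourself:
gunzip db_0abc123.dbf.gz         # restores db_0abc123.dbf
```

Forgetting the gunzip step mid-recovery is exactly the kind of human error mentioned above.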
As of Oracle 11g Release 2, there are four compression algorithms available: BASIC, LOW, MEDIUM, and HIGH. The 11g Backup & Recovery Guide describes these options as follows:
- BASIC – the default compression algorithm
- HIGH – best suited for backups over slower networks, where network speed is the limiting factor
- MEDIUM – recommended for most environments; a good combination of compression ratio and speed
- LOW – least impact on backup throughput; suited for environments where CPU resources are the limiting factor
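Before experimenting, it is worth checking which algorithm is currently configured. A small RMAN fragment for that (run from the RMAN prompt):

```
# Check the currently configured compression algorithm:
SHOW COMPRESSION ALGORITHM;

# Reset it back to the default (BASIC) when you are done testing:
CONFIGURE COMPRESSION ALGORITHM CLEAR;
```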
Unfortunately, unless you have purchased the Advanced Compression Option, your only choice is BASIC. Regardless, I did some testing to see the difference in compression ratio as well as the time it takes to back up. The test script that I used is pretty simple: it specifies the compression algorithm and then does a full backup of the database and archivelogs. As a final test, I did a non-compressed rman backup and then used gzip to compress it. While I wouldn't recommend doing backups this way, I think it is interesting for comparison purposes.
CONFIGURE COMPRESSION ALGORITHM 'BASIC';
backup as compressed backupset database format '/tmp/basic/db%U.dbf';
backup as compressed backupset archivelog all format '/tmp/basic/arch%U.dbf';
CONFIGURE COMPRESSION ALGORITHM 'LOW';
backup as compressed backupset database format '/tmp/low/db%U.dbf';
backup as compressed backupset archivelog all format '/tmp/low/arch%U.dbf';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
backup as compressed backupset database format '/tmp/medium/db%U.dbf';
backup as compressed backupset archivelog all format '/tmp/medium/arch%U.dbf';
CONFIGURE COMPRESSION ALGORITHM 'HIGH';
backup as compressed backupset database format '/tmp/high/db%U.dbf';
backup as compressed backupset archivelog all format '/tmp/high/arch%U.dbf';
backup database format '/tmp/none/db%U.dbf';
backup archivelog all format '/tmp/none/arch%U.dbf';
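The final, non-compressed run is then compressed at the OS level. A sketch of that step, using a demo directory and a dummy file in place of the real /tmp/none backup pieces (names here are illustrative):

```shell
# Stand-in for the uncompressed backup destination used in the script above:
mkdir -p /tmp/none_demo
dd if=/dev/zero of=/tmp/none_demo/db01.dbf bs=1M count=4 2>/dev/null

# Compress every backup piece in place, timing the extra step:
time gzip /tmp/none_demo/*.dbf

# Total size of the compressed pieces:
du -sh /tmp/none_demo
```

Timing the gzip pass separately shows how much it adds to the overall backup window, which is where this approach loses badly.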
This test was done using Oracle 11.2 Enterprise Edition on 64-bit Linux. The total amount of data that needs to be backed up (datafiles plus archivelogs) is approximately 40GB, but rman will skip the empty blocks, making the backup smaller than that.
As you can see from the results below, there is a big difference in both the compression ratio and the time taken. HIGH compression took twice as long as LOW, but the backup is 56% smaller. The BASIC option performed quite well, particularly considering that it is the only choice that doesn't require additional licensing. Clearly the slowest option is to do a non-compressed backup and then compress it yourself. As with every test, your results may differ from mine; the type and amount of data as well as the hardware in your environment will all make a difference. From my test, I conclude that the rman BASIC compression algorithm does quite a good job, and there is probably no need to use any of the other options.
| Algorithm | Time to backup (minutes) | Size of database backup (GB) |
|---|---|---|
| NONE + gzip | 61 | 5.2 |
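The headline numbers can be sanity-checked with a quick calculation; a sketch using the roughly 40GB of source data and the 5.2GB NONE + gzip result above (keep in mind rman skips empty blocks, so the true ratio against backed-up data is somewhat lower):

```python
# Rough savings for the NONE + gzip run, from the figures in this test:
source_gb = 40.0   # approximate datafiles + archivelogs
backup_gb = 5.2    # size after gzip

savings_pct = (1 - backup_gb / source_gb) * 100
print(f"gzip reduced the backup to {backup_gb} GB, "
      f"an {savings_pct:.0f}% reduction versus the raw data")
```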