I have two questions about backup options.
1.) Is anybody using probkup with the -com option? I would like to know whether it is stable, as I was previously advised not to use it. I hope it is now safe to use. Comments please.
2.) The redundancy-check option on backup, -red n. Should we still be using this in this day and age, given that the backup layer is much more stable and robust than it was a few years ago? Comments please.
-com works fine but does not provide much reduction in backup size.
-red 10 would add 10 percent to the size of your backup. It was originally implemented when UNIX tape drivers, tapes, and tape hardware were often flaky and unreliable. These days, that is not a problem. However, I have run into bad backups now and then, and the redundancy might have made them readable, but it was not being used.
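For reference, a sketch of the two option forms discussed above. The database and backup paths are hypothetical placeholders; the commands are built and echoed here rather than executed, since they assume an OpenEdge installation with probkup on the PATH.

```shell
# Hypothetical paths -- adjust DB and BKUP for your environment.
DB=/db/prod/sports
BKUP=/backup/sports.pbk

# Compressed online backup (-com, discussed in question 1):
CMD_COM="probkup online $DB $BKUP -com"

# Backup with one redundancy block per 10 data blocks
# (-red 10 adds roughly 10 percent to the backup size, per gus):
CMD_RED="probkup online $DB $BKUP -red 10"

echo "$CMD_COM"
echo "$CMD_RED"
# On a real system you would run the commands directly instead of echoing them.
```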
Hi Gus. Thank you for that info. I think I will stick with the standard approach I am currently using. Maybe in the future Progress could make backups with built-in de-duplication and compression.
Backup compression would be nice. I compress Oracle backups at about 5:1, so my 1.5 TB Oracle database backup is about 300 GB on disk.
It does add time, though not significant (10-20%).
Compression for Progress backups was discussed and voted for at several PUG meetings. I do not think it was a top vote, though.
I always use -com. The amount of savings varies from "not much" to "quite a bit".
It is not "zip" style compression though. As I understand it, it just skips empty space. So if your data extents are very densely packed, it probably won't do much for you except save a few IO ops.
After backing up with -com, I then often gzip the backup, and that usually gets it to roughly 1/5th the size of the original db.
I'd be thrilled if real zip-style compression were built into probkup/prorest. CPU cycles and RAM are plentiful; IO ops are precious. Zipping the data would burn CPU cycles and RAM but save quite a few IO ops, and I'd like to be able to make that trade-off easily. I know that we could, in theory, back up to a pipe and run it through gzip, but that's really kind of clunky for my taste. A built-in option would be much easier to work with.
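For anyone curious, a sketch of the "back up to a pipe" workaround mentioned above, using a named pipe so gzip compresses the stream as it is written. The paths are hypothetical, and an echo stands in for probkup here so the plumbing can be shown end to end; on a real system the writer line would instead be something like `probkup online /db/prod/sports "$PIPE"`.

```shell
# Hypothetical output path; the pipe is a temporary FIFO.
OUT=/tmp/sports.pbk.gz
PIPE=/tmp/bkpipe.$$

mkfifo "$PIPE"
gzip -c < "$PIPE" > "$OUT" &      # reader: compress the stream as it arrives
echo "backup payload" > "$PIPE"   # stand-in writer (probkup on a real system)
wait                              # let gzip finish writing $OUT
rm -f "$PIPE"
```

This is exactly the klunkiness complained about above: three processes, a FIFO to clean up, and no single exit status to check, which is why a built-in compression option would be nicer.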
Tip: you should also use -Bp 10 with online probkup -- that prevents the backup from polluting the buffer pool as it reads the db.
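Combining the tip above with -com, an online backup invocation might look like the following (hypothetical paths again; the command is echoed rather than run, since it assumes an OpenEdge installation):

```shell
# -Bp 10 gives the backup a small set of private buffers so its full
# scan of the db does not evict hot blocks from the shared buffer pool.
CMD="probkup online /db/prod/sports /backup/sports.pbk -com -Bp 10"
echo "$CMD"
```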
Slightly off topic - has anyone tried a raw backup, i.e. bypassing the O/S cache? Not even sure if it's possible, but if it is, I would assume it might be faster?