
AI Archiver


Hi,

On OE 11.3.3 / RHEL 6.1, I have the AI archiver set to run every 10 minutes or so (hh:10, hh:20, ...), and another job that moves the archive files to the backup server (for roll-forward recovery) runs every 10 minutes in between the archive runs (hh:15, hh:25, ...).

Because the second job copies and then deletes the files in the archive directory, I reckon there's a chance that an archive file is still being written while the second job is already processing it.

Any thoughts? 

TIA!
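For illustration, a minimal crontab sketch of the copy job described above, offset five minutes from the archive runs; the script name and log path are assumptions, not from this post (the archiver's own interval is configured on the database side, not via cron):

    # Hypothetical cron entry for the copy/cleanup job (hh:05, hh:15, hh:25, ...)
    5,15,25,35,45,55 * * * * /opt/scripts/ship_ai_archives.sh >> /var/log/ship_ai_archives.log 2>&1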

Verified Answer
  • I suppose if the file is large enough and/or the network between the AI location and the backup server is not up to snuff, it might happen. You can use the 'fuser' command to check whether the file is in use and act accordingly (see the sketch below). And/or compress the file(s).

  • To add to Libor's comment: you'll need to be root (sudo fuser).  lsof will also do the job.

    For example, here's an OE 12.0 DB I started with my user "paul":

    $ fuser toto.db
    $ sudo fuser toto.db
    /home/paul/tmp/db/toto.db: 11912
    $ lsof toto.db
    $ sudo lsof toto.db
    COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
    _mprosrv 11912 root    8u   REG  259,2    32768 664125 toto.db
    _mprosrv 11912 root   10u   REG  259,2    32768 664125 toto.db
    _mprosrv 11912 root   11u   REG  259,2    32768 664125 toto.db

    Paul Koufalis
    White Star Software

    pk@wss.com
    @oeDBA (https://twitter.com/oeDBA)

    ProTop: The #1 Free OpenEdge DB Monitoring Tool
    http://protop.wss.com
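Building on the two replies above, here is a minimal sketch of how the fuser check could be folded into the copy job; the archive directory and the scp destination are assumptions, not details from this thread:

    #!/bin/bash
    # Sketch of a guarded copy job: skip any AI archive file that a process
    # (e.g. the archiver) still has open, copy the rest, and delete only
    # after a successful copy. Directory and destination are hypothetical.
    AIARCDIR=/db/aiarchive
    DEST=backupserver:/db/aiarchive

    for f in "$AIARCDIR"/*; do
        [ -f "$f" ] || continue
        # fuser -s exits 0 if some process has the file open; run the script
        # as root (or via sudo) so fuser can see other users' processes,
        # as shown in the fuser/lsof example above.
        if fuser -s "$f" 2>/dev/null; then
            echo "Skipping $f: still in use" >&2
            continue
        fi
        if scp -p "$f" "$DEST"/; then
            rm -f "$f"
        fi
    done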
All Replies
  • Thank you all for the input.

  • You could compute an MD5 hash of the file before copying and then another after a bit, say 30 seconds. If they differ, the file is still being written (see the sketch below).

    Also, you can (and should) use the MD5 hash to verify that the copy on the other end of the transfer (at the backup server) is good.
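A minimal sketch of that check, assuming a hypothetical archive file name, a 30-second wait, and a backup host reachable over ssh:

    # Hash, wait, hash again: if the two sums differ the file is still growing.
    f=/db/aiarchive/somefile.a1     # hypothetical AI archive file
    sum1=$(md5sum "$f" | awk '{print $1}')
    sleep 30
    sum2=$(md5sum "$f" | awk '{print $1}')
    if [ "$sum1" != "$sum2" ]; then
        echo "$f is still being written, try again later" >&2
        exit 1
    fi
    # Copy, then verify the hash on the backup server before trusting the copy.
    scp -p "$f" backupserver:/db/aiarchive/
    remote=$(ssh backupserver "md5sum /db/aiarchive/$(basename "$f")" | awk '{print $1}')
    [ "$sum1" = "$remote" ] || echo "WARNING: checksum mismatch for $f" >&2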