Why binary dump tried to write to BI file? - Forum - OpenEdge RDBMS - Progress Community

Why binary dump tried to write to BI file?


This question is answered

Binary dump was started on purpose without the userid bit ("s") set in _proutil's permissions, and ulimit was lowered to 1 GB. It successfully dumped a few thousand records (2 MB), but then the binary dump died with error 9449:

[2018/12/17@22:17:49.726+0600] P-17292      T-1     I BINDUM908: (452)   Login by XXXXX on batch. 
[2018/12/17@22:17:49.730+0600] P-17292      T-1     I BINDUM908: (7129)  Usr 908 set name to Binary dump. 
[2018/12/17@22:17:49.731+0600] P-17292      T-1     I BINDUM908: (7129)  Usr 908 set name to XXXXX. 
[2018/12/17@22:17:49.732+0600] P-17292      T-1     I BINDUM908: (17813) Using index indexname (19) for dump of table tablename. 
[2018/12/17@22:17:55.148+0600] P-17292      T-1     I BINDUM908: (10032) 20000  records dumped. 
[2018/12/17@22:17:55.992+0600] P-17292      T-1     I BINDUM908: (9449)  bkioWrite:Maximum file size exceeded during write, fd 10, len 16384, offset 169175, file /path/to/db.b1. 
[2018/12/17@22:17:55.992+0600] P-17292      T-1     F BINDUM908: (6072)  SYSTEM ERROR: error writing, file = /path/to/db.b1, ret = -1 
…
[2018/12/17@22:17:55.992+0600] P-17292      T-1     F BINDUM908: (5027)  User 908 died with 1 buffers locked. 
[2018/12/17@22:17:55.992+0600] P-17292      T-1     I BINDUM908: (439)   ** Save file named core for analysis by Progress Software Corporation.

The size of the BI file seems to be about 2.58 GB (len 16384, offset 169175: 169175 × 16384 = 2,771,763,200 bytes); in other words, the size is higher than the process's ulimit.


Protrace file:

(15) 0x4000000000597c50  bkWrite + 0xa90 at /vobs_rkt/src/dbmgr/bk/bksubs.c:778 [/usr/dlc/bin/_dbutil]
(16) 0x400000000070dac0  rlwrtcur + 0x230 at /vobs_rkt/src/dbmgr/rl/rlrw.c:855 [/usr/dlc/bin/_dbutil]
(17) 0x40000000005968b0  rlbiflsh + 0x490 at /vobs_rkt/src/dbmgr/rl/rlrw.c:1017 [/usr/dlc/bin/_dbutil]
(18) 0x4000000000595e60  bmFlush + 0xd0 at /vobs_rkt/src/dbmgr/bm/bmbuf.c:5913 [/usr/dlc/bin/_dbutil]
(19) 0x4000000000595d50  bmwrold + 0x70 at /vobs_rkt/src/dbmgr/bm/bmbuf.c:3394 [/usr/dlc/bin/_dbutil]
(20) 0x4000000000592140  bmsteal + 0x320 at /vobs_rkt/src/dbmgr/bm/bmbuf.c:3886 [/usr/dlc/bin/_dbutil]
(21) 0x400000000058e540  bmLocateBuffer2 + 0x5d0 at /vobs_rkt/src/dbmgr/bm/bmbuf.c:4657 [/usr/dlc/bin/_dbutil]
(22) 0x40000000005e6310  rmLocate + 0x1f0 at /vobs_rkt/src/dbmgr/rm/rm.c:2118 [/usr/dlc/bin/_dbutil]
(23) 0x40000000005e5620  rmRecordFetch + 0x140 at /vobs_rkt/src/dbmgr/rm/rm.c:1804 [/usr/dlc/bin/_dbutil]
(24) 0x40000000006bf440  dbRecordGet + 0x300 at /vobs_rkt/src/dbmgr/db/dbrecord.c:619 [/usr/dlc/bin/_dbutil]

Why binary dump tried to write to BI file?

Verified Answer
  • when you run binary dump online, and it needs to read a new block into the buffer cache because it is not there, and it has to pick an old but modified lru buffer, it may choose to evict a buffer that was modified long ago but whose bi notes have not yet been written to the bi log. if so, it must write the modified block to the database, but it must first write the bi notes that were used to modify said block. so a bi write or several are required. thus what happened.

    -gus

All Replies
  • when you run binary dump online, and it needs to read a new block into the buffer cache because it is not there, and it has to pick an old but modified lru buffer, it may choose to evict a buffer that was modified long ago but whose bi notes have not yet been written to the bi log. if so, it must write the modified block to the database, but it must first write the bi notes that were used to modify said block. so a bi write or several are required. thus what happened.

    -gus
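gus's explanation is the classic write-ahead logging rule: a dirty block may be flushed to the data file only after the bi notes describing its changes are on disk. A toy sketch of that ordering (plain shell; the file names and the flush_block function are hypothetical and have no relation to the real engine's code):

```shell
#!/bin/sh
# Toy write-ahead logging: a dirty "block" may only be written to the
# data file after the note describing its change is in the log file.
BI_LOG=bi.log
DB_FILE=db.dat

flush_block() {
    note="$1"; block="$2"
    # 1. write the BI note first -- this is the write that can exceed
    #    ulimit even though the utility only meant to *read* data
    printf '%s\n' "$note" >> "$BI_LOG"
    # 2. only now is it safe to replace the old block on disk
    printf '%s\n' "$block" >> "$DB_FILE"
}

flush_block "note: update block 7" "block 7 (new contents)"
```

If step 1 fails (for example because ulimit is exceeded), the block is never written and the process dies with the buffer still locked, which matches the (5027) message above.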

  • Unlike index check or dbanalys, binary dump is not a read-only operation. It may still update the database. For example, when running in single-user mode it needs to perform crash recovery, and if auditing is enabled it needs to record the auditing event. These operations can all cause writes to the BI file.

  • Thanks, Gus! Indeed, the customer used a target database to test the dump scenario, and I don't know what was going on on the source. The scenario assumes that the database is not updated during the dumps.

  • even if the database is not updated during dumps, there may be data and bi notes in memory that were updated earlier and that have not yet been written to disk.

  • >  bi notes in memory that was updated earlier and that was not yet written to disk.

    BI notes can stay in memory no longer than the -Mf timeout, can't they?

  • only if a commit note is in one of the buffers. when the delayed commit timer expires, buffers up to and including the commit will be written, but not any later than that. if there is an open transaction that has not committed, its notes can sit in memory until a long time has transpired. perhaps even until database shutdown, at which point it will be rolled back.

  • I did some research about binary dump and its relations with ulimit.

    Binary dump, like the other Progress executables, raises ulimit on startup. It does not matter whether the executables have the userid bit set or whether they are owned by root.

    Note that a non-root user can't increase ulimit in the shell:
    ulimit: file size: cannot modify limit: Operation not permitted
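A quick way to see this in any shell (a sketch; whether the raise is refused depends on privileges and on whether the hard limit was lowered, and as root the raise may succeed):

```shell
#!/bin/sh
# Lowering the file-size limit is always allowed; raising it back above
# the hard limit needs privilege (as root the raise may succeed).
(
  ulimit -f 10                      # lower soft and hard limit to 10 blocks
  ulimit -f                         # prints: 10
  ulimit -f unlimited 2>&1 || true  # non-root: "Operation not permitted"
)
```

The parentheses matter: the limit is changed only in the subshell, so the interactive shell keeps its original value.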

    Description of the old message # 4162 says:
    ** UNIX maximum file size exceeded. (4162)

    If PROGRESS runs as root, the standard UNIX file size limit of 1 MB is increased to 1 GB. You have exceeded the limit. Check to make sure that _progres and _mprosrv are owned by root and that each has the userid bit (s) set in its permissions.

    But in fact the Progress executables seem to raise ulimit to the "unlimited" value.

    After logging in to a database, Progress sessions drop their suid privileges back to the user's own, but they do not lower ulimit.

    Binary dump is an exception: it lowers ulimit back to its initial value. For example, if ulimit is smaller than the size of the database log, binary dump is still able to write the login messages to the log:

    BINDUMP 7: (452)   Login by george on /dev/pts/18. 
    BINDUMP 7: (7129)  Usr 7 set name to Binary dump. 
    BINDUMP 7: (7129)  Usr 7 set name to george. 

    But it will be unable to write the rest of the messages:

    BINDUMP 7: (17813) Using index CustNum (12) for dump of table customer. 
    BINDUMP 7: (453)   Logout by george on /dev/pts/18. 

    Instead, the binary dump writes each message 10 times to the standard output stream:

    ** UNIX maximum file size exceeded.  bkWriteMessage (4162)

    If ulimit is too small to write to the bi or db files, binary dump will crash the database when it evicts blocks modified by other users. Dbanalys or ABL clients will evict the modified blocks as well, but they will NOT crash the database because they keep using the unlimited ulimit.

    Be warned: a malicious user could use binary dump to easily crash your database:

    # ulimit 30
    # ./_proutil sports2000 -C dump OrderLine .
    SYSTEM ERROR: error writing, file = sports2000.b1, ret = -1 (6072)
    User 7 died with 1 buffers locked. (5027)
    # ulimit 100
    # ./_proutil sports2000 -C dump OrderLine .
    SYSTEM ERROR: error writing, file = sports2000_9.d1, ret = -1 (6072)
    User 7 died with 1 buffers locked. (5027)

    I believe it's not a bug. Ulimit sets the size of the volumes (sections) for a binary dump when _proutil does not have the userid bit, so we can create multi-volume binary dumps. It's a useful feature.

    By the way, there is a minimal value of ulimit for binary dump: 28 or 29 (kilobytes). The exact value depends slightly on the table being dumped.
    Binary dump will fail if ulimit is set below this value:

    Internal error in upRecordDumpCombined, return -26631, inst 8. (14624)
    Binary Dump failed. (6253)

    In fact, the binary dump successfully creates all volumes except the last (smallest) one, no matter how large that last volume should be. The minimal size of a binary dump file is the header size plus the record size, where the header size is 1 KB. Binary dump can create dump files just a bit larger than 1 KB, but it needs a ulimit roughly 30 times larger. The minimal ulimit is close to, but a bit less than, the maximum record size: 31992 bytes. I have no guess as to why this limit exists.
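The effect described above can be reproduced without Progress at all. A minimal sketch, assuming a POSIX shell and GNU dd; note that the unit of `ulimit -f` is shell-dependent (512-byte blocks per POSIX, 1024-byte blocks in bash):

```shell
#!/bin/sh
# Try to write 64 KB with a file-size limit of 8 blocks: the writing
# process is stopped by SIGXFSZ once the limit is reached, and the file
# is left truncated at the limit.
(
  ulimit -f 8
  dd if=/dev/zero of=limited.tmp bs=1024 count=64 2>/dev/null
) || true
wc -c < limited.tmp    # well under the requested 65536 bytes
rm -f limited.tmp
```

This is the same mechanism that killed the dump: the process, not the file owner, carries the limit, so any write it must perform on someone else's behalf (like flushing a dirty buffer) is subject to it.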

    I hope the topic starter will not blame me for going off-topic. ;-)
    Does anybody know why a shell script with a shebang line doubles the value reported by ulimit?

    ulimit 10
    utest.sh

    where utest.sh is:

    #!/bin/sh
    ulimit
    TmpFile=out.$$.tmp
    dd if=/dev/zero of=$TmpFile bs=1M count=1024
    ls -l $TmpFile
    rm $TmpFile

    ulimit will report 20. The size of the temp file will be 10 KB.

    In other words, the effective value of ulimit is correct. Only the ulimit builtin reports a wrong value.

    But if we remove the shebang line ("#!/bin/sh") from the script, ulimit reports the correct value.
    Is it a bug, or does it have some hidden meaning?
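One plausible explanation (an assumption, not verified on the poster's system): POSIX specifies that the ulimit utility reports the file-size limit in 512-byte blocks, while some shells, notably bash, use 1024-byte blocks. If `ulimit 10` is set in a shell using 1024-byte units (a 10 KB limit) and `#!/bin/sh` resolves to a shell that reports in 512-byte units, the same kernel limit prints as 20; without the shebang line the script may be interpreted by the calling shell itself, so the units match. The kernel limit is identical either way, which is why the temp file is 10 KB in both cases. A sketch to check which unit a given shell uses:

```shell
#!/bin/sh
# Set a limit of 20 "blocks" and see where writes get cut off:
# a 10240-byte file means this shell counts in 512-byte blocks,
# a 20480-byte file means it counts in 1024-byte blocks.
(
  ulimit -f 20
  dd if=/dev/zero of=u.tmp bs=512 count=100 2>/dev/null
) || true
wc -c < u.tmp
rm -f u.tmp
```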