How to survive after the errors like 819 or 10566?

Posted by George Potemkin on 12-Feb-2019 12:22

How to survive after the errors like 819 or 10566?

(819) SYSTEM ERROR: Error in undo of record delete
or
(10566) SYSTEM ERROR: Undo failed to reproduce the record in area <area-num> with rowid <DBKEY> and return code <Return Code>.

They mean that the undo part of a recovery note is corrupted. The note will be successfully replicated to a target database or rolled forward on a warm standby copy, but those databases will then fail to start: crash recovery will be terminated by the same error. In other words, we lose all copies of the database.

We can restore an old backup and roll forward the AI files using an endtime before the error. But even that may not work, because we need to find the time when the corrupted note was created rather than the time when Progress tried to undo the changes described by the note. The latter is the point of no return. At a minimum we need to know when the transaction began. Unfortunately the error does not even report the TRID.
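For reference, a restore-and-roll-forward sequence of the kind described above might look like the sketch below. It is a dry run that only builds and prints the commands; the db path, backup path, AI file names and endtime are all placeholder assumptions, and the endtime must predate the creation of the corrupted note, not the moment the undo error was reported.

```shell
# Sketch only: build and print a restore + roll-forward command sequence
# without executing it. All paths and the endtime are placeholders.
DB=/db/prod/mydb
BACKUP=/backup/mydb.pbk
ENDTIME="2019:02:12:11:55:00"   # must predate creation of the corrupted note

CMDS="prorest $DB $BACKUP"
for AI in /ai/mydb.a1 /ai/mydb.a2; do
  CMDS="$CMDS
rfutil $DB -C roll forward endtime $ENDTIME -a $AI"
done
echo "$CMDS"
```

Finding a safe endtime is exactly the problem described above: without the TRID or the transaction start time in the error message, the value has to be guessed from the db log and AI scans.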

Does anybody see any solution other than using the -F option?

Just for information - we already got error #819 for two customers.

Both customers are running 11.7 build 1497 SP02, one on Linux 64-bit, the second on AIX 64-bit.

The Knowledge Base describes defect PSC00293031, related to error #819, but it was fixed in 11.3.3 and 11.4. In our cases the transactions did not use LOB objects.

In both our cases the corrupted recovery note was RL_RMCR, the one that describes the creation of a new record. Transaction undo replaced the records with placeholders (recid locks), and exactly at that moment we got error #819.

Progress could make survival a bit easier and less painful for us if it would:
1. Report the TRID with the errors above;
2. Support a Transaction Ignore List at db startup that sets the list of TRIDs whose recovery notes should be ignored during crash recovery. It's better to be ill than dead: better to leave uncommitted only the transaction that caused the above errors than to skip crash recovery for all transactions that were active at the moment of the db crash;
3. Enhance the roll-forward scan with a "loud" verbose option that reports the changes described by each recovery note, in other words an option to fully decode the contents of recovery notes. It would help us find all changes made by the transactions on the Transaction Ignore List and fix them manually.

All Replies

Posted by George Potemkin on 13-Feb-2019 10:18

If all databases (source, target and standby) are fatally infected by a corrupted recovery note, which copy should we open using the -F option?

Is a standby database the best choice? The source and target databases crashed, and some buffers modified by committed transactions were not written to disk; the -F option will lose those changes. A standby database used to roll forward AI files did not crash, and all recent changes are written to disk, so the -F option will not lose the changes made by committed transactions. At least that is what the theory seems to say. But in theory there is no difference between theory and practice. In practice there is.

In the second incident we were lucky to have only one uncommitted transaction (the one with the corrupted recovery note). The customer opened the database using the -F option, but the database soon crashed again due to new corruptions, which are to be expected when someone uses forced access:

(14684) SYSTEM ERROR: Attempt to read block 0 which does not exist in area <index-area>, database <db>. (210)

(10831) SYSTEM ERROR: Record continuation not found, fragment recid <recid> area <data-area>  3.

(10833) SYSTEM ERROR: Invalid record with recid <recid> area <data-area> 1.

The uncommitted transaction did not update those areas, and I am almost sure these corruptions did not exist before -F was used. Why did we get these errors?

Posted by gus bjorklund on 14-Feb-2019 16:36

> On Feb 13, 2019, at 5:20 AM, George Potemkin wrote:

>

> The -F option will not lose the changes done by the commited transactions

yes, it will sometimes. consider the following scenario:

0) a transaction begins and a transaction begin note is spooled.

1) transaction creates a record. this could likely cause several block changes if a new block must be allocated, or just one if record fits in block at head of rm chain.

2) transaction creates an index entry. best case, one index block is changed, else a block split may be required.

2a) at this point, there are bi notes describing all those changes made by the transaction, probably still in memory.

3) transaction commits and a commit note is spooled.

4) lazy commit timer expires and all bi notes up to and including the commit note are flushed to disk.

5) system crashes. contents of bi and ai buffers and database buffers are lost.

6) you do a normal database restart, the redo phase will recreate the actions of any notes whose database actions did not make it to disk. what was in memory and not written to disk is recreated. the transaction will be ok. nothing lost.

alternate 6) you do a database start with -F. contents of bi log are discarded. there is no redo phase. memory contents are NOT recreated and whatever was in memory is lost forever. that could be the contents of any action performed in steps 0 through 3, including the entire transaction.

Posted by George Potemkin on 14-Feb-2019 18:20

I meant that we will not lose the changes made by committed transactions when we use the -F option to open a standby database that was used to roll forward AI files. It did not crash when the corrupted note was applied: rfutil was successful for the last AI, and all db changes made by rfutil were saved to disk. But it will crash if we try to open the database in normal mode, which will try to undo the transaction with the corrupted recovery note.

Posted by gus bjorklund on 14-Feb-2019 19:18

george, you are correct.

Posted by George Potemkin on 19-Feb-2019 15:28

I’m thinking about the following plan for what to do if we get the “error in undo” again. Any comments are welcome.

 

0. Watch the db log closely (for example, once per second). If the error happens:

1. Freeze the watchdog process (kill -SIGSTOP). This prevents the watchdog from dying while undoing the dead client's transaction, so the database will not crash immediately;

2. Optionally proquiet the database. Any changes made from this point on can be lost; we need time to make a decision;

3. Get full information about the dead client's transaction - mainly the transaction start time and the number of notes written and read for the current transaction;

4. Based on this information, decide whether to switch to a warm standby database and roll forward AI files to a point in time before the transaction began, or (if the transaction was opened a long time ago) to continue with the current state of the database, even if we are forced to use the -F option to open it.

5. If we choose to use the -F option:

5.1 Disable the quiet point and disconnect all db sessions except, of course, the dead one;

5.2 Proquiet the database again to write all dirty blocks to disk;

5.3 Shut the database down (emergency shutdown?). Of course, the database will not be closed normally, because the dead session's transaction can't be undone due to the error;

5.4 Truncate bi -F. At this point we expect to lose only some changes made by the dead uncommitted transaction; the changes made by other transactions are supposed to be on disk already;

5.5 When the db is up and running, eliminate the changes made by the dead transaction. To find those changes we can (with a bit of luck) use AI scans.
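The command side of the steps above can be sketched as follows. This is a dry run: each command is printed, not executed; the db path and the watchdog PID are placeholder assumptions, and the decision points (steps 3 and 4) are of course manual.

```shell
# Dry-run sketch of the plan above: each step is printed, not executed.
# DB path and the watchdog PID are placeholder assumptions.
DB=/db/prod/mydb
WDOG_PID=12345                # in reality: look it up in promon or the db log

CMDS=""
run() { CMDS="$CMDS $*;"; echo "$@"; }   # swap 'echo' for real execution

run kill -STOP "$WDOG_PID"            # 1. freeze the watchdog
run proquiet "$DB" enable             # 2. optional quiet point
# 3./4. inspect the dead client's transaction in promon, then decide
run proquiet "$DB" disable            # 5.1 release the quiet point
run proshut "$DB" -by                 # 5.3 shut the database down
run proutil "$DB" -C truncate bi -F   # 5.4 force access, discarding the bi log
```

Step 5.5 (finding the dead transaction's changes via AI scans) is left out, since it depends entirely on what the scans show.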

 

Did I miss some points?

Posted by Andriy Mishin on 20-Feb-2019 08:05

>>In other words we will lose all databases.

>>Just for information - we already got error #819 for two customers.

I'm shocked. How did you survive those two cases? Have you opened a case with Progress Technical Support? What does Progress tell you about this?

Why is no one here responding to this message?

It's a database administrator's nightmare. I wouldn't want to be in this situation. [:|]

Posted by George Potemkin on 20-Feb-2019 10:10

> Have you opened a сase in Progress Technical Support?

Sure

Posted by Andriy Mishin on 15-Mar-2019 07:36

Hi there!

What's the news? What does Progress say about this?

Posted by George Potemkin on 15-Mar-2019 07:54

It's still under investigation.

Posted by Andriy Mishin on 15-Mar-2019 10:12

Did they confirm it was a bug?

Posted by George Potemkin on 15-Mar-2019 10:27

I'll share the conclusion when I get it from PTS.

Posted by George Potemkin on 03-Jul-2019 14:17

The development team did a big job investigating the root cause of the errors.

First of all, two incidents that I mentioned above were caused by two absolutely different errors:
SYSTEM ERROR: Error in undo of record delete (815)
SYSTEM ERROR: Error in undo of record delete (819)

There is the error # 820 but our customers did not yet ;-) get it:
SYSTEM ERROR: Error in undo of record delete (820)

The error 819 is fatal for the database. To get access to the database we need either to use the -F option or to roll forward AI files to a time before the corrupted note was created (which is not the time when message #819 was issued).

The error 815 is recoverable: the transaction undo performed by a client's session failed, but database crash recovery will be successful. In our case crash recovery took a long time (more than 5 hours) because the remote user did not really log out from the database, which resulted in a very large bi file (the user's transaction stayed open for a week). The error will be fixed in 12.1.

IMHO, a workaround for such errors: on a standby database, do not apply AI files that contain notes for transactions that are not yet committed on the source database.

Text is corrected: the 815 is recoverable and the 819 is fatal.

Posted by Dmitri Levin on 09-Aug-2019 20:52

>The error will be fixed in 12.1

Only the recoverable error 819? How about error 815?

> a workaround for such errors: on standby database do not apply AI files that contain the notes for transactions that are not yet committed on source database.

While that sounds great, it may or may not be possible to implement. I believe the Progress database engine does write-ahead logging, meaning an APW may write changes to the database that are not yet committed. (BTW, the Oracle engine works the same way.) Thus those uncommitted changes have to go to the standby database in the form of AI notes. It would be necessary to put those notes in a separate pool.

Posted by gus bjorklund on 09-Aug-2019 22:16

> On Jul 3, 2019, at 10:19 AM, George Potemkin wrote:

>

> IMHO, a workaround for such errors: on standby database do not apply AI files that contain the notes for transactions that are not yet committed on source database.

cant do that. there may be changes from several transactions in the same database block. the first change may be uncommitted and the following ones committed. it would not be easy to come up with a way to skip such changes temporarily and still have crash recovery.

if the last change to a block was uncommitted, one could perhaps hold that in abeyance. but knowing if the transaction is uncommitted is not easy either without a scan of the ai file before doing the roll-forward.

Posted by George Potemkin on 10-Aug-2019 08:57

> Only the recoverable error 819? How about error 815?

Sorry for the misinformation: it's the error 815 that is recoverable and the 819 that is fatal. A rule for myself to remember: the higher the number, the more dangerous the error (so the 820 would probably mean the end of the world ;-).

The fix for the recoverable 815 will be available in 11.7.5 as well as in 12.1.

Development team is still working on the fatal 819.

> it may or may not be possible to implement. I believe the Progress database engine has "Write ahead" meaning APW will write changes to database that may be not yet committed.

> cant do that. there may be changes from several transactions in the same database block.

Data blocks are not corrupted. It's a recovery note that is corrupted in the case of the 819; namely, the corruption is in the undo part of the recovery note. Every day we could have thousands of transactions that contain notes with such corruption, but we do not get the fatal 819 because those transactions were not interrupted. But when we get the 819, we need the database in the state before the note was generated.


BTW, in the starting post I suggested the idea of a Transaction Ignore List that would set the list of TRIDs whose recovery notes should be ignored during crash recovery. In my humble opinion it can be useful not only after the fatal 819. Why do developers think it's a bad idea?

In the case of the 819 all standard recovery plans will not work. The only solution from Progress is to use the -F option followed by a dump and load of the whole database. For our largest customers (where the typical database size is a few terabytes) it would mean that the business is stopped for a week or longer. But a Transaction Ignore List could report the notes that were ignored (in the same format as aimage scan verbose). We can easily dump all records from the data blocks updated by those notes. We can rather easily empty these blocks. Unfortunately we can't use dbrpr's option "8. Reformat Block to a Free Block", because this option is broken and in any case should not be used for data blocks in SAT2. Then we can load the records back into the database and fix or rebuild the indexes in the corresponding tables. If everything is scripted, then the database downtime will be defined only by the size of the transactions on the Transaction Ignore List, and it can be reasonably short.

Needless to say, the "ignoring" itself is easy to code.

This topic is exactly about the recovery plans for the unrecoverable errors like the 819.

Posted by gus bjorklund on 10-Aug-2019 13:18

> On Aug 10, 2019, at 4:58 AM, George Potemkin wrote:

>

> Why developers think it's a bad idea?

It isn't a bad idea, it's just that it isn't at all simple to make it work. Skipping notes makes it so that the following (or, during physical undo, preceding) note for the same block will encounter a block state different from what it was expecting. I did understand that the block is not corrupted.

Posted by George Potemkin on 11-Aug-2019 10:45

> Skipping notes makes it so that the following (or prededing during physical undo) note for the same block will encounter a block state different than it was expecting.

I heard that once in the past Progress created for a customer a special version of rfutil that allowed applying an AI file even if the previous one was missing. Is it a myth?

Posted by George Potemkin on 11-Aug-2019 15:44

Another idea: Progress could add a new type of block chain - a chain of corrupted blocks. Fatal errors like the 819, where Progress is unable to handle the changes to a block, could move the block to this chain, creating the corresponding recovery note. These errors would then not even crash the database; it would stay online. Any attempt by a session to read a block on the corrupted-block chain should result in an error. The fatal errors would no longer mean the loss of the database (and its replicas): we would lose only one block inside a database instead of the whole database. In the case of the 819 the block contained only the placeholder left for the record fragment created by the same transaction that caused the 819 during undo. In other words, we would lose nothing by formatting the corrupted block. No downtime at all! No data lost (not guaranteed, but very likely)!

Posted by gus bjorklund on 11-Aug-2019 16:24

> On Aug 11, 2019, at 6:47 AM, George Potemkin wrote:

>

> I heard that once in the past Progress created for a customer the special version of rfutil that allows to apply AI file even if a previous one was missed. Is it a myth?

it was a version that kept going even when there were errors but could not apply notes out of sequence. the goal was to salvage as much of the database as possible since there were no backups. but the resulting database was highly corrupted and further salvage work was required.

it didn't skip notes because applying notes out of sequence almost always leads to memory violations right away because the database block state is wrong (depending on what operation was skipped). a simple example of the consequences of skipping a note: an update to a fragmented record can corrupt the record if only one fragment is updated. or the free chain or rm chain might become looped if one of the chaining operations is not done.

Posted by gus bjorklund on 11-Aug-2019 16:38

> On Aug 11, 2019, at 11:47 AM, George Potemkin wrote:

>

> The fatal errors like 819 where Progress is unable to handle the changes of a block can move the block to this chain with creating the corresponding recovery note.

that approach is worth investigating. could also have a "bad block" code in the block header. losing some kinds of blocks could still have disastrous consequences - like if it was an area root block or something. depending on the type of error and block type, recovery might be difficult or impossible. in such cases, the block could be removed from the table or index and reformatted. also, with that approach, once a block was flagged, the roll forward could skip all operations on the block and keep going.

there are other things that could be done as well, like a table dumper utility that would dump all the records in a table by reading the data extents and ignoring all errors. in a corrupt database it might lose lots of records but the result might still be worth it. never implemented it myself because there aren't enough cases where it would be needed. so no roi in it.

Posted by Thomas Mercer-Hursh on 11-Aug-2019 16:54

> like a table dumper utility that would dump all the records in a table by reading the data extents and ignoring all errors.

Been there, done that ... and I'm sure you have too.  Crude, but reading and dumping by primary id until one gets an error and then restarting by incrementing the id until it starts to work again.  I had one case many years ago where I thought the company was going to have to start over again because the backups were bad and there was significant physical damage to the disks.  It took me about 40 hours but I got 90-95% of the data back.
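The crude skip-and-restart loop described here can be sketched as below. The dump_record stub is a stand-in for a real export of one record by its primary id; which ids fail is of course not known in advance, and the values here are invented purely for illustration.

```shell
# Sketch of the skip-and-restart salvage loop; dump_record is a stub
# standing in for a real export of one record by its primary id.
dump_record() {
  case "$1" in
    5|6|7) return 1 ;;        # pretend ids 5-7 hit damaged blocks
    *) echo "record $1" ;;
  esac
}

recid=1; last=10; dumped=0; skipped=0
while [ "$recid" -le "$last" ]; do
  if dump_record "$recid"; then
    dumped=$((dumped + 1))
  else
    skipped=$((skipped + 1))  # error: step past the damaged id and go on
  fi
  recid=$((recid + 1))
done
echo "dumped=$dumped skipped=$skipped"
```

The real work is in the stub: each probe must run in a session that can survive the read error, which is why the restart-by-incrementing approach took Thomas 40 hours rather than 40 minutes.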

Posted by George Potemkin on 12-Aug-2019 13:04

I will criticize my own idea of using a chain of corrupted blocks to keep a database online after critical errors. It will not work, for example, for the error 1124: “Wrong dbkey in block”. In most cases nowadays that error is caused by wrong mapping of a block in the system cache to a block on disk. (By the way, Progress could automatically empty the system cache to try to fix the 1124.) After the error 1124 we can't mark the block as corrupted by any flag inside the block itself: otherwise, sooner or later we would write the modified block to disk and, if the mapping error is not yet fixed, we would overwrite the wrong block on disk.

Progress could store the list of corrupted blocks inside some special blocks. To minimize the impact on performance, Progress could check against the list only when blocks are retrieved from disk. When we get a critical error, the block may still be in the database buffer pool; in that case it should be deleted from the buffer pool.

When another session tries to read a block that was previously reported in message 1124, the session can issue a new message: "You are trying to read a block that was marked as corrupted". And at that moment the session will not hold a buffer lock, so there is no reason to crash the database.

Apart from implementation effort, are there any negative effects of keeping a database online after critical errors?

Posted by gus bjorklund on 12-Aug-2019 14:06

> On Aug 12, 2019, at 9:07 AM, George Potemkin wrote:

>

> Progress could automatically empty the system cache trying to fix the 1124

There is no operating system API to accomplish that. the only way is to dismount and remount the filesystem but that deletes all those blocks from the cache. and all open filehandles are lost.

Posted by gus bjorklund on 12-Aug-2019 14:14

> On Aug 12, 2019, at 9:07 AM, George Potemkin wrote:

>

> Apart of implementation are there any negative effects of keeping a database online after the critical errors?


depends on the application. if the data that has become inaccessible is critical to the application and/or used very often, the application will be unusable.

Posted by gus bjorklund on 12-Aug-2019 14:16

> On Aug 12, 2019, at 9:07 AM, George Potemkin wrote:

>

> Progress can store the list of the corrupted blocks inside some special blocks

Or, use the chain as you suggested but also clear the modified flag, set the "bad block" flag and lock the block in memory. with a limit on what percentage of buffer pool can be devoted to such. then attempts to access the block could return a different error as you suggest.

but doing that could still cause crashes in at least some cases. what do we do if the corrupt block is detected while updating a fragmented record and the operation cannot be undone?

Posted by George Potemkin on 12-Aug-2019 14:36

> if the data that has become inaccessible is critical to the application and/or used very often, the application will be unusable.

It would be no worse than the current behavior. For example, what happens now after the error 1124? The database crashes. The DBA will not have time to solve the error and will just restart the database. A bit later some session will try to read the same block again. The database will crash again, then again. But if the block is marked as corrupted, then only some users will be unable to do their jobs - tens out of thousands.

Posted by George Potemkin on 12-Aug-2019 14:43

> There is no operating system API to accomplish that. the only way is to dismount and remount the filesystem but that deletes all those blocks from the cache. and all open filehandles are lost.

While the db is still running, the file handles are still open. In this case the umount command will not dismount the disk, but it will empty the system cache first. At least that's true on HP-UX, and I guess on other Unix flavors as well.

Posted by gus bjorklund on 12-Aug-2019 16:56

> On Aug 12, 2019, at 10:46 AM, George Potemkin wrote:

>

> While db is still running then the file handles are still opened. In this case the umount command will not dismount a disk but the command will empty system cache first. At least it's true on HP-UX but I guess on other Unix flavors as well.

that behaviour must be o/s and filesystem specific and is not documented.

there's a force option on umount() system call.

to flush, on linux, one can do

echo 1 > /proc/sys/vm/drop_caches

to flush /all/ filesystem caches without disturbing the system otherwise. that is a bit drastic though (but can't be worse than sync() (or can it?)).

Posted by George Potemkin on 13-Aug-2019 07:20

> what do we do if the corrupt block is detected while updating a fragmented record and the operation cannot be undone?

Progress can undo the changes in all blocks except the ones marked as inaccessible. A record will be inaccessible if one of the blocks that needs to be read is inaccessible, including an index block.

Let me keep dreaming… A DBA could have a utility to remove the inaccessibility flags. For example: a session gets the wrong-dbkey error, marks the block as inaccessible, removes it from the buffer pool, raises the error and /continues/ to run. When the DBA is notified about the error 1124 in the db log, he/she can empty the system cache and, if that fixed the error, remove the block's inaccessibility flag. Everything can be done with no database downtime.

The next step is different states of inaccessibility: fully inaccessible or read-only access. In the case from the starting post, the database was running for 6 days after the error 815, and all this time the block contained the changes from the uncommitted transaction. So in some cases we can allow reading a block but must prohibit its updates. This can't be done easily in the current Progress versions, but future Progress versions could support read-only buffer pools in shared memory - not only to deal with corrupted blocks but to support any operation that should temporarily restrict updates in some storage areas. As an intermediate solution: read-only sessions (including the ones running on a target database) could read the corrupted blocks if the DBA granted read-only access to them.

It can be a part of “Five 9s” strategy.

Posted by gus bjorklund on 13-Aug-2019 13:26

> On Aug 13, 2019, at 3:23 AM, George Potemkin wrote:

>

> All this time the block contained the changes from the uncommitted transaction

i thought you said the note was corrupted but not the block. so if the block is fine, of course it can sit there and be used. even updated.

Posted by George Potemkin on 13-Aug-2019 13:44

The block was fine in the case of the 815 as well as the 819. "Fine" means that the block's structure was correct. But there was a recovery note that had to be applied to this state of the block, and it could not be done - no matter whether due to corruption inside the block (not our cases), corruption in the recovery note (the case of the 819), or corruption in shared memory (the case of the 815). The block contained "dirty" data that couldn't be "cleaned". I would say the block was inaccessible for applying recovery notes but accessible for the purposes of dirty reads. In one word: it's a "corruption".

This thread is closed