Does anyone know what this error means in an AppServer log:
[17/06/03@19:20:39.422-0400] P-005396 T-005812 1 AS -- (Procedure: 'UpdateHeader app/Maintenance/AppServer/Maintenance.p' Line:401) RAW-TRANSFER statement failed due to stale schema for table 'my_rec_hdr'. (11425)
I was getting this message consistently for about an hour from a state-reset AppServer. The AppServer agent in question was connected to the database over a client/server connection. Interestingly, Google has no hits on the error message yet, so I suspect I must be doing something very unusual (i.e. perhaps I'm using some obscure part of the Progress/OpenEdge product).
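For context, RAW-TRANSFER copies a record's raw storage between a record buffer and a RAW field or variable, so it depends on the client's cached schema matching the database exactly. A minimal sketch of the kind of statement involved (table and variable names are illustrative, not taken from the original code):

```abl
/* Illustrative sketch: copy a record's raw bytes into a RAW variable.
   "my_rec_hdr" stands in for the table named in the log message. */
DEFINE VARIABLE rRecord AS RAW NO-UNDO.

FIND FIRST my_rec_hdr NO-LOCK NO-ERROR.
IF AVAILABLE my_rec_hdr THEN
    RAW-TRANSFER my_rec_hdr TO rRecord.

/* If the client's cached schema for my_rec_hdr is stale relative to the
   database (e.g. after an online ADD COLUMN), this statement can fail
   with error 11425 rather than silently using the old layout. */
```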
The problem mysteriously started happening after I made an online schema change: I added a column to a table. I even bounced the entire broker and all its agents a few times, and the problem would not go away. It was very reproducible.
Then I switched to a shared-memory connection (instead of client/server), ran the same code, and everything worked. I went back to the client/server scenario to gather more details and possibly submit a bug... but the problem had mysteriously vanished once again.
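For readers unfamiliar with the distinction: the two connection types differ only in the connection parameters. A hedged sketch (database name, host, and port are made up for illustration):

```abl
/* Client/server (remote) connection: goes through the database broker
   over TCP, identified by host (-H) and service/port (-S). */
CONNECT mydb -H dbhost -S 20000 NO-ERROR.

/* Shared-memory (self-service) connection: the client attaches directly
   to the database's shared memory, so it must run on the same machine. */
CONNECT mydb NO-ERROR.
```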
It almost seems like a bug in the client/server database connection, one that is "flushed out" by accessing the same schema via an unrelated shared-memory client.
I am somewhat new to using client/server connections with ABL, but I was led to believe that this is a well-supported way of using Progress. My understanding is that, while performance is worse than with shared-memory connections, we should not be troubled by a different set of bugs in the platform.
Please let me know if anyone has encountered this. I'm not sure whether I should stop using client/server or stop making online schema changes; OE wasn't happy about one or the other, and I'm not sure which. I'm using OE 11.6.3 on Windows.
I was thinking about my issue with client/server schema over the weekend, and I also did some searching through the KB. I have two theories about what is going on. Theory #1 is that I'm dealing with an outright bug in client/server ABL connections to an OpenEdge database, one that causes them to fail to pick up online schema changes as they should. Online schema changes are a relatively new feature of OpenEdge, and there may have been shortcomings in the design of this feature where client/server connections are concerned. I found other KB articles that point to client/server bugs of a similar nature.
Moving on to theory #2: it is possible that this is not technically a bug (at least not by Progress OE standards), but a "feature" of the way online schema changes were introduced, implemented in a way that causes client/server connections to behave slightly differently than shared-memory connections. The theory goes like this. Online schema changes are possible because existing clients are allowed to keep executing code that predates a schema change; but this works only by leaving those clients in an "uninformed" state, where they are aware of the original schema and not of any changes to it. It is possible that, while the vast majority of the ABL language works fine under this model, the RAW-TRANSFER statement does *not*, because it is *not* happy to remain "blissfully unaware" of schema mismatches. Below is a link about certain ABL language features that can generate schema-related error messages; it explains that some of these messages are "expected behavior".
However, I must say that even under theory #2, I am not happy that my client/server connections started kicking out errors and then *recovered* on their own for unexplained reasons. I don't mind error messages that are "by design" or "expected", so long as they are also consistent. But given my experience with RAW-TRANSFER, I think the client/server stuff is flaky. I have never experienced anything quite like this with shared-memory database connectivity.
There may be steps that would force client/server connections to behave more consistently where online schema changes are concerned, such as forcing the clients to refresh their schema caches. In retrospect, that may have been a good thing to try.
Does anyone have experience with client/server connections and online schema changes? Can you help me determine whether my RAW-TRANSFER error was happening as a result of theory #1 or theory #2? I haven't followed the evolution of online schema changes closely enough to know whether I encountered a bug or a feature that is by design. Nor do I have enough client/server experience to explain why the behavior of those clients would be so different from shared-memory clients. Fortunately we don't do online schema changes every day, especially not in production. But if we are going to start replacing shared-memory connections with client/server connections, we should probably understand some of the consequences of making the switch. Thanks in advance.
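As a side note on diagnosing staleness under either theory: one thing a connected client can do is read the table's CRC from the metaschema, which changes when the table's definition changes. A rough sketch (table name is illustrative; this reads whatever the connection currently believes the schema to be):

```abl
/* Read the current CRC for a table from the connected database's
   metaschema (_File). If a client's cached value differed from what
   the server holds, its view of the schema would be stale. */
FIND FIRST DICTDB._File NO-LOCK
    WHERE DICTDB._File._File-Name = "my_rec_hdr" NO-ERROR.
IF AVAILABLE DICTDB._File THEN
    MESSAGE "Table CRC:" DICTDB._File._CRC VIEW-AS ALERT-BOX.
```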
It's not expected that client/server connections behave differently from self-service connections in this case.
The error condition is supposed to stop a situation where the client hasn't refreshed its cache to include the new fields, but the record retrieved already contains the new field. Based on your description (you restarted the AppServer broker and still saw the problem), I would guess the problem is on the database server side, but I can't pinpoint exactly what it might be. If you happen to find a way to reproduce this again, you can report it to Support as a bug and we will look into it.
I experienced this issue last week on a client/server connection as well. It seems to be some sort of cached schema information that isn't updated when a database table is changed. I added two columns to a table and received the same RAW-TRANSFER error. After this, I attempted to access those new fields via a state-free AppServer and got a different, but I believe related, error:
SYSTEM ERROR: Cannot read field 41 from record, not enough fields. (450)
SYSTEM ERROR: Failed to extract field 41 from bom_hdr record (table 51) with recid 229889. (3191)
Errors 450 and 3191 have to do with "record corruption". In particular, this article discusses a field holding a value in an invalid format and how to fix it. The article doesn't contain any information about schema changes or column additions:
While trying to solve this, I shut down and restarted everything I could in an attempt to get the database to "update" the schema and resolve the error:
- deleted all compiled .r code
- shut down all AppServer connections
- shut down AVM
- shut down broker
- restarted all of this from scratch
This changed nothing; the same error kept coming back. It went on for the better part of an hour, and the only fix that worked was eventually restarting the database as a whole. These errors did not happen when I added columns over a shared-memory connection.
It seems like a bug: the errors claim corrupt records, yet they are fixed by restarting the database. That is of course far from ideal, but as far as I can tell there isn't any true "record corruption" happening, because accessing the new fields causes no errors once the database is restarted.
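To be explicit about what "accessing new fields" means above, presumably even a trivial read of one of the newly added columns would hit errors 450/3191 while the stale cache persisted. A hedged sketch (the table name comes from the error text, but "new_col" is a made-up stand-in for a column added by the online schema change):

```abl
/* Illustrative: a plain read of a newly added column. Against a client
   whose schema cache predates the ADD COLUMN, a read like this is the
   kind of access that raised errors 450/3191 above. */
FIND FIRST bom_hdr NO-LOCK NO-ERROR.
IF AVAILABLE bom_hdr THEN
    DISPLAY bom_hdr.new_col.
```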