SQL-92 QUERY FAIL: Internal error -1 buffer too small for generated record - Forum - OpenEdge General - Progress Community


I'm getting the following error when I'm trying to query the database using a dynamic prompt in Crystal Reports 2008 (SQL-92):

Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function rcUpgradeRecord called from stub_tpl__hdl_t::upgradeRecord on DATABASE.TABLE for . Save log for Progress technical support. [Database Vendor Code: -219901].

The closest KB article I could find was this one, but I'm getting this error while upgrading from 10.1B to 11.6.3, and the erroring function is different. Regardless, I have tried changing -SQLTempPgSize to 256 (the largest value) and setting the -SQLTempBuff value to match using the formula in the KB, but it still does not work.

Does anyone have any ideas on what else I could try?

Verified Answer
  • Hi,

    If you truncate the fields to under 32K, it should resolve the issue.

    The maximum length of a column in a table is 31983.

    So, you can truncate the fields to within that limit.

    Moreover, please have a look at the following article, which describes using DBTOOL to fix such issues.

    knowledgebase.progress.com/.../000065388

    Please let us know if you need further assistance.

    Thanks and Regards,

    Ram Krushna
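
    For illustration, a hedged SQL-92 sketch of such a truncation; the table and column names (PUB.inventory, notes) and the 31000 threshold are assumptions for this example, and SUBSTR/LENGTH are standard OpenEdge SQL scalar functions:

    ```sql
    -- Hypothetical sketch: trim an oversized varchar column back within the limit.
    -- PUB.inventory and notes are placeholder names; adjust for your schema.
    UPDATE PUB.inventory
       SET notes = SUBSTR(notes, 1, 31000)
     WHERE LENGTH(notes) > 31000;
    COMMIT;
    ```

    Leaving some headroom below 31983 is deliberate, since the record-level limit also has to cover the other columns and per-record overhead.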

All Replies
  • Hi,

    The method mentioned in the error, "rcUpgradeRecord", works on a given record: it compares the version of that record with the new schema and upgrades it if it is not in sync. It returns "Buffer too small for generated record" if there is not enough space in the provided record buffer to expand the record.

    We set the buffer size to the maximum value (32K) when calling it, so if this error is returned, it implies that 32K was not enough for some of the records.

    So, it would be helpful if you could provide us the following information.

    -- What is the table for which this error was noticed?

    -- What are the schema definitions in 10.1B and 11.6.3 for this table?

    -- Could you please provide us the data as well?

    Thanks and Regards,

    Ram Krushna

We have a known issue with a similar error to the one you encountered. It occurs when updating a record that contains a varchar column of size ~31981.

    A simple repro would be:

    create table pub.t3 (id int, name varchar(31981));
    create index i1t3 on pub.t3 (id);
    insert into pub.t3 values (10,repeat('c', 31981));
    
    update pub.t3 set id=11 where substr(name,1,2) ='cc';
    Update count = 0.
    === SQL Exception 1 ===
    SQLState=HY000
    ErrorCode=-219901
    [DataDirect][OpenEdge JDBC Driver][OpenEdge] Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function rcUpgradeRecord called from stub_tpl_hdl_t::upgradeRecord on PUB.T3 for . Save log for Progress technical support.
    

    Is there any similarity with the scenario where you encountered the problem? 

  • the maximum record length is a bit under 32,000. this includes various bits of overhead in addition to the data values.

Yeah, I had assumed that it was something to do with the record sizes, but when I tested it using the same table with a different field, it worked fine. Note that I did not have this issue in version 10.1B. We did convert the codepage from ISO8859-1 to UTF-8 in the upgrade, which might have increased the record size slightly, but I would have thought that would give me the same issue when querying another field from the same table, which it didn't.

    Ram, to answer your question, this is for the 11.6.3 schema. I ran a dbanalys and the table has about 214816 records totalling about 94.9MB, with a max record size of 32014, which is over the 32K limit (I'm not even sure how that's possible). So I'm guessing that if I find the records that are over 32K and truncate them to under 32K, it should hopefully fix the issue? How much would I have to truncate by, allowing for the overheads, etc.?


    UTF-8 data can be longer than data from an 8-bit character set: any character outside the ASCII range takes two or more bytes.

    i am curious why you have such large records. this is not very common.

    being so close to the limit will frequently cause problems, as it is easy to exceed the maximum when you are that close.

I've tried using DBTOOL to fix the SQL width, but it's still not working even though it no longer reports any errors when I run it.

    I tried to find the culprit records using the Buffer record length and I couldn't find any records greater than 32000 (code below). I did find a couple greater than 31000 and have since truncated them but dbanalys is still giving me >32k max record length for this inventory table. We have a notes field inside the table which is probably why certain records can grow to be this large. Is there any other way I can find the problem records?

    DEFINE VARIABLE hBufferHandle AS HANDLE NO-UNDO.
    hBufferHandle = BUFFER inventory:HANDLE.
    FOR EACH inventory:
        IF hBufferHandle:RECORD-LENGTH > 32000 THEN
            MESSAGE hBufferHandle:RECORD-LENGTH.
    END.
  • > but dbanalys is still giving me >32k max record length

    Record length reported by dbanalys is not exactly the same as the value returned by RECORD-LENGTH. Schema changes and record fragmentation can make a significant difference.

    Try to defrag the long records:

    If RECORD-LENGTH > 30000 then dump and delete the record.

    Then reload the dumped records.
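
    A minimal ABL sketch of that dump-and-reload pass, assuming the same inventory table and the standard EXPORT/IMPORT dump format (the file name is an assumption):

    ```
    /* Dump records longer than 30000 bytes to a .d file, then delete them. */
    DEFINE VARIABLE hBuf AS HANDLE NO-UNDO.
    hBuf = BUFFER inventory:HANDLE.

    OUTPUT TO VALUE("inventory-long.d").
    FOR EACH inventory EXCLUSIVE-LOCK:
        IF hBuf:RECORD-LENGTH > 30000 THEN DO:
            EXPORT inventory.
            DELETE inventory.
        END.
    END.
    OUTPUT CLOSE.

    /* Later, reload the dumped records: */
    INPUT FROM VALUE("inventory-long.d").
    REPEAT:
        CREATE inventory.
        IMPORT inventory.
    END.
    INPUT CLOSE.
    ```

    Reloading writes each record back contiguously, which removes the fragmentation that can inflate the length dbanalys reports.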

I found the problem record via an index rebuild, which returned an error reporting its recid; that also explains why it wasn't coming up in the record-length query even when I lowered the filter to 30000. I truncated the record's notes field and the SQL query is working properly now.

    Thanks to everyone who helped!