How can I troubleshoot System Error 290? I know that the filesystem is out of disk space when this error is thrown, caused by some kind of temporary file growing much larger than it should; but because there is no disk space left, the generated procore file has a filesize of 0. I need to be able to identify the procedure being run that causes this error.

For reference, the filesystem where these temporary files are created (no -T parameter, so it defaults to the user's home directory in our setup) is 15GB, but during normal use only about half is used at any given time. So the culprit procedure is filling up a temporary file in the range of 5-7GB. Also, depending on timing, the culprit procedure may or may not be the first one to throw the error: it may be sitting there with a 7GB temp file when another user runs a procedure with a 5MB temp file that tips it over the filesystem limit.

We have increased the filesystem by several GB more than once, thinking that the problem could be related to database growth over time, but the error still occurs. By the time we have been alerted, the System Error has been logged and the session terminated, removing the temp file with it and dropping filesystem usage back down to normal levels.
I am also contacting tech support about this, but wanted to see if others have run into this and found a solution.
When a temp file repeatedly grows to a very large size, it might be time to consider changing some things about your setup.
In particular, I would strongly suggest using -T to move the temp files off /home. This ought to allow the protrace files to be created. I also suggest adding -t to make the temp files visible. Making them visible is very useful because it will, potentially, let you see what is growing out of control before it is too late. Or at least it will tell you something about what sort of file is growing and maybe who is creating it.
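As an illustration, a client startup along these lines would apply both parameters (the database name, procedure, and the /protemp path here are placeholders, not a real setup):

```shell
# Hypothetical startup: -T moves session temp files off /home to a
# dedicated filesystem, and -t keeps them visible in directory listings
# instead of being unlinked at creation. All names/paths are placeholders.
mpro /db/prod/sports -p start.p -T /protemp -t

# With -t set, a growing temp file and its owner can be spotted directly,
# e.g. by listing the largest files in the temp directory:
ls -l /protemp | sort -k5 -n | tail -5
```

The `sort -k5 -n` sorts `ls -l` output by the size column, so the last lines shown are the biggest files.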
This is definitely something that ProTop could help you with. You can set ProTop up to monitor the disk space of your filesystem and alert when a certain % is full, which gives you time to react and investigate. But even better than that, you can set up custom "alert enhancers" that do the legwork for you at the point of raising the alert. Any Progress code can be executed within reason, so you could conceivably set it to dump busy processes or long-running transactions at that point. The advantage of this is that by the time you've got an alert and logged on, chances are the process has already gone bang - you're already experiencing this. But because the alert enhancer gets the data automatically, you are far more likely to find the offending process.
Error 290 is related to transaction length, so you could monitor that as well, along with record locks.
We've got a datasheet here that will give you a bit of a flavour if you've not come across it before: www.consultingwerk.com/.../protop
Drop me a line if you want to discuss it in more detail.
James - I was aware of ProTop, but it had fallen off my radar. I'll take another look at it.
Tom - I looked closer at the difference between the procore and protrace files. procore is written to the current directory by default, or to the temporary directory if the -T parameter is specified. protrace is written in the current directory and does not appear to be affected by the -T parameter. If I understand this correctly, the procore file may still end up with a filesize of 0 in the -T directory, but the protrace would not be affected by the full filesystem and would be created successfully (assuming -T is actually in a different filesystem). Does this sound correct? That would certainly help if we could get the ABL stack trace of the offending session.
Is the -t parameter required to make the protrace file visible? I have seen protrace files on our system before, so I'm assuming not. Also, the -t parameter documentation mentions that you will need to manually delete the temp files from aborted sessions if the parameter is set. I'm hesitant to set this flag with our ongoing problem, because the problem currently "resolves" itself by removing the temp file when the session aborts. Setting the -t parameter may cause the problem to last longer than it currently does because of the required manual file removal. Of course, if it helps us solve the root cause, it's probably worth it.
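A side note on visibility: even without -t, the unlinked temp files still show up in lsof while the session holds them open, so it is sometimes possible to catch the culprit as it grows without changing startup parameters. A minimal sketch, where the log path and the 1GB threshold are assumptions to adjust:

```shell
# Without -t, temp files are unlinked at creation, so they never appear
# in 'ls' -- but lsof can still see them while a session holds them open.
# +L1 restricts the listing to files whose link count is zero (deleted
# but still open). Column 7 of lsof output is SIZE/OFF in bytes.
# The 1GB threshold and log path are assumptions for this sketch.
lsof +L1 2>/dev/null |
    awk '$7 > 1073741824 { print $1, $2, $3, $7, $NF }' \
    >> /var/tmp/tempfile_watch.log
```

Run from cron every minute or so, this leaves a trail showing which user and process were holding a huge deleted file at the moment the error hit.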
I believe that you are correct that protrace is not impacted by -T.
-t does not impact visibility of protrace files.
Yes, with -t you need to purge orphan temp files from time to time. A simple find -mtime command run from cron will take care of that. The diagnostic benefit of knowing what is causing your problem should be well worth the extra work of cleaning up manually.
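A nightly cleanup along these lines would do it; the /protemp path, the file-name patterns, and the one-day age cutoff are site-specific assumptions (check your platform's actual temp-file naming before relying on patterns like these):

```shell
# Purge orphaned, visible temp files left behind by aborted -t sessions.
# -mtime +1 matches files not modified in over a day, so files belonging
# to live sessions are left alone. Path and name patterns are assumptions.
find /protemp -maxdepth 1 -type f \
     \( -name 'DBI*' -o -name 'srt*' -o -name 'lbi*' \) \
     -mtime +1 -delete

# A matching crontab entry to run this nightly at 02:15 might look like:
#   15 2 * * * /usr/local/bin/purge_protemp.sh
```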
Thank you, Tom! We'll give this a try.