It seems that when I transfer a file from the AppServer to a remote client, it works fairly well with standard Progress ABL code and the RAW datatype. However, for some reason, when the file on the AppServer exceeds 3.5 GB, the client-side file just keeps growing uncontrollably. So if the file on the AppServer was 4 GB, the client-side file will grow until it consumes all the disk space on the client. (Maybe this is a known bug.)
I am using the RAW datatype to store 30 KB chunks of data, sending them back to a client machine, and reconstructing the file there. I need to know if this is the best way, or if there is a faster or better way to transfer large files from an AppServer to a remote client.
Here is the code that pulls the data from the file:
import stream ghStream unformatted grChunk no-error.

/* If there is an error, return that there are no more chunks */
if error-status:error = YES then
    assign lMoreChunks        = NO
           error-status:error = NO.
else
    lMoreChunks = YES.

/* Create a temp-table record */
create gttRaw.

/* Set the raw data for the temp-table record */
assign gttRaw.RawData = grChunk.
Then a snippet of client side code does this:
/* Get the first temp-table record */
find first gttRaw.

/* Write the data out to a stream, to the local file */
put stream strAsFile control gttRaw.RawData.
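For context, a chunked transfer like the one above is usually driven by a client loop along these lines. This is a sketch only; the procedure name getChunk.p, the server handle hAppServer, and the output file name are illustrative assumptions, not from the original code:

```
/* Illustrative client-side driver loop (names are assumptions) */
define variable lMoreChunks as logical no-undo initial yes.

output stream strAsFile to value("target.dat") binary.
do while lMoreChunks:
    /* getChunk.p is assumed to fill gttRaw with one 30 KB chunk
       and set lMoreChunks = no at end of file */
    run getChunk.p on server hAppServer
        (output table gttRaw, output lMoreChunks).

    find first gttRaw no-error.
    if available gttRaw then
        put stream strAsFile control gttRaw.RawData.

    empty temp-table gttRaw.
end.
output stream strAsFile close.
```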
Can anyone suggest other ways to transfer a file? Passing a MEMPTR holding a 4 GB file might cause problems on the AppServer side, depending on whether there is enough RAM to store the data.
Actually, I was hoping for more ABL-related ideas or solutions. The client and AppServer are on a LAN, and I would just like to copy the files through the AppServer. I don't want to do OS copies, as I don't want to open file shares. I am wondering if there are some MEMPTR tricks or LOB-copying techniques that work across the AppServer boundary.
You don't check lMoreChunks, and IMPORT failures don't always raise errors the way you'd expect. Check ERROR-STATUS:NUM-MESSAGES to make sure there is no error or warning condition to handle. LENGTH(grChunk) < 30K may also be a better check for being at the last record.
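Folding those suggestions into the original read logic might look like this. This is a sketch, assuming a 30,000-byte chunk size and the variable names from the snippet above:

```
import stream ghStream unformatted grChunk no-error.

/* An error or warning from IMPORT signals end of file;
   so does a short final chunk */
if error-status:num-messages > 0 then
    assign lMoreChunks        = no
           error-status:error = no.
else
    /* a chunk shorter than the full 30 KB can only be the last one */
    lMoreChunks = (length(grChunk) >= 30000).

if length(grChunk) > 0 then do:
    create gttRaw.
    assign gttRaw.RawData = grChunk.
end.
```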
COPY-LOB is another option, although you would need more overhead to keep track of where you are in the file.
This is how I would implement it.

Server side (copyfile.p):

define output parameter mFile as memptr no-undo.

copy-lob file "<file>" to mFile.
set-size(mFile) = 0.

Client side:

def var mFileCopy as memptr no-undo.

run copyfile.p on server (output mFileCopy).
copy-lob mFileCopy to file "<filepath>".
set-size(mFileCopy) = 0.
Thank you all for your help. This gives me something else to work with.
The copy-lob sounds really simple, but it would seem that if the file was 10 GB, then I would need to make sure there was enough RAM on the system to handle that efficiently.
Tim, thanks for the insight into IMPORT statement failures, as I was not aware that reaching the end of the file might not raise an error. The error-status check works really well for files under 3.5 GB, but past that there must be something in the way Progress handles the file that causes it to only throw a warning or some other message.
dana wrote:
> The copy-lob sounds really simple, but it would seem that if the file was 10 GB then I would need to make sure there was enough RAM on the system to efficiently handle that.
Worth looking into some of the less commonly used options of the COPY-LOB statement in that case:
1. COPY-LOB FROM ... STARTING AT <offset> FOR <length> can be used to chunk the data on the server side.
(Where chunk size can be much larger than the 30k the RAW type allows)
2. COPY-LOB TO FILE ... APPEND can be used on the client side to glue things back together.
Just beware that STARTING AT is broken in 10.2B.
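Put together, the chunked COPY-LOB approach might look something like this. This is a sketch only; the chunk size, the procedure name getFileChunk.p, the file names, and the server handle hAppServer are illustrative assumptions:

```
/* Server side (getFileChunk.p, a hypothetical name):
   return one chunk of the file per call */
define input  parameter ipcFile   as character no-undo.
define input  parameter ipiOffset as int64     no-undo. /* 1-based */
define output parameter opmChunk  as memptr    no-undo.
define output parameter oplMore   as logical   no-undo.

define variable iChunkSize as int64 no-undo initial 1048576. /* 1 MB */
define variable iFileSize  as int64 no-undo.

file-info:file-name = ipcFile.
iFileSize = file-info:file-size.

copy-lob from file ipcFile
         starting at ipiOffset
         for minimum(iChunkSize, iFileSize - ipiOffset + 1)
         to opmChunk.
oplMore = (ipiOffset + iChunkSize <= iFileSize).
```

```
/* Client side: append each chunk to the local copy */
define variable mChunk  as memptr  no-undo.
define variable iOffset as int64   no-undo initial 1.
define variable lMore   as logical no-undo initial yes.

do while lMore:
    run getFileChunk.p on server hAppServer
        (input "source.dat", input iOffset,
         output mChunk, output lMore).

    copy-lob from mChunk to file "target.dat" append.
    iOffset = iOffset + get-size(mChunk).
    set-size(mChunk) = 0.
end.
```

The memory footprint is then bounded by the chunk size rather than the file size, which addresses the 10 GB concern above.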
We've had Java HeapSpace issues with large temp tables over the AppServer boundary; just mentioning it as a potential failure point for that solution. It's easy enough to increase the HeapSpace on the AppServer, but if the HeapSpace is blown, it brings down the AppServer, so test first.
My code does that currently, but only one record at a time. The problem I have is mostly when it gets to the end of the source file. When the file is larger than 3.5 GB, something fails and it just keeps sending data to the client, and the client-side file grows until it consumes all disk space. I think that, as Tim Kuehn mentioned, the error-status check may not always work, since the end of the file does not trigger the status to be changed for some reason. Maybe there is a Progress bug that causes this to fail with large files.
32-bit Windows
And are you using an OpenEdge version that has 4GL large file support? I don't recall when we added that.
> On Sep 28, 2015, at 12:04 PM, Brian K. Maher wrote:
> Is your client process 32 or 64 bit?