zlib

Posted by Gareth Vincent on 11-Dec-2017 23:21

We are in the process of compressing the LOB objects in our document database, and we seem to be having problems calling the external library libz.so.1 on Solaris 10 (SPARC). It's working fine on CentOS.

As a test, I'm trying to compress a file "10PW_11.txt". When running the program on Solaris, it does not create the _zip file as expected, and there is no error message either. Any ideas as to why this would work on CentOS and not on Solaris? We are running OpenEdge 11.5 64-bit on both servers.

DEFINE VARIABLE cSourcePath# AS CHARACTER NO-UNDO.
DEFINE VARIABLE cTargetPath# AS CHARACTER NO-UNDO.

/* Source file to compress, and target path for the compressed output */
cSourcePath# = SESSION:TEMP-DIRECTORY + "10PW_11.txt".
cTargetPath# = cSourcePath# + "_zip".

RUN zlib.p (INPUT cSourcePath#, INPUT cTargetPath#, INPUT "compress").
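For readers without the attachments: the usual way ABL code binds to zlib's compress() is an EXTERNAL procedure declaration. The sketch below is illustrative only (the parameter names are invented, and the attached zlib.p may differ); note that zlib's C signature is int compress(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen), and on an LP64 platform such as 64-bit Solaris or Linux, uLong is 8 bytes wide, so the declared parameter widths have to match the client's bitness.

PROCEDURE compress EXTERNAL "libz.so.1" CDECL:
    DEFINE INPUT  PARAMETER pDest      AS MEMPTR NO-UNDO. /* destination buffer  */
    DEFINE INPUT  PARAMETER pDestLen   AS MEMPTR NO-UNDO. /* pointer to uLongf   */
    DEFINE INPUT  PARAMETER pSource    AS MEMPTR NO-UNDO. /* source buffer       */
    DEFINE INPUT  PARAMETER iSourceLen AS INT64  NO-UNDO. /* uLong on 64-bit     */
    DEFINE RETURN PARAMETER iResult    AS LONG   NO-UNDO. /* Z_OK = 0            */
END PROCEDURE.

Checking iResult against zero (Z_OK) after the RUN statement would at least surface an error code instead of failing silently.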

Below are the two source files we are calling:  zlib.p and zlibbind.p.

[View:/cfs-file/__key/communityserver-discussions-components-files/18/zlib.p:100:100][View:/cfs-file/__key/communityserver-discussions-components-files/18/zlibbind.p:100:100]

All Replies

Posted by Gareth Vincent on 12-Dec-2017 00:54

I've just discovered that if I run the same test on a Sun Intel server, it works.

Posted by James Palmer on 12-Dec-2017 03:37

I'm afraid I don't have an answer to why it's not working, but wanted to chip in with my 2p.

In my opinion (and from experience!), it is a really bad idea to store documents within LOBs, particularly to the point where you are considering compressing them. I can understand the draw of doing this - heck, that's why we did it in the first place too: keep everything together under one roof, secure and easy to access.

The reality is that LOBs are a real pain when it comes to doing any DBA work that requires moving stuff around. They bloat the database and greatly increase the maintenance time required for all sorts of activities. Even compressed, our LOBs comprised nearly 50% of the database. Backups, restores, replication, dump and load, and so on were all negatively impacted by this.

The database is not a filestore. It's a database.

In the end, the solution we came up with was to create a file store that only administrators and one user had access to. That one user was the account the AppServer process ran as. The AppServer pushed and pulled the files, and it obfuscated the filenames to make it hard for anyone to fiddle around. We had a small table within the database that was the link between obfuscated file names and real file names, so we could present a file in its real format to the user.
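Roughly, the link table described above might look like the following sketch (field and index names are invented for illustration; an actual implementation would be a permanent database table rather than a temp-table):

DEFINE TEMP-TABLE ttDocLink NO-UNDO
    FIELD realName   AS CHARACTER /* name presented to the end user        */
    FIELD storedName AS CHARACTER /* obfuscated name on the file store     */
    FIELD createdAt  AS DATETIME  /* useful for retention/cleanup policies */
    INDEX idxStored IS PRIMARY UNIQUE storedName.

The AppServer looks up storedName when serving a download and writes a new row whenever it stores a document, so clients never see the real paths on disk.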

It takes a bit of effort to get it all set up, but once done the database is a lot leaner, the file system holding the files falls under standard system backups and maintenance, and you save hours a month on routine maintenance jobs.

Posted by Gareth Vincent on 12-Dec-2017 04:07

That is an interesting approach and something worth considering. Our main driver is to free up disk space and reduce bandwidth for our end users.

Our document DB makes up 80% of our entire storage but is used maybe 10% of the time. In my test environment I was able to reduce a 150 GB DB to 32 GB.

Our document DB is only one of the DBs that make up our application, and it contains only copy documents and day-end reports. We are hosting just over 100 clients with our new application, so you can imagine the impact this has on storage.

This thread is closed