I've been playing with SQLite as an embedded database. It is one of the most popular embedded databases around.
OE ABL uses local files in the session temp directory to store its temp-tables. I was just thinking how cool it would be if OE ABL had an option to keep its local TT files in SQLite instead of the proprietary format that it uses today.
...After filling the temp-tables with data from the OE database, we would have tons of powerful options that we don't have today. Today we can obviously interact with the TT records from the ABL language (as if they were OE database tables). But if the data was stored in a local SQLite database, then you would also be able to run SQL queries against it, or hand the database file to a different application for follow-up analysis. SQLite performs very well, and a wide variety of other applications could use the results. (I.e. the temp data would no longer be locked up within the context of a single ABL session, and wouldn't require marshalling to get it out of the session.)
It's just a thought. I suspect Progress ABL is permanently married to its own format for temp-tables, but it would be nice if other options were available. Even a simple alternative like a ProDataSet:WriteSqliteDatabase() method would be super valuable (similar to WriteXml, but sending the data to a SQLite database to allow for subsequent SQL queries).
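To illustrate the kind of follow-up analysis this would enable, here is a minimal Python sketch of what a downstream tool could do with a SQLite file produced by a hypothetical WriteSqliteDatabase() (the table, columns, and data below are all made up; an in-memory database stands in for the exported file):

```python
import sqlite3

# Simulate a temp-table exported to SQLite by the hypothetical
# ProDataSet:WriteSqliteDatabase() step (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ttOrder (OrderNum INTEGER, CustName TEXT, Amount REAL)")
conn.executemany(
    "INSERT INTO ttOrder VALUES (?, ?, ?)",
    [(1, "Lift Tours", 125.0), (2, "Urpon Frisbee", 80.0), (3, "Lift Tours", 45.5)],
)

# Any SQL-speaking tool could now run ad hoc analysis against the same file.
rows = conn.execute(
    "SELECT CustName, SUM(Amount) FROM ttOrder GROUP BY CustName ORDER BY CustName"
).fetchall()
print(rows)  # [('Lift Tours', 170.5), ('Urpon Frisbee', 80.0)]
```

The point is that once the data is in SQLite, nothing about the consumer needs to know ABL at all.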
Thoughts? Is it possible to move forward with ideas like this in the OE ecosystem?
>> was your temp-table memory-resident? Or did that involve DBI I/O?
I suspect it was NOT memory-resident but I didn't get that deep in my investigation. Are there VST's for that? Or do I have to enable ABL tracing? Or maybe I just need to watch for file i/o in my temp directory?
If I had to guess, it was probably using I/O on disk (local SSD) considering the number of records. But I've never seen the local SSD become a bottleneck for an ABL database application. Usually the bottlenecks in an ABL client session are waiting on round-trips to the database, or being limited to a single CPU core.
Not to rain on marketing's parade, but the whole point of the SSJ stuff is to improve "queries"; FIND statements are not part of that. The initial rollout is for static FOR EACH statements, but I expect that dynamic queries won't be far behind.
Also -- the 300% was the sum of multiple enhancements. Most real-world scenarios probably won't take advantage of all of them simultaneously very often. Those that *do* will be very impressive though.
For temp-table activity there are some sort of VSTs that are private to the session (thus you cannot collect this from outside the session) and there is a class library to work with. ProTop has some sample code available that you can see in action via the ^t command. The code itself is in lib/ttinfo.p. You do need to set some session startup parameters to be able to collect all of the temp-table data.
> I suspect it was NOT memory-resident but I didn't get that deep in my investigation. Are there VST's for that? Or do I have to enable ABL tracing? Or maybe I just need to watch for file i/o in my temp directory?
I haven't played with the TT VSTs yet so I can't help you there. There might be some useful info in the client log if you enable TT log entry types. At a minimum, look at the DBI file I/O and size, though you'd have to know your code to know whether any I/O is due to the use of one temp-table versus another.
I haven't made any serious attempt to benchmark temp-tables but I've generally found them to be speedy enough when memory-resident. But even if -T is on fancy SSDs, that will be a *lot* slower than having temp-tables entirely in RAM. So I suggest you prove to yourself that you have optimized them for your data before deciding that temp-tables are too slow.
Also a quick reminder:
Temp-tables have their own buffer pool, which you can size with the -Bt & -tmpbsize parameters. That defaults to 255 blocks of 4k, so that's 4k short of a megabyte.
If you are going to cache large-ish sets of data in temp-tables, you really want to increase that by a large margin.
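To put numbers on that default, here is the arithmetic (assuming the block-size semantics described above, where -Bt is a block count and -tmpbsize is the block size in KB):

```python
# Default temp-table buffer pool: -Bt 255 blocks of -tmpbsize 4 (KB each).
bt_blocks = 255
tmpbsize_kb = 4
default_bytes = bt_blocks * tmpbsize_kb * 1024
print(default_bytes)                 # 1044480
print(1024 * 1024 - default_bytes)   # 4096 -> indeed 4 KB short of a megabyte

# To hold, say, 64 MiB of temp-table data in RAM with 4 KB blocks:
needed_bt = (64 * 1024 * 1024) // (tmpbsize_kb * 1024)
print(needed_bt)                     # 16384, i.e. something like "-Bt 16384"
```

Even a few thousand blocks is cheap in RAM terms compared to spilling to the DBI file.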
You don't really need VSTs. Just use the session options to show the files in the temp directory and leave them around after execution. They will either have nominal size, meaning everything is happening in memory, or not, which means that swapping is involved. If the latter, there are a couple of simple startup parameters to control the amount of memory available for TTs. I wouldn't be surprised if you found a little playing with those would have a profound effect.
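A sketch of that check in Python (assuming, as described above, the session was run with -t so the DBI* files survive in the -T directory; the directory lookup via TMPDIR is just a placeholder for wherever -T points):

```python
import glob
import os

def dbi_file_sizes(temp_dir):
    """Return {path: size_in_bytes} for the DBI* files left in the -T directory.

    Nominal sizes suggest the temp-tables fit in the -Bt buffer pool;
    large files mean temp-table blocks are being evicted to disk."""
    return {p: os.path.getsize(p)
            for p in glob.glob(os.path.join(temp_dir, "DBI*"))}

# e.g. after running the session with -t:
for path, size in sorted(dbi_file_sizes(os.environ.get("TMPDIR", "/tmp")).items()):
    print(f"{path}: {size} bytes")
```

If the sizes grow into the hundreds of megabytes, that is the swapping the poster describes, and -Bt/-tmpbsize are the knobs to turn.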
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
"leave them around after execution" requires that you use -t and then crash the session. George can explain how to turn the resulting file into a hacked database but that isn't something that most people are going to know how to do. So you mostly end up seeing the file size.
Whereas the TT "vst" info and the supporting class library give you relatively easy access to a wealth of useful insight. For instance:
Thank goodness this forum is so well suited to posting technically oriented content!
>> use a dedicated local OpenEdge database for that.
> I already am using a dedicated local database. The client-session TT's essentially act like a dedicated local database.
Suggesting improved functionality is always fun; that's my favorite pastime too. Like being able to reconnect temp-tables. It could be interesting to make the handling of temp-tables and regular databases more alike, so you could create tables in a local permanent database from within ABL code as easily as temp-tables today. And why not also make it possible to dynamically create new (or temporary) tables inside a regular permanent database using ABL code similar to what we have today for temp-tables, rather than explicitly manipulating the schema tables? Then it would just be a matter of design choice whether any particular table was permanent or temporary, and whether it was stored locally or in some particular permanent database.
However, I guess any such improvements would take a couple of years to get delivered, even if accepted.
Adding built-in functionality for SQLite would be even further off, I guess.
(But trying to build on the suggestions for implementing some interface to SQLite in custom code would seem attainable.)
But I don't see that you have told us any reason why you don't just set up an extra local OpenEdge database on each server, to use instead of your temp-tables? (As long as it is intended for use on servers as you described it, and not on clients.) You could still have your main database on another server. This seems to be the solution that would be recommended within the current set of tooling available. Sometimes you have to choose the tools that are actually already available and working well, rather than trying to "fight the system" to force it into working the way you (or I) would like it to work in an ideal world. (My experience is that fighting the system is rarely worth it, except for those occasions when you find some loophole that actually works well.)
Also, I would not expect Progress to take any interest in these suggestions without being given some reason why the current facilities are insufficient…
>> I don't see that you have told us any reason why you don't just set up an extra local OpenEdge database on each server, to use instead of your temp-tables?
Why not more OE databases? That involves a lot of overhead: infrastructure, persistent schema, security considerations, licensing, etc. SQLite is free of all these considerations.
Moreover, with SQLite-based temp tables you would get some additional benefits that I had mentioned:
Getting large amounts of OpenEdge data to move from one tier to another can be expensive. Depending on how the ABL is written, this can be more expensive for "client/server" than when using the SQL-92 driver. But after that initial expense is paid, SQLite-based temp tables could "preserve" the investment and retain the data for later use. The data would be portable, flexible, and far less "transient" than normal TT data, which is immediately lost when the client session ends.
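A small Python sketch of that "preserved investment" idea (file path, table, and figures are all made up): one process pays the extraction cost and persists the rows; a later process, or an entirely different tool, reopens the same file after the first session has ended.

```python
import os
import sqlite3
import tempfile

# Hypothetical cache file; in practice this would live wherever the app decides.
db_path = os.path.join(tempfile.gettempdir(), "tt_cache.db")

# "Session 1": pay the expensive cross-tier fetch once, persist the rows.
with sqlite3.connect(db_path) as conn:
    conn.execute("DROP TABLE IF EXISTS ttCustomer")
    conn.execute("CREATE TABLE ttCustomer (CustNum INTEGER, Balance REAL)")
    conn.executemany("INSERT INTO ttCustomer VALUES (?, ?)",
                     [(1, 903.64), (2, 0.0), (3, 1199.95)])

# "Session 2": the data outlives the session that fetched it.
with sqlite3.connect(db_path) as conn:
    total = conn.execute("SELECT SUM(Balance) FROM ttCustomer").fetchone()[0]
print(total)  # the combined balance survives the first "session"
```

Contrast this with ordinary temp-tables, whose DBI backing file is gone the moment the client session exits.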