I've been playing with the use of SQLite as an embedded database. It is one of the most popular ones.
OE ABL uses local files in the session temp directory to store its temp-tables. I was just thinking how cool it would be if OE ABL had an option to keep its local TT files in SQLite instead of the proprietary format that it uses today.
...After filling the temp-tables with data from the OE database, we would have tons of powerful options that we don't have today. Today we can obviously interact with the TT records from the ABL language (as if they were OE database tables). But if the data was stored in a local SQLite database, then you would also be able to run SQL queries against it, or hand the database file off to a different application for follow-up analysis. SQLite performs very well, and a wide variety of other types of applications could use the results. (I.e. the temp data would no longer be locked up within the context of a single ABL session, and wouldn't require marshalling to get it out of the session.)
It's just a thought. I suspect Progress ABL is permanently married to its own format for temp-tables. It would be nice if other options were available. Even a simple alternative like a ProDataSet:WriteSqliteDatabase() option would be super valuable (similar to WriteXml, but sending the data to a SQLite database to allow for subsequent SQL queries).
Thoughts? Is it possible to move forward with ideas like this in the OE ecosystem?
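To make the idea concrete, here is a minimal sketch of the kind of follow-up analysis that becomes possible once temp-table data lands in SQLite. Python's `sqlite3` stands in for "a different application", since ABL has no SQLite binding today; the `ttOrder` table and its fields are made up for illustration.

```python
import sqlite3

# Hypothetical sketch: if temp-table rows could land in a SQLite database,
# any SQLite-capable tool could run ad-hoc SQL against them afterwards.
conn = sqlite3.connect(":memory:")  # a real scenario would open the TT file
conn.execute("CREATE TABLE ttOrder (OrderNum INTEGER, CustName TEXT, Total REAL)")
conn.executemany(
    "INSERT INTO ttOrder VALUES (?, ?, ?)",
    [(1, "Acme", 100.0), (2, "Acme", 50.0), (3, "Globex", 75.0)],
)
# The kind of aggregate query that ABL temp-tables can't offer to outside tools:
rows = conn.execute(
    "SELECT CustName, SUM(Total) FROM ttOrder GROUP BY CustName ORDER BY CustName"
).fetchall()
print(rows)  # [('Acme', 150.0), ('Globex', 75.0)]
```

The point is not the query itself but that no ABL session is needed to run it once the file exists.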
So how do you transfer data from OE to SQLite? Probably not directly within ABL, right (probably not even within the CLR bridge). It sounds like you probably make remote appserver calls or else you use SQL92 to get the OE data and push it into SQLite?
I'd like it if ABL could *directly* interact with a local SQLite database file, possibly even for hosting its own DS/TT.
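Lacking a direct binding, one workable transfer path today is to serialize the ProDataSet (e.g. with WriteJson) and have a small external loader push the result into SQLite. A sketch, assuming a JSON shape like what WriteJson emits; the dataset and table names are invented:

```python
import json
import sqlite3

# Hypothetical JSON as ProDataSet:WriteJson() might produce it (shape assumed):
payload = json.loads(
    '{"dsOrder": {"ttOrder": ['
    '{"OrderNum": 1, "CustName": "Acme"},'
    '{"OrderNum": 2, "CustName": "Globex"}]}}'
)
rows = payload["dsOrder"]["ttOrder"]

# Load the temp-table rows into a SQLite table with matching columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ttOrder (OrderNum INTEGER, CustName TEXT)")
conn.executemany("INSERT INTO ttOrder VALUES (:OrderNum, :CustName)", rows)

count = conn.execute("SELECT COUNT(*) FROM ttOrder").fetchone()[0]
print(count)  # 2
```

A hypothetical WriteSqliteDatabase() would simply collapse these two hops into one.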
Having an option to at least write to a SQLite DB would be great. A ProDataSet:WriteSqliteDatabase() option could also ease integration with the other technologies most of us use for mobile apps and add-on products that don't read data directly from the OE DB. We currently have SQLite in use with multiple mobile applications: the data is transferred to mobile devices in JSON format during normal online use, and is also stored in that format in a SQLite DB for offline usage. Using either OE SQL writes or SQLite.dll could open up some possibilities in app interaction. I'll be contacting you, Roger, for an example, and give a thumbs up for dbeavon's idea.
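The offline pattern described above can be sketched very simply (names and keys are assumptions, not the actual mobile schema): cache the same JSON payloads the device already receives into a SQLite table keyed by entity, and read them back when there is no network.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # on a device this would be a local file
conn.execute("CREATE TABLE offline_cache (entity TEXT PRIMARY KEY, payload TEXT)")

# Online: store the JSON exactly as it was delivered to the device.
doc = {"CustNum": 42, "Name": "Acme"}
conn.execute(
    "INSERT OR REPLACE INTO offline_cache VALUES (?, ?)",
    ("customer/42", json.dumps(doc)),
)

# Offline: serve the cached copy without any network call.
cached = conn.execute(
    "SELECT payload FROM offline_cache WHERE entity = ?", ("customer/42",)
).fetchone()[0]
print(json.loads(cached)["Name"])  # Acme
```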
It would be nice to have this uploaded on the oehive for example. Many of us would be interested I guess ;)
or github :-)
There are a few nice things that could be gained by interacting with SQLite and storing data from ABL. One is the ability to post-analyze the data using SQL queries. This is something that is pretty far outside the scope of anything we can do with ABL temp-tables. But another thing that could be gained is simply the ability to store a local, *persistent* copy of our data (e.g. a copy that lives on a PASOE server). I wonder if "persistence" is something that could be added to the implementation of ABL temp-tables and prodatasets?
Before continuing to talk about "persistent" temp-tables, I should point out how costly it can be to read and process large amounts of OpenEdge data over client-server. For example, it takes a long time to read all of our open sales orders using an ABL business layer, and then combine them with related information from master tables (products, customers, salespersons, etc). The bulk of the cost can be attributed to the time-consuming network chatter that takes place between the ABL session and the remote database.
But what if the resulting temp-tables could be made *persistent* (shared) and copied between multiple ABL sessions within an msagent? That persistent data could be stored for whatever span of time is appropriate. These types of temp-tables could then be used as a type of cache, and would substantially improve performance for any clients of PASOE. Certain master tables (e.g. products, customers) might never need to be refreshed for an entire day, if the information was persistent and the source data wasn't being actively modified.
In order to make temp-tables persistent, the best approach today is to serialize to disk (perhaps by using EXPORT statements or WriteXml). But this becomes expensive for a large amount of data, because it all needs to be parsed, re-indexed, and re-loaded into the memory of every other ABL session that might need the same data.
Instead of jumping through these hoops to perform serialization and deserialization, it would be more expedient if an ABL session could simply *attach* (bind) itself to some pre-existing temp-table files on the local SSD. This would give us a way to retrieve the costly client-server data only once, and then re-use it numerous times (by independent sessions in the msagent).
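The "attach instead of deserialize" idea is exactly what a SQLite file gives you for free. A sketch (session and table names invented): one process pays the cost of building the data once; a second process just opens the same file, with no parsing or re-indexing step.

```python
import os
import sqlite3
import tempfile

# A shared file on the local SSD, standing in for a persisted temp-table.
path = os.path.join(tempfile.mkdtemp(), "tt_cache.db")

# "Session A": pay the expensive client-server cost once, persist the result.
a = sqlite3.connect(path)
a.execute("CREATE TABLE ttProduct (Sku TEXT, Price REAL)")
a.execute("INSERT INTO ttProduct VALUES ('A-1', 9.99)")
a.commit()
a.close()

# "Session B": no EXPORT/IMPORT, no XML parsing -- just attach and query.
b = sqlite3.connect(path)
price = b.execute("SELECT Price FROM ttProduct WHERE Sku = 'A-1'").fetchone()[0]
print(price)  # 9.99
```

SQLite's own locking also lets multiple readers share the file concurrently, which is the behavior the msagent scenario would need.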
I see that there are startup options to preserve the temp files used by ABL sessions ...
... but I've never heard if there is a way to copy or reattach (bind) these files as static temp-tables in a new session.
Has anyone tried to accomplish something like this in ABL? This type of a feature would provide a subset of the benefits of an embedded database (like SQLite).
(PS. I think another approach for keeping persistent temp-table data might be to use "state-aware" sessions that hold onto their own temp data for long periods of time. Those would then serve up the data to other sessions as needed. The problem in this case is still the cost that is involved in marshalling data from the state-aware session to another session where the temp data would be reused. The marshalling would be costly. It would be much better if data would simply persist to local disk and anyone could attach to it instantly!)
There are a number of different issues floating around in here which I think are getting munged together at the expense of clarity.
You talk about the expense of sending a lot of data to the client. The usual solution for that is to do the assembly on the AppServer and then only send a limited amount of data over the wire.
Maybe I am old-fashioned, but I get a bit nervous around ideas like semi-persistence, although there are architectures where one can do something like that safely. E.g., if one uses a distributed architecture with a tool like Sonic (say, Order Processing on one or more machines and Inventory on a different machine), then it can be useful to cache some inventory data on the OP machine to avoid excessive requests across the network. But to do that, one has to be careful in the design to commit anything that changes inventory to the Inventory machine and, if possible, to provide Inventory with a signal that it can send to OP when some presumably static information changes.
I can't say that the idea of accessing data with SQL is an attractive alternative to ABL.
Reconnecting to a prior session's TTs sounds like an EXTREMELY risky idea. How do you have any idea how current the data is?
Thomas, it sounds like you aren't quite sold yet on the idea of embedded databases - nor on keeping any persistent data outside of an OpenEdge database. But TT data is already being stored (cached locally) in a way that is similar to an embedded database... which is why I was trying to make some comparisons and suggest some improvements.
>> The usual solution for that is to do the assembly on the AppServer and then only send a limited amount of data over the wire.
No... because the appserver *IS* over the wire. It is becoming more common to run an appserver on a machine other than the one hosting the database. PASOE is *NOT* considered an appendage of the database, or at least not as much as the "classic" appserver used to be.
It is more scalable and fault-tolerant to create a *separate* tier where you would run PASOE servers, preferably several of them behind a load-balancer. You might even want to run them in docker containers. This is especially feasible if your PASOE servers and database are running within the same LAN, connected by 10Gbit ethernet. Remember that a database server is there to serve a *different* purpose than an appserver. The first one serves raw data and the second one incorporates business rules as well. But we may want to create a new thread for a long discussion about software architecture.
>> send a limited amount of data over the wire
This is the goal for any database server. The OE database in version 12 is making improvements in this area by supporting "server-side" joins which promise to send less data over the wire (to appserver) than what needs to be sent today. Less data will be sent to appserver, and it will be up to the ABL business rules (in appserver) to determine what kind of data is subsequently composed and sent along to client applications.
>> Reconnecting to a prior session's TTs sounds like an EXTREMELY risky idea. How do you have any idea how current the data is?
How current is TT data supposed to be? 10 seconds? 10 milliseconds? The exact same risk applies to all temp-table data after it has been created, and to all types of caching as well. It is up to the implementation of the cache to expire whenever the source data changes, or after a reasonable timeout. Besides, the risk is comparable to some of the other things we do in ABL on a regular basis. For example, we make use of *tons* of data that is retrieved from the database using "NO-LOCK". That NO-LOCK data is very similar to a dirty cache, and it can be just as "risky". (If you work with ABL long enough, you will have your own stories about the consequences of retrieving tons of data with NO-LOCK.)
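The "reasonable timeout" rule mentioned above is easy to make precise. A sketch of a time-to-live check for a persisted temp-table copy; the 24-hour TTL for slow-moving master data is an assumption, not a recommendation:

```python
import time

# Assumed TTL for slow-moving master data (products, customers).
MASTER_DATA_TTL = 24 * 60 * 60  # seconds

def is_fresh(loaded_at: float, ttl: float = MASTER_DATA_TTL, now=None) -> bool:
    """True if the cached copy is still inside its time-to-live."""
    now = time.time() if now is None else now
    return (now - loaded_at) < ttl

# A copy loaded an hour ago is still usable; a week-old one must be refreshed.
print(is_fresh(1_000_000, now=1_000_000 + 3600))       # True
print(is_fresh(1_000_000, now=1_000_000 + 7 * 86400))  # False
```

A session attaching to a persisted file would run a check like this first, and fall back to the database when the copy has aged out.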
I'm not suggesting that everyone should keep their TT data around all week or all day long. But it would be nice to have features that make temp-tables more flexible. As I mentioned before, the concept of temp-tables is not unique to ABL. Temp-tables behave very similarly to an embedded database (i.e. like SQLite). Note that SQLite may be the most widely deployed database in the world*** and it would be extremely helpful to interoperate with something like that from ABL.
*** see sqlite.org/mostdeployed.html
> On Apr 28, 2019, at 3:48 PM, dbeavon wrote:
> I wonder if "persistence" is something that could be added to the implementation of ABL temp-tables and prodatasets in ABL?
just use a dedicated local OpenEdge database for that.
> It would be much better if data would simply persist to local disk and anyone could attach to it instantly!
yes. and you can do this quite easily with a separate OpenEdge database used for that purpose.
Regarding the age of data in a temp-table, I see 3 main scenarios that need to be accounted for:
1) The data I am editing in my temp-table is out of sync from what is stored in the database.
2) The data I am editing is referencing other temp-table data that is out of sync with the database (such as client-side validation data).
3) The user is making business decisions based on old data.
Scenario 1 is directly accounted for when using PASOE with business-entity-based classes with before imaging enabled. If the before image of your changes doesn't match the data in the database for the changed records, they will be rejected. Using data services with before imaging capabilities provides a lot of peace of mind regarding database integrity.
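The before-image rejection described for Scenario 1 boils down to an optimistic-concurrency check. A sketch of the mechanism in plain SQL terms, using Python's `sqlite3` for illustration (the Customer table is invented; this is not how PASOE implements it internally): the UPDATE only succeeds if the row still matches the before image.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (CustNum INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("INSERT INTO Customer VALUES (42, 'Acme')")

# The client's before image of the record, captured when it read the data.
before_image = {"CustNum": 42, "Name": "Acme"}

# Apply the change only if the stored row still matches the before image.
cur = conn.execute(
    "UPDATE Customer SET Name = ? WHERE CustNum = ? AND Name = ?",
    ("Acme Corp", before_image["CustNum"], before_image["Name"]),
)
print(cur.rowcount)  # 1 means accepted; 0 would mean the row changed underneath us
```

A rowcount of 0 is the moment the server rejects the change, which is exactly the "peace of mind" being described.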
Scenario 2 encourages us to ensure that server-side validation is used for all data services/endpoints to protect the integrity of the database (don't rely on client-side validation).
Scenario 3 must be considered in the context of the application. Some things just aren't very critical. The more critical the decision to be made based on the data, the more important it is that the user knows exactly how old the data they are looking at is. Perhaps when the application is offline, a prominent timestamp is displayed showing the last time it was online.
Offline storage of data is really valuable for certain applications, but there are many approaches you may take depending on the application. Do you have a service/script running in the background of the application that's always checking for fresh data for the whole application? Do you always attempt a network call to the server and fall back on local data? How does the application know when it's back online to synchronize the offline changes? How do you handle conflicts when synchronizing?
We intend to work with offline/local storage in our web applications at some point, but I don't think we will need SQLite to access it. We are implementing an Angular + Kendo UI app with JSDO data services to a PASOE backend (based on business entity). In offline mode, I think we would instantiate the JSDO against the local files instead of the remote services, but the application itself would not need additional logic besides instantiating against local vs. remote data. That's how I anticipate it going anyway. I'm sure we will have some challenges, though.