I am looking for references to bugs that arise when using "client-server" connections to an OE database from ABL.
I'm pretty well aware that client-server connections are somewhat slower than "shared memory" connections (as anyone would reasonably expect).
I'm also very familiar with the configuration changes (startup parameters) that are necessary to support client-server connections and to tune their performance (-Mm and friends).
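For context, a typical broker startup for client-server access looks something like this (the database name, host, and all parameter values below are illustrative only, not a recommendation):

```
proserve mydb -S 20931 -H dbhost -Mm 8192 -Mn 10 -Ma 5 -n 200
```

Here -S and -H enable remote connections, -Mm sets the client-server message buffer size, and -Mn/-Ma control the number of remote servers and clients per server.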
I'm less concerned about performance and configuration than I am about bugs. In short, I'd like to know if anyone has ever been exposed to bugs which are *unique* to client-server (i.e. bugs that do not apply to ABL code running on a shared-memory connection). I've encountered some weird issues related to online schema changes, but I'm hoping the bugs don't extend further than that. It would be especially helpful to have KB references to client-server bugs, if any exist.
Here is one KB about a bug related to online schema changes.
The changes seem to break client-server applications but not shared-memory applications.
I'm hopeful there aren't any more where this came from. We have historically deployed our applications on the same server that hosts the DBMS and allowed the apps to connect via shared memory. But this architecture has a lot of limitations, and we are hoping to start scaling out the application logic so that it runs in another tier (e.g. from a remote PASOE instance, or from several PASOE instances behind a shared NLB name).
Thanks in advance.
I think that is an open-ended search. You will find bugs in both _mprosrv and _progres.
I suggest you define the best architecture for your application based on the resources that each architecture delivers. Bugs happen and are corrected all the time.
I found another potential bug that is specific to "client-server" programming in ABL. The following KB indicates that some FIND/GET operations behave differently and will potentially return different records for "shared-memory" vs "client-server" connections:
FIND DIFFERENCES WHEN CONNECTED REMOTELY VS LOCALLY:
Of course the KB indicates that the difference is not a bug per se. But it will pose a problem for software developers who are migrating from a "shared memory" to a "client-server" architecture.
On a discouraging note, the KB admits that differences in behavior between "shared memory" and "client-server" ABL may remain "intentionally undocumented". In other words, Progress doesn't want to explicitly identify how a "client-server" program might behave differently than a "shared memory" one. This admission could make it all the more challenging to compile a list of bugs that are specific to "client-server".
There was one surprising "client-server" issue I had a significant struggle with. It turned out that a FOR EACH query - even a simple one using a single table - doesn't always get fully resolved on the server side.
If you have multiple conditions in a WHERE clause, apparently only some of them will be resolved on the server. The degree to which the server resolves the WHERE clause is based on the index that is used. If the fields are in the index, then the conditions are resolved on the server. Afterwards the matching records are all sent over the network to the client where the remaining (non-indexed) part of the WHERE clause is evaluated. This can generate a ton of network activity for records that are basically going to be discarded.
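As a sketch of the behavior described above (the table, field, and index names here are hypothetical), a query like this splits its WHERE clause between the server and the client:

```
/* Hypothetical table "Customer" with an index on CustNum only.       */
/* The CustNum range can be bracketed server-side via the index.      */
/* The Comments test is not indexed, so every record in the indexed   */
/* bracket is shipped over the network to the client, which then      */
/* discards the ones that fail the non-indexed condition.             */
FOR EACH Customer NO-LOCK
    WHERE Customer.CustNum >= 1000
      AND Customer.CustNum <= 2000            /* indexed: server-side     */
      AND Customer.Comments MATCHES "*vip*"   /* not indexed: client-side */
    :
    DISPLAY Customer.CustNum.
END.
```

If only a small fraction of the bracketed records match the MATCHES test, most of the network traffic is wasted on records the client throws away.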
Does anyone have a reference to where this unfortunate "client/server" behavior is documented? It was one of the most unexpected behaviors that I experienced when using "client-server" ABL for the first time. And it causes a pretty severe loss in performance unless a ton of extraneous indexes are added.
Brian, thanks for the tip. I was worried that the article, even if I found it, would not be relevant to remote client/server connectivity... But it appears to be fairly relevant. Here it is:
Here is an interesting part related to resolving WHERE conditions.
A version 7 server is capable of doing most selections, and sends only those records which fully satisfy the query to the client. It is important to note that there are some selection operations that the server cannot do, either because they require access to program variables in the client, or because they are not implemented on the server (the most important such function is CAN-FIND which is not yet implemented on the server). In such a case, the server sends the records to the client along with an indication that it cannot perform the selection, and the client must do it.
My understanding is that the only "WHERE" conditions that a server is able to resolve entirely on the server-side are conditions that use *indexed* fields. However the statement above does not explicitly say that. It leads us to believe that the server may do better than that.
I was hoping that I could find a better explanation than this of which types of WHERE conditions would be resolved on the server, during a FOR EACH scan of a single table.
A consultant of ours has told me that "anything not resolved by index (or indexes as query can use multiple indexes) will be sent to client and resolved by client.”
That is a fairly straightforward statement, and it aligns with many of my experiences. But I can't find a KB article for it, and it seems like it should be in the KB (article 000012195 would have been a good place for it).
Maybe the KB articles are deliberately avoiding specific statements about client-server query resolution. Perhaps they avoid those statements in cases where the programmers hope to make improvements in a future release of OpenEdge.
I've never spent time analysing this sort of thing, so I don't know any answers, but I would start testing this using the QryInfo log entry type. Whack it up to level 3 so all queries are logged. It will tell you how many records are returned from the server vs how many were actually useful to the query. That should enable you to build and analyse some test cases.
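To add to that: QryInfo can also be switched on from inside a session via the LOG-MANAGER handle. A minimal sketch (the log file path and the "Customer" table are only examples):

```
/* Enable QryInfo logging at level 3 for the current client session.  */
/* The log file name below is just an example path.                   */
LOG-MANAGER:LOGFILE-NAME    = "c:\temp\qryinfo.log".
LOG-MANAGER:LOGGING-LEVEL   = 3.
LOG-MANAGER:LOG-ENTRY-TYPES = "QryInfo:3".

/* Run the query you want to analyse; the log then reports, per       */
/* query, how many records were read versus how many were selected.   */
FOR EACH Customer NO-LOCK WHERE Customer.Comments MATCHES "*vip*":
END.

LOG-MANAGER:CLOSE-LOG().
```

Comparing the "records read" and "records selected" counts for the same query over shared-memory and client-server connections should make the server-side vs client-side split visible.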