OpenEdge Reference Architecture & Web - Forum - OpenEdge Architecture - Progress Community

  • That is a bit off topic, isn't it? In my message it said "and started with autoedge|the factory (the OO version of AutoEdge)." So I started with an existing framework.

  • First of all, we have an ASP.NET MVC3 web solution and I'm not satisfied with the way "our web boys" are dealing with this issue. Their solution is lacking in both performance and scalability.

    The main reason behind the question is that I want to relieve the client of the burden of the dataset obscurities (before- and after-images, storing entire datasets). When I say "client" I mean both the browser and the web server.

    If I solve this issue server side (on the OE AppServer, that is), I make it much easier for other clients (iPads, Metro interfaces, HTML5 with JavaScript, etc.) to connect as well.

    IMHO this means that the client should be allowed to send back a minimum of information (the old and new values of the changed fields and an identifier). Something like this (I'm trying to sketch the general idea):

    {
      "person" : {
        "firstname" : "Bronco",
        "lastname" : "Oostermeyer",
        "id" : "0x00000021de"
      },
      "person-old" : {
        "firstname" : "bronco",
        "lastname" : "oostermeyer",
        "id" : "0x00000021de"
      }
    }

    On the server side, person could have as many as 50 fields (or whatever), but the server deals with that.

    I agree with Stefan D (nice, two Stefans) that the JSON somehow needs to be stored in the web page, but that doesn't solve my server-side issue.
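    A small client-side sketch of how such a minimal payload could be built, assuming the page keeps the originally fetched record around (the function and field names are illustrative, not part of any framework):

```javascript
// Build the "new values + old values + id" payload sketched above from the
// record as originally fetched and the edited copy. Only changed fields are
// included; the identifier is always carried in both halves.
function buildChangePayload(original, edited, idField) {
  const newValues = { [idField]: original[idField] };
  const oldValues = { [idField]: original[idField] };
  for (const field of Object.keys(edited)) {
    if (field !== idField && edited[field] !== original[field]) {
      newValues[field] = edited[field];
      oldValues[field] = original[field];
    }
  }
  return { "person": newValues, "person-old": oldValues };
}
```

    The server then only has to re-read the row by its identifier and apply the changed fields, regardless of how many fields the record actually has.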

  • bfvo wrote:

    First of all, we have an ASP.NET MVC3 web solution and I'm not satisfied with the way "our web boys" are dealing with this issue. Their solution is lacking in both performance and scalability.

    The main reason behind the question is that I want to relieve the client of the burden of the dataset obscurities (before- and after-images, storing entire datasets). When I say "client" I mean both the browser and the web server.

    If I solve this issue server side (on the OE AppServer, that is), I make it much easier for other clients (iPads, Metro interfaces, HTML5 with JavaScript, etc.) to connect as well.

    IMHO this means that the client should be allowed to send back a minimum of information (the old and new values of the changed fields and an identifier). Something like this (I'm trying to sketch the general idea):

    {
      "person" : {
        "firstname" : "Bronco",
        "lastname" : "Oostermeyer",
        "id" : "0x00000021de"
      },
      "person-old" : {
        "firstname" : "bronco",
        "lastname" : "oostermeyer",
        "id" : "0x00000021de"
      }
    }


    On the server side, person could have as many as 50 fields (or whatever), but the server deals with that.


    I agree with Stefan D (nice, two Stefans) that the JSON somehow needs to be stored in the web page, but that doesn't solve my server-side issue.


    And now there are also two Peters on the thread ... we need more Broncos in the ABL world.

    Something like HTML5 local storage may help with JSON storage.

    However, I suspect that you will need to persist each request's data in a shared (between appserver agents) location so that it can always be retrieved. I'm not sure what the id element above refers to (I'm assuming it's a key on person), but you'll need a data request ID so that you can identify each request's data. Storage and retrieval will be a challenge, since what you're building will be a persistent cache, and so needs to have fast in/out mechanisms.

    You could serialise the ProDataSet to JSON/XML and store it in an OE DB keyed on that request ID. You could create a per-agent version of the ProDataSet (but that would introduce consistency issues across agents, as well as potential resource issues). You could create a (real) DB that maps to the ProDataSet schema and use that as a cache.

    This question came up a couple of months ago with another customer who decided to use JSON for the caching mechanism.

    OERA doesn't have an answer since the design centre is that the Business Entities are completely stateless, which implies that the client has everything it needs for a request (including before-images).

    -- peter
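    A minimal sketch of the request-keyed cache idea, assuming the ProDataSet has already been serialised to a JSON string and that evicting stale context after a time-to-live is acceptable (all names are illustrative; in practice the store would be shared between AppServer agents, e.g. a DB table, rather than in-process):

```javascript
// Illustrative request-keyed context cache: requestId -> serialised dataset.
// A Map stands in for whatever shared store actually backs the cache.
class ContextCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // requestId -> { json, expires }
  }
  put(requestId, json) {
    this.entries.set(requestId, { json, expires: Date.now() + this.ttlMs });
  }
  get(requestId) {
    const entry = this.entries.get(requestId);
    if (!entry) return null;
    if (Date.now() > entry.expires) {   // evict stale context
      this.entries.delete(requestId);
      return null;
    }
    return entry.json;
  }
}
```

    The fast in/out requirement Peter mentions is exactly why the TTL and eviction policy matter: a persistent cache without them grows with every read request.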

  • In an ideal world (with a state-free server) the client passes all the old values together with the changes, as well as a flag that tells whether the row is to be deleted, created or modified; this applies whether the data is passed as JSON, XML or something else.

    As you point out, you cannot always control what you get from a non-Progress client, and as Peter Judge points out, the best way to handle this is to implement server-side context management.
    But assuming you already have an OERA implementation that handles stateless reads with acceptable performance, it should be possible to add support for less-than-ideal clients without going the full context-management route.

    The ProDataSet does support adding new records and marking them as modified or new, in order to allow receiving data and using SAVE-ROW-CHANGES to save it without having to read the data before you apply changes. This can be used in the case where you receive all the fields but no before-image data.

    But if the client only sends a subset of the actual fields, you will get into trouble when the Business Entity is implemented with temp-tables and a ProDataSet, since the other fields will end up with their defined initial values, which in turn may mess up business logic and/or overwrite the actual values in the database.

    A practical way to support a client that only passes changes is to ensure that the records being updated are re-read (and FILL events are fired) from the data access layer before you apply the changes. You can add an import method to your business logic layer that deals with this. Again, assuming you already have an OERA implementation that handles stateless reads, you should in theory be able to reuse existing code both to initiate the ProDataSet and to read the data. It may seem like overkill to read data for the sole purpose of ensuring that business logic and data access do not think you changed something that you did not change, but it is typically easier to implement this than the (more correct and complete) alternatives that require server-side context. The overhead of the read should be tolerable, since a stateless server should be able to handle fast reposition requests in any case (and the corresponding database data may remain in the (-B) buffers when you do the actual commit).

    Note that you also need a strategy for optimistic locking if the client does not pass before-image data with the update. Time stamps or some kind of CRC may be needed if you don't want to allow saving the data when some of the fields that are NOT being passed have changed since you read them.
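    A minimal sketch of that CRC idea, assuming the server hands the client a checksum over the fields the client will not send back, and verifies it against a re-read of the row before committing (all names are illustrative, and the hash is a simple stand-in, not a real CRC):

```javascript
// Hash the fields the client does NOT send; the client echoes this value
// back with its update, and a mismatch after re-reading the row means one
// of those unsent fields changed in the meantime.
function rowChecksum(row, fields) {
  const flat = fields.map(f => `${f}=${row[f]}`).join("|");
  let hash = 0;
  for (let i = 0; i < flat.length; i++) {
    hash = (hash * 31 + flat.charCodeAt(i)) >>> 0;
  }
  return hash;
}

// True when none of the unsent fields changed since the client read the row.
function mayCommit(clientChecksum, currentRow, unsentFields) {
  return clientChecksum === rowChecksum(currentRow, unsentFields);
}
```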
  • Just a small note on how to implement the import method: if you use the ProDataSet's native READ-XML or READ-JSON, you may need to clone the ProDataSet, or at least some of the temp-tables, so that you can keep the new data separate from the existing data and use the new data's keys to populate the existing data. If you are on version 11, this logic may actually be easier to implement with the native JSON support (I do not know which is faster).

  • Seems like you're reinventing REST. You shouldn't need two separate JSON objects to represent modifications to one person. What you should do is open up your Person as a RESTful web resource. Mutating the person should be handled using RESTful HTTP actions (for changing some values, as in your example, it would be PUT).

    If you can accomplish that, it's super easy to handle the mapping to a browser's JavaScript model with libraries like backbone.js, which keep the client and server in sync painlessly.
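    To illustrate the mapping only (none of this comes from OpenEdge or backbone.js; the store and handler are hypothetical), the RESTful actions on a person resource boil down to something like:

```javascript
// Illustrative mapping of RESTful HTTP actions onto a "person" resource.
// An in-memory Map stands in for whatever backs the resource; a real server
// would route GET/PUT/DELETE for /people/:id to these branches.
const people = new Map();

function handle(method, id, body) {
  switch (method) {
    case "GET":                          // read the current representation
      return people.get(id) ?? null;
    case "PUT":                          // merge the changed values
      people.set(id, { ...people.get(id), ...body });
      return people.get(id);
    case "DELETE":                       // remove the resource
      return people.delete(id);
    default:
      throw new Error(`unsupported method: ${method}`);
  }
}
```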

  • Although REST is definitely interesting, in its standard form I don't think it provides an answer to the old/new value issue. This is obviously needed because otherwise I can't get my current (OERA) concurrency implementation to work.

    AFAIK, with REST you still need to think about how to structure the data (JSON/XML or whatever) that you want to communicate to the server.

  • Well, the idea behind all this was to relieve the client of the burden of storing the original dataset. When I started to investigate, I hoped I could come up with some smart way to re-read my original dataset (or at least the record for which I received updates) on the server, so that I could keep using my current (OERA-style) architecture.

    Setting up context management to store each and every byte that goes out of the server doesn't seem very attractive because of the performance penalty it would incur. One transaction for every fetch of a dataset, physically storing all the data when maybe one record is needed, is going to hurt my server's performance (badly, in my estimate).

    Although this is something I wanted to avoid, I think I will have to come up with some patterns to make the client responsible for holding the original dataset. The overhead of sending the entire dataset and putting it in some hidden HTML form field seems relatively minor. It's just a pity that I have to duplicate this solution for all the (stateless) clients. Since I control both server and client, that shouldn't be too much of an issue.

    BTW, just one wild idea for scalable and performant context management: set up Cassandra to store the dataset :-)

    In all seriousness, if Progress Software is serious about its OpenEdge cloud proposition, these types of patterns deserve an out-of-the-box solution. After all, OpenEdge is about simplifying the building of the world's best business (cloud) applications.

  • One more thought: obviously a REST adapter would be beneficial. This REST adapter could receive both the OLD dataset and the NEW values and make a decent dataset out of them before sending it to the AppServer. This way you at least relieve your client solution from implementing communication with the AppServer. No knowledge of the dataset particulars is needed by the client solution either. HTTP/REST is becoming quite the standard, I suppose.

    Isn't Progress building a REST adapter? If so, is Progress willing to share some details (implementation, ETA)?

  • bfvo wrote:

    Although REST is definitely interesting, in its standard form I don't think it provides an answer to the old/new value issue. This is obviously needed because otherwise I can't get my current (OERA) concurrency implementation to work.

    AFAIK, with REST you still need to think about how to structure the data (JSON/XML or whatever) that you want to communicate to the server.

    The "standard" way to handle this in REST implementations is to use a version number or time stamp on each row. This means you do not need to pass old values to the server, nor do you need any server context. If the version number passed from the client matches the version number on the server, then you know that the old values are the same as the current values. If they do not match, you can pass the new data back to the client with the error and deal with the refresh/merge (or whatever) on the client. Existing frameworks like Ext JS 4, and probably also backbone.js, have a model on the client that will still have the old values. Many of these frameworks are quite sophisticated and provide everything you need for UI binding to the model.

    The Progress database does not have a version number, so you will need to roll your own. (I'm sure you can also extend these frameworks to send the old values.)

    You will also need to think about how to structure the data. JSON can handle any structure you can define in a ProDataSet or temp-table (and more). You really want the client to have the same structure as your BE, so most of the thinking should already be done.

    You will also need to add a unique resource identifier to each row. This typically reflects your BE structure as well: ..orders/301/orderlines/6

    You can use the ProDataSet or temp-table JSON support, but these may not match the expected/standard formats, so you may need to spend some time making the client work with them. In version 11 you can use the ABL's JSON objects, which gives you a lot more flexibility. Version 11 also allows you to omit the "root" node of a ProDataSet for this reason. Given that it is common to have operations on a single record, you may want to improve your BE to have dedicated logic and methods for this, since ProDataSet and temp-table import and export, as well as buffers in general, do not distinguish between a single row and many rows.

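    A minimal sketch of that version-number check, with illustrative names (a real implementation would live in the Business Entity's save logic and bump the version inside the database transaction):

```javascript
// The client echoes the version it read; a mismatch means another update
// happened first, so the current row goes back to the client for a
// refresh/merge instead of being silently overwritten.
function saveRow(storedRow, incoming) {
  if (incoming.version !== storedRow.version) {
    return { ok: false, conflict: true, current: storedRow };
  }
  const saved = { ...storedRow, ...incoming.changes, version: storedRow.version + 1 };
  return { ok: true, row: saved };
}
```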

  • bfvo wrote:

    Well, the idea behind all this was to relieve the client of the burden of storing the original dataset. When I started to investigate, I hoped I could come up with some smart way to re-read my original dataset (or at least the record for which I received updates) on the server, so that I could keep using my current (OERA-style) architecture.

    Setting up context management to store each and every byte that goes out of the server doesn't seem very attractive because of the performance penalty it would incur. One transaction for every fetch of a dataset, physically storing all the data when maybe one record is needed, is going to hurt my server's performance (badly, in my estimate).

    Although this is something I wanted to avoid, I think I will have to come up with some patterns to make the client responsible for holding the original dataset. The overhead of sending the entire dataset and putting it in some hidden HTML form field seems relatively minor. It's just a pity that I have to duplicate this solution for all the (stateless) clients. Since I control both server and client, that shouldn't be too much of an issue.

    BTW, just one wild idea for scalable and performant context management: set up Cassandra to store the dataset :-)

    As already mentioned, there are existing JavaScript REST-based frameworks that use a model on the client and can hold the equivalent of your dataset. But these will typically only send the changes, and this will require "some smart way" to re-read the original dataset and the records that you receive for update.

    The point I tried to make was that you need to establish the ProDataSet quickly for stateless requests in any case. There is no reason why it should be very time-consuming to define or create an empty ProDataSet. It should also be entirely possible to extend your data sources to take a query that specifies a single record, for the case where you want to save one record fast. Reading a set of random multiple records is more work, but it should not be particularly difficult to extend an OERA implementation to pass a request temp-table, add it to the data-source query buffer and expression, and use the existing fill logic used for reads. I'm not sure what your performance requirements are; even if there is a large percentage overhead to save one record this way, it should still be well within the pain limit of any UI.

    In all seriousness, if Progress Software is serious about its OpenEdge cloud proposition, these types of patterns deserve an out-of-the-box solution. After all, OpenEdge is about simplifying the building of the world's best business (cloud) applications.

    Make sure you bring this up with your Progress contact/representative. There is certainly a lot we can provide on the JSON/server/adapter side that could benefit many, but the existing JavaScript frameworks seem to have most of what you ask for on the client, and it is fully possible to make this work in an existing OERA implementation that already performs well enough for stateless requests.

  • hdaniels wrote:

    Existing frameworks like Ext JS 4, and probably also backbone.js, have a model on the client that will still have the old values. Many of these frameworks are quite sophisticated and provide everything you need for UI binding to the model.

    backbone is actually quite simple, as it was built specifically for the task of client-server model binding (there is a little bit of extra code for view stuff, but not much). The current bleeding-edge version is only 1245 SLOC; if memory serves, the initial version was ~800 SLOC!

  • bfvo wrote:

    Well, the idea behind all this was to relieve the client of the burden of storing the original dataset. When I started to investigate, I hoped I could come up with some smart way to re-read my original dataset (or at least the record for which I received updates) on the server, so that I could keep using my current (OERA-style) architecture.

    Setting up context management to store each and every byte that goes out of the server doesn't seem very attractive because of the performance penalty it would incur. One transaction for every fetch of a dataset, physically storing all the data when maybe one record is needed, is going to hurt my server's performance (badly, in my estimate).

    Although this is something I wanted to avoid, I think I will have to come up with some patterns to make the client responsible for holding the original dataset. The overhead of sending the entire dataset and putting it in some hidden HTML form field seems relatively minor. It's just a pity that I have to duplicate this solution for all the (stateless) clients. Since I control both server and client, that shouldn't be too much of an issue.

    Well, I have no idea about this OERA stuff (from the diagram it looks like MVC plus useless "businessy" garbage tossed in, if you ask me), but again, if you are able to respond to HTTP actions on your web server (GET, POST, PUT, DELETE), then you should be able to write routes that fully enable a JavaScript framework to do all the work for you by accepting/returning JSON representations of your resources (unless I'm totally missing something here). If there is a lot of resource contention, where users are concurrently modifying resources and you are worried about consistency, then it may get slightly more complicated (see Harvard's answer).

    Your kind of use case is partly what made Ruby on Rails explode back in 2005 or so. I've got a Rails adapter for OpenEdge databases that's mostly done and would make this pretty much a dead-simple operation; I just need some time to finish it (it's hard to stay motivated when you have to work around OE-SQL deficiencies and JDBC driver bugs). I had considered a Kickstarter campaign, but I don't think there'd be much interest!

    BTW, just one wild idea for scalable and performant context management: set up Cassandra to store the dataset :-)

    Well, this use case (reducing the glue between the client and the server DB) is certainly a big part of why the NoSQL databases gained a lot of traction. If you can get away with it and the non-relational aspect won't be a hurdle, then I say go for it... MongoDB has drastically simplified an app I'm currently working on, where coming up with a hard schema in relational-DB land would have added unnecessary work (although I still think relational DBs make sense most of the time).

    There are even databases like Riak and CouchDB that have built-in REST adapters, so you could have your JavaScript client talking directly to your DB (feasible, but almost certainly unwise from a security standpoint).

    In all seriousness, if Progress Software is serious about its OpenEdge cloud proposition, these types of patterns deserve an out-of-the-box solution. After all, OpenEdge is about simplifying the building of the world's best business (cloud) applications.

    Well at some point you have to give up on expecting a biplane to behave like a fighter jet. lol

  • Thank you all for your contributions to this discussion. Everything combined makes for some interesting reading, I suppose. I first have to digest it all, but somehow I have the idea this thread will grow longer.

    Message was edited by: Bronco Oostermeyer (typos)