Management of domain objects - Forum - OpenEdge Development - Progress Community

Management of domain objects

  • Much better, though I don't think I see it quite the same way. For complex domain objects, it seems to me that the purpose of caching is also performance, but perhaps we should ask about two different levels of caching. One is during the usage lifetime of the object. The other is some longer period, which might be minutes, hours, or days, but at least it is during a period when the object is not currently in use in that session, but was previously used.

    You seem to think that it might be important to cache something like an order during its usage lifetime in case the same object were requested by another part of the session. Given that we don't have a multi-threaded session, this seems pretty remote, but I agree that it is one thing that I would like to do. Indeed, one of the things I would like if we could get multi-threaded sessions or closely cooperating sessions is that the source for a particular type of object would not only cache the object while it was in use, but would also have a distributed event system so that it could notify all users of an object when a change was made, so that they could get a fresh copy. With single-threaded sessions I'm not sure this has practical value, but I agree that it is a nice idea. In an optimistic locking discipline, it is possible for different sessions to get different versions of an "object", and this is resolved or rejected at check-in. So, ensuring that all copies are identical is not considered necessary in this discipline in order to ensure consistency in the stored data. The discipline works because the conflict doesn't happen very often.

    Longer-term caching is clearly just a performance issue. This is based on the notion that accessing something once at time X means that it is likely to be wanted again sometime "soon", e.g., an order generates a shipping request which is routed to the warehouse and, at least in some contexts, this means that the order will soon be updated by the results of the shipping request. Caching the object would keep the Mapper from having to rebuild it, which might be moderately expensive if lots of tables were involved.

    The second type of caching, I think I agree, is more a matter of caching in the business logic, although it seems likely that one would back that cache with a cache in the data access layer. This kind of cache is one where it would be particularly nice to have the ability to publish a CollectionModified event. One of the reasons for the cache in the data access layer, btw, is if any of the requestors have edit ability, since then one would want to be able to use Tracking-Changes to identify what was different. Note too that the authoritative source for such a collection may be in a different service than where it is being used, so one needs to cache it on both sides of the bus.
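As a rough sketch of the optimistic check-in discipline described above (all class and field names here are hypothetical illustrations, not any particular OpenEdge API): each session works on its own copy of the object, and a version compare at check-in either accepts the change or rejects the stale copy, which the caller then resolves by re-fetching.

```python
class ConflictError(Exception):
    """Raised when a stale copy is checked in."""

class Repository:
    """Toy store: each record carries a version, bumped on every check-in."""
    def __init__(self):
        self._rows = {}  # id -> (version, data)

    def check_out(self, obj_id):
        version, data = self._rows[obj_id]
        return {"id": obj_id, "version": version, **data}

    def check_in(self, obj):
        current_version, _ = self._rows[obj["id"]]
        if obj["version"] != current_version:
            # Another session checked in first: reject, caller must re-fetch.
            raise ConflictError(f"stale version {obj['version']} != {current_version}")
        data = {k: v for k, v in obj.items() if k not in ("id", "version")}
        self._rows[obj["id"]] = (current_version + 1, data)

# Two sessions hold independent, possibly different copies of the same "object".
repo = Repository()
repo._rows["order-1"] = (1, {"qty": 10})
a = repo.check_out("order-1")
b = repo.check_out("order-1")
a["qty"] = 9
repo.check_in(a)            # succeeds, stored version becomes 2
try:
    b["qty"] = 12
    repo.check_in(b)        # rejected: b still holds version 1
except ConflictError:
    print("conflict detected at check-in")
```

The point of the sketch is that consistency is enforced only at check-in; the copies are allowed to diverge while in use, which is why keeping them identical is not required by this discipline.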

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • I seem to be having a hard time convincing you that I'm not trying to discuss specific architectures in this thread. This thread was intended to discuss the management of the lifetime of domain objects ... however composed.

    Thanks for your patience, and I really appreciate talking to a fellow architect (it's a pity the audience is rather small).

    The primary reason for me diving into details is that far too often people agree at the highest level of abstraction. But as soon as you start implementing things, you run into runtime specifics or other concerns, and then suddenly things aren't so easy anymore. It's you yourself who complains about the simple examples in the AutoEdge reference architecture, and I agree with you (I haven't dived into AutoEdge, but other samples/whitepapers only highlight certain areas).

    So when you say "No need to fetch the whole order to update impacted fields", you are making a misjudgement imho. I think updating an order line should be done in the scope of the order header. Something simple like changing the ordered quantity might have a big impact on the price. Let's assume you get a discount of 25% when you order 10 pieces. So you order 10 pieces and a day later you cancel 9 of them. The order entry clerk updates the quantity and hopefully the business logic will update the price as well, unless the clerk overrules the system.

    Don't get me wrong: I'm really not into creating a perfect orderline system. It's just my favorite example.
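The quantity/discount scenario above can be sketched as follows (the class names, prices, and discount rule are illustrative assumptions, not a real order system): the line update goes through the order header so the business logic re-prices the line, unless the clerk supplies an explicit override.

```python
UNIT_PRICE = 100.0          # assumed unit price for the example
DISCOUNT_THRESHOLD = 10     # order 10+ pieces -> 25% discount
DISCOUNT_RATE = 0.25

def line_price(qty, unit_price=UNIT_PRICE):
    """Re-price a line whenever its quantity changes."""
    gross = qty * unit_price
    if qty >= DISCOUNT_THRESHOLD:
        return gross * (1 - DISCOUNT_RATE)
    return gross

class Order:
    """Header owns its lines; any line update goes through the header
    so the price is recomputed, unless a clerk explicitly overrides it."""
    def __init__(self):
        self.lines = {}  # line_no -> {"qty": int, "price": float}

    def add_line(self, line_no, qty):
        self.lines[line_no] = {"qty": qty, "price": line_price(qty)}

    def update_quantity(self, line_no, qty, override_price=None):
        line = self.lines[line_no]
        line["qty"] = qty
        line["price"] = override_price if override_price is not None else line_price(qty)

order = Order()
order.add_line(1, 10)        # 10 pieces: discount applies, price 750.0
order.update_quantity(1, 1)  # cancel 9: discount gone, price back to 100.0
print(order.lines[1]["price"])   # 100.0
```

Updating the bare quantity field without routing through the header would have left the discounted price in place, which is the misjudgement being argued against.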

  • You seem to think that it might be important to cache something like an order during its usage lifetime in case the same object were requested by another part of the session. Given that we don't have a multi-threaded session, this seems pretty remote, but I agree that it is one thing that I would like to do.

    No, it's not. You tend to forget the responsibilities of classes. Once you start OO'ing your application, you also start creating "self-supporting objects". So during order line validation something might want to fetch the customer object. This code could manipulate the customer, so its in-memory state will be changed: it is not synchronized with the database state, since the processing is not done yet. Now another part of the code, during this same call processing, requires the same customer as well. What if the Finder produces a new version of the Customer, materialized with the current database state? The two Customer instances will be out of sync. This has nothing to do with being multi-threaded.
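The usual answer to this is an identity map: within one call, the Finder hands back the already-materialized instance instead of building a second, out-of-sync copy. A minimal sketch, assuming hypothetical names (this is the pattern, not a specific framework API):

```python
class Customer:
    def __init__(self, cust_id, balance):
        self.cust_id = cust_id
        self.balance = balance

class CustomerFinder:
    """Finder backed by a per-request identity map: within one call,
    every fetch of the same key yields the same in-memory instance."""
    def __init__(self, database):
        self._database = database          # id -> row dict (stand-in for the store)
        self._identity_map = {}            # id -> Customer instance

    def find(self, cust_id):
        if cust_id not in self._identity_map:
            row = self._database[cust_id]  # materialize once from the store
            self._identity_map[cust_id] = Customer(cust_id, row["balance"])
        return self._identity_map[cust_id]

db = {"C1": {"balance": 100}}
finder = CustomerFinder(db)
first = finder.find("C1")
first.balance = 80                 # mutated mid-processing, not yet flushed
second = finder.find("C1")         # same instance, sees the pending change
print(second is first, second.balance)   # True 80
```

The map is scoped to the request, so the unsynchronized in-memory state is visible to every participant in that call but never leaks into the next one.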

    Indeed, one of the things I would like if we could get multi-threaded sessions or closely cooperating sessions

    Whoa... don't underestimate the complexity you will introduce by adding multi-threading. There are very few people who know what they should be doing when it comes to multi-threading. It will be very easy to deadlock yourself. A simple example: thread A has a pending transaction and thread B wants to update the same data. Deadlocks become even harder to detect once you start adding synchronized code...

    is that the source for a particular type of object would not only cache the object while it was in use, but it would also have a distributed event system so that it could notify all users of an object when a change was made so that they could get a fresh copy.

    What about the ACID rules here?

    In an optimistic locking discipline, it is possible for different sessions to get different versions of an "object" and this is resolved or rejected at check-in.

    Are you considering pessimistic locking in your architecture? You will soon be in trouble once multi-threading becomes available and you add it...

    consistency in the stored data. The discipline works because the conflict doesn't happen very often.

    Hehe... that's the same argument as saying "I ignore conflicts in an optimistic concurrency controlled environment since two users will hardly ever update the same row"

    Longer term caching is clearly just a performance issue. This is based on the notion that accessing something once at time X means that it is likely that it will be wanted again sometime "soon", e.g., an order generates a shipping request which is routed to the warehouse and, at least in some contexts, this means that the order will soon be updated by the results of the shipping request. Caching the object would keep the Mapper from having to rebuild it, which might be moderately expensive if lots of tables were involved.

    So you mean that a subsequent request, in the case of an AppServer a new stateless AppServer call, will want to reuse the same order instance? I find that very unrealistic... Between the two requests another user could be working on this order as well; perhaps he's approving it... And then you have load balancing: you won't get the same AppServer session.
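One way to reconcile the two positions, sketched below with hypothetical names and an assumed version column: keep the cached copy, but revalidate its version against the authoritative store on each request. A cheap version lookup decides whether the cached object is still current or must be rebuilt (the expensive Mapper path), so an intervening approval by another user is never served stale.

```python
class VersionedCache:
    """Cache that revalidates before reuse: a cheap version check
    against the authoritative store decides whether the cached copy
    is still current or must be rebuilt."""
    def __init__(self, store):
        self._store = store          # id -> {"version": int, ...} (stand-in for the DB)
        self._cache = {}             # id -> cached copy

    def get(self, obj_id):
        cached = self._cache.get(obj_id)
        current_version = self._store[obj_id]["version"]
        if cached is None or cached["version"] != current_version:
            # Stale or missing: rebuild from the store (the expensive path).
            cached = dict(self._store[obj_id])
            self._cache[obj_id] = cached
        return cached

store = {"order-1": {"version": 1, "status": "open"}}
cache = VersionedCache(store)
print(cache.get("order-1")["status"])                     # open (built and cached)
store["order-1"] = {"version": 2, "status": "approved"}   # another user approves it
print(cache.get("order-1")["status"])                     # approved, not the stale copy
```

This only pays off when the version check is much cheaper than rebuilding the object, which is exactly the "lots of tables" case from the earlier post.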

    The second type of caching I think I agree is more of a matter of caching in the business logic, although it seems likely that one would back that cache with a cache in the data access layer.

    You will cache when you think it's worth it. You might cache things at the user interface level as well. A list of countries can very well be cached on a smart client device once it has been fetched. There is no need to spend another AppServer roundtrip on that one.

    This kind of cache is one where it would be particularly nice to have the ability to publish a CollectionModified event.

    One of the problems with hooking up subscribers is that the object's lifetime will be extended as well, since you connect everything together. So my request handler will subscribe, deep down there, to a CollectionModified event published by a data access component. It will probably have to do that for all the DACs it uses (order, product, customer, etc.).
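The lifetime-extension problem above is commonly handled with weak subscriptions: the publisher holds its subscribers only weakly, so subscribing does not keep the handler alive. A minimal sketch (class names are illustrative, not an OpenEdge API; ABL's PUBLISH/SUBSCRIBE would need its own unsubscribe discipline instead):

```python
import gc
import weakref

class Event:
    """Publisher that holds subscribers only weakly, so subscribing
    does not extend a handler object's lifetime."""
    def __init__(self):
        self._subscribers = weakref.WeakSet()

    def subscribe(self, handler):
        self._subscribers.add(handler)

    def publish(self):
        for handler in list(self._subscribers):
            handler.on_event()

class Handler:
    def __init__(self):
        self.calls = 0
    def on_event(self):
        self.calls += 1

collection_modified = Event()
h = Handler()
collection_modified.subscribe(h)
collection_modified.publish()
print(h.calls)                                  # 1
del h                                           # drop the last strong reference
gc.collect()
print(len(collection_modified._subscribers))    # 0: the publisher did not keep it alive
```

With strong references the handler would have lingered for as long as the data access component did, which is exactly the "everything connected together" concern.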

    One of the reasons for the cache in the data access layer, btw, is if any of the requestors have edit ability, since then one would want to be able to use Tracking-Changes to identify what was different. Note too that the authoritative source for such a collection may be in a different service than where it is being used, so one needs to cache it on both sides of the bus.

    So what will happen to the state of this cache when you rollback (UNDO) the transaction somewhere?
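One answer to the rollback question is to make the cache itself transactional: writes made during the transaction are staged separately, merged in on commit, and simply discarded on rollback (the UNDO case), so the cache never retains uncommitted state. A sketch under those assumptions, with hypothetical names:

```python
class TransactionalCache:
    """Cache writes are staged per transaction: commit merges them in,
    rollback (UNDO) discards them, so the cache never holds uncommitted state."""
    def __init__(self):
        self._committed = {}
        self._pending = None

    def begin(self):
        self._pending = {}

    def put(self, key, value):
        target = self._pending if self._pending is not None else self._committed
        target[key] = value

    def get(self, key):
        if self._pending is not None and key in self._pending:
            return self._pending[key]
        return self._committed.get(key)

    def commit(self):
        self._committed.update(self._pending)
        self._pending = None

    def rollback(self):
        self._pending = None   # uncommitted entries vanish with the UNDO

cache = TransactionalCache()
cache.begin()
cache.put("order-1", "updated")
print(cache.get("order-1"))    # updated (visible inside the transaction)
cache.rollback()
print(cache.get("order-1"))    # None: the cache matches the database again
```

The cost is that other requestors only see the change after commit, which in this case is the behavior you want anyway.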

  • OK, I have started a new thread on Order and OrderLine ( http://www.psdn.com/library/thread.jspa?threadID=2725 ) so we can go into that discussion there.

    Let's pick this up in the new thread.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Very well understood. One of the points Gus made when we were discussing my use case on multi-threading was, in essence, "how are we going to keep people from shooting themselves in the foot?" The Right Thing. Surely you don't intend having long open transactions ...

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com