This is a bad thing, because you then have a buffer scoped to a method that is persistent in memory:
class foo():
  method public static void foo():
    def buffer xx for SomeTable.
    <SomeCode>
  end method.
end class.
Is this as bad?
class foo():
  method public static void foo():
    def var xx as handle no-undo.
    create buffer xx for table "SomeTable".
    <SomeCode>
    finally:
      delete object xx.
    end finally.
  end method.
end class.
So, in general, is it better to create and destroy buffer handles than to use define buffer xx?
Why should the first one be bad? It's scoped to the method, so it's only available during the execution of the method, regardless of the accessibility of the method.
I would never recommend using dynamic buffers when the table name is already known at compile time. That typically requires much more testing, and changes to the DB then only blow up at runtime rather than, much more helpfully, at compile time.
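To make that concrete, here is a minimal sketch (Customer is just a stand-in for any schema table):

  def buffer bCust for Customer.          /* misspell "Customer" here and the compiler complains   */

  def var hBuf as handle no-undo.
  create buffer hBuf for table "Custmer". /* compiles fine - the typo only raises a runtime error  */
  delete object hBuf.

The static buffer is validated against the schema at compile time; with the dynamic buffer the same mistake survives compilation and only fails when the statement executes.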
The first one is bad because the method is always present in memory, as its r-code can't be unloaded. If you execute that method, then drop back to the editor and try to change the data dictionary, your session will restart because of a schema change.
I don't change the dictionary that often. But I'm glad to have compile-time validation of the data access every single day.
For me, that's a key strength of the language!
Don't get me wrong - I agree with you 100% about defined buffers and dynamic buffers. That's why I generate code automatically; it removes the need for dynamic buffers.
It's just that I don't like the idea of buffers hanging around in memory that you can't get rid of.
It .... just .... seems .... wrong
I also can't provide any use case where it is a problem.
The only thing that seems wrong to me is that you cannot get rid of static classes in your session.
But hey, that is just sometimes inconvenient in the development environment; the goal is the production environment.
We developers can cope with it.
But the cause is not so much the buffer scope - it's because the method is static and nobody lets you unload the static 'instance' from memory.
As much as I love the simplicity of a static API - this would be one of the cases where a singleton approach has the advantage of being unloadable.
Absolutely - especially in OpenEdge Architect, where the tools' AVM and the developer's test AVM are usually separated from each other.
or just a 'fake' singleton...
class foo():
  def private static var _self as foo no-undo. /* must be static: the static constructor assigns it */

  constructor static foo():
    _self = new foo().
  end constructor.

  method private void _foo():
    def buffer xx for someTable.
    ...
  end method.

  method public static void foo():
    _self:_foo().
  end method.
end class.
I am not 100% sure (no AVM on the iPhone), but I assume the static interface and the non-static implementation will need to be separate classes (different r-code), because the DB-REFERENCES of the r-code are recorded for the whole thing only, not separately for the static and the instance members.
My first inclination is to question the use of DB references in static classes....
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
this I like
You are right. Just tried this; I had a schema-change restart in both scenarios.
Not entirely static classes, but static methods ..
I *don't* like typing
this is so much cleaner and easier to read.
And although you don't think this is an issue, the parentheses around (new demo.foo()) mess up the IntelliSense, so you don't get to see any methods, properties or events on the object.
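For reference, the one-liner in question looks something like this (demo.foo and SomeMethod are placeholder names):

  (new demo.foo()):SomeMethod(). /* construct and call in one statement;
                                    completion after the ':' is what breaks */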
You can, as always, do
def var foo1 as demo.foo no-undo.
foo1 = new demo.foo().
70 chars vs 14.
3 lines vs 1
and much less readable.