I believe this topic has been discussed previously, but I have not been able to find anything searching the Community.
We are in the process of evaluating our Application Lifecycle Management (ALM) processes and supporting tools.
Our current tool set includes ServiceNow (ticket mgmt), Subversion (SVN) with TortoiseSVN, Jenkins, ANT scripts, Excel (to tie everything together) and grunt work.
Some of our challenges:
1. ServiceNow does not have a link to Subversion. At least we have not found one, and building it would apparently be expensive. We therefore spend a significant amount of time reconciling artifacts in SVN with the corresponding tickets in SNow.
2. Schema management. We do have the full .df in SVN, but we don't version incrementals and their application is entirely manual across all shared environments (dev, QA, UAT, Production) and local developers' environments.
3. Too much developer involvement in releases and deployments. While it may be impossible to eliminate this entirely, it should, IMO, be the exception not the rule.
4. Configuration. We have several ini style properties files mostly used for those things that are environment specific. They are also released/deployed outside our normal process.
5. Manual reconciliation/coordination
Being in the Progress world, we are, of course, evaluating Roundtable TSMS, but we also want to consider other options before we make a decision.
Is anyone using MS Azure DevOps (or one of its previous incarnations)? How about the Mylyn Eclipse plugin?
Anything you could share about your processes and/or tool sets would be much appreciated.
Feel free to contact me offline if you prefer.
Lots of questions here which I'll leave to others to contribute on, but one in particular caught my eye.
Background info, we use Perforce for Source Control, and JIRA for tickets. They talk quite nicely to each other, and Jenkins has plugins that help with both.
We only version the full .df also. We have an Ant/PCT script that takes that .df from the repository, creates a temporary DB, makes a delta.df against the master, and then applies it to the master. It requires you to have a compile-time license, but it takes all the effort and stress out of schema management. It's also a relatively simple process, and with PCT now shipping with your Progress install, it makes sense to use it!
We rarely delete anything from the schema - not that we shouldn't - so that's not much of a concern at the moment.
I've heard JIRA is much better at playing nice with others than SNow. The decision to go with SNow had little to no input from our developers. Apparently, there are ties between JIRA and SNow -- go figure. I don't know how much of a chance it has, but utilizing JIRA for development is a light blip on our radar.
We are using PCT via ANT for deployments to our AIX systems, but not yet for our development environments. Our AIX systems, except for Dev/Integration, are still on 10.2B. Local development is 11.5.1 or 11.7.[4 or 5]. AIX Dev/Integration is now 11.7.5 in preparation for upgrading the other AIX systems. We may move to 12.x for local development once our AIX systems are all on 11.7.x. We have not implemented the schema management features you mentioned, though.
James, would you be willing/able to share your ant/PCT scripts for creating/applying delta.dfs?
P.S. I tried emailing you from your profile, but it failed.
I've updated my public email - sorry about that.
Unfortunately I am not at liberty to share the script. I can share the basic steps though...
<PCTCreateBase> - create a dummy db with a dummy structure and full df.
<PCTDumpIncremental> - dump a delta df between the 2 databases
<PCTLoadSchema> - load the new schema into the primary database
You can see it's quite simple. Obviously you'll want to build extra logic around that in case things fail, but that's the essence. Remember that if you use online schema change, no other client can be connected to the database, or you'll just eventually hit the lock wait timeout and nothing will happen.
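To make the three steps concrete, here is a rough sketch of what they might look like in an Ant build file. This is illustrative only: the property names, DB names, and some attribute/nested-element names are my assumptions, so check the PCT documentation for the exact signatures in your version.

```xml
<target name="schema-delta">
  <!-- 1. Build a scratch DB from the versioned full .df -->
  <PCTCreateBase dbName="scratch" destDir="${tmp.dir}"
                 schemaFile="${repo.dir}/full.df" dlcHome="${dlc}"/>

  <!-- 2. Dump the delta between the scratch DB (new schema) and the master (current schema) -->
  <PCTDumpIncremental destFile="${tmp.dir}/delta.df" dlcHome="${dlc}">
    <SourceDb dbName="scratch" dbDir="${tmp.dir}"/>
    <TargetDb dbName="master" dbDir="${db.dir}"/>
  </PCTDumpIncremental>

  <!-- 3. Apply the delta to the master DB -->
  <PCTLoadSchema srcFile="${tmp.dir}/delta.df" dlcHome="${dlc}">
    <DBConnection dbName="master" dbDir="${db.dir}" singleUser="true"/>
  </PCTLoadSchema>
</target>
```

The scratch DB is throwaway; it exists only so PCTDumpIncremental has a "target" schema to diff against, and can be deleted once the delta is applied.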
The official way to load a DF (of any kind) with code is below; it must be run in a foreground "(m)pro" session, not in batch.
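For reference, the supported route is the prodict utility shipped with OpenEdge; a minimal sketch (the .df file name here is just an example):

```abl
/* Run from a foreground session connected to the target DB.      */
/* prodict/load_df.p takes the path of the .df file to load.      */
RUN prodict/load_df.p (INPUT "delta.df").
```

This goes through the same dictionary logic as the Data Dictionary UI, so it respects schema locking and validation, which is exactly why the direct _File/_Field approach below is riskier.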
Now for an unsupported approach that works... but again: RISKY! You can simply create the _File/_Field records within a transaction. I used this in the past, but it's probably not something to do in production; for Dev/Test instantiation, where DB corruption isn't a big deal, it's fine. The following is part of something I used (the full code isn't available), but it should show you the direction to go.
/* Check to see if the table exists already */
FIND FIRST MyDBName._file WHERE MyDBName._file._file-name = chrTableName NO-ERROR.
IF NOT AVAILABLE MyDBName._file THEN
DO TRANSACTION:
    CREATE MyDBName._file.
    ASSIGN MyDBName._file._db-recid       = RECID(MyDBName._db)
           MyDBName._file._dump-name      = TRIM(LDBNAME(intDbCount)) + "_" + TRIM(hdlTableHandle:BUFFER-FIELD("_dump-name"):BUFFER-VALUE) + ".d"
           MyDBName._file._file-name      = /*chrTempDbPrefix + */ hdlTableHandle:BUFFER-FIELD("_file-name"):BUFFER-VALUE
           MyDBName._file._ianum          = intTableAreaNo
           MyDBName._file._desc           = hdlTableHandle:BUFFER-FIELD("_desc"):BUFFER-VALUE
           MyDBName._file._valexp         = hdlTableHandle:BUFFER-FIELD("_valexp"):BUFFER-VALUE
           MyDBName._file._valmsg         = hdlTableHandle:BUFFER-FIELD("_valmsg"):BUFFER-VALUE
           MyDBName._file._hidden         = hdlTableHandle:BUFFER-FIELD("_hidden"):BUFFER-VALUE
           MyDBName._file._frozen         = hdlTableHandle:BUFFER-FIELD("_frozen"):BUFFER-VALUE
           MyDBName._file._can-dump       = hdlTableHandle:BUFFER-FIELD("_can-dump"):BUFFER-VALUE
           MyDBName._file._can-load       = hdlTableHandle:BUFFER-FIELD("_can-load"):BUFFER-VALUE
           MyDBName._file._file-label     = hdlTableHandle:BUFFER-FIELD("_file-label"):BUFFER-VALUE
           MyDBName._file._file-label-sa  = hdlTableHandle:BUFFER-FIELD("_file-label-sa"):BUFFER-VALUE
           MyDBName._file._for-cnt1       = hdlTableHandle:BUFFER-FIELD("_for-cnt1"):BUFFER-VALUE
           MyDBName._file._for-cnt2       = hdlTableHandle:BUFFER-FIELD("_for-cnt2"):BUFFER-VALUE
           MyDBName._file._for-flag       = hdlTableHandle:BUFFER-FIELD("_for-flag"):BUFFER-VALUE
           MyDBName._file._for-format     = hdlTableHandle:BUFFER-FIELD("_for-format"):BUFFER-VALUE
           MyDBName._file._for-id         = hdlTableHandle:BUFFER-FIELD("_for-id"):BUFFER-VALUE
           MyDBName._file._for-info       = hdlTableHandle:BUFFER-FIELD("_for-info"):BUFFER-VALUE
           MyDBName._file._for-name       = hdlTableHandle:BUFFER-FIELD("_for-name"):BUFFER-VALUE
           MyDBName._file._for-number     = hdlTableHandle:BUFFER-FIELD("_for-number"):BUFFER-VALUE
           MyDBName._file._for-owner      = hdlTableHandle:BUFFER-FIELD("_for-owner"):BUFFER-VALUE
           MyDBName._file._for-size       = hdlTableHandle:BUFFER-FIELD("_for-size"):BUFFER-VALUE
           MyDBName._file._for-type       = hdlTableHandle:BUFFER-FIELD("_for-type"):BUFFER-VALUE
           MyDBName._file._valmsg-sa      = hdlTableHandle:BUFFER-FIELD("_valmsg-sa"):BUFFER-VALUE
           MyDBName._file._can-create     = hdlTableHandle:BUFFER-FIELD("_can-create"):BUFFER-VALUE
           MyDBName._file._can-delete     = hdlTableHandle:BUFFER-FIELD("_can-delete"):BUFFER-VALUE
           MyDBName._file._can-read       = hdlTableHandle:BUFFER-FIELD("_can-read"):BUFFER-VALUE
           MyDBName._file._can-write      = hdlTableHandle:BUFFER-FIELD("_can-write"):BUFFER-VALUE
           MyDBName._file._dft-pk         = hdlTableHandle:BUFFER-FIELD("_dft-pk"):BUFFER-VALUE.
    DO intCount = 1 TO 8 ON ERROR UNDO, RETURN ERROR:
        ASSIGN MyDBName._file._fil-Misc1[intCount] = hdlTableHandle:BUFFER-FIELD("_fil-Misc1"):BUFFER-VALUE(intCount)
               MyDBName._file._fil-Misc2[intCount] = hdlTableHandle:BUFFER-FIELD("_fil-Misc2"):BUFFER-VALUE(intCount).
    END. /* DO intCount = 1 TO 8 */
    PUT STREAM LogChange UNFORMATTED NOW "-" LDBNAME(intDbCount) "-CreateTable:" hdlTableHandle:BUFFER-FIELD("_file-name"):BUFFER-VALUE SKIP.
END. /* table did not exist */
As an aside, the verbose BUFFER-FIELD syntax can be shortened using the :: operator; these two assignments are equivalent:
MyDBName._file._desc = hdlTableHandle:BUFFER-FIELD("_desc"):BUFFER-VALUE
MyDBName._file._desc = hdlTableHandle::_desc
If any Roundtable users (current or former) are listening ...
I would very much like to hear from you regarding your experience with it. As I stated above, feel free to contact me offline. In the spirit of full disclosure, I may share any information provided with Roundtable Software, but will not reveal the source of the information.
Regarding deployment (including DB updates), my TODO list still has a bullet point about a PUG Challenge session on this topic... I can't share a DB update script right now, but all my work is based on Ant / PCT + Groovy. A session has been delivered on Groovy: www.pugchallenge.eu/.../groove-is-in-the-ant---querret.pdf . That was supposed to be an introduction to this DB update session, but I never took enough time to prepare the second session. Anyway, using Groovy can be a bit disorienting at the beginning, but it helps a lot in writing concise scripts that can deal with all deployment cases. For example, INI files can be managed as templates stored in the repository, with some entries replaced on the fly depending on the environment (those values are also stored in another repo so that you can keep track of the history).
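As an illustration of the template idea (the file paths and token names here are made up for the example), plain Ant can already do this kind of on-the-fly replacement with a filterset, no Groovy required:

```xml
<!-- Copy an INI template, replacing @TOKEN@ markers with per-environment values -->
<copy file="templates/app.ini.template" tofile="${deploy.dir}/app.ini" overwrite="true">
  <filterset>
    <filter token="DB_HOST" value="${env.db.host}"/>
    <filter token="DB_PORT" value="${env.db.port}"/>
  </filterset>
</copy>
```

Groovy earns its keep once the substitution logic gets conditional (different entries per environment, computed values, etc.), which is where a pure Ant approach becomes unwieldy.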
For source code management, Git is now the de facto standard and brings a lot in terms of branch management. The software ecosystem around Git is also extremely active. But you won't find any off-the-shelf software to deal with OpenEdge in Git; that will still involve some work. Getting some external help to get on the right track may be really beneficial.