Converting from 9.1D to 10.02B - Forum - OpenEdge RDBMS - Progress Community

  • Hi --

    I have the task (luxury?) of migrating a client from 9.1D Workgroup on SCO Unix to 10.2B Enterprise on RHEL. I started thinking about how to lay out the database in V10 and take advantage of the new (performance-related) features. I read up a little on Type 2 storage areas and the related optimizations. (Thanks to Tom Bascom and Gus B, among many others.)

    My initial thoughts are that we’d switch to Type 2 storage for all the app data and indexes, keeping data and indexes separate. I also thought we’d isolate the two "hottest" files (aka tables). Something like:

    Schema Area: 1Gb fixed plus 1 variable extent (Type 1)

    Hot File 1 Data: 0.5 Gb fixed + 1 var

    Hot File 1 Index: 0.5 Gb fixed + 1 var

    Hot File 2 Data: 0.5 Gb fixed + 1 var

    Hot File 2 Index: 0.5 Gb fixed + 1 var

    All Other Data: 2 of 1Gb fixed + 1 variable extent

    All Other Indexes: 2 of 1Gb fixed + 1 variable extent
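    For concreteness, the layout above could be written as a structure description (.st) file along these lines. This is a sketch only: the database name, paths, and area numbers are illustrative, extent sizes are in KB, and you should verify the exact syntax against the docs for your release.

```
# Hypothetical mydb.st for the layout above (name, paths and area numbers are illustrative).
# Syntax: d "AreaName":area-number,records-per-block;blocks-per-cluster path [f size-in-KB]
# A blocks-per-cluster value of 8, 64 or 512 makes an area Type 2; 1 (or omitted) is Type 1.
b /db/mydb.b1
d "Schema Area":6,64;1 /db/mydb.d1 f 1048576
d "Schema Area":6,64;1 /db/mydb.d2
d "Hot1 Data":7,128;64 /db/mydb_7.d1 f 524288
d "Hot1 Data":7,128;64 /db/mydb_7.d2
d "Hot1 Index":8,32;8 /db/mydb_8.d1 f 524288
d "Hot1 Index":8,32;8 /db/mydb_8.d2
# ...Hot File 2 data and index areas would follow the same pattern...
d "Other Data":11,128;64 /db/mydb_11.d1 f 1048576
d "Other Data":11,128;64 /db/mydb_11.d2 f 1048576
d "Other Data":11,128;64 /db/mydb_11.d3
d "Other Index":12,32;8 /db/mydb_12.d1 f 1048576
d "Other Index":12,32;8 /db/mydb_12.d2 f 1048576
d "Other Index":12,32;8 /db/mydb_12.d3
```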

    The database is about 3Gb now. (I remember having 40 Mb hard drives; now this fits on a cell phone :-)  ) (That means I probably have the luxury of revising this on any given Sunday afternoon.)

    The new server has (2) 450 GB 15K SAS hard drives and (4) 600 GB 15K SAS hard drives. I was going to suggest making the 4 drives into one RAID 10 array to contain the data, indexes and source/object files. The other 2 drives would be a RAID 1 mirror for the OS, the Progress install, and the BI & AI files, plus the user home dirs (where some reports and CSVs are written from the Progress app).

    Planning to prostrct create with -blocksize 8192 and to use -biblocksize 8 -bi 2048 on my BI truncate. I'll enable large files too. I still need to think about startup parameters; be prepared for a follow-up post! (Or chime in now, if you can't resist.) Thinking of 2 APWs and a BIW, plus the watchdog. And AI.
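    The sequence of steps described here might look something like the following. This is a hedged sketch, not a tested script: the database name "mydb", the paths, and the backup filename are hypothetical, and aimage begin assumes a backup was just taken.

```shell
# Sketch of the creation steps above; "mydb" and all paths are illustrative.
prostrct create mydb mydb.st -blocksize 8192          # build the structure with 8 KB DB blocks
procopy $DLC/empty8 mydb                              # lay an empty 8K-block schema over the structure
proutil mydb -C enablelargefiles                      # enable large (>2 GB) files
proutil mydb -C truncate bi -biblocksize 8 -bi 2048   # 8 KB BI blocks, 2048 KB BI cluster
probkup mydb mydb.bak                                 # aimage begin requires a fresh backup
rfutil mydb -C aimage begin                           # enable after-imaging
```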

    Does this sound like a reasonable starting point? What am I missing? I would appreciate any thoughts and feedback.

    TIA --

    Jim Shepherd

    jrs@lvlsvn.com

  • Is there any reason why you are only going as far as 10.2B? Why not 11.x whilst you're upgrading anyway?

    As for the structure of your new storage areas, an up to date DBANALYS output from your database would be really helpful for deciding what structure your new database should take.

  • I don't see why you would want a 1 GB fixed extent for the schema area.  It shouldn't need anywhere near that much space if it only contains system tables.  Mine are typically 4-10 MB with over 1000 application objects in the schema.

    I would keep the code separate from the database.  You mention source and object code so I assume compiles are happening on this machine.  If you have to recompile the application that will be a lot of I/O on your database volume.

    Version 10.2B is over five years old and will never receive another service pack.  It isn't retired yet, but that could change at any time.  I recommend against moving to such an old release.

  • The customer dictates only going to v10 at this point. We'll probably do v11 in 6-12 months.

    Is there a way to attach my dbanalys output? As opposed to pasting it below?

  • With a 3 Gb DB you can just create two Type II storage areas with variable-length extents for everything (AI, BI, D). There is no need at all for fixed-length extents unless your DB is growing quickly. Just "Data Area" with 128 RPB and 64 BPC and "Index Area" with 64 RPB and 64 BPC is *probably* sufficient. The final DB will be bigger than the original, but 3 or 4 or 5 Gb is all the same. If you probkup with -com, the empty blocks won't be backed up.

    I also doubt that there is a need to separate AI, BI and D files in your case. Why do you want to segregate? In your suggested setup, AI, BI, temp files, reports, OS stuff, etc. would all write to ONE 15K rpm drive that can do about 150 IOPS. The D files, which should get less write disk I/O than AI/BI/TMP, get to share two 15K rpm drives (write) and 4 disks for reads. Personally, when I see a box with very few drives I like to put them all in one big RAID 10 so that everyone can benefit from all the available disk I/O.
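    Paul's spindle arithmetic can be sketched in a few lines. This is a back-of-envelope model only: the ~150 IOPS per 15K drive comes from the post, and the usual RAID 10 assumption is that reads can hit every spindle while each logical write costs both halves of a mirror pair.

```python
# Rough RAID 10 throughput model; 150 IOPS/drive is the figure from the post above.
DRIVE_IOPS = 150

def raid10_iops(n_drives):
    """Return (read IOPS, write IOPS) for an n-drive RAID 10 array."""
    reads = n_drives * DRIVE_IOPS           # all spindles can serve reads
    writes = (n_drives // 2) * DRIVE_IOPS   # each write lands on both drives of a mirror pair
    return reads, writes

print(raid10_iops(2))  # BI/AI/OS pair        -> (300, 150)
print(raid10_iops(4))  # proposed data array  -> (600, 300)
print(raid10_iops(6))  # one big array        -> (900, 450)
```

    Six spindles in one array give every workload a share of ~900 read IOPS, instead of fencing the busiest writers onto a single mirrored pair.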

    On RHEL use a blocksize of 4Kb (the default). And your AI and BI blocksizes should be 16Kb.

    It is highly unlikely that you need more than one APW. Two is overkill unless you tell me you're updating every record in the 3 Gb DB every hour.

    Paul Koufalis
    White Star Software

    pk@wss.com
    @oeDBA (https://twitter.com/oeDBA)

    ProTop: The #1 Free OpenEdge DB Monitoring Tool
    http://protop.wss.com
  • Good points, Rob. I can see that the Schema Area is ridiculous overkill.

    Full application compiles are pretty rare and usually only when I have the system in single-user mode. And it only takes about 15 minutes. They can tolerate anything for 15 minutes.   :-)

  • One more point: if you go to 10.2B make sure to apply SP8.

    Paul Koufalis
  • dbanalys.txt

    Here is the dbanalys output.

    Thanks again -- Jim

  • Thanks Paul. I appreciate the feedback.

    My thinking was that there was some advantage to not forcing the extents to grow on every write, hence the fixed extents. I also thought that fixing the extent sizes at create time helped avoid disk scatter/fragmentation.

    My thought on segregating the BI from the D files was to spread the workload around. I also thought it was advised to segregate the AI for recoverability, in case the D files' array had a meltdown.

    Is it better to use the OS default for blocksize (4k in my case) or is "bigger always better"? Or is that up for debate?

    May I ask why you recommend 16k for AI & BI -- just so I can understand, when someone else says something different?

    The second APW was just based on something I read somewhere....

    Thanks again --

    Jim Shepherd


  • I thought there was some advantage to not forcing the extents to grow every time there was a write 

    Do the math: if you grow the DB 1 Gb per month (certainly NOT your case), that's 50 MB a day (20 weekdays), or 5 MB per hour over a 10-hour day. Assuming 512 blocks per cluster and a 4 Kb block size, the OS will grow an extent 2-3 times per hour. Fixed-length extents would save you less than a second an hour.
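    Spelled out, under the same assumptions (1 Gb/month growth, 20 weekdays, 10-hour days, 512-block clusters, 4 Kb blocks):

```python
# Extent-growth arithmetic from the paragraph above.
mb_per_day = 1 * 1024 / 20              # 1 GB/month over 20 weekdays ~ 51 MB/day
mb_per_hour = mb_per_day / 10           # 10-hour day ~ 5 MB/hour
cluster_mb = 512 * 4 / 1024             # 512 blocks * 4 KB = 2 MB per cluster grow
grows_per_hour = mb_per_hour / cluster_mb
print(round(mb_per_hour, 1), round(grows_per_hour, 1))  # -> 5.1 2.6
```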

    segregating the bi from the d files was to share spread the workload around

    You would be doing the opposite. You would be making two drives work hard on write while the other four sat idle.

    thought it was advised to segregate the ai for recoverability

    It is. But in practice it is rarely true. For example, can you load the hot-swap disk in the DR box? Have you/will you test it? How long will it take? If the DR box is in another city management will not want to wait to recover the last 5 minutes of AI notes.

    Is it better to use the OS default for blocksize

    Probably. You avoid the mythical torn page.

    why you recommend 16k for ai & bi...when someone else says something different

    I don't think any of the senior DBAs reading this would recommend anything else. On a system with a low rate of AI/BI note generation it probably doesn't matter as -Mf will kick in and write the partially full AI/BI buffer to disk after 3 seconds (the default value for -Mf). On a busy system you want to do less frequent IO to disk: fill 16Kb, write 16Kb, fill 16K, write 16K...
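    A toy illustration of the fill-and-write pattern. The note rate and average note size here are made up purely to show the shape of the effect, not measured from any system:

```python
# Same BI note stream, different block sizes: bigger blocks -> fewer physical writes.
stream_bytes_per_sec = 2000 * 128   # hypothetical 2000 notes/sec * 128 bytes/note
writes = {blk_kb: stream_bytes_per_sec / (blk_kb * 1024) for blk_kb in (8, 16)}
for blk_kb, w in writes.items():
    print(f"{blk_kb} KB BI block: ~{w:.0f} writes/sec")
```

    Halving the number of physical BI writes for the same note stream is the whole argument for the larger block on a busy system; on a quiet one, -Mf flushes the partial buffer anyway.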

    The second APW was just based on something I read somewhere 

    There is a lot of garbage floating out there. For a while the recommendation was one APW per physical disk + 1. Same for -spin: it used to be 10K * #CPU. In reality it's π (pi) * your year of birth (hat tip to Dan Foreman). Since I was born in 1970, my default value for -spin is 6189. No I'm not joking!  For most environments, a -spin of 5-10K is sufficient.
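    For what it's worth, the tongue-in-cheek arithmetic checks out:

```python
import math

# Dan Foreman's joke -spin formula: pi * year of birth.
print(round(math.pi * 1970))  # -> 6189
```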

    Paul Koufalis