The best way to migrate the database to another platform


Hi, everyone,

I'm looking for the best way to migrate a database from HP-UX to Red Hat Linux. The database is ~4 TB and has more than 200 tables. Downtime must be minimal, no more than one hour in the most extreme case.

So far I have come to the conclusion that the optimal solution would be ProD&L.

I would like to hear your opinion on this task and would be grateful for any helpful tips.

Andriy.

Verified Answer
  • I have been known to be wrong and I haven't tried it this week so my memory is a bit dim but, as I recall, index 0 will not multi-thread.

    --
    Tom Bascom
    tom@wss.com

  • The presentation I did at the PUG Challenge, "410: Case Study: Platform and Data Migration with Little Downtime", used Pro2 as the technology to do the dump/load for the platform migration. It does not have many of the restrictions that the old Pro D/L had, so the fact that there are tables without a unique index is not a problem for Pro2.

    Mike
    -- 
    Mike Furgal
    Director – Database and Pro2 Services
    PROGRESS Bravepoint
    617-803-2870 


All Replies
  • For such a large DB I would suggest you use a selective binary dump (dumpspecified) for static/archived data and load as much as possible into the new DB before you start the downtime.
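
    For example, something along these lines, where the database, table, field, and cut-off value are just placeholders (check the exact dumpspecified syntax and date format for your OpenEdge version):

        # dump only the "old" rows of a history table while the system is still online
        proutil proddb -C dumpspecified ordhist.order-date LT '01/01/2017' /dumps/static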

  • Thank you, Steven!

    I thought about this approach, but my tests show it will not fit within the downtime window. In addition, not all of the large tables have a field with a date datatype to select on.

    Andriy.

  • In my experience DUMP SPECIFIED is really, really, really slow.  You'd probably be better off hand delivering stone tablets.

    --
    Tom Bascom
    tom@wss.com

  • How large is the largest table? That is probably the table that will determine the minimum downtime.

    Are the tables in type 2 areas?

    --
    Tom Bascom
    tom@wss.com

  • The largest table is ~1.1 TB. All tables are in type 2 areas.

    I have asked for a fresh tabanalys. As soon as I receive it, I will have more up-to-date info.
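
    For anyone curious, that is just the standard report run against the source DB (the db name is a placeholder):

        proutil proddb -C tabanalys > proddb.tabanalys.txt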

  • The fastest way to dump the table is probably a binary dump with index 0.
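
    For the dump itself that is roughly the following, where the db, table, and directory names are placeholders (check the proutil dump options for your release):

        # -index 0 dumps in storage-area scan order instead of walking an index
        proutil olddb -C dump order-line /dumps -index 0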

    If you can get that done reasonably close to your down time window then you might be able to do a more or less "normal" d&l.

    A lot will depend on what you have available for HW resources.

    --
    Tom Bascom
    tom@wss.com

  • Do you have a horizontal table partitioning license available?

    --
    Tom Bascom
    tom@wss.com

  • ChUIMonster

    The fastest way to dump the table is probably a binary dump with index 0.

    If you can get that done reasonably close to your down time window then you might be able to do a more or less "normal" d&l.

    A lot will depend on what you have available for HW resources.

    As I understand it, you mean a binary dump with multiple threads plus index 0, i.e. roughly the sequence below:

    1. Run binary dumps in parallel for multiple tables.
    2. Copy the dump files to the new server.
    3. Load these files into the new database without building indexes.
    4. Run idxbuild.
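
    In commands that would be something like this; the db names, paths, and tuning parameters are only placeholders, and the exact options depend on the OE version:

        # 1. on the HP-UX box: several binary dumps in parallel, index 0 as suggested
        proutil olddb -C dump order-line /dumps -index 0 &
        proutil olddb -C dump order /dumps -index 0 &
        wait

        # 2. copy the dump files to the Linux box
        scp /dumps/*.bd* linuxhost:/dumps/

        # 3. on the Linux box (new DB already created with the schema loaded):
        #    load each dump file without building indexes
        for f in /dumps/*.bd*; do
            proutil newdb -C load "$f"
        done

        # 4. rebuild all indexes in one pass; tune -TB/-TM/-B/-SG/-T for the hardware
        proutil newdb -C idxbuild all -TB 31 -TM 32 -B 1024 -SG 64 -T /sorttmp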

    I think that it will still take more time than I will have.

    What do you mean by "available HW resources"? HDD, CPU, RAM?

  • ChUIMonster

    Do you have a horizontal table partitioning license available?

    Unfortunately, no.
    But I am curious: how can this help with migrating to another server?

  • The effectiveness of multi-threading the dump varies.  But you should try it.
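
    The threaded syntax is along these lines, with placeholder names (the -thread/-threadnum options apply to the binary dump on reasonably current releases):

        # split one table's dump across several threads/files
        proutil olddb -C dump order-line /dumps -thread 1 -threadnum 4 -dumplist /dumps/order-line.dl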

    Horizontal Table Partitioning would give you a simple way to do it over time - sort of like pro d&l only better:

    224 - Nirvana v3.pptx  pugchallenge.org/.../224 - Nirvana v3.pptx

    --
    Tom Bascom
    tom@wss.com

  • (You'd have to write some 4gl code to sweep stuff as it moves from one partition to the other.)

    You could also probably do it with CDC.

    --
    Tom Bascom
    tom@wss.com


  • As mentioned, Table Partitioning allows you to do a table-move, which is essentially a dump and load without any impact to the application.  Very cool stuff.

    But you mention you are doing a platform migration.  If you cannot deal with the downtime, check out this presentation I did last year at the PUG challenge:

    Basically, we leveraged the Pro2SQL product technology to do the platform migration. At the end of the day, the downtime for the migration was 20 minutes. This downtime is the same regardless of the database size, because Pro2 replication keeps the new database up to date with changes.

    You could do this yourself with the CDC product introduced in 11.7 as well.
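
    If you go the CDC route, the source database first needs CDC enabled; the command is roughly the following, with placeholder area names (check the 11.7 docs for the exact enablecdc syntax):

        # dedicated areas for the CDC change tables and their indexes must exist first
        proutil proddb -C enablecdc area "CDC_Data" indexarea "CDC_Index"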

    Mike
    -- 
    Mike Furgal
    Director – Database and Pro2 Services
    PROGRESS Bravepoint
    617-803-2870 


  • ChUIMonster

    (You'd have to write some 4gl code to sweep stuff as it moves from one partition to the other.)

    You could also probably do it with CDC.

    Thank you, I'll look at this later.
  • ChUIMonster

    The effectiveness of multi-threading the dump varies.  But you should try it.

    I will try it.

    But does index 0 allow a multi-threaded dump?