Hi, everyone,
I'm looking for the best way to migrate a database from HP-UX to Red Hat Linux. The database is ~4 TB and has more than 200 tables. Downtime must be minimal, no more than one hour in the most extreme case.
So far I have come to the conclusion that the optimal solution would be ProD&L.
I would like to hear your opinions on this task and would be grateful for any helpful tips.
Andriy.
I have been known to be wrong, and I haven't tried it this week so my memory is a bit dim, but as I recall, index 0 will not multi-thread.
--Tom Bascom
tom@wss.com
For such a large DB I would suggest you use a selective binary dump (dumpspecified) for static/archived data and load as much as possible into the new DB before you start the downtime window.
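For what it's worth, dumpspecified selects rows by a single field value, so older data could be dumped ahead of the downtime window. A minimal sketch, assuming a database named proddb, a history table order_hist with an indexed date field order_date, and a dump directory /dump; all names and the date format are placeholders, so check the dumpspecified syntax for your OpenEdge release:

# Dump only order_hist rows with order_date before 2023-01-01 (placeholder value).
# dumpspecified filters on a single field; it works best when that field is the
# leading component of an index.
proutil proddb -C dumpspecified order_hist.order_date LT 01/01/2023 /dump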
Thank you, Steven!
I thought about this method, but testing shows it can't be used because of the downtime constraint. In addition, not all of the large tables have a field with a date datatype.
In my experience DUMP SPECIFIED is really, really, really slow. You'd probably be better off hand-delivering stone tablets.
How large is the largest table? That is probably the table that will determine the minimum downtime.
Are the tables in type 2 areas?
The largest table is ~1.1 TB. All tables are in Type II areas.
I have now asked for a refreshed tabanalys. As soon as I receive it, I will have more up-to-date info.
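For reference, the table analysis can be regenerated with something like the following; the database name and output file are placeholders:

# Produce a fresh table analysis report and capture it in a file.
proutil proddb -C tabanalys > proddb.tabanalys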
The fastest way to dump the table is probably a binary dump with index 0.
If you can get that done reasonably close to your downtime window, then you might be able to do a more or less "normal" d&l.
A lot will depend on what you have available for HW resources.
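For context, a hedged sketch of the index 0 binary dump mentioned above, assuming a database proddb, a table order_hist, and a dump directory /dump; verify the options against your OpenEdge version:

# Binary dump of one table; -index 0 reads the table in physical (storage-area)
# order instead of walking an index, which is usually the fastest full scan for
# a table in a Type II area.
proutil proddb -C dump order_hist /dump -index 0

# The same dump driven by a specific index; 8 is a placeholder index number.
proutil proddb -C dump order_hist /dump -index 8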
Do you have a horizontal table partitioning license available?
ChUIMonster: "The fastest way to dump the table is probably a binary dump with index 0. If you can get that done reasonably close to your down time window then you might be able to do a more or less 'normal' d&l. A lot will depend on what you have available for HW resources."
1. Run binary dumps of multiple tables in parallel.
2. Copy the dump files to another server.
3. Load these files into the new database without building indexes.
4. Run idxbuild.
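A rough sketch of that four-step flow, assuming a source database proddb on the HP-UX box, a target database newdb on the Linux box, a /dump directory on each side, and scp access between them; table names, paths, and tuning values are placeholders, and multi-volume dump files and error handling are left out:

# 1. Dump several tables in parallel on the source server (one proutil per table).
for t in order_hist order_line customer; do
    proutil proddb -C dump "$t" /dump &
done
wait

# 2. Copy the dump files to the Linux server.
scp /dump/*.bd newhost:/dump/

# 3. Load the binary files into the new database without building indexes.
for f in /dump/*.bd; do
    proutil newdb -C load "$f"
done

# 4. Rebuild all indexes in one pass (flags and values are illustrative only).
proutil newdb -C idxbuild all -TB 64 -TM 32 -B 1024 -T /sorttmp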
I think that it will still take more time than I will have.
What do you mean by "available for HW resources"? HDD, CPU, RAM?
ChUIMonster: "Do you have a horizontal table partitioning license available?"
Unfortunately no, we do not have a license for table partitioning.
But it's interesting: how can this help with migrating to another server?
The effectiveness of multi-threading the dump varies. But you should try it.
Horizontal Table Partitioning would give you a simple way to do it over time - sort of like ProD&L, only better:
224 - Nirvana v3.pptx pugchallenge.org/.../224 - Nirvana v3.pptx
(You'd have to write some 4gl code to sweep stuff as it moves from one partition to the other.)
You could also probably do it with CDC.
ChUIMonster: "(You'd have to write some 4gl code to sweep stuff as it moves from one partition to the other.) You could also probably do it with CDC."
ChUIMonster: "The effectiveness of multi-threading the dump varies. But you should try it."
I will try it.
But does index 0 allow a multi-threaded dump?
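For reference, a sketch of the threaded dump syntax, assuming the -thread/-threadnum options of recent OpenEdge releases; the table, index number, and thread count are placeholders, and per the recollection earlier in the thread the threading reportedly does not apply with -index 0, so it is normally paired with a real index:

# Threaded binary dump along a chosen index; -thread 1 enables threading and
# -threadnum caps the number of threads (check the docs for your release).
proutil proddb -C dump order_hist /dump -index 8 -thread 1 -threadnum 4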