HPE 3PAR Storage

Posted by Brian Bowman on 22-Mar-2017 08:27

Hi -

I am looking for anyone who is running HPE 3PAR Storage for their OpenEdge database.  If you are using this solution, could you please post here and reach out to me?

No issues, just looking for experiences either way with it.

Thanks

Brian (bowman@progress.com)

All Replies

Posted by gmuchon on 28-Mar-2017 12:28

Hi,

I'm using HPE 3PAR with the OpenEdge database.

3PAR 8200, with 10K SAS disks.

OpenEdge 11.6.3 on an Itanium server running HP-UX 11.31.

Gustavo.

Posted by Brian Bowman on 28-Mar-2017 12:45

Hi Gustavo - can you reach out to me offline? I have some questions for you (bowman@progress.com).

Thanks

Brian

Posted by David Gowe on 11-Dec-2018 19:05

Brian, Gustavo - I am in the process of moving to a new environment on RHEL leveraging 3PAR storage. If possible, I would like to discuss with you any challenges / successes / disappointments in running the OpenEdge 11.7.x database and PAS in that environment.  Will appreciate a response.  TIA

Posted by ChUIMonster on 11-Dec-2018 20:06

If performance is not important to your database, you can stop reading.

On the other hand, if you are concerned about performance, the fundamental problem with 3PAR and its kin is that they are *external* and *shared* storage devices.  That means:

1) Performance is sub-par.  Your IO ops are at the wrong end of a cable, a bunch of adapters, and some switches.  Database IO is *random*; it does not benefit from sequential, streaming performance.  You pay the additional latency penalty of that cable etc with every IO op (a quick way to measure this for yourself is sketched after this list).  There is no cure for this.  Distance = time.  Handoffs = lots of time.

1a) Just in case you expect "all flash" to cure all of your problems -- putting SSD at the wrong end of a cable does not address any of the latency in the cable and the adapters.  It will help with the rotational latency of spinning rust, but that is only part of the problem -- and one that the cache in the storage system was already handling.  So don't expect to see much improvement from an "all flash" SAN.

2) "Shared" means that you have competition for resources.  Your database will not get 100% of the bandwidth.  Backing up the Exchange server will have a higher priority (anything that does lots of sequential IO will have higher priority -- that kind of thing is what these devices are designed for).  Lots of people who have no knowledge of your busy times and who have even less sensitivity to the impact that they are having on your performance will feel perfectly free to launch massive IO jobs right in the middle of your peak processing times.

3) These devices do not exist to make your database go faster.  Their purpose in life is to consolidate workloads and simplify the lives of storage administrators.  That is admirable, but the trade-off is that it is not compatible with a high-performance database.

4) If database performance is your #1 priority then the shortest path from the CPU to the data is the fastest.  That means internal SSD.  Coincidentally, internal SSD is also often a lot less expensive than a fancy SAN.

4a) This does not mean that every workload in your data center should go to internal SSD.  But your performance-critical databases certainly should.

5) If you have already spent a whole bunch of money on a fancy storage subsystem that is slower than you hoped it would be, the only thing that can be done about it (aside from not using it and going with internal SSD instead) is to load up the server with as much RAM as possible and avoid IO ops for as long as possible -- for OpenEdge, that mostly means a large -B buffer pool.  That is a big part of normal database tuning anyway, so you have likely already done it.

5a) You will still have to perform IO at times, and at some of those times it will be very, very painful.  Index rebuilds, dump & load, backup and restore, after-image roll-forward recovery, and dbrpr runs to fix corruption are all activities where there tends to be a lot of time pressure, and where the pain of having an inappropriate storage solution is most keenly felt.  I hope that you never have to recover a corrupted database with a bunch of senior executives asking if it is done yet -- but if you do, you will be very unhappy to be doing it on a sub-par IO subsystem.
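If you want to put numbers on point 1, below is a rough sketch of a random-read latency probe -- a toy, not a benchmark suite.  It assumes Linux (for O_DIRECT) and Python 3.7+ (for os.preadv), and the path and file size are placeholders: pre-create a test file of at least FILE_SIZE bytes on each device you want to compare.

    import os, mmap, time, random

    PATH = "/path/to/testfile"   # placeholder: pre-created file, >= FILE_SIZE bytes
    BLOCK = 4096                 # O_DIRECT requires block-aligned offsets and buffers
    FILE_SIZE = 1 << 30          # assumes a 1 GB test file
    READS = 5000

    fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)  # bypass the OS page cache
    buf = mmap.mmap(-1, BLOCK)                     # anonymous mmap = page-aligned buffer
    lat = []
    for _ in range(READS):
        off = random.randrange(FILE_SIZE // BLOCK) * BLOCK  # aligned random offset
        t0 = time.perf_counter()
        os.preadv(fd, [buf], off)                  # one 4 KB random read
        lat.append(time.perf_counter() - t0)
    os.close(fd)

    lat.sort()
    print("median %.0f us   p99 %.0f us" % (lat[len(lat) // 2] * 1e6,
                                            lat[int(len(lat) * 0.99)] * 1e6))

Run it once against a file on the 3PAR LUN and once against a file on internal SSD.  The gap between the two medians is the cable-and-fabric tax, and you pay it on every one of the millions of random IO ops your database does.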

The sales guy will, of course, deny all of the above and try to claim that OpenEdge is somehow deficient.  Or that I am a Luddite.  Or both.  Keep in mind that the sales guy is earning a commission.  I am just performing a public service.

Posted by gus bjorklund on 12-Dec-2018 15:30

Adding to what Tom said:

These external storage devices all have complex sets of vendor-specific configuration options, and there are many trade-offs that can be made.  It is easy to set up a configuration that performs poorly.  For example, a system used as a file server does many creates and deletes of files and many modifications of file metadata, whereas a database server does few of those operations and is mostly doing random reads and writes to the same set of files.  Optimizing for one of these workloads will have a deleterious effect on the other.

The vendor's personnel are rarely database people or knowledgeable about database workloads.


This thread is closed