USER COUNT 1500 concurrent
We are in the very beginning of the process to upgrade our hardware,
our storage solution. Our storage and SA guy/gal say that the NetApp AFF 300 all-flash array is the way to go. What do others think?
This thread is probably worth a read: community.progress.com/.../30063
From a db performance point of view, putting flash in a shared storage device is putting it in the least useful place it can go.
That's a great way to drain your wallet and get poor performance while financing a new yacht for the NetApp sales team.
Just say no.
If you want your IO subsystem to be fast:
1) Do not put it at the other end of a cable.
2) Do not share it with other applications.
Instead, spend a small fraction of the same money on internal SSD and get great performance. If your storage & admin team complains that their life is somehow made more difficult, take a portion of your savings and hire some new storage & admin people. You will still come out ahead.
Database IO is random IO. Somewhat perversely, the better tuned your database is, the more random your IO becomes. The latency of each and every IO operation is your enemy. In a SAN (or NAS) the main contributor to latency is the cable along with the various adapter cards along the way. Not the device holding the data way out at the wrong end of that cable.
"Oh but it's a zigabit per second cable!" shows that whoever says such a thing has totally missed the point. Zigabits per second is a useful metric for sequentially streaming data -- it says *nothing* about the latency of random access.
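To put rough numbers on that point (the link speed and round-trip figure below are illustrative assumptions, not measurements from any particular product), here is the arithmetic for a single random 8 KB block read over a fast SAN link:

```python
# Illustrative arithmetic only: link speed and round-trip latency are
# assumed values, chosen to show why bandwidth says nothing about latency.

link_gbits = 16          # a "fast" 16 Gb/s SAN link
block_bytes = 8192       # one 8 KB database block
round_trip_s = 200e-6    # assumed 200 microsecond end-to-end round trip

# Time to move the block's bits once you are actually on the wire:
transfer_s = (block_bytes * 8) / (link_gbits * 1e9)

# For one random read, total time is round trip plus transfer:
total_s = round_trip_s + transfer_s

print(f"wire transfer:          {transfer_s * 1e6:.1f} us")
print(f"round trip:             {round_trip_s * 1e6:.1f} us")
print(f"latency share of total: {round_trip_s / total_s:.0%}")
```

With these assumed numbers the wire transfer is about 4 microseconds while the round trip dominates at 200: nearly all of the cost of a random read is the path, not the pipe, and making the pipe fatter changes almost nothing.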
From a database-centric performance perspective, internal SSD is:
1) Faster. *Much* faster. Literally 100x faster.
2) Cheaper. *Much* cheaper. You can buy enough SSD for most databases for the price of a good steak dinner.
It really isn't even close. But people will still insist on poking themselves in the eye with sharp sticks rather than doing the sensible thing. Because, after all, if it is the right solution for file sharing then it must be even better for a database application.
> On Jan 24, 2018, at 12:02 PM, ctoman wrote:
> Update from Progress Community
> OE 11.4
> OS HP-UX
> STORAGE 95K
> USER COUNT 1500 concurrent
> We are in the very beginning of the process to upgrade our hardware,
> our storage solution. Our storage and SA say that the NetApp AFF 300 all-flash array is the way to go. What are others' opinions?
when you are looking for storage for a database server, listen to what database people have to say, NOT to know-nothing storage vendor sales droids.
everything tom bascom said is correct.
Do any of you folks have any real-world experience with the products from Pure Storage? I am seeing claims of <1 ms latency, and yet this seems like another flash drive at the end of a cable.
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
When dealing with storage vendors you are always safe to assume snake oil.
One of the favorite deceptions of storage vendors when addressing latency is to use numbers from their *internal* monitoring tools. They report the latency from the disk to the controller inside their cabinet. Conveniently ignoring all the necessary bits of a real workload that make their product look bad.
Seems like that would have to be the case ... but I am not finding citations.
They aren't going to make a point out of a public confession...
Instead of trusting the vendor look for independent end to end benchmarks. If you can find something there that substantiates the claim then *that* would have weight. But be careful -- not everything that claims to be independent really is.
here are some SPC-1 benchmark results.
it will take some effort to understand what is being tested. still, better than nothing at all.
> On Jan 29, 2018, at 12:22 PM, gus bjorklund wrote:
> here are some SPC-1 benchmark results.
> it will take some effort to understand what is being tested. still, better than nothing at all.
forgot to mention: “there are three kinds of lies: lies, damned lies, and benchmarks.”
(paraphrasing benjamin disraeli)
Pure Storage is conspicuously missing from those benchmarks.
they are missing because they claim they aren’t allowed to participate. cuz the benchmark rules do not allow for dedup, compression, etc.
they could turn those off if they wanted.
pure storage marketing is very creative.
One of the posters on the thread about Pure Storage that prompted me to ask about it here has just said:
In addressing network latency, the networks that support storage are either high-bandwidth Ethernet (10GbE and above), or FibreChannel (designed as a very low latency protocol specifically for connecting storage arrays to servers). Network latency, as a rule, is very, very low, measured in nanoseconds (whereas storage latencies are microseconds or milliseconds). Usually, network latency is far less of a performance detractor than the storage media or the application itself.
This seems counter to what I have been hearing from the DB experts here. What do you think?
you have to measure end-to-end latency of the system, not just devices by themselves. latency of individual devices is only part of the story and does not take into account things like the device drivers and other things in the data path.
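a minimal sketch of what “end to end” means here, measured from the application side. it times individual random 8 KB reads against a scratch file; the file size, block size, and sample count are arbitrary choices, and `os.pread` is Unix-only. note that without O_DIRECT the OS page cache will absorb many of these reads, so on a real system you would point this at the actual storage path and account for caching:

```python
# Sketch: time random 8 KB reads as the application sees them, so the
# measurement includes the whole data path (syscall, driver, cable, device),
# not just the device at the far end. Sizes below are arbitrary choices.
import os
import random
import statistics
import tempfile
import time

BLOCK = 8192                   # 8 KB, a typical database block size
FILE_SIZE = 64 * 1024 * 1024   # 64 MB scratch file
SAMPLES = 500

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

latencies = []
fd = os.open(path, os.O_RDONLY)
try:
    for _ in range(SAMPLES):
        offset = random.randrange(0, FILE_SIZE - BLOCK)
        start = time.perf_counter()
        os.pread(fd, BLOCK, offset)   # one random block read
        latencies.append(time.perf_counter() - start)
finally:
    os.close(fd)
    os.unlink(path)

print(f"median read latency: {statistics.median(latencies) * 1e6:.1f} us")
print(f"p99 read latency:    {sorted(latencies)[int(SAMPLES * 0.99)] * 1e6:.1f} us")
```

the point is where the stopwatch sits: a vendor's internal tool starts it at their controller, this starts it where the database does.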
and: netapp relies on the NFS protocol for block-level (virtual) device access. i don’t know much about pure since nobody has any test data.
Tim, I agree ... I was looking for some ammunition! :)
Gus, good point.