Suggestions for a new Disk Array

MightyMouse

New Member
Hi,

Need to move to a New Disk Array.

The Progress 9.1D09 databases are mounted on an HP L-3000 (4 x 550MHz CPUs, 16GB RAM; we will be upgrading the server too) running HP-UX 11i. About 250 users connect to the Prod-db and Ext-db databases through host sessions, PC clients, and WebSpeed.

Currently we have a production environment on a Hitachi 9570 array: RAID 0+1 (3 data, 3 parity), a total of 18 x 29GB drives.

The environment is split over 9 mount points, /p00 through /p08, and the databases are at 61% capacity of the filesystems. The Hitachi has 2 x 1Gb FC controllers and the LUNs are split equally over them. The production environment consists of 5 databases; 2 of these are in heavy use with our application.

During busy times we experience processor and disk bottlenecks on certain drives, which slows database performance. I have thought about moving our bi files out to their own mount points and upgrading by adding more disks, but I believe that this is not the best road, as described below.

I am looking at new arrays because our current array is costly to keep on a service contract, we need to add disks, upgrading it does not seem cost effective, EOL is coming up, and we have performance issues during heavy use.

Production DB Environment:
DB Name      Total Extents   Size (GB)   Avg Days to Grow   Growth (GB)   Use
Prod-DB      253             123.05      25                 1.5           Heavy
Ext-Db       30              14.6        72                 1.5           Heavy
Hr-DB        8               3.42        1000               1.5           Light
Smmry-DB     12              5.37        1200               1.5           Light
Maint-DB     5               1.95        -                  -             Light

Total size: 148.4 GB

Array vendors no longer carry the smaller disks; most seem to push 140GB or larger disks and "say" that they will meet our performance needs. I am very leery of the larger disks and wanted to know if anyone out there has experience with the "newer" mid-size arrays: HP, Hitachi (not EMC), any others?

If you have experience with 100GB+ databases and 200+ users and would like to comment or suggest, please do, as I would value comments backed by experience.

Mike
 
1) 9.1D is ancient and unsupported. You need to upgrade. Since you're running such a hoary old release I will also guess that you probably have lots of tuning opportunities that have not been exploited -- for instance how many storage areas do you have configured?

2) Big disks are not, in themselves, a problem so long as you don't fall into the trap of thinking that you only need as many gigabytes of disk as you have of database. IOW it's perfectly ok to configure a 10TB disk array for 50GB of data. For performance you need spindles not gigabytes -- vendors who tell you otherwise are ignorant or lying; or both.
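That spindles-versus-gigabytes point can be shown with some back-of-the-envelope arithmetic. This is only a sketch: the per-drive IOPS figure below is a typical assumption for 15K RPM FC drives, not a measurement from any array in this thread.

```python
# Rough illustration: random-I/O capability scales with spindle count,
# not with capacity. The per-spindle figure is an assumed typical value
# for a 15K RPM drive, not a measured one.

IOPS_PER_15K_SPINDLE = 175  # assumption for illustration only

def array_iops(spindles, iops_per_spindle=IOPS_PER_15K_SPINDLE):
    """Approximate aggregate random IOPS for a striped set of drives."""
    return spindles * iops_per_spindle

# Two ways to buy roughly 2 TB of raw space:
few_big = array_iops(4)      # 4 large drives  -> 700 IOPS
many_small = array_iops(16)  # 16 small drives -> 2800 IOPS

# Same gigabytes, four times the I/O capacity from the 16-spindle layout.
print(few_big, many_small)
```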

You say that you're using "raid 0+1" but then refer to "parity" disks. Parity disks are generally associated with things like RAID 5 (a better term might be "parody" disks because you get a parody of performance with such configurations).

The discussion of multiple mount points is disturbing as well - there's nothing quite so damaging to performance as carving up disks into lots of partitions and then patching them together haphazardly. Having a partition mirrored to another partition on the same drive is an especially painful pathology.

RAID 01 and RAID 10 refer to combinations of striping (RAID 0) and mirroring (RAID 1). RAID 10 is generally considered somewhat more robust but both are excellent performers.
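The robustness difference can be sketched with the standard second-failure argument: once one disk has died, what fraction of the surviving disks would take the array down if they died next? A minimal sketch, assuming N mirrored pairs for RAID 10 and two mirrored N-disk stripe sets for RAID 01:

```python
from fractions import Fraction

def second_failure_fatal_raid10(pairs):
    """RAID 10: N mirrored pairs, striped together. After one disk
    dies, only its mirror partner (1 of the 2N-1 survivors) is fatal."""
    return Fraction(1, 2 * pairs - 1)

def second_failure_fatal_raid01(pairs):
    """RAID 01: two striped sets of N disks, mirrored against each
    other. One dead disk degrades its whole stripe set, so any of the
    N disks in the other set (N of the 2N-1 survivors) is fatal."""
    return Fraction(pairs, 2 * pairs - 1)

# For an 8-disk set (4 pairs):
print(second_failure_fatal_raid10(4))  # 1/7 of second failures are fatal
print(second_failure_fatal_raid01(4))  # 4/7 of second failures are fatal
```

Identical drive counts and identical performance, but RAID 10 tolerates a random second failure far more often, which is why it is usually preferred.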

In your case I'd be looking for a RAID10 configuration with at least 8 drives for the data extents and the bi extents (you've got 5 databases -- 5 sets of discrete bi drives would be silly unless you've got a heck of a lot of TRX going on).

I'd want another 8 disk RAID10 for "other" purposes. Those other purposes would include:

1) after image files (if you aren't running after-imaging you need to stop fooling around and implement it now)
2) probkups
3) -T files
4) application stuff -- source, r-code and what have you

That's a total of 16 drives. 16 x 140GB = roughly 2 TB. You can get a 2TB "disk array" for roughly $1,500 at Fry's (albeit RAID5 and pitifully slow and it'll only have 4 spindles in it...), "slightly more" ;) from EMC et al -- but if properly configured (which is a rare thing to have happen without supervision) it will perform very well.

BTW, EMC's Clariion line is relatively reasonably priced and performs well in these sorts of systems.
 
Thank you Tom,

A correction on the RAID: we are using RAID 1+0 (3D+3P) with 36GB 15K RPM FC drives (in an HDS 9570), with 6 drives making up a RAID group and broken into 3 LUNs shared to the server. There are a total of 18 disks, 3 RAID groups, and 9 LUNs. We are doing triple mirroring with another set of disks that we split off (Silver and UnSilver) to create our backup snap. We do shut down the database for this.

No, I have not split off the bi files; I want to and need to do that with this change. And we do not have AI yet; management does not want me to implement it yet.

My budget has been cut almost entirely: no new array. However, I do need to acquire more disks for the current 9570 array as my database is beginning to fill up the mount points, 79% - 81% full. This time last year we were at 61%. I have convinced management to add to the existing array.

I do not know which would be better: buying a couple of shelves of the 72GB 15K drives and redoing the same configuration, doubling capacity by replacing the 36GB drives with 72GB drives, or purchasing some re-certified 36GB drives (as I cannot buy new) and adding to what I already have. I would, of course, take your advice on splitting off the bi to its own disks. What are your suggestions on this?

Thanks,
Mike
 
You know, with all that old iron, there is a pretty good chance you could replace both server and array with cheap new hardware and get a payback in less than a year because of the service contracts. Having your budget slashed actually argues more strongly for this direction, not against it.
 
Thomas makes a very good point.

But assuming that you cannot get management to see reason... I wouldn't worry too much about having different-sized disks. I'd look at the prices and see which option gives the most bang for the buck: first in terms of spindles, not space; then in terms of the speed of those spindles; and lastly in terms of the space I can get.

You need to work harder at convincing them that they need after-imaging. There was a post earlier this week on PEG where the poster pointed out that his site previously switched ai files every 3 hours. They've had 5 crashes in the last year (which seems like a problem to me, but that's another topic). Twice those crashes occurred just a few minutes before an ai switch and the current ai file was not recoverable, which means that they lost 2 hours and 58 minutes of data in those crashes.

Of course that's better than losing 23 hours and 58 minutes if the same thing happens with just a daily backup. But it's a far cry from 15 minutes if you switch after image files every 15 minutes (or 5 or 2 or...).
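The arithmetic behind that argument is simple enough to write down. A minimal sketch: the worst-case exposure is the age of the current, not-yet-archived ai extent at the moment of the crash, so it is bounded by the switch interval.

```python
def data_at_risk_minutes(switch_interval_min, minutes_since_last_switch):
    """Minutes of committed transactions sitting in the current,
    not-yet-archived ai extent. If a crash makes that extent
    unrecoverable, this is what you lose. The worst case approaches
    the full switch interval."""
    assert 0 <= minutes_since_last_switch < switch_interval_min
    return minutes_since_last_switch

# The PEG poster's case: 3-hour switches, crash ~2 minutes before a switch.
print(data_at_risk_minutes(180, 178))  # 178 minutes (2h58m) of data lost

# Same unlucky crash timing with 15-minute switches.
print(data_at_risk_minutes(15, 13))    # 13 minutes of data lost
```

Shortening the switch interval is the only knob here: it directly caps the worst-case loss, independent of when the crash happens.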

Triple mirroring is very handy from an operational standpoint but it doesn't make you 3x safer. It just means that you have 3 copies of whatever -- including 3 copies of rm * or 3 copies of FOR EACH customer: DELETE customer. END.
 
Thank you both for your input.
As for new hardware, I tried and it was a "NO!" from management.

I have been looking at the differences between 36GB and 72GB drives. Between the 146GB, 72GB, and 36GB drives, the specs all seem proportionately about the same:

Spec                          146.8GB     73.4GB     36.7GB
R/W data heads                8           4          2
Capacity per R/W head (GB)    18.35       18.35      18.35
Internal data rate            685 - 1,142 (same for all)
Bytes/track                   471,916 (same for all)
Tracks/surface                50,864      50,864     50,864
Tracks/inch                   85,000      85,000     85,000
Peak bits/inch                628         628        628
Rotation speed (RPM)          15K         15K        15K
Avg rotational latency (ms)   2.0         2.0        2.0
Format time, max (minutes):
  with verify                 90          60         30
  without verify              45          30         15

They seem like they will produce the same performance. Does anyone have any thoughts/experience with this?
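The "proportionately the same" observation checks out arithmetically; dividing each model's capacity by its head count (figures taken from the spec table above) gives the same per-head capacity, which is consistent with identical platters in different quantities.

```python
# Capacity and R/W head counts from the drive spec table above.
drives = {
    "146G": (146.8, 8),
    "72G":  (73.4, 4),
    "36G":  (36.7, 2),
}

# Every model works out to the same capacity per head, so the
# mechanics (seek, latency, transfer rate per head) should match;
# only the number of platters differs.
for name, (capacity_gb, heads) in drives.items():
    print(name, round(capacity_gb / heads, 2))  # 18.35 GB/head for all three
```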

I am still working on selling AI to management.
Oh, and congratulations on your new career with wss, Tom.

Mike
 
As for new hardware, I tried and it was a "NO!" from management.

You do recognize the difference between:

1. I want to buy new hardware ....

2. I have analyzed our current service costs in comparison with the cost of purchasing and servicing new equipment and can demonstrate a ROI in N months.

?
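A minimal sketch of the kind of break-even analysis point 2 describes. All dollar figures below are made-up placeholders; you would substitute your actual service-contract costs and hardware quotes.

```python
def breakeven_months(old_monthly_service, new_purchase_price,
                     new_monthly_service=0.0):
    """Months until the cumulative savings on service contracts repay
    the purchase price of the new gear. Returns None if the new gear
    never pays for itself."""
    monthly_savings = old_monthly_service - new_monthly_service
    if monthly_savings <= 0:
        return None
    return new_purchase_price / monthly_savings

# Hypothetical numbers only: $4,000/month to keep old iron on contract,
# $36,000 for new hardware carrying a $500/month support cost.
months = breakeven_months(old_monthly_service=4000,
                          new_purchase_price=36000,
                          new_monthly_service=500)
print(months)  # 36000 / 3500, a bit over 10 months
```

If the result comes out under 12, "we can't afford new hardware" is actually "we can't afford to keep the old hardware", which is the argument management needs to hear.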
 
Are the costs proportional too? I'd spend my budget dollars in the way that gets me the largest number of disks. If you can fill the cabinet and still have money left over then consider larger disks. Or a party.
 
I have another question: in implementing a bi strategy, would you suggest that I use the same RAID, RAID 0+1 (3 data, 3 parity)? Or would something else be better? I will be getting quotes on 72GB drives and want to do this the right way. When I create a snap of the database, would I need to truncate the bi first? Or, by shutting down the database and creating the snap, would I have the bi as part of the snap? I am not sure how this works, that is, whether or not to truncate the bi when using a shadow image.

I will also be looking into getting more disks to create between 4 and 6 mount points for the bi. Any suggestions on this would be great too.

I also need to get more information on storage areas. I currently only have one.

How much of a performance boost do you think I would get from moving the bi and implementing storage areas, separating the tables and indexes?

Regards,
Mike
 
Splitting the bi file off onto a dedicated disk was really, really important in the 1990s. It is less important today and probably not really much of a concern unless you're either running a benchmark or have a truly important transaction intensive process that needs a lot of throughput. Basically I wouldn't worry about it unless I had a proven need.

You do not need to truncate the bi file in order to take a snapshot. Nor do you need to shut down if the snapshot capability is on a par with something like EMC's SRDF or IBM's "flash copy". But you do need to make sure that the bi file is an integral part of the snapshot (which is easier if it is on the same filesystem as everything else). Taking snapshots is something that needs to be carefully set up and tested -- don't just assume that it works.

And a snapshot is in no way a substitute for after-imaging. A snapshot of a corrupt database is just as corrupt as the original... After image logs, OTOH, allow you to rebuild your database right up to the point where something went wrong. It is an extremely important difference.

The performance benefits from a good implementation of storage areas are enormous. Take a look at this article -- Surprising Benefits of Storage Areas -- for more information. There is a lot more to it than just separating indexes and tables.

You might want to consider engaging an experienced consultant ;)
 