EMC CLARiiON: FLARE 30 Best Practice

Overview:
Best practices for optimal configuration of the components of the CLARiiON storage system

General Best Practices:
Overall
– Read the manual
– Install latest firmware
– Know the workload
– Resolve problems quickly
– Use the default settings

Host Best Practices:
I/O Types:
– Sequential vs random
– Write vs read
– Large-block size vs small block size
– Steady vs bursty
– Multiple-threaded vs single threaded
Comments:
– Writes consume more Clariion resources than reads
– Cache-based reads consume the fewest resources & deliver the highest throughput
– Random I/O workloads have highest percentage of cache misses
– Large I/O’s (>64kB) deliver better bandwidth
– Understand the I/O pattern, e.g. spikes during the day & sequential I/O during backups
– Watch I/O thread queues
– Plaid techniques (also called stripe-on-stripe) layer host-level volume striping on top of the CLARiiON's own LUN striping. Variants:
o Dedicated RAID group
- Two active fibre paths from the server to the LUN, one via each SP
o Multi-system
- Spans storage systems: one host volume striped across LUNs presented by both SPs of two storage systems
o Cross
- More than one volume built from the same RAID group
- Effective when:
• I/O is small and random
• Bursts are not concurrent across all volumes
- There are various dos and don'ts (refer to the best-practice paper)
– Regularly defragment LUN’s (not pool-based or FAST Cache)
– Ensure file-system and drive alignment
– Limit zoning to two SP’s per HBA
– An appropriate HBA queue depth usually eliminates the possibility of LUN-generated QFULL conditions

Network Best Practices
Comments:
– Keep iSCSI storage traffic on networks separate from general LAN traffic
– Also have each SP on a separate subnet

Storage System Best Practices
Comments:
– Port speed is 4 Gb/s (about 360 MB/s of bandwidth per FC port); upgradeable to 8 Gb/s
– Can allocate all storage system memory as write cache (best with 20% for read cache)
– Balance load across storage system resources
– Provision hot spares appropriately (ie, drive types, same buses, etc)
– Avoid RAID groups with a low data-to-parity drive ratio
– Balance RAID groups across buses. Best is horizontal provisioning

LUN Provisioning
– LUNs are a host-visible construct built on a RAID group
– When the workload is random I/O, distribute its LUNs across as many RAID groups as possible
– When the workload is sequential I/O, distribute its LUNs across as few RAID groups as possible
– Separate LUN workloads between RAID groups: keep LUNs doing random I/O on different RAID groups from LUNs doing sequential I/O
– Ideally, all RAID groups should have similar utilisation
– If high-utilisation LUNs must share a RAID group, keep them consecutive to reduce drive seek distance
– Separate recovery data (clones, log files) from application data

Virtual Provisioning
– Conceptually, a storage pool is a file system overlaid on traditional RAID. It adds performance & capacity overhead. A pool will be segmented into a number of private RAID groups
– Homogenous pools
o Use FC for VP pools with thin LUN’s due to higher performance & availability
o Use drives of same type and speed
o Use RAID 5; use RAID 6 for large SATA drives
o Size pools in multiples of 5 drives for RAID 5, and multiples of 8 for RAID 6 & 10
– Use thick LUN’s for high-bandwidth workloads
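The pool-sizing rule above can be sketched as a small check. This is an illustrative helper, not an EMC tool; the function name and the pass/fail framing are assumptions.

```python
# Hypothetical helper: validate a homogeneous-pool drive count against the
# multiples guidance (multiples of 5 for RAID 5, multiples of 8 for
# RAID 6 and RAID 10), so the pool divides into whole private RAID groups.

PREFERRED_MULTIPLE = {"RAID5": 5, "RAID6": 8, "RAID10": 8}

def pool_drive_count_ok(raid_type: str, drive_count: int) -> bool:
    """Return True if drive_count is a whole multiple for the RAID type."""
    multiple = PREFERRED_MULTIPLE[raid_type]
    return drive_count > 0 and drive_count % multiple == 0

# Example: a 15-drive RAID 5 pool (three 4+1 private groups) passes,
# while a 12-drive RAID 6 pool does not.
```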

FAST
– Highest performance is with small-block random workload to a Flash-drive tier
– Modest performance with small-block random workload to a SATA-drive tier
– If there is no Flash tier in an FC/SATA virtual pool, use roughly 20% FC and 80% SATA capacity
– Initial data placement option should be “High”

Compression
– Not used at TSA
– Not suitable for private LUNs or for data that is already compressed

Drive Types
– Sequential reads: SATA similar to FC
– Random reads: SATA deteriorates with increasing queue depth
– Sequential writes: SATA similar to FC
– Random writes: SATA deteriorates with increasing queue depth
– Flash drives better than mechanical when:
o Drive utilization > 70%
o Queue length > 12
o Average response times >10ms
o Read share of the I/O mix is 60% or greater
o I/O block size <=16kB
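The Flash-vs-mechanical criteria above can be expressed as a quick screening function. The thresholds come straight from the notes; the function name and the choice to require all conditions at once are assumptions for illustration.

```python
# Illustrative sketch: return True when observed drive metrics match the
# conditions under which Flash drives outperform mechanical drives
# (utilisation > 70%, queue length > 12, response > 10 ms,
#  reads >= 60% of the mix, I/O block size <= 16 kB).

def prefer_flash(utilization_pct, queue_length, avg_response_ms,
                 read_pct, io_size_kb):
    """All criteria from the notes must hold (an assumption; the source
    lists them without saying how to combine them)."""
    return (utilization_pct > 70
            and queue_length > 12
            and avg_response_ms > 10
            and read_pct >= 60
            and io_size_kb <= 16)
```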

FAST Cache
– Workloads with high locality, small block size, random I/O benefit most
– Affecting factors:
o Locality
- When and where data is written; best when writes are recent and drive locations are close together
o Extent
- The size of the working set
o I/O size
- Ideally, the I/O size should match the Flash page size
o I/O type
- Random I/O benefits most
– Disable FAST Caching of all reserved LUN’s

RAID type usage
– RAID 0
o Should be avoided, no business value
– RAID 1
o Not recommended. Not expandable. RAID10 for mirroring
– RAID 3
o Good for large block sequential reads
– RAID 5
o Favoured for messaging, data mining, media-serving, RDBMS (read-ahead, write-behind)
o Ideal applications:
- Random workloads with modest IOPS/GB
- High-performance random I/O where writes are 64 kB or smaller
- RDBMS log activity
- Messaging applications
- Video/media serving
o Best to use a 4+1 drive configuration
– RAID 6
o Strongly recommended for high capacity SATA drives
o RAID 6 & 5 have the same read performance; RAID 6 writes place 50% more load on the drives (six drive I/Os per small write versus four)
– RAID 10
o Best performance on small, random, write-intensive I/O (i.e., writes >30% of the mix)
o Ideal applications:
- High-transaction-rate OLTP
- Large messaging (email) installations
- Real-time brokerage
- RDBMS with frequent updates to small records
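The write costs behind these RAID recommendations can be compared with a small sketch. The RAID 5 factor of 4 drive I/Os per small random write appears in the notes' sizing formula, and RAID 6 carrying 50% more write workload implies 6; the RAID 1/0 factor of 2 (one I/O per mirror) is an assumption from standard mirroring behaviour, not stated in the notes.

```python
# Back-end drive IOPS implied by a small random host workload, per RAID type.
# Penalties: RAID 5 = 4 drive I/Os per write, RAID 6 = 6 (50% more than
# RAID 5), RAID 1/0 = 2 (assumed: one write per mirror copy).
# Reads cost one drive I/O regardless of RAID type.

WRITE_PENALTY = {"RAID5": 4, "RAID6": 6, "RAID10": 2}

def drive_iops(read_iops: float, write_iops: float, raid_type: str) -> float:
    """Total back-end drive IOPS for a small random host workload."""
    return read_iops + WRITE_PENALTY[raid_type] * write_iops

# Example: 2000 host IOPS at 70/30 read/write.
# RAID5: 1400 + 4*600 = 3800; RAID6: 1400 + 6*600 = 5000; RAID10: 2600.
```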

Storage System Sizing & Performance Planning
Comments:
– Consists of calculating the number of drives needed for capacity, plus the number of drives and the storage-system model needed for performance

Components:
– Capacity
o RAID type & drive-group size
– Performance
o Need to know the threading model of the workload, the I/O type & size, and MB/s per drive
o Use rules of thumb to get started
o For small random I/O, plan around a 20 ms response time
o The number of threads affects bandwidth
o Sending fewer, larger I/Os to the back end improves sequential I/O performance
– For a quick estimate:
o Determine the workload
- Total IOPS and the read/write percentages
o Determine the drive load
- Drive IOPS implied by the host I/O load > RAID 5: Drive IOPS = Read IOPS + 4 × Write IOPS
- Write load (sequential) > RAID 5: Drive MB/s = Read MB/s + Write MB/s × (1 + 1/drives)
o Determine the number of drives required
- Performance: total drive IOPS / per-drive IOPS
- Capacity: add it all up
o Determine the number & type of storage systems
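The quick-estimate steps above can be worked through in a short sketch, assuming RAID 5 and the 4× small-write penalty from the notes. The per-drive IOPS and usable-capacity figures in the example are placeholder assumptions, not vendor numbers.

```python
# Minimal sizing sketch: drives needed to satisfy both performance
# (back-end IOPS with the RAID 5 write penalty) and raw capacity.
import math

def drives_needed(host_iops, read_fraction, per_drive_iops,
                  required_gb, usable_gb_per_drive):
    """Return the drive count that meets both performance and capacity."""
    read_iops = host_iops * read_fraction
    write_iops = host_iops - read_iops
    backend_iops = read_iops + 4 * write_iops        # RAID 5: R + 4W
    for_performance = math.ceil(backend_iops / per_drive_iops)
    for_capacity = math.ceil(required_gb / usable_gb_per_drive)
    return max(for_performance, for_capacity)

# Example (assumed figures): 5000 host IOPS, 60% reads, 180 IOPS per
# 15k FC drive, 10 TB required on 400 GB usable drives.
# Back end: 3000 + 4*2000 = 11000 IOPS -> 62 drives; capacity -> 25 drives.
```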

Reference:
EMC CLARiiON FLARE 30 best-practices white paper:
http://www.emc.com/collateral/hardware/white-papers/h5773-clariion-best-practices-performance-availability-wp.pdf

Posted March 1, 2013 by terop
