<p>IBM only supports a queue depth of 1 when attaching to XIV with the default round_robin algorithm.  Usually round_robin or load_balance is the best choice, but with a queue depth of 1 there is a real performance penalty for asynchronous I/Os.  This looks to have been fixed in 5.3.10 (APAR IZ42730) and 6.1 (APAR IZ43146), but it is still broken (and probably never will be fixed) in earlier releases.</p>
<p>So, IBM&#8217;s recommendation is to split your storage needs into a number of LUNs matching the number of paths to your XIV, use the fail_over algorithm with a larger queue depth, and give each LUN a different highest-priority path.  This is kind of a poor man&#8217;s load balancing.  It&#8217;s not that bad, other than having to look at 4 or more hdisks for every LUN, and having to figure out which path to give the highest priority on each one! </p>
<p>IBM doesn&#8217;t really see this as a problem, but it&#8217;s a huge pain to do correctly in an enterprise.  </p>
<p>So, how do we start?  First, figure out what hdisk you&#8217;re talking about, then run:</p>
<pre><code>lspath -H -l <em>hdiskx</em> -F "status name parent path_id connection"
status  name   parent path_id connection

Enabled hdiskx fscsi0 0       50050763061302fb,4010401200000000
Enabled hdiskx fscsi0 1       50050763060302fb,4010401200000000
Enabled hdiskx fscsi1 2       50050763060802fb,4010401200000000
Enabled hdiskx fscsi1 3       50050763061802fb,4010401200000000</code></pre>
<p>We need the parent device and the connection bit (WWN,LUN#) to specify just a single path.  Then run:</p>
<pre><code>lspath -AHE -l <em>hdiskx</em> -p <em>fscsi0</em> -w "<em>50050763061302fb,4010401200000000</em>"
attribute value              description  user_settable

scsi_id   0x20400            SCSI ID      False
node_name 0x5005076306ffc2fb FC Node Name False
priority  1                  Priority     True</code></pre>
<p>That shows you the priority of this path.  You can see it&#8217;s still at the default of 1.  You can check the other paths the same way.</p>
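<p>Rather than running that by hand for every path, you can generate the check commands from the path listing.  This is just a sketch: the here-document stands in for live <code>lspath -l hdiskx -F "name parent connection"</code> output on the AIX host, and <code>hdiskx</code> is a placeholder.</p>

```shell
# Emit one "show path priority" command per path.
# The here-document stands in for live output of:
#   lspath -l hdiskx -F "name parent connection"
while read name parent connection; do
  echo lspath -AHE -l "$name" -p "$parent" -w "$connection"
done <<'EOF'
hdiskx fscsi0 50050763061302fb,4010401200000000
hdiskx fscsi0 50050763060302fb,4010401200000000
hdiskx fscsi1 50050763060802fb,4010401200000000
hdiskx fscsi1 50050763061802fb,4010401200000000
EOF
```

<p>On the AIX box itself you could feed the real <code>lspath -F</code> output straight into the loop instead of the here-document.</p>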
<p>The goal is to spread the load across all the available paths.  With 4 paths, that means creating 4 LUNs: if we need 4GB, we need four 1GB LUNs.  Then we can give each one a different primary path.  So, in this example, we should run:</p>
<pre><code>chpath -l <em>hdiskx</em> -p <em>fscsi0</em> -w <em>50050763061302fb,4010401200000000</em> -a priority=1
chpath -l <em>hdiskx</em> -p <em>fscsi0</em> -w <em>50050763060302fb,4010401200000000</em> -a priority=2
chpath -l <em>hdiskx</em> -p <em>fscsi1</em> -w <em>50050763060802fb,4010401200000000</em> -a priority=3
chpath -l <em>hdiskx</em> -p <em>fscsi1</em> -w <em>50050763061802fb,4010401200000000</em> -a priority=4</code></pre>
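<p>Those four chpath commands can be generated instead of typed, too.  A hedged sketch: the here-document again stands in for live lspath output, and the loop only echoes the commands so you can review them before pasting them into a root shell on the AIX host.</p>

```shell
# Build the chpath commands, handing out priorities 1..4 in listing order.
prio=1
while read parent connection; do
  echo chpath -l hdiskx -p "$parent" -w "$connection" -a priority=$prio
  prio=$((prio + 1))
done <<'EOF'
fscsi0 50050763061302fb,4010401200000000
fscsi0 50050763060302fb,4010401200000000
fscsi1 50050763060802fb,4010401200000000
fscsi1 50050763061802fb,4010401200000000
EOF
```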
<p>The first command isn&#8217;t really necessary since 1 is already the default, but I was on a roll. Now, we have to change the algorithm for the hdisk and set the queue depth:</p>
<pre><code>chdev -l <em>hdiskx</em> -a algorithm=fail_over -a queue_depth=32</code></pre>
<p>Make sure to stagger the priorities on the next hdisk, so that path 1 gets a priority of 1, path 2 gets 2&#8230; and path 0 gets a priority of 4.  Rinse and repeat until you have 4 LUNs, each with a different primary path.</p>
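<p>The rotation can be written down as simple arithmetic.  A minimal sketch, assuming 4 paths and 4 hdisks (the names are placeholders): on hdisk N, the path whose path_id matches N gets priority 1 and the rest follow in order.</p>

```shell
# Rotate priorities: hdisk N gives path_id N priority 1, N+1 priority 2, etc.
paths=4
for disk in 0 1 2 3; do
  for path in 0 1 2 3; do
    priority=$(( (path - disk + paths) % paths + 1 ))
    echo "hdisk$disk path_id=$path priority=$priority"
  done
done
```

<p>Plug each computed priority into the chpath command for the matching path and you get an even spread without having to keep the rotation in your head.</p>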
<p>Now wasn&#8217;t that easy?  Oh, and when you add more disks, be sure to keep them distributed as evenly as possible.</p>
<p><em>Load balance algorithm w/ AIX and XIV, published 2009-07-27.</em></p>