Poor Performance and Pending Tasks in Satellite 6.1

We recently installed a new Satellite 6.1 server on VMware to replace our older physical Satellite server. On our VMware engineer’s recommendation we configured the VM with 2 cores and 8GB of RAM, a bit under what Red Hat calls for. This is from the Red Hat Satellite 6.1 Installation Guide:

Red Hat Satellite requires a networked base system with the following minimum specifications:
64-bit architecture
The latest version of Red Hat Enterprise Linux 6 Server or 7 Server
A minimum of two CPU cores, but four CPU cores are recommended.
A minimum of 12 GB memory but ideally 16 GB of memory for each instance of Satellite. A minimum of 4 GB of swap space is recommended.

Looking at the system, it didn’t appear to be busy. But, tasks would sit in the Pending state and never complete. After a lot of work with Red Hat, we looked at the /etc/default/pulp_workers file:
# Configuration file for Pulp's Celery workers

# Define the number of worker nodes you wish to have here. This defaults to the number of processors
# that are detected on the system if left commented here.
PULP_CONCURRENCY=1

If PULP_CONCURRENCY were commented out, the number of worker processes would be set to the number of CPUs at startup, 2 in our case. But with it set to 1, there aren’t enough worker processes to take work off the queue. Once we changed PULP_CONCURRENCY to 4, the system load increased and tasks started moving. Red Hat wasn’t sure how this was set at install time, but tuning the setting made a big difference.
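
Here’s a minimal sketch of the change, assuming the stock katello-service wrapper is used to restart the Satellite services (adjust to your setup):

# raise the pulp worker count, then restart the Satellite services
sed -i 's/^PULP_CONCURRENCY=.*/PULP_CONCURRENCY=4/' /etc/default/pulp_workers
katello-service restart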

We also increased the number of vCPUs to 4 and the RAM to 12GB, which dramatically improved performance. vCOPS will tell your VMware administrator to cut back on resources because Satellite is idle almost all the time. But you need to tune for the peak load times, when Satellite is synchronizing repositories, installing packages or running Puppet tasks.

Most of the time our server runs at almost 100% idle, with almost no load and about 7.5GB of RAM used. While running repository synchronization, CPU utilization goes to 100% with a run queue of 8 to 15, and about 8.3GB of RAM used.

Stor2RRD Overview

If you manage your own SAN, you’ll eventually be asked questions like “Why are some of my databases slow?”, “Why do we periodically have performance problems?” or “Do we have a hot LUN?”. Modern arrays have real-time performance monitoring, but not all of them keep historical data, so you can’t always tell whether there’s a periodic performance issue or whether the current performance is out of the ordinary. There are vendor-supplied products and lots of third-party products that gather performance statistics, but they’re usually pretty expensive. If you just need to gather and report on performance data for IBM V7000, SVC, or DS8000 storage, there is a great FREE product called Stor2RRD.

Stor2RRD is developed by XORUX, the developers of the excellent Lpar2RRD tool, and is free to use with relatively modest fees for support. As its name suggests, it collects data from your storage arrays and puts the data into RRD databases. It has much the same requirements as Lpar2RRD, a simple Linux web server with Perl and RRD, and you can run it on the same server as Lpar2RRD. If you have a DS8000 array, you’ll also need the DSCLI package for your storage; for an SVC or V7000 array, SSH is all you need.

We had issues getting version 0.45 to work, but the developers responded to a quick email with a preview of the next version, 0.48, which fixed the problem. The setup was pretty simple, we didn’t have any problems with the provided directions, and we got everything set up and tested in a couple of hours.

After running the tool for a couple of weeks, we’ve collected what seems like a lot of data. Some of the high-level graphs are very busy, so much so that they run the risk of being “data porn”, data for data’s sake that loses some of its usefulness. But you can drill down from these high-level graphs to the Storage Pool, MDisk, LUN, drive or SAN Port level and get details like IOPS, throughput, latency and capacity.

For instance, here is a graph of the read performance for the managed disks in one of our V7000s:
[Graph: mdisk_read]

That sure looks like mdiskSSD3, the teal blue one, is a hot array. Here is the read response time for that particular mdisk:
[Graph: mdiskSSD3_read_resp]
The response time isn’t too bad on that array, 3ms Max and 1.4ms on average, which for this data is more than fast enough.

This is just one simple example of the data that Stor2RRD collects. With this data we have real information showing whether a system’s slowness is because the server is using an abnormal amount of bandwidth, or whether we should consider adding more SSD to an over-subscribed pool. That helps us make intelligent storage decisions and back up our reasoning with real numbers.

For the cost of a small Linux VM, you can deploy a troubleshooting and monitoring tool that rivals some very expensive third-party products. And if it’s helpful in your environment, Stor2RRD annual support is a fraction of the cost of other products.

There is a full-featured demo on the Stor2RRD website where you can try the tool yourself with the developers’ data.

Linux LUN Resize

I recently had someone ask me how to resize a LUN in RHEL without rebooting. The “go-to” method for this admin was to reboot! This is easily accomplished in AIX with “chvg -g”, but how to do it in Linux wasn’t so obvious.

In my example, I’m using LUNs from a SAN-attached XIV storage array, with dm-multipath for multipathing and LVM for carving up the filesystems. After the LUN is resized on the storage array (96GB to 176GB in my case), we have to scan for changes on the SCSI bus. I’m assuming you have the sg3_utils package installed to get the scsi-rescan command. The simplest thing is to just rescan them all, though you can do them individually if you want (see the sketch after the scan output):

[root@mmc-tsm2 bin]# scsi-rescan --forcerescan                                                                                                   
Host adapter 0 (qla2xxx) found.
Host adapter 1 (qla2xxx) found.
Host adapter 2 (qla2xxx) found.
Host adapter 3 (qla2xxx) found.
Host adapter 4 (usb-storage) found.
Scanning SCSI subsystem for new devices
 and remove devices that have disappeared
Scanning host 0 for  all SCSI target IDs, all LUNs
Scanning for device 0 0 0 0 ...
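
As an aside, if you’d rather rescan a single device than the whole bus, you can poke its sysfs rescan attribute directly; the device name here is hypothetical:

# tell the kernel to re-read the size of one SCSI device
echo 1 > /sys/block/sdc/device/rescan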

The full rescan will run for a while as it scans all the LUNs attached to the system. Now let’s look at what multipathd thinks:

# multipath -ll dbvg5
dbvg5 (200173800049510dc) dm-7 IBM,2810XIV
size=96G features='1 queue_if_no_path' hwhandler='0' wp=rw

Multipathd now has to be updated with the correct information:

# multipathd -k"resize map dbvg5"
ok

And check it again:

# multipath -ll dbvg5
dbvg5 (200173800049510dc) dm-7 IBM,2810XIV
size=176G features='1 queue_if_no_path' hwhandler='0' wp=rw
...

Now let’s look at the PV:

# pvs /dev/mapper/dbvg5
  PV                VG            Fmt  Attr PSize   PFree
  /dev/mapper/dbvg5 tsminst1_dbvg lvm2 a--  96.00g  0

The LUN is resized and multipathd has the correct size, but the LVM PV is still the original size. I’m using whole-disk PVs; if you’re using partitions, you’ll have to resize the partition with parted or a similar tool first. Now we just need to resize the PV:

# pvresize /dev/mapper/dbvg5
  Physical volume "/dev/mapper/dbvg5" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

# pvs /dev/mapper/dbvg5
  PV                VG            Fmt  Attr PSize   PFree
  /dev/mapper/dbvg5 tsminst1_dbvg lvm2 a--  176.00g  80.00g

Now we can resize our LVs and run resize2fs on the filesystems to take advantage of the additional space.
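
Here’s a sketch of that last step, assuming a hypothetical LV named db01 in the tsminst1_dbvg volume group with an ext4 filesystem:

# grow the LV into the new free extents, then grow the filesystem to match
lvextend -l +100%FREE /dev/tsminst1_dbvg/db01
resize2fs /dev/tsminst1_dbvg/db01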

Is it Power7 or is it Power7+?


UPDATED

Last year I budgeted for 3 P740C models to replace 3 P6 550 models that were getting long in the tooth. Because of the long lead time in our budget process and the continued downward pressure from IBM on their pricing, I was able to purchase 4 P7+ 740D models. That is a big win for us.

After implementing new 7042-CR7 model HMCs (which I recommend everyone upgrade to) and powering on our first box, I noticed that the latest HMC code reports the server as a Power7 and not a Power7+. The Power7+ chip has been out for nearly a year, and the HMC has been through several updates since then, so why does it not show Power7+ the way it did for Power6+? Here’s what the screen looks like:

[Image: HMC CPU Mode]

So, what does the LPAR say when it’s powered on?  Everywhere I look, it’s Power7.  Here’s what the system thinks the CPU is:

nim # lsattr -El proc0
frequency   4228000000     Processor Speed       False
smt_enabled true           Processor SMT enabled False
smt_threads 4              Processor SMT threads False
state       enable         Processor state       False
type        PowerPC_POWER7 Processor type        False

And prtconf:

nim # prtconf 
System Model: IBM,8205-E6D
Machine Serial Number: 
Processor Type: PowerPC_POWER7
Processor Implementation Mode: POWER 7
Processor Version: PV_7_Compat

I do have a Power7 server running in Power6+ compatibility mode, here’s the output of prtconf on that server:

# prtconf
System Model: IBM,8202-E4B
Machine Serial Number: 10418BP
Processor Type: PowerPC_POWER7
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat

So, maybe the OS commands aren’t aware of the CPU compatibility mode.  This is the latest firmware and the latest AIX 7.1 level.  I’m also running the latest HMC code, and I’ve confirmed the same behavior in the latest VIOS level (2.2.2.2).

Of course, the question was asked: did we really get what we paid for? So I called my IBM Business Partner and asked their technical sales team to dig into this. The box does have Power7+ processors, so it wasn’t mis-ordered and it WAS built correctly in the factory. They reached out to some other customers running a new P7+ 770, and they confirmed the same behavior there, so I assume this is the same across the product line.

Then I had a bit of luck. As part of this upgrade, I’m testing AME (Active Memory Expansion) on our non-production servers. The amepat tool shows the correct processor mode:

nim # amepat

Command Invoked                : amepat

Date/Time of invocation        : Fri Sep 27 11:53:38 EDT 2013
Total Monitored time           : NA
Total Samples Collected        : NA

System Configuration:
---------------------
Partition Name                 : nim
Processor Implementation Mode  : POWER7+ Mode
Number Of Logical CPUs         : 4
Processor Entitled Capacity    : 0.10
Processor Max. Capacity        : 1.00
True Memory                    : 4.00 GB
SMT Threads                    : 4
Shared Processor Mode          : Enabled-Uncapped
Active Memory Sharing          : Disabled
Active Memory Expansion        : Enabled
Target Expanded Memory Size    : 8.00 GB
Target Memory Expansion factor : 2.00

There we see the expected Power7+ mode.  This command works and reports the processor correctly on systems without AME enabled, so it can be used on any LPAR to show the correct processor type for Power7+ systems.  Here is the output on our Power7 LPAR running in Power6+ mode:

# amepat
Command Invoked                : amepat

Date/Time of invocation        : Wed Oct 2 12:41:43 EDT 2013
Total Monitored time           : NA
Total Samples Collected        : NA

System Configuration:
---------------------
Partition Name                 : tsm1
Processor Implementation Mode  : POWER6

So, amepat doesn’t report Power6+ for Power7 systems running in Power6+ mode.

Our IBM client team is looking into this issue, and I expect the relevant commands will be enhanced in a future service pack and HMC level. But in the meantime, we can prove that what we ordered is what was delivered.

UPDATE :

IBM’s answer:

Historically IBM has not included the “+” on any of our products (ie Power 5+, Power6 or Power7+).  You can open a PMR and request a Design Change Request (DCR) to have the “+” added for Power7 servers.

That is an interesting answer to me.  We never purchased any Power6+ servers, so I can’t comment on what the OS commands, lsattr and the like, may or may not report. But, the HMC most definitely did report a separate compatibility mode for Power6+. My only thought is that the Power7+ CPU didn’t introduce a new operational mode, which is a little surprising to me because of some of the work done in this chip.

Privileges Necessary for MySQLDump

I recently set up a backup process to dump a MySQL database to a file for backup. With this database, our DBA group has been using the ‘root’ account set up by the software vendor for administration. This server is used for internal system administration and for sending performance data off to our software vendor, so other than it being bad form to use the ‘root’ ID, there’s probably no regulatory requirement to use user- or role-specific IDs.

That’s all well and good, but I’m not comfortable putting the ‘root’ ID password in scripts or backup products. And I need to ensure the mysqldump command runs and completes before the backup begins, so the natural thing to do is have the backup software run mysqldump as a pre-backup job with a dedicated MySQL user ID. While I’m at it, we really should give the backup user ID the minimum privileges necessary. So, first I create a user:

create user 'backup_user'@'localhost' identified by 'somepassword';

Now what privileges do we need? Here’s a list of privileges we may need:

select: a given, without select we won’t get very far
show view: needed if we want to back up views
trigger: needed if we have triggers to back up
lock tables: needed so mysqldump can lock the tables; not needed if using --single-transaction
reload: needed if using --flush-logs
file: only needed if mysqldump writes the output files itself, not when redirecting the output to a file with ‘>’

So, we can grant these privileges on all the schemas, or just the schemas we want to back up:

grant select, show view, trigger, lock tables, reload, file on *.* to 'backup_user'@'localhost';
flush privileges;
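
And here’s a rough sketch of the pre-backup job itself; the database name and output path are hypothetical, and in practice the password would come from a protected option file rather than the command line:

# dump one schema with the dedicated backup user before the file backup runs
mysqldump --single-transaction -u backup_user -p'somepassword' appdb > /backups/appdb_$(date +%F).sql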

Sending AIX Syslog Data to Splunk

I recently put up a test Splunk server to act as a central syslog repository, addressing one of the findings from our security audits. There are some “open” projects that do this, but Splunk has a lot of features and is “pretty” compared to some of the open alternatives. Getting data from our Linux hosts was a snap, but getting data from our AIX hosts involved a few minor annoyances. Fortunately, we were able to overcome them.

The syslogd shipped with AIX only supports UDP. rsyslog supports TCP, but hasn’t been ported to AIX. Another option is syslog-ng, which has open source and commercial versions compiled for AIX, but after installing all the dependent RPMs for the open source version, it would only segfault with no indication of the problem. So, to accept syslog over UDP, you have to enable a UDP data input on the Splunk server. That’s easily accomplished by going to Manager -> Data Inputs -> UDP -> New, entering 514 for the port, setting the source type to “From list” and choosing “syslog”. Then check “More settings”, select DNS for “Set host” and click Save.
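
If you prefer configuration files to the web UI, the equivalent stanza in inputs.conf would look something like this (a sketch; file locations and defaults vary by Splunk version):

[udp://514]
sourcetype = syslog
connection_host = dns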

Once that is done, add a line to /etc/syslog.conf on the source node to send the data you want Splunk to record to the Splunk server. If your Splunk server is named “splunk” it would look something like this:

*.info        @splunk

One of the problems with AIX’s implementation of syslog is its format. Here’s what Splunk records:

3/26/13 12:32:07.000 PM	Mar 26 12:32:07 HOSTNAME Mar 26 12:32:07 Message forwarded from HOSTNAME: sshd[21168310]: Accepted publickey for root from xxx.xxx.xxx.xxx port 39508 ssh2 host=HOSTNAME   sourcetype=syslog   source=udp:514   process=HOSTNAME

The AIX implementation of syslog adds “Message forwarded from HOSTNAME:” by default. That’s a little annoying to look at, but worse, Splunk uses the hostname of the source as the process name, so you lose the ability to search on the process field. You can turn this off on the source with:

stopsrc -s syslogd
chssys -s syslogd -a "-n"
startsrc -s syslogd

TSM Deduplication Increases Storage Usage (for some values of deduplication)

I ran into an interesting problem recently: a deduplicated pool containing TDP for Oracle backups was consuming much more space than its occupancy indicated. Here’s what the occupancy looked like:

Node Name         Storage         Number of     Logical
                  Pool Name           Files     Space
                                                Occupied
                                                (MB)
----------        ----------    -----------    ----------- 
CERN_ORA_ADMIN    CERNERDISK            810      31,600.95 
CERN_ORA_BUILD    CERNERDISK          1,189      74,594.84 
CERN_ORA_CERT     CERNERDISK            402   3,876,363.50 
CERN_ORA_TEST     CERNERDISK            905   7,658,362.00
LAW_ORA_PROD      CERNERDISK          1,424     544,896.19 
OEM_ORA_RAM       CERNERDISK          2,186     524,795.31

That works out to about 12.7 TB. And, here’s what the storage usage looked like:

Storage         Device          Estimated       Pct       Pct     High  Low  
Pool Name       Class Name       Capacity      Util      Migr      Mig  Mig  
                                                                   Pct  Pct  
-----------     ----------     ----------     -----     -----     ----  ---  
CERNERDISK      CERNERDISK       47,319 G      90.4      90.4       97  90 

That’s about 47TB of storage, 90% used, which works out to just over 42TB of used storage. On top of that, TSM was reporting a “savings” of about 2TB, which means I should have about 44TB of data stored on disk. But only 12.7TB was actually backed up!

IBM has built a few interesting scripts lately to collect TSM data for support. One of these is tsm_dedup_stats.pl, a little Perl script that collects quite a bit of information related to deduplication. Here’s some summary info from that script, run a couple of days later:

Pool: CERNERDISK
  Type: PRIMARY		   Est. Cap. (MB): 48445474.5  Pct Util: 88.7
  Reclaim Thresh: 60	Reclaim Procs: 8		  Next Pool: FILEPOOL
  Identify Procs: 4	  Dedup Saved(MB): 2851277


  Logical stored (MB):	  9921898.18
  Dedup Not Stored (MB):  2851277.87
  Total Managed (MB):	 12773176.05

  Volume count:			        4713
  AVG volume size(MB):	        9646
  Number of chunks:	       847334486
  Avg chunk size:	           87388

There’s some interesting stuff in there: almost 10TB of logical storage in the storage pool, almost 3TB saved by deduplication, and about 12TB of total managed storage, which matches the output from “Q OCCupancy” pretty closely. The output also has a breakdown of the deduplication rate by client type and storage pool:

Client Node Information
-----------------------
    DP Oracle:		7
      Stats for Storage Pool:		CERNERDISK
        Dedup Pct:		22.28%
    TDPO:		1
      Stats for Storage Pool:		CERNERDISK
        Dedup Pct:		23.14%

So far so good, tsm_dedup_stats.pl matches what we’re seeing with the regular TSM administrative commands.

At this point, I ran “REPAIR OCC”. There’s a possible issue where the occupancy reported and the storage reported by “Q STG” can be inaccurate, and this newer command validates and corrects the numbers reported. Unfortunately, running it had no effect on the problem.

The next thing we looked at was the running deduplication worker threads. After the “IDentify DUPlicates” command locates and marks “chunks” as duplicates, background processes run and actually remove the duplicated chunks. Running “SHOW DEDUPDELETE”, one of the undocumented show commands in TSM, reports the number of worker threads defined, the number of active threads, and which node and filesystem IDs are currently being worked on. If all the worker threads are active for a significant amount of time, more worker threads can be started by putting the “DEDUPDELETIONTHREADS” option in the dsmserv.opt file and restarting the server. The default is 8, on the bigger servers I’ve bumped that to 12. Bumping this number will generate more log and database traffic as well as drive more CPU usage, so you’ll want to keep an eye on that.
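
For example, on the bigger servers the dsmserv.opt entry is just one line (tune the value to your own CPU, log and database headroom):

DEDUPDELETIONTHREADS 12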

I only had 4 threads busy routinely, so adding more threads wouldn’t have helped. But those threads were always working on the same node and filespace. The node IDs and node names can be pulled out of the database by running this as the instance owner:

db2 connect to tsmdb1
db2 set schema tsmdb1
db2 "select NODENAME, NODEID, PLATFORM from Nodes"

Those 4 node IDs mapped to 4 of the nodes with data in our problem storage pool. You can see how much work is queued up per node ID with this SQL:

db2 connect to tsmdb1
db2 set schema tsmdb1
db2 "select count(*) as \"chunks\", nodeid from tsmdb1.bf_queued_chunks group by nodeid for read only with ur"

What was happening is that clients were backing up data faster than the TSM server could remove the duplicate data. Part of the problem is probably that so much data is concentrated in one filespace on the TDP clients, so only one deduplication worker thread can process each node’s data at any one time.

We could do client-side deduplication to take the load off the server, but we’ve found that it slows the big TDP backups down too much. So, with only about 2TB of storage saved on 12TB of backed-up data, we came to the conclusion that just turning off deduplication for this storage pool was probably our best bet. After turning off deduplication, it took about 10 days to work through the old duplicate chunks. Now the space used and the occupancy reported are practically identical.
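
For reference, disabling deduplication on the pool comes down to a single administrative command, something like:

update stgpool CERNERDISK deduplicate=no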

Installing the XIVGui on Fedora 16

I’ve been running the XIVGui on a Windows 7 VM so that I have it available from anywhere. That works, but then I have to launch an rdesktop session, log in, launch the XIVGui, and log in again. I finally got tired of the extra steps and decided to load the XIVGui when I upgraded to Fedora 16. I considered making an RPM, but I’m sure IBM would frown on redistributing their code. These manual steps work great on Fedora 16 and should work fine on Fedora 15; I haven’t tested them with RHEL or other versions.

First, you need the 32-bit version of libXtst, even if you’re using the 64-bit client:

yum install libXtst-1.2.0-2.fc15.i686

Then just download the package from IBM’s FTP server, uncompress it, and move the resulting directory to someplace on your system; I used /usr/local/lib.

tar -zxvf xivgui-xxx-linux64.tar.gz
mv XIVGUI /usr/local/lib/

Then, we just need to make a couple of .desktop files.

/usr/share/applications/xivgui.desktop:

[Desktop Entry]
Name=XIVGui
Comment=GUI management tool for IBM XIV
Exec=/usr/local/lib/XIVGUI/xivgui
Icon=/usr/local/lib/XIVGUI/images/xivIconGreen-32.png
Terminal=false
Type=Application
Categories=System;
StartupNotify=true
X-Desktop-File-Install-Version=0.18

/usr/share/applications/xivtop.desktop:

[Desktop Entry]
Name=XIVTop
Comment=GUI performance tool for IBM XIV
Exec=/usr/local/lib/XIVGUI/xivtop
Icon=/usr/local/lib/XIVGUI/images/xivIconTop-32.png
Terminal=false
Type=Application
Categories=System;
StartupNotify=true
X-Desktop-File-Install-Version=0.18

Now XIVGui and XIVTop should show up under “System Tools”.
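
If the launchers don’t show up right away, refreshing the desktop database usually helps (this step may not be necessary on every desktop environment):

update-desktop-database /usr/share/applications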

Static DHCP IPs with KVM Virtualization

When building a virtualization lab system, I’ve found that I want static IPs assigned to my guests. You could just assign static IPs in the guest OS, but then you have to document which IPs are in use for which hosts. It would be easier to just assign static entries in libvirt’s DHCP server, but there doesn’t seem to be a straightforward way to get this done.

What I’ve found works is to destroy the network, edit it directly, and then restart it.

[root@m77413 libvirt]# virsh -c qemu:///system net-destroy default
Network default destroyed

[root@m77413 libvirt]# virsh -c qemu:///system net-edit default
Network default XML configuration edited.

[root@m77413 libvirt]# virsh -c qemu:///system net-start default
Network default started

The XML entries for the default network should look like this:

  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254' />
      <host mac='52:54:00:10:6e:17' name='cent-install.test' ip='192.168.122.2' />
      <host mac='52:54:00:ab:10:2a' name='cent-netserver.test' ip='192.168.122.3' />
      <host mac='52:54:00:df:47:95' name='install.test' ip='192.168.122.10' />
    </dhcp>
  </ip>
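
On newer libvirt versions you can skip the destroy/edit/start cycle and change the running network with virsh net-update; here’s a sketch reusing the first host entry above:

virsh -c qemu:///system net-update default add ip-dhcp-host \
  "<host mac='52:54:00:10:6e:17' name='cent-install.test' ip='192.168.122.2'/>" \
  --live --config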

TSM Windows Client OS Level Demystified

The client OS level field in TSM is pretty straightforward for most operating systems: on Linux it’s the kernel version, and HP-UX and AIX show a recognizable OS level. For Windows, the OS level is more cryptic. Here is a list of the OS levels:

Operating System         Client OS Level
Windows 95               4.00
Windows 98               4.10
Windows ME               4.90
Windows NT 4.0           4.00
Windows 2000             5.00
Windows XP               5.01
Windows Server 2003      5.02
Windows Vista            6.00
Windows Server 2008      6.00
Windows Server 2008 R2   6.01
Windows 7                6.01
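
To see what each of your Windows clients actually reports, you can query the server’s NODES table; here’s a sketch using the administrative SQL interface:

select node_name, platform_name, client_os_level from nodes where upper(platform_name) like 'WIN%'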