<p>List the number of clients by platform:</p>
<pre><code>SELECT platform_name,count(*) AS "Number of Nodes" FROM nodes GROUP BY platform_name ORDER BY "Number of Nodes"</code></pre>
<p>List all of the tapes used by a node:</p>
<pre><code>SELECT DISTINCT node_name,volume_name,stgpool_name FROM volumeusage WHERE node_name='xxx'</code></pre>
<p>List all of the nodes using a tape:</p>
<pre><code>SELECT DISTINCT node_name,volume_name,stgpool_name FROM volumeusage WHERE volume_name='xxx'</code></pre>
<p>List tapes used in the last 24 hours:</p>
<pre><code>SELECT volume_name,last_write_date FROM volumes WHERE (last_write_date >=current_timestamp - 24 hours)</code></pre>
<p>Or to list the last write date for all the tapes in order:</p>
<pre><code>SELECT volume_name,last_write_date FROM volumes ORDER BY last_write_date</code></pre>
<p>Or to see just the number of tapes written to in the last 24 hours:</p>
<pre><code>SELECT count(*) AS "Tapes Used" FROM volumes WHERE (last_write_date >=current_timestamp - 24 hours)</code></pre>
<p>List tapes with errors:</p>
<pre><code>SELECT volume_name, read_errors,write_errors FROM volumes WHERE (read_errors >= 1 OR write_errors >= 1)</code></pre>
<p>List the number of tapes to be reclaimed at a certain level:</p>
<pre><code>SELECT count(*) FROM volumes WHERE (stgpool_name='TAPEPOOL' AND upper(status)='FULL' AND pct_utilized < 70)</code></pre>
<p>List the MB backed up last night per node:</p>
<pre><code>SELECT entity AS "Node name", CAST(SUM(bytes)/1024/1024 AS DECIMAL(8,2)) AS "MB xfer", SUBSTR(CAST(MIN(start_time) AS CHAR(29)),1,10) AS "start date", SUBSTR(CAST(MIN(start_time) AS CHAR(29)),12,8) AS "start time", SUBSTR(CAST(MAX(end_time) AS CHAR(29)),1,10) AS "end date", SUBSTR(CAST(MAX(end_time) AS CHAR(29)),12,8) AS "end time" FROM summary WHERE (activity='BACKUP' AND start_time >= current_timestamp - 24 hours) GROUP BY entity ORDER BY "MB xfer"</code></pre>
<p>List how much data was backed up yesterday:</p>
<pre><code>SELECT entity AS "Node name", CAST(SUM(bytes)/1024/1024 AS DECIMAL(8,2)) AS "MB xfer", SUBSTR(CAST(MIN(start_time) AS CHAR(29)),1,10) AS "start date", SUBSTR(CAST(MIN(start_time) AS CHAR(29)),12,8) AS "start time", SUBSTR(CAST(MAX(end_time) AS CHAR(29)),1,10) AS "end date", SUBSTR(CAST(MAX(end_time) AS CHAR(29)),12,8) AS "end time" FROM summary WHERE (activity='BACKUP' AND start_time >= current_timestamp - 24 hours) GROUP BY entity</code></pre>
<p>Or to just see how much was backed up system wide:</p>
<pre><code>SELECT CAST(SUM(bytes)/1024/1024 AS DECIMAL(8,2)) AS "MB xfer" FROM summary WHERE (activity='BACKUP' AND start_time>=current_timestamp - 24 hours)</code></pre>
<p>List what nodes did restores yesterday:</p>
<pre><code>SELECT entity, bytes, start_time, successful FROM summary WHERE (activity='RESTORE' AND start_time>=current_timestamp - 24 hours)</code></pre>
<p>List the tapes in use and max scratch by storage pool:</p>
<pre><code>SELECT a.stgpool_name,a.maxscratch,count(*) AS Volumes FROM stgpools a, volumes b WHERE a.stgpool_name = b.stgpool_name GROUP BY a.stgpool_name,a.maxscratch</code></pre>
<p>List the number of tapes filling and how full by storage pool:</p>
<p>This shows, per storage pool, how many tapes are in "filling" status and how full they are on average. I wrote it for the Big Brother monitoring client as a replacement for <code>q stg</code>:</p>
<pre><code>SELECT a.stgpool_name AS "STORAGE POOL", CAST(a.EST_CAPACITY_MB AS DECIMAL(12,0)) AS "SIZE MB", CAST(a.PCT_UTILIZED AS DECIMAL(3,0)) AS "PCT", count(*) AS Filling, CAST(AVG(b.PCT_UTILIZED) AS DECIMAL(3,0)) AS "PCT Full" FROM stgpools a, volumes b WHERE (a.stgpool_name = b.stgpool_name AND (upper(b.status)='FILLING' OR b.devclass_name='DISK')) GROUP BY a.stgpool_name, a.EST_CAPACITY_MB, a.PCT_UTILIZED</code></pre>
<p>Check database performance:</p>
<p>List the DB Pages backed up per hour (should be 5M or above).</p>
<pre><code>SELECT activity, CAST((end_time) AS date) AS "Date", (examined/CAST((end_time-start_time) seconds AS DECIMAL(18,13))*3600) "Pages backed up/Hr" FROM summary WHERE activity='FULL_DBBACKUP' AND days(end_time) - days(start_time)=0</code></pre>
<p>List the DB Pages expired per hour (should be 3.8M or above).</p>
<pre><code>SELECT activity, CAST((end_time) AS date) AS "Date", (examined/CAST((end_time-start_time) seconds AS DECIMAL(24,2))*3600) "Objects Examined/Hr" FROM summary WHERE activity='EXPIRATION'</code></pre>
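<p>Any of the queries above can also be run non-interactively from the administrative command line. As a rough sketch (the admin ID and password here are placeholders), <code>dsmadmc</code> accepts the query as an argument, and the <code>-dataonly=yes</code> and <code>-commadelimited</code> options strip the headers so the output is easy to feed into a script:</p>
<pre><code># ADMIN/SECRET are placeholders; substitute a real admin ID and password
dsmadmc -id=ADMIN -password=SECRET -dataonly=yes -commadelimited "SELECT volume_name, read_errors, write_errors FROM volumes WHERE (read_errors >= 1 OR write_errors >= 1)"</code></pre>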