hbase-user mailing list archives

From "Taylor, Ronald C" <ronald.tay...@pnl.gov>
Subject RE: Solved - Question on Hbase 0.89 - interactive shell works, programs don't - could use help
Date Tue, 07 Sep 2010 23:59:51 GMT

Thanks - I'll talk to Tim about cutting down on the number of zookeeper peers. At the moment
we at least don't have to worry about storage space - we have 25 TB of disk on each node - 600 TB
total to play with, which is plenty for us. (I'd trade some of that disk capacity for more RAM
per node, but we have to work with the cluster we were given for testing purposes - hopefully
we'll expand in the future.)

Ron

___________________________________________
Ronald Taylor, Ph.D.
Computational Biology & Bioinformatics Group
Pacific Northwest National Laboratory
902 Battelle Boulevard
P.O. Box 999, Mail Stop J4-33
Richland, WA  99352 USA
Office:  509-372-6568
Email: ronald.taylor@pnl.gov

-----Original Message-----
From: Buttler, David [mailto:buttler1@llnl.gov]
Sent: Tuesday, September 07, 2010 4:47 PM
To: user@hbase.apache.org
Cc: Witteveen, Tim
Subject: RE: Solved - Question on Hbase 0.89 - interactive shell works, programs don't - could
use help

Are you sure you want 9 peers in zookeeper?  I think the standard advice is to have:
* 1 peer for clusters of size < 10
* 5 peers for medium-size clusters (10-40)
* 1 peer per rack for large clusters

9 seems like overkill for a cluster of 25 nodes.  Zookeeper should probably have its
own disk on each device (which will reduce your potential storage space), and a majority of
peers must sync each write to disk before a zookeeper write will succeed -- more peers means
that the cost per write is higher.
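For reference, the peer count is just the number of server.N lines in zoo.cfg. A minimal
sketch of a 5-peer ensemble (hypothetical hosts and paths - adjust to your cluster; assumes
an externally managed ZooKeeper):

```
# zoo.cfg sketch - hypothetical hosts and paths
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper        # ideally on a dedicated disk
clientPort=2181
server.1=h02:2888:3888
server.2=h03:2888:3888
server.3=h04:2888:3888
server.4=h05:2888:3888
server.5=h06:2888:3888
```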

Dave



-----Original Message-----
From: Taylor, Ronald C [mailto:ronald.taylor@pnl.gov]
Sent: Tuesday, September 07, 2010 4:40 PM
To: 'user@hbase.apache.org'
Cc: Taylor, Ronald C; Witteveen, Tim
Subject: Solved - Question on Hbase 0.89 - interactive shell works, programs don't - could
use help


J-D, David, and Jeff,

Thanks for getting back to me so quickly. The problem has been resolved. I added

   /home/hbase/hbase/conf

to my CLASSPATH variable, and made sure that both of these files:

  hbase-default.xml
 and
  hbase-site.xml

in the

   /home/hbase/hbase/conf

directory use the values below for setting the quorum (using the h02, h03, etc. nodes on our
cluster):

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>h02,h03,h04,h05,h06,h07,h08,h09,h10</value>

    <description>Comma separated list of servers in the ZooKeeper Quorum.
    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
    By default this is set to localhost for local and pseudo-distributed modes
    of operation. For a fully-distributed setup, this should be set to a full
    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
    this is the list of servers which we will start/stop ZooKeeper on.
    </description>
  </property>
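For anyone hitting the same symptom, the fix above can be sketched in shell (the paths are
this cluster's layout - adjust as needed):

```shell
# Put the HBase conf directory (holding hbase-site.xml and
# hbase-default.xml) on the classpath so client programs pick up
# the quorum settings, then recompile and rerun:
export CLASSPATH="$CLASSPATH:/home/hbase/hbase/conf"
echo "$CLASSPATH"
# javac MyLittleHBaseClient.java
# java MyLittleHBaseClient
```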

This appears to have fixed the problem. Thanks again.
Ron

___________________________________________
Ronald Taylor, Ph.D.
Computational Biology & Bioinformatics Group
Pacific Northwest National Laboratory
902 Battelle Boulevard
P.O. Box 999, Mail Stop J4-33
Richland, WA  99352 USA
Office:  509-372-6568
Email: ronald.taylor@pnl.gov


-----Original Message-----
From: Buttler, David [mailto:buttler1@llnl.gov]
Sent: Tuesday, September 07, 2010 3:24 PM
To: user@hbase.apache.org; 'hbase-user@hadoop.apache.org'
Cc: Witteveen, Tim
Subject: RE: Question on Hbase 0.89 - interactive shell works, programs don't - could use
help

Hi Ron,
The first thing that jumps out at me is that you are getting localhost as the address for
your zookeeper server.  This is almost certainly wrong.  You should be getting a list of your
zookeeper quorum here.  Until you fix that, nothing will work.

You need something like the following in your hbase-site.xml file (and your hbase-site.xml
file should be in the classpath of all of the jobs you expect to run against your cluster):
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
    <description>The port at which the clients will connect.</description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node-01,node-02,node-03,node-04,node-05</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
    By default this is set to localhost for local and pseudo-distributed modes
    of operation. For a fully-distributed setup, this should be set to a full
    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
    this is the list of servers which we will start/stop ZooKeeper on.
    </description>
  </property>

Let me know if that helps,
Dave

-----Original Message-----
From: jdcryans@gmail.com [mailto:jdcryans@gmail.com] On Behalf Of Jean-Daniel Cryans
Sent: Tuesday, September 07, 2010 3:23 PM
To: user@hbase.apache.org
Subject: Re: Question on Hbase 0.89 - interactive shell works, programs don't - could use
help

Your client is trying to connect to a local zookeeper ensemble (grep for "connectString" in
the message). This means that the client doesn't know about the proper configurations in order
to connect to the cluster. Either put your hbase-site.xml on the client's classpath or set
the proper settings on the HBaseConfiguration object.
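A minimal sketch of the second option - setting the quorum programmatically - assuming the
HBase 0.89 client API and this cluster's host names (it needs the HBase and Hadoop jars on
the classpath to compile):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class QuorumConfigExample {
    public static void main(String[] args) throws IOException {
        // create() reads hbase-site.xml/hbase-default.xml if they are on
        // the classpath; explicit set() calls override whatever was read.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum",
                 "h02,h03,h04,h05,h06,h07,h08,h09,h10");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        // With the quorum set, the client can find the cluster.
        HTable table = new HTable(conf, "peptideTable");
        // ... Puts/Gets/Scans as in the sample program below ...
    }
}
```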

J-D

On Tue, Sep 7, 2010 at 3:18 PM, Taylor, Ronald C <ronald.taylor@pnl.gov> wrote:
>
> Hello folks,
>
> We've just installed Hbase 0.89 on a 24-node cluster running Hadoop 0.20.2 here at our
> government lab.
>
> Got a problem. The Hbase interactive shell works fine. I can create a
> table with a column family, add a couple of rows, and get the rows back out.
> Also, the Hbase web site on our cluster at
>
>   http://*h01.emsl.pnl.gov:60010/master.jsp
>
> doesn't appear (to our untrained eyes) to show anything going wrong.
>
> However, the Hbase programs that I used on another cluster that ran an earlier version
> of Hbase no longer run. I altered such a program to use the new API, and it compiles fine.
> But when I try to run it, I get the error msgs seen below.
>
> So - I downloaded the sample 0.89 Hbase program from the Hbase web site and tried that,
> simply altering the table name used to "peptideTable", the column family to "f1", and the
> column to "name".
>
> The interactive shell shows that the table and data are there. But the slightly altered
> program from the Hbase web site, while compiling fine, again shows the same errors as I got
> using my own Hbase program. I've tried running the programs in both my own 'rtaylor' account
> and in the 'hbase' account - I get the same errors.
>
> So my colleague Tim and I think we missed something in the install.
>
> I have appended the test program in full below, followed by the error
> msgs that it generated. Lastly, I have appended a screen dump of the
> contents of the web page at
>      http://*h01.emsl.pnl.gov:60010/master.jsp
>
>  on our cluster.
>
>  We would very much appreciate some guidance.
>
>   Cheers,
>    Ron Taylor
> ___________________________________________
> Ronald Taylor, Ph.D.
> Computational Biology & Bioinformatics Group
> Pacific Northwest National Laboratory
> 902 Battelle Boulevard
> P.O. Box 999, Mail Stop J4-33
> Richland, WA  99352 USA
> Office:  509-372-6568
> Email: ronald.taylor@pnl.gov
>
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> Contents of MyLittleHBaseClient.java:
>
>
> import java.io.IOException;
>
> // javac MyLittleHBaseClient.java
> // javac -Xlint MyLittleHBaseClient.java
> // java MyLittleHBaseClient
>
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Get;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.util.Bytes;
>
> // Class that has nothing but a main.
> // Does a Put, Get and a Scan against an hbase table.
>
> public class MyLittleHBaseClient {
>
>     public static void main(String[] args) throws IOException {
>
>         // You need a configuration object to tell the client where to connect.
>         // When you create a HBaseConfiguration, it reads in whatever you've set
>         // into your hbase-site.xml and in hbase-default.xml, as long as these
>         // can be found on the CLASSPATH.
>         HBaseConfiguration config = new HBaseConfiguration();
>
>         // This instantiates an HTable object that connects you to
>         // the "peptideTable" table.
>         HTable table = new HTable(config, "peptideTable");
>
>         // To add to a row, use Put.  A Put constructor takes the name of the
>         // row you want to insert into as a byte array.  In HBase, the Bytes
>         // class has utility for converting all kinds of java types to byte
>         // arrays.  Below, we are converting the String "2001" into a byte
>         // array to use as a row key for our update. Once you have a Put
>         // instance, you can adorn it by setting the names of columns you want
>         // to update on the row, the timestamp to use in your update, etc. If
>         // no timestamp is given, the server applies the current time to the
>         // edits.
>         Put p = new Put(Bytes.toBytes("2001"));
>
>         // To set the value you'd like to update in the row, specify the column
>         // family, column qualifier, and value of the table cell you'd like to
>         // update.  The column family must already exist in your table schema.
>         // The qualifier can be anything.  All must be specified as byte
>         // arrays, as hbase is all about byte arrays.
>         p.add(Bytes.toBytes("f1"), Bytes.toBytes("name"),
>               Bytes.toBytes("p2001"));
>
>         // Once you've adorned your Put instance with all the updates you want
>         // to make, commit it as follows. (The HTable#put method takes the Put
>         // instance you've been building and pushes the changes you made into
>         // hbase.)
>         table.put(p);
>
>         // Now, to retrieve the data we just wrote. The values that come back
>         // are Result instances. Generally, a Result is an object that will
>         // package up the hbase return into the form you find most palatable.
>         Get g = new Get(Bytes.toBytes("2001"));
>         Result r = table.get(g);
>         byte[] value = r.getValue(Bytes.toBytes("f1"),
>                                   Bytes.toBytes("name"));
>
>         // If we convert the value bytes, we should get back "p2001", the
>         // value we inserted at this location.
>         String valueStr = Bytes.toString(value);
>         System.out.println("GET: " + valueStr);
>
>         // Sometimes, you won't know the row you're looking for. In this case,
>         // you use a Scanner. This will give you a cursor-like interface to the
>         // contents of the table.  To set up a Scanner, do as you did above
>         // when making a Put and a Get: create a Scan.  Adorn it with column
>         // names, etc.
>         Scan s = new Scan();
>         s.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
>         ResultScanner scanner = table.getScanner(s);
>         try {
>             // Scanners return Result instances.
>             // Now, for the actual iteration. One way is to use a while loop like so:
>             for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>                 // Print out the row we found and the columns we were looking for.
>                 System.out.println("Found row: " + rr);
>             }
>
>             // The other approach is to use a foreach loop. Scanners are iterable!
>             // for (Result rr : scanner) {
>             //     System.out.println("Found row: " + rr);
>             // }
>         } finally {
>             // Make sure you close your scanners when you are done!
>             // That's why we have it inside a try/finally clause.
>             scanner.close();
>         }
>     }
> }
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> Error msgs seen:
>
>
> [rtaylor@h01 Sid]$ java MyLittleHBaseClient
>
> 10/09/07 14:09:51 WARN hbase.HBaseConfiguration: instantiating
> HBaseConfiguration() is deprecated. Please use
> HBaseConfiguration#create() to construct a plain Configuration
> 10/09/07 14:09:51 INFO zookeeper.ZooKeeperWrapper: Reconnecting to
> zookeeper
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.3.1-942149, built on 05/07/2010 17:14
> GMT
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:host.name=h01.emsl.pnl.gov
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_21
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:java.home=/usr/java/jdk1.6.0_21/jre
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/hadoop/hadoop/lib/commons-logging-ap
> i-1.0.4.jar:/home/hadoop/hadoop/lib/commons-log\
> ging-1.0.4.jar:/home/hadoop/hadoop/lib/commons-net-1.4.1.jar:/home/had
> oop/hadoop/lib/commons-httpclient-3.0.1.jar:/home/hadoop/hadoop/lib/co
> mmons-el-1.0.jar:/home/hadoo\
> p/hadoop/lib/commons-codec-1.3.jar:/home/hadoop/hadoop/lib/commons-cli
> -1.2.jar:/home/hbase/hbase/hbase-0.89.20100726.jar:/home/rtaylor/Hadoo
> pWork/log4j-1.2.16.jar:/home\
> /rtaylor/HadoopWork/zookeeper-3.3.1.jar:/home/hadoop/hadoop/hadoop-0.2
> 0.2-tools.jar:/home/hadoop/hadoop/hadoop-0.20.2-test.jar:/home/hadoop/
> hadoop/hadoop-0.20.2-example\
> s.jar:/home/hadoop/hadoop/hadoop-0.20.2-ant.jar:/home/hadoop/hadoop/hadoop-0.20.2-core.jar:.
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:java.library.path=/usr/java/jdk1.6.0_21/jre/lib/i386/serve
> r:/usr/java/jdk1.6.0_21/jre/lib/i386:/usr/java/\
> jdk1.6.0_21/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:os.arch=i386
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:os.version=2.6.18-194.11.1.el5
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:user.name=rtaylor
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:user.home=/home/rtaylor
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Client
> environment:user.dir=/home/rtaylor/HadoopWork/Sid
> 10/09/07 14:09:52 INFO zookeeper.ZooKeeper: Initiating client
> connection, connectString=localhost:2181 sessionTimeout=60000
> watcher=org.apache.hadoop.hbase.zookeeper.Zo\
> oKeeperWrapper@1c86be5
> 10/09/07 14:09:52 INFO zookeeper.ClientCnxn: Opening socket connection
> to server localhost/127.0.0.1:2181
> 10/09/07 14:09:52 WARN zookeeper.ClientCnxn: Session 0x0 for server
> null, unexpected error, closing socket connection and attempting
> reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
> 10/09/07 14:09:53 INFO zookeeper.ClientCnxn: Opening socket connection
> to server localhost/127.0.0.1:2181
> 10/09/07 14:09:53 WARN zookeeper.ClientCnxn: Session 0x0 for server
> null, unexpected error, closing socket connection and attempting
> reconnect
> java.net.ConnectException: Connection refused
>
>  .... (long section skipped - just the same msgs as seen above and
> below)
>
> 10/09/07 14:38:31 WARN zookeeper.ClientCnxn: Session 0x0 for server
> null, unexpected error, closing socket connection and attempting
> reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
> 10/09/07 14:38:32 INFO zookeeper.ClientCnxn: Opening socket connection
> to server localhost/127.0.0.1:2181
> 10/09/07 14:38:32 WARN zookeeper.ClientCnxn: Session 0x0 for server
> null, unexpected error, closing socket connection and attempting
> reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
> 10/09/07 14:38:34 INFO zookeeper.ClientCnxn: Opening socket connection
> to server localhost/127.0.0.1:2181
> 10/09/07 14:38:34 WARN zookeeper.ClientCnxn: Session 0x0 for server
> null, unexpected error, closing socket connection and attempting
> reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
> 10/09/07 14:38:36 INFO zookeeper.ClientCnxn: Opening socket connection
> to server localhost/127.0.0.1:2181
> 10/09/07 14:38:36 WARN zookeeper.ClientCnxn: Session 0x0 for server
> null, unexpected error, closing socket connection and attempting
> reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
> 10/09/07 14:38:36 INFO zookeeper.ZooKeeper: Session: 0x0 closed
> [rtaylor@h01 Sid]$
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> Screen dump from
>
> http://*h01.emsl.pnl.gov:60010/master.jsp
>
>
>
> Master Attributes
>
> Attribute Name  Value   Description
>
> HBase Version   0.89.20100726, r979826  HBase version and svn revision
>
> HBase Compiled  Sat Jul 31 02:01:58 PDT 2010, stack     When HBase
> version was compiled and by whom
>
> Hadoop Version  0.20.3-append-r964955-1240, r960957     Hadoop version
> and svn revision
>
> Hadoop Compiled Fri Jul 16 14:34:43 PDT 2010, Stack     When Hadoop
> version was compiled and by whom
>
> HBase Root Directory    hdfs://h01.emsl.pnl.gov:9000/scratch/hbase
> Location of HBase home directory
>
> Load average    0.20833333333333334     Average number of regions per regionserver. Naive
> computation.
>
> Regions On FS   5       Number of regions on FileSystem. Rough count.
>
> Zookeeper Quorum        h05:2182,h04:2182,h03:2182,h02:2182,h10:2182,h09:2182,h08:2182,h07:2182,h06:2182
> Addresses of all registered ZK servers. For more, see zk dump</zk.jsp>.
>
> Catalog Tables
>
> Table   Description
>
> -ROOT-<table.jsp?name=-ROOT->   The -ROOT- table holds references to all .META.
> regions.
>
> .META.<table.jsp?name=.META.>   The .META. table holds references to
> all User Table regions
>
> User Tables
>
> Table   Description
>
> fulltable<table.jsp?name=fulltable>     {NAME => 'fulltable', FAMILIES
> => [{NAME => 'fullcolumnfamily', BLOOMFILTER => 'NONE',
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL
> => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
> BLOCKCACHE => 'true'}]}
>
> peptideTable<table.jsp?name=peptideTable>       {NAME =>
> 'peptideTable', FAMILIES => [{NAME => 'f1', BLOOMFILTER => 'NONE',
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL
> => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
> BLOCKCACHE => 'true'}]}
>
> pseudoetable<table.jsp?name=pseudoetable>       {NAME =>
> 'pseudoetable', FAMILIES => [{NAME => 'pseudoecolumnfamily',
> BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION =>
> 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
> 3 table(s) in set.
>
> Region Servers
>
>        Address Start Code      Load
>
>        h02.emsl.pnl.gov:60030<http://*h02.emsl.pnl.gov:60030/>
> 1282848897506   requests=0, regions=0, usedHeap=24, maxHeap=996
>
>        h03.emsl.pnl.gov:60030<http://*h03.emsl.pnl.gov:60030/>
> 1282848897225   requests=0, regions=0, usedHeap=22, maxHeap=996
>
>        h04.emsl.pnl.gov:60030<http://*h04.emsl.pnl.gov:60030/>
> 1282848897511   requests=0, regions=0, usedHeap=33, maxHeap=996
>
>        h05.emsl.pnl.gov:60030<http://*h05.emsl.pnl.gov:60030/>
> 1282848897452   requests=0, regions=1, usedHeap=32, maxHeap=996
>
>        h06.emsl.pnl.gov:60030<http://*h06.emsl.pnl.gov:60030/>
> 1282848897259   requests=0, regions=0, usedHeap=32, maxHeap=996
>
>        h07.emsl.pnl.gov:60030<http://*h07.emsl.pnl.gov:60030/>
> 1282848897274   requests=0, regions=0, usedHeap=23, maxHeap=996
>
>        h08.emsl.pnl.gov:60030<http://*h08.emsl.pnl.gov:60030/>
> 1282848897531   requests=0, regions=0, usedHeap=25, maxHeap=996
>
>        h09.emsl.pnl.gov:60030<http://*h09.emsl.pnl.gov:60030/>
> 1282848897283   requests=0, regions=0, usedHeap=32, maxHeap=996
>
>        h10.emsl.pnl.gov:60030<http://*h10.emsl.pnl.gov:60030/>
> 1282848897520   requests=0, regions=1, usedHeap=25, maxHeap=996
>
>        h11.emsl.pnl.gov:60030<http://*h11.emsl.pnl.gov:60030/>
> 1282848897521   requests=0, regions=0, usedHeap=29, maxHeap=996
>
>        h12.emsl.pnl.gov:60030<http://*h12.emsl.pnl.gov:60030/>
> 1282848897310   requests=0, regions=1, usedHeap=25, maxHeap=996
>
>        h13.emsl.pnl.gov:60030<http://*h13.emsl.pnl.gov:60030/>
> 1282848897367   requests=0, regions=1, usedHeap=25, maxHeap=996
>
>        h14.emsl.pnl.gov:60030<http://*h14.emsl.pnl.gov:60030/>
> 1282848897365   requests=0, regions=0, usedHeap=30, maxHeap=996
>
>        h15.emsl.pnl.gov:60030<http://*h15.emsl.pnl.gov:60030/>
> 1282848897379   requests=0, regions=0, usedHeap=23, maxHeap=996
>
>        h16.emsl.pnl.gov:60030<http://*h16.emsl.pnl.gov:60030/>
> 1282848897434   requests=0, regions=0, usedHeap=32, maxHeap=996
>
>        h17.emsl.pnl.gov:60030<http://*h17.emsl.pnl.gov:60030/>
> 1282848897507   requests=0, regions=0, usedHeap=31, maxHeap=996
>
>        h18.emsl.pnl.gov:60030<http://*h18.emsl.pnl.gov:60030/>
> 1282848897413   requests=0, regions=0, usedHeap=30, maxHeap=996
>
>        h19.emsl.pnl.gov:60030<http://*h19.emsl.pnl.gov:60030/>
> 1282848897412   requests=0, regions=0, usedHeap=29, maxHeap=996
>
>        h20.emsl.pnl.gov:60030<http://*h20.emsl.pnl.gov:60030/>
> 1282848897394   requests=0, regions=0, usedHeap=30, maxHeap=996
>
>        h22.emsl.pnl.gov:60030<http://*h22.emsl.pnl.gov:60030/>
> 1282848897415   requests=0, regions=0, usedHeap=30, maxHeap=996
>
>        h23.emsl.pnl.gov:60030<http://*h23.emsl.pnl.gov:60030/>
> 1282848897397   requests=0, regions=1, usedHeap=28, maxHeap=996
>
>        h24.emsl.pnl.gov:60030<http://*h24.emsl.pnl.gov:60030/>
> 1282848897475   requests=0, regions=0, usedHeap=23, maxHeap=996
>
>        h25.emsl.pnl.gov:60030<http://*h25.emsl.pnl.gov:60030/>
> 1282848897466   requests=0, regions=0, usedHeap=31, maxHeap=996
>
>        h26.emsl.pnl.gov:60030<http://*h26.emsl.pnl.gov:60030/>
> 1282848897469   requests=0, regions=0, usedHeap=30, maxHeap=996
>
> Total:
>
> servers: 24             requests=0, regions=5
>
> Load is requests per second and count of regions loaded
>
>
>
>

