hbase-issues mailing list archives

From "Noah Watkins (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5718) HBaseTestingUtility causes javax.management.NotCompliantMBeanException
Date Wed, 04 Apr 2012 23:02:22 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246822#comment-13246822 ] 

Noah Watkins commented on HBASE-5718:
-------------------------------------

The unit test I was using to generate the error is below. I'm not using any tricks at all, just 'mvn test' with textbook beginner-level Java bits :)

{noformat}
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class PersiaAdminTest {
  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    TEST_UTIL.startMiniCluster(1);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void testPersiaAdmin() throws IOException {
    assertTrue(true);
  }
}
{noformat}

I also see some of the InstanceAlreadyExistsException errors. The Hadoop-core workarounds seem to involve permissions on the files created by the HDFS mini cluster; the usual suggestion is 'umask 0022', but that didn't solve anything for me. I'm not sure whether the two problems are related. I'll also file an issue there if a duplicate doesn't already exist.

Running 'mvn test -Dtest=TestHBaseTestingUtility' against the 0.92.1 release I get the same set of errors, and the tests are also reported as having run successfully:

{noformat}
Starting DataNode 0 with dfs.data.dir: /home/nwatkins/projects/hbase-0.92.1/target/test-data/21457e26-5373-44c6-9d0b-727445912175/dfscluster_4213e934-65bd-4875-9485-225333165b62/dfs/data/data1,/home/nwatkins/projects/hbase-0.92.1/target/test-data/21457e26-5373-44c6-9d0b-727445912175/dfscluster_4213e934-65bd-4875-9485-225333165b62/dfs/data/data2

javax.management.NotCompliantMBeanException: org.apache.hadoop.hbase.master.MXBean: Method org.apache.hadoop.hbase.master.MXBean.getRegionServers has parameter or return type that cannot be translated into an open type
	at com.sun.jmx.mbeanserver.Introspector.throwException(Introspector.java:419)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.<init>(MBeanAnalyzer.java:118)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.analyzer(MBeanAnalyzer.java:104)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.getAnalyzer(MXBeanIntrospector.java:71)
	at com.sun.jmx.mbeanserver.MBeanIntrospector.getPerInterface(MBeanIntrospector.java:181)
	at com.sun.jmx.mbeanserver.MBeanSupport.<init>(MBeanSupport.java:136)
	at com.sun.jmx.mbeanserver.MXBeanSupport.<init>(MXBeanSupport.java:66)
	at com.sun.jmx.mbeanserver.Introspector.makeDynamicMBean(Introspector.java:184)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:936)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:516)
	at org.apache.hadoop.metrics.util.MBeanUtil.registerMBean(MBeanUtil.java:58)
	at org.apache.hadoop.hbase.master.HMaster.registerMBean(HMaster.java:1646)
	at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:527)
	at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:336)
	at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.IllegalArgumentException: Method org.apache.hadoop.hbase.master.MXBean.getRegionServers has parameter or return type that cannot be translated into an open type
	at com.sun.jmx.mbeanserver.ConvertingMethod.from(ConvertingMethod.java:45)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.mFrom(MXBeanIntrospector.java:81)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.mFrom(MXBeanIntrospector.java:51)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.initMaps(MBeanAnalyzer.java:135)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.<init>(MBeanAnalyzer.java:116)
	... 14 more
Caused by: javax.management.openmbean.OpenDataException: Cannot obtain array class
	at com.sun.jmx.mbeanserver.OpenConverter.openDataException(OpenConverter.java:1389)
	at com.sun.jmx.mbeanserver.OpenConverter.makeArrayOrCollectionConverter(OpenConverter.java:346)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:295)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.OpenConverter.makeTabularConverter(OpenConverter.java:376)
	at com.sun.jmx.mbeanserver.OpenConverter.makeParameterizedConverter(OpenConverter.java:417)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:312)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.OpenConverter.makeCompositeConverter(OpenConverter.java:482)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:309)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.OpenConverter.makeTabularConverter(OpenConverter.java:377)
	at com.sun.jmx.mbeanserver.OpenConverter.makeParameterizedConverter(OpenConverter.java:417)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:312)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.ConvertingMethod.<init>(ConvertingMethod.java:197)
	at com.sun.jmx.mbeanserver.ConvertingMethod.from(ConvertingMethod.java:40)
	... 18 more
Caused by: java.lang.ClassNotFoundException: [Lbyte;
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:186)
	at com.sun.jmx.mbeanserver.OpenConverter.makeArrayOrCollectionConverter(OpenConverter.java:344)
	... 33 more
{noformat}
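For anyone who wants to see the failure mode outside the mini cluster: the exception itself is generic JMX behavior, not HBase-specific. Registering an MXBean whose interface exposes a type that JMX cannot map to an open type drives the same Introspector path shown in the trace above. A minimal sketch (the BadMXBean/Opaque names are made up for illustration, not from HBase):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;

public class MXBeanDemo {

    // A type with no public getters cannot be converted into a JMX
    // CompositeType, so any MXBean attribute of this type is rejected.
    public static class Opaque {
        private final Object handle = new Object();
    }

    // The "MXBean" name suffix makes JMX try to translate every attribute
    // into an open type during registration.
    public interface BadMXBean {
        Opaque getHandle();
    }

    public static class Bad implements BadMXBean {
        @Override
        public Opaque getHandle() {
            return new Opaque();
        }
    }

    // Returns true when the platform MBeanServer rejects the bean with
    // NotCompliantMBeanException, i.e. the same exception HMaster hits.
    public static boolean registrationFails() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            server.registerMBean(new Bad(), new ObjectName("demo:type=Bad"));
            return false;
        } catch (NotCompliantMBeanException e) {
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("rejected = " + registrationFails());
    }
}
```

The interesting part of the HBase trace is the final cause: ClassNotFoundException for "[Lbyte;", which is not a valid array descriptor (a primitive byte array is "[B"), suggesting the open-type converter builds a bad class name for a generic type involving byte[] rather than the MXBean merely being unmappable.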
                
> HBaseTestingUtility causes javax.management.NotCompliantMBeanException
> ----------------------------------------------------------------------
>
>                 Key: HBASE-5718
>                 URL: https://issues.apache.org/jira/browse/HBASE-5718
>             Project: HBase
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.92.1
>         Environment: Stock Ubuntu 11.10.
> java version "1.6.0_23"
> OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.2)
> OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)
>            Reporter: Noah Watkins
>            Assignee: Andrew Purtell
>
> {noformat}
> Running edu.ucsc.srl.persia.PersiaAdminTest
> 12/04/04 13:04:57 INFO hbase.HBaseTestingUtility: Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
> 12/04/04 13:04:57 INFO util.GSet: VM type       = 64-bit
> 12/04/04 13:04:57 INFO util.GSet: 2% max memory = 53.51125 MB
> 12/04/04 13:04:57 INFO util.GSet: capacity      = 2^23 = 8388608 entries
> 12/04/04 13:04:57 INFO util.GSet: recommended=8388608, actual=8388608
> 12/04/04 13:04:57 INFO namenode.FSNamesystem: fsOwner=nwatkins
> 12/04/04 13:04:57 INFO namenode.FSNamesystem: supergroup=supergroup
> 12/04/04 13:04:57 INFO namenode.FSNamesystem: isPermissionEnabled=true
> 12/04/04 13:04:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
> 12/04/04 13:04:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 12/04/04 13:04:57 INFO namenode.NameNode: Caching file names occuring more than 10 times 
> 12/04/04 13:04:57 INFO common.Storage: Image file of size 114 saved in 0 seconds.
> 12/04/04 13:04:57 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/name1 has been successfully formatted.
> 12/04/04 13:04:57 INFO common.Storage: Image file of size 114 saved in 0 seconds.
> 12/04/04 13:04:58 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/name2 has been successfully formatted.
> 12/04/04 13:04:58 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-namenode.properties, hadoop-metrics2.properties
> 12/04/04 13:04:58 INFO util.GSet: VM type       = 64-bit
> 12/04/04 13:04:58 INFO util.GSet: 2% max memory = 53.51125 MB
> 12/04/04 13:04:58 INFO util.GSet: capacity      = 2^23 = 8388608 entries
> 12/04/04 13:04:58 INFO util.GSet: recommended=8388608, actual=8388608
> 12/04/04 13:04:58 INFO namenode.FSNamesystem: fsOwner=nwatkins
> 12/04/04 13:04:58 INFO namenode.FSNamesystem: supergroup=supergroup
> 12/04/04 13:04:58 INFO namenode.FSNamesystem: isPermissionEnabled=true
> 12/04/04 13:04:58 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
> 12/04/04 13:04:58 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 12/04/04 13:04:58 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
> 12/04/04 13:04:58 INFO namenode.NameNode: Caching file names occuring more than 10 times 
> 12/04/04 13:04:58 INFO common.Storage: Number of files = 1
> 12/04/04 13:04:58 INFO common.Storage: Number of files under construction = 0
> 12/04/04 13:04:58 INFO common.Storage: Image file of size 114 loaded in 0 seconds.
> 12/04/04 13:04:58 INFO common.Storage: Edits file /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/name1/current/edits of size 4 edits # 0 loaded in 0 seconds.
> 12/04/04 13:04:58 INFO common.Storage: Image file of size 114 saved in 0 seconds.
> 12/04/04 13:04:58 INFO common.Storage: Image file of size 114 saved in 0 seconds.
> 12/04/04 13:04:58 INFO common.Storage: Image file of size 114 saved in 0 seconds.
> 12/04/04 13:04:59 INFO common.Storage: Image file of size 114 saved in 0 seconds.
> 12/04/04 13:04:59 INFO namenode.NameCache: initialized with 0 entries 0 lookups
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: Finished loading FSImage in 1278 msecs
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: Total number of blocks = 0
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: Number of invalid blocks = 0
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: Number of under-replicated blocks = 0
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: Number of  over-replicated blocks = 0
> 12/04/04 13:04:59 INFO hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 14 msec
> 12/04/04 13:04:59 INFO hdfs.StateChange: STATE* Leaving safe mode after 1 secs.
> 12/04/04 13:04:59 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
> 12/04/04 13:04:59 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> 12/04/04 13:04:59 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
> 12/04/04 13:04:59 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
> 12/04/04 13:04:59 INFO ipc.Server: Starting SocketReader
> 12/04/04 13:04:59 INFO namenode.NameNode: Namenode up at: localhost/127.0.0.1:44291
> 12/04/04 13:04:59 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 12/04/04 13:04:59 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/04/04 13:04:59 INFO http.HttpServer: dfs.webhdfs.enabled = false
> 12/04/04 13:04:59 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
> 12/04/04 13:04:59 INFO http.HttpServer: listener.getLocalPort() returned 58577 webServer.getConnectors()[0].getLocalPort() returned 58577
> 12/04/04 13:04:59 INFO http.HttpServer: Jetty bound to port 58577
> 12/04/04 13:04:59 INFO mortbay.log: jetty-6.1.26
> 12/04/04 13:04:59 INFO mortbay.log: Extract jar:file:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/hdfs to /tmp/Jetty_localhost_58577_hdfs____vcl5oc/webapp
> 12/04/04 13:04:59 INFO mortbay.log: Started SelectChannelConnector@localhost:58577
> 12/04/04 13:04:59 INFO namenode.NameNode: Web-server up at: localhost:58577
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server Responder: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server listener on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 0 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 1 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 2 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 3 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 4 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 5 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 7 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 8 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 6 on 44291: starting
> 12/04/04 13:04:59 INFO ipc.Server: IPC Server handler 9 on 44291: starting
> Starting DataNode 0 with dfs.data.dir: /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data1,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data2
> 12/04/04 13:04:59 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
> 12/04/04 13:04:59 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data1 is not formatted.
> 12/04/04 13:04:59 INFO common.Storage: Formatting ...
> 12/04/04 13:05:00 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data2 is not formatted.
> 12/04/04 13:05:00 INFO common.Storage: Formatting ...
> 12/04/04 13:05:00 INFO datanode.DataNode: Registered FSDatasetStatusMBean
> 12/04/04 13:05:00 INFO datanode.DataNode: Opened info server at 56956
> 12/04/04 13:05:00 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> 12/04/04 13:05:00 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/04/04 13:05:00 INFO datanode.DataNode: dfs.webhdfs.enabled = false
> 12/04/04 13:05:00 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
> 12/04/04 13:05:00 INFO http.HttpServer: listener.getLocalPort() returned 53398 webServer.getConnectors()[0].getLocalPort() returned 53398
> 12/04/04 13:05:00 INFO http.HttpServer: Jetty bound to port 53398
> 12/04/04 13:05:00 INFO mortbay.log: jetty-6.1.26
> 12/04/04 13:05:00 INFO mortbay.log: Extract jar:file:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/datanode to /tmp/Jetty_localhost_53398_datanode____.t2wbfp/webapp
> 12/04/04 13:05:00 INFO mortbay.log: Started SelectChannelConnector@localhost:53398
> 12/04/04 13:05:00 INFO ipc.Server: Starting SocketReader
> 12/04/04 13:05:00 INFO datanode.DataNode: dnRegistration = DatanodeRegistration(127.0.0.1:56956, storageID=, infoPort=53398, ipcPort=60794)
> 12/04/04 13:05:00 INFO hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:56956 storage DS-134885185-127.0.0.1-56956-1333569900707
> 12/04/04 13:05:00 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:56956
> 12/04/04 13:05:00 INFO datanode.DataNode: New storage id DS-134885185-127.0.0.1-56956-1333569900707 is assigned to data-node 127.0.0.1:56956
> 12/04/04 13:05:00 INFO datanode.DataNode: Finished generating blocks being written report for 2 volumes in 0 seconds
> 12/04/04 13:05:00 INFO hdfs.StateChange: *BLOCK* NameNode.blocksBeingWrittenReport: from 127.0.0.1:56956 0 blocks
> 12/04/04 13:05:00 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:56956, storageID=DS-134885185-127.0.0.1-56956-1333569900707, infoPort=53398, ipcPort=60794)In DataNode.run, data = FSDataset{dirpath='/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data1/current,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data2/current'}
> 12/04/04 13:05:00 INFO ipc.Server: IPC Server Responder: starting
> 12/04/04 13:05:00 INFO ipc.Server: IPC Server listener on 60794: starting
> 12/04/04 13:05:00 INFO ipc.Server: IPC Server handler 0 on 60794: starting
> 12/04/04 13:05:00 INFO ipc.Server: IPC Server handler 1 on 60794: starting
> 12/04/04 13:05:00 INFO ipc.Server: IPC Server handler 2 on 60794: starting
> 12/04/04 13:05:00 INFO datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
> 12/04/04 13:05:00 INFO hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:56956, blocks: 0, processing time: 1 msecs
> 12/04/04 13:05:00 INFO datanode.DataNode: BlockReport of 0 blocks took 0 msec to generate and 20 msecs for RPC and NN processing
> 12/04/04 13:05:00 INFO datanode.DataNode: Starting Periodic block scanner.
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:host.name=localhost
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:java.version=1.6.0_23
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:java.vendor=Sun Microsystems Inc.
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-6-openjdk/jre
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:java.class.path=/home/nwatkins/projects/ssdb-hadoop/persia/target/test-classes:/home/nwatkins/projects/ssdb-hadoop/persia/target/classes:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar:/home/nwatkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/nwatkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/nwatkins/.m2/repository/commons-httpclient/commons-httpclient/3.0.1/commons-httpclient-3.0.1.jar:/home/nwatkins/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/nwatkins/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/nwatkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/nwatkins/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/nwatkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/nwatkins/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api/2.5-20081211/servlet-api-2.5-20081211.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/nwatkins/.m2/repository/tomcat/jasper-runtime/5.5.12/jasper-runtime-5.5.12.jar:/home/nwatkins/.m2/repository/tomcat/jasper-compiler/5.5.12/jasper-compiler-5.5.12.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/ant/ant/1.6.5/ant-1.6.5.jar:/home/nwatkins/.m2/re
pository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/nwatkins/.m2/repository/net/java/dev/jets3t/jets3t/0.7.1/jets3t-0.7.1.jar:/home/nwatkins/.m2/repository/net/sf/kosmosfs/kfs/0.3/kfs-0.3.jar:/home/nwatkins/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/nwatkins/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/nwatkins/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1.jar:/home/nwatkins/.m2/repository/com/google/guava/guava/r09/guava-r09.jar:/home/nwatkins/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/nwatkins/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/nwatkins/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/nwatkins/.m2/repository/log4j/log4j/1.2.16/log4j-1.2.16.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro/1.5.3/avro-1.5.3.jar:/home/nwatkins/.m2/repository/org/xerial/snappy/snappy-java/1.0.3.2/snappy-java-1.0.3.2.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/nwatkins/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/nwatkins/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/nwatkins/.m2/repository/org/apache/zookeeper/zookeeper/3.4.3/zookeeper-3.4.3.jar:/home/nwatkins/.m2/repository/org/apache/thrift/libthrift/0.7.0/libthrift-0.7.0.jar:/home/nwatkins/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpclient/4.0.1/httpclient-4.0.1.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpcore/4.0.1/httpcore-4.0.1.jar:/home/nwatkins/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-api/1.5.8/slf4j-api-1.
5.8.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-log4j12/1.5.8/slf4j-log4j12-1.5.8.jar:/home/nwatkins/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/nwatkins/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-core/1.4/jersey-core-1.4.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-json/1.4/jersey-json-1.4.jar:/home/nwatkins/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/nwatkins/.m2/repository/com/sun/xml/bind/jaxb-impl/2.1.12/jaxb-impl-2.1.12.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-server/1.4/jersey-server-1.4.jar:/home/nwatkins/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/nwatkins/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/nwatkins/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/home/nwatkins/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.5.5/jackson-core-asl-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.5.5/jackson-mapper-asl-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.5.5/jackson-jaxrs-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.5.5/jackson-xc-1.5.5.jar:/home/nwatkins/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/nwatkins/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1-tests.jar:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-test/1.0.0/hadoop-test-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftps
erver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/home/nwatkins/projects/openmpi_java/install/lib::/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:java.compiler=<NA>
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:os.name=Linux
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:os.arch=amd64
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:os.version=3.0.0-12-server
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:user.name=nwatkins
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:user.home=/home/nwatkins
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Server environment:user.dir=/home/nwatkins/projects/ssdb-hadoop/persia
> 12/04/04 13:05:00 INFO server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/zookeeper_0/version-2 snapdir /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/zookeeper_0/version-2
> 12/04/04 13:05:00 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:21818
> 12/04/04 13:05:00 INFO persistence.FileTxnSnapLog: Snapshotting: 0x0 to /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/zookeeper_0/version-2/snapshot.0
> 12/04/04 13:05:00 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53709
> 12/04/04 13:05:00 INFO server.NIOServerCnxn: Processing stat command from /127.0.0.1:53709
> 12/04/04 13:05:00 INFO server.NIOServerCnxn: Stat command output
> 12/04/04 13:05:00 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53709 (no session established for client)
> 12/04/04 13:05:00 INFO zookeeper.MiniZooKeeperCluster: Started MiniZK Cluster and connect 1 ZK server on client port: 21818
> 12/04/04 13:05:00 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/hbase.version. blk_-5896652733233295526_1001
> 12/04/04 13:05:01 INFO datanode.DataNode: Receiving block blk_-5896652733233295526_1001 src: /127.0.0.1:46142 dest: /127.0.0.1:56956
> 12/04/04 13:05:01 INFO DataNode.clienttrace: src: /127.0.0.1:46142, dest: /127.0.0.1:56956, bytes: 3, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-5896652733233295526_1001, duration: 1377157
> 12/04/04 13:05:01 INFO datanode.DataNode: PacketResponder 0 for block blk_-5896652733233295526_1001 terminating
> 12/04/04 13:05:01 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_-5896652733233295526_1001 size 3
> 12/04/04 13:05:01 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/hbase.version from client DFSClient_-128056158
> 12/04/04 13:05:01 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/hbase.version is closed by DFSClient_-128056158
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseServer: Starting Thread-49
> 12/04/04 13:05:01 INFO ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=35192
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_23
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-openjdk/jre
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/nwatkins/projects/ssdb-hadoop/persia/target/test-classes:/home/nwatkins/projects/ssdb-hadoop/persia/target/classes:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar:/home/nwatkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/nwatkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/nwatkins/.m2/repository/commons-httpclient/commons-httpclient/3.0.1/commons-httpclient-3.0.1.jar:/home/nwatkins/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/nwatkins/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/nwatkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/nwatkins/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/nwatkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/nwatkins/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api/2.5-20081211/servlet-api-2.5-20081211.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/nwatkins/.m2/repository/tomcat/jasper-runtime/5.5.12/jasper-runtime-5.5.12.jar:/home/nwatkins/.m2/repository/tomcat/jasper-compiler/5.5.12/jasper-compiler-5.5.12.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/ant/ant/1.6.5/ant-1.6.5.jar:/home/nwatkins/.m2/repos
itory/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/nwatkins/.m2/repository/net/java/dev/jets3t/jets3t/0.7.1/jets3t-0.7.1.jar:/home/nwatkins/.m2/repository/net/sf/kosmosfs/kfs/0.3/kfs-0.3.jar:/home/nwatkins/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/nwatkins/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/nwatkins/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1.jar:/home/nwatkins/.m2/repository/com/google/guava/guava/r09/guava-r09.jar:/home/nwatkins/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/nwatkins/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/nwatkins/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/nwatkins/.m2/repository/log4j/log4j/1.2.16/log4j-1.2.16.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro/1.5.3/avro-1.5.3.jar:/home/nwatkins/.m2/repository/org/xerial/snappy/snappy-java/1.0.3.2/snappy-java-1.0.3.2.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/nwatkins/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/nwatkins/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/nwatkins/.m2/repository/org/apache/zookeeper/zookeeper/3.4.3/zookeeper-3.4.3.jar:/home/nwatkins/.m2/repository/org/apache/thrift/libthrift/0.7.0/libthrift-0.7.0.jar:/home/nwatkins/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpclient/4.0.1/httpclient-4.0.1.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpcore/4.0.1/httpcore-4.0.1.jar:/home/nwatkins/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-api/1.5.8/slf4j-api-1.5.8
.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-log4j12/1.5.8/slf4j-log4j12-1.5.8.jar:/home/nwatkins/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/nwatkins/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-core/1.4/jersey-core-1.4.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-json/1.4/jersey-json-1.4.jar:/home/nwatkins/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/nwatkins/.m2/repository/com/sun/xml/bind/jaxb-impl/2.1.12/jaxb-impl-2.1.12.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-server/1.4/jersey-server-1.4.jar:/home/nwatkins/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/nwatkins/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/nwatkins/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/home/nwatkins/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.5.5/jackson-core-asl-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.5.5/jackson-mapper-asl-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.5.5/jackson-jaxrs-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.5.5/jackson-xc-1.5.5.jar:/home/nwatkins/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/nwatkins/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1-tests.jar:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-test/1.0.0/hadoop-test-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftpserv
er-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/home/nwatkins/projects/openmpi_java/install/lib::/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:os.version=3.0.0-12-server
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:user.name=nwatkins
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/nwatkins
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/nwatkins/projects/ssdb-hadoop/persia
> 12/04/04 13:05:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:21818 sessionTimeout=180000 watcher=master:35192
> 12/04/04 13:05:01 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:21818
> 12/04/04 13:05:01 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24477@kyoto
> 12/04/04 13:05:01 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> 12/04/04 13:05:01 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53711
> 12/04/04 13:05:01 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:21818, initiating session
> 12/04/04 13:05:01 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:53711
> 12/04/04 13:05:01 INFO persistence.FileTxnLog: Creating new log file: log.1
> 12/04/04 13:05:01 INFO server.ZooKeeperServer: Established session 0x1367ef5d1320000 with negotiated timeout 40000 for client /127.0.0.1:53711
> 12/04/04 13:05:01 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:21818, sessionid = 0x1367ef5d1320000, negotiated timeout = 40000
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server Responder: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server listener on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 0 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 1 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 2 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 3 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 4 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 5 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 6 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 7 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 8 on 35192: starting
> 12/04/04 13:05:02 INFO ipc.HBaseServer: IPC Server handler 9 on 35192: starting
> 12/04/04 13:05:02 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=localhost,35192,1333569901811
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: revision
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: hdfsUser
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: hdfsDate
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: hdfsUrl
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: date
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: hdfsRevision
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: user
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: hdfsVersion
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: url
> 12/04/04 13:05:02 INFO hbase.metrics: MetricsString added: version
> 12/04/04 13:05:02 INFO hbase.metrics: new MBeanInfo
> 12/04/04 13:05:02 INFO hbase.metrics: new MBeanInfo
> 12/04/04 13:05:02 INFO metrics.MasterMetrics: Initialized
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseServer: Starting Thread-61
> 12/04/04 13:05:02 INFO ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=MiniHBaseCluster$MiniHBaseClusterRegionServer, port=60207
> 12/04/04 13:05:02 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 668.9m
> 12/04/04 13:05:02 INFO regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:RegionServer:0;localhost,60207,1333569902087
> 12/04/04 13:05:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:21818 sessionTimeout=180000 watcher=regionserver:60207
> 12/04/04 13:05:02 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:21818
> 12/04/04 13:05:02 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24477@kyoto
> 12/04/04 13:05:02 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> 12/04/04 13:05:02 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53712
> 12/04/04 13:05:02 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:21818, initiating session
> 12/04/04 13:05:02 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:53712
> 12/04/04 13:05:02 INFO master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/localhost,35192,1333569901811 from backup master directory
> 12/04/04 13:05:02 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1367ef5d1320000 type:delete cxid:0x10 zxid:0xb txntype:-1 reqpath:n/a Error Path:/hbase/backup-masters/localhost,35192,1333569901811 Error:KeeperErrorCode = NoNode for /hbase/backup-masters/localhost,35192,1333569901811
> 12/04/04 13:05:02 INFO server.ZooKeeperServer: Established session 0x1367ef5d1320001 with negotiated timeout 40000 for client /127.0.0.1:53712
> 12/04/04 13:05:02 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:21818, sessionid = 0x1367ef5d1320001, negotiated timeout = 40000
> 12/04/04 13:05:02 WARN zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/localhost,35192,1333569901811 already deleted, and this is not a retry
> 12/04/04 13:05:02 INFO master.ActiveMasterManager: Master=localhost,35192,1333569901811
> 12/04/04 13:05:02 INFO master.SplitLogManager: found 0 orphan tasks and 0 rescan nodes
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46145, bytes: 7, op: HDFS_READ, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-5896652733233295526_1001, duration: 572653
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/hbase.id. blk_1144459159338720817_1002
> 12/04/04 13:05:02 INFO datanode.DataNode: Receiving block blk_1144459159338720817_1002 src: /127.0.0.1:46146 dest: /127.0.0.1:56956
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:46146, dest: /127.0.0.1:56956, bytes: 38, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_1144459159338720817_1002, duration: 1191073
> 12/04/04 13:05:02 INFO datanode.DataNode: PacketResponder 0 for block blk_1144459159338720817_1002 terminating
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_1144459159338720817_1002 size 38
> 12/04/04 13:05:02 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/hbase.id from client DFSClient_-128056158
> 12/04/04 13:05:02 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/hbase.id is closed by DFSClient_-128056158
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46147, bytes: 42, op: HDFS_READ, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_1144459159338720817_1002, duration: 197585
> 12/04/04 13:05:02 INFO master.MasterFileSystem: BOOTSTRAP: creating ROOT and first META regions
> 12/04/04 13:05:02 INFO regionserver.HRegion: creating HRegion -ROOT- HTD == {NAME => '-ROOT-', IS_ROOT => 'true', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}]} RootDir = hdfs://localhost:44291/user/nwatkins/hbase Table name == -ROOT-
> 12/04/04 13:05:02 INFO wal.HLog: HLog configuration: blocksize=64 MB, rollsize=60.8 MB, enabled=true, optionallogflushinternal=1000ms
> 12/04/04 13:05:02 INFO wal.HLog:  for /user/nwatkins/hbase/-ROOT-/70236052/.logs/hlog.1333569902275
> 12/04/04 13:05:02 INFO wal.HLog: Using getNumCurrentReplicas--HDFS-826
> 12/04/04 13:05:02 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/-ROOT-/70236052/.tmp/.regioninfo. blk_8139025035320914028_1004
> 12/04/04 13:05:02 INFO datanode.DataNode: Receiving block blk_8139025035320914028_1004 src: /127.0.0.1:46148 dest: /127.0.0.1:56956
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:46148, dest: /127.0.0.1:56956, bytes: 109, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_8139025035320914028_1004, duration: 1301874
> 12/04/04 13:05:02 INFO datanode.DataNode: PacketResponder 0 for block blk_8139025035320914028_1004 terminating
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_8139025035320914028_1004 size 109
> 12/04/04 13:05:02 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/.regioninfo from client DFSClient_-128056158
> 12/04/04 13:05:02 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/.regioninfo is closed by DFSClient_-128056158
> 12/04/04 13:05:02 INFO regionserver.HRegion: Onlined -ROOT-,,0.70236052; next sequenceid=1
> 12/04/04 13:05:02 INFO regionserver.HRegion: creating HRegion .META. HTD == {NAME => '.META.', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}]} RootDir = hdfs://localhost:44291/user/nwatkins/hbase Table name == .META.
> 12/04/04 13:05:02 INFO wal.HLog: HLog configuration: blocksize=64 MB, rollsize=60.8 MB, enabled=true, optionallogflushinternal=1000ms
> 12/04/04 13:05:02 INFO wal.HLog:  for /user/nwatkins/hbase/.META./1028785192/.logs/hlog.1333569902500
> 12/04/04 13:05:02 INFO wal.HLog: Using getNumCurrentReplicas--HDFS-826
> 12/04/04 13:05:02 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/.META./1028785192/.tmp/.regioninfo. blk_-6030749091335767198_1006
> 12/04/04 13:05:02 INFO datanode.DataNode: Receiving block blk_-6030749091335767198_1006 src: /127.0.0.1:46149 dest: /127.0.0.1:56956
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:46149, dest: /127.0.0.1:56956, bytes: 111, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-6030749091335767198_1006, duration: 874904
> 12/04/04 13:05:02 INFO datanode.DataNode: PacketResponder 0 for block blk_-6030749091335767198_1006 terminating
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_-6030749091335767198_1006 size 111
> 12/04/04 13:05:02 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/.META./1028785192/.tmp/.regioninfo from client DFSClient_-128056158
> 12/04/04 13:05:02 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/.META./1028785192/.tmp/.regioninfo is closed by DFSClient_-128056158
> 12/04/04 13:05:02 INFO regionserver.HRegion: Onlined .META.,,1.1028785192; next sequenceid=1
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/-ROOT-/70236052/.logs/hlog.1333569902275. blk_-5061682789569976312_1006
> 12/04/04 13:05:02 INFO datanode.DataNode: Receiving block blk_-5061682789569976312_1006 src: /127.0.0.1:46150 dest: /127.0.0.1:56956
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.fsync: file /user/nwatkins/hbase/-ROOT-/70236052/.logs/hlog.1333569902275 for DFSClient_-128056158
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/-ROOT-/70236052/.tmp/d18b916e4e534f5cb5b818702457ac50. blk_6328668344013294287_1007
> 12/04/04 13:05:02 INFO datanode.DataNode: Receiving block blk_6328668344013294287_1007 src: /127.0.0.1:46151 dest: /127.0.0.1:56956
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:46151, dest: /127.0.0.1:56956, bytes: 714, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 466968
> 12/04/04 13:05:02 INFO datanode.DataNode: PacketResponder 0 for block blk_6328668344013294287_1007 terminating
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_6328668344013294287_1007 size 714
> 12/04/04 13:05:02 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/d18b916e4e534f5cb5b818702457ac50 from client DFSClient_-128056158
> 12/04/04 13:05:02 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/d18b916e4e534f5cb5b818702457ac50 is closed by DFSClient_-128056158
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46152, bytes: 722, op: HDFS_READ, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 454709
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46153, bytes: 722, op: HDFS_READ, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 446032
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46154, bytes: 722, op: HDFS_READ, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 627237
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46155, bytes: 722, op: HDFS_READ, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 433856
> 12/04/04 13:05:02 INFO regionserver.Store: Added hdfs://localhost:44291/user/nwatkins/hbase/-ROOT-/70236052/info/d18b916e4e534f5cb5b818702457ac50, entries=2, sequenceid=2, memsize=472.0, filesize=714.0
> 12/04/04 13:05:02 INFO regionserver.HRegion: Finished memstore flush of ~472.0/472, currentsize=0.0/0 for region -ROOT-,,0.70236052 in 134ms, sequenceid=2, compaction requested=false
> 12/04/04 13:05:02 INFO regionserver.HRegion: Closed -ROOT-,,0.70236052
> 12/04/04 13:05:02 INFO wal.HLog: Master:0;localhost,35192,1333569901811.logSyncer exiting
> 12/04/04 13:05:02 INFO DataNode.clienttrace: src: /127.0.0.1:46150, dest: /127.0.0.1:56956, bytes: 411, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-5061682789569976312_1006, duration: 169072676
> 12/04/04 13:05:02 INFO datanode.DataNode: PacketResponder 0 for block blk_-5061682789569976312_1006 terminating
> 12/04/04 13:05:02 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_-5061682789569976312_1006 size 411
> 12/04/04 13:05:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:21818 sessionTimeout=180000 watcher=hconnection
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:21818
> 12/04/04 13:05:03 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24477@kyoto
> 12/04/04 13:05:03 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:21818, initiating session
> 12/04/04 13:05:03 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53724
> 12/04/04 13:05:03 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:53724
> 12/04/04 13:05:03 INFO server.ZooKeeperServer: Established session 0x1367ef5d1320002 with negotiated timeout 40000 for client /127.0.0.1:53724
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:21818, sessionid = 0x1367ef5d1320002, negotiated timeout = 40000
> 12/04/04 13:05:03 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/-ROOT-/70236052/.logs/hlog.1333569902275 from client DFSClient_-128056158
> 12/04/04 13:05:03 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/-ROOT-/70236052/.logs/hlog.1333569902275 is closed by DFSClient_-128056158
> 12/04/04 13:05:03 INFO regionserver.HRegion: Closed .META.,,1.1028785192
> 12/04/04 13:05:03 INFO wal.HLog: Master:0;localhost,35192,1333569901811.logSyncer exiting
> 12/04/04 13:05:03 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/.META./1028785192/.logs/hlog.1333569902500. blk_4053631606215666533_1007
> 12/04/04 13:05:03 INFO datanode.DataNode: Receiving block blk_4053631606215666533_1007 src: /127.0.0.1:46157 dest: /127.0.0.1:56956
> 12/04/04 13:05:03 INFO DataNode.clienttrace: src: /127.0.0.1:46157, dest: /127.0.0.1:56956, bytes: 124, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_4053631606215666533_1007, duration: 980966
> 12/04/04 13:05:03 INFO datanode.DataNode: PacketResponder 0 for block blk_4053631606215666533_1007 terminating
> 12/04/04 13:05:03 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_4053631606215666533_1007 size 124
> 12/04/04 13:05:03 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/.META./1028785192/.logs/hlog.1333569902500 from client DFSClient_-128056158
> 12/04/04 13:05:03 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/.META./1028785192/.logs/hlog.1333569902500 is closed by DFSClient_-128056158
> 12/04/04 13:05:03 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/-ROOT-/.tmp/.tableinfo.0000000001. blk_1915384032580458934_1008
> 12/04/04 13:05:03 INFO datanode.DataNode: Receiving block blk_1915384032580458934_1008 src: /127.0.0.1:46158 dest: /127.0.0.1:56956
> 12/04/04 13:05:03 INFO DataNode.clienttrace: src: /127.0.0.1:46158, dest: /127.0.0.1:56956, bytes: 551, op: HDFS_WRITE, cliID: DFSClient_-128056158, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_1915384032580458934_1008, duration: 797417
> 12/04/04 13:05:03 INFO datanode.DataNode: PacketResponder 0 for block blk_1915384032580458934_1008 terminating
> 12/04/04 13:05:03 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_1915384032580458934_1008 size 551
> 12/04/04 13:05:03 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/-ROOT-/.tmp/.tableinfo.0000000001 from client DFSClient_-128056158
> 12/04/04 13:05:03 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/-ROOT-/.tmp/.tableinfo.0000000001 is closed by DFSClient_-128056158
> 12/04/04 13:05:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:21818 sessionTimeout=180000 watcher=hconnection
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:21818
> 12/04/04 13:05:03 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24477@kyoto
> 12/04/04 13:05:03 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:21818, initiating session
> 12/04/04 13:05:03 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53727
> 12/04/04 13:05:03 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:53727
> 12/04/04 13:05:03 INFO server.ZooKeeperServer: Established session 0x1367ef5d1320003 with negotiated timeout 40000 for client /127.0.0.1:53727
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:21818, sessionid = 0x1367ef5d1320003, negotiated timeout = 40000
> 12/04/04 13:05:03 INFO master.HMaster: Server active/primary master; localhost,35192,1333569901811, sessionid=0x1367ef5d1320000, cluster-up flag was=false
> 12/04/04 13:05:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:21818 sessionTimeout=180000 watcher=hconnection
> 12/04/04 13:05:03 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24477@kyoto
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:21818
> 12/04/04 13:05:03 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:21818, initiating session
> 12/04/04 13:05:03 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53728
> 12/04/04 13:05:03 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:53728
> 12/04/04 13:05:03 INFO server.ZooKeeperServer: Established session 0x1367ef5d1320004 with negotiated timeout 40000 for client /127.0.0.1:53728
> 12/04/04 13:05:03 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:21818, sessionid = 0x1367ef5d1320004, negotiated timeout = 40000
> 12/04/04 13:05:03 INFO regionserver.MemStoreFlusher: globalMemStoreLimit=1.0g, globalMemStoreLimitLowMark=936.4m, maxHeap=2.6g
> 12/04/04 13:05:03 INFO regionserver.HRegionServer: Runs every 2hrs, 46mins, 40sec
> 12/04/04 13:05:03 INFO regionserver.HRegionServer: Attempting connect to Master server at localhost,35192,1333569901811
> 12/04/04 13:05:03 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/04/04 13:05:03 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 60010
> 12/04/04 13:05:03 INFO http.HttpServer: listener.getLocalPort() returned 60010 webServer.getConnectors()[0].getLocalPort() returned 60010
> 12/04/04 13:05:03 INFO http.HttpServer: Jetty bound to port 60010
> 12/04/04 13:05:03 INFO mortbay.log: jetty-6.1.26
> 12/04/04 13:05:03 INFO mortbay.log: Extract jar:file:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1.jar!/hbase-webapps/master to /tmp/Jetty_0_0_0_0_60010_master____q3nwom/webapp
> 12/04/04 13:05:03 INFO regionserver.HRegionServer: Master isn't available yet, retrying
> 12/04/04 13:05:03 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:60010
> 12/04/04 13:05:05 INFO master.ServerManager: Waiting on regionserver(s) to checkin
> 12/04/04 13:05:06 INFO regionserver.HRegionServer: Attempting connect to Master server at localhost,35192,1333569901811
> 12/04/04 13:05:06 INFO regionserver.HRegionServer: Connected to master at localhost/127.0.0.1:60207
> 12/04/04 13:05:06 INFO regionserver.HRegionServer: Telling master at localhost,35192,1333569901811 that we are up with port=60207, startcode=1333569902087
> 12/04/04 13:05:06 INFO master.ServerManager: Registering server=localhost,60207,1333569902087
> 12/04/04 13:05:06 INFO regionserver.HRegionServer: Master passed us hostname to use. Was=localhost, Now=localhost
> 12/04/04 13:05:06 INFO wal.HLog: HLog configuration: blocksize=64 MB, rollsize=60.8 MB, enabled=true, optionallogflushinternal=1000ms
> 12/04/04 13:05:06 INFO wal.HLog:  for /user/nwatkins/hbase/.logs/localhost,60207,1333569902087/localhost%2C60207%2C1333569902087.1333569906635
> 12/04/04 13:05:06 INFO wal.HLog: Using getNumCurrentReplicas--HDFS-826
> 12/04/04 13:05:06 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=RegionServer, sessionId=RegionServer:0;localhost,60207,1333569902087 - already initialized
> 12/04/04 13:05:06 INFO hbase.metrics: new MBeanInfo
> 12/04/04 13:05:06 INFO metrics.RegionServerMetrics: Initialized
> 12/04/04 13:05:06 INFO master.ServerManager: Waiting on regionserver(s) count to settle; currently=1
> 12/04/04 13:05:06 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/04/04 13:05:06 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 60030
> 12/04/04 13:05:06 INFO http.HttpServer: listener.getLocalPort() returned 60030 webServer.getConnectors()[0].getLocalPort() returned 60030
> 12/04/04 13:05:06 INFO http.HttpServer: Jetty bound to port 60030
> 12/04/04 13:05:06 INFO mortbay.log: jetty-6.1.26
> 12/04/04 13:05:06 INFO mortbay.log: Extract jar:file:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1.jar!/hbase-webapps/regionserver to /tmp/Jetty_0_0_0_0_60030_regionserver____.h599vl/webapp
> 12/04/04 13:05:06 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:60030
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server Responder: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server listener on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 0 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 1 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 2 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 3 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 4 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 5 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 6 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 7 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 8 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: IPC Server handler 9 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 0 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 1 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 2 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 3 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 4 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 5 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 6 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 7 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 8 on 60207: starting
> 12/04/04 13:05:06 INFO ipc.HBaseServer: PRI IPC Server handler 9 on 60207: starting
> 12/04/04 13:05:06 INFO regionserver.HRegionServer: Serving as localhost,60207,1333569902087, RPC listening on localhost/127.0.0.1:60207, sessionid=0x1367ef5d1320001
> 12/04/04 13:05:06 INFO regionserver.SplitLogWorker: SplitLogWorker localhost,60207,1333569902087 starting
> 12/04/04 13:05:06 INFO regionserver.HRegionServer: Registered RegionServer MXBean
> 12/04/04 13:05:07 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/.logs/localhost,60207,1333569902087/localhost%2C60207%2C1333569902087.1333569906635. blk_7181700300756881384_1009
> 12/04/04 13:05:07 INFO datanode.DataNode: Receiving block blk_7181700300756881384_1009 src: /127.0.0.1:46164 dest: /127.0.0.1:56956
> 12/04/04 13:05:07 INFO hdfs.StateChange: BLOCK* NameSystem.fsync: file /user/nwatkins/hbase/.logs/localhost,60207,1333569902087/localhost%2C60207%2C1333569902087.1333569906635 for DFSClient_hb_rs_localhost,60207,1333569902087
> 12/04/04 13:05:08 INFO master.ServerManager: Finished waiting for regionserver count to settle; count=1, sleptFor=4500
> 12/04/04 13:05:08 INFO master.MasterFileSystem: Log folder hdfs://localhost:44291/user/nwatkins/hbase/.logs/localhost,60207,1333569902087 belongs to an existing region server
> 12/04/04 13:05:08 INFO master.MasterFileSystem: No logs to split
> 12/04/04 13:05:09 INFO catalog.RootLocationEditor: Unsetting ROOT region location in ZooKeeper
> 12/04/04 13:05:09 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1367ef5d1320000 type:delete cxid:0x24 zxid:0x13 txntype:-1 reqpath:n/a Error Path:/hbase/root-region-server Error:KeeperErrorCode = NoNode for /hbase/root-region-server
> 12/04/04 13:05:09 WARN zookeeper.RecoverableZooKeeper: Node /hbase/root-region-server already deleted, and this is not a retry
> 12/04/04 13:05:09 INFO regionserver.HRegionServer: Received request to open region: -ROOT-,,0.70236052
> 12/04/04 13:05:09 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> 12/04/04 13:05:09 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46166, bytes: 722, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 563005
> 12/04/04 13:05:09 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46167, bytes: 722, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 413420
> 12/04/04 13:05:09 INFO regionserver.HRegion: Onlined -ROOT-,,0.70236052; next sequenceid=3
> 12/04/04 13:05:09 INFO regionserver.HRegionServer: Post open deploy tasks for region=-ROOT-,,0.70236052, daughter=false
> 12/04/04 13:05:09 INFO catalog.RootLocationEditor: Setting ROOT region location in ZooKeeper as localhost,60207,1333569902087
> 12/04/04 13:05:09 INFO regionserver.HRegionServer: Done with post open deploy task for region=-ROOT-,,0.70236052, daughter=false
> 12/04/04 13:05:09 INFO handler.OpenedRegionHandler: Handling OPENED event for -ROOT-,,0.70236052 from localhost,60207,1333569902087; deleting unassigned node
> 12/04/04 13:05:09 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46169, bytes: 516, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_6328668344013294287_1007, duration: 285626
> 12/04/04 13:05:09 INFO master.AssignmentManager: The master has opened the region -ROOT-,,0.70236052 that was online on localhost,60207,1333569902087
> 12/04/04 13:05:09 INFO master.HMaster: -ROOT- assigned=1, rit=false, location=localhost,60207,1333569902087
> 12/04/04 13:05:10 INFO regionserver.HRegionServer: Received request to open region: .META.,,1.1028785192
> 12/04/04 13:05:10 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> 12/04/04 13:05:10 INFO regionserver.HRegion: Onlined .META.,,1.1028785192; next sequenceid=1
> 12/04/04 13:05:10 INFO regionserver.HRegionServer: Post open deploy tasks for region=.META.,,1.1028785192, daughter=false
> 12/04/04 13:05:10 INFO catalog.MetaEditor: Updated row .META.,,1.1028785192 with server=localhost,60207,1333569902087
> 12/04/04 13:05:10 INFO regionserver.HRegionServer: Done with post open deploy task for region=.META.,,1.1028785192, daughter=false
> 12/04/04 13:05:10 INFO handler.OpenedRegionHandler: Handling OPENED event for .META.,,1.1028785192 from localhost,60207,1333569902087; deleting unassigned node
> 12/04/04 13:05:10 INFO master.AssignmentManager: The master has opened the region .META.,,1.1028785192 that was online on localhost,60207,1333569902087
> 12/04/04 13:05:10 INFO master.HMaster: .META. assigned=2, rit=false, location=localhost,60207,1333569902087
> 12/04/04 13:05:10 INFO catalog.MetaMigrationRemovingHTD: Meta version=0; migrated=true
> 12/04/04 13:05:10 INFO catalog.MetaMigrationRemovingHTD: ROOT/Meta already up-to date with new HRI.
> 12/04/04 13:05:10 INFO master.AssignmentManager: Clean cluster startup. Assigning userregions
> javax.management.NotCompliantMBeanException: org.apache.hadoop.hbase.master.MXBean: Method org.apache.hadoop.hbase.master.MXBean.getRegionServers has parameter or return type that cannot be translated into an open type
> 	at com.sun.jmx.mbeanserver.Introspector.throwException(Introspector.java:419)
> 	at com.sun.jmx.mbeanserver.MBeanAnalyzer.<init>(MBeanAnalyzer.java:118)
> 	at com.sun.jmx.mbeanserver.MBeanAnalyzer.analyzer(MBeanAnalyzer.java:104)
> 	at com.sun.jmx.mbeanserver.MXBeanIntrospector.getAnalyzer(MXBeanIntrospector.java:71)
> 	at com.sun.jmx.mbeanserver.MBeanIntrospector.getPerInterface(MBeanIntrospector.java:181)
> 	at com.sun.jmx.mbeanserver.MBeanSupport.<init>(MBeanSupport.java:136)
> 	at com.sun.jmx.mbeanserver.MXBeanSupport.<init>(MXBeanSupport.java:66)
> 	at com.sun.jmx.mbeanserver.Introspector.makeDynamicMBean(Introspector.java:184)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:936)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:516)
> 	at org.apache.hadoop.metrics.util.MBeanUtil.registerMBean(MBeanUtil.java:58)
> 	at org.apache.hadoop.hbase.master.HMaster.registerMBean(HMaster.java:1646)
> 	at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:527)
> 	at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:336)
> 	at java.lang.Thread.run(Thread.java:679)
> Caused by: java.lang.IllegalArgumentException: Method org.apache.hadoop.hbase.master.MXBean.getRegionServers has parameter or return type that cannot be translated into an open type
> 	at com.sun.jmx.mbeanserver.ConvertingMethod.from(ConvertingMethod.java:45)
> 	at com.sun.jmx.mbeanserver.MXBeanIntrospector.mFrom(MXBeanIntrospector.java:81)
> 	at com.sun.jmx.mbeanserver.MXBeanIntrospector.mFrom(MXBeanIntrospector.java:51)
> 	at com.sun.jmx.mbeanserver.MBeanAnalyzer.initMaps(MBeanAnalyzer.java:135)
> 	at com.sun.jmx.mbeanserver.MBeanAnalyzer.<init>(MBeanAnalyzer.java:116)
> 	... 14 more
> Caused by: javax.management.openmbean.OpenDataException: Cannot obtain array class
> 	at com.sun.jmx.mbeanserver.OpenConverter.openDataException(OpenConverter.java:1389)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeArrayOrCollectionConverter(OpenConverter.java:346)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:295)
> 	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeTabularConverter(OpenConverter.java:376)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeParameterizedConverter(OpenConverter.java:417)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:312)
> 	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeCompositeConverter(OpenConverter.java:482)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:309)
> 	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeTabularConverter(OpenConverter.java:377)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeParameterizedConverter(OpenConverter.java:417)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:312)
> 	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
> 	at com.sun.jmx.mbeanserver.ConvertingMethod.<init>(ConvertingMethod.java:197)
> 	at com.sun.jmx.mbeanserver.ConvertingMethod.from(ConvertingMethod.java:40)
> 	... 18 more
> Caused by: java.lang.ClassNotFoundException: [Lbyte;
> 	at java.lang.Class.forName0(Native Method)
> 	at java.lang.Class.forName(Class.java:186)
> 	at com.sun.jmx.mbeanserver.OpenConverter.makeArrayOrCollectionConverter(OpenConverter.java:344)
> 	... 33 more
> 12/04/04 13:05:10 INFO master.HMaster: Registered HMaster MXBean
> 12/04/04 13:05:10 INFO master.HMaster: Master has completed initialization
> 12/04/04 13:05:11 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x1367ef5d1320002
> 12/04/04 13:05:11 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320002
> 12/04/04 13:05:11 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320002 closed
> 12/04/04 13:05:11 INFO hbase.HBaseTestingUtility: Minicluster is up
> 12/04/04 13:05:11 INFO zookeeper.ClientCnxn: EventThread shut down
> 12/04/04 13:05:11 INFO hbase.HBaseTestingUtility: Shutting down minicluster
> 12/04/04 13:05:11 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53724 which had sessionid 0x1367ef5d1320002
> 12/04/04 13:05:11 INFO master.HMaster: Cluster shutdown requested
> 12/04/04 13:05:11 INFO regionserver.HRegionServer: STOPPED: Shutdown requested
> 12/04/04 13:05:11 INFO master.ServerManager: Waiting on regionserver(s) to go down localhost,60207,1333569902087
> 12/04/04 13:05:11 INFO master.HMaster$2: localhost,35192,1333569901811-BalancerChore exiting
> 12/04/04 13:05:11 INFO master.CatalogJanitor: localhost,35192,1333569901811-CatalogJanitor exiting
> 12/04/04 13:05:12 INFO master.SplitLogManager$TimeoutMonitor: localhost,35192,1333569901811.splitLogManagerTimeoutMonitor exiting
> 12/04/04 13:05:12 INFO master.ServerManager: Waiting on regionserver(s) to go down localhost,60207,1333569902087
> 12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping server on 60207
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 1 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 0 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 5 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 2 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 6 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 7 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 9 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 4 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping IPC Server listener on 60207
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 2 on 60207: exiting
> 12/04/04 13:05:12 INFO regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 0 on 60207: exiting
> 12/04/04 13:05:12 INFO regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
> 12/04/04 13:05:12 INFO regionserver.SplitLogWorker: SplitLogWorker localhost,60207,1333569902087 exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 3 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 8 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 7 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 1 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 9 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 3 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping IPC Server Responder
> 12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping IPC Server Responder
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 5 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 4 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 6 on 60207: exiting
> 12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 8 on 60207: exiting
> 12/04/04 13:05:12 INFO regionserver.HRegionServer: Stopping infoServer
> 12/04/04 13:05:12 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60030
> 12/04/04 13:05:12 INFO regionserver.HRegionServer$CompactionChecker: RegionServer:0;localhost,60207,1333569902087.compactionChecker exiting
> 12/04/04 13:05:12 INFO regionserver.LogRoller: LogRoller exiting.
> 12/04/04 13:05:12 INFO regionserver.MemStoreFlusher: RegionServer:0;localhost,60207,1333569902087.cacheFlusher exiting
> 12/04/04 13:05:12 WARN regionserver.HRegionServer: Received close for region we are already opening or closing; 70236052
> 12/04/04 13:05:12 INFO regionserver.HRegionServer: stopping server localhost,60207,1333569902087
> 12/04/04 13:05:12 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x1367ef5d1320004
> 12/04/04 13:05:12 INFO regionserver.HRegion: Closed .META.,,1.1028785192
> 12/04/04 13:05:12 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320004
> 12/04/04 13:05:12 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320004 closed
> 12/04/04 13:05:12 INFO zookeeper.ClientCnxn: EventThread shut down
> 12/04/04 13:05:12 INFO regionserver.HRegionServer: Waiting on 1 regions to close
> 12/04/04 13:05:12 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53728 which had sessionid 0x1367ef5d1320004
> 12/04/04 13:05:12 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/-ROOT-/70236052/.tmp/bb4ce7aee8ac4d1e8320b984d6ce5b6c. blk_-2312968814331638487_1010
> 12/04/04 13:05:12 INFO datanode.DataNode: Receiving block blk_-2312968814331638487_1010 src: /127.0.0.1:46171 dest: /127.0.0.1:56956
> 12/04/04 13:05:12 INFO DataNode.clienttrace: src: /127.0.0.1:46171, dest: /127.0.0.1:56956, bytes: 721, op: HDFS_WRITE, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 709184
> 12/04/04 13:05:12 INFO datanode.DataNode: PacketResponder 0 for block blk_-2312968814331638487_1010 terminating
> 12/04/04 13:05:12 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_-2312968814331638487_1010 size 721
> 12/04/04 13:05:13 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/bb4ce7aee8ac4d1e8320b984d6ce5b6c from client DFSClient_hb_rs_localhost,60207,1333569902087
> 12/04/04 13:05:13 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/bb4ce7aee8ac4d1e8320b984d6ce5b6c is closed by DFSClient_hb_rs_localhost,60207,1333569902087
> 12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46172, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 485712
> 12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46173, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 426832
> 12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46174, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 422864
> 12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46175, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 443916
> 12/04/04 13:05:13 INFO regionserver.Store: Added hdfs://localhost:44291/user/nwatkins/hbase/-ROOT-/70236052/info/bb4ce7aee8ac4d1e8320b984d6ce5b6c, entries=2, sequenceid=5, memsize=464.0, filesize=721.0
> 12/04/04 13:05:13 INFO regionserver.HRegion: Finished memstore flush of ~464.0/464, currentsize=0.0/0 for region -ROOT-,,0.70236052 in 533ms, sequenceid=5, compaction requested=false
> 12/04/04 13:05:13 INFO regionserver.HRegion: Closed -ROOT-,,0.70236052
> 12/04/04 13:05:13 INFO master.ServerManager: Waiting on regionserver(s) to go down localhost,60207,1333569902087
> 12/04/04 13:05:13 INFO master.AssignmentManager$TimeoutMonitor: localhost,35192,1333569901811.timeoutMonitor exiting
> 12/04/04 13:05:13 INFO regionserver.HRegionServer: stopping server localhost,60207,1333569902087; all regions closed.
> 12/04/04 13:05:13 INFO wal.HLog: RegionServer:0;localhost,60207,1333569902087.logSyncer exiting
> 12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:46164, dest: /127.0.0.1:56956, bytes: 408, op: HDFS_WRITE, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_7181700300756881384_1009, duration: 6282407448
> 12/04/04 13:05:13 INFO datanode.DataNode: PacketResponder 0 for block blk_7181700300756881384_1009 terminating
> 12/04/04 13:05:13 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_7181700300756881384_1009 size 408
> 12/04/04 13:05:13 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/.logs/localhost,60207,1333569902087/localhost%2C60207%2C1333569902087.1333569906635 from client DFSClient_hb_rs_localhost,60207,1333569902087
> 12/04/04 13:05:13 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/.logs/localhost,60207,1333569902087/localhost%2C60207%2C1333569902087.1333569906635 is closed by DFSClient_hb_rs_localhost,60207,1333569902087
> 12/04/04 13:05:13 INFO regionserver.Leases: RegionServer:0;localhost,60207,1333569902087 closing leases
> 12/04/04 13:05:13 INFO regionserver.Leases: RegionServer:0;localhost,60207,1333569902087 closed leases
> 12/04/04 13:05:14 INFO zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [localhost,60207,1333569902087]
> 12/04/04 13:05:14 INFO master.ServerManager: Cluster shutdown set; localhost,60207,1333569902087 expired; onlineServers=0
> 12/04/04 13:05:14 INFO master.HMaster: Cluster shutdown set; onlineServer=0
> 12/04/04 13:05:14 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320001
> 12/04/04 13:05:14 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53712 which had sessionid 0x1367ef5d1320001
> 12/04/04 13:05:14 INFO zookeeper.ClientCnxn: EventThread shut down
> 12/04/04 13:05:14 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320001 closed
> 12/04/04 13:05:14 INFO regionserver.HRegionServer: stopping server localhost,60207,1333569902087; zookeeper connection closed.
> 12/04/04 13:05:14 INFO regionserver.HRegionServer: RegionServer:0;localhost,60207,1333569902087 exiting
> 12/04/04 13:05:14 INFO hbase.MiniHBaseCluster: Hook closing fs=DFS[DFSClient[clientName=DFSClient_hb_rs_localhost,60207,1333569902087, ugi=nwatkins.hfs.0]]
> 12/04/04 13:05:14 INFO util.JVMClusterUtil: Shutdown of 1 master(s) and 1 regionserver(s) complete
> 12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping server on 35192
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 6 on 35192: exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping IPC Server listener on 35192
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 0 on 35192: exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 2 on 35192: exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 1 on 35192: exiting
> 12/04/04 13:05:14 INFO master.HMaster: Stopping infoServer
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 4 on 35192: exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 8 on 35192: exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 7 on 35192: exiting
> 12/04/04 13:05:14 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
> 12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping IPC Server Responder
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 5 on 35192: exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 3 on 35192: exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping IPC Server Responder
> 12/04/04 13:05:14 INFO master.LogCleaner: Master:0;localhost,35192,1333569901811.oldLogCleaner exiting
> 12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 9 on 35192: exiting
> 12/04/04 13:05:14 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320000
> 12/04/04 13:05:14 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320000 closed
> 12/04/04 13:05:14 INFO zookeeper.ClientCnxn: EventThread shut down
> 12/04/04 13:05:14 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53711 which had sessionid 0x1367ef5d1320000
> 12/04/04 13:05:14 INFO master.HMaster: HMaster main thread exiting
> 12/04/04 13:05:14 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53727 which had sessionid 0x1367ef5d1320003
> 12/04/04 13:05:14 INFO server.NIOServerCnxnFactory: NIOServerCnxn factory exited run method
> 12/04/04 13:05:14 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1367ef5d1320003, likely server has closed socket, closing socket connection and attempting reconnect
> 12/04/04 13:05:14 INFO server.ZooKeeperServer: shutting down
> 12/04/04 13:05:14 INFO server.SessionTrackerImpl: Shutting down
> 12/04/04 13:05:14 INFO server.PrepRequestProcessor: Shutting down
> 12/04/04 13:05:14 INFO server.SyncRequestProcessor: Shutting down
> 12/04/04 13:05:14 INFO server.PrepRequestProcessor: PrepRequestProcessor exited loop!
> 12/04/04 13:05:14 INFO server.SyncRequestProcessor: SyncRequestProcessor exited!
> 12/04/04 13:05:14 INFO server.FinalRequestProcessor: shutdown of request processor complete
> 12/04/04 13:05:14 INFO zookeeper.MiniZooKeeperCluster: Shutdown MiniZK cluster with all ZK servers
> Shutting down the Mini HDFS Cluster
> Shutting down DataNode 0
> 12/04/04 13:05:14 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
> 12/04/04 13:05:14 INFO ipc.Server: Stopping server on 60794
> 12/04/04 13:05:14 INFO ipc.Server: Stopping IPC Server listener on 60794
> 12/04/04 13:05:14 INFO ipc.Server: IPC Server handler 2 on 60794: exiting
> 12/04/04 13:05:14 INFO ipc.Server: IPC Server handler 1 on 60794: exiting
> 12/04/04 13:05:14 INFO ipc.Server: IPC Server handler 0 on 60794: exiting
> 12/04/04 13:05:14 INFO metrics.RpcInstrumentation: shut down
> 12/04/04 13:05:14 INFO ipc.Server: Stopping IPC Server Responder
> 12/04/04 13:05:14 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
> 12/04/04 13:05:14 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:56956, storageID=DS-134885185-127.0.0.1-56956-1333569900707, infoPort=53398, ipcPort=60794):DataXceiveServer:java.nio.channels.AsynchronousCloseException
> 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:170)
> 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:102)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
> 	at java.lang.Thread.run(Thread.java:679)
> 12/04/04 13:05:14 INFO datanode.DataNode: Exiting DataXceiveServer
> 12/04/04 13:05:14 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
> 12/04/04 13:05:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
> 12/04/04 13:05:15 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:56956, storageID=DS-134885185-127.0.0.1-56956-1333569900707, infoPort=53398, ipcPort=60794):Finishing DataNode in: FSDataset{dirpath='/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data1/current,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data2/current'}
> 12/04/04 13:05:15 WARN util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
> javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
> 	at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:513)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:726)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1442)
> 	at java.lang.Thread.run(Thread.java:679)
> 12/04/04 13:05:15 INFO ipc.Server: Stopping server on 60794
> 12/04/04 13:05:15 INFO metrics.RpcInstrumentation: shut down
> 12/04/04 13:05:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
> 12/04/04 13:05:15 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
> 12/04/04 13:05:15 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
> 12/04/04 13:05:15 WARN util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2074696888
> javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2074696888
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
> 	at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550)
> 	at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:350)
> 	at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:576)
> 	at edu.ucsc.srl.persia.PersiaAdminTest.tearDownAfterClass(PersiaAdminTest.java:27)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:616)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:36)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> 	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:616)
> 	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
> 	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
> 	at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
> 	at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
> 12/04/04 13:05:15 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
> 12/04/04 13:05:15 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
> 12/04/04 13:05:15 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:21818
> 12/04/04 13:05:15 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
> 12/04/04 13:05:15 WARN zookeeper.ClientCnxn: Session 0x1367ef5d1320003 for server null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
> 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
> 	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
> 	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
> 12/04/04 13:05:15 INFO namenode.DecommissionManager: Interrupted Monitor
> java.lang.InterruptedException: sleep interrupted
> 	at java.lang.Thread.sleep(Native Method)
> 	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
> 	at java.lang.Thread.run(Thread.java:679)
> 12/04/04 13:05:15 INFO namenode.FSNamesystem: Number of transactions: 67 Total time for transactions(ms): 4Number of transactions batched in Syncs: 2 Number of syncs: 50 SyncTimes(ms): 461 451 
> 12/04/04 13:05:15 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
> 12/04/04 13:05:15 INFO ipc.Server: Stopping server on 44291
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 0 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 5 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 6 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 7 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 1 on 44291: exiting
> 12/04/04 13:05:15 INFO metrics.RpcInstrumentation: shut down
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 2 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 4 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: Stopping IPC Server listener on 44291
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 8 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 9 on 44291: exiting
> 12/04/04 13:05:15 INFO ipc.Server: Stopping IPC Server Responder
> 12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 3 on 44291: exiting
> 12/04/04 13:05:15 INFO hbase.HBaseTestingUtility: Minicluster is down
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.98 sec
> 12/04/04 13:05:15 INFO regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-0,5,main]
> 12/04/04 13:05:15 INFO regionserver.HRegionServer: STOPPED: Shutdown hook
> 12/04/04 13:05:15 INFO regionserver.ShutdownHook: Starting fs shutdown hook thread.
> 12/04/04 13:05:15 INFO regionserver.ShutdownHook: Shutdown hook finished.
> {noformat}
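For context on the failure captured above: the `NotCompliantMBeanException` from `HMaster.registerMBean` is the JMX compliance check rejecting `MXBean.getRegionServers` because its return type cannot be translated into an open type; on this JVM the converter even fails to load the array class (`ClassNotFoundException: [Lbyte;`, i.e. `byte[]` used inside a generic type). A minimal, hypothetical sketch of the same compliance failure follows — note it deliberately uses a different non-convertible attribute type (a class with no getters, which JMX cannot map to a `CompositeType`), since the `[Lbyte;` class-loading path depends on the JVM in use; it is not HBase's actual MXBean:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;

// Hypothetical demo: an MXBean attribute whose type the JMX open-type
// converter cannot handle makes registerMBean() throw
// NotCompliantMBeanException, the same failure mode HMaster hits above.
public class MXBeanOpenTypeDemo {

    // A class with no public getters cannot be mapped to a CompositeType,
    // so it is not convertible to an open type. (byte[] inside a generic
    // type trips the same check on the JVM in this log.)
    public static class Opaque { }

    public interface BadMXBean {
        Opaque getValue();  // non-convertible attribute type
    }

    public static class Bad implements BadMXBean {
        @Override
        public Opaque getValue() { return new Opaque(); }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            server.registerMBean(new Bad(), new ObjectName("demo:type=Bad"));
            System.out.println("registered (unexpected)");
        } catch (NotCompliantMBeanException e) {
            // Registration is rejected before the bean is ever exposed.
            System.out.println("NotCompliantMBeanException: " + e.getMessage());
        }
    }
}
```

Because the interface name ends in `MXBean`, the platform server introspects it as an MXBean and rejects it at registration time, which is why the log shows the exception during `HMaster.finishInitialization` rather than at query time.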

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
