hbase-user mailing list archives

From Ryan Smith <ryan.justin.sm...@gmail.com>
Subject Re: Row Counter problem
Date Sun, 10 Jun 2012 20:51:00 GMT
I tried all three filters and got the same response as the one below.

HTTP ERROR 410

Problem accessing /tasklog. Reason:

    Failed to retrieve stderr log for task:
attempt_201206101609_0001_m_000098_0
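
Since the web UI keeps returning 410, my next step is to look for the raw
attempt logs on the node itself.  As a rough sketch, assuming the default
MRv1 layout where the TaskTracker keeps task logs under
/opt/hadoop-1.0.2/logs/userlogs, something like this run on datanode003
should dump the stderr for that attempt if it is still on disk:

find /opt/hadoop-1.0.2/logs/userlogs \
    -path '*attempt_201206101609_0001_m_000098_0*' -name stderr \
    -exec cat {} \;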

I will also try upgrading to the latest versions of both Hadoop and HBase
and see what happens.  If you have any more ideas, please feel free to
share.

Thank you,
-Ryan


On Sun, Jun 10, 2012 at 2:30 PM, Harsh J <harsh@cloudera.com> wrote:

> Hi Ryan,
>
> The issue doesn't seem to be ACLs if those are the only changed properties.
>
> I think the issue is either this: "Error: Could not initialize
> class org.apache.log4j.LogManager" or something better detailed at
>
> http://datanode003.cluster.local:50060/tasklog?plaintext=true&attemptid=attempt_201206101609_0001_m_000098_0
>
> I've not tried the combination of the two versions you're using yet, but
> it's either a bad log4j lib mixup or some other class-loading-related
> problem. Can you tell us what the latter link shows, if anything at
> all, for one failed task attempt?
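>
> If it is the jar mixup, a quick check like the one below may show it.
> This is only a sketch: it assumes your Hadoop build has the classpath
> subcommand and that the TaskTrackers see the same lib directories as the
> client (both "hadoop classpath" and "hbase classpath" just print a
> colon-separated classpath, so two different log4j versions would show up
> side by side):
>
> (/opt/hadoop/bin/hadoop classpath; /opt/hbase/bin/hbase classpath) \
>     | tr ':' '\n' | grep -iE 'log4j|slf4j' | sort -u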
>
> On Sun, Jun 10, 2012 at 9:09 PM, Ryan Smith <ryan.justin.smith@gmail.com>
> wrote:
> >>
> >> Hello,
> >>
> >> I am having problems using the rowcounter mapreduce job on my hbase
> >> table.  The table exists, and rowcounter works in the hbase shell.
> >> However, when I run it with the command below using the mapreduce jar,
> >> I get the errors shown at the bottom.
> >>
> >> # HADOOP_CLASSPATH=`/opt/hbase/bin/hbase classpath` /opt/hadoop/bin/hadoop \
> >>     jar /opt/hbase/hbase-0.92.0.jar rowcounter mytable
> >> log4j:ERROR Could not find value for key log4j.appender.NullAppender
> >> log4j:ERROR Could not instantiate appender named "NullAppender".
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:zookeeper.version=3.4.2-1221870, built on 12/21/2011 20:46
> GMT
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client environment:
> host.name
> >> =namenodeone.cluster.local
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:java.version=1.6.0_29
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:java.vendor=Sun Microsystems Inc.
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:java.home=/usr/java/jdk1.6.0_29/jre
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >>
> environment:java.class.path=/opt/hadoop-1.0.2/libexec/../conf:/usr/java/default//lib/tools.jar:/opt/hadoop-1.0.2/libexec/..:/opt/hadoop-1.0.2/libexec/../hadoop-core-1.0.2.jar:/opt/hadoop-1.0.2/libexec/../lib/asm-3.2.jar:/opt/hadoop-1.0.2/libexec/../lib/aspectjrt-1.6.5.jar:/opt/hadoop-1.0.2/libexec/../lib/aspectjtools-1.6.5.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-cli-1.2.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-codec-1.4.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-collections-3.2.1.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-configuration-1.6.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-daemon-1.0.1.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-digester-1.8.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-el-1.0.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-lang-2.4.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-logging-1.1.1.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-math-2.1.jar:/opt/hadoop-1.0.2/libexec/../lib/commons-net-1.4.1.jar:/opt/hadoop-1.0.2/libexec/../lib/core-3.1.1.jar:/opt/hadoop-1.0.2/libexec/../lib/guava-11.0.2.jar:/opt/hadoop-1.0.2/libexec/../lib/hadoop-capacity-scheduler-1.0.2.jar:/opt/hadoop-1.0.2/libexec/../lib/hadoop-fairscheduler-1.0.2.jar:/opt/hadoop-1.0.2/libexec/../lib/hadoop-thriftfs-1.0.2.jar:/opt/hadoop-1.0.2/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/hadoop-1.0.2/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/hadoop-1.0.2/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop-1.0.2/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/hadoop-1.0.2/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/hadoop-1.0.2/libexec/../lib/jdeb-0.8.jar:/opt/hadoop-1.0.2/libexec/../lib/jersey-core-1.8.jar:/opt/hadoop-1.0.2/libexec/../lib/jersey-json-1.8.jar:/opt/hadoop-1.0.2/libexec/../lib/jersey-server-1.8.jar:/opt/hadoop-1.0.2/libexec/../lib/jets3t-0.6.1.jar:/opt/hadoop-1.0.2/libexec/../lib/jetty-6.1.26.jar:/opt/hadoop-1.0.2/libexec/../lib/jetty-util-6.1.26.jar:/opt/hadoop-1.0.2/libexec/../lib/jsch-0.1.42.jar:/opt/hadoop-1.0.2/libexec/../lib/junit-4.5.jar:/opt/hadoop-1.0.2/libexec/../lib/kfs-0.2.2.jar:/opt/hadoop-1.0.2/libexec/../lib/log4j-1.2.15.jar:/opt/hadoop-1.0.2/libexec/../lib/log4j-1.2.16.jar:/opt/hadoop-1.0.2/libexec/../lib/mockito-all-1.8.5.jar:/opt/hadoop-1.0.2/libexec/../lib/oro-2.0.8.jar:/opt/hadoop-1.0.2/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/hadoop-1.0.2/libexec/../lib/slf4j-api-1.4.3.jar:/opt/hadoop-1.0.2/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/hadoop-1.0.2/libexec/../lib/xmlenc-0.52.jar:/opt/hadoop-1.0.2/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/hadoop-1.0.2/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/hbase-0.92.0/lib/zookeeper-3.4.2.jar:/opt/hbase/bin/../conf:/usr/java/default/lib/tools.jar:/opt/hbase/bin/..:/opt/hbase/bin/../hbase-0.92.0.jar:/opt/hbase/bin/../hbase-0.92.0-tests.jar:/opt/hbase/bin/../lib/activation-1.1.jar:/opt/hbase/bin/../lib/asm-3.1.jar:/opt/hbase/bin/../lib/avro-1.5.3.jar:/opt/hbase/bin/../lib/avro-ipc-1.5.3.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/bin/../lib/commons-cli-1.2.jar:/opt/hbase/bin/../lib/commons-codec-1.4.jar:/opt/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/hbase/bin/../lib/commons-digester-1.8.jar:/opt/hbase/bin/../lib/commons-el-1.0.jar
:/opt/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/hbase/bin/../lib/commons-lang-2.5.jar:/opt/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/hbase/bin/../lib/commons-math-2.1.jar:/opt/hbase/bin/../lib/commons-net-1.4.1.jar:/opt/hbase/bin/../lib/core-3.1.1.jar:/opt/hbase/bin/../lib/guava-r09.jar:/opt/hbase/bin/../lib/hadoop-core-1.0.0.jar:/opt/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/hbase/bin/../lib/httpclient-4.0.1.jar:/opt/hbase/bin/../lib/httpcore-4.0.1.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-xc-1.5.5.jar:/opt/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/opt/hbase/bin/../lib/jersey-core-1.4.jar:/opt/hbase/bin/../lib/jersey-json-1.4.jar:/opt/hbase/bin/../lib/jersey-server-1.4.jar:/opt/hbase/bin/../lib/jettison-1.1.jar:/opt/hbase/bin/../lib/jetty-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/libthrift-0.7.0.jar:/opt/hbase/bin/../lib/log4j-1.2.16.jar:/opt/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/servlet-api-2.5.jar:/opt/hbase/bin/../lib/slf4j-api-1.5.8.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/opt/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/hbase/bin/../lib/velocity-1.7.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/zookeeper-3.4.2.jar
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >>
> environment:java.library.path=/opt/hadoop-1.0.2/libexec/../lib/native/Linux-amd64-64
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:java.io.tmpdir=/tmp
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:java.compiler=<NA>
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client environment:os.name
> >> =Linux
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:os.arch=amd64
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:os.version=2.6.32-220.7.1.el6.x86_64
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client environment:
> user.name
> >> =root
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:user.home=/root
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Client
> >> environment:user.dir=/opt/hadoop-1.0.2
> >> 12/06/10 16:12:09 INFO zookeeper.ZooKeeper: Initiating client
> connection,
> >> connectString=188.94.23.3:2181 sessionTimeout=180000
> watcher=hconnection
> >> 12/06/10 16:12:09 INFO zookeeper.ClientCnxn: Opening socket connection
> to
> >> server /188.94.23.3:2181
> >> 12/06/10 16:12:09 INFO zookeeper.RecoverableZooKeeper: The identifier of
> >> this process is 16267@namenodeone
> >> 12/06/10 16:12:09 INFO zookeeper.ClientCnxn: Socket connection
> established
> >> to namenodeone.cluster.local/188.94.23.3:2181, initiating session
> >> 12/06/10 16:12:09 INFO zookeeper.ClientCnxn: Session establishment
> >> complete on server namenodeone.cluster.local/188.94.23.3:2181,
> sessionid
> >> = 0x1377407b6930099, negotiated timeout = 180000
> >> 12/06/10 16:12:09 INFO hdfs.DFSClient: Exception in
> >> createBlockOutputStream 188.94.23.13:50010 java.io.IOException: Bad
> >> connect ack with firstBadLink as 188.94.23.11:50010
> >> 12/06/10 16:12:09 INFO hdfs.DFSClient: Abandoning block
> >> blk_5041391389848324461_85581
> >> 12/06/10 16:12:09 INFO hdfs.DFSClient: Excluding datanode
> >> 188.94.23.11:50010
> >> 12/06/10 16:12:16 INFO mapred.JobClient: Running job:
> job_201206101609_0001
> >> 12/06/10 16:12:17 INFO mapred.JobClient:  map 0% reduce 0%
> >> 12/06/10 16:12:26 INFO mapred.JobClient: Task Id :
> >> attempt_201206101609_0001_m_000098_0, Status : FAILED
> >> Error: Could not initialize class org.apache.log4j.LogManager
> >> 12/06/10 16:12:27 WARN mapred.JobClient: Error reading task
> >>
> outputhttp://datanode003.cluster.local:50060/tasklog?plaintext=true&attemptid=attempt_201206101609_0001_m_000098_0&filter=stdout
> >> 12/06/10 16:12:27 WARN mapred.JobClient: Error reading task
> >>
> outputhttp://datanode003.cluster.local:50060/tasklog?plaintext=true&attemptid=attempt_201206101609_0001_m_000098_0&filter=stderr
> >> 12/06/10 16:12:30 INFO mapred.JobClient: Task Id :
> >> attempt_201206101609_0001_r_000001_0, Status : FAILED
> >> Error: Could not initialize class org.apache.log4j.LogManager
> >> 12/06/10 16:12:30 WARN mapred.JobClient: Error reading task
> >>
> outputhttp://datanode003.cluster.local:50060/tasklog?plaintext=true&attemptid=attempt_201206101609_0001_r_000001_0&filter=stdout
> >> 12/06/10 16:12:30 WARN mapred.JobClient: Error reading task
> >>
> outputhttp://datanode003.cluster.local:50060/tasklog?plaintext=true&attemptid=attempt_201206101609_0001_r_000001_0&filter=stderr
> >> <ctrl-c>
> >>
> >> I keep getting the errors you see above on all the nodes until the job
> >> ends in failure.  So next I edited mapred-queue-acls.xml to look like
> >> this and then restarted the TaskTrackers and the JobTracker:
> >>
> >> <property>
> >>   <name>mapred.queue.default.acl-submit-job</name>
> >>   <value>*</value>
> >> </property>
> >>
> >> <property>
> >>   <name>mapred.queue.default.acl-administer-jobs</name>
> >>   <value>*</value>
> >> </property>
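> >>
> >> As far as I know, the queue ACLs the JobTracker actually ends up with
> >> can be listed with the stock queue command, so as a sanity check (run
> >> as the submitting user):
> >>
> >> # prints the submit-job / administer-jobs ACLs per queue for this user
> >> /opt/hadoop/bin/hadoop queue -showacls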
> >>
> >>
> >> And I got the same errors.  Any ideas what I am doing wrong?
> >>
> >> Thank you,
> >> -Ryan
> >>
> >>
>
>
>
> --
> Harsh J
>
