hbase-user mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: Zookeeper tries to connect to localhost when I have clearly specified another.
Date Wed, 21 Aug 2013 13:26:49 GMT
I've been running with this INFO for more than a year now ;) So no, I don't
think this is going to pose any real threat. You have everything configured
correctly and everything seems to be working fine.

JM

2013/8/21 Pavan Sudheendra <pavan0591@gmail.com>

> it doesn't pose any real threat?
>
>
> On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari <
> jean-marc@spaggiari.org> wrote:
>
> > All fine on those logs too.
> >
> > So everything is working fine: ZK, HBase, and the job is working fine too.
> > The only issue is this INFO regarding SASL, correct?
> >
> > I think you should simply ignore it.
> >
> > If it's annoying you, just turn the org.apache.zookeeper.ClientCnxn log
> > level down to WARN in log4j.properties. (It's the setting I have on my own
> > cluster.)
> >
> > JM
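
For reference, the log4j change JM describes would look something like this in the client's log4j.properties (log4j 1.x syntax, which is what HBase/Hadoop of that era used); only the ZooKeeper client-connection logger is turned down, everything else keeps its existing level:

    # Quiet the "Will not attempt to authenticate using SASL" INFO chatter
    # from the ZooKeeper client without touching other loggers.
    log4j.logger.org.apache.zookeeper.ClientCnxn=WARN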
> >
> > 2013/8/21 Pavan Sudheendra <pavan0591@gmail.com>
> >
> > > @Jean, here is the log I got at the start of running the hadoop jar.
> > > Maybe you can spot something:
> > >
> > > 11:51:44,431  INFO ZooKeeper:100 - Client environment:java.library.path=/usr/lib/hadoop/lib/native
> > > 11:51:44,432  INFO ZooKeeper:100 - Client environment:java.io.tmpdir=/tmp
> > > 11:51:44,432  INFO ZooKeeper:100 - Client environment:java.compiler=<NA>
> > > 11:51:44,432  INFO ZooKeeper:100 - Client environment:os.name=Linux
> > > 11:51:44,432  INFO ZooKeeper:100 - Client environment:os.arch=amd64
> > > 11:51:44,432  INFO ZooKeeper:100 - Client environment:os.version=3.2.0-23-virtual
> > > 11:51:44,432  INFO ZooKeeper:100 - Client environment:user.name=root
> > > 11:51:44,433  INFO ZooKeeper:100 - Client environment:user.home=/root
> > > 11:51:44,433  INFO ZooKeeper:100 - Client environment:user.dir=/home/ubuntu/pasudhee/ActionDataInterpret
> > > 11:51:44,437  INFO ZooKeeper:438 - Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
> > > 11:51:44,493  INFO ClientCnxn:966 - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > 11:51:44,500  INFO RecoverableZooKeeper:104 - The identifier of this process is 19267@ip-10-34-187-170.eu-west-1.compute.internal
> > > 11:51:44,513  INFO ClientCnxn:849 - Socket connection established to localhost/127.0.0.1:2181, initiating session
> > > 11:51:44,532  INFO ClientCnxn:1207 - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71bb167, negotiated timeout = 60000
> > > 11:51:44,743  INFO ZooKeeper:438 - Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
> > > 11:51:44,747  INFO ClientCnxn:966 - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > 11:51:44,747  INFO ClientCnxn:849 - Socket connection established to localhost/127.0.0.1:2181, initiating session
> > > 11:51:44,747  INFO RecoverableZooKeeper:104 - The identifier of this process is 19267@ip-10-34-187-170.eu-west-1.compute.internal
> > > 11:51:44,749  INFO ClientCnxn:1207 - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71bb168, negotiated timeout = 60000
> > > 11:51:44,803  WARN Configuration:824 - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
> > > 11:51:45,051  INFO HConnectionManager$HConnectionImplementation:1789 - Closed zookeeper sessionid=0x13ff1cff71bb168
> > > 11:51:45,054  INFO ZooKeeper:684 - Session: 0x13ff1cff71bb168 closed
> > > 11:51:45,054  INFO ClientCnxn:509 - EventThread shut down
> > > 11:51:45,057  INFO ZooKeeper:438 - Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
> > > 11:51:45,059  INFO ClientCnxn:966 - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > 11:51:45,060  INFO ClientCnxn:849 - Socket connection established to localhost/127.0.0.1:2181, initiating session
> > > 11:51:45,061  INFO ClientCnxn:1207 - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71bb169, negotiated timeout = 60000
> > > 11:51:45,065  INFO RecoverableZooKeeper:104 - The identifier of this process is 19267@ip-10-34-187-170.eu-west-1.compute.internal
> > > 11:51:45,135  INFO ZooKeeper:438 - Initiating client connection, connectString=10.34.187.170:2181 sessionTimeout=180000 watcher=hconnection
> > > 11:51:45,137  INFO ClientCnxn:966 - Opening socket connection to server ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > 11:51:45,138  INFO ClientCnxn:849 - Socket connection established to ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181, initiating session
> > > 11:51:45,138  INFO RecoverableZooKeeper:104 - The identifier of this process is 19267@ip-10-34-187-170.eu-west-1.compute.internal
> > > 11:51:45,140  INFO ClientCnxn:1207 - Session establishment complete on server ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181, sessionid = 0x13ff1cff71bb16a, negotiated timeout = 60000
> > > 11:51:45,173  INFO ZooKeeper:438 - Initiating client connection, connectString=10.34.187.170:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@7444f787
> > > 11:51:45,176  INFO ClientCnxn:966 - Opening socket connection to server ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > 11:51:45,176  INFO ClientCnxn:849 - Socket connection established to ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181, initiating session
> > > 11:51:45,178  INFO ClientCnxn:1207 - Session establishment complete on server ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181, sessionid = 0x13ff1cff71bb16b, negotiated timeout = 60000
> > > 11:51:45,180  INFO RecoverableZooKeeper:104 - The identifier of this process is 19267@ip-10-34-187-170.eu-west-1.compute.internal
> > > 11:51:45,211  INFO ZooKeeper:684 - Session: 0x13ff1cff71bb16b closed
> > > 11:51:45,211  INFO ClientCnxn:509 - EventThread shut down
> > > 11:51:45,218  INFO ZooKeeper:438 - Initiating client connection, connectString=10.34.187.170:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@7444f787
> > > 11:51:45,220  INFO ClientCnxn:966 - Opening socket connection to server ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > >
> > >
> > >
> > > On Wed, Aug 21, 2013 at 6:23 PM, Pavan Sudheendra <pavan0591@gmail.com
> > > >wrote:
> > >
> > > > Yes .. The zookeeper server is also 10.34.187.170 ..
> > > >
> > > >
> > > > On Wed, Aug 21, 2013 at 6:21 PM, Jean-Marc Spaggiari <
> > > > jean-marc@spaggiari.org> wrote:
> > > >
> > > >> Are you able to connect to your ZK server shell and list the nodes?
> > > >>
> > > >> 2013/8/21 Pavan Sudheendra <pavan0591@gmail.com>
> > > >>
> > > >> > Yes.. I can do everything.. But I do not want my Hadoop Namenode to
> > > >> > report logs like this.. Also, it says
> > > >> >
> > > >> > KeeperException, re-throwing exception
> > > >> > org.apache.zookeeper.KeeperException$ConnectionLossException:
> > > >> > KeeperErrorCode = ConnectionLoss for /hbase/
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> > On Wed, Aug 21, 2013 at 6:14 PM, Jean-Marc Spaggiari <
> > > >> > jean-marc@spaggiari.org> wrote:
> > > >> >
> > > >> > > Sounds correct. You are able to start the shell and scan the first
> > > >> > > few lines of the tables, right?
> > > >> > >
> > > >> > > 2013/8/21 Pavan Sudheendra <pavan0591@gmail.com>
> > > >> > >
> > > >> > > > This is my hbase-site.xml file if it helps:
> > > >> > > >
> > > >> > > > <?xml version="1.0" encoding="UTF-8"?>
> > > >> > > >
> > > >> > > > <!--Autogenerated by Cloudera CM on
> 2013-07-09T09:26:49.841Z-->
> > > >> > > > <configuration>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.rootdir</name>
> > > >> > > >     <value>hdfs://ip-10-34-187-170.eu-west-1.compute.internal:8020/hbase</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.client.write.buffer</name>
> > > >> > > >     <value>2097152</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.client.pause</name>
> > > >> > > >     <value>1000</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.client.retries.number</name>
> > > >> > > >     <value>10</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.client.scanner.caching</name>
> > > >> > > >     <value>1</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.client.keyvalue.maxsize</name>
> > > >> > > >     <value>10485760</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.rpc.timeout</name>
> > > >> > > >     <value>60000</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.security.authentication</name>
> > > >> > > >     <value>simple</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>zookeeper.session.timeout</name>
> > > >> > > >     <value>60000</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>zookeeper.znode.parent</name>
> > > >> > > >     <value>/hbase</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>zookeeper.znode.rootserver</name>
> > > >> > > >     <value>root-region-server</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.zookeeper.quorum</name>
> > > >> > > >     <value>ip-10-34-187-170.eu-west-1.compute.internal</value>
> > > >> > > >   </property>
> > > >> > > >   <property>
> > > >> > > >     <name>hbase.zookeeper.property.clientPort</name>
> > > >> > > >     <value>2181</value>
> > > >> > > >   </property>
> > > >> > > > </configuration>
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > > On Wed, Aug 21, 2013 at 6:09 PM, Jean-Marc Spaggiari <
> > > >> > > > jean-marc@spaggiari.org> wrote:
> > > >> > > >
> > > >> > > > > Hum.
> > > >> > > > >
> > > >> > > > > Things seems to be correct there.
> > > >> > > > >
> > > >> > > > > Can you try something simple like:
> > > >> > > > >
> > > >> > > > >             Configuration config = HBaseConfiguration.create();
> > > >> > > > >             config.set("hbase.zookeeper.quorum", "ip-10-34-187-170");
> > > >> > > > >             HTable table = new HTable(config, Bytes.toBytes("TABLE_NAME"));
> > > >> > > > >
> > > >> > > > > And see if it works?
> > > >> > > > >
> > > >> > > > > JM
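
For reference, a slightly fuller, self-contained version of JM's check above, which also scans a few rows to prove the connection actually reaches the cluster (HBase 0.94-era client API; the quorum host and table name are placeholders, not taken from this thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class QuorumCheck {
        public static void main(String[] args) throws Exception {
            // Point the client at the real quorum instead of the default (localhost).
            Configuration config = HBaseConfiguration.create();
            config.set("hbase.zookeeper.quorum", "ip-10-34-187-170");   // placeholder host
            config.set("hbase.zookeeper.property.clientPort", "2181");

            // Open the table and scan a handful of rows to prove connectivity.
            HTable table = new HTable(config, "TABLE_NAME");            // placeholder table
            ResultScanner scanner = table.getScanner(new Scan());
            try {
                int count = 0;
                for (Result r : scanner) {
                    System.out.println(r);
                    if (++count >= 5) break;   // the first few rows are enough
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }

If this prints rows, the quorum set on the Configuration is being picked up; if it keeps retrying localhost, the client is likely still falling back to a default configuration on its classpath.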
> > > >> > > > >
> > > >> > > > > 2013/8/21 Pavan Sudheendra <pavan0591@gmail.com>
> > > >> > > > >
> > > >> > > > > > Is this a zookeeper specific error or something?
> > > >> > > > > >
> > > >> > > > > >
> > > >> > > > > > On Wed, Aug 21, 2013 at 6:06 PM, Pavan Sudheendra <
> > > >> > > pavan0591@gmail.com
> > > >> > > > > > >wrote:
> > > >> > > > > >
> > > >> > > > > > > Hi Jean,
> > > >> > > > > > >
> > > >> > > > > > > ubuntu@ip-10-34-187-170:~$ cat /etc/hostname
> > > >> > > > > > > ip-10-34-187-170
> > > >> > > > > > > ubuntu@ip-10-34-187-170:~$ hostname
> > > >> > > > > > > ip-10-34-187-170
> > > >> > > > > > >
> > > >> > > > > > >
> > > >> > > > > > >
> > > >> > > > > > > On Wed, Aug 21, 2013 at 6:01 PM, Jean-Marc Spaggiari <
> > > >> > > > > > > jean-marc@spaggiari.org> wrote:
> > > >> > > > > > >
> > > >> > > > > > >> And what about:
> > > >> > > > > > >> # cat /etc/hostname
> > > >> > > > > > >>
> > > >> > > > > > >> and
> > > >> > > > > > >> # hostname
> > > >> > > > > > >>
> > > >> > > > > > >> ?
> > > >> > > > > > >>
> > > >> > > > > > >> 2013/8/21 Pavan Sudheendra <pavan0591@gmail.com>
> > > >> > > > > > >>
> > > >> > > > > > >> > Sure..
> > > >> > > > > > >> > /etc/hosts file:
> > > >> > > > > > >> >
> > > >> > > > > > >> > 127.0.0.1 localhost
> > > >> > > > > > >> > 10.34.187.170 ip-10-34-187-170
> > > >> > > > > > >> > # The following lines are desirable for IPv6 capable hosts
> > > >> > > > > > >> > ::1 ip6-localhost ip6-loopback
> > > >> > > > > > >> > fe00::0 ip6-localnet
> > > >> > > > > > >> > ff00::0 ip6-mcastprefix
> > > >> > > > > > >> > ff02::1 ip6-allnodes
> > > >> > > > > > >> > ff02::2 ip6-allrouters
> > > >> > > > > > >> > ff02::3 ip6-allhosts
> > > >> > > > > > >> >
> > > >> > > > > > >> > Configuration conf = HBaseConfiguration.create();
> > > >> > > > > > >> > conf.set("hbase.zookeeper.quorum", "10.34.187.170");
> > > >> > > > > > >> > conf.set("hbase.zookeeper.property.clientPort","2181");
> > > >> > > > > > >> > conf.set("hbase.master","10.34.187.170");
> > > >> > > > > > >> > Job job = new Job(conf, ViewersTable);
> > > >> > > > > > >> >
> > > >> > > > > > >> > I'm trying to process table data which has 19 million rows..
> > > >> > > > > > >> > It runs fine for a while, although I don't see the map percent
> > > >> > > > > > >> > completion change from 0%.. After a while it says
> > > >> > > > > > >> >
> > > >> > > > > > >> > Task attempt_201304161625_0028_m_000000_0 failed to report
> > > >> > > > > > >> > status for 600 seconds. Killing!
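
For reference, a minimal sketch of how a Configuration like the one above is typically wired into a table-scanning MapReduce job with TableMapReduceUtil (HBase 0.94-era API; the mapper, job name, and table name below are placeholders, not from this thread). Raising the scan caching above the value of 1 set in the hbase-site.xml earlier in the thread, and incrementing a counter per row, are the usual ways to keep a long-running map task from being killed for not reporting status:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class ViewersJob {

        // Placeholder mapper: just counts rows; updating a counter also reports progress.
        static class ViewersMapper extends TableMapper<Text, LongWritable> {
            @Override
            protected void map(ImmutableBytesWritable row, Result value, Context context)
                    throws java.io.IOException, InterruptedException {
                context.getCounter("viewers", "rows").increment(1);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "10.34.187.170");
            conf.set("hbase.zookeeper.property.clientPort", "2181");

            Job job = new Job(conf, "ViewersTable");        // placeholder job name
            job.setJarByClass(ViewersJob.class);

            Scan scan = new Scan();
            scan.setCaching(500);        // fetch rows in batches instead of one at a time
            scan.setCacheBlocks(false);  // don't pollute the region server block cache

            TableMapReduceUtil.initTableMapperJob(
                    "ViewersTable",          // placeholder table name
                    scan,
                    ViewersMapper.class,
                    Text.class,              // mapper output key
                    LongWritable.class,      // mapper output value
                    job);
            job.setOutputFormatClass(NullOutputFormat.class);
            job.setNumReduceTasks(0);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }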
> > > >> > > > > > >> >
> > > >> > > > > > >> >
> > > >> > > > > > >> >
> > > >> > > > > > >> >
> > > >> > > > > > >> >
> > > >> > > > > > >> > On Wed, Aug 21, 2013 at 5:52 PM, Jean-Marc Spaggiari
> <
> > > >> > > > > > >> > jean-marc@spaggiari.org> wrote:
> > > >> > > > > > >> >
> > > >> > > > > > >> > > Can you paste your hosts file here again with the
> > > >> > > > > > >> > > modification you have done?
> > > >> > > > > > >> > >
> > > >> > > > > > >> > > Also, can you share a bit more of your code? What are you
> > > >> > > > > > >> > > doing with the config object after, how do you create your
> > > >> > > > > > >> > > table object, etc.?
> > > >> > > > > > >> > >
> > > >> > > > > > >> > > Thanks,
> > > >> > > > > > >> > >
> > > >> > > > > > >> > > JM
> > > >> > > > > > >> > >
> > > >> > > > > > >> > > 2013/8/21 Pavan Sudheendra <pavan0591@gmail.com>
> > > >> > > > > > >> > >
> > > >> > > > > > >> > > > @Jean tried your method didn't work..
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > > 2013-08-21 12:17:10,908 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > >> > > > > > >> > > > 2013-08-21 12:17:10,908 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
> > > >> > > > > > >> > > > java.net.ConnectException: Connection refused
> > > >> > > > > > >> > > >     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> > > >> > > > > > >> > > >     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> > > >> > > > > > >> > > >     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
> > > >> > > > > > >> > > >     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
> > > >> > > > > > >> > > > 2013-08-21 12:17:11,009 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
> > > >> > > > > > >> > > > 2013-08-21 12:17:11,009 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > > Any tips?
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > > On Wed, Aug 21, 2013 at 5:15 PM, Jean-Marc
> > Spaggiari
> > > <
> > > >> > > > > > >> > > > jean-marc@spaggiari.org> wrote:
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > > > Hi Pavan,
> > > >> > > > > > >> > > > >
> > > >> > > > > > >> > > > > I don't think Cloudera Manager assigns the address to your
> > > >> > > > > > >> > > > > computer. When CM is down, your computer still has an IP, and
> > > >> > > > > > >> > > > > even if you uninstall CM, you will still have an IP assigned
> > > >> > > > > > >> > > > > to your computer.
> > > >> > > > > > >> > > > >
> > > >> > > > > > >> > > > > If you have not configured anything there, then you most
> > > >> > > > > > >> > > > > probably have DHCP. Just give a try to what I told you in the
> > > >> > > > > > >> > > > > other message.
> > > >> > > > > > >> > > > >
> > > >> > > > > > >> > > > > JM
> > > >> > > > > > >> > > > >
> > > >> > > > > > >> > > > > 2013/8/21 Pavan Sudheendra <
> pavan0591@gmail.com>
> > > >> > > > > > >> > > > >
> > > >> > > > > > >> > > > > > @Manoj I have set hbase.zookeeper.quorum in my M-R
> > > >> > > > > > >> > > > > > application..
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > > > @Jean The Cloudera Manager picks up the IP address
> > > >> > > > > > >> > > > > > automatically..
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > > > On Wed, Aug 21, 2013 at 5:07 PM, manoj p <
> > > >> > > > eorstvz@gmail.com
> > > >> > > > > >
> > > >> > > > > > >> > wrote:
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > > > > Can you try passing the argument
> > > >> > > > > > >> > > > > > > -Dhbase.zookeeper.quorum=10.34.187.170 while running the
> > > >> > > > > > >> > > > > > > program?
> > > >> > > > > > >> > > > > > >
> > > >> > > > > > >> > > > > > > If this doesn't work either, please check if HBASE_HOME
> > > >> > > > > > >> > > > > > > and HBASE_CONF_DIR are set correctly.
> > > >> > > > > > >> > > > > > >
> > > >> > > > > > >> > > > > > > BR/Manoj
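
For reference, that usually looks like the line below on the command line; the -D generic options only take effect if the driver parses them (e.g. via ToolRunner/GenericOptionsParser), and the jar and class names here are placeholders:

    # Generic -D options must come before the job's own arguments
    hadoop jar my-hbase-job.jar com.example.MyDriver \
        -Dhbase.zookeeper.quorum=10.34.187.170 \
        -Dhbase.zookeeper.property.clientPort=2181 \
        <job-args>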
> > > >> > > > > > >> > > > > > >
> > > >> > > > > > >> > > > > > >
> > > >> > > > > > >> > > > > > > On Wed, Aug 21, 2013 at 4:48 PM, Pavan
> > > >> Sudheendra <
> > > >> > > > > > >> > > > pavan0591@gmail.com
> > > >> > > > > > >> > > > > > > >wrote:
> > > >> > > > > > >> > > > > > >
> > > >> > > > > > >> > > > > > > > Yes. My /etc/hosts has the correct mapping to
> > > >> > > > > > >> > > > > > > > localhost
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > > 127.0.0.1    localhost
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > > # The following lines are desirable for IPv6 capable hosts
> > > >> > > > > > >> > > > > > > > ::1     ip6-localhost ip6-loopback
> > > >> > > > > > >> > > > > > > > fe00::0 ip6-localnet
> > > >> > > > > > >> > > > > > > > ff00::0 ip6-mcastprefix
> > > >> > > > > > >> > > > > > > > ff02::1 ip6-allnodes
> > > >> > > > > > >> > > > > > > > ff02::2 ip6-allrouters
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > > I've added the HBase jars to the Hadoop classpath as
> > > >> > > > > > >> > > > > > > > well. Not sure why.. I'm running this on a 6-node
> > > >> > > > > > >> > > > > > > > Cloudera cluster which consists of 1 jobtracker and 5
> > > >> > > > > > >> > > > > > > > tasktrackers..
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > > After a while all my map jobs fail.. Completely baffled
> > > >> > > > > > >> > > > > > > > because the map tasks were doing the required tasks..
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > > On Wed, Aug 21, 2013 at 4:45 PM, manoj p
> <
> > > >> > > > > > eorstvz@gmail.com
> > > >> > > > > > >> >
> > > >> > > > > > >> > > > wrote:
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > > > For your code to run, please ensure you use the correct
> > > >> > > > > > >> > > > > > > > > HBase/Hadoop jar versions while compiling your program.
> > > >> > > > > > >> > > > > > > > >
> > > >> > > > > > >> > > > > > > > > BR/Manoj
> > > >> > > > > > >> > > > > > > > >
> > > >> > > > > > >> > > > > > > > >
> > > >> > > > > > >> > > > > > > > > On Wed, Aug 21, 2013 at 4:38 PM, manoj
> p
> > <
> > > >> > > > > > >> eorstvz@gmail.com>
> > > >> > > > > > >> > > > > wrote:
> > > >> > > > > > >> > > > > > > > >
> > > >> > > > > > >> > > > > > > > > > Check your /etc/hosts file to see if you have the correct
> > > >> > > > > > >> > > > > > > > > > mapping to localhost for 127.0.0.1. Also ensure that you
> > > >> > > > > > >> > > > > > > > > > have hbase.zookeeper.quorum in your configuration, and
> > > >> > > > > > >> > > > > > > > > > check that the HBase classpath is appended to the Hadoop
> > > >> > > > > > >> > > > > > > > > > classpath.
> > > >> > > > > > >> > > > > > > > > >
> > > >> > > > > > >> > > > > > > > > >
> > > >> > > > > > >> > > > > > > > > > BR/Manoj
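
For reference, a common way to do the classpath part when launching the job (assuming the hbase launcher script is on the PATH; the jar and class names are placeholders):

    # Make the HBase jars and hbase-site.xml visible to the hadoop launcher
    export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$(hbase classpath)"
    hadoop jar my-hbase-job.jar com.example.MyDriver <job-args>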
> > > >> > > > > > >> > > > > > > > > >
> > > >> > > > > > >> > > > > > > > > >
> > > >> > > > > > >> > > > > > > > > > On Wed, Aug 21, 2013 at 4:10 PM,
> Pavan
> > > >> > > Sudheendra
> > > >> > > > <
> > > >> > > > > > >> > > > > > > pavan0591@gmail.com
> > > >> > > > > > >> > > > > > > > > >wrote:
> > > >> > > > > > >> > > > > > > > > >
> > > >> > > > > > >> > > > > > > > > >> Hadoop Namenode reports the following error, which is
> > > >> > > > > > >> > > > > > > > > >> unusual:
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >> 2013-08-21 09:21:12,328 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > >> > > > > > >> > > > > > > > > >> java.net.ConnectException: Connection refused
> > > >> > > > > > >> > > > > > > > > >>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> > > >> > > > > > >> > > > > > > > > >>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> > > >> > > > > > >> > > > > > > > > >>     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
> > > >> > > > > > >> > > > > > > > > >>     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
> > > >> > > > > > >> > > > > > > > > >> 2013-08-21 09:33:11,033 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
> > > >> > > > > > >> > > > > > > > > >> 2013-08-21 09:33:11,033 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
> > > >> > > > > > >> > > > > > > > > >> 2013-08-21 09:33:11,043 WARN org.apache.hadoop.mapred.Task: Parent died. Exiting attempt_201307181246_0548_m_000022_2
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >> Because I have specified the address in the java file:
> > > >> > > > > > >> > > > > > > > > >>     Configuration conf = HBaseConfiguration.create();
> > > >> > > > > > >> > > > > > > > > >>     conf.set("hbase.zookeeper.quorum", "10.34.187.170");
> > > >> > > > > > >> > > > > > > > > >>     conf.set("hbase.zookeeper.property.clientPort","2181");
> > > >> > > > > > >> > > > > > > > > >>     conf.set("hbase.master","10.34.187.170");
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >> All my map tasks fail like this! Please help.. I'm on a
> > > >> > > > > > >> > > > > > > > > >> timebomb
> > > >> > > > > > >> > > > > > > > > >> --
> > > >> > > > > > >> > > > > > > > > >> Regards-
> > > >> > > > > > >> > > > > > > > > >> Pavan
> > > >> > > > > > >> > > > > > > > > >>
> > > >> > > > > > >> > > > > > > > > >
> > > >> > > > > > >> > > > > > > > > >
> > > >> > > > > > >> > > > > > > > >
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > > > --
> > > >> > > > > > >> > > > > > > > Regards-
> > > >> > > > > > >> > > > > > > > Pavan
> > > >> > > > > > >> > > > > > > >
> > > >> > > > > > >> > > > > > >
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > > > --
> > > >> > > > > > >> > > > > > Regards-
> > > >> > > > > > >> > > > > > Pavan
> > > >> > > > > > >> > > > > >
> > > >> > > > > > >> > > > >
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > > > --
> > > >> > > > > > >> > > > Regards-
> > > >> > > > > > >> > > > Pavan
> > > >> > > > > > >> > > >
> > > >> > > > > > >> > >
> > > >> > > > > > >> >
> > > >> > > > > > >> >
> > > >> > > > > > >> >
> > > >> > > > > > >> > --
> > > >> > > > > > >> > Regards-
> > > >> > > > > > >> > Pavan
> > > >> > > > > > >> >
> > > >> > > > > > >>
> > > >> > > > > > >
> > > >> > > > > > >
> > > >> > > > > > >
> > > >> > > > > > > --
> > > >> > > > > > > Regards-
> > > >> > > > > > > Pavan
> > > >> > > > > > >
> > > >> > > > > >
> > > >> > > > > >
> > > >> > > > > >
> > > >> > > > > > --
> > > >> > > > > > Regards-
> > > >> > > > > > Pavan
> > > >> > > > > >
> > > >> > > > >
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > > --
> > > >> > > > Regards-
> > > >> > > > Pavan
> > > >> > > >
> > > >> > >
> > > >> >
> > > >> >
> > > >> >
> > > >> > --
> > > >> > Regards-
> > > >> > Pavan
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > Regards-
> > > > Pavan
> > > >
> > >
> > >
> > >
> > > --
> > > Regards-
> > > Pavan
> > >
> >
>
>
>
> --
> Regards-
> Pavan
>
