hbase-user mailing list archives

From Roberto Alonso <roal...@gmail.com>
Subject Re: distributed cluster problem
Date Wed, 28 Mar 2012 08:18:34 GMT
Hello,

I don't think that is the problem. If I don't set the env variable
$HBASE_CONF_DIR, everything starts correctly, so I don't think I need to set
it. My problem is that the MapReduce job is not executing in parallel.
If I ask:
 Configuration config = HBaseConfiguration.create();
 config.get("hbase.cluster.distributed")
 It says hbase.cluster.distributed = false, although I have this in my
config files:
 <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
</property>
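
In case it is useful, here is roughly the check I am running to see which
hbase-site.xml the client actually loads. It is only a diagnostic sketch built
on the standard Hadoop/HBase API; the CheckHBaseConf class name is just
something I made up for the test:

 import java.net.URL;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;

 public class CheckHBaseConf {
   public static void main(String[] args) {
     Configuration config = HBaseConfiguration.create();
     // URL of the hbase-site.xml found on the client classpath,
     // or null if none was found and only the defaults apply
     URL site = config.getResource("hbase-site.xml");
     System.out.println("hbase-site.xml loaded from: " + site);
     System.out.println("hbase.cluster.distributed = "
         + config.get("hbase.cluster.distributed"));
     System.out.println("hbase.zookeeper.quorum = "
         + config.get("hbase.zookeeper.quorum"));
   }
 }

If getResource() returns null here, the job is only seeing hbase-default.xml,
which would explain the false value.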

On 27 March 2012 18:57, Dave Wang <dsw@cloudera.com> wrote:

> Roberto,
>
> That error is not quite what I expected.
>
> $HBASE_CONF_DIR is what the hbase scripts use as an alternate to the
> default location of the conf files in $HBASE_HOME/conf.
>
> How do you know that ZooKeeper is not starting?
>
> Also, did you define HBASE_MANAGES_ZK to be "true" (it is "true" by
> default)?  You will have to do so if you want HBase to start ZooKeeper.
>
> Do you have the other hosts that you want to run region servers on, defined
> in $HBASE_CONF_DIR/regionservers?
>
> Have you looked at:
>
> http://hbase.apache.org/book/configuration.html
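>
> For illustration only (the hostnames below are placeholders, not yours), the
> files I am referring to under $HBASE_CONF_DIR would look roughly like this:
>
>   # conf/hbase-env.sh -- let the HBase scripts start/stop ZooKeeper
>   export HBASE_MANAGES_ZK=true
>
>   # conf/regionservers -- one region server hostname per line
>   node1.example.com
>   node2.example.com
>   node3.example.com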
>
> - Dave
>
> On Tue, Mar 27, 2012 at 12:49 AM, Roberto Alonso <roalva1@gmail.com>
> wrote:
>
> > Hello,
> >
> > I put it in my .bashrc and re-sourced it, and the problem is still there. I
> > haven't seen too many documents on the web about this HBASE_CONF_DIR, only
> > something about Pig, but I am not using it.
> > I have 4 servers; after setting HBASE_CONF_DIR, only the first node starts,
> > and on the others there is nothing new in the logs.
> > On the first node it says this:
> >
> >
> > 12/03/27 09:40:57 INFO wal.SequenceFileLogReader: Input stream class: org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker, not adjusting length
> > 12/03/27 09:40:57 WARN wal.HLogSplitter: Could not open file:/tmp/hbase-hadoopuser/hbase/.logs/gen19.bioinfo.cipf.es,55437,1332777708660-splitting/gen19.bioinfo.cipf.es%2C55437%2C1332777708660.1332781312292 for reading. File is empty
> > java.io.EOFException
> >        at java.io.DataInputStream.readFully(DataInputStream.java:180)
> >        at java.io.DataInputStream.readFully(DataInputStream.java:152)
> >        at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
> >        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
> >        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
> >        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
> >        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:58)
> >        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:166)
> >        at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:649)
> >        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:845)
> >        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:758)
> >        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:384)
> >        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
> >        at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
> >        at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
> >        at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
> >        at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
> >        at java.lang.Thread.run(Thread.java:662)
> >
> > ...
> > 12/03/27 09:40:58 INFO master.HMaster: -ROOT- assigned=1, rit=false, location=ge
> > 12/03/27 09:40:58 INFO ipc.HBaseRPC: Server at gen19.bioinfo.cipf.es/172.24.78.9
> > 12/03/27 09:40:58 INFO catalog.CatalogTracker: Passed hostingServer is null
> > 12/03/27 09:40:58 INFO ipc.HBaseRPC: Server at gen19.bioinfo.cipf.es/172.24.78.9
> > 12/03/27 09:40:58 INFO catalog.CatalogTracker: Passed hostingServer is null
> > 12/03/27 09:40:58 INFO ipc.HBaseRPC: Server at gen19.bioinfo.cipf.es/172.24.78.9
> > 12/03/27 09:40:58 INFO catalog.CatalogTracker: Passed hostingServer is null
> > 12/03/27 09:40:58 INFO ipc.HBaseRPC: Server at gen19.bioinfo.cipf.es/172.24.78.
> >
> > I don't know if I am doing this right... any ideas?
> >
> > thanks!!
> > On 26 March 2012 17:56, Dave Wang <dsw@cloudera.com> wrote:
> >
> > > Roberto,
> > >
> > > It should be set in whatever shell you are using.  If you are using bash,
> > > then .bashrc seems reasonable.  Remember to re-source your .bashrc after
> > > making the change.  You can verify by running "env | grep HBASE_CONF_DIR"
> > > from your shell.
> > >
> > > If your ZooKeeper is not starting, we'll need to see the output of your
> > > logs and command line in order to debug further, as well as the contents
> > > of your HBase config files.
> > >
> > > - Dave
> > >
> > > On Mon, Mar 26, 2012 at 8:51 AM, Roberto Alonso <roalva1@gmail.com>
> > wrote:
> > >
> > > > Thanks Dave for your answer. In hbase-env.sh I have $HADOOP_CONF_DIR set,
> > > > but not $HBASE_CONF_DIR. I have now put it in that file, but my ZooKeeper
> > > > doesn't start. Should I put the variable in .bashrc or in another file?
> > > >
> > > > thanks!
> > > >
> > > > On 26 March 2012 17:39, Dave Wang <dsw@cloudera.com> wrote:
> > > >
> > > > > Roberto,
> > > > >
> > > > > Is your $HBASE_CONF_DIR pointing to the directory that contains your
> > > > > hbase-site.xml?
> > > > >
> > > > > - Dave
> > > > >
> > > > > On Mon, Mar 26, 2012 at 8:35 AM, Roberto Alonso CIPF <ralonso@cipf.es>
> > > > > wrote:
> > > > >
> > > > > > Hello!
> > > > > > I am experiencing some problems because I think I don't have
> > > > > > distributed computation.
> > > > > > I have MapReduce code where I go to a table and get something of
> > > > > > interest to me. When I run htop on my 4 servers I see that the
> > > > > > processors are working sequentially, not in parallel; in other
> > > > > > words, one after the other but never all at the same time, so I
> > > > > > guess my MapReduce job is not working well. But if I ask:
> > > > > > Configuration config = HBaseConfiguration.create();
> > > > > > config.get("hbase.cluster.distributed")
> > > > > > It says hbase.cluster.distributed = false, although I have this in
> > > > > > my config files:
> > > > > >  <property>
> > > > > >            <name>hbase.cluster.distributed</name>
> > > > > >            <value>true</value>
> > > > > >         </property>
> > > > > > What do you think is going on there?
> > > > > >
> > > > > > Thanks a lot!
> > > > > >
> > > > > > --
> > > > > > Roberto Alonso
> > > > > > Bioinformatics and Genomics Department
> > > > > > Centro de Investigacion Principe Felipe (CIPF)
> > > > > > C/E.P. Avda. Autopista del Saler, 16-3 (junto Oceanografico)
> > > > > > 46012 Valencia, Spain
> > > > > > Tel: +34 963289680 Ext. 1021
> > > > > > Fax: +34 963289574
> > > > > > E-Mail: ralonso@cipf.es
> > > > > >
> > > > >
> > > >
> > >
> >
>
