hbase-user mailing list archives

From: Vinod Gupta Tankala <tvi...@readypulse.com>
Subject: Re: how to set mapred.system.dir?
Date: Thu, 29 Sep 2011 18:06:07 GMT
Thanks Dejo for pointing that out. I realized that earlier and fixed it. But
I still hit the same problem.

In my case, I only have a single host for now, but I am still trying to do a
distributed setup by listing the machine itself as a slave in the config and
not using localhost anywhere. Does this even work? If not, I can try spending
more time on a pseudo-distributed setup for now.
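
For reference, this is roughly what I mean - both conf/masters and conf/slaves
just list the machine's own public hostname (no localhost anywhere):

conf/masters:
ec2-184-73-22-146.compute-1.amazonaws.com

conf/slaves:
ec2-184-73-22-146.compute-1.amazonaws.com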

thanks


On Thu, Sep 29, 2011 at 4:48 AM, Dejan Menges <dejan.menges@gmail.com> wrote:

> In core-site.xml, first of all, you are missing the port at the end of the
> HDFS URL:
>
>  <property>
>   <name>fs.default.name</name>
>   <value>hdfs://ec2-184-73-22-146.compute-1.amazonaws.com/</value>
>  </property>
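>
> For example, with the port added (9000 here is just an example - use whatever
> port your NameNode actually listens on), the value would look something like:
>
>  <property>
>   <name>fs.default.name</name>
>   <value>hdfs://ec2-184-73-22-146.compute-1.amazonaws.com:9000</value>
>  </property>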
>
> Regards,
> Dejo
>
> On Wed, Sep 28, 2011 at 6:21 PM, Vinod Gupta Tankala
> <tvinod@readypulse.com> wrote:
>
> > Hi,
> > I am trying to set up a test system to host a distributed HBase
> > installation.
> > No matter what I do, I get the errors below.
> >
> > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/mapred/system/jobtracker.info" - Aborting...
> > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://ec2-184-73-22-146.compute-1.amazonaws.com/tmp/mapred/system/jobtracker.info failed!
> > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
> > 2011-09-28 22:17:26,292 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
> > ....
> >
> > This is how I set up my config -
> > core-site.xml -
> > <configuration>
> >
> >  <property>
> >    <name>fs.default.name</name>
> >    <value>hdfs://ec2-184-73-22-146.compute-1.amazonaws.com/</value>
> >  </property>
> >
> > </configuration>
> >
> > hdfs-site.xml -
> > <configuration>
> >
> >  <property>
> >    <name>dfs.replication</name>
> >    <value>1</value>
> >  </property>
> >
> >  <property>
> >    <name>dfs.name.dir</name>
> >    <value>/tmp/hbase</value>
> >  </property>
> >
> >  <property>
> >    <name>dfs.data.dir</name>
> >    <value>/tmp/hbase</value>
> >  </property>
> >
> > </configuration>
> >
> >
> > mapred-site.xml -
> > <configuration>
> >
> >  <property>
> >    <name>mapred.job.tracker</name>
> >    <value>ec2-184-73-22-146.compute-1.amazonaws.com:9001</value>
> >  </property>
> >
> >  <property>
> >    <name>mapred.local.dir</name>
> >    <value>/tmp/mapred_tmp</value>
> >  </property>
> >
> >  <property>
> >    <name>mapred.map.tasks</name>
> >    <value>10</value>
> >  </property>
> >
> >  <property>
> >    <name>mapred.reduce.tasks</name>
> >    <value>2</value>
> >  </property>
> >
> >  <property>
> >    <name>mapred.system.dir</name>
> >    <value>/tmp/mapred/system/</value>
> >  </property>
> >
> >
> > </configuration>
> >
> > I know that I am missing something really basic, but I'm not sure what it
> > is. The documentation says mapred.system.dir should be globally accessible.
> > How do I achieve that?
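> >
> > Would fully qualifying it against HDFS, something along these lines, be
> > what the docs mean by globally accessible? (just guessing here)
> >
> >  <property>
> >    <name>mapred.system.dir</name>
> >    <value>hdfs://ec2-184-73-22-146.compute-1.amazonaws.com/tmp/mapred/system</value>
> >  </property>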
> >
> > thanks
> > vinod
> >
>
