hadoop-common-dev mailing list archives

From Eli Collins <...@cloudera.com>
Subject Re: run hadoop directly out of trunk checkout?
Date Tue, 21 Jun 2011 22:19:32 GMT
Hey Eric,

It works for hdfs (here are the scripts I used:
https://github.com/elicollins/hadoop-dev)

Not long ago it worked for everything; it looks like MR was recently broken. I
think there's a jira for this.

$ jt2
/home/eli/src/hadoop2/mapreduce/bin/mapred: line 22:
/home/eli/src/hadoop2/mapreduce/bin/../libexec/mapred-config.sh: No such file or directory

Thanks,
Eli

On Tue, Jun 21, 2011 at 2:41 PM, Eric Caspole <Eric.Caspole@amd.com> wrote:

> Is it still possible to run hadoop directly out of a svn checkout and build
> of trunk? A few weeks ago I was using the three variables
> HADOOP_HDFS_HOME/HADOOP_COMMON_HOME/HADOOP_MAPREDUCE_HOME and it all
> worked fine. It seems there have been a lot of changes in the scripts, and I
> can't get it to work or figure out what else to set, either in the shell env
> or at the top of hadoop-env.sh. I have checked out trunk with a dir
> structure like this:
>
> [trunk]$ pwd
> /home/ecaspole/views/hadoop/trunk
> [trunk]$ ll
> total 12
> drwxrwxr-x. 12 ecaspole ecaspole 4096 Jun 21 15:55 common
> drwxrwxr-x. 10 ecaspole ecaspole 4096 Jun 21 13:20 hdfs
> drwxrwxr-x. 11 ecaspole ecaspole 4096 Jun 21 16:19 mapreduce
>
> [ecaspole@wsp133572wss hdfs]$ env | grep HADOOP
> HADOOP_HDFS_HOME=/home/ecaspole/views/hadoop/trunk/hdfs/
> HADOOP_COMMON_HOME=/home/ecaspole/views/hadoop/trunk/common
> HADOOP_MAPREDUCE_HOME=/home/ecaspole/views/hadoop/trunk/mapreduce/
>
> [hdfs]$ ./bin/start-dfs.sh
> ./bin/start-dfs.sh: line 54: /home/ecaspole/views/hadoop/trunk/common/bin/../bin/hdfs:
> No such file or directory
> Starting namenodes on []
> localhost: starting namenode, logging to
> /home/ecaspole/views/hadoop/trunk/common/logs/ecaspole/hadoop-ecaspole-namenode-wsp133572wss.amd.com.out
> localhost: Hadoop common not found.
> localhost: starting datanode, logging to
> /home/ecaspole/views/hadoop/trunk/common/logs/ecaspole/hadoop-ecaspole-datanode-wsp133572wss.amd.com.out
> localhost: Hadoop common not found.
> Secondary namenodes are not configured.  Cannot start secondary namenodes.
>
> Does anyone else actually run it this way? If so could you show what
> variables you set and where so the components can find each other?
>
> Otherwise, what is the recommended way to run a build of trunk?
> Thanks,
> Eric
>
>
>

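For reference, the setup the quoted message describes boils down to exporting the three per-project variables before invoking the start scripts. A minimal sketch, assuming a checkout root of `$HOME/views/hadoop/trunk` (a placeholder, not necessarily anyone's actual path):

```shell
# Hypothetical checkout root; substitute your own trunk location.
TRUNK=$HOME/views/hadoop/trunk

# Point each component at its project directory in the split trunk layout.
export HADOOP_COMMON_HOME=$TRUNK/common
export HADOOP_HDFS_HOME=$TRUNK/hdfs
export HADOOP_MAPREDUCE_HOME=$TRUNK/mapreduce

# With the three projects built, HDFS would then be started via:
#   $HADOOP_HDFS_HOME/bin/start-dfs.sh
```

Whether this still works against current trunk depends on the script changes discussed above; the "Hadoop common not found" errors suggest the scripts may now resolve paths differently.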