hadoop-mapreduce-user mailing list archives

From Chaitanya Krishna <chaitanyavv.ii...@gmail.com>
Subject Re: How to specify HADOOP_COMMON_HOME in hadoop-mapreduce trunk?
Date Mon, 06 Sep 2010 04:39:35 GMT
Rita,

   I think this should be solved by building a jar of hadoop-common and
placing it on the classpath of mapreduce-trunk/hdfs-trunk (preferably in
lib/, to keep with the convention of putting all dependent jars in that
folder).
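
For example, here is a rough sketch (assuming the ant build and default
output paths of the current trunks; the checkout locations below are
hypothetical, so adjust them to your setup):

    # build the hadoop-common jar (assumes an ant 'jar' target, as in trunk)
    cd ~/src/hadoop-common-trunk
    ant jar

    # drop the resulting jar into lib/ of the hdfs and mapreduce trunks
    cp build/hadoop-common-*.jar ~/src/hadoop-hdfs-trunk/lib/
    cp build/hadoop-common-*.jar ~/src/hadoop-mapreduce-trunk/lib/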

Hope this helps.

Thanks,
Chaitanya


On Mon, Sep 6, 2010 at 10:04 AM, Rita Liu <crystaldoll06@gmail.com> wrote:

> Hi :)
>
> In the current hadoop-common trunk, bin/start-all.sh has been deprecated.
> To start the cluster, we have to go to the hadoop-hdfs and hadoop-mapred
> trunks and run bin/start-dfs.sh and bin/start-mapred.sh separately.
> However, when I do so, I always get the error message "can't find hadoop
> common". This problem can be solved if I export
> HADOOP_COMMON_HOME to point at the hadoop-common trunk, but is there a way
> to configure this environment variable so that I don't have to export
> HADOOP_COMMON_HOME every time I start the cluster?
>
> Please help if possible. Thank you very much!
> -Rita :))
>
>
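
To avoid exporting HADOOP_COMMON_HOME by hand each time, one option is to
set it once in conf/hadoop-env.sh of the hdfs and mapreduce trunks, which
the start scripts should pick up via bin/hadoop-config.sh (a sketch; the
path below is hypothetical, so point it at your own checkout):

    # conf/hadoop-env.sh -- tell the start scripts where hadoop-common lives
    export HADOOP_COMMON_HOME=$HOME/src/hadoop-common-trunk

Adding the same export to your shell profile (e.g. ~/.bashrc) should work
as well.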
