hadoop-mapreduce-user mailing list archives

From Azuryy Yu <azury...@gmail.com>
Subject Re: The best practice of migrating hadoop 1.0.1 to hadoop 2.2.3
Date Thu, 06 Mar 2014 05:45:29 GMT
Hi,

1)      Is it possible to do an "in-place" migration, while keeping all
data in HDFS safe?

Yes. Stop HDFS first, then run "start-dfs.sh -upgrade".
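A hedged sketch of that in-place upgrade sequence on a live cluster; the metadata and backup paths below are placeholders (check your own dfs.name.dir setting), and the scripts live in the bin/sbin directories of the respective Hadoop installs:

```shell
# 1. Stop the running 1.0.1 HDFS daemons (as the HDFS superuser).
stop-dfs.sh

# 2. Back up the NameNode metadata before touching anything.
#    /data/dfs/name is a placeholder for your dfs.name.dir value.
cp -r /data/dfs/name /backup/dfs-name-pre-upgrade

# 3. Point HADOOP_HOME/PATH at the new 2.x install, then start HDFS
#    in upgrade mode so the NameNode converts the on-disk layout.
start-dfs.sh -upgrade

# 4. Sanity-check the cluster and the data before going further.
hdfs dfsadmin -report
hdfs fsck /

# 5. Only once you are satisfied, make the upgrade permanent.
#    Until this step you can still revert with "start-dfs.sh -rollback";
#    after finalizing, rollback is no longer possible.
hdfs dfsadmin -finalizeUpgrade
```

Note that the previous on-disk layout is kept until -finalizeUpgrade is run, so the DataNodes temporarily need extra disk headroom during the upgrade window.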

2)      If yes, is there any doc/guidance on how to do this?

You just want an HDFS upgrade, so I don't think there is a dedicated doc for this specific case.

3)      Is the 2.0.3 MR API binary compatible with the one of 1.0.1?

The FileSystem API is not fully compatible, and there are some new HDFS
configuration properties and some deprecated ones.
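In practice the safest path is to write (or keep) job code against the "new" org.apache.hadoop.mapreduce API and recompile it against the 2.x jars rather than rely on binary compatibility. A minimal sketch of such a driver, using the standard WordCount shape; it assumes hadoop-client 2.x on the classpath and is illustrative, not a drop-in for your jobs:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

    // Mapper/Reducer written against the "new" org.apache.hadoop.mapreduce
    // API, which is the source-compatible path between 1.x and 2.x.
    public static class WordMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                              Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // On 2.x, Job.getInstance(conf, ...) replaces the deprecated
        // "new Job(conf, ...)" constructor from the 1.x era.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Jobs built this way typically need only a recompile against the 2.x artifacts; code using the old org.apache.hadoop.mapred API or deprecated FileSystem methods is where the incompatibilities usually surface.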




On Thu, Mar 6, 2014 at 12:47 PM, Jerry Zhang <emacs2008@hotmail.com> wrote:

>  Hi there
>
> We plan to migrate a 30-node hadoop 1.0.1 cluster to version 2.3.0.
> We don't have extra machines to set up a separate new cluster, so we hope
> to do an "in-place" migration by replacing the components on the existing
> computers. So the questions are:
>
> 1)      Is it possible to do an "in-place" migration, while keeping all
> data in HDFS safe?
>
> 2)      If yes, is there any doc/guidance on how to do this?
>
> 3)      Is the 2.0.3 MR API binary compatible with the one of 1.0.1?
>
> Any information is highly appreciated.
>
> Jerry Zhang
