hadoop-common-user mailing list archives

From Ken Goodhope <kengoodh...@gmail.com>
Subject Re: decommission a node
Date Tue, 06 Jul 2010 15:34:23 GMT
Inside the HDFS conf (hdfs-site.xml), there is:

   <property>
      <name>dfs.hosts.exclude</name>
      <value></value>
      <description>Names a file that contains a list of hosts that are
      not permitted to connect to the namenode.  The full pathname of the
      file must be specified.  If the value is empty, no hosts are
      excluded.</description>
   </property>

Point this property at a file containing a list of the nodes you want to
decommission.  From there, run the command line "hadoop dfsadmin
-refreshNodes".


On Tue, Jul 6, 2010 at 7:31 AM, Some Body <somebody@squareplanet.de> wrote:

> Hi,
>
> Is it possible to move all the data blocks off a cluster node and then
> decommission the node?
>
> I'm asking because, now that my MR job is working, I'd like to see how
> things scale with fewer processing nodes and different amounts of data
> (number & size of files, etc.). I currently have 8 nodes and am
> processing 5GB spread across 2000 files.
>
> Alan
>
