Subject: Re: Question about DataNode
From: Bertrand Dechoux
To: user@hadoop.apache.org
Date: Thu, 27 Feb 2014 11:08:49 +0100

I am not sure what your question is. You might want to be more explicit and
read more about the Hadoop architecture and the roles of the various daemons.
A directory is only metadata, so you can mess around with DataNodes if you
want, but they are not involved.
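For example, here is a minimal sketch of what that means in practice
(Hadoop 2.x; the /user/hadoop path and the location of hadoop-daemon.sh are
illustrative, and hdfs dfs is just the non-deprecated form of the hadoop dfs
command used in the quoted thread):

[hadoop@slave]$ sbin/hadoop-daemon.sh stop datanode     # take the slave's DataNode down
[hadoop@master]$ hdfs dfs -mkdir -p /user/hadoop/test   # still succeeds: mkdir only writes NameNode metadata
[hadoop@master]$ hdfs dfs -ls /user/hadoop              # the new directory is listed
[hadoop@master]$ hdfs dfs -put /etc/hosts /user/hadoop/test/hosts
# Unlike the mkdir, the put has to stream blocks to DataNodes, so with no
# DataNode alive it fails with a replication error instead of succeeding.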
Regards

Bertrand

Bertrand Dechoux


On Thu, Feb 27, 2014 at 10:12 AM, Juan Carlos <jucaf1@gmail.com> wrote:

> Hi Edward,
> Maybe you are sending your request to the master from the slave. I am not
> sure, but I think the secondary never answers any request, not even read
> requests, and you would have to modify your config files by hand to change
> your slave into the master.
>
> I haven't tested master/slave configurations much; I have only tested QJM
> and NFS synchronization. In those cases, whenever you start the NameNodes
> they first synchronize, by checking the JournalNodes or the NFS directory
> for metadata changes, before they can be promoted to active NameNode.
>
>
> 2014-02-27 9:52 GMT+01:00 EdwardKing <zhangsc@neusoft.com>:
>
>> Two nodes: one is the master, the other is the slave. I kill the DataNode
>> on the slave, then create a directory with a dfs command on the master
>> machine:
>> [hadoop@master]$ ./start-all.sh
>> [hadoop@slave]$ jps
>> 9917 DataNode
>> 10152 Jps
>> [hadoop@slave]$ kill -9 9917
>> [hadoop@master]$ hadoop dfs -mkdir test
>> [hadoop@master]$ hadoop dfs -ls
>> drwxr-xr-x   - hadoop supergroup          0 2014-02-27 00:15 test
>> I guess the test directory can't exist on the slave, because the slave's
>> DataNode was killed. Right?
>> Then I restart the services from the master, like this:
>> [hadoop@master]$ ./start-all.sh
>>
>> This time I find that the slave also contains the test directory. Why?
>> Can Hadoop 2.2.0 recover automatically?
>> Any idea will be appreciated.
>> Best regards,
>> Edward
>>
>>
>> ---------------------------------------------------------------------------------------------------
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s) is intended only for the use of the intended
>> recipient and may be confidential and/or privileged to Neusoft Corporation,
>> its subsidiaries and/or its affiliates. If any reader of this communication
>> is not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying is strictly prohibited, and may be unlawful.
>> If you have received this communication in error, please immediately notify
>> the sender by return e-mail, and delete the original message and all copies
>> from your system. Thank you.
>> ---------------------------------------------------------------------------------------------------
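Coming back to the original question, a hedged sketch of how you could check
from the master that the test directory lives only in the NameNode's
namespace (Hadoop 2.x; the /user/hadoop path is illustrative):

[hadoop@master]$ hdfs dfs -ls /user/hadoop            # test is listed whether or not the slave's DataNode is running
[hadoop@master]$ hdfs fsck /user/hadoop/test -files   # reports the directory with 0 blocks, i.e. nothing stored on any DataNode
[hadoop@master]$ hdfs dfsadmin -report                # shows live/dead DataNodes, independently of the namespace
# Note that after a kill -9 the NameNode only marks the DataNode dead once
# the heartbeat timeout expires (roughly ten minutes with default settings).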