Subject: Re: DFSClient may read wrong data in local read
From: jlei liu <liulei412@gmail.com>
To: user@hadoop.apache.org
Date: Fri, 28 Sep 2012 12:30:37 +0800

Hi Colin, thanks for your reply.

Where can I see the new design for the BlockReaderLocal class?
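To check my understanding of the redesign you mention: the advantage of
passing descriptors instead of paths would be roughly the difference in the
toy sketch below. This is only my own illustration (made-up file names,
assumes a POSIX filesystem where an open descriptor outlives an unlink), not
the actual HDFS code:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class StalePathDemo {
        public static void main(String[] args) throws IOException {
            // Stand-in for a local block file (made-up name).
            Path blk = Files.createTempFile("blk_1001", ".data");
            Files.write(blk, "original block bytes".getBytes(StandardCharsets.UTF_8));

            // Holding an open stream is analogous to holding a passed descriptor.
            FileInputStream held = new FileInputStream(blk.toFile());

            // Simulate the block file being deleted and the path reused.
            Files.delete(blk);
            Files.write(blk, "unrelated new bytes".getBytes(StandardCharsets.UTF_8));

            // Re-opening the cached *path* now returns the unrelated contents.
            System.out.println("via cached path: "
                + new String(Files.readAllBytes(blk), StandardCharsets.UTF_8));

            // The stream opened earlier still reads the original bytes, because
            // the old inode stays alive while a descriptor refers to it.
            byte[] buf = new byte[64];
            int n = held.read(buf);
            System.out.println("via held descriptor: "
                + new String(buf, 0, n, StandardCharsets.UTF_8));
            held.close();
            Files.delete(blk);
        }
    }

With only a cached path, the reader has no way to tell that the bytes now
belong to a different file, which is why the descriptor-based approach sounds
safer to me.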
Thanks,

LiuLei


2012/9/28 Colin McCabe <cmccabe@alumni.cmu.edu>:
> We don't make very strong guarantees about what happens when clients
> read from a deleted file. DFSClients definitely may read data from a
> deleted file even if local reads are not enabled.
>
> Incidentally, BlockReaderLocal is being redesigned to pass file
> descriptors rather than paths, which will be more secure and fix some
> corner cases surrounding append and local reads.
>
> cheers,
> Colin
>
>
> On Wed, Sep 26, 2012 at 11:19 PM, jlei liu <liulei412@gmail.com> wrote:
> > In local read, the BlockReaderLocal class uses a "static Map<Integer,
> > LocalDatanodeInfo> localDatanodeInfoMap" field to store the local block
> > file path and the local meta file path. When I stop the HDFS cluster, or
> > kill the local DataNode and delete the file with the "./hadoop dfs -rm
> > path" command, the DFSClient can still read the data from the local file.
> > I think that may lead to the DFSClient reading wrong data.
> >
> > I think we should fix this problem.
> >
> >
> > Thanks,
> >
> > LiuLei
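For reference, the caching I was describing is roughly the pattern in this
simplified sketch (class and method names are mine, not the real HDFS code):
once a block id has been resolved to a local path, later reads reuse the
cached path without going back to the DataNode, so a delete or re-create on
disk goes unnoticed until something invalidates the entry.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Simplified stand-in for a client-side cache that remembers which local
    // file a block id was first resolved to.
    class LocalBlockPathCache {
        private final Map<Long, String> blockIdToPath =
            new ConcurrentHashMap<Long, String>();

        // The first lookup asks the DataNode (stubbed out below); every later
        // lookup returns the cached path, even if that file has since been
        // deleted or replaced on disk.
        String resolve(long blockId) {
            String cached = blockIdToPath.get(blockId);
            if (cached != null) {
                return cached; // possibly stale: nothing here revalidates it
            }
            String fresh = askDatanodeForPath(blockId);
            blockIdToPath.put(blockId, fresh);
            return fresh;
        }

        void invalidate(long blockId) {
            blockIdToPath.remove(blockId);
        }

        // Placeholder for the RPC that returns the local block file path.
        private String askDatanodeForPath(long blockId) {
            return "/data/dfs/dn/current/blk_" + blockId;
        }
    }

Unless every code path that removes a block also calls invalidate(), a reader
that has already resolved that block keeps opening the old path, which is the
wrong-data case I was worried about.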