Date: Tue, 8 Sep 2015 08:37:58 +0530
Subject: Re: Who will Responsible for Handling DFS Write Pipe line Failure
From: miriyala srinivas
To: user@hadoop.apache.org

@Harsh thanks for sharing the links.

On Tue, Sep 8, 2015 at 6:56 AM, Harsh J wrote:
>
> These two-part blog posts from Yongjun should help you understand the HDFS
> file write recovery process better:
> http://blog.cloudera.com/blog/2015/02/understanding-hdfs-recovery-processes-part-1/
> and
> http://blog.cloudera.com/blog/2015/03/understanding-hdfs-recovery-processes-part-2/
>
> On Mon, Sep 7, 2015 at 10:39 AM miriyala srinivas
> wrote:
>
>> Hi All,
>>
>> I have just started learning the fundamentals of HDFS and its internal
>> mechanisms. The concepts are impressive and look simple, but they still
>> confuse me. My question is: *who is responsible for handling a DFS write
>> failure in the pipeline (assume the replication factor is 3 and the 2nd
>> DataNode in the pipeline fails)?* If a DataNode fails during the pipeline
>> write, does the entire pipeline stop, or is a new DataNode added to the
>> existing pipeline? How does this whole mechanism work? I would really
>> appreciate it if someone with good knowledge of HDFS could explain it to me.
>>
>> Note: I have read a bunch of documents, but none seems to explain what I
>> am looking for.
>>
>> thanks
>> srinivas
>
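For anyone skimming the thread: the recovery sequence Srinivas asks about (the client drops the failed DataNode, optionally asks the NameNode for a replacement, and resumes streaming on the rebuilt pipeline) can be sketched roughly like this. This is a simplified illustration, not the actual DFSClient code; the function and parameter names here are hypothetical:

```python
def recover_pipeline(datanodes, failed, available_spares,
                     replace_on_failure=True):
    """Rebuild a write pipeline after one DataNode fails.

    Recovery is driven by the writing client, not by the NameNode:
    the client notices the failure via a broken ack/stream, rebuilds
    the pipeline, and resumes from the last acknowledged byte.
    """
    # 1. Drop the failed DataNode from the pipeline.
    survivors = [dn for dn in datanodes if dn != failed]

    # 2. Optionally add a replacement node. In real HDFS this step is
    #    governed by the dfs.client.block.write.replace-datanode-on-failure.*
    #    settings in hdfs-site.xml.
    if replace_on_failure and available_spares:
        survivors.append(available_spares.pop(0))

    # 3. The write continues on the surviving pipeline; if the block ends
    #    up under-replicated, the NameNode schedules re-replication later
    #    in the background.
    return survivors


# Replication factor 3, and the 2nd DataNode fails mid-write:
pipeline = ["dn1", "dn2", "dn3"]
new_pipeline = recover_pipeline(pipeline, failed="dn2",
                                available_spares=["dn4"])
print(new_pipeline)  # ['dn1', 'dn3', 'dn4']
```

So the pipeline does not stop outright: it is rebuilt around the failure and the write carries on, which is exactly the process the two Cloudera posts linked above walk through in detail.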