Subject: Re: Dose hdfs support the configuration that different blocks can have different number of replcias?
From: Andrew Wang <andrew.wang@cloudera.com>
Date: Wed, 4 Mar 2015 13:50:53 -0800
To: "hdfs-dev@hadoop.apache.org"
Reply-To: hdfs-dev@hadoop.apache.org
Lipeng,

-setrep allows you to change the replication of an existing file. You can
also specify the replication factor when you initially create a file. I'm
not sure what you mean by "dynamically"; to me, that means calling setrep.
There is replication or invalidation work done as part of running -setrep.
This is done as a low-priority operation, unless the file is already in a
bad replication state (e.g. under-replicated).

Best,
Andrew

On Wed, Mar 4, 2015 at 12:18 PM, Lipeng Wan wrote:

> Hi Andrew,
>
> By using the -setrep command, can we change the replication factor of
> existing files? Or, can we change the replication factor of files
> dynamically? If that's possible, how much data movement overhead will
> occur?
> Thanks!
>
> Lipeng
>
> On Tue, Mar 3, 2015 at 2:57 PM, Andrew Wang wrote:
>
> > Yup, definitely. Check out the -setrep command:
> >
> > http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#setrep
> >
> > HTH,
> > Andrew
> >
> > On Tue, Mar 3, 2015 at 11:49 AM, Lipeng Wan wrote:
> >
> >> Hi Andrew,
> >>
> >> Thanks for your reply!
> >> Then is it possible for us to specify different replication factors
> >> for different files?
> >>
> >> Lipeng
> >>
> >> On Tue, Mar 3, 2015 at 2:38 PM, Andrew Wang wrote:
> >>
> >> > Hi Lipeng,
> >> >
> >> > Right now that is unsupported; replication is set on a per-file
> >> > basis, not per-block.
> >> >
> >> > Andrew
> >> >
> >> > On Tue, Mar 3, 2015 at 11:23 AM, Lipeng Wan wrote:
> >> >
> >> >> Hi devs,
> >> >>
> >> >> By default, hdfs creates the same number of replicas for each
> >> >> block. Is it possible for us to create more replicas for some of
> >> >> the blocks?
> >> >> Thanks!
> >> >>
> >> >> L. W.
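[Editor's note] A minimal sketch of the usage discussed above. The paths
and the 1 GiB file size are hypothetical; the setrep syntax follows the
FileSystemShell documentation linked in the thread, and the overhead
estimate simply assumes each extra replica is a full copy of the file:

```shell
# Hypothetical paths; these two commands need a live HDFS cluster, so they
# are shown as comments only (syntax per the FileSystemShell docs):
#   hdfs dfs -setrep -w 5 /user/alice/data.csv        # change an existing file
#   hdfs dfs -D dfs.replication=5 -put data.csv /user/alice/   # set at create time
#
# Rough data-movement cost of raising replication r1 -> r2 on an f-MB file:
# the cluster writes (r2 - r1) * f MB of new replica data, done as
# low-priority background work per Andrew's note above.
r1=3
r2=5
size_mb=1024   # a hypothetical 1 GiB file
moved=$(( (r2 - r1) * size_mb ))
echo "extra replica data to write: ${moved} MB"
```

So going from 3x to 5x replication on a 1 GiB file writes roughly 2 GiB
of additional replica data across the cluster.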