Message-ID: <29938912.post@talk.nabble.com>
Date: Mon, 11 Oct 2010 16:27:01 -0700 (PDT)
From: adamphelps
To: core-user@hadoop.apache.org
Reply-To: common-user@hadoop.apache.org
Subject: Finding replicants of an HDFS file
Is there a command that will display which nodes the blocks of a file are replicated to? We're prototyping a Hadoop cluster and want to perform some failure testing, where we kill the right combination of nodes to make a file inaccessible. However, I haven't been able to track down a command that will do this.

Thanks

--
View this message in context: http://old.nabble.com/Finding-replicants-of-an-HDFS-file-tp29938912p29938912.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
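(For readers with the same question: HDFS's `fsck` tool can report block locations. A minimal sketch, assuming a running cluster and a file at the hypothetical path `/user/test/data.txt`; the exact output format varies by Hadoop version.)

```shell
# Ask the NameNode to list every block of the file and the
# datanodes holding each replica. -files prints per-file status,
# -blocks prints the block IDs, -locations prints the datanode
# addresses for each block.
hadoop fsck /user/test/data.txt -files -blocks -locations

# Each block line ends with a list like:
#   [10.0.0.11:50010, 10.0.0.14:50010, 10.0.0.17:50010]
# Killing all datanodes in one block's list should make that
# block (and so the file) unreadable -- useful for the failure
# test described above.
```

With the default replication factor of 3, a file becomes inaccessible only when all replicas of at least one of its blocks are lost, so the nodes to kill are exactly one block's location list.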