Subject: Re: Local file system to access hdfs blocks
From: Stanley Shi <sshi@pivotal.io>
To: user@hadoop.apache.org
Date: Wed, 27 Aug 2014 12:14:15 +0800

I am not sure this is what you want, but you can try this shell command:

    find [DATANODE_DIR] -name [blockname]
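
To make that a little more concrete, here is a minimal sketch. The data directory /data/dfs/dn and the block ID blk_1073741825 below are made-up placeholders; the real directory is whatever dfs.datanode.data.dir points to in hdfs-site.xml on the datanode:

    # On any node with the Hadoop client: list the block IDs of the file
    # and the datanodes holding each replica.
    hadoop fsck /tmp/test.txt -files -blocks -locations -racks

    # Then, logged in on one of the reported datanodes (e.g. hdfs01),
    # search its data directories for the block file and its .meta
    # checksum file.
    find /data/dfs/dn -name 'blk_1073741825*'

The block file found this way is an ordinary local file holding the raw bytes of that one block (the replica stored on that datanode), so it can be read with regular tools such as cat or dd; it is only that single block, though, not the whole HDFS file.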

On Tue, Aug 26, 2014 at 6:42 AM, Demai Ni <nidmgg@gmail.com> wrote:

> Hi, folks,
>
> I am new to this area and hoping to get a couple of pointers.
>
> I am using CentOS and have Hadoop set up using CDH 5.1 (Hadoop 2.3).
>
> I am wondering whether there is an interface to get each HDFS block's
> information in terms of the local file system.
>
> For example, I can use "hadoop fsck /tmp/test.txt -files -blocks -racks"
> to get the block ID and its replicas on the nodes, such as: repl=3
> [/rack/hdfs01, /rack/hdfs02...]
>
> With such info, is there a way to
> 1) log in to hdfs01 and read the block directly at the local file system
> level?
>
> Thanks
>
> Demai on the run

--
Regards,

Stanley Shi