Subject: Re: How to use the HDFS just like the common file system?
From: Harsh J
Date: Mon, 9 May 2011 11:15:31 +0530
Reply-To: hdfs-user@hadoop.apache.org
In-Reply-To: <2fd8ee45.631d.12fd2db5cc6.Coremail.ltomuno@163.com>
To: hdfs-user@hadoop.apache.org

Tom White's book "Hadoop: The Definitive Guide" has a neat little section about this in Chapter 3. Look at the FsUrlStreamHandlerFactory class (http://hadoop.apache.org/common/docs/r0.20.2/api/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.html).

Once it is registered with Java's java.net.URL, you should be able to use most regular Java stream classes, as long as you provide "hdfs" as the scheme (and the FileSystem configuration knows which implementation to use for it).

2011/5/9 ltomuno:
> using java
> new File("/tmp/common")
> but
> /tmp/common is a HDFS file
> how to implement this feature?
> thanks

--
Harsh J
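For reference, a minimal sketch of that registration (along the lines of the URLCat example in Chapter 3 of the book). The namenode host/port and the file path are placeholders, and it assumes the Hadoop jars and a configured fs.default.name are on the classpath:

```java
import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class UrlCat {
    static {
        // Register Hadoop's handler factory so java.net.URL understands
        // the "hdfs" scheme. The JVM allows this call only once per process,
        // so it is done in a static initializer.
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            // Placeholder namenode address; adjust to your cluster.
            in = new URL("hdfs://namenode:8020/tmp/common").openStream();
            // Copy the HDFS file's contents to stdout, 4 KB at a time.
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
```

Note that java.io.File itself will never work for HDFS paths (it only talks to the local filesystem); the URL/stream route above, or the FileSystem API directly, is the way to treat HDFS files like ordinary input streams.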