From: Harsh J
Date: Fri, 13 Jul 2012 11:44:15 +0530
Subject: Re: suggest Best way to upload xml files to HDFS
To: mapreduce-user@hadoop.apache.org

If you're looking at automated file/record/event collection, take a look at
Apache Flume: http://incubator.apache.org/flume/. It handles distributed
collection well and is very configurable.

Otherwise, write a scheduled script to do the uploads every X period (your
choice). Also consider https://github.com/edwardcapriolo/filecrush or similar
tools if your files are very small and are getting in the way of MR
processing.

On Fri, Jul 13, 2012 at 8:59 AM, Manoj Babu wrote:
> Hi,
>
> I need to upload large XML files daily. Right now I have a small program
> that reads all the files from a local folder and writes them to HDFS as a
> single file. Is this the right way?
> If there are any best practices or a more optimized way to achieve this,
> kindly let me know.
>
> Thanks in advance!
>
> Cheers!
> Manoj.

--
Harsh J
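The "scheduled script" approach mentioned above could look roughly like the
sketch below: merge the day's local XML files into one file (so HDFS holds one
large file rather than many small ones), then push it with the standard
`hadoop fs -put` CLI. The function names, the inbox-directory layout, and the
merge-then-upload split are all illustrative assumptions, not anything from the
thread; the `hadoop fs -put` invocation requires a configured Hadoop client on
the PATH.

```python
import glob
import os
import subprocess


def merge_xml_files(src_dir, out_path):
    """Concatenate every .xml file in src_dir into a single local file.

    Merging locally first avoids creating many small files in HDFS,
    which hurts MapReduce performance (illustrative approach).
    """
    paths = sorted(glob.glob(os.path.join(src_dir, "*.xml")))
    with open(out_path, "wb") as out:
        for p in paths:
            with open(p, "rb") as f:
                out.write(f.read())
    return len(paths)  # number of files merged


def upload_to_hdfs(local_path, hdfs_dir):
    """Push the merged file to HDFS with the standard Hadoop CLI.

    Assumes a working `hadoop` client on the PATH; raises
    CalledProcessError if the upload fails.
    """
    subprocess.check_call(["hadoop", "fs", "-put", local_path, hdfs_dir])
```

A cron entry (e.g. a daily `merge_xml_files(...)` followed by
`upload_to_hdfs(...)`) would then cover the "every X period" scheduling; note
that naively concatenating XML documents does not yield one well-formed XML
document, so downstream MR jobs would still need to split on record boundaries.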