From: jagaran das
Subject: Fw: HDFS File Appending URGENT
To: common-user@hadoop.apache.org
Date: Fri, 17 Jun 2011 23:45:04 +0530 (IST)

Please help me with this. I need it very urgently.

Regards,
Jagaran


----- Forwarded Message ----
From: jagaran das
To: common-user@hadoop.apache.org
Sent: Thu, 16 June, 2011 9:51:51 PM
Subject: Re: HDFS File Appending URGENT

Thanks a lot, Xiaobo.

I have tried the code below on HDFS version 0.20.20 and it worked.
Is it not stable yet?

import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopFileWriter {
    public static void main(String[] args) throws Exception {
        try {
            URI uri = new URI("hdfs://localhost:9000/Users/jagarandas/Work-Assignment/Analytics/analytics-poc/hadoop-0.20.203.0/data/test.dat");
            Path pt = new Path(uri);
            // Pass the URI so we talk to HDFS rather than the local default filesystem.
            FileSystem fs = FileSystem.get(uri, new Configuration());
            BufferedWriter br;
            if (fs.isFile(pt)) {
                // File already exists: open it for append and start a new line.
                br = new BufferedWriter(new OutputStreamWriter(fs.append(pt)));
                br.newLine();
            } else {
                // File does not exist yet: create it (overwrite if present).
                br = new BufferedWriter(new OutputStreamWriter(fs.create(pt, true)));
            }
            String line = args[0];
            System.out.println(line);
            br.write(line);
            br.close();
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("File not found");
        }
    }
}

Thanks a lot for your help.

Regards,
Jagaran
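For context on the stability question: on the 0.20.x line, append is
disabled by default and was never considered production-safe; the
reworked append implementation (HDFS-265, if I recall the JIRA
correctly) was targeted at 0.21 and later. Below is a minimal sketch
of turning the 0.20.x-era flag on from the client side; the path is
illustrative, and the cluster's hdfs-site.xml would need the same
dfs.support.append setting for append to be accepted at all.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendIfEnabled {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 0.20.x-era switch; the NameNode and DataNodes must carry the
        // same setting, or append() is rejected server-side.
        conf.setBoolean("dfs.support.append", true);

        URI uri = new URI("hdfs://localhost:9000/data/test.dat"); // illustrative path
        FileSystem fs = FileSystem.get(uri, conf);
        Path pt = new Path(uri);

        // Append when the file exists, otherwise create it.
        FSDataOutputStream out = fs.isFile(pt) ? fs.append(pt) : fs.create(pt, true);
        out.write(args[0].getBytes("UTF-8"));
        out.close();
    }
}

Even with the flag on, the 0.20 append path had known correctness
bugs, which is why merging files was the usual recommendation on that
branch.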
________________________________
From: Xiaobo Gu
To: common-user@hadoop.apache.org
Sent: Thu, 16 June, 2011 8:01:14 PM
Subject: Re: HDFS File Appending URGENT

You can merge multiple files into a new one; there is no way to
append to an existing file.

On Fri, Jun 17, 2011 at 10:29 AM, jagaran das wrote:
> Is this the Hadoop 0.20.203.0 API?
>
> Does that mean files in HDFS version 0.20.20 are still immutable,
> and that there is no way to append to an existing file in HDFS?
>
> We need to know urgently, as we have to set up the pipeline
> accordingly in production.
>
> Regards,
> Jagaran
>
>
> ________________________________
> From: Xiaobo Gu
> To: common-user@hadoop.apache.org
> Sent: Thu, 16 June, 2011 6:26:45 PM
> Subject: Re: HDFS File Appending
>
> Please refer to FileUtil.copyMerge (a sketch follows at the end of
> this thread).
>
> On Fri, Jun 17, 2011 at 8:33 AM, jagaran das wrote:
>> Hi,
>>
>> We have a requirement where a huge number of small files will be
>> pushed to HDFS and then analysed with Pig.
>> To get around the classic "small files issue" we merge the files
>> and push one bigger file into HDFS, but we are losing time in this
>> merging stage of our pipeline.
>>
>> If we could append directly to an existing file in HDFS, we could save
>> this "merging files" time.
>>
>> Can you please suggest whether there is a newer stable version of
>> Hadoop we can move to for appending?
>>
>> Thanks and Regards,
>> Jagaran
>
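To make the FileUtil.copyMerge suggestion concrete, here is a minimal
sketch against the 0.20.x-era API (both paths are illustrative, and
the configuration loaded by FileSystem.get is assumed to point at the
cluster; the last argument is a string written after each source file):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class SmallFileMerger {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path srcDir = new Path("/incoming/small-files"); // illustrative source directory
        Path dstFile = new Path("/merged/big-file.dat"); // illustrative merged output

        // Concatenates every file under srcDir into dstFile.
        // deleteSource=false keeps the originals; "\n" is written
        // after each source file so records stay line-separated.
        boolean merged = FileUtil.copyMerge(fs, srcDir, fs, dstFile, false, conf, "\n");
        System.out.println(merged ? "merged" : "nothing to merge");
    }
}

Note that copyMerge runs on the client: it re-reads and rewrites every
small file, so it replaces the local pre-merge step with an HDFS-side
copy rather than eliminating the merge cost entirely.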