From: kishore alajangi <alajangikishore@gmail.com>
To: user@flume.apache.org
Date: Mon, 16 Jun 2014 11:04:03 +0400
Subject: Re: copy to hdfs

Could anybody help me?

On Mon, Jun 16, 2014 at 10:27 AM, kishore alajangi <alajangikishore@gmail.com> wrote:

> Instead of just mentioning hdfs.path = /flume/messages/, do I need to
> mention something else?
>
> On Mon, Jun 16, 2014 at 10:25 AM, kishore alajangi <alajangikishore@gmail.com> wrote:
>
>> I created the /flume/messages directories, but still nothing is written
>> by Flume into those directories. Please help me.
>>
>> On Mon, Jun 16, 2014 at 10:15 AM, kishore alajangi <alajangikishore@gmail.com> wrote:
>>
>>> Do I need to create the /flume/messages/ directories?
>>>
>>> On Mon, Jun 16, 2014 at 10:14 AM, kishore alajangi <alajangikishore@gmail.com> wrote:
>>>
>>>> Checked; nothing is written in HDFS.
>>>>
>>>> On Mon, Jun 16, 2014 at 10:10 AM, Sharninder <sharninder@gmail.com> wrote:
>>>>
>>>>> That just means the source has done its work and is waiting for more
>>>>> data to read. Did you check HDFS to see if all the data has been written?
>>>>>
>>>>> On Mon, Jun 16, 2014 at 11:34 AM, kishore alajangi <alajangikishore@gmail.com> wrote:
>>>>>
>>>>>> Hi Mohit and Sharninder,
>>>>>>
>>>>>> Thanks for the reply. After I ran it with -n tier1, a "source is not a
>>>>>> directory" error came, so I changed the source to /tmp/ and hdfs.path to
>>>>>> /flume/messages/ in the config file and ran the command again. The INFO I am
>>>>>> getting now is "spooling directory source runner has shutdown".
>>>>>> What could be the problem? Please help me.
>>>>>>
>>>>>> On Sun, Jun 15, 2014 at 10:21 PM, Mohit Durgapal <durgapalmohit@gmail.com> wrote:
>>>>>>
>>>>>>> Replace -n agent with -n tier1
>>>>>>>
>>>>>>> On Sunday, June 15, 2014, kishore alajangi <alajangikishore@gmail.com> wrote:
>>>>>>>
>>>>>>>> Dear Sharninder,
>>>>>>>>
>>>>>>>> Thanks for your reply. Yes, I am playing with Flume. As you suggested,
>>>>>>>> I am using the spool directory source. My configuration file looks like:
>>>>>>>>
>>>>>>>> tier1.sources  = source1
>>>>>>>> tier1.channels = channel1
>>>>>>>> tier1.sinks    = sink1
>>>>>>>>
>>>>>>>> tier1.sources.source1.type     = spooldir
>>>>>>>> tier1.sources.source1.spoolDir = /var/log/messages
>>>>>>>> tier1.sources.source1.channels = channel1
>>>>>>>> tier1.channels.channel1.type   = memory
>>>>>>>>
>>>>>>>> tier1.sinks.sink1.type            = hdfs
>>>>>>>> tier1.sinks.sink1.hdfs.path       = hdfs://localhost:8020/flume/messages
>>>>>>>> tier1.sinks.sink1.hdfs.fileType   = SequenceFile
>>>>>>>> tier1.sinks.sink1.hdfs.filePrefix = data
>>>>>>>> tier1.sinks.sink1.hdfs.fileSuffix = .seq
>>>>>>>>
>>>>>>>> # Roll based on the block size only
>>>>>>>> tier1.sinks.sink1.hdfs.rollCount    = 0
>>>>>>>> tier1.sinks.sink1.hdfs.rollInterval = 0
>>>>>>>> tier1.sinks.sink1.hdfs.rollSize     = 120000000
>>>>>>>> # Seconds to wait before closing the file.
>>>>>>>> tier1.sinks.sink1.hdfs.idleTimeout  = 60
>>>>>>>> tier1.sinks.sink1.channel           = channel1
>>>>>>>>
>>>>>>>> tier1.channels.channel1.capacity = 100000
>>>>>>>> tier1.sources.source1.deserializer.maxLineLength = 32768
>>>>>>>>
>>>>>>>> The command I used is:
>>>>>>>>
>>>>>>>> ./flume-ng agent --conf ./conf/ -f bin/example.conf -Dflume.root.logger=DEBUG,console -n agent
>>>>>>>>
>>>>>>>> The warning it gives after creating the sources, channels, and sinks for the tier1 agent is:
>>>>>>>>
>>>>>>>> no configuration found for this host:agent
>>>>>>>>
>>>>>>>> Any help?
>>>>>>>>
>>>>>>>> On Sun, Jun 15, 2014 at 11:18 AM, Sharninder <sharninder@gmail.com> wrote:
>>>>>>>>
>>>>>>>>>> I want to copy my local data to hdfs using flume in a single
>>>>>>>>>> machine which is running hadoop. How can I do that? Please help me.
>>>>>>>>>
>>>>>>>>> What is this "local data"?
>>>>>>>>>
>>>>>>>>> If it's just files, why not use the hadoop fs copy command
>>>>>>>>> instead? If you want to play around with Flume, take a look at the spool
>>>>>>>>> directory source or the exec source and you should be able to put something
>>>>>>>>> together that'll push data through Flume to Hadoop.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Sharninder
>>>>>>>>
>>>>>>>> --
>>>>>>>> Thanks,
>>>>>>>> Kishore.

--
Thanks,
Kishore.
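
The "no configuration found for this host:agent" warning quoted above comes from the agent
name passed with -n not matching the prefix used in the properties file: the file defines
tier1.*, but the command names the agent "agent". A minimal sketch of the corrected
invocation, assuming the same example.conf used in the thread:

    # The agent name after -n must match the property prefix (tier1.*) in example.conf.
    ./flume-ng agent --conf ./conf/ -f bin/example.conf \
        -Dflume.root.logger=DEBUG,console -n tier1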
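
For anyone reproducing this setup, a hedged sketch of the surrounding steps: the spooling
directory source expects a directory of completed files that are never modified after being
dropped in, so pointing spoolDir at a live log file such as /var/log/messages will not work,
and the HDFS target path should be writable by the user running the agent. The paths below
are illustrative only, not ones confirmed in the thread:

    # Illustrative paths; adjust to your environment.
    mkdir -p /tmp/flume-spool                 # dedicated spool directory for finished files
    cp /var/log/messages.1 /tmp/flume-spool/  # example: copy in a rotated (immutable) log file
    hdfs dfs -mkdir -p /flume/messages        # create the HDFS directory the sink writes into
    # Point tier1.sources.source1.spoolDir at /tmp/flume-spool, restart the agent, then verify:
    hdfs dfs -ls /flume/messages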