hadoop-hdfs-user mailing list archives

From Siddharth Tiwari <siddharth.tiw...@live.com>
Subject Flume not moving data help !!!
Date Thu, 31 Oct 2013 18:40:25 GMT
Hi team, I created a Flume source and sink as follows on my Hadoop YARN cluster, but data is not
being transferred from source to sink. In HDFS no file is created at all, and on the local
filesystem, every time I start the agent it creates one empty file. Below are my configs for the source and sink.

Source agent config:
agent.sources = logger1
agent.sources.logger1.type = exec
agent.sources.logger1.command = tail -f /var/log/messages
agent.sources.logger1.batchsSize = 0
agent.sources.logger1.channels = memoryChannel
agent.channels = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.sinks = AvroSink
agent.sinks.AvroSink.type = avro
agent.sinks.AvroSink.channel = memoryChannel
agent.sinks.AvroSink.hostname = 192.168.147.101
agent.sinks.AvroSink.port = 4545
agent.sources.logger1.interceptors = itime ihost
agent.sources.logger1.interceptors.itime.type = TimestampInterceptor
agent.sources.logger1.interceptors.ihost.type = host
agent.sources.logger1.interceptors.ihost.useIP = false
agent.sources.logger1.interceptors.ihost.hostHeader = host
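
For reference, an agent with this config would be started with the standard flume-ng command
along these lines (source-agent.conf is just a placeholder for the actual file name; --name
must match the "agent." prefix used in the properties above):

flume-ng agent --conf conf --conf-file source-agent.conf --name agent -Dflume.root.logger=INFO,console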

Collector (sink) config at one of the slaves (a datanode on my YARN cluster):
collector.sources = AvroIn
collector.sources.AvroIn.type = avro
collector.sources.AvroIn.bind = 0.0.0.0
collector.sources.AvroIn.port = 4545
collector.sources.AvroIn.channels = mc1 mc2
collector.channels = mc1 mc2
collector.channels.mc1.type = memory
collector.channels.mc1.capacity = 100
collector.channels.mc2.type = memory
collector.channels.mc2.capacity = 100
collector.sinks = LocalOut HadoopOut
collector.sinks.LocalOut.type = file_roll
collector.sinks.LocalOut.sink.directory = /home/hadoop/flume
collector.sinks.LocalOut.sink.rollInterval = 0
collector.sinks.LocalOut.channel = mc1
collector.sinks.HadoopOut.type = hdfs
collector.sinks.HadoopOut.channel = mc2
collector.sinks.HadoopOut.hdfs.path = /flume
collector.sinks.HadoopOut.hdfs.fileType = DataStream
collector.sinks.HadoopOut.hdfs.writeFormat = Text
collector.sinks.HadoopOut.hdfs.rollSize = 0
collector.sinks.HadoopOut.hdfs.rollCount = 10000
collector.sinks.HadoopOut.hdfs.rollInterval = 600
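
The collector agent would be started the same way (collector-agent.conf again being a
placeholder; --name must match the "collector." prefix above):

flume-ng agent --conf conf --conf-file collector-agent.conf --name collector -Dflume.root.logger=INFO,console

The HDFS side can then be checked with:

hdfs dfs -ls /flume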

Can somebody point out what I am doing wrong?
This is what I get in my local directory:
[hadoop@node1 flume]$ ls -lrt
total 0
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:25 1383243942803-1
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:28 1383244097923-1
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:31 1383244302225-1
-rw-rw-r-- 1 hadoop hadoop 0 Oct 31 11:33 1383244404929-1

When I restart the collector, it creates one 0-byte file.
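
A minimal connectivity check between the two agents, using the host and port from the
configs above, would be something like:

# on the collector node: confirm the Avro source is listening
netstat -an | grep 4545
# from the source node: confirm the collector is reachable
telnet 192.168.147.101 4545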
Please help 
*------------------------*

Cheers !!!

Siddharth Tiwari

Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God.” 

"Maybe other people will try to limit me but I don't limit myself"