flume-user mailing list archives

From "Shara Shi" <shiruih...@dhgate.com>
Subject flume agent start failure in server without hadoop
Date Thu, 21 Jun 2012 07:31:05 GMT
Hi all,


When I execute ./flume-ng agent -name agent1 -f ../conf/flume.conf

on a web server, I get the following output:


+ exec /usr/local/jdk/jdk1.6.0_26/bin/java -Xmx20m -cp
'/tmp/flume-1.2.0-incubating-SNAPSHOT/lib/*' -Djava.library.path=
org.apache.flume.node.Application -name agent1 -f ../conf/flume.conf

log4j:WARN No appenders could be found for logger

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
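(For what it's worth, those log4j warnings only mean that no log4j configuration was found on the classpath; by themselves they do not stop the agent. A minimal log4j.properties sketch that makes startup errors visible on the console; the appender name and log levels here are my own choices, not taken from the Flume distribution:

```properties
# Minimal log4j 1.2 configuration (sketch): log INFO and above
# to the console so agent startup errors become visible.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c: %m%n
```

If this file lives in the directory passed via the --conf option, for example ./flume-ng agent --conf ../conf -f ../conf/flume.conf -name agent1, the launcher script should add that directory to the classpath so log4j picks it up.)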



I checked the flume-ng script and found the functions add_hadoop_paths and add_HBASE_paths.

I think those two functions are probably the reason the agent does not start.
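Those functions, though, appear to be guarded: if I read the script right, they only append the Hadoop/HBase jars when the tools are actually found on the machine, roughly like the sketch below (the variable name and structure are my paraphrase, not the script's actual code):

```shell
# Paraphrased sketch of the add_hadoop_paths idea: extend the classpath
# only when a hadoop binary is actually available on this machine.
FLUME_CLASSPATH="/tmp/flume-1.2.0-incubating-SNAPSHOT/lib/*"

if command -v hadoop >/dev/null 2>&1; then
  # Hadoop is present: append its classpath (needed for HDFS sinks).
  FLUME_CLASSPATH="$FLUME_CLASSPATH:$(hadoop classpath)"
fi

# On a box without Hadoop this prints just the Flume lib directory.
echo "$FLUME_CLASSPATH"
```

If that reading is right, a missing Hadoop should only cost you the HDFS sink, not agent startup itself.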


I am confused: should I install Hadoop on every web server? That does not seem right.

How can I start a Flume agent successfully on a server without Hadoop?

My flume.conf, which does not access HDFS, is listed below.



 # Define a memory channel called ch1 on agent1

agent1.channels.ch1.type = memory


# Define an Avro source called avro-source1 on agent1 and tell it

# to bind to a host and port. Connect it to channel ch1.

agent1.sources.avro-source1.channels = ch1

agent1.sources.avro-source1.type = avro

agent1.sources.avro-source1.bind =

agent1.sources.avro-source1.port = 41414



# Define a tail source (the exec source needs a type and a channel,

# or it will not start)

agent1.sources.tail1.type = exec

agent1.sources.tail1.channels = ch1

agent1.sources.tail1.command = tail -n +0 -F /tmp/test2.log



# Define a logger sink that simply logs all events it receives

# and connect it to the other end of the same channel.

agent1.sinks.log-sink1.channel = ch1

agent1.sinks.log-sink1.type = logger



# Finally, now that we've defined all of our components, tell

# agent1 which ones we want to activate.

agent1.channels = ch1

#agent1.sources = avro-source1

agent1.sinks = log-sink1

agent1.sources = tail1
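One aside on the tail1 command above: tail -n +0 -F replays the whole file from its first line and then keeps following it. The replay half can be illustrated on its own (the file path below is a throwaway of mine; -F is left off because it follows forever):

```shell
# 'tail -n +0' starts output at the beginning of the file, unlike the
# default 'tail', which would print only the last 10 lines.
printf 'line1\nline2\nline3\n' > /tmp/tail_demo.txt
tail -n +0 /tmp/tail_demo.txt
```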




