flume-user mailing list archives

From: Sandeep Reddy P <sandeepreddy.3...@gmail.com>
Subject: Re: Flume not writing data to hdfs
Date: Fri, 17 Aug 2012 15:41:35 GMT
Hi,
Here is my flume.conf:

agent1.sources = tail
agent1.channels = Channel-2
agent1.sinks = HDFS

agent1.sources.tail.type = exec
agent1.sources.tail.command = tail -F /usr/lib/hadoop-0.20/logs/hadoop-hadoop-namenode-hadoop1.log
agent1.sources.tail.channels = Channel-2

agent1.sinks.HDFS.channel = Channel-2
agent1.sinks.HDFS.type = hdfs
agent1.sinks.HDFS.hdfs.path = hdfs://hadoop1.xxxxxx.local:8020/user/hdfs/flume/
agent1.sinks.HDFS.hdfs.fileType = DataStream

agent1.channels.Channel-2.type = memory
agent1.channels.Channel-2.capacity = 1000
agent1.channels.Channel-2.transactionCapacity = 100
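
The agent can then be started with something like the stock flume-ng
launcher invocation below (the conf directory path is illustrative):

  $ flume-ng agent --conf /etc/flume-ng/conf --conf-file flume.conf \
      --name agent1 -Dflume.root.logger=INFO,console
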
On Mon, Aug 13, 2012 at 2:32 PM, Alexander Lorenz <wget.null@gmail.com> wrote:

> Yes, that was a wrong copy/paste; I meant:
>
> 2012-08-13 11:22:49,061 ERROR org.apache.thrift.server.TSaneThreadPoolServer: Error occurred during listening.
> org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:35862.
>
> Run netstat and check whether another process is already bound to that
> port; usually the physical nodes bind there. You can change the
> flume.node.status.port property in flume-conf.xml to another port.
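>
> For example (the new port value 35863 is only illustrative):
>
>   $ netstat -tlnp | grep 35862
>
>   <property>
>     <name>flume.node.status.port</name>
>     <value>35863</value>
>   </property>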
>
> Let me point you to Flume 1.x, since pre-1.0 Flume will only receive
> security fixes:
> http://flume.apache.org/
>
> https://cwiki.apache.org/confluence/display/FLUME/Articles%2C+Blog+Posts%2C+HOWTOs
>
> cheers
> - Alex
>
>
> On Aug 13, 2012, at 11:18 AM, Sandeep Reddy P <sandeepreddy.3647@gmail.com> wrote:
>
> > Hadoop is working fine:
> > Total files:   19125 (Files currently being written: 2)
> > Total blocks (validated):      19740 (avg. block size 2303408 B) (Total open file blocks (not validated): 2)
> > Minimally replicated blocks:   19740 (100.0 %)
> > Over-replicated blocks:        0 (0.0 %)
> > Under-replicated blocks:       0 (0.0 %)
> > Mis-replicated blocks:         0 (0.0 %)
> > Default replication factor:    3
> > Average block replication:     3.0
> > Corrupt blocks:                0
> > Missing replicas:              0 (0.0 %)
> > Number of data-nodes:          4
> > Number of racks:               1
> > FSCK ended at Mon Aug 13 14:18:26 EDT 2012 in 498 milliseconds
> >
> >
> > The filesystem under path '/' is HEALTHY
> >
> >
> > On Mon, Aug 13, 2012 at 2:13 PM, Alexander Lorenz <wget.null@gmail.com> wrote:
> >
> >> Hi,
> >>
> >> looks like your HDFS is in an error state:
> >> Exiting driver logicalNode hadoop1.liaisondevqa.local-21 in error state
> >>
> >> Please check that your cluster is running well.
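> >>
> >> For a quick sanity check from the stock Hadoop CLI, e.g.:
> >>
> >>   $ hadoop fsck /
> >>   $ hadoop dfsadmin -report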
> >>
> >> cheers,
> >> Alex
> >>
> >>
> >> On Aug 13, 2012, at 10:53 AM, Sandeep Reddy P <sandeepreddy.3647@gmail.com> wrote:
> >>
> >>> Hi,
> >>> No, I followed the Apache Flume docs. I'll try following the Cloudera
> >>> Flume guide. Here are my error logs:
> >>> 2012-08-13 11:22:49,059 INFO com.cloudera.flume.handlers.thrift.ThriftEventSource: Closed server on port 35862...
> >>> 2012-08-13 11:22:49,059 INFO com.cloudera.flume.handlers.thrift.ThriftEventSource: Queue still has 0 elements ...
> >>> 2012-08-13 11:22:49,059 INFO com.cloudera.flume.handlers.rolling.RollSink: closing RollSink 'escapedCustomDfs("hdfs://hadoop1.liaisondevqa.local/user/flume/","syslog%{rolltag}" )'
> >>> 2012-08-13 11:22:49,059 ERROR com.cloudera.flume.core.connector.DirectDriver: Exiting driver logicalNode hadoop1.liaisondevqa.local-21 in error state CollectorSource | Collector because Waiting for queue element was interrupted! null
> >>> 2012-08-13 11:22:49,060 INFO com.cloudera.flume.handlers.thrift.ThriftEventSource: Starting blocking thread pool server on port 35862...
> >>> 2012-08-13 11:22:49,061 ERROR org.apache.thrift.server.TSaneThreadPoolServer: Error occurred during listening.
> >>> org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:35862.
> >>>       at org.apache.thrift.transport.TSaneServerSocket.bind(TSaneServerSocket.java:110)
> >>>       at org.apache.thrift.transport.TSaneServerSocket.listen(TSaneServerSocket.java:116)
> >>>       at org.apache.thrift.server.TSaneThreadPoolServer.start(TSaneThreadPoolServer.java:162)
> >>>       at com.cloudera.flume.handlers.thrift.ThriftEventSource.open(ThriftEventSource.java:151)
> >>>       at com.cloudera.flume.collector.CollectorSource.open(CollectorSource.java:67)
> >>>       at com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:87)
> >>> 2012-08-13 11:22:49,061 INFO com.cloudera.flume.handlers.rolling.RollSink: opening RollSink 'escapedCustomDfs("hdfs://hadoop1.liaisondevqa.local/user/hdfs/","syslog%{rolltag}" )'
> >>> 2012-08-13 11:22:49,062 INFO com.cloudera.flume.handlers.debug.InsistentOpenDecorator: Opened MaskDecorator on try 0
> >>> 2012-08-13 11:30:10,056 INFO com.cloudera.flume.handlers.rolling.RollSink: Created RollSink: trigger=[TimeTrigger: maxAge=30000 tagger=com.cloudera.flume.handlers.rolling.ProcessTagger@10cb42cf] checkPeriodMs = 250 spec='escapedCustomDfs("hdfs://hadoop1.liaisondevqa.local/user/hdfs/","raw=%{rolltag}" )'
> >>> 2012-08-13 11:30:20,058 WARN com.cloudera.flume.agent.LivenessManager: Heartbeats are backing up, currently behind by 1 heartbeats
> >>> 2012-08-13 11:30:25,061 WARN com.cloudera.flume.agent.LivenessManager: Heartbeats are backing up, currently behind by 2 heartbeats
> >>> 2012-08-13 11:30:30,063 WARN com.cloudera.flume.agent.LivenessManager: Heartbeats are backing up, currently behind by 3 heartbeats
> >>> 2012-08-13 11:30:35,065 WARN com.cloudera.flume.agent.LivenessManager: Heartbeats are backing up, currently behind by 4 heartbeats
> >>> 2012-08-13 11:30:40,056 ERROR com.cloudera.flume.agent.LogicalNode: Forcing driver to exit uncleanly
> >>> 2012-08-13 11:30:40,057 ERROR com.cloudera.flume.core.connector.DirectDriver: Closing down due to exception during append calls
> >>> 2012-08-13 11:30:40,057 INFO com.cloudera.flume.core.connector.DirectDriver: Connector logicalNode hadoop1.liaisondevqa.local-23 exited with error: Waiting for queue element was interrupted! null
> >>> java.io.IOException: Waiting for queue element was interrupted! null
> >>>       at com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:222)
> >>>       at com.cloudera.flume.collector.CollectorSource.next(CollectorSource.java:72)
> >>>       at com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:108)
> >>> Caused by: java.lang.InterruptedException
> >>>       at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1961)
> >>>       at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2038)
> >>>       at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
> >>>       at com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:209)
> >>>       ... 2 more
> >>>
> >>>
> >>> On Mon, Aug 13, 2012 at 12:03 PM, alo alt <wget.null@gmail.com> wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> Did you follow the user guide archived here?
> >>>> http://archive.cloudera.com/cdh/3/flume-0.9.1+1/UserGuide.html
> >>>>
> >>>> Without error messages or logfiles it's hard to say what's really
> >>>> happening.
> >>>>
> >>>> - Alex
> >>>>
> >>>>
> >>>> On Aug 13, 2012, at 8:12 AM, Sandeep Reddy P <sandeepreddy.3647@gmail.com> wrote:
> >>>>
> >>>>> Hi,
> >>>>> I'm using Flume version 0.9.4-cdh3u4.
> >>>>> I'm using the Flume master web page to configure the following to move
> >>>>> data into HDFS.
> >>>>>
> >>>>> host : syslogTcp(5140) | agentSink("hadoop1.liaisondevqa.local",35862) ;
> >>>>> hadoop1.liaisondevqa.local : collectorSource(35862) | collectorSink("hdfs://hadoop1.liaisondevqa.local/user/flume/","syslog");
> >>>>>
> >>>>> The command history says it's successful, but I can't see any data in HDFS.
> >>>>>
> >>>>> Similarly, how should I configure Flume to move a log file from a Linux
> >>>>> box to HDFS? I'm following the Apache Flume cookbook.
> >>>>> --
> >>>>> Thanks,
> >>>>> sandeep
> >>>>
> >>>>
> >>>> --
> >>>> Alexander Alten-Lorenz
> >>>> http://mapredit.blogspot.com
> >>>> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
> >>>>
> >>>>
> >>>
> >>>
> >>> --
> >>> Thanks,
> >>> sandeep
> >>
> >>
> >> --
> >> Alexander Alten-Lorenz
> >> http://mapredit.blogspot.com
> >> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
> >>
> >>
> >
> >
> > --
> > Thanks,
> > sandeep
>
>
> --
> Alexander Alten-Lorenz
> http://mapredit.blogspot.com
> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
>
>


-- 
Thanks,
sandeep
