flume-user mailing list archives

From Jeff Lord <jl...@cloudera.com>
Subject Re: How to handle ChannelFullException
Date Thu, 29 Jan 2015 15:56:36 GMT
Have you considered increasing the size of the memory channel? I haven't
played with the Kafka sink much, but with HDFS we often add additional
sinks, which helps increase the drain rate of the channel.
The multiport syslog source is the way to go here, as it gives better
performance. We should probably go ahead and deprecate the vanilla syslog
source.
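
For example, a minimal sketch of that tuning (the agent and component
names are made up, and the Kafka sink properties assume the 1.6-era sink,
so adjust for your version):

  # Hypothetical agent "a1"; values are illustrative, not a drop-in config.
  a1.sources = syslog
  a1.channels = mem
  a1.sinks = kafka1 kafka2

  # Multiport syslog source in place of the vanilla syslogtcp source
  a1.sources.syslog.type = multiport_syslogtcp
  a1.sources.syslog.ports = 5140
  a1.sources.syslog.host = 0.0.0.0
  a1.sources.syslog.channels = mem

  # Larger memory channel to absorb bursts
  a1.channels.mem.type = memory
  a1.channels.mem.capacity = 100000
  a1.channels.mem.transactionCapacity = 1000

  # Two Kafka sinks draining the same channel to raise throughput
  a1.sinks.kafka1.type = org.apache.flume.sink.kafka.KafkaSink
  a1.sinks.kafka1.brokerList = localhost:9092
  a1.sinks.kafka1.topic = syslog
  a1.sinks.kafka1.channel = mem

  a1.sinks.kafka2.type = org.apache.flume.sink.kafka.KafkaSink
  a1.sinks.kafka2.brokerList = localhost:9092
  a1.sinks.kafka2.topic = syslog
  a1.sinks.kafka2.channel = mem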

On Thursday, January 29, 2015, Sverre Bakke <sverre.bakke@gmail.com> wrote:

> Hi,
>
> I have a syslogtcp source using a default memory channel and a Kafka
> sink. When producing data as fast as possible (3,000 syslog events per
> second), the source seems to accept all the data, but crashes with a
> ChannelFullException when adding events to the channel.
>
> Is there any way to throttle or otherwise stop accepting more syslog
> events until the channel has room again, rather than crashing because
> the channel is full? I would prefer that Flume accept syslog events
> more slowly rather than crash and drop events.
>
> 29 Jan 2015 16:26:56,721 ERROR [New I/O worker #2]
> (org.apache.flume.source.SyslogTcpSource$syslogTcpHandler.messageReceived:94)
> - Error writting to channel, event dropped
>
> Also, the syslogtcp source seems to keep the syslog headers regardless
> of the keepFields setting; is there any common reason why this might
> happen? In contrast, the multiport syslog TCP source works as expected
> with this particular setting.
>
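
(For reference, on the keepFields question: a minimal snippet for the
multiport source, which you report honors the setting. Names here are
illustrative:)

  # Explicitly disable keepFields on the multiport syslog source
  a1.sources.syslog.type = multiport_syslogtcp
  a1.sources.syslog.ports = 5140
  a1.sources.syslog.keepFields = false
  a1.sources.syslog.channels = mem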
