flume-user mailing list archives

From "Cochran, David M (Contractor)" <David.Coch...@bsee.gov>
Subject RE: Errors
Date Fri, 12 Oct 2012 11:46:59 GMT
Mike,

 

Thanks for the very helpful and in-depth explanation!  That will be useful in the future
as I try to move this forward.

 

-Dave

 

From: Mike Percy [mailto:mpercy@apache.org] 
Sent: Thursday, October 11, 2012 6:09 PM
To: user@flume.apache.org
Subject: Re: Errors

 

You should consider how your system will act if there is a downstream failure. Even a capacity
of 500 is extremely (orders of magnitude) too small in my opinion.

 

Consider setting a channel capacity equal to (average events per second ingested * number of
seconds of downtime you want to tolerate). So if you are ingesting 1,000 events/sec and you
want to tolerate 1 hour of downtime without dropping events, you would want a channel capacity
of 1000 * (60 * 60) = 3,600,000. Don't forget that the channel is a buffer intended to smooth
out the latencies inherent in a complex network of storage systems. Even HDFS and HBase have
latency hiccups sometimes, so try to avoid running close to your buffer capacity.
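Mike's sizing rule above can be sketched as a memory channel configuration (the agent and
channel names "agent1" and "ch1" are placeholders, not from the thread):

```properties
# capacity = events/sec * seconds of tolerated downtime
# Here: 1000 events/sec * 3600 s = 3,600,000
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 3600000
agent1.channels.ch1.transactionCapacity = 1000
```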

 

Regards,

Mike

 

On Thu, Oct 11, 2012 at 11:37 AM, Harish Mandala <mvharish14988@gmail.com> wrote:

I've noticed that, in general, capacity = 100*transactionCapacity (or 10*transactionCapacity)
works well for me.
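Expressed as the same kind of memory channel settings (agent and channel names here are again
hypothetical), Harish's rule of thumb would look like:

```properties
# Rule of thumb from this thread: capacity = 100 * transactionCapacity
agent1.channels.ch1.type = memory
agent1.channels.ch1.transactionCapacity = 1000
agent1.channels.ch1.capacity = 100000
```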
 

	Regards,
	Harish 

	 

	On Thu, Oct 11, 2012 at 2:34 PM, Cochran, David M (Contractor) <David.Cochran@bsee.gov>
wrote:

	Trying that now...  set to 500 for each channel...  we'll see how it
	goes.
	
	For some reason 'channel capacity' didn't connect with the error msg in
	my mind.  The part about the sinks not being able to keep up led me in
	another direction... maybe I wasn't holding my head just right :)
	
	Thanks for the quick response!
	-Dave

	
	
	-----Original Message-----
	From: Brock Noland [mailto:brock@cloudera.com]
	Sent: Thursday, October 11, 2012 1:14 PM
	To: user@flume.apache.org
	Subject: Re: Errors
	
	Basically the channel is filling up. Have you increased the capacity of
	the channel?
	
	On Thu, Oct 11, 2012 at 1:08 PM, Cochran, David M (Contractor)
	<David.Cochran@bsee.gov> wrote:
	>
	> This error insists on making an appearance at least daily on my test
	> systems.
	>
	> Unable to put batch on required channel:
	> org.apache.flume.channel.MemoryChannel@555c07d8
	> Caused by: org.apache.flume.ChannelException: Space for commit to
	> queue couldn't be acquired Sinks are likely not keeping up with
	> sources, or the buffer size is too tight
	>
	> Changing batch-size and batchSize from the defaults to values in the
	> hundreds or 1000 doesn't seem to help.
	>
	> Increased JAVA_OPTS="-Xms256m -Xmx512m"; still no change.
	>
	> This shows up intermittently, but daily. The logs being tailed are
	> not very big and are not growing quickly; actually very slowly in
	> the grand scheme of things.
	>
	> Am I missing something to help balance things out here?
	>
	> Thanks
	> Dave
	
	
	
	--
	Apache MRUnit - Unit testing MapReduce -
	http://incubator.apache.org/mrunit/

	 

 
