From: Juhani Connolly <juhani_connolly@cyberagent.co.jp>
Date: Thu, 17 Jan 2013 11:06:08 +0900
To: user@flume.apache.org
Subject: Re: OutOfMemory
How big are your events? 10000 capacity doesn't seem like it should run into any issues, but since it is all in memory, it's possible your channel is eating up all your memory.
Note: channel capacity is the number of events, not their physical size.
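For example, a memory channel stanza like this (agent and channel names are made up for illustration) holds up to 10000 events whether each event is 100 bytes or 1 MB; newer Flume releases also accept a byteCapacity setting if your version supports it and you want to bound the physical size instead:

```
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 500
# Optional, newer releases only: cap the heap the channel may use
# agent1.channels.ch1.byteCapacity = 100000000
```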

You can verify what is going on by setting up Ganglia, or by using something like jconsole to pull counter data via JMX: the metric you want is channelFillPercentage.
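For instance (port numbers here are arbitrary, pick your own), you can expose JMX through JAVA_OPTS in flume-env.sh so jconsole can attach, or use the built-in JSON reporting if your Flume build has it:

```
# flume-env.sh: let jconsole attach remotely (no auth -- debugging only)
export JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=5445 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

# Alternatively, newer builds can report the same counters over HTTP:
# flume-ng agent -n agent1 -c conf -f flume.conf \
#     -Dflume.monitoring.type=http -Dflume.monitoring.port=34545
# then poll http://localhost:34545/metrics and watch the channel fill percentage
```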

On 01/17/2013 07:15 AM, Mohit Anchlia wrote:
channel transactionCapacity is 500 and I've not set any batchSize parameter.

On Wed, Jan 16, 2013 at 1:49 PM, Bhaskar V. Karambelkar <bhaskarvk@gmail.com> wrote:
What is the channel transaction capacity and HDFS batch size?


On Wed, Jan 16, 2013 at 1:52 PM, Mohit Anchlia <mohitanchlia@gmail.com> wrote:
I often get out of memory errors even when there is no load on the system. I am wondering what's the best way to debug this. I have the heap size set to 2G and the memory channel capacity is 10000.
 
13/01/16 09:09:38 ERROR hdfs.HDFSEventSink: process failed
java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:2786)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.io.Text.write(Text.java:282)
	... 11 lines omitted ...
	at java.lang.Thread.run(Thread.java:662)

Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:2786)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
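A back-of-envelope check against the numbers quoted above (the class name and the division are mine, the 2G heap and 10000 capacity are from the mail): the channel alone can only exhaust the heap if events average roughly 200 KB each, so with small events the memory is likely going somewhere else.

```java
// Rough estimate: how big would each event have to be for a
// 10000-event memory channel to exhaust a 2 GB heap by itself?
public class ChannelHeapEstimate {
    public static void main(String[] args) {
        long heapBytes = 2L * 1024 * 1024 * 1024; // -Xmx2g
        long capacity = 10_000;                   // channel capacity (events)
        long bytesPerEvent = heapBytes / capacity;
        // ~210 KB per event before the channel alone fills the heap
        System.out.println(bytesPerEvent + " bytes per event");
    }
}
```

If events are nowhere near that size, running the agent with -XX:+HeapDumpOnOutOfMemoryError, or taking a live histogram with jmap -histo <pid>, will show which objects are actually filling the heap.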