From: Arya Goudarzi <goudarzi@gmail.com>
To: user@cassandra.apache.org
Date: Mon, 27 May 2013 22:04:45 -0700
Subject: Re: Cassandra + Hadoop - 2 Task attempts with million of rows

We haven't tried using Pig. However, we had a problem where our MapReduce job blew up for a subset of the data. It turned out we had a bug in our code that generated a row as large as 3 GB. That row caused long GC pauses and GC thrashing, and the Hadoop job would of course time out. Our range batch size is 32 and we have wide rows enabled.

Your scenario seems similar. Try using nodetool cfstats to see whether the column family involved in your job reports a very large maximum row size. Also inspect your C* logs for long GC pause lines from GCInspector, and check heap usage trends if your monitoring tools record them.

On Thu, Apr 25, 2013 at 7:03 PM, aaron morton <aaron@thelastpickle.com> wrote:

>> 2013-04-23 16:09:17,838 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader: Current split being processed ColumnFamilySplit((9197470410121435301, '-1] @[p00nosql02.00, p00nosql01.00])
>>
>> Why is it splitting data from two nodes? We have a 6-node Cassandra cluster plus Hadoop slaves; every task should get a local input split from the local Cassandra node. Am I right?
>
> My understanding is that it may get it locally, but it's not something that has to happen.
> One of the Hadoop guys will have a better idea.
>
> Try reducing cassandra.range.batch.size and/or, if you are using wide rows, enable cassandra.input.widerows.
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 25/04/2013, at 7:55 PM, Shamim <srecon@yandex.ru> wrote:
>
> Hello Aaron,
> I have got the following log from the server (sorry for being late):
>
> job_201304231203_0004
> attempt_201304231203_0004_m_000501_0
>
> 2013-04-23 16:09:14,196 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
> 2013-04-23 16:09:14,438 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/jars/pigContext <- /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/attempt_201304231203_0004_m_000501_0/work/pigContext
> 2013-04-23 16:09:14,453 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/jars/dk <- /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/attempt_201304231203_0004_m_000501_0/work/dk
> 2013-04-23 16:09:14,456 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/jars/META-INF <- /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/attempt_201304231203_0004_m_000501_0/work/META-INF
> 2013-04-23 16:09:14,459 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/jars/org <- /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/attempt_201304231203_0004_m_000501_0/work/org
> 2013-04-23 16:09:14,469 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/jars/com <- /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/attempt_201304231203_0004_m_000501_0/work/com
> 2013-04-23 16:09:14,471 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/jars/.job.jar.crc <- /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/attempt_201304231203_0004_m_000501_0/work/.job.jar.crc
> 2013-04-23 16:09:14,474 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/jars/job.jar <- /egov/data/hadoop/mapred/local/taskTracker/cassandra/jobcache/job_201304231203_0004/attempt_201304231203_0004_m_000501_0/work/job.jar
> 2013-04-23 16:09:17,329 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
> 2013-04-23 16:09:17,387 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@256ef705
> 2013-04-23 16:09:17,838 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader: Current split being processed ColumnFamilySplit((9197470410121435301, '-1] @[p00nosql02.00, p00nosql01.00])
> 2013-04-23 16:09:18,088 INFO org.apache.pig.data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
> 2013-04-23 16:09:19,784 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map: Aliases being processed per job phase (AliasName[line,offset]): M: data[12,7],null[-1,-1],filtered[14,11],null[-1,-1],c1[23,5],null[-1,-1],updated[111,10] C: R:
> 2013-04-23 17:35:11,199 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> 2013-04-23 17:35:11,384 INFO org.apache.hadoop.io.nativeio.NativeIO: Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
> 2013-04-23 17:35:11,385 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName cassandra for UID 500 from the native implementation
> 2013-04-23 17:35:11,417 WARN org.apache.hadoop.mapred.Child: Error running child
> java.lang.RuntimeException: TimedOutException()
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:384)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:390)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:313)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getProgress(ColumnFamilyRecordReader.java:103)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.getProgress(PigRecordReader.java:169)
>         at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:514)
>         at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:539)
>         at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: TimedOutException()
>         at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>         at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
>         at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:346)
>         ... 17 more
> 2013-04-23 17:35:11,427 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
>
> These two tasks hung for a long time and crashed with a timeout exception. The most interesting part is this:
>
> 2013-04-23 16:09:17,838 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader: Current split being processed ColumnFamilySplit((9197470410121435301, '-1] @[p00nosql02.00, p00nosql01.00])
>
> Why is it splitting data from two nodes? We have a 6-node Cassandra cluster plus Hadoop slaves; every task should get a local input split from the local Cassandra node. Am I right?
>
> --
> Best regards
> Shamim A.
>
> 24.04.2013, 10:59, "Shamim" <srecon@yandex.ru>:
>
> Hello Aaron,
> We have built our new cluster from scratch with version 1.2 and the Murmur3 partitioner. We are not using vnodes at all.
> Actually the log is clean and nothing serious; we are still investigating the logs and will post soon if we find anything suspicious.
>
>>>> Our cluster is evenly partitioned (Murmur3Partitioner)
>>
>> Murmur3Partitioner is only available in 1.2 and changing partitioners is not supported. Did you change from RandomPartitioner under 1.1?
>>
>> Are you using virtual nodes in your 1.2 cluster?
>>
>>>> We have roughly 97 million rows in our cluster. Why are we getting the above behavior? Do you have any suggestion or clue to troubleshoot this issue?
>>
>> Can you make some of the logs from the tasks available?
>>
>> Cheers
>>
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Consultant
>> New Zealand
>>
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 23/04/2013, at 5:50 AM, Shamim wrote:
>>
>>> We are using Hadoop 1.0.3 and Pig 0.11.1.
>>>
>>> --
>>> Best regards
>>> Shamim A.
>>>
>>> 22.04.2013, 21:48, "Shamim":
>>>
>>>> Hello all,
>>>> Recently we upgraded our cluster (6 nodes) from Cassandra version 1.1.6 to 1.2.1. Our cluster is evenly partitioned (Murmur3Partitioner). We are using Pig to parse and compute aggregate data.
>>>>
>>>> When we submit a job through Pig, what I consistently see is that, while most of the tasks have 20-25k rows assigned each (Map input records), only 2 of them (always 2) get more than 2 million rows. These 2 tasks always reach 100% and then hang for a long time. Also, most of the time we get killed tasks (2%) with a TimeoutException.
>>>>
>>>> We increased rpc_timeout to 60000 and also set cassandra.input.split.size=1024, but nothing helped.
>>>>
>>>> We have roughly 97 million rows in our cluster. Why are we getting the above behavior? Do you have any suggestion or clue to troubleshoot this issue? Any help would be highly appreciated. Thanks in advance.
>>>>
>>>> --
>>>> Best regards
>>>> Shamim A.
>
> --
> Best regards
> Shamim A.
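For reference, the tuning knobs discussed in this thread (cassandra.range.batch.size, cassandra.input.widerows, cassandra.input.split.size) can be passed to a Pig job as Java system properties. This is only a sketch: the property names come from the thread, but the values and the script name are made-up examples, and whether -D values reach the Hadoop job configuration depends on how your Pig/Hadoop installation is set up.

```shell
# Hypothetical Pig invocation: -D properties must come before other options.
# Values below are illustrative, not recommendations; aggregate_job.pig is a
# placeholder name for your own script.
pig -Dcassandra.range.batch.size=16 \
    -Dcassandra.input.widerows=true \
    -Dcassandra.input.split.size=4096 \
    aggregate_job.pig
```

A smaller range batch size makes each get_range_slices call cheaper, which can help when a handful of rows are pathologically wide, at the cost of more round trips.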
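Following up on the diagnostic advice in the thread (look for long GC pauses from GCInspector): a minimal sketch of scanning a Cassandra system log. The log lines below are a fabricated sample in an assumed format, written to a temp file purely so the grep has something to match; point the grep at your real system.log instead.

```shell
# Create an assumed-format sample of a Cassandra system.log for illustration,
# then pull out the GCInspector lines, which report stop-the-world pause times.
cat > /tmp/sample_system.log <<'EOF'
 INFO [ScheduledTasks:1] 2013-04-23 16:12:01,123 GCInspector.java (line 122) GC for ConcurrentMarkSweep: 15337 ms for 2 collections
 INFO [FlushWriter:2] 2013-04-23 16:12:05,456 Memtable.java (line 461) Writing Memtable-cf@12345
 INFO [ScheduledTasks:1] 2013-04-23 16:13:44,789 GCInspector.java (line 122) GC for ConcurrentMarkSweep: 21002 ms for 1 collections
EOF
grep 'GCInspector' /tmp/sample_system.log
```

Multi-second ConcurrentMarkSweep pauses like these line up with the TimedOutException in the task logs above: the node stalls long enough for the Hadoop record reader's get_range_slices call to exceed rpc_timeout.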