Subject: Re: 2.4 v of Hadoop causes IncompatibleClassChangeError
From: Lewis John Mcgibbney <lewis.mcgibbney@gmail.com>
To: user@avro.apache.org
Date: Sun, 11 May 2014 18:52:29 -0700
In-Reply-To: <555D0745-B6F8-4826-95F8-1A252F680AF9@gmail.com>

My guess is that this is on the Avro side. We've seen similar traces with Nutch.
This looks like a JIRA ticket.
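For what it's worth, my reading of that trace: in the Hadoop 1.x mapreduce API org.apache.hadoop.mapreduce.TaskAttemptContext is a concrete class, while in Hadoop 2.x it became an interface, so an avro-mapred jar compiled against one flavor fails at link time against the other. A quick, hypothetical diagnostic (the class name below is mine, not part of Avro or Hadoop) to see which flavor is actually on your runtime classpath:

    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    // Hypothetical diagnostic: reports whether TaskAttemptContext on the
    // runtime classpath is the Hadoop 1.x class or the Hadoop 2.x interface,
    // and which jar it was loaded from.
    public class TaskAttemptContextCheck {
      public static void main(String[] args) {
        Class<?> c = TaskAttemptContext.class;
        System.out.println(c.getName() + " is "
            + (c.isInterface() ? "an interface (Hadoop 2.x style)" : "a class (Hadoop 1.x style)"));
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println("loaded from: " + (src != null ? src.getLocation() : "<unknown>"));
      }
    }

If that reports an interface while your avro-mapred build expects a class, switching to the avro-mapred artifact built for Hadoop 2 (the hadoop2 classifier in the 1.7.x line, if I remember the artifact layout correctly) is usually what resolves it.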
On May 11, 2014 4:53 PM, "Deepak" <deepujain@gmail.com> wrote:
>
> On 07-May-2014, at 7:35 am, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepujain@gmail.com> wrote:
>
> Exception:
>
> java.lang.Exception: java.lang.IncompatibleClassChangeError: Found
> interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was
> expected
>     at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.IncompatibleClassChangeError: Found interface
> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>     at org.apache.avro.mapreduce.AvroRecordReaderBase.initialize(AvroRecordReaderBase.java:86)
>     at com.tracking.sdk.pig.load.format.AggregateRecordReader.initialize(AggregateRecordReader.java:41)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initialize(PigRecordReader.java:192)
>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:525)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
>
> Imports used in my record reader class:
>
> import org.apache.avro.Schema;
> import org.apache.avro.mapreduce.AvroKeyValueRecordReader;
> import org.apache.hadoop.mapreduce.InputSplit;
> import org.apache.hadoop.mapreduce.TaskAttemptContext;
>
> Any suggestions? Or does this require a fix from Avro?
>
> Regards,
> Deepak
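Reading the trace, the frame that blows up is the super call from your custom reader into AvroRecordReaderBase.initialize(). A minimal sketch of that shape, assuming your AggregateRecordReader extends the AvroKeyValueRecordReader you import (the key/value types and constructor arguments below are placeholders; your real com.tracking.sdk.pig.load.format.AggregateRecordReader is not shown in this thread):

    import java.io.IOException;

    import org.apache.avro.Schema;
    import org.apache.avro.mapreduce.AvroKeyValueRecordReader;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    // Sketch only: generic types and constructor are illustrative.
    public class AggregateRecordReader extends AvroKeyValueRecordReader<CharSequence, Long> {

      public AggregateRecordReader(Schema keyReaderSchema, Schema valueReaderSchema) {
        super(keyReaderSchema, valueReaderSchema);
      }

      @Override
      public void initialize(InputSplit split, TaskAttemptContext context)
          throws IOException, InterruptedException {
        // The super call ends up in AvroRecordReaderBase.initialize(), which
        // reads the job Configuration off the TaskAttemptContext. If avro-mapred
        // was compiled against the Hadoop 1 class and Hadoop 2's interface is on
        // the classpath, the JVM throws IncompatibleClassChangeError right here.
        super.initialize(split, context);
      }
    }

So nothing in your imports looks wrong as such; the record reader code itself should not need to change, only the avro-mapred binary has to match the Hadoop generation it runs against.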