From: Gabriel Reid
Date: Tue, 07 Jul 2015 16:45:32 +0000
Subject: Re: Error while trying to obtain the top elements.
To: user@crunch.apache.org

Hi Florin,

Thanks for the very detailed report. That appears to be a bug, brought on by the way that ObjectInputStream works with classloaders, together with how Hadoop manipulates classloaders.

I've logged a JIRA ticket [1] for this. For now, like David, I would recommend using Avros instead of Writables, as that should get around this issue without any other consequences.

- Gabriel

1. https://issues.apache.org/jira/browse/CRUNCH-539
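To illustrate the classloader interaction Gabriel describes (a hedged sketch of the general failure mode, not the actual Crunch patch): ObjectInputStream.resolveClass defaults to the "latest user-defined" classloader on the call stack, which under Hadoop's classloader manipulation is not necessarily the loader that can see job classes such as TupleWritable. A stream that consults the thread context classloader first avoids that:

import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

class ContextClassLoaderObjectInputStream extends ObjectInputStream {
    ContextClassLoaderObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        try {
            // Prefer the classloader Hadoop installs on the task thread.
            return Class.forName(desc.getName(), false,
                    Thread.currentThread().getContextClassLoader());
        } catch (ClassNotFoundException e) {
            // Fall back to ObjectInputStream's default resolution.
            return super.resolveClass(desc);
        }
    }
}

And a minimal sketch of the recommended workaround, assuming the Avro type factories in org.apache.crunch.types.avro.Avros mirror the Writables ones used in the code below; only the changed lines of MaxTemperatureCrunch are shown:

import static org.apache.crunch.types.avro.Avros.ints;
import static org.apache.crunch.types.avro.Avros.strings;
import static org.apache.crunch.types.avro.Avros.tableOf;

// With Avro-backed PTypes the shuffle should no longer serialize through
// TupleWritable, so its setConf (where the error below originates) is
// never reached. The rest of the pipeline is unchanged.
PTable<String, Integer> yearTemperatures = records
        .parallelDo(toYearTempPairsFn(), tableOf(strings(), ints()));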
On Tue, Jul 7, 2015 at 3:27 PM David Ortiz <dpo5003@gmail.com> wrote:

> That looks weird. Can you try it using Avros in place of Writables and
> see if it does the same thing?
>
> On Tue, Jul 7, 2015, 3:43 AM Florin Tatu <tatuflorin@gmail.com> wrote:
>
>> Hi,
>>
>> I have a job that processes a set of files containing climatic data
>> (more exactly, data from this location:
>> ftp://ftp.ncdc.noaa.gov/pub/data/noaa/).
>>
>> I downloaded and merged the data using a script, so I have one folder
>> (ncdc_data) containing a .gz archive for each year (e.g. 1901.gz,
>> 1902.gz, etc.). Each archive contains only one text file.
>>
>> My code is:
>>
>> import com.google.common.base.Charsets;
>> import com.google.common.io.Files;
>> import org.apache.crunch.*;
>> import org.apache.crunch.fn.Aggregators;
>> import org.apache.crunch.impl.mr.MRPipeline;
>> import org.apache.crunch.io.To;
>> import java.io.File;
>> import static org.apache.crunch.types.writable.Writables.ints;
>> import static org.apache.crunch.types.writable.Writables.strings;
>> import static org.apache.crunch.types.writable.Writables.tableOf;
>>
>> public class MaxTemperatureCrunch {
>>
>>     public static void main(String[] args) throws Exception {
>>         if (args.length != 2) {
>>             System.err.println("Usage: MaxTemperatureCrunch <input path> <output path>");
>>             System.exit(-1);
>>         }
>>
>>         Pipeline pipeline = new MRPipeline(MaxTemperatureCrunch.class);
>>
>>         PCollection<String> records = pipeline.readTextFile(args[0]);
>>
>>         PTable<String, Integer> yearTemperatures = records
>>                 .parallelDo(toYearTempPairsFn(), tableOf(strings(), ints()));
>>
>>         PTable<String, Integer> maxTemps = yearTemperatures
>>                 .groupByKey()
>>                 .combineValues(Aggregators.MAX_INTS())
>>                 .top(1);   // LINE THAT CAUSES THE ERROR
>>
>>         maxTemps.write(To.textFile(args[1]));
>>
>>         PipelineResult result = pipeline.done();
>>         String dot = pipeline.getConfiguration().get("crunch.planner.dotfile");
>>         Files.write(dot, new File("pipeline.dot"), Charsets.UTF_8);
>>         Runtime.getRuntime().exec("dot -Tpng -O pipeline.dot");
>>         System.exit(result.succeeded() ? 0 : 1);
>>     }
>>
>>     static DoFn<String, Pair<String, Integer>> toYearTempPairsFn() {
>>         return new DoFn<String, Pair<String, Integer>>() {
>>             NcdcRecordParser parser = new NcdcRecordParser();
>>             @Override
>>             public void process(String input, Emitter<Pair<String, Integer>> emitter) {
>>                 parser.parse(input);
>>                 if (parser.isValidTemperature()) {
>>                     emitter.emit(Pair.of(parser.getYear(), parser.getAirTemperature()));
>>                 }
>>             }
>>         };
>>     }
>> }
>>
>> Hadoop runs locally in standalone mode.
>> Hadoop version is: 2.7.0
>> Crunch version is: 0.12.0 (maven dependency: 0.12.0-hadoop2)
>>
>> I build my application with: mvn package
>> I run it with: hadoop jar target/crunch-demo-1.0-SNAPSHOT-job.jar ncdc_data/ output
>>
>> If I do not call .top(1) (see the comment: // LINE THAT CAUSES THE ERROR),
>> everything works fine, but then I obtain only the maximum temperature for
>> each year, whereas I want the overall maximum temperature, or the top N
>> temperatures, for the whole data set.
>>
>> If I call .top(1) I obtain the following error:
>>
>> java.lang.Exception: org.apache.crunch.CrunchRuntimeException: Error reloading writable comparable codes
>>     at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
>> Caused by: org.apache.crunch.CrunchRuntimeException: Error reloading writable comparable codes
>>     at org.apache.crunch.types.writable.TupleWritable.setConf(TupleWritable.java:71)
>>     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
>>     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
>>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:66)
>>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
>>     at org.apache.hadoop.io.SequenceFile$Reader.deserializeValue(SequenceFile.java:2247)
>>     at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2220)
>>     at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
>>     at org.apache.crunch.impl.mr.run.CrunchRecordReader.nextKeyValue(CrunchRecordReader.java:157)
>>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
>>     at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>>     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>>     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>     at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.ClassNotFoundException: org.apache.crunch.types.writable.TupleWritable
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>     at java.lang.Class.forName0(Native Method)
>>     at java.lang.Class.forName(Class.java:274)
>>     at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:625)
>>     at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1612)
>>     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
>>     at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1483)
>>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1333)
>>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>>     at com.google.common.collect.Serialization.populateMap(Serialization.java:91)
>>     at com.google.common.collect.HashBiMap.readObject(HashBiMap.java:109)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
>>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
>>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>>     at org.apache.crunch.types.writable.Writables.reloadWritableComparableCodes(Writables.java:145)
>>     at org.apache.crunch.types.writable.TupleWritable.setConf(TupleWritable.java:69)
>>     ... 20 more
>>
>> Did anyone encounter this issue?
>> If you need any other details, please ask.
>>
>> Thank you,
>> Florin
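On Florin's stated goal: if only the single overall maximum is wanted (rather than the top N), it can be computed without top() at all. A minimal sketch, assuming org.apache.crunch.lib.Aggregate.max and PObject.getValue behave as in the Crunch 0.12 API; maxTemps here is the per-year PTable<String, Integer> from the code above, built without the .top(1) call:

import org.apache.crunch.PCollection;
import org.apache.crunch.PObject;
import org.apache.crunch.lib.Aggregate;

// Reduce the per-year maxima to one overall value.
PCollection<Integer> perYearMaxima = maxTemps.values();
PObject<Integer> overallMax = Aggregate.max(perYearMaxima);
// getValue() materializes the PObject, running the pipeline if it has
// not been run yet.
System.out.println("Overall maximum temperature: " + overallMax.getValue());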