hadoop-common-user mailing list archives

From Kim Vogt <...@simplegeo.com>
Subject Re: Problem with DistributedCache after upgrading to CDH3b2
Date Tue, 05 Oct 2010 21:30:07 GMT
I'm experiencing the same problem.  I was hoping there would be a reply to
this.  Anyone? Bueller?


On Fri, Jul 16, 2010 at 1:58 AM, Jamie Cockrill <jamie.cockrill@gmail.com> wrote:

> Dear All,
> We recently upgraded from CDH3b1 to b2 and ever since, all our
> mapreduce jobs that use the DistributedCache have failed. Typically,
> we add files to the cache prior to job startup, using
> addCacheFile(URI, conf) and then get them on the other side, using
> getLocalCacheFiles(conf). I believe the hadoop-core versions for these
> are 0.20.2+228 and +320 respectively.
> We then open the files and read them in using a standard FileReader,
> using the toString on the path object as the constructor parameter,
> which has worked fine up to now. However, we're now getting
> FileNotFound exceptions when the file reader tries to open the file.
> Unfortunately the cluster is on an airgapped network, but the
> FileNotFound line comes out like:
> java.io.FileNotFoundException:
> /tmp/hadoop-hadoop/mapred/local/taskTracker/archive/master/path/to/my/file/filename.txt/filename.txt
> Note, the duplication of filename.txt is deliberate. I'm not sure if
> that's strange or not as this has previously worked absolutely fine.
> Has anyone else experienced this? Apologies if this is known, I've
> only just joined the list.
> Many thanks,
> Jamie
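
For anyone hitting the same thing, the pattern Jamie describes looks roughly like the sketch below against the 0.20-era `org.apache.hadoop.filecache.DistributedCache` API. This is a reconstruction, not his actual code: the class name, method names, and file paths here are placeholders, and it assumes a configured Hadoop cluster to run against.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

public class CacheExample {

    // Before job submission: register an HDFS file with the distributed cache.
    public static void setup(Configuration conf) throws Exception {
        // Placeholder path, standing in for the real file on HDFS.
        DistributedCache.addCacheFile(new URI("/path/to/my/file/filename.txt"), conf);
    }

    // Inside the task (e.g. in the mapper's configure/setup): read the localized copy.
    public static void readCached(Configuration conf) throws Exception {
        Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
        if (localFiles == null) {
            return;
        }
        for (Path p : localFiles) {
            // Opening the path via toString() with a plain FileReader is the
            // step where the reported FileNotFoundException surfaces after
            // the CDH3b2 upgrade.
            BufferedReader reader = new BufferedReader(new FileReader(p.toString()));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            } finally {
                reader.close();
            }
        }
    }
}
```

The doubled `filename.txt/filename.txt` in the stack trace would be consistent with the localized path now pointing at a directory named after the cached file, with the file itself inside it; if that is what changed between +228 and +320, appending the file name to the returned path before opening it might work around it, but without access to an affected cluster that is speculation.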
