www-builds mailing list archives

From Lewis John Mcgibbney <lewis.mcgibb...@gmail.com>
Subject Nutch build failures on Solaris slaves
Date Thu, 03 Nov 2011 13:04:57 GMT

We've been experiencing unpredictable nightly failures on the Nutch trunk
builds for a while now. One of our memory-intensive tests shows up as the
common denominator, with the following output:

2011-11-03 04:17:37,478 WARN  mapred.LocalJobRunner
(LocalJobRunner.java:run(256)) - job_local_0001
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
in any of the configured local directories
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
	at org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:50)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:193)
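For reference, the directories that LocalDirAllocator searches come from the
job configuration. A minimal sketch of pointing them at a disk with more
headroom might look like the fragment below — the /export path is purely a
placeholder, not the actual slave layout:

```xml
<!-- Hypothetical hadoop-site.xml override; the path is a placeholder. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- base directory from which many local paths are derived -->
    <value>/export/scratch/hadoop-${user.name}</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <!-- where LocalDirAllocator looks for intermediate map output,
         i.e. the lookup that fails in the trace above -->
    <value>${hadoop.tmp.dir}/mapred/local</value>
  </property>
</configuration>
```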

Between us we've agreed that a DiskChecker$DiskErrorException usually
indicates that the designated tmp dir has run out of disk space. It has
therefore been suggested that we configure the build so it can run on any
one of a number of available slaves, rather than being pinned to a single
machine. Can anyone provide some insight into whether this is a sensible
approach, and secondly how I can go about configuring the builds
accordingly?
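In the meantime, a cheap way to confirm the disk-space theory would be a
pre-build guard that fails fast when the tmp dir is low on space, instead of
letting the tests die mid-run. A rough sketch (the 10 MB floor is just an
illustrative value; a real build would want a much larger threshold):

```shell
#!/bin/sh
# Hypothetical pre-build check: abort early if the scratch directory
# has less than min_kb kilobytes available.
check_tmp_space() {
    dir=$1
    min_kb=$2
    # df -kP prints POSIX-format output in 1K blocks; "avail" is column 4
    avail_kb=$(df -kP "$dir" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt "$min_kb" ]; then
        echo "Not enough free space in $dir (${avail_kb} KB available)" >&2
        return 1
    fi
    echo "OK: ${avail_kb} KB free in $dir"
}

# Illustrative 10 MB floor against the default tmp dir
check_tmp_space "${TMPDIR:-/tmp}" 10240
```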

Thank you

