hadoop-common-dev mailing list archives

From "Oliver B. Fischer" <o.b.fisc...@swe-blog.net>
Subject Re: Why "java.util.zip.ZipOutputStream" need to use /tmp?
Date Thu, 03 Sep 2009 14:29:18 GMT
Hello Steve,

I assume that java.io.FileOutputStream uses /tmp as its temporary
directory. As you can see, the error occurs in a native method. As far
as I know, /tmp is the standard temporary directory on UNIX systems and
is used automatically by many native library calls. Maybe you can set
$TMPDIR (http://en.wikipedia.org/wiki/TMPDIR) to another directory?
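For the JVM itself, the corresponding knob is the java.io.tmpdir system property (defaulting to /tmp on most UNIX systems), which Hadoop's hadoop.tmp.dir setting does not change. A minimal sketch of checking and exercising it; the class name TmpDirCheck and the "streamjob" prefix are illustrative, not taken from the Hadoop source:

```java
import java.io.File;
import java.io.IOException;

// Sketch (not from the thread): the JVM's default scratch location is
// the java.io.tmpdir system property. It can be overridden per JVM,
// e.g. java -Djava.io.tmpdir=/mnt/bigdisk ...
public class TmpDirCheck {
    public static void main(String[] args) throws IOException {
        System.out.println("java.io.tmpdir = " + System.getProperty("java.io.tmpdir"));

        // A temp file created without an explicit directory lands there.
        File scratch = File.createTempFile("streamjob", ".jar");
        System.out.println("scratch file: " + scratch.getAbsolutePath());
        scratch.delete();
    }
}
```

If the jar really is being packaged via such a call, pointing java.io.tmpdir (or $TMPDIR for native code) at a larger disk should move the writes off /tmp.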

Best regards,

Oliver

Steve Gao wrote:
> 
> The Hadoop version is 0.18.3. Recently we hit an "out of space" issue coming from "java.util.zip.ZipOutputStream".
> We found that /tmp was full, and after cleaning /tmp the problem was solved.
> 
> However, why does Hadoop need to use /tmp? We had already configured the Hadoop temp directory to point to a local disk in hadoop-site.xml:
> 
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value> ... some large local disk ... </value>
> </property>
> 
> 
> Could it be because java.util.zip.ZipOutputStream uses /tmp even though we configured hadoop.tmp.dir
> to point to a large local disk?
> 
> The error log is here FYI:
> 
> java.io.IOException: No space left on device
>     at java.io.FileOutputStream.write(Native Method)
>     at java.util.zip.ZipOutputStream.writeInt(ZipOutputStream.java:445)
>     at java.util.zip.ZipOutputStream.writeEXT(ZipOutputStream.java:362)
>     at java.util.zip.ZipOutputStream.closeEntry(ZipOutputStream.java:220)
>     at java.util.zip.ZipOutputStream.finish(ZipOutputStream.java:301)
>     at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:146)
>     at java.util.zip.ZipOutputStream.close(ZipOutputStream.java:321)
>     at org.apache.hadoop.streaming.JarBuilder.merge(JarBuilder.java:79)
>     at org.apache.hadoop.streaming.StreamJob.packageJobJar(StreamJob.java:628)
>     at org.apache.hadoop.streaming.StreamJob.setJobConf(StreamJob.java:843)
>     at org.apache.hadoop.streaming.StreamJob.go(StreamJob.java:110)
>     at org.apache.hadoop.streaming.HadoopStreaming.main(HadoopStreaming.java:33)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>     at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>     at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)
> Executing Hadoop job failure


-- 
Oliver B. Fischer, Schönhauser Allee 64, 10437 Berlin
Tel. +49 30 44793251, Mobil: +49 178 7903538
Mail: o.b.fischer@swe-blog.net Blog: http://www.swe-blog.net

