hbase-issues mailing list archives

From "Lars Hofhansl (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-3782) Multi-Family support for bulk upload tools causes File Not Found Exception
Date Thu, 13 Sep 2012 23:34:07 GMT

    [ https://issues.apache.org/jira/browse/HBASE-3782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455436#comment-13455436 ]

Lars Hofhansl commented on HBASE-3782:
--------------------------------------

This is no longer an issue, it seems; HFileOutputFormat now writes its metadata via HFile.Writer.appendFileInfo.
Still... hard to verify.
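
For context, a minimal sketch of the kind of close path the comment refers to, assuming the HFile.Writer.appendFileInfo API and the StoreFile metadata key constants (BULKLOAD_TIME_KEY, MAJOR_COMPACTION_KEY) present in the HBase codebase; this is illustrative, not a verbatim copy of HFileOutputFormat:

    import java.io.IOException;
    import org.apache.hadoop.hbase.io.hfile.HFile;
    import org.apache.hadoop.hbase.regionserver.StoreFile;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: on close, metadata is appended directly onto the HFile writer,
    // so no per-attempt output directory has to pre-exist (method name assumed).
    class WriterCloseSketch {
      static void closeWriter(HFile.Writer w) throws IOException {
        if (w == null) return;
        w.appendFileInfo(StoreFile.BULKLOAD_TIME_KEY,
            Bytes.toBytes(System.currentTimeMillis()));
        w.appendFileInfo(StoreFile.MAJOR_COMPACTION_KEY,
            Bytes.toBytes(false));
        w.close();
      }
    }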
                
> Multi-Family support for bulk upload tools causes File Not Found Exception
> --------------------------------------------------------------------------
>
>                 Key: HBASE-3782
>                 URL: https://issues.apache.org/jira/browse/HBASE-3782
>             Project: HBase
>          Issue Type: Bug
>          Components: mapreduce
>    Affects Versions: 0.90.3
>            Reporter: Nichole Treadway
>         Attachments: HBASE-3782.patch
>
>
> I've been testing HBASE-1861 in 0.90.2, which adds multi-family support for bulk upload tools.
> I found that when running the importtsv program, some reduce tasks fail with a File Not Found exception if there are no keys in the input data which fall into the region assigned to that reduce task. From what I can determine, it seems that an output directory is created in the write() method and expected to exist in the writeMetaData() method... if there are no keys to be written for that reduce task, the write method is never called and the output directory is never created, but writeMetaData is expecting the output directory to exist... thus the FNF exception:
> 2011-03-17 11:52:48,095 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
> java.io.FileNotFoundException: File does not exist: hdfs://master:9000/awardsData/_temporary/_attempt_201103151859_0066_r_000000_0
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:468)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile.getUniqueFile(StoreFile.java:580)
> 	at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.writeMetaData(HFileOutputFormat.java:186)
> 	at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.close(HFileOutputFormat.java:247)
> 	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:567)
> 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
> 	at org.apache.hadoop.mapred.Child.main(Child.java:170)
> Simply checking if the file exists should fix the issue. 
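
As a rough illustration of the existence check proposed above: a hedged sketch of the guard HFileOutputFormat's close/writeMetaData path could apply before finalizing metadata. The helper name and the fs/outputdir parameters are assumptions for illustration, not the attached patch:

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch of the proposed guard (illustrative names): if the reducer never
    // called write(), its per-attempt output directory was never created, so
    // skip metadata finalization instead of failing with FileNotFoundException.
    class EmptyReducerGuardSketch {
      static boolean shouldWriteMetaData(FileSystem fs, Path outputdir) throws IOException {
        return fs.exists(outputdir);
      }
    }

Usage would then be along the lines of wrapping the existing metadata step, e.g. calling writeMetaData (and StoreFile.getUniqueFile) only when shouldWriteMetaData(fs, outputdir) returns true.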

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
