hadoop-mapreduce-issues mailing list archives

From "sunil ranjan khuntia (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-5735) MultipleOutputs of hadoop not working properly with s3 filesystem
Date Fri, 07 Feb 2014 03:22:19 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894160#comment-13894160 ]

sunil ranjan khuntia commented on MAPREDUCE-5735:

As you suggested, I have tried it on Hadoop 2.2.0, and it's still the same. I am not getting the

> MultipleOutputs of hadoop not working properly with s3 filesystem
> -----------------------------------------------------------------
>                 Key: MAPREDUCE-5735
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5735
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: sunil ranjan khuntia
>            Priority: Minor
> I have written a mapreduce job and used the MultipleOutputs (org.apache.hadoop.mapreduce.lib.output.MultipleOutputs)
> class to put the resultant file in a specific user-defined directory path (instead of getting
> the o/p file part-r-00000, I want to have dir1/dir2/dir3/d-r-00000). This works fine for HDFS.
> But when I run the same mapreduce job with the s3 file system, the user-defined directory structure
> is not created in s3. Is it that MultipleOutputs is not supported in S3? If so, is there an alternate
> way by which I can customize my mapreduce o/p file directory path in s3?
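For reference, the usage described above is typically set up as follows. This is a minimal sketch, not the reporter's actual code; the class name, key/value types, and summing logic are assumed for illustration. Only the baseOutputPath argument "dir1/dir2/dir3/d" comes from the description.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Hypothetical reducer illustrating MultipleOutputs with a nested
// baseOutputPath, as described in the issue.
public class CustomPathReducer
        extends Reducer<Text, LongWritable, Text, LongWritable> {

    private MultipleOutputs<Text, LongWritable> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<Text, LongWritable>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values,
                          Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable v : values) {
            sum += v.get();
        }
        // The baseOutputPath "dir1/dir2/dir3/d" produces files like
        // dir1/dir2/dir3/d-r-00000 under the job output directory.
        // Per this issue, the nested directories appear on HDFS but
        // reportedly not on the s3 filesystem.
        mos.write(key, new LongWritable(sum), "dir1/dir2/dir3/d");
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // Must close MultipleOutputs, or the side files may be empty.
        mos.close();
    }
}
```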

This message was sent by Atlassian JIRA
