mahout-dev mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] Updated: (MAHOUT-249) Make WikipediaXmlSplitter able to write the chunks directly to HDFS or S3
Date Fri, 22 Jan 2010 14:45:21 GMT

     [ https://issues.apache.org/jira/browse/MAHOUT-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated MAHOUT-249:
-----------------------------

    Resolution: Fixed
      Assignee: Olivier Grisel
        Status: Resolved  (was: Patch Available)

> Make WikipediaXmlSplitter able to write the chunks directly to HDFS or S3
> -------------------------------------------------------------------------
>
>                 Key: MAHOUT-249
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-249
>             Project: Mahout
>          Issue Type: Improvement
>          Components: Classification
>    Affects Versions: 0.2
>            Reporter: Olivier Grisel
>            Assignee: Olivier Grisel
>            Priority: Minor
>             Fix For: 0.3
>
>         Attachments: MAHOUT-249-2.patch, MAHOUT-249-v2.patch, MAHOUT-249-WikipediaXMLSplitterHDFS.patch
>
>
> By using the Hadoop FS abstraction it should be possible to avoid writing the chunks
> to the local hard drive before uploading them to HDFS or S3.
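
A minimal sketch of the idea, not taken from the attached patches: the Hadoop FileSystem API selects the backend from the output path's scheme, so the same write code works for local disk, hdfs://, or s3n:// without an intermediate local copy. The class and method names below are illustrative only.

    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper: stream one XML chunk straight to whatever
    // FileSystem the output directory resolves to.
    public final class ChunkWriterSketch {

      private ChunkWriterSketch() {}

      static void writeChunk(String outputDirectory, int chunkNumber, CharSequence chunkXml)
          throws IOException {
        Configuration conf = new Configuration();
        Path chunkPath = new Path(outputDirectory, "chunk-" + chunkNumber + ".xml");
        // FileSystem.get(URI, Configuration) picks the implementation from the
        // path's scheme (file://, hdfs://, s3n://, ...), so no local temp file
        // and no separate upload step are needed.
        FileSystem fs = FileSystem.get(chunkPath.toUri(), conf);
        try (Writer writer =
            new OutputStreamWriter(fs.create(chunkPath), StandardCharsets.UTF_8)) {
          writer.append(chunkXml);
        }
      }
    }

With this approach the splitter only needs the output directory as a URI; pointing it at an S3 bucket versus an HDFS path is purely a configuration change.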

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

