hadoop-common-dev mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1926) Design/implement a set of compression benchmarks for the map-reduce framework
Date Tue, 02 Oct 2007 14:31:50 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-1926:

    Attachment: HADOOP-1926_1_20071002.patch

Here is an implementation of a *randomtextwriter* that can generate random textual data in
any output format (e.g. SequenceFileOutputFormat, TextOutputFormat, etc.).
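The word-picking idea behind the randomtextwriter can be sketched as a minimal, self-contained loop. The class name and the tiny WORDS array below are stand-ins for illustration only; the issue proposes embedding a full snapshot of /usr/share/dict/words in examples/RandomTextWriter.java.

```java
import java.util.Random;

// Sketch of the core idea: draw words at random from a fixed snapshot
// array until a per-map byte budget is met. The WORDS array here is a
// tiny placeholder, not the real dictionary snapshot.
public class RandomTextSketch {
  private static final String[] WORDS = {
    "diurnalness", "adaptative", "bangle", "nonsuppressed", "stormy"
  };

  // Generate roughly bytesPerMap bytes of space-separated random words.
  static String generate(long bytesPerMap, long seed) {
    Random random = new Random(seed);
    StringBuilder sb = new StringBuilder();
    while (sb.length() < bytesPerMap) {
      sb.append(WORDS[random.nextInt(WORDS.length)]).append(' ');
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    String data = generate(1024, 42L);
    System.out.println(data.length());
  }
}
```

In the actual map-reduce implementation, each map task would run such a loop against its own byte budget and emit the text through the configured OutputFormat.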

This patch also enhances examples/Sort and test/SortValidator to ensure they can be used with

> Design/implement a set of compression benchmarks for the map-reduce framework
> -----------------------------------------------------------------------------
>                 Key: HADOOP-1926
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1926
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>             Fix For: 0.15.0
>         Attachments: HADOOP-1926_1_20071002.patch
> It would be nice to benchmark various compression codecs for use in Hadoop (existing
> codecs like zlib and lzo, and in future bzip2, etc.) and run these along with our nightlies or
> Here are some steps:
> a) Fix HADOOP-1851 (Map output compression codec cannot be set independently of job
> output compression codec)
> b) Implement a random-text-writer along the lines of examples/randomwriter to generate
> large amounts of synthetic textual data for use in sort. One way to do this is to pick a
> word at random from {{/usr/share/dict/words}} till we get enough bytes per map. To be
> safe, we could store an array of Strings holding a snapshot of the words in
> examples/RandomTextWriter.java.
> c) Take a dump of wikipedia (http://download.wikimedia.org/enwiki/) and/or the ebooks
> from Project Gutenberg (http://www.gutenberg.org/MIRRORS.ALL) and use them as
> non-synthetic data to run sort/wordcount against.
> For both b) and c) we should set up nightly/weekly benchmark runs with different codecs
> for reduce-outputs and map-outputs (shuffle) and track each.
> Thoughts?
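One benchmark run as described above could be wired up with a configuration fragment along these lines. The property names are assumptions based on the HADOOP-1851 discussion; in particular, the separate map-output codec key is the one that fix would introduce, and the codec pairing (lzo for shuffle, gzip for job output) is just an illustrative choice.

```xml
<!-- Sketch of a per-job configuration for one benchmark run.
     Property names are assumed, not confirmed against the committed patch. -->
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>              <!-- compress intermediate (shuffle) data -->
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.LzoCodec</value>
</property>
<property>
  <name>mapred.output.compress</name>
  <value>true</value>              <!-- compress final reduce outputs -->
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.GzipCodec</value>
</property>
```

A nightly harness would then sweep the two codec keys across the available codecs and record sort/wordcount run times for each combination.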

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
