hadoop-hdfs-issues mailing list archives

From "Nandakumar (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12071) Ozone: Corona: Implementation of Corona
Date Fri, 28 Jul 2017 10:27:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104772#comment-16104772 ]

Nandakumar commented on HDFS-12071:

[~cheersyang], HDFS-12179 adds support in {{bin/hdfs}} to run Corona. Corona can now be started
using {{bin/hdfs corona}}; to get the list of supported options, use {{bin/hdfs corona -help}}.

[~/A/hadoop-3.0.0-beta1-SNAPSHOT]-130-$ bin/hdfs corona -help
Options supported are:
-mode [online | offline]        specifies the mode in which Corona should run.
-source <url>                   specifies the URL of s3 commoncrawl warc file to be
                                used when the mode is online.
-numOfVolumes <value>           specifies number of Volumes to be created in offline
                                mode
-numOfBuckets <value>           specifies number of Buckets to be created per Volume
                                in offline mode
-numOfKeys <value>              specifies number of Keys to be created per Bucket in
                                offline mode
-help                           prints usage.

> Ozone: Corona: Implementation of Corona
> ---------------------------------------
>                 Key: HDFS-12071
>                 URL: https://issues.apache.org/jira/browse/HDFS-12071
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Nandakumar
>            Assignee: Nandakumar
>              Labels: tool
>             Fix For: HDFS-7240
>         Attachments: HDFS-12071-HDFS-7240.000.patch, HDFS-12071-HDFS-7240.001.patch,
> Tool to populate ozone with data for testing.
> This is not a map-reduce program and this is not for benchmarking Ozone write throughput.
> It supports both online and offline modes. The default mode is offline; {{-mode}} can be
> used to change the mode.
> In online mode, an active internet connection is required; common crawl data from AWS will
> be used. The default source is [CC-MAIN-2017-17/warc.paths.gz | https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2017-17/warc.paths.gz]
> (it contains the path to the actual data segment); the user can override this using {{-source}}.
> The following values are derived from the URL of the Common Crawl data:
> * Domain will be used as Volume
> * URL will be used as Bucket
> * FileName will be used as Key
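The mapping described above can be sketched as follows. This is an illustrative helper (the function name and the exact parsing are assumptions, not Corona's actual code), showing how a Common Crawl URL would decompose into a Volume, Bucket, and Key:

```python
from urllib.parse import urlparse

def url_to_ozone_names(url):
    """Hypothetical sketch of the described mapping:
    domain -> Volume, URL -> Bucket, file name -> Key."""
    parsed = urlparse(url)
    volume = parsed.netloc               # Domain will be used as Volume
    bucket = url                         # URL will be used as Bucket
    key = parsed.path.rsplit("/", 1)[-1] # FileName will be used as Key
    return volume, bucket, key

print(url_to_ozone_names(
    "https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2017-17/warc.paths.gz"))
```

For the default source URL, this yields the domain {{commoncrawl.s3.amazonaws.com}} as the Volume and {{warc.paths.gz}} as the Key.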
> In offline mode, the data will be random bytes, and the size of each key will be 10 KB.
> * Default number of Volumes is 10; {{-numOfVolumes}} can be used to override
> * Default number of Buckets per Volume is 1000; {{-numOfBuckets}} can be used to override
> * Default number of Keys per Bucket is 500000; {{-numOfKeys}} can be used to override
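For a sense of scale, the offline defaults above multiply out as follows. This is a back-of-the-envelope sketch; treating "10 KB" as 10 * 1024 bytes is an assumption about how Corona counts a kilobyte:

```python
# Offline-mode defaults quoted in the description above.
volumes = 10
buckets_per_volume = 1000
keys_per_bucket = 500000
key_size_bytes = 10 * 1024  # assumption: "10 KB" means 10 KiB

total_keys = volumes * buckets_per_volume * keys_per_bucket
total_bytes = total_keys * key_size_bytes

print(total_keys)                       # 5000000000 keys
print(round(total_bytes / 1024**4, 1))  # ~46.6 TiB of random data
```

So a run with all defaults would attempt five billion keys, roughly 46 TiB of random data; the override flags exist precisely so smaller test runs are practical.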

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
