hadoop-mapreduce-issues mailing list archives

From "Amar Kamat (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (MAPREDUCE-2192) Implement gridmix system tests with different time intervals for MR streaming job traces.
Date Wed, 15 Jun 2011 16:45:47 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amar Kamat resolved MAPREDUCE-2192.
-----------------------------------

       Resolution: Duplicate
    Fix Version/s: 0.23.0

This is already committed to trunk.

> Implement gridmix system tests with different time intervals for MR streaming job traces.
> -----------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2192
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2192
>             Project: Hadoop Map/Reduce
>          Issue Type: Task
>          Components: contrib/gridmix
>            Reporter: Vinay Kumar Thota
>            Assignee: Vinay Kumar Thota
>             Fix For: 0.23.0
>
>         Attachments: MAPREDUCE-2192.patch, MAPREDUCE-2192.patch
>
>
> Develop gridmix system tests for the scenarios below, using MR streaming job traces with different time intervals.
> 1. Generate input data based on the cluster size, create the synthetic jobs from the 2 min folded MR streaming job trace, and submit the jobs with the arguments below.
> GRIDMIX_JOB_TYPE = LOADJOB
> GRIDMIX_USER_RESOLVER = SubmitterUserResolver
> GRIDMIX_SUBMISSION_POLICY = STRESS
> GRIDMIX_JOB_SUBMISSION_QUEUE_IN_TRACE = True
> Input Size = 250 MB * No. of nodes in cluster.
> MINIMUM_FILE_SIZE = 150 MB
> TRACE_FILE = 2 min folded trace.
> Verify the JobStatus, input split size, and summary (QueueName, UserName, StartTime, FinishTime, maps, reducers, counters, etc.) for each job after execution completes.
> 2. Generate input data based on the cluster size, create the synthetic jobs from the 3 min folded MR streaming job trace, and submit the jobs with the arguments below.
> GRIDMIX_JOB_TYPE = LoadJob
> GRIDMIX_USER_RESOLVER = RoundRobinUserResolver
> GRIDMIX_BYTES_PER_FILE = 150 MB
> GRIDMIX_SUBMISSION_POLICY = REPLAY
> GRIDMIX_JOB_SUBMISSION_QUEUE_IN_TRACE = True
> Input Size = 200 MB * No. of nodes in cluster.
> PROXY_USERS = proxy users file path
> TRACE_FILE = 3 min folded trace.
> Verify the JobStatus, input split size, and summary (QueueName, UserName, StartTime, FinishTime, maps, reducers, counters, etc.) for each job after execution completes.
> 3. Generate input data based on the cluster size, create the synthetic jobs from the 5 min MR streaming job trace, and submit the jobs with the arguments below.
> GRIDMIX_JOB_TYPE = LoadJob
> GRIDMIX_USER_RESOLVER = SubmitterUserResolver
> GRIDMIX_SUBMISSION_POLICY = SERIAL
> GRIDMIX_JOB_SUBMISSION_QUEUE_IN_TRACE = false
> GRIDMIX_KEY_FRC = 0.5f
> Input Size = 200 MB * No. of nodes in cluster.
> TRACE_FILE = 5 min folded trace.
> Verify the JobStatus and summary (QueueName, UserName, StartTime, FinishTime, maps, reducers, counters, etc.) for each job after execution completes.
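
For readers mapping the scenario parameters above onto an actual Gridmix run, a minimal sketch of scenario 1 follows. The configuration property names (gridmix.job.type, gridmix.user.resolve.class, gridmix.job-submission.policy, gridmix.job-submission.use-queue-in-trace, gridmix.min.file.size) and the "-generate" size format are assumptions based on the Gridmix3 sources of this era, not taken from the committed test patch; the paths and node count are placeholders.

    // Hedged sketch of scenario 1: STRESS policy, SubmitterUserResolver,
    // 2 min folded streaming trace. Property names are assumptions, not the
    // committed system-test code.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.gridmix.Gridmix;
    import org.apache.hadoop.util.ToolRunner;

    public class GridmixStreamingScenario1 {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("gridmix.job.type", "LOADJOB");                         // GRIDMIX_JOB_TYPE
        conf.set("gridmix.user.resolve.class",
            "org.apache.hadoop.mapred.gridmix.SubmitterUserResolver");   // GRIDMIX_USER_RESOLVER
        conf.set("gridmix.job-submission.policy", "STRESS");             // GRIDMIX_SUBMISSION_POLICY
        conf.setBoolean("gridmix.job-submission.use-queue-in-trace", true);
        conf.setLong("gridmix.min.file.size", 150L * 1024 * 1024);       // MINIMUM_FILE_SIZE = 150 MB

        int nodes = 10;                        // placeholder; the real tests read the cluster size
        long inputMB = 250L * nodes;           // Input Size = 250 MB * No. of nodes

        int exitCode = ToolRunner.run(conf, new Gridmix(), new String[] {
            "-generate", inputMB + "m",                           // generate synthetic input data
            "/user/gridmix/io",                                   // placeholder <iopath>
            "/user/gridmix/traces/2min-folded-streaming.json.gz"  // placeholder 2 min folded trace
        });
        System.exit(exitCode);
      }
    }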
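
Scenario 2 differs in the submission policy, the user resolver (which is fed the proxy-users file via "-users"), and the per-file data-generation size. A correspondingly hedged sketch, with the same caveats about assumed property names and placeholder paths:

    // Hedged sketch of scenario 2: REPLAY policy, RoundRobinUserResolver with a
    // proxy-users file, 3 min folded streaming trace. Property names are assumptions.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.gridmix.Gridmix;
    import org.apache.hadoop.util.ToolRunner;

    public class GridmixStreamingScenario2 {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("gridmix.job.type", "LOADJOB");
        conf.set("gridmix.user.resolve.class",
            "org.apache.hadoop.mapred.gridmix.RoundRobinUserResolver");
        conf.set("gridmix.job-submission.policy", "REPLAY");
        conf.setBoolean("gridmix.job-submission.use-queue-in-trace", true);
        conf.setLong("gridmix.gen.bytes.per.file", 150L * 1024 * 1024);  // GRIDMIX_BYTES_PER_FILE

        int nodes = 10;                                                  // placeholder
        System.exit(ToolRunner.run(conf, new Gridmix(), new String[] {
            "-generate", (200L * nodes) + "m",                           // 200 MB * No. of nodes
            "-users", "file:///home/gridmix/proxy-users.txt",            // PROXY_USERS (placeholder)
            "/user/gridmix/io",
            "/user/gridmix/traces/3min-folded-streaming.json.gz"
        }));
      }
    }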
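
Scenario 3 switches to the SERIAL policy, disables taking the queue from the trace, and sets the key fraction; the gridmix.key.fraction name in particular is an assumption drawn from the Gridmix3 sources rather than the committed test:

    // Hedged sketch of scenario 3: SERIAL policy, SubmitterUserResolver, 5 min trace,
    // queue-from-trace disabled, key fraction 0.5. Property names are assumptions.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.gridmix.Gridmix;
    import org.apache.hadoop.util.ToolRunner;

    public class GridmixStreamingScenario3 {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("gridmix.job.type", "LOADJOB");
        conf.set("gridmix.user.resolve.class",
            "org.apache.hadoop.mapred.gridmix.SubmitterUserResolver");
        conf.set("gridmix.job-submission.policy", "SERIAL");
        conf.setBoolean("gridmix.job-submission.use-queue-in-trace", false);
        conf.setFloat("gridmix.key.fraction", 0.5f);                     // GRIDMIX_KEY_FRC

        int nodes = 10;                                                  // placeholder
        System.exit(ToolRunner.run(conf, new Gridmix(), new String[] {
            "-generate", (200L * nodes) + "m",                           // 200 MB * No. of nodes
            "/user/gridmix/io",
            "/user/gridmix/traces/5min-folded-streaming.json.gz"
        }));
      }
    }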

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
