hadoop-mapreduce-user mailing list archives

From Marcos Ortiz <mlor...@uci.cu>
Subject Re: FW: NNbench and MRBench
Date Sun, 08 May 2011 03:09:13 GMT
On 5/7/2011 10:33 PM, stanley.shi@emc.com wrote:
> Thanks, Marcos,
>
> Through these links, I still can't find anything about NNBench and MRBench.
>
> -----Original Message-----
> From: Marcos Ortiz [mailto:mlortiz@uci.cu]
> Sent: May 8, 2011 10:23
> To: mapreduce-user@hadoop.apache.org
> Cc: Shi, Stanley
> Subject: Re: FW: NNbench and MRBench
>
> On 5/7/2011 8:53 PM, stanley.shi@emc.com wrote:
>    
>> Hi guys,
>>
>> I have a cluster of 16 machines running Hadoop. Now I want to do some benchmarking on
>> this cluster with "nnbench" and "mrbench".
>> I'm new to Hadoop and have no one to refer to, so I don't know what results I should expect.
>> For mrbench, I get an average time of 22 seconds for a one-map job. Is this too bad?
>> What should the results be?
>>
>> For nnbench, what are the expected results? Below is my result.
>> ================
>>                              Date&   time: 2011-05-05 20:40:25,459
>>
>>                           Test Operation: rename
>>                               Start time: 2011-05-05 20:40:03,820
>>                              Maps to run: 1
>>                           Reduces to run: 1
>>                       Block Size (bytes): 1
>>                           Bytes to write: 0
>>                       Bytes per checksum: 1
>>                          Number of files: 10000
>>                       Replication factor: 1
>>               Successful file operations: 10000
>>
>>           # maps that missed the barrier: 0
>>                             # exceptions: 0
>>
>>                              TPS: Rename: 1763
>>               Avg Exec time (ms): Rename: 0.5672
>>                     Avg Lat (ms): Rename: 0.4844
>> null
>>
>>                    RAW DATA: AL Total #1: 4844
>>                    RAW DATA: AL Total #2: 0
>>                 RAW DATA: TPS Total (ms): 5672
>>          RAW DATA: Longest Map Time (ms): 5672.0
>>                      RAW DATA: Late maps: 0
>>                RAW DATA: # of exceptions: 0
>> =============================
>> One more question: when I set the number of maps higher, I get all-zero results:
>> =============================
>>                           Test Operation: create_write
>>                               Start time: 2011-05-03 23:22:39,239
>>                              Maps to run: 160
>>                           Reduces to run: 160
>>                       Block Size (bytes): 1
>>                           Bytes to write: 0
>>                       Bytes per checksum: 1
>>                          Number of files: 1
>>                       Replication factor: 1
>>               Successful file operations: 0
>>
>>           # maps that missed the barrier: 0
>>                             # exceptions: 0
>>
>>                  TPS: Create/Write/Close: 0
>> Avg exec time (ms): Create/Write/Close: 0.0
>>               Avg Lat (ms): Create/Write: NaN
>>                      Avg Lat (ms): Close: NaN
>>
>>                    RAW DATA: AL Total #1: 0
>>                    RAW DATA: AL Total #2: 0
>>                 RAW DATA: TPS Total (ms): 0
>>          RAW DATA: Longest Map Time (ms): 0.0
>>                      RAW DATA: Late maps: 0
>>                RAW DATA: # of exceptions: 0
>> =====================
>>
>> Can anyone point me to some documents?
>> I really appreciate your help :)
>>
>> Thanks,
>> stanley
>>
>>      
> You can use these resources:
> http://www.michael-noll.com/blog/2011/04/09/benchmarking-and-stress-testing-an-hadoop-cluster-with-terasort-testdfsio-nnbench-mrbench/
> http://answers.oreilly.com/topic/460-how-to-benchmark-a-hadoop-cluster/
> http://wiki.apache.org/hadoop/HardwareBenchmarks
> http://www.quora.com/Apache-Hadoop/Are-there-any-good-Hadoop-benchmark-problems
>
> Regards
>
>    
Well, Michael Noll's post says this:

NameNode benchmark (nnbench)
=======================
NNBench (see src/test/org/apache/hadoop/hdfs/NNBench.java) is useful for 
load testing the NameNode hardware and configuration. It generates a lot 
of HDFS-related requests with normally very small "payloads" for the 
sole purpose of putting a high HDFS management stress on the NameNode. 
The benchmark can simulate requests for creating, reading, renaming and 
deleting files on HDFS.

I like to run this test simultaneously from several machines -- e.g. 
from a set of DataNode boxes -- in order to hit the NameNode from 
multiple locations at the same time.
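
For example, here is a rough sketch of firing off nnbench on a few DataNodes at
once. It assumes passwordless ssh and the same Hadoop install path on every box,
and the hostnames are only placeholders; adjust everything to your cluster:

$ for host in datanode01 datanode02 datanode03; do
    ssh "$host" "cd /usr/local/hadoop && \
      bin/hadoop jar hadoop-*-test.jar nnbench -operation create_write \
        -maps 12 -reduces 6 -numberOfFiles 1000 \
        -baseDir /benchmarks/NNBench-\$(hostname -s)" &
  done
$ wait

Each box writes under its own baseDir, so the runs do not clobber each other.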

The syntax of NNBench is as follows:

NameNode Benchmark 0.4
Usage: nnbench <options>
Options:
         -operation <Available operations are create_write open_read 
rename delete. This option is mandatory>
          * NOTE: The open_read, rename and delete operations assume 
that the files they operate on, are already available. The create_write 
operation must be run before running the other operations.
         -maps <number of maps. default is 1. This is not mandatory>
         -reduces <number of reduces. default is 1. This is not mandatory>
         -startTime <time to start, given in seconds from the epoch. 
Make sure this is far enough into the future, so all maps (operations) 
will start at the same time>. default is launch time + 2 mins. This is 
not mandatory
         -blockSize <Block size in bytes. default is 1. This is not 
mandatory>
         -bytesToWrite <Bytes to write. default is 0. This is not mandatory>
         -bytesPerChecksum <Bytes per checksum for the files. default is 
1. This is not mandatory>
         -numberOfFiles <number of files to create. default is 1. This 
is not mandatory>
         -replicationFactorPerFile <Replication factor for the files. 
default is 1. This is not mandatory>
         -baseDir <base DFS path. default is /becnhmarks/NNBench. This 
is not mandatory>
         -readFileAfterOpen <true or false. if true, it reads the file 
and reports the average time to read. This is valid with the open_read 
operation. default is false. This is not mandatory>
         -help: Display the help statement

The following command will run a NameNode benchmark that creates 1000 
files using 12 maps and 6 reducers. It uses a custom output directory 
based on the machine's short hostname. This is a simple trick to ensure 
that one box does not accidentally write into the same output directory 
of another box running NNBench at the same time.

$ hadoop jar hadoop-*-test.jar nnbench -operation create_write \
     -maps 12 -reduces 6 -blockSize 1 -bytesToWrite 0 -numberOfFiles 1000 \
     -replicationFactorPerFile 3 -readFileAfterOpen true \
     -baseDir /benchmarks/NNBench-`hostname -s`
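
As the usage text notes, open_read, rename and delete expect the files to already
be there, so you run create_write first and then the other operation against the
same baseDir. A minimal sketch (the numbers are only an illustration; as far as I
know you should keep -maps, -numberOfFiles and -baseDir the same so the generated
file names line up):

$ hadoop jar hadoop-*-test.jar nnbench -operation create_write \
    -maps 12 -reduces 6 -numberOfFiles 1000 \
    -baseDir /benchmarks/NNBench-`hostname -s`

$ hadoop jar hadoop-*-test.jar nnbench -operation rename \
    -maps 12 -reduces 6 -numberOfFiles 1000 \
    -baseDir /benchmarks/NNBench-`hostname -s`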

Note that by default the benchmark waits 2 minutes before it actually 
starts!
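
If you launch by hand from several boxes, the default two-minute barrier may be
too short; the -startTime option lets you push it further out. A small sketch,
assuming GNU date on the launching machine:

$ START=$(( $(date +%s) + 300 ))   # 5 minutes from now, in seconds since the epoch
$ hadoop jar hadoop-*-test.jar nnbench -operation create_write \
    -maps 12 -reduces 6 -numberOfFiles 1000 -startTime $START \
    -baseDir /benchmarks/NNBench-`hostname -s`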

MapReduce benchmark (mrbench)
=======================

MRBench (see src/test/org/apache/hadoop/mapred/MRBench.java) loops a 
small job a number of times. As such it is a very complementary 
benchmark to the "large-scale" TeraSort benchmark suite because MRBench 
checks whether small job runs are responsive and running efficiently on 
your cluster. It puts its focus on the MapReduce layer as its impact on 
the HDFS layer is very limited.

This test should be run from a single box (see caveat below). The 
command syntax can be displayed via mrbench --help:

MRBenchmark.0.0.2
Usage: mrbench [-baseDir <base DFS path for output/input>]
           [-jar <local path to job jar file>]
           [-numRuns <number of times to run the job>]
           [-maps <number of maps for each run>]
           [-reduces <number of reduces for each run>]
           [-inputLines <number of input lines to generate>]
           [-inputType <type of input: ascending, descending or random>]
           [-verbose]

     Important note: In Hadoop 0.20.2, setting the -baseDir parameter 
has no effect. This means that multiple parallel MRBench runs (e.g. 
started from different boxes) might interfere with each other. This is a 
known bug (MAPREDUCE-2398). I have submitted a patch but it has not been 
integrated yet.

In Hadoop 0.20.2, the parameters default to:

-baseDir: /benchmarks/MRBench  [*** see my note above ***]
-numRuns: 1
-maps: 2
-reduces: 1
-inputLines: 1
-inputType: ascending

The command to run a loop of 50 small test jobs is:

$ hadoop jar hadoop-*-test.jar mrbench -numRuns 50

Example output of the above command:

DataLines       Maps    Reduces AvgTime (milliseconds)
1               2       1       31414

This means that the average finish time of executed jobs was 31 seconds.
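
If you want the small jobs to look a bit more like your own workload, you can also
raise the map/reduce counts and the generated input. The numbers below are only an
illustration, not a recommendation:

$ hadoop jar hadoop-*-test.jar mrbench -numRuns 20 \
    -maps 10 -reduces 5 -inputLines 1000 -inputType random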

Can you check these?
http://www.slideshare.net/ydn/ahis2011-platform-hadoop-simulation-and-performance
http://issues.apache.org/jira/browse/HADOOP-5867

Did you search the current API documentation?

Regards

-- 
Marcos Luís Ortíz Valmaseda
  Software Engineer (Large-Scaled Distributed Systems)
  University of Information Sciences,
  La Habana, Cuba
  Linux User # 418229
  http://about.me/marcosortiz

