accumulo-notifications mailing list archives

From "Adam Fuchs (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-3303) funky performance with large WAL
Date Fri, 07 Nov 2014 22:27:34 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14202830#comment-14202830
] 

Adam Fuchs commented on ACCUMULO-3303:
--------------------------------------

I isolated this to just HDFS by writing a simple client that measures write performance over time, and saw the same behavior. The output below includes lines of the form "<total_bytes_written> <time_since_last_sample_in_ms> <recent_MB/s>". The first listing is performance when writing a 4E09 byte file with a 2GB block size; the second is the same write with a 2.6GB block size. The 2.6GB block size oscillates between periods of >100MB/s and periods of <30MB/s, while the 2GB block size remains steady at >100MB/s (a sketch of such a client follows the two listings):

{code}
[root@n2 ~]# java -cp wal_perf_test-0.0.1-SNAPSHOT.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/hadoop-auth.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar
BenchmarkWrites -D fs.defaultFS=hdfs://n1:8020 -D perftest.chunkSize=1000 -D perftest.fileSize=4000000000
-D perftest.perfSampleSize=100000000 -D perftest.sync=true -D perftest.replicationFactor=1
-D perftest.blockSize=2147483648
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
fsURI: hdfs://n1:8020
numFiles: 4
fileSize: 4000000000
blockSize: 2147483648
chunkSize: 1000
perfSampleSize: 100000000
Creation: 42
100000000 996 95.75043337412149
200000000 854 111.67146562134074
300000000 1057 90.22462785300378
400000000 892 106.9141610320908
500000000 867 109.9970376477797
600000000 908 105.03021105795705
700000000 849 112.32913031875736
800000000 902 105.72885991200111
900000000 856 111.41055098203856
1000000000 922 103.4353922349512
1100000000 943 101.13195295930541
1200000000 845 112.86086584689349
1300000000 859 111.02145709036671
1400000000 882 108.12633972859977
1500000000 880 108.37208140980114
1600000000 916 104.1129166382369
1700000000 846 112.72746056811465
1800000000 888 107.39575635205519
1900000000 964 98.92887099649896
2000000000 924 103.2115061045725
2100000000 850 112.19697840073529
2200000000 1029 92.67971976737124
2300000000 964 98.92887099649896
2400000000 965 98.82635403173575
2500000000 943 101.13195295930541
2600000000 1061 89.88447845487748
2700000000 1008 94.6105472625248
2800000000 964 98.92887099649896
2900000000 957 99.6524886526907
3000000000 1047 91.08637214959408
3100000000 1003 95.08218508536889
3200000000 973 98.01380435829907
3300000000 1037 91.96473639404532
3400000000 987 96.62353762981256
3500000000 955 99.86118496400523
3600000000 1042 91.52344687200096
3700000000 999 95.46289453516016
3800000000 919 103.77304857521763
3900000000 1008 94.6105472625248
4000000000 990 96.33073903093434
wrote 4000000000 bytes in 37793ms
{code}

{code}
[root@n2 ~]# java -cp wal_perf_test-0.0.1-SNAPSHOT.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/hadoop-auth.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar
BenchmarkWrites -D fs.defaultFS=hdfs://n1:8020 -D perftest.chunkSize=1000 -D perftest.fileSize=4000000000
-D perftest.perfSampleSize=100000000 -D perftest.sync=true -D perftest.replicationFactor=1
-D perftest.blockSize=2787484160
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
fsURI: hdfs://n1:8020
numFiles: 4
fileSize: 4000000000
blockSize: 2787484160
chunkSize: 1000
perfSampleSize: 100000000
Creation: 42
100000000 4404 21.654730163629655
200000000 3582 26.624073601514517
300000000 3516 27.123842901201648
400000000 3197 29.830288282960588
500000000 3093 30.83331123201584
600000000 2872 33.20593023698642
700000000 1726 55.25343663999131
800000000 911 104.68433769552689
900000000 905 105.37837750345304
1000000000 885 107.75980976341808
1100000000 934 102.10645785934155
1200000000 850 112.19697840073529
1300000000 866 110.12405501226905
1400000000 882 108.12633972859977
1500000000 916 104.1129166382369
1600000000 888 107.39575635205519
1700000000 887 107.51683386767193
1800000000 960 99.34107462565105
1900000000 913 104.45501822631435
2000000000 931 102.435479742884
2100000000 912 104.56955223752742
2200000000 900 105.96381293402777
2300000000 901 105.84620603842953
2400000000 917 103.99938019697383
2500000000 907 105.14601062913451
2600000000 852 111.93360521200117
2700000000 900 105.96381293402777
2800000000 1196 79.73865521791387
2900000000 3205 29.7558289050312
3000000000 3235 29.479886133114373
3100000000 3324 28.690563068780083
3200000000 3249 29.352856768428747
3300000000 3293 28.960653398307016
3400000000 2993 31.863492028274308
3500000000 1340 71.16972510494404
3600000000 884 107.88171000070702
3700000000 881 108.24907110173099
3800000000 939 101.5627600006656
3900000000 830 114.90052004894578
4000000000 828 115.17805753698671
wrote 4000000000 bytes in 66646ms
{code}
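
For reference, here is a minimal sketch of the kind of client used above (this is not the actual BenchmarkWrites source; the class name, output path, and defaults are illustrative). It writes fixed-size chunks to HDFS, hflushes after each chunk, and prints a "<total_bytes_written> <ms_since_last_sample> <recent_MB/s>" line every perfSampleSize bytes; the MB/s column is bytes per second over the sample divided by 2^20 (e.g. 100000000 bytes in 996ms ≈ 95.75 MB/s):

{code}
// Illustrative sketch only -- writes one file with a configurable HDFS block size,
// hflushing each chunk, and reports throughput every perfSampleSize bytes.
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WalWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String fsUri       = conf.get("fs.defaultFS", "hdfs://n1:8020");
    long   fileSize    = conf.getLong("perftest.fileSize", 4000000000L);
    long   blockSize   = conf.getLong("perftest.blockSize", 2147483648L);
    int    chunkSize   = conf.getInt("perftest.chunkSize", 1000);
    long   sampleSize  = conf.getLong("perftest.perfSampleSize", 100000000L);
    short  replication = (short) conf.getInt("perftest.replicationFactor", 1);

    FileSystem fs = FileSystem.get(URI.create(fsUri), conf);
    Path file = new Path("/tmp/wal_perf_test");   // illustrative output path
    byte[] chunk = new byte[chunkSize];

    // create(path, overwrite, bufferSize, replication, blockSize)
    FSDataOutputStream out = fs.create(file, true, 64 * 1024, replication, blockSize);

    long written = 0, lastSample = 0;
    long start = System.currentTimeMillis(), lastTime = start;
    while (written < fileSize) {
      out.write(chunk);
      out.hflush();                                // same sync call as tserver.wal.sync.method=hflush
      written += chunk.length;
      if (written - lastSample >= sampleSize) {
        long now = System.currentTimeMillis();
        long ms = now - lastTime;
        double mbPerSec = (written - lastSample) / (ms / 1000.0) / (1024.0 * 1024.0);
        System.out.println(written + " " + ms + " " + mbPerSec);
        lastSample = written;
        lastTime = now;
      }
    }
    out.close();
    System.out.println("wrote " + written + " bytes in " + (System.currentTimeMillis() - start) + "ms");
    fs.delete(file, false);
  }
}
{code}

Running it twice, once with perftest.blockSize at 2147483648 and once at 2787484160, should be enough to reproduce the difference if the effect really is on the HDFS side.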

Bottom line: this appears to be an HDFS performance bug. Make sure you always use a tserver.wal.blocksize of no more than 2G when using a tserver.walog.max.size of more than 2G.
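
For example, with the 1.6.x property names above, one way to follow that guidance in accumulo-site.xml would be something like the following (values illustrative). This matters because tserver.wal.blocksize, when left unset, should default to about 110% of tserver.walog.max.size, which is how a large WAL setting can push the WAL's HDFS block size past 2G:

{code}
<!-- accumulo-site.xml (illustrative): allow a large WAL file, but cap the
     HDFS block size used for the WAL at 2G to avoid the slow periods above -->
<property>
  <name>tserver.walog.max.size</name>
  <value>4G</value>
</property>
<property>
  <name>tserver.wal.blocksize</name>
  <value>2G</value>
</property>
{code}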

> funky performance with large WAL
> --------------------------------
>
>                 Key: ACCUMULO-3303
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3303
>             Project: Accumulo
>          Issue Type: Bug
>          Components: logger, tserver
>    Affects Versions: 1.6.1
>            Reporter: Adam Fuchs
>         Attachments: 1GB_WAL.png, 2GB_WAL.png, 4GB_WAL.png, 512MB_WAL.png, 8GB_WAL.png, WAL_disabled.png
>
>
> The tserver seems to get into a funky state when writing to a large write-ahead log. I ran some continuous ingest tests varying tserver.walog.max.size in {512M, 1G, 2G, 4G, 8G} and got some results that I have yet to understand. I was expecting to see the effects of walog metadata management as described in ACCUMULO-2889, but I also found an additional behavior of ingest slowing down for long periods when using a large walog size.
> The cluster configuration was as follows:
> {code}
> Accumulo version: 1.6.2-SNAPSHOT (current head of origin/1.6)
> Nodes: 4
> Masters: 1
> Slaves: 3
> Cores per node: 24
> Drives per node: 8x1TB data + 2 raided system
> Memory per node: 64GB
> tserver.memory.maps.max=2G
> table.file.compress.type=snappy (for ci table only)
> tserver.mutation.queue.max=16M
> tserver.wal.sync.method=hflush
> Native maps enabled
> {code}



