accumulo-notifications mailing list archives

From "Eric Newton (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-3303) funky performance with large WAL
Date Thu, 06 Nov 2014 21:12:34 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200940#comment-14200940 ]

Eric Newton commented on ACCUMULO-3303:
---------------------------------------

Nevermind... HDFS allows for blocks > 2^31^ bytes, and has since 0.22.
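For context, the HDFS client API takes the per-file block size as a long, so block sizes above 2^31^ bytes are expressible on the wire. A minimal sketch of creating a file with a 4 GiB block size (the path and sizes below are hypothetical, assuming a Hadoop 0.22+ client on the classpath):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LargeBlockExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // blockSize is declared as a long, so values > 2^31 (e.g. 4 GiB) are legal
    long fourGiB = 4L * 1024 * 1024 * 1024;

    FSDataOutputStream out = fs.create(
        new Path("/tmp/large-block-file"), // hypothetical path
        true,                              // overwrite
        4096,                              // buffer size
        (short) 3,                         // replication
        fourGiB);                          // per-file block size
    out.close();
  }
}
{code}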

> funky performance with large WAL
> --------------------------------
>
>                 Key: ACCUMULO-3303
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3303
>             Project: Accumulo
>          Issue Type: Bug
>          Components: logger, tserver
>    Affects Versions: 1.6.1
>            Reporter: Adam Fuchs
>         Attachments: 1GB_WAL.png, 2GB_WAL.png, 4GB_WAL.png, 512MB_WAL.png, 8GB_WAL.png, WAL_disabled.png
>
>
> The tserver seems to get into a funky state when writing to a large write-ahead log.
> I ran some continuous ingest tests varying tserver.walog.max.size in {512M, 1G, 2G, 4G, 8G}
> and got some results that I have yet to understand. I was expecting to see the effects of
> walog metadata management as described in ACCUMULO-2889, but I also found an additional
> behavior of ingest slowing down for long periods when using a large walog size.
> The cluster configuration was as follows:
> {code}
> Accumulo version: 1.6.2-SNAPSHOT (current head of origin/1.6)
> Nodes: 4
> Masters: 1
> Slaves: 3
> Cores per node: 24
> Drives per node: 8x1TB data + 2 raided system
> Memory per node: 64GB
> tserver.memory.maps.max=2G
> table.file.compress.type=snappy (for ci table only)
> tserver.mutation.queue.max=16M
> tserver.wal.sync.method=hflush
> Native maps enabled
> {code}
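For anyone trying to reproduce the runs above, tserver.walog.max.size is a system-wide property and can be set in ZooKeeper through the Java client API rather than edited in accumulo-site.xml. A minimal sketch against the 1.6 client API; the instance name, ZooKeeper quorum, and credentials below are hypothetical, and it is assumed here that tservers apply the new value when they next roll a write-ahead log:

{code}
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;

public class SetWalogMaxSize {
  public static void main(String[] args) throws Exception {
    // hypothetical instance name and ZooKeeper quorum
    Instance instance = new ZooKeeperInstance("ci-instance", "zk1:2181");
    Connector conn = instance.getConnector("root", new PasswordToken("secret"));

    // stores the value in ZooKeeper, overriding the accumulo-site.xml setting
    // for the whole instance
    conn.instanceOperations().setProperty("tserver.walog.max.size", "2G");
  }
}
{code}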



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
