accumulo-notifications mailing list archives

From "Adam Fuchs (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (ACCUMULO-3303) funky performance with large WAL
Date Wed, 05 Nov 2014 23:31:34 GMT

     [ https://issues.apache.org/jira/browse/ACCUMULO-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adam Fuchs updated ACCUMULO-3303:
---------------------------------
    Attachment: WAL_disabled.png
                512MB_WAL.png
                8GB_WAL.png
                4GB_WAL.png
                2GB_WAL.png
                1GB_WAL.png

There is a big effect of WAL metadata management visible in all of the runs with the WAL enabled: ingest drops to zero whenever roughly 3x the WAL size has been ingested (three logs across the three servers). This is probably due to the serialized WAL flushes of metadata updates (WAL registration) on all 1.03K tablets. However, that doesn't explain the characteristic change in behavior between the 2GB and 4GB WAL sizes, in which performance drops for long periods of time.
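
As a quick sanity check on that pattern, here is a minimal sketch in plain Java. The server count and WAL sizes are the ones from this test; the 3x relationship is the hypothesis above, not a measured constant. It just prints the ingest volume at which each run would be expected to stall if the drops line up with one full WAL per tserver:

{code}
// Expected ingest volume between stalls, assuming ingest pauses each time
// roughly one full WAL per tserver (3 tservers in this test) has been written.
// The 3x relationship is the hypothesis being checked, not a measured value.
public class WalStallEstimate {
    public static void main(String[] args) {
        long MB = 1024L * 1024;
        long GB = 1024L * MB;
        long[] walMaxSizes = {512 * MB, 1 * GB, 2 * GB, 4 * GB, 8 * GB};
        int tservers = 3;

        for (long walMax : walMaxSizes) {
            long expectedStallInterval = walMax * tservers; // ~3x WAL size of ingested data
            System.out.printf("walog.max.size=%5d MB -> expect an ingest stall roughly every %6d MB ingested%n",
                walMax / MB, expectedStallInterval / MB);
        }
    }
}
{code}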

> funky performance with large WAL
> --------------------------------
>
>                 Key: ACCUMULO-3303
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3303
>             Project: Accumulo
>          Issue Type: Bug
>          Components: logger, tserver
>    Affects Versions: 1.6.1
>            Reporter: Adam Fuchs
>         Attachments: 1GB_WAL.png, 2GB_WAL.png, 4GB_WAL.png, 512MB_WAL.png, 8GB_WAL.png, WAL_disabled.png
>
>
> The tserver seems to get into a funky state when writing to a large write-ahead log. I ran some continuous ingest tests varying tserver.walog.max.size in {512M, 1G, 2G, 4G, 8G} and got some results that I have yet to understand. I was expecting to see the effects of walog metadata management as described in ACCUMULO-2889, but I also found an additional behavior of ingest slowing down for long periods when using a large walog size.
> The cluster configuration was as follows:
> {code}
> Accumulo version: 1.6.2-SNAPSHOT (current head of origin/1.6)
> Nodes: 4
> Masters: 1
> Slaves: 3
> Cores per node: 24
> Drives per node: 8x1TB data + 2 raided system
> Memory per node: 64GB
> tserver.memory.maps.max=2G
> table.file.compress.type=snappy (for ci table only)
> tserver.mutation.queue.max=16M
> tserver.wal.sync.method=hflush
> Native maps enabled
> {code}
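
For anyone trying to reproduce a comparable write load outside of the continuous ingest tooling, a minimal sketch against the 1.6 client (BatchWriter) API is shown below. The instance name, ZooKeeper quorum, credentials, and value/row sizes are placeholders, not part of the reported setup; "ci" is the table named in the configuration above and must already exist. Whether the stalls reproduce will of course depend on matching the server-side settings listed above, particularly tserver.walog.max.size.

{code}
// A minimal continuous-ingest-style write load, sketched against the 1.6 client API.
// Instance name, ZooKeeper quorum, and credentials below are placeholders (assumptions);
// "ci" is the table referenced in the configuration above and must already exist.
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

import java.util.Random;

public class SimpleIngest {
    public static void main(String[] args) throws Exception {
        Instance inst = new ZooKeeperInstance("accumulo", "zk1:2181,zk2:2181,zk3:2181");
        Connector conn = inst.getConnector("root", new PasswordToken("secret"));

        BatchWriterConfig cfg = new BatchWriterConfig();
        cfg.setMaxMemory(64L * 1024 * 1024); // client-side buffer, independent of the server settings
        cfg.setMaxWriteThreads(8);

        BatchWriter bw = conn.createBatchWriter("ci", cfg);
        Random rand = new Random();
        byte[] value = new byte[1024]; // 1KB values, purely to generate write volume

        long numMutations = 10000000L;
        for (long i = 0; i < numMutations; i++) {
            rand.nextBytes(value);
            Mutation m = new Mutation(new Text(String.format("row_%016x", rand.nextLong())));
            m.put(new Text("cf"), new Text("cq"), new Value(value));
            bw.addMutation(m);
        }
        bw.close(); // flushes any buffered mutations
    }
}
{code}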



