hadoop-common-issues mailing list archives

From "Stephen O'Donnell (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry
Date Tue, 14 Jun 2016 17:50:27 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330018#comment-15330018 ]

Stephen O'Donnell commented on HADOOP-13263:

[~arpitagarwal] I looked into this a bit further. In the current implementation the Guava
cache is set up so that a key is:

1) Refreshed after HADOOP_SECURITY_GROUPS_CACHE_SECS since the last write (default setting
is 5 minutes)

2) Evicted after 10 * HADOOP_SECURITY_GROUPS_CACHE_SECS since the last write (default is therefore
50 minutes)
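
The arithmetic behind those two thresholds can be sketched in plain Java. This is a simplified model of the timing rules described above, not the actual Guava CacheBuilder code; the class and method names are mine, and the 300-second value is the stated default:

```java
// Simplified model of the two timing thresholds described above.
// Not the real Guava implementation -- just the arithmetic.
public class CacheTimingSketch {
    // HADOOP_SECURITY_GROUPS_CACHE_SECS default is 300 seconds (5 minutes).
    static final long CACHE_SECS = 300;

    // A key becomes eligible for refresh once it is older than CACHE_SECS.
    static boolean needsRefresh(long ageSecs) {
        return ageSecs > CACHE_SECS;
    }

    // A key is evicted once it is older than 10 * CACHE_SECS (3000 s = 50 minutes).
    static boolean isEvicted(long ageSecs) {
        return ageSecs > 10 * CACHE_SECS;
    }

    public static void main(String[] args) {
        // 10 minutes old: stale (refresh wanted) but still served from the cache.
        System.out.println(needsRefresh(600) && !isEvicted(600)); // true
        // 60 minutes old: past the eviction horizon, so getGroups must reload or fail.
        System.out.println(isEvicted(3600)); // true
    }
}
```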

I tested this scenario: a failed refresh does not update the write time, so if refreshes
keep failing over and over, the key will eventually be evicted.

So, in the current implementation, if LDAP is down and the key is older than HADOOP_SECURITY_GROUPS_CACHE_SECS
but less than HADOOP_SECURITY_GROUPS_CACHE_SECS * 10, the thread that attempts the refresh
will fail to update the cache, but it will still return the old value, as will all other threads.
After HADOOP_SECURITY_GROUPS_CACHE_SECS * 10, the key will have been evicted and the call
to getGroups will throw an exception.

The background refresh introduced by this patch behaves in basically the same way:

1) If the background refresh repeatedly fails (e.g. LDAP is down), the old cached values
are returned until 10 * HADOOP_SECURITY_GROUPS_CACHE_SECS, at which point they will
be evicted.

2) If LDAP is still throwing errors after eviction, then a call to getGroups will throw an
exception that will propagate up through the code.

So with respect to expiring entries during a long LDAP outage, both solutions behave
in the same way.
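
The stale-while-refreshing behaviour both implementations share can be illustrated with a stdlib-only sketch. This is not the patch code (the patch uses Guava's refresh machinery); it just shows the pattern, with hypothetical names, including how a failed refresh leaves the old value untouched:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Hypothetical sketch: callers always get the cached value immediately while
// a background thread refreshes it. A failed refresh keeps the old value (and
// its write time) unchanged, so repeated failures eventually end in eviction,
// as described above.
public class BackgroundRefreshSketch {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();
    private final ExecutorService refreshPool = Executors.newFixedThreadPool(1);
    private final Function<String, List<String>> loader; // e.g. an LDAP lookup

    public BackgroundRefreshSketch(Function<String, List<String>> loader) {
        this.loader = loader;
    }

    public List<String> getGroups(String user) {
        List<String> cached = cache.get(user);
        if (cached == null) {
            // First lookup blocks, like the initial cache load.
            List<String> fresh = loader.apply(user);
            cache.put(user, fresh);
            return fresh;
        }
        // Return the old value immediately; refresh in the background.
        refreshPool.submit(() -> {
            try {
                cache.put(user, loader.apply(user));
            } catch (RuntimeException e) {
                // Refresh failed (e.g. LDAP down): the old value stays in place.
            }
        });
        return cached;
    }

    public void shutdown() {
        refreshPool.shutdown();
    }
}
```

Even when the loader starts throwing, a caller keeps seeing the previously cached groups until eviction removes them; only then does the exception surface.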

Does that sound acceptable?

> Reload cached groups in background after expiry
> -----------------------------------------------
>                 Key: HADOOP-13263
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13263
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>         Attachments: HADOOP-13263.001.patch
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the Namenode group
cache to run in the background, avoiding many slow group lookups. Even with this change, I
have seen quite a few clusters with issues due to slow group lookups. The problem is most
prevalent in HA clusters, where a slow group lookup on the hdfs user can fail to return for
over 45 seconds, causing the Failover Controller to kill the Namenode.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user blocks until
it returns. Any subsequent threads requesting that user block until that first thread populates
the cache.
> 2) When the key expires, the first thread to hit the cache after expiry blocks. While
it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on slow group
lookups. If the call from the FC is the one that blocks and lookups are slow, it can cause
the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, where the
first thread that hits an expired key schedules a background cache reload, but still returns
the old value. Then the cache is eventually updated. This patch introduces this background
reload feature. There are two new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the current
behaviour. Set to true to enable a small thread pool and background refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if the above
is set to true. Controls how many threads are in the background refresh pool. Default is 1,
which is likely to be enough.
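> With the patch applied, enabling the feature would look something like this in core-site.xml (the property names are the two listed above; the values shown are illustrative):

```xml
<!-- core-site.xml -->
<property>
  <name>hadoop.security.groups.cache.background.reload</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.security.groups.cache.background.reload.threads</name>
  <value>1</value>
</property>
```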

This message was sent by Atlassian JIRA
