hive-issues mailing list archives

From "Sergey Shelukhin (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HIVE-10617) LLAP: fix allocator concurrency rarely causing spurious failure to allocate due to "partitioned" locking
Date Wed, 06 May 2015 00:46:00 GMT

     [ https://issues.apache.org/jira/browse/HIVE-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin updated HIVE-10617:
------------------------------------
    Description: 
See HIVE-10482 and the comment in the code. Right now this is worked around by retrying.
Simple case - a thread can reserve memory from the manager and then bounce between checking
arena 1 and arena 2 for memory while other threads allocate and deallocate from the respective
arenas in the opposite order, making it look like there is no memory. More importantly, this
can happen when buddy blocks are split while many allocations are in flight.
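
For illustration, a minimal Java sketch of how per-arena ("partitioned") locking can produce
this spurious failure; Arena and PartitionedAllocator here are hypothetical stand-ins, not the
actual LLAP allocator code:

    // Hypothetical sketch: each arena has its own lock, and an allocating thread
    // checks arenas one at a time.
    final class Arena {
      private long freeBytes;

      Arena(long freeBytes) { this.freeBytes = freeBytes; }

      synchronized boolean tryAllocate(long size) {
        // Only this arena is locked during the check; other arenas can change underneath.
        if (freeBytes >= size) {
          freeBytes -= size;
          return true;
        }
        return false;
      }

      synchronized void deallocate(long size) {
        freeBytes += size;
      }
    }

    final class PartitionedAllocator {
      private final Arena[] arenas;

      PartitionedAllocator(Arena... arenas) { this.arenas = arenas; }

      // While this thread checks arena 1, another thread can free in arena 2 and then
      // re-allocate from arena 1 (and vice versa on the next pass), so every individual
      // check sees "no memory" even though total free space never dropped below 'size'.
      boolean allocate(long size) {
        for (Arena arena : arenas) {
          if (arena.tryAllocate(size)) {
            return true;
          }
        }
        return false; // spurious failure; currently worked around by retrying
      }
    }

A failed allocate() here does not mean memory was actually exhausted at any single point in
time, which is why the retry workaround usually succeeds.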

This can be solved either with some form of helping (especially for the split case) or by
making the allocator an "actor" (or a set of actors, each owning 1-N arenas) that satisfies
allocation requests more deterministically (and also gets rid of most synchronization).
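
A minimal sketch of the "actor" direction, again with hypothetical names rather than the real
LLAP code: a single thread owns a group of arenas and drains allocation requests from a queue,
so the arenas are only ever touched by their owner and most synchronization goes away:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical sketch: one actor thread owns a group of arenas and is the only
    // code that mutates them, so per-arena locks are no longer needed.
    final class ArenaActor implements Runnable {
      static final class Request {
        final long size;
        final CompletableFuture<Boolean> result = new CompletableFuture<>();
        Request(long size) { this.size = size; }
      }

      private final BlockingQueue<Request> requests = new LinkedBlockingQueue<>();
      private final long[] arenaFreeBytes; // arenas owned exclusively by this actor

      ArenaActor(long[] arenaFreeBytes) { this.arenaFreeBytes = arenaFreeBytes; }

      // Called by allocating threads; the actor answers requests in order, so it
      // always sees a consistent view of all the arenas it owns.
      CompletableFuture<Boolean> allocate(long size) {
        Request r = new Request(size);
        requests.add(r);
        return r.result;
      }

      @Override
      public void run() {
        try {
          while (!Thread.currentThread().isInterrupted()) {
            Request r = requests.take();
            r.result.complete(tryAllocate(r.size));
          }
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }

      private boolean tryAllocate(long size) {
        for (int i = 0; i < arenaFreeBytes.length; i++) {
          if (arenaFreeBytes[i] >= size) {
            arenaFreeBytes[i] -= size;
            return true;
          }
        }
        return false; // genuinely no memory in this actor's arenas
      }
    }

Deallocation would go through the same queue (or a separate message type), which is what
removes the cross-arena races described above.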

  was:
See HIVE-10482 and the comment in the code.
Simple case - a thread can reserve memory from the manager and then bounce between checking
arena 1 and arena 2 for memory while other threads allocate and deallocate from the respective
arenas in the opposite order, making it look like there is no memory. More importantly, this
can happen when buddy blocks are split while many allocations are in flight.

This can be solved either with some form of helping (especially for the split case) or by
making the allocator an "actor" (or a set of actors, each owning 1-N arenas) that satisfies
allocation requests more deterministically (and also gets rid of most synchronization).


> LLAP: fix allocator concurrency rarely causing spurious failure to allocate due to "partitioned" locking
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-10617
>                 URL: https://issues.apache.org/jira/browse/HIVE-10617
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Sergey Shelukhin
>
> See HIVE-10482 and the comment in the code. Right now this is worked around by retrying.
> Simple case - a thread can reserve memory from the manager and then bounce between checking
> arena 1 and arena 2 for memory while other threads allocate and deallocate from the respective
> arenas in the opposite order, making it look like there is no memory. More importantly, this
> can happen when buddy blocks are split while many allocations are in flight.
> This can be solved either with some form of helping (especially for the split case) or by
> making the allocator an "actor" (or a set of actors, each owning 1-N arenas) that satisfies
> allocation requests more deterministically (and also gets rid of most synchronization).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
