hadoop-hdfs-issues mailing list archives

From "James Clampffer (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators
Date Mon, 16 May 2016 15:23:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284723#comment-15284723 ]

James Clampffer commented on HDFS-10188:
----------------------------------------

Macro idea looks good to me.  If you go with this method you can actually determine the size
at compile time, at least for operator new/delete (not the array new[]/delete[] forms).

{code}
static void operator delete(void* p) { \
  if (p == nullptr) return;             /* deleting null must be a no-op */ \
  mem_struct* header = (mem_struct*)p; \
  size_t size = (--header)->mem_size;   /* size tag sits just before the payload */ \
  ::memset(p, 0, size);                 /* scribble over the freed payload */ \
  ::free(header);                       /* the header is the start of the real allocation */ \
} \
{code}
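
For reference, the matching operator new under this scheme would presumably look something
like the following (mem_struct and its mem_size field are assumed from the snippet above,
and the sketch assumes the header keeps the payload suitably aligned; this isn't the actual
patch code):
{code}
static void* operator new(size_t size) { \
  mem_struct* header = (mem_struct*)::malloc(sizeof(mem_struct) + size); \
  if (header == nullptr) throw std::bad_alloc(); \
  header->mem_size = size;        /* record the payload size in the tag */ \
  void* p = (void*)(header + 1);  /* payload starts right after the tag */ \
  ::memset(p, 0xAF, size);        /* hand the constructor dirty memory */ \
  return p; \
} \
{code}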

One wrinkle: a class-scope operator delete is implicitly static, so this (and therefore
decltype(this)) isn't actually in scope inside it, even when the macro is expanded in a
struct or class.  The sized form of a class-scope operator delete gets us the size anyway:
the compiler passes sizeof the class, which is known at compile time.
{code}
static void operator delete(void* p, size_t size) { \
  if (p == nullptr) return;  /* deleting null must be a no-op */ \
  ::memset(p, 0, size);      /* size == sizeof(T), supplied by the compiler */ \
  ::free(p); \
} \
{code}
It's slightly cheaper and it avoids the header tag and the pointer arithmetic entirely.  The
analogous sized form exists for delete[] as well (operator delete[](void*, size_t)), though
I'd double-check what the compiler's size argument includes there, since array allocations
can carry a cookie.  If there are other reasons to keep the header tag around I'm fine with
that too.
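
Putting it together, the macro could be dropped into a class roughly like this
(DEBUG_ALLOC_METHODS and SimpleStruct are made-up names for illustration; this is a sketch
of the approach, not the patch itself):
{code}
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <new>

#define DEBUG_ALLOC_METHODS \
  static void* operator new(size_t size) { \
    void* p = ::malloc(size); \
    if (p == nullptr) throw std::bad_alloc(); \
    ::memset(p, 0xAF, size);  /* hand the constructor dirty memory */ \
    return p; \
  } \
  static void operator delete(void* p, size_t size) { \
    if (p == nullptr) return; \
    ::memset(p, 0, size);     /* size == sizeof(SimpleStruct), supplied by the compiler */ \
    ::free(p); \
  }

struct SimpleStruct {
  DEBUG_ALLOC_METHODS
  int x;
};

/* SimpleStruct* s = new SimpleStruct;  // memory arrives dirty (0xAF bytes) */
/* delete s;                            // memory is zeroed, then freed      */
{code}
Since both operators are class-scope usual allocation/deallocation functions, plain
new/delete expressions on SimpleStruct pick them up automatically; nothing changes at the
call sites.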

> libhdfs++: Implement debug allocators
> -------------------------------------
>
>                 Key: HDFS-10188
>                 URL: https://issues.apache.org/jira/browse/HDFS-10188
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: Xiaowei Zhu
>         Attachments: HDFS-10188.HDFS-8707.000.patch, HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional checking to detect
double-delete, read-after-delete, and write-after-delete errors, to help debug resource
ownership issues and prevent new ones from entering the library.
> One of the most common problems we hit is use-after-free bugs.  The continuation pattern
makes these really tricky to debug because by the time a SIGSEGV is raised the context of
what caused the error is long gone.
> The plan is to add allocators that can be turned on as needed and that do the following,
in increasing order of runtime cost:
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty; memset freed memory to 0
> 3: implement operator new with mmap and lock that region of memory once it's been deleted;
obviously this can't be left running forever because the memory is never unmapped (see the
sketch after this description)
> This should also put some groundwork in place for implementing specialized allocators for
tiny, high-churn objects like std::string.
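
A minimal sketch of what option 3 could look like on POSIX follows (the free function names
here are made up for illustration; it assumes mmap/mprotect semantics and, as the
description warns, never unmaps):
{code}
#include <cstddef>
#include <new>
#include <sys/mman.h>

/* Option-3 style allocation: every allocation gets its own mapping. */
static void* debug_mmap_new(size_t size) {
  void* p = ::mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED) throw std::bad_alloc();
  return p;
}

/* "Delete" locks the pages instead of returning them, so any later read
   or write faults immediately, with the stack of the bad access intact. */
static void debug_mmap_delete(void* p, size_t size) {
  ::mprotect(p, size, PROT_NONE);
  /* Intentionally never munmap'd: address space grows without bound,
     which is why this mode can't be left running forever. */
}
{code}
Burning at least a page per allocation is also what makes this the most expensive of the
three modes.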



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


