hadoop-hdfs-issues mailing list archives

From "James Clampffer (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators
Date Thu, 12 May 2016 15:38:13 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281627#comment-15281627

James Clampffer commented on HDFS-10188:

I spoke too soon. I ran locally as a sanity test before committing, and valgrind was unhappy:
     [exec] ==15234== Invalid read of size 8
     [exec] ==15234==    at 0x6594D1: operator delete(void*) (string3.h:84)
     [exec] ==15234==    by 0x5567D84: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19)
     [exec] ==15234==    by 0x5C7F258: __run_exit_handlers (exit.c:82)
     [exec] ==15234==    by 0x5C7F2A4: exit (exit.c:104)
     [exec] ==15234==    by 0x5C64ECB: (below main) (libc-start.c:321)
     [exec] ==15234==  Address 0x68127d8 is 8 bytes before a block of size 24 alloc'd
     [exec] ==15234==    at 0x4C2B3B0: operator new(unsigned long, std::nothrow_t const&)
(in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
     [exec] ==15234==    by 0x5567EBD: __cxa_thread_atexit (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19)
     [exec] ==15234==    by 0x554846: ReportError(int, std::string const&) (hdfs.cc:88)
     [exec] ==15234==    by 0x55602B: hdfsBuilderConfGetInt (hdfs.cc:673)
     [exec] ==15234==    by 0x522B42: HdfsBuilderTest_TestRead_Test::TestBody() (hdfs_builder_test.cc:76)
     [exec] ==15234==    by 0x5531A2: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test,
void>(testing::Test*, void (testing::Test::*)(), char const*) (gmock-gtest-all.cc:3562)
     [exec] ==15234==    by 0x547256: testing::Test::Run() (gmock-gtest-all.cc:3635)
     [exec] ==15234==    by 0x5472FD: testing::TestInfo::Run() (gmock-gtest-all.cc:3810)
     [exec] ==15234==    by 0x547404: testing::TestCase::Run() (gmock-gtest-all.cc:3928)
     [exec] ==15234==    by 0x5476B7: testing::internal::UnitTestImpl::RunAllTests() (gmock-gtest-all.cc:5799)
     [exec] ==15234==    by 0x547973: testing::UnitTest::Run() (gmock-gtest-all.cc:3562)
     [exec] ==15234==    by 0x51E1FF: main (gtest.h:20058)
     [exec] ==15234== 


Looks like gmock is newing objects with the default (or its own) operator new and then
freeing them with the fancy delete. When the debug delete decrements the address, it ends
up in memory it doesn't own.

I also get the following, indicating that the default delete is being called on the internal
pointer created by the debug new:
*** Error in `/home/jclampffer/apache_hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/libhdfs_threaded_hdfspp_test_shim_static':
free(): invalid pointer: 0x00007f7a98006938 ***

Seeing as this passed CI and on your machine, I'm guessing it's a platform-dependent issue
caused by the order of symbol resolution at link time.

> libhdfs++: Implement debug allocators
> -------------------------------------
>                 Key: HDFS-10188
>                 URL: https://issues.apache.org/jira/browse/HDFS-10188
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: Xiaowei Zhu
>         Attachments: HDFS-10188.HDFS-8707.000.patch, HDFS-10188.HDFS-8707.001.patch
> I propose implementing a set of new/delete pairs with additional checking to detect
double deletes, reads-after-delete, and writes-after-delete, to help debug resource
ownership issues and prevent new ones from entering the library.
> One of the most common issues we have is use-after-free bugs. The continuation pattern
makes these really tricky to debug because by the time a SIGSEGV is raised, the context of
what caused the error is long gone.
> The plan is to add allocators that can be turned on that can do the following, in order
of runtime cost.
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty, memset free'd memory to 0
> 3: implement operator new with mmap, lock that region of memory once it's been deleted;
obviously this can't be left to run forever because the memory is never unmapped
> This should also put some groundwork in place for implementing specialized allocators
for tiny objects that we churn through like std::string.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
