hadoop-hdfs-issues mailing list archives

From "Anatoli Shein (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind
Date Mon, 23 Oct 2017 19:48:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anatoli Shein updated HDFS-11807:
---------------------------------
    Attachment: HDFS-11807.HDFS-8707.001.patch

In this patch I fixed the null-termination problem when a file descriptor is passed
as an argument (which previously caused some hang-ups), added checks on the return values
of socket reads and writes, added a shutdown of the protobuf library (so valgrind does not
report its static data as leaked), and added more comments. It currently works both on my
local machine and in Docker. Please review.

> libhdfs++: Get minidfscluster tests running under valgrind
> ----------------------------------------------------------
>
>                 Key: HDFS-11807
>                 URL: https://issues.apache.org/jira/browse/HDFS-11807
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: Anatoli Shein
>         Attachments: HDFS-11807.HDFS-8707.000.patch, HDFS-11807.HDFS-8707.001.patch
>
>
> The gmock based unit tests generally don't expose race conditions and memory stomps.
> A good way to expose these is running libhdfs++ stress tests and tools under valgrind and
> pointing them at a real cluster.  Right now the CI tools don't do that, so bugs occasionally
> slip in and aren't caught until they cause trouble in applications that use libhdfs++ for
> HDFS access.
> The reason the minidfscluster tests don't run under valgrind is that the GC and JIT
> compiler in the embedded JVM do things that look like errors to valgrind.  I'd like to have
> these tests do some basic setup and then fork into two processes: one for the minidfscluster
> stuff and one for the libhdfs++ client test.  A small amount of shared memory can be used
> to give the minidfscluster a place to put the hdfsBuilder object that the client needs
> to find out which port to connect to.  A condition variable can also go there
> to let the minidfscluster know when it can shut down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

