hadoop-common-issues mailing list archives

From "Alan Burlison (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.
Date Fri, 09 Oct 2015 08:36:27 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14950072#comment-14950072
] 

Alan Burlison commented on HADOOP-11127:
----------------------------------------

I need to modify the scheme to include the OS as well as the CPU architecture; I think the
path will have to be os.name/os.arch.
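As a rough illustration of that os.name/os.arch scheme, the lookup path could be derived from the standard Java system properties. This is a hypothetical sketch, not the actual Hadoop implementation; the class and method names are invented for illustration.

```java
// Hypothetical sketch of an os.name/os.arch resource path for the native
// library; names here are illustrative, not from the Hadoop source tree.
public class NativeLibPath {
    public static String resourcePath(String libName) {
        // e.g. "Linux/amd64/libhadoop.so" or "SunOS/sparcv9/libhadoop.so";
        // spaces in os.name (e.g. "Mac OS X") are normalised to underscores.
        String os = System.getProperty("os.name").replace(' ', '_');
        String arch = System.getProperty("os.arch");
        // mapLibraryName adds the platform prefix/suffix, e.g. lib...so
        return os + "/" + arch + "/" + System.mapLibraryName(libName);
    }

    public static void main(String[] args) {
        System.out.println(resourcePath("hadoop"));
    }
}
```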

Whilst it's technically feasible to distribute the JNI component inside the Hadoop JAR it's
far from trivial in build terms to get them all in the same place for packaging. Also, you
can't load a shared object from inside a JAR file so there'd have to be logic to identify
the correct one and extract it to the filesystem. And then you'd have to deal with the issues
of where in the filesystem that should be, and of multiple Hadoop instances all wanting to
write *their* version of the JNI library to the filesystem, potentially simultaneously.
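To make the extract-and-load problem concrete, the logic would have to look something like the following. This is a hedged sketch under the assumptions above (class and resource names are hypothetical); it deliberately shows the temp-file copy step that System.load() forces on you, and it does not solve the multiple-instances problem described above.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical extract-and-load helper; not part of Hadoop.
public class JarNativeLoader {
    /**
     * Copies the bundled shared object out of the JAR to a temp file and
     * loads it, because System.load() cannot read from inside a JAR.
     */
    public static void loadFromJar(String resource) throws IOException {
        try (InputStream in = JarNativeLoader.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("No native library at " + resource);
            }
            // A per-process temp file sidesteps simultaneous writers, but at
            // the cost of one extracted copy per JVM and cleanup on exit.
            Path tmp = Files.createTempFile("libhadoop", ".so");
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString());
        }
    }
}
```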

For those reasons I thought the JNI-in-JAR proposal was unworkable, at least as a first phase.



> Improve versioning and compatibility support in native library for downstream hadoop-common
users.
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-11127
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11127
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: native
>            Reporter: Chris Nauroth
>            Assignee: Alan Burlison
>         Attachments: HADOOP-11064.003.patch, proposal.txt
>
>
> There is no compatibility policy enforced on the JNI function signatures implemented
in the native library.  This library typically is deployed to all nodes in a cluster, built
from a specific source code version.  However, downstream applications that want to run in
that cluster might choose to bundle a hadoop-common jar at a different version.  Since there
is no compatibility policy, this can cause link errors at runtime when the native function
signatures expected by hadoop-common.jar do not exist in libhadoop.so/hadoop.dll.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
