hadoop-common-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11329) should add HADOOP_HOME as part of kms's startup options
Date Sat, 29 Nov 2014 04:30:13 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228625#comment-14228625
] 

Allen Wittenauer commented on HADOOP-11329:
-------------------------------------------

bq. It's just that KMS has a dependency on the hadoop libraries (in this case hadoop-common),
not necessarily the hadoop installation per se.

But this is clearly false, given the failure. Certain parts of hadoop-common have expectations
about the operating environment; KMS is not fulfilling those expectations, and that is what
produces this stack trace.

bq. KMS can be independently built, packaged and deployed as long as all the jars and libraries
are mvn-downloadable.

KMS is part of the Hadoop source and utilizes Hadoop common code. There isn't much reason
for it to play these games. If it wants to be a separate project, then it should be a separate
project. If it wants to be part of Hadoop, then it should integrate properly. That means
avoiding the operational burden of extra downloads and extra environment variables for things
Hadoop already handles.

bq. Unfortunately, I don't think currently, the hadoop-common native libraries are exposed
as maven artifacts.

Given that they are extremely architecture- and platform-dependent, this isn't surprising.
This is compiled code that now, unfortunately, contains a lot of chipset-dependent code.
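
The environment expectation at issue can be sketched concretely. As a minimal illustration
(the /opt/hadoop path and the Tomcat-era CATALINA_OPTS variable are assumptions for the
example, not taken from this issue), NativeCodeLoader resolves libhadoop.so through
java.library.path, so the JVM running KMS has to be pointed at the native-library directory
of the installation that HADOOP_HOME names:

```shell
# Sketch only: /opt/hadoop is an assumed install location, not from this issue.
# NativeCodeLoader looks up libhadoop.so on java.library.path, so pass the
# Hadoop native-library directory to the JVM hosting KMS (Tomcat, in this era).
export HADOOP_HOME=/opt/hadoop
export CATALINA_OPTS="${CATALINA_OPTS:-} -Djava.library.path=${HADOOP_HOME}/lib/native"
echo "${CATALINA_OPTS}"
```

Without something along these lines, buildSupportsOpenssl() fails with the
UnsatisfiedLinkError shown in the issue description below.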


> should add HADOOP_HOME as part of kms's startup options
> -------------------------------------------------------
>
>                 Key: HADOOP-11329
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11329
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: kms, security
>            Reporter: Dian Fu
>            Assignee: Arun Suresh
>         Attachments: HADOOP-11329.1.patch, HADOOP-11329.2.patch, HADOOP-11329.3.patch,
HADOOP-11329.4.patch
>
>
> Currently, HADOOP_HOME isn't part of the startup options of KMS. If I add the following
configuration to the core-site.xml of KMS,
> {code} <property>
>   <name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name>
>   <value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec</value>
>  </property>
> {code} the KMS server will throw the following exception when it receives a "generateEncryptedKey"
request:
> {code}
> 2014-11-24 10:23:18,189 DEBUG org.apache.hadoop.crypto.OpensslCipher: Failed to load
OpenSSL Cipher.
> java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
>         at org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method)
>         at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:85)
>         at org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
>         at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:67)
>         at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:100)
>         at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:256)
>         at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
>         at org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension$EncryptedQueueRefiller.fillQueueForKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:77)
>         at org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:181)
>         at org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:175)
>         at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>         at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>         at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>         at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>         at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>         at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>         at org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:256)
>         at org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:226)
>         at org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension.generateEncryptedKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:126)
>         at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
>         at org.apache.hadoop.crypto.key.kms.server.KeyAuthorizationKeyProvider.generateEncryptedKey(KeyAuthorizationKeyProvider.java:192)
>         at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:379)
>         at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:375)
> {code}
> The reason is that it cannot find libhadoop.so. This prevents KMS from responding to
"generateEncryptedKey" requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
