hadoop-hdfs-issues mailing list archives

From "Elek, Marton (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
Date Thu, 28 Mar 2019 13:55:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803943#comment-16803943 ]

Elek, Marton commented on HDDS-1333:
------------------------------------

There are two kinds of class incompatibility:

1. When new/incompatible classes are used in the OzoneFileSystem/FileSystem interface itself (e.g. KeyProviderTokenIssuer is not available in older hadoop versions).

2. When new/incompatible classes are used deeper in the implementation (e.g. hadoop rpc modifications).

The second one is solved with a specific class loader, which can be activated when ozonefs is used with hadoop2.7-3.1.
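The isolated class loader approach can be sketched roughly like this. Note this is a hypothetical illustration of the technique, not the actual Ozone implementation: delegate only a small whitelist of package prefixes (e.g. {{java.}} and the public FileSystem API) to the application class loader, and resolve everything else from the jars bundled with the ozonefs artifact, so the bundled hadoop rpc classes never clash with the older ones on the application classpath.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical sketch of an isolated class loader: classes whose names match
// one of the delegated prefixes are loaded by the parent (application) class
// loader; everything else is resolved only from the given jar URLs, so the
// bundled implementation classes stay isolated from the host application.
public class FilteringClassLoader extends URLClassLoader {

  private final String[] delegatedPrefixes;

  public FilteringClassLoader(URL[] isolatedJars, ClassLoader parent,
      String... delegatedPrefixes) {
    super(isolatedJars, parent);
    this.delegatedPrefixes = delegatedPrefixes;
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    for (String prefix : delegatedPrefixes) {
      if (name.startsWith(prefix)) {
        // Shared API classes: use the normal parent-first delegation.
        return super.loadClass(name, resolve);
      }
    }
    synchronized (getClassLoadingLock(name)) {
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        // Everything else comes only from the isolated jars.
        c = findClass(name);
      }
      if (resolve) {
        resolveClass(c);
      }
      return c;
    }
  }
}
```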

The first one is a trickier question. I created a simplified OzoneFileSystem (BasicOzoneFileSystem)
which is compatible with hadoop2.7 but doesn't include statistics and KeyProviderTokenIssuer.
For older versions of hadoop/spark it can be used together with the isolated class loader.
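A minimal sketch of the runtime capability check behind this split (the KeyProviderTokenIssuer class name is the real one from the stack trace below; the helper itself is hypothetical, not the actual Ozone code): probe whether the newer hadoop class is loadable, and only then wire in the full OzoneFileSystem instead of the basic one.

```java
// Hypothetical helper: detect at runtime whether a newer hadoop class (such
// as org.apache.hadoop.crypto.key.KeyProviderTokenIssuer) is on the
// classpath, so a compatible BasicOzoneFileSystem can be chosen on hadoop2.7
// and the full-featured OzoneFileSystem everywhere else.
public final class HadoopCompat {

  private HadoopCompat() {
  }

  /** True if the named class can be loaded by the given class loader. */
  public static boolean isLoadable(String className, ClassLoader loader) {
    try {
      // initialize=false: just check presence, don't run static initializers.
      Class.forName(className, false, loader);
      return true;
    } catch (ClassNotFoundException | NoClassDefFoundError e) {
      return false;
    }
  }

  /** True if the hadoop 3.x KeyProviderTokenIssuer interface is available. */
  public static boolean hasKeyProviderTokenIssuer(ClassLoader loader) {
    return isLoadable("org.apache.hadoop.crypto.key.KeyProviderTokenIssuer",
        loader);
  }
}
```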

I also added a robot test which tests hadoop2.7/2.9/3.1/3.2 and spark (with hadoop2.7) together with
o3fs.

This robot test is not compatible with test.sh (test.sh executes all the tests in the
same container, but this test needs to execute multiple tests on multiple nodes).

For now the tests can be executed manually; they will be integrated with test.sh in a follow-up jira.

{code}
cd hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose/ozonefs
docker-compose up -d
docker-compose scale datanode=3
#drink a coffee or wait about 1 minute
robot .


==============================================================================
Ozonefs                                                                       
==============================================================================
Ozonefs.Hadoopo3Fs :: Test ozone fs usage from Hdfs and Spark                 
==============================================================================
Create bucket and volume to test                                      | PASS |
------------------------------------------------------------------------------
Test hadoop 3.1                                                       | PASS |
------------------------------------------------------------------------------
Test hadoop 3.2                                                       | PASS |
------------------------------------------------------------------------------
Test hadoop 2.9                                                       | PASS |
------------------------------------------------------------------------------
Test hadoop 2.7                                                       | PASS |
------------------------------------------------------------------------------
Test spark 2.3                                                        | PASS |
------------------------------------------------------------------------------
Ozonefs.Hadoopo3Fs :: Test ozone fs usage from Hdfs and Spark         | PASS |
6 critical tests, 6 passed, 0 failed
6 tests total, 6 passed, 0 failed
==============================================================================
Ozonefs                                                               | PASS |
6 critical tests, 6 passed, 0 failed
6 tests total, 6 passed, 0 failed
==============================================================================
Output:  /home/elek/projects/hadoop-review/hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose/ozonefs/output.xml
Log:     /home/elek/projects/hadoop-review/hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose/ozonefs/log.html
Report:  /home/elek/projects/hadoop-review/hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose/ozonefs/report.html
{code}

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
> -------------------------------------------------------------------------------------
>
>                 Key: HDDS-1333
>                 URL: https://issues.apache.org/jira/browse/HDDS-1333
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>         at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
>         at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:348)
>         at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
>         at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
>         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>         at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
>         at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
>         at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
>         at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
>         at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
>         at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
>         at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
>         at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
>         at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
>         at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
>         at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
>         at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
>         at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         ... 43 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


