falcon-dev mailing list archives

From Venkat Ranganathan <n....@live.com>
Subject Re: Review Request 42642: FALCON-1729 - Database Import and Export to support password alias via Java keystore
Date Thu, 04 Feb 2016 00:10:57 GMT


> On Feb. 1, 2016, 1:40 p.m., Venkat Ranganathan wrote:
> > common/src/main/java/org/apache/falcon/entity/DatasourceHelper.java, line 211
> > <https://reviews.apache.org/r/42642/diff/2/?file=1227071#file1227071line211>
> >
> >     I think you are doing this because the ServiceLoader META-INF entries are not being merged properly. We should see whether there is a Maven strategy to fix this.
> >     
> >     How will this work with other filesystem solutions like WASB?
> >     
> >     How are we passing the principal for remote filesystems when the file is on a remote system?
> 
> Venkatesan Ramachandran wrote:
>     The datasource definition specifies the HDFS pathname for the jceks provider -- it is inferred as HDFS for now, hence the conf.set(). Regarding WASB, I have not tried it, but I assume Sqoop can work with WASB out of the box. If so, this should also work, since I used the CredentialProviderFactory that comes with hadoop-common.
>     
>     Similarly, the jceks provider file has to be on the cluster where the datasource entity exists.
> 
> Venkat Ranganathan wrote:
>     You are right.  Sqoop can work with HDFS and WASB out of the box -- and Sqoop actions, when run via Oozie, have the jks file localized or accessed as an HDFS/WASB location as the case may be.  But here the FS implementation is explicitly set for HDFS, because the implementations for other filesystems are not available due to some issues. Maybe I don't fully grok the details, but can't we make it work independent of the FS where the file is located?  For example, in an HDI installation, fs.defaultFS will be a WASB location -- it will not work there.  It would be good not to restrict this to HDFS, since other filesystems are used as well.
>     
>     Regarding security, there is a call to new Configuration() here.  It will pick up the current Falcon cluster configuration, with the principals from the cluster where the Falcon server is running.  What if the file is on a remote cluster?  Don't we have to make sure we use the filesystem and principals for that filesystem, picked up from the corresponding cluster entity?
> 
> Venkatesan Ramachandran wrote:
>     Let me check whether it works without the HDFS implementation hard-coded. If so, we should be OK with WASB.
>     
>     Not sure I understand the issue about the file being on a remote cluster. Let's talk about it offline.
> 
> Venkatesan Ramachandran wrote:
>     Hard-coding the filesystem is superfluous. I removed it and made sure it works end to end with UT/IT:
>     conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
>     
>     Regarding a remote cluster accessing the password provider file on another cluster with Kerberos -- I will open a separate JIRA (https://issues.apache.org/jira/browse/FALCON-1814) to address that. This feature should work with Kerberos in a single-cluster setup.

I am OK with the remote cluster setup being tracked and handled properly.  It looks like the ServiceLoader META-INF/ entries were not being squashed after all, which I initially thought was the reason you had the explicit fs.hdfs.impl value.

Thanks for creating a separate JIRA for that.
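
Going back to the CredentialProviderFactory point above, here is a rough sketch of how an alias can be resolved through the hadoop-common credential provider API (the method and names are illustrative only, not the exact DatasourceHelper code):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.alias.CredentialProviderFactory;

    static char[] resolvePassword(String providerUri, String alias) throws IOException {
        // e.g. providerUri = "jceks://hdfs@namenode:8020/user/falcon/db_password.jceks";
        // the scheme nested after "jceks://" tells Hadoop which filesystem holds the keystore.
        Configuration conf = new Configuration();
        conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH, providerUri);
        // Configuration.getPassword() consults the configured credential providers
        // first and falls back to a plain config property of the same name.
        char[] password = conf.getPassword(alias);
        if (password == null) {
            throw new IOException("Alias " + alias + " not found in " + providerUri);
        }
        return password;
    }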


- Venkat


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/42642/#review117274
-----------------------------------------------------------


On Feb. 2, 2016, 6:08 p.m., Venkatesan Ramachandran wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/42642/
> -----------------------------------------------------------
> 
> (Updated Feb. 2, 2016, 6:08 p.m.)
> 
> 
> Review request for Falcon.
> 
> 
> Repository: falcon-git
> 
> 
> Description
> -------
> 
> Support password alias for database import and export using java keystore
> 
> 
> Diffs
> -----
> 
>   client/src/main/resources/datasource-0.1.xsd beb82cc 
>   common/src/main/java/org/apache/falcon/entity/DatasourceHelper.java 1f1a193 
>   common/src/main/java/org/apache/falcon/entity/parser/DatasourceEntityParser.java e58b1e9 
>   common/src/main/java/org/apache/falcon/security/CredentialProviderHelper.java PRE-CREATION 
>   common/src/main/java/org/apache/falcon/util/HdfsClassLoader.java 786ffea 
>   common/src/test/java/org/apache/falcon/entity/parser/DatasourceEntityParserTest.java 9567eab 
>   common/src/test/resources/config/datasource/datasource-file-0.1.xml 3ee40ed 
>   common/src/test/resources/config/datasource/datasource-file-0.2.xml PRE-CREATION 
>   oozie/src/main/java/org/apache/falcon/oozie/DatabaseExportWorkflowBuilder.java f1fb337 
>   oozie/src/main/java/org/apache/falcon/oozie/DatabaseImportWorkflowBuilder.java 19fa931 
>   oozie/src/main/java/org/apache/falcon/oozie/ImportExportCommon.java PRE-CREATION 
>   oozie/src/main/java/org/apache/falcon/oozie/ImportWorkflowBuilder.java 4892ecb 
>   pom.xml 12672bd 
>   webapp/pom.xml 7ecfbaf 
>   webapp/src/test/java/org/apache/falcon/lifecycle/FeedImportIT.java b55d660 
>   webapp/src/test/java/org/apache/falcon/resource/TestContext.java 321a5cf 
>   webapp/src/test/java/org/apache/falcon/util/HsqldbTestUtils.java a92629f 
>   webapp/src/test/resources/credential_provider.jceks PRE-CREATION 
>   webapp/src/test/resources/datasource-template.xml fb7a329 
>   webapp/src/test/resources/datasource-template1.xml PRE-CREATION 
>   webapp/src/test/resources/datasource-template2.xml PRE-CREATION 
>   webapp/src/test/resources/datasource-template3.xml PRE-CREATION 
>   webapp/src/test/resources/datasource-template4.xml PRE-CREATION 
>   webapp/src/test/resources/feed-template3.xml a6c1d6b 
>   webapp/src/test/resources/feed-template4.xml PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/42642/diff/
> 
> 
> Testing
> -------
> 
> Unit tests and manual end-to-end testing on both regular and secure clusters.
> 
> 
> Thanks,
> 
> Venkatesan Ramachandran
> 
>

