falcon-dev mailing list archives

From "Balu Vellanki" <bvella...@hortonworks.com>
Subject Re: Review Request 40894: FALCON-1647 Unable to create feed : FilePermission error under cluster staging directory
Date Fri, 04 Dec 2015 18:06:38 GMT


> On Dec. 4, 2015, 2:25 a.m., Ying Zheng wrote:
> > Nitpick: In createStagingSubdirs, you try to create two subdirectories, and the two pieces of code overlap. To simplify the code, it would be better to define one function, e.g. createStagingSubdir, that creates a single directory, and call it twice to create the two subdirectories 'feed' and 'process' in your case.

I agree; I will make this change.
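
For illustration, a minimal sketch of the suggested refactor: one helper that creates a single staging subdirectory with world-writable (777) permissions, called once for "feed" and once for "process". Note this is a hypothetical sketch using java.nio.file against the local filesystem; the actual Falcon change in ClusterEntityParser works against HDFS via Hadoop's FileSystem/FsPermission APIs, and the class and method names here (StagingDirSketch, createStagingSubdir) are assumptions, not the patch itself.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class StagingDirSketch {

    // Hypothetical helper in the spirit of the review suggestion: create one
    // subdirectory under the staging dir and chmod it to 777 so that entities
    // scheduled by different users can all write under it (chmod is applied
    // explicitly, so the default umask no longer decides the permissions).
    static void createStagingSubdir(Path stagingDir, String name) throws IOException {
        Path subdir = stagingDir.resolve(name);
        Files.createDirectories(subdir);
        Set<PosixFilePermission> allRwx = PosixFilePermissions.fromString("rwxrwxrwx");
        Files.setPosixFilePermissions(subdir, allRwx);
    }

    public static void main(String[] args) throws IOException {
        Path staging = Files.createTempDirectory("falcon-staging");
        // One helper, called twice -- instead of two duplicated code blocks.
        createStagingSubdir(staging, "falcon/workflows/feed");
        createStagingSubdir(staging, "falcon/workflows/process");
        System.out.println(Files.isDirectory(staging.resolve("falcon/workflows/feed")));
    }
}
```

In the real patch the directories would also be created as the "falcon" service user rather than the scheduling user, which is what makes the 777 permissions stick regardless of who schedules first.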


- Balu


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/40894/#review108928
-----------------------------------------------------------


On Dec. 3, 2015, 4:56 a.m., Balu Vellanki wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/40894/
> -----------------------------------------------------------
> 
> (Updated Dec. 3, 2015, 4:56 a.m.)
> 
> 
> Review request for Falcon and Venkat Ranganathan.
> 
> 
> Bugs: FALCON-1647
>     https://issues.apache.org/jira/browse/falcon-1647
> 
> 
> Repository: falcon-git
> 
> 
> Description
> -------
> 
> Submit a cluster entity as user "user1" and schedule a feed entity as "user1". Now submit and schedule a feed entity as "user2"; the feed submission can fail with the following error:
> 
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=user2, access=WRITE, inode="/apps/falcon-user1/staging/falcon/workflows/feed":user1:falcon:drwxr-xr-x
>    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
>    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
>    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
>    
> This happens because Falcon creates <staging_dir>/falcon/workflows/feed and <staging_dir>/falcon/workflows/process only when a feed/process entity is scheduled. The owner of these dirs is the user scheduling the entity, and their permissions are based on the default umask of the FS. If a new feed/process entity is then scheduled by a different user, scheduling can fail.
> The solution is to make <staging_dir>/falcon/workflows/feed and <staging_dir>/falcon/workflows/process owned by Falcon with permissions 777.
> 
> 
> Diffs
> -----
> 
>   common/src/main/java/org/apache/falcon/entity/parser/ClusterEntityParser.java b4f61d7 
>   common/src/test/java/org/apache/falcon/entity/parser/ClusterEntityParserTest.java cd61a8c 
> 
> Diff: https://reviews.apache.org/r/40894/diff/
> 
> 
> Testing
> -------
> 
> End-to-end testing done: submitted a cluster as ambari-qa, scheduled a feed as user ambari-qa and another feed as user root.
> 
> 
> Thanks,
> 
> Balu Vellanki
> 
>

