falcon-dev mailing list archives
From "Venkatesh Seetharam (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (FALCON-673) feed schedule trying to create staging path as ACL owner
Date Mon, 08 Sep 2014 21:14:29 GMT

    [ https://issues.apache.org/jira/browse/FALCON-673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126099#comment-14126099 ]

Venkatesh Seetharam edited comment on FALCON-673 at 9/8/14 9:14 PM:
--------------------------------------------------------------------

I don't think that's the behavior. I coded this as part of FALCON-11: the staging and working
dirs MUST be configured on HDFS to be owned by the user running falcon. The generated workflow
definitions for oozie are serialized to the staging dir as falcon, but world-readable so that
oozie can read them.

{code}
// org.apache.falcon.entity.parser.ClusterEntityParser#checkPathOwner
// verifies that the path is owned by the user who started falcon.

// org.apache.falcon.oozie.OozieEntityBuilder#marshal
FileSystem fs = HadoopClientFactory.get().createFileSystem(
        outPath.toUri(), ClusterHelper.getConfiguration(cluster));
OutputStream out = fs.create(outPath);
// fs.create is called without an explicit FsPermission, so the
// default applies -- which means it's 755.
{code}
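A quick arithmetic check of why the default lands at world-readable: HDFS derives create-time permissions by applying the configured umask (fs.permissions.umask-mode, default 022) to a base of 0777 for directories and 0666 for files. A minimal sketch, assuming the default umask (the class name is made up):

```java
public class UmaskMath {
    public static void main(String[] args) {
        int umask = 0022; // Hadoop's default fs.permissions.umask-mode
        // HDFS applies the umask to a base of 0777 for dirs, 0666 for files.
        System.out.printf("dir:  %o%n", 0777 & ~umask); // 755: world can list/traverse
        System.out.printf("file: %o%n", 0666 & ~umask); // 644: world-readable workflow.xml
    }
}
```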

However, I think this needs to be thought through before working on a solution. I have had
brief discussions with [~sriksun].

The issue is that staging/working dirs being owned by falcon but world-readable is NOT acceptable
to some users for security reasons. Ideally, the dir for a particular entity in staging
should be owned by that entity's user, with falcon as the group.

We need to think more about a cleaner solution to this problem.
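The ownership scheme suggested above (per-entity staging dir owned by the submitting user, group falcon, mode 750) can be sketched against a local POSIX filesystem. This is an illustration of the idea, not Falcon code; on HDFS the equivalent calls would be fs.mkdirs(path, perm) plus fs.setOwner(path, aclOwner, "falcon"), and setOwner requires superuser rights. The class name and paths are hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class EntityStagingSketch {
    public static void main(String[] args) throws Exception {
        // Local-filesystem analogy of the proposed HDFS layout: a per-entity
        // staging dir with mode 750 (owner rwx, group r-x, world none)
        // instead of the current world-readable 755.
        Set<PosixFilePermission> mode750 = PosixFilePermissions.fromString("rwxr-x---");
        Path entityDir = Files.createTempDirectory("falcon-staging-");
        Files.setPosixFilePermissions(entityDir, mode750);
        // Read the permissions back to confirm world access is gone.
        System.out.println(PosixFilePermissions.toString(
                Files.getPosixFilePermissions(entityDir))); // prints "rwxr-x---"
        // On HDFS, the equivalent would be:
        //   fs.mkdirs(entityDir, new FsPermission((short) 0750));
        //   fs.setOwner(entityDir, aclOwner, "falcon");  // needs superuser
    }
}
```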



> feed schedule trying to create staging path as ACL owner
> --------------------------------------------------------
>
>                 Key: FALCON-673
>                 URL: https://issues.apache.org/jira/browse/FALCON-673
>             Project: Falcon
>          Issue Type: Bug
>          Components: feed
>            Reporter: Samarth Gupta
>            Assignee: Suhas Vasu
>            Priority: Blocker
>
> While scheduling a feed, falcon tries to create the workflow xml on HDFS as the user
mentioned in the feed's ACL tag. However, the ACL owner does not necessarily have write
permission on the cluster's workflow location; in such a case the feed schedule fails.
Logs below:
> {code}
> 2014-09-04 06:20:06,901 DEBUG - [864244733@qtp-1633673452-2:samarth.gupta:POST//entities/schedule/feed/raaw-logs16-55cc9994 1dd9ea6e-822d-4a99-8225-79cbaeb7acd0] ~ Writing definition to /projects/ivory/staging/falcon/workflows/feed/raaw-logs16-55cc9994/92fc5fd4476e4a6977dedf9f1f3a632d_1409811606326/RETENTION/workflow.xml on cluster corp-91e54ac3 (OozieEntityBuilder:139)
> {code}
> {code}
> 2014-09-04 06:20:06,975 ERROR - [864244733@qtp-1633673452-2:samarth.gupta:POST//entities/schedule/feed/raaw-logs16-55cc9994 1dd9ea6e-822d-4a99-8225-79cbaeb7acd0] ~ Action failed: Bad Request
> Error: org.apache.falcon.FalconException: Unable to marshall app object
>         at org.apache.falcon.oozie.OozieEntityBuilder.marshal(OozieEntityBuilder.java:155)
>         at org.apache.falcon.oozie.OozieOrchestrationWorkflowBuilder.marshal(OozieOrchestrationWorkflowBuilder.java:184)
>         at org.apache.falcon.oozie.feed.FeedRetentionWorkflowBuilder.build(FeedRetentionWorkflowBuilder.java:70)
>         at org.apache.falcon.oozie.feed.FeedRetentionCoordinatorBuilder.buildCoords(FeedRetentionCoordinatorBuilder.java:98)
>         at org.apache.falcon.oozie.feed.FeedBundleBuilder.buildCoords(FeedBundleBuilder.java:43)
>         at org.apache.falcon.oozie.OozieBundleBuilder.build(OozieBundleBuilder.java:71)
>         at org.apache.falcon.workflow.engine.OozieWorkflowEngine.schedule(OozieWorkflowEngine.java:150)
>         at org.apache.falcon.resource.AbstractSchedulableEntityManager.scheduleInternal(AbstractSchedulableEntityManager.java:69)
>         at org.apache.falcon.resource.AbstractSchedulableEntityManager.schedule(AbstractSchedulableEntityManager.java:58)
>         at org.apache.falcon.resource.SchedulableEntityManager.schedule(SchedulableEntityManager.java:85)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:622)
>         at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>         at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>         at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>         at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>         at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>         at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>         at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>         at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>         at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>         at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>         at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>         at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>         at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>         at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
>         at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>         at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>         at org.apache.falcon.security.BasicAuthFilter$2.doFilter(BasicAuthFilter.java:184)
>         at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
>         at org.apache.falcon.security.BasicAuthFilter.doFilter(BasicAuthFilter.java:222)
>         at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>         at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>         at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>         at org.mortbay.jetty.Server.handle(Server.java:326)
>         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>         at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
>         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
>         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
>         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>         at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=dataqa, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1839)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1771)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1747)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:418)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:207)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44942)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
