falcon-dev mailing list archives

From "karan kumar (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (FALCON-910) Better error messages when creating cluster's directories
Date Tue, 20 Jan 2015 08:35:35 GMT

     [ https://issues.apache.org/jira/browse/FALCON-910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

karan kumar updated FALCON-910:
-------------------------------
    Attachment: FALCON-910-rev1.patch

Attaching the initial patch. It covers the following (a rough sketch of the checks appears below the list):
1. Enforcing location name constraints
2. Checking that locations do not share the same path
3. Checking the permissions of each location based on its name
4. Generating the working location in a sub-directory of staging when no working directory is given by the user
5. Adding unit/IT test cases
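
A minimal sketch of what checks 1-4 could look like, assuming the Cluster/Location accessors, ValidationException, the HadoopClientFactory constants and the checkPathOwnerAndPermission() helper visible in the parser snippet quoted below; validateLocations and the Location setters are illustrative only, not the actual patch:
{code}
// Illustrative sketch only; relies on the Falcon Cluster/Location JAXB types,
// ValidationException, HadoopClientFactory and checkPathOwnerAndPermission()
// referenced in ClusterEntityParser.
private void validateLocations(Cluster cluster, FileSystem fs) throws ValidationException, IOException {
    Map<String, Location> seen = new HashMap<String, Location>();
    for (Location location : cluster.getLocations().getLocations()) {
        String name = location.getName();
        // 1. only the known location names are allowed
        if (!"staging".equals(name) && !"working".equals(name) && !"temp".equals(name)) {
            throw new ValidationException("Unknown location name: " + name);
        }
        // 2. two locations must not point at the same path
        for (Location other : seen.values()) {
            if (other.getPath().equals(location.getPath())) {
                throw new ValidationException("Locations '" + other.getName() + "' and '" + name
                        + "' cannot use the same path: " + location.getPath());
            }
        }
        seen.put(name, location);
        // 3. permissions are checked per name: staging needs ALL, working needs READ_EXECUTE
        if (!"temp".equals(name)) {
            checkPathOwnerAndPermission(cluster.getName(), location.getPath(), fs,
                    "staging".equals(name) ? HadoopClientFactory.ALL_PERMISSION
                                           : HadoopClientFactory.READ_EXECUTE_PERMISSION);
        }
    }
    // 4. default the working location to a sub-directory of staging when the user omitted it
    if (!seen.containsKey("working") && seen.containsKey("staging")) {
        Location working = new Location();
        working.setName("working");
        working.setPath(seen.get("staging").getPath() + "/working");
        cluster.getLocations().getLocations().add(working);
    }
}
{code}
Defaulting the working path under staging keeps the two locations distinct by construction, which avoids the conflicting permission requirements described in the issue.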

> Better error messages when creating cluster's directories
> ---------------------------------------------------------
>
>                 Key: FALCON-910
>                 URL: https://issues.apache.org/jira/browse/FALCON-910
>             Project: Falcon
>          Issue Type: Improvement
>          Components: client
>    Affects Versions: 0.7
>            Reporter: Adam Kawa
>            Assignee: karan kumar
>            Priority: Minor
>         Attachments: FALCON-910-rev1.patch
>
>
> I followed the example from http://hortonworks.com/blog/introduction-apache-falcon-hadoop, where all locations (i.e. staging, working, temp) of the cluster are set to the same directory.
> {code}
> <?xml version="1.0" encoding="UTF-8"?>
> <cluster colo="toronto" description="Primary Cluster"
> (...)
>     <locations>
>         <location name="staging" path="/tmp/falcon"/>
>         <location name="working" path="/tmp/falcon"/>
>         <location name="temp" path="/tmp/falcon"/>
>     </locations>
> </cluster>
> {code}
> When submitting such a cluster entity, I got:
> {code}
> bash-4.1$ ./bin/falcon entity -submit -type cluster -file cluster.xml
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Bad Request;Path /tmp/falcon has permissions: rwxr-xr-x, should be rwxrwxrwx
> 	at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
> 	at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
> 	at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
> 	at org.apache.falcon.client.FalconClient.submit(FalconClient.java:323)
> 	at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:361)
> 	at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
> 	at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
> bash-4.1$ ./bin/falcon entity -submit -type cluster -file cluster.xml
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Bad Request;Path /tmp/falcon has permissions: rwxrwxrwx, should be rwxr-xr-x
> 	at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
> 	at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
> 	at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
> 	at org.apache.falcon.client.FalconClient.submit(FalconClient.java:323)
> 	at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:361)
> 	at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
> 	at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
> {code}
> I can keep changing these permissions forever with the same effect :)
> According to https://github.com/apache/incubator-falcon/blob/master/common/src/main/java/org/apache/falcon/entity/parser/ClusterEntityParser.java
> {code}
> for (Location location : cluster.getLocations().getLocations()) {
>     final String locationName = location.getName();
>     if (locationName.equals("temp")) {
>         continue;
>     }
>     try {
>         checkPathOwnerAndPermission(cluster.getName(), location.getPath(), fs,
>                 "staging".equals(locationName)
>                         ? HadoopClientFactory.ALL_PERMISSION
>                         : HadoopClientFactory.READ_EXECUTE_PERMISSION);
>     } catch (IOException e) {
>         (...)
>     }
> }
> {code}
> This basically means:
> * the staging directory must have exactly ALL (rwxrwxrwx) permissions
> * the working directory must have exactly READ_EXECUTE (rwxr-xr-x) permissions
> If the staging and working directories are the same, we have a misconfiguration that is hard to detect from the current message.
> Therefore:
> * a better (less confusing) message could be printed
> * or, the code could be changed so that the working directory needs at least (not exactly) READ_EXECUTE permissions.
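> A minimal sketch of what such an "at least" check could look like, using Hadoop's FsPermission/FsAction; the helper name hasAtLeast is illustrative only, not the actual fix:
> {code}
> import org.apache.hadoop.fs.permission.FsAction;
> import org.apache.hadoop.fs.permission.FsPermission;
>
> // True when 'actual' grants everything 'required' grants (possibly more),
> // instead of demanding an exact match of the permission bits.
> static boolean hasAtLeast(FsPermission actual, FsPermission required) {
>     return actual.getUserAction().implies(required.getUserAction())
>             && actual.getGroupAction().implies(required.getGroupAction())
>             && actual.getOtherAction().implies(required.getOtherAction());
> }
> {code}
> With such a check, a working directory with rwxrwxrwx would satisfy a READ_EXECUTE requirement instead of being rejected.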



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
