kylin-issues mailing list archives

From "Shaofeng SHI (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (KYLIN-3028) Build cube error when set S3 as working-dir
Date Thu, 01 Feb 2018 10:39:00 GMT

    [ https://issues.apache.org/jira/browse/KYLIN-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348360#comment-16348360 ]

Shaofeng SHI edited comment on KYLIN-3028 at 2/1/18 10:38 AM:
--------------------------------------------------------------

It is a bug in Kylin: if 'hdfs-working-dir' is configured to a non-default file system and 'hbase.cluster-fs' is not configured, this error occurs.


The behavior is now consistent: if 'hbase.cluster-fs' is not configured, the file system of 'hdfs-working-dir' is used for all intermediate data.
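
A minimal sketch of the intended fallback (class and method names here are hypothetical, not Kylin's actual KylinConfig API; only the behavior matches the description above):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class IntermediateFsResolver {

    // Resolve the file system for intermediate build data. When
    // 'hbase.cluster-fs' is empty, fall back to the file system of
    // 'hdfs-working-dir' (e.g. s3://mybucket/kylin) instead of silently
    // using the cluster's default FS (HDFS), which is what caused this bug.
    public static FileSystem resolve(String clusterFs,
                                     String hdfsWorkingDir,
                                     Configuration conf) throws IOException {
        String base = (clusterFs == null || clusterFs.trim().isEmpty())
                ? hdfsWorkingDir   // the fallback introduced by the fix
                : clusterFs;
        return new Path(base).getFileSystem(conf);
    }
}
{code}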


was (Author: shaofengshi):
It is a bug in Kylin: if 'hdfs-working-dir' is configured to a non-default file system and 'hbase.cluster-fs' is not configured, this error occurs.

> Build cube error when set S3 as working-dir
> -------------------------------------------
>
>                 Key: KYLIN-3028
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3028
>             Project: Kylin
>          Issue Type: Bug
>          Components: Job Engine
>    Affects Versions: v2.2.0
>         Environment: AWS EMR 5.7, Apache Kylin 2.2 for HBase 1.x
>            Reporter: Shaofeng SHI
>            Assignee: Shaofeng SHI
>            Priority: Minor
>             Fix For: v2.3.0
>
>
> 1. Start an AWS EMR cluster with HBase selected (data stored on S3).
> 2. Download and extract the apache-kylin-2.2 for HBase 1.x binary package on the EMR 5.7 master node. Copy the "hbase.zookeeper.quorum" property from /etc/hbase/conf/hbase-site.xml to $KYLIN_HOME/conf/kylin_job_conf.xml. In kylin.properties, set "kylin.env.hdfs-working-dir=s3://mybucket/kylin".
> 3. Build the sample cube; the job failed at the "Create HTable" step with the following error:
> {code}
> 2017-11-10 08:21:35,011 DEBUG [http-bio-7070-exec-2] cachesync.Broadcaster:290 : Done broadcasting UPDATE, cube, kylin_sales_cube
> 2017-11-10 08:21:35,013 ERROR [Scheduler 1778356018 Job 5a2893c9-3a76-458c-a03e-5cd97839fca5-393] common.HadoopShellExecutable:64 : error execute HadoopShellExecutable{id=5a2893c9-3a76-458c-a03e-5cd97839fca5-05, name=Create HTable, state=RUNNING}
> org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/kylin/kylin_metadata/kylin-5a2893c9-3a76-458c-a03e-5cd97839fca5/kylin_sales_cube/rowkey_stats":hdfs:hadoop:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
>         at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
>         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1929)
>         at org.apache.kylin.storage.hbase.steps.CreateHTableJob.saveHFileSplits(CreateHTableJob.java:282)
>         at org.apache.kylin.storage.hbase.steps.CreateHTableJob.getRegionSplitsFromCuboidStatistics(CreateHTableJob.java:239)
>         at org.apache.kylin.storage.hbase.steps.CreateHTableJob.run(CreateHTableJob.java:99)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/kylin/kylin_metadata/kylin-5a2893c9-3a76-458c-a03e-5cd97839fca5/kylin_sales_cube/rowkey_stats":hdfs:hadoop:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1475)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1412)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>         at com.sun.proxy.$Proxy76.mkdirs(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy77.mkdirs(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
>         ... 20 more
> In create htable step:
> org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/kylin/kylin_metadata/kylin-5a2893c9-3a76-458c-a03e-5cd97839fca5/kylin_sales_cube/rowkey_stats":hdfs:hadoop:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> {code}
> Checked that the "/kylin/kylin_metadata/kylin-5a2893c9-3a76-458c-a03e-5cd97839fca5/kylin_sales_cube/rowkey_stats" folder does not exist in S3. This appears to be a bug: in the second step, Kylin tried to write this data onto HDFS and the write failed (without failing the job); then in the "Create HTable" step, it could not read the file and reported the error above.
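
Given the fix described in the comment, a workaround on affected versions is to point the HBase cluster FS at the same S3 location as the working dir. A kylin.properties sketch (the bucket name is the placeholder from the steps above, and the exact cluster-fs property key should be verified against your Kylin version; treat this as an assumption, not confirmed configuration):

{code}
# Working directory for intermediate cube-build data
kylin.env.hdfs-working-dir=s3://mybucket/kylin
# File system for HBase-related output; keeping it on the same FS as the
# working dir prevents steps like "Create HTable" from falling back to HDFS
kylin.storage.hbase.cluster-fs=s3://mybucket
{code}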



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
