drill-issues mailing list archives

From "Hema Kumar S (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (DRILL-1075) can not create hdfs as connection type in storage engine : server throws http 500 error
Date Fri, 20 Feb 2015 17:20:12 GMT

    [ https://issues.apache.org/jira/browse/DRILL-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14326276#comment-14326276 ]

Hema Kumar S edited comment on DRILL-1075 at 2/20/15 5:19 PM:
--------------------------------------------------------------

[~amitskatti]
I'm using CDH 4.5 and Drill 0.7.0, and I'm trying to store a file in HDFS (using CREATE TABLE).

If I add the CDH jars to the Drill classpath, I get the error below:
java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
{quote}
 at com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180) ~[protobuf-java-2.5.0.jar:na]
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetFileInfoRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:30108)
~[hadoop-hdfs-2.0.0-cdh4.5.0.jar:na]
        at com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
~[protobuf-java-2.5.0.jar:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
~[hadoop-common-2.0.0-cdh4.5.0.jar:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
~[hadoop-common-2.0.0-cdh4.5.0.jar:na]
        at com.sun.proxy.$Proxy33.getFileInfo(Unknown Source) ~[na:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_51]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_51]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_51]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_51]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
~[hadoop-common-2.0.0-cdh4.5.0.jar:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
~[hadoop-common-2.0.0-cdh4.5.0.jar:na]
        at com.sun.proxy.$Proxy33.getFileInfo(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:629)
~[hadoop-hdfs-2.0.0-cdh4.5.0.jar:na]
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1545) ~[hadoop-hdfs-2.0.0-cdh4.5.0.jar:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:820)
~[hadoop-hdfs-2.0.0-cdh4.5.0.jar:na]
{quote}

If I don't add the CDH jars to the Drill classpath, it gives this error instead:

ERROR o.a.d.e.s.text.DrillTextRecordWriter - Unable to create file: /tmp/table/1_13_0.csv
{quote}
java.io.IOException: Failed on local exception: java.nio.channels.ClosedByInterruptException;
Host Details : local host is: "***********"; destination host is: "**********":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764) ~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1414) ~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1363) ~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
~[hadoop-common-2.4.1.jar:na]
        at com.sun.proxy.$Proxy38.create(Unknown Source) ~[na:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_51]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_51]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_51]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_51]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
~[hadoop-common-2.4.1.jar:na]
        at com.sun.proxy.$Proxy38.create(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:258)
~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1600)
~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1465) ~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1390) ~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:394)
~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:390)
~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:390)
~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:334)
~[hadoop-hdfs-2.4.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906) ~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887) ~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784) ~[hadoop-common-2.4.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:773) ~[hadoop-common-2.4.1.jar:na]
        at org.apache.drill.exec.store.text.DrillTextRecordWriter.startNewSchema(DrillTextRecordWriter.java:81)
~[drill-java-exec-0.7.0-rebuffed.jar:0.7.0]
        at org.apache.drill.exec.store.StringOutputRecordWriter.updateSchema(StringOutputRecordWriter.java:57)
[drill-java-exec-0.7.0-rebuffed.jar:0.7.0]
        at org.apache.drill.exec.physical.impl.WriterRecordBatch.setupNewSchema(WriterRecordBatch.java:162)
[drill-java-exec-0.7.0-rebuffed.jar:0.7.0]
        at org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext(WriterRecordBatch.java:113)
[drill-java-exec-0.7.0-rebuffed.jar:0.7.0]
{quote}
Does Drill 0.7.0 support HDFS storage on CDH 4.5?
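A note on the classpath question above: the first stack trace ("This is supposed to be overridden by subclasses") is the classic symptom of protobuf generated classes from one protobuf release running against a different protobuf-java runtime (here, CDH 4.5's Hadoop classes against protobuf-java-2.5.0). One commonly suggested arrangement is to stage the cluster's own Hadoop client jars in Drill's third-party jar directory rather than mixing them onto the classpath alongside the bundled Apache Hadoop 2.4.1 jars. This is a sketch, not a verified fix: the CDH parcel path, $DRILL_HOME default, and jars/3rdparty layout are assumptions about a typical install.

```shell
# Sketch only: print the copy command instead of running it.
# DRILL_HOME and the CDH parcel path are assumptions, not from the report.
DRILL_HOME="${DRILL_HOME:-/opt/drill}"
CDH_LIB="/opt/cloudera/parcels/CDH/lib"

# Drill loads extra jars from jars/3rdparty; staging the cluster's own
# hadoop-common/hadoop-hdfs client jars there keeps the client versions
# consistent with the 2.0.0-cdh4.5.0 NameNode instead of the bundled 2.4.1.
echo "cp $CDH_LIB/hadoop/client/*.jar $DRILL_HOME/jars/3rdparty/"
```

If the protobuf conflict persists, the CDH protobuf jar may need to accompany the client jars — again an assumption to test, not a confirmed recipe.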




> can not create hdfs as connection type in storage engine : server throws http 500 error
> ---------------------------------------------------------------------------------------
>
>                 Key: DRILL-1075
>                 URL: https://issues.apache.org/jira/browse/DRILL-1075
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Vivian Summers
>            Assignee: Jacques Nadeau
>            Priority: Critical
>             Fix For: 0.4.0
>
>
> Server at 8047 throws:
> HTTP ERROR 500
> Problem accessing /storage/config/update. Reason:
>     Request failed.
> Configuration file:
> {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "hdfs:///",
>   "workspaces" : {
>     "root" : {
>       "location" : "/",
>       "writable" : false,
>       "storageformat" : null
>     },
>     "default" : {
>       "location" : "/user/root",
>       "writable" : true,
>       "storageformat" : null
>     },
>     "tmp" : {
>       "location" : "/tmp",
>       "writable" : true,
>       "storageformat" : "csv"
>     }
>   },
>   "formats" : {
>     "psv" : {
>       "type" : "text",
>       "extensions" : [ "tbl" ],
>       "delimiter" : "|"
>     },
>     "csv" : {
>       "type" : "text",
>       "extensions" : [ "csv" ],
>       "delimiter" : ","
>     },
>     "tsv" : {
>       "type" : "text",
>       "extensions" : [ "tsv" ],
>       "delimiter" : "\t"
>     },
>     "parquet" : {
>       "type" : "parquet"
>     },
>     "json" : {
>       "type" : "json"
>     }
>   }
> }
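For comparison with the quoted config, a commonly suggested variant is to give the `connection` URI an explicit NameNode authority instead of the bare `hdfs:///`, so the plugin does not depend on Drill picking up the cluster's core-site.xml. A minimal sketch — the hostname and port below are placeholders, not values from this report:

```
{
  "type" : "file",
  "enabled" : true,
  "connection" : "hdfs://namenode.example.com:8020/",
  "workspaces" : {
    "root" : { "location" : "/", "writable" : false, "storageformat" : null }
  },
  "formats" : {
    "csv" : { "type" : "text", "extensions" : [ "csv" ], "delimiter" : "," }
  }
}
```

Whether this avoids the HTTP 500 on /storage/config/update depends on the underlying cause; an unresolvable or schemeless connection URI is only one frequent trigger.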



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
