drill-user mailing list archives

From Jacques Nadeau <jacq...@apache.org>
Subject Re: How connect Drill with HDFS
Date Fri, 06 Dec 2013 15:21:44 GMT
The error isn't DrillClient > Drillbit, it's Drillbit > HDFS.  It looks
like you're trying to connect to an HDFS cluster that is incompatible with
the HDFS client that comes prepackaged with Drill.  I believe the current
Drill package bundles a Hadoop 1.x client.  If you're running something
like Hadoop 2, you can try to swap out the Hadoop jars in the Drill lib
directory and see what happens.  Since the first milestone of Drill came
out before the GA release of Hadoop 2 (2.2, I believe), we didn't include
those jars.  Additionally, it would be good if you filed a JIRA so that
Drill can add a Hadoop 2 build profile.  For future reference, what
version of HDFS are you running?
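
A minimal sketch of that jar swap, assuming a tarball install where Drill's
bundled Hadoop client sits under lib/ and a Hadoop 2 distribution matching
your cluster is unpacked at /opt/hadoop-2 (the paths and exact jar names
here are illustrative, not taken from this thread):

    # See which Hadoop client jars Drill will actually put on its classpath.
    ls lib/ | grep -i hadoop

    # Move the bundled Hadoop 1.x client jar(s) out of the way.
    mkdir -p lib/hadoop1-backup
    mv lib/hadoop-core-*.jar lib/hadoop1-backup/

    # Copy in the client jars from the Hadoop 2 distribution that matches
    # your cluster.  hadoop-common and hadoop-hdfs are the main ones; their
    # dependencies (hadoop-auth, protobuf, etc.) may also need to come along.
    cp /opt/hadoop-2/share/hadoop/common/hadoop-common-*.jar lib/
    cp /opt/hadoop-2/share/hadoop/hdfs/hadoop-hdfs-*.jar lib/

    # Restart the Drillbit so it picks up the new jars, then retry sqlline.

Whether that works depends on how far apart the client and server versions
are, which is why a proper Hadoop 2 build profile is the better long-term
fix.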

Jacques


On Fri, Dec 6, 2013 at 7:10 AM, Timothy Chen <tnachen@gmail.com> wrote:

> Have you tried to run without your changes?
>
> It seems like it can't even connect to the drillbit in the first place.
>
> Tim
>
> Sent from my iPhone
>
> > On Dec 6, 2013, at 1:38 AM, Rajika Kumarasiri <
> rajika.kumarasiri@gmail.com> wrote:
> >
> > According to the log, this is a client compatibility issue.
> >
> > Rajika
> >
> >
> > On Fri, Dec 6, 2013 at 4:32 AM, Michael Hausenblas <
> > michael.hausenblas@gmail.com> wrote:
> >
> >>
> >> Thank you, Guo Ying. I must admit that I’ve not seen this one before but
> >> I’d expect that Jason would have an idea … let’s see when the West coast
> >> of the US and A wakes up ;)
> >>
> >> Cheers,
> >>                Michael
> >>
> >> --
> >> Michael Hausenblas
> >> Ireland, Europe
> >> http://mhausenblas.info/
> >>
> >>> On 6 Dec 2013, at 09:27, Guo, Ying Y <ying.y.guo@intel.com> wrote:
> >>>
> >>> Hi Michael,
> >>>      Thanks for your reply!
> >>> The errors are:
> >>> |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@33:27 - no applicable action for [level], current ElementPath is [[configuration][appender][level]]
> >>>
> >>> Error: Failure trying to connect to Drill. (state=,code=0)
> >>> java.sql.SQLException: Failure trying to connect to Drill.
> >>>       at org.apache.drill.jdbc.DrillHandler.onConnectionInit(DrillHandler.java:131)
> >>>       at net.hydromatic.optiq.jdbc.UnregisteredDriver.connect(UnregisteredDriver.java:127)
> >>>       at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4802)
> >>>       at sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4853)
> >>>       at sqlline.SqlLine$Commands.connect(SqlLine.java:4094)
> >>>       at sqlline.SqlLine$Commands.connect(SqlLine.java:4003)
> >>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>       at java.lang.reflect.Method.invoke(Method.java:606)
> >>>       at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2964)
> >>>       at sqlline.SqlLine.dispatch(SqlLine.java:878)
> >>>       at sqlline.SqlLine.initArgs(SqlLine.java:652)
> >>>       at sqlline.SqlLine.begin(SqlLine.java:699)
> >>>       at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:460)
> >>>       at sqlline.SqlLine.main(SqlLine.java:443)
> >>> Caused by: org.apache.drill.exec.exception.SetupException: Failure setting up new storage engine configuration for config org.apache.drill.exec.store.parquet.ParquetStorageEngineConfig@617e8cc0
> >>>       at org.apache.drill.exec.store.SchemaProviderRegistry.getSchemaProvider(SchemaProviderRegistry.java:76)
> >>>       at org.apache.drill.jdbc.DrillHandler.onConnectionInit(DrillHandler.java:116)
> >>>       ... 15 more
> >>> Caused by: java.lang.RuntimeException: Error setting up filesystem
> >>>       at org.apache.drill.exec.store.parquet.ParquetSchemaProvider.<init>(ParquetSchemaProvider.java:49)
> >>>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >>>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> >>>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>>       at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >>>       at org.apache.drill.exec.store.SchemaProviderRegistry.getSchemaProvider(SchemaProviderRegistry.java:72)
> >>>       ... 16 more
> >>> Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
> >>>       at org.apache.hadoop.ipc.Client.call(Client.java:1113)
> >>>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >>>       at com.sun.proxy.$Proxy18.getProtocolVersion(Unknown Source)
> >>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>       at java.lang.reflect.Method.invoke(Method.java:606)
> >>>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >>>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >>>       at com.sun.proxy.$Proxy18.getProtocolVersion(Unknown Source)
> >>>       at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
> >>>       at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
> >>>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
> >>>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
> >>>       at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
> >>>       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
> >>>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
> >>>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
> >>>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
> >>>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:124)
> >>>       at org.apache.drill.exec.store.parquet.ParquetSchemaProvider.<init>(ParquetSchemaProvider.java:47)
> >>>       ... 21 more
> >>>
> >>> B.R.
> >>> Guo Ying
> >>>
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: Michael Hausenblas [mailto:michael.hausenblas@gmail.com]
> >>> Sent: Friday, December 06, 2013 5:16 PM
> >>> To: Apache Drill User
> >>> Subject: Re: How connect Drill with HDFS
> >>>
> >>>
> >>>> But when we run “./sqlline -u jdbc:drill:schema=parquet -n admin -p
> >>>> admin” there are some ERRORs and Failure trying to connect to Drill.
> >>>
> >>> In order to help you, it would certainly help if you share these errors,
> >>> either here via copy and paste or put it on pastebin/gist and link to it.
> >>>
> >>> Cheers,
> >>>              Michael
> >>>
> >>> --
> >>> Michael Hausenblas
> >>> Ireland, Europe
> >>> http://mhausenblas.info/
> >>>
> >>>> On 6 Dec 2013, at 09:09, Guo, Ying Y <ying.y.guo@intel.com> wrote:
> >>>>
> >>>> Hi all,
> >>>>      We have modified ./sqlparser/target/classes/storage-engines.json:
> >>>> "parquet" :
> >>>>    {
> >>>>      "type":"parquet",
> >>>>      "dfsName" : "hdfs://localhost:9000"
> >>>>    }
> >>>> We have also recompiled and generated a new
> >>>> drill-sqlparser-1.0.0-m2-incubating-SNAPSHOT.jar.
> >>>> But when we run “./sqlline -u jdbc:drill:schema=parquet -n admin -p admin”
> >>>> there are some ERRORs and a “Failure trying to connect to Drill”.
> >>>> I don't know why.  Do you know what else we need to do?
> >>>>
> >>>> .
> >>
> >>
>
