drill-user mailing list archives

From Rajika Kumarasiri <rajika.kumaras...@gmail.com>
Subject Re: quick start guide
Date Mon, 16 Dec 2013 14:48:11 GMT
Thank you for your reply. It was caused by the env variable HADOOP_HOME
pointing to /home/rajika/project/accumulo/hadoop-1.0.4. I removed it and
that error is gone. I am still getting the
/home/rajika/.sqlline/sqlline.properties line (though I hope it will not
affect the functionality) and the same exception when trying to execute a
query.
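
For reference, this is roughly the check and fix, as a minimal sketch (the
sqlline.properties step is only a guess at silencing that message; the
sqlline command is the same one shown below):

  echo "$HADOOP_HOME"    # pointed at the old hadoop-1.0.4 tree whose lib/*.jar no longer resolves
  unset HADOOP_HOME      # clear the stale value for this shell
  touch ~/.sqlline/sqlline.properties   # optional; may quiet the "No such file or directory" line
  ./sqlline -u jdbc:drill:schema=parquet-local -n admin -p admin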

loaded singnal handler: SunSignalHandler
/home/rajika/.sqlline/sqlline.properties (No such file or directory)
scan complete in 38ms
scan complete in 2709ms
Connecting to jdbc:drill:schema=parquet-local
Connected to: Drill (version 1.0)
Driver: Apache Drill JDBC Driver (version 1.0)
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
sqlline version ??? by Marc Prud'hommeaux
0: jdbc:drill:schema=parquet-local>



On Mon, Dec 16, 2013 at 1:01 AM, Jacques Nadeau <jacques@apache.org> wrote:

> The line at the top:
>
> ls: cannot access /home/rajika/project/accumulo/hadoop-1.0.4/lib/*jar: No
> such file or directory
>
> Seems a bit strange.  What are your env variables?
>
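> If it helps, something like this (just a sketch) would show the ones the
> scripts are most likely to read:
>
>   echo "$HADOOP_HOME"                   # the path the ls error above complains about
>   env | grep -i -E 'hadoop|drill|java'  # other Hadoop/Drill/Java-related settings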
>
> On Sat, Dec 14, 2013 at 5:18 PM, Rajika Kumarasiri <
> rajika.kumarasiri@gmail.com> wrote:
>
> > BTW, there is a typo in the following log line.
> >
> > Loaded singnal handler: SunSignalHandler
> >
> >
> > Rajika
> >
> >
> > On Sat, Dec 14, 2013 at 8:10 PM, Rajika Kumarasiri <
> > rajika.kumarasiri@gmail.com> wrote:
> >
> > > The log file location can be configured using the environment variable
> > > DRILL_LOG_DIR; a rough example of what I mean is sketched just below,
> > > followed by the new output.
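> > >
> > > A minimal sketch, assuming the sqlline wrapper honours DRILL_LOG_DIR (the
> > > ~/drill-logs path is just an example; the directory must exist and be
> > > writable):
> > >
> > >   mkdir -p ~/drill-logs
> > >   export DRILL_LOG_DIR=~/drill-logs
> > >   ./sqlline -u jdbc:drill:schema=parquet-local -n admin -p admin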
> > >
> > > [rajika@localhost bin]$ ./sqlline -u jdbc:drill:schema=parquet-local
> -n
> > > admin -p admin
> > > ls: cannot access /home/rajika/project/accumulo/hadoop-1.0.4/lib/*jar:
> No
> > > such file or directory
> > >
> > > Loaded singnal handler: SunSignalHandler
> > > /home/rajika/.sqlline/sqlline.properties (No such file or directory)
> > > scan complete in 32ms
> > > scan complete in 2701ms
> > > Connecting to jdbc:drill:schema=parquet-local
> > > Connected to: Drill (version 1.0)
> > > Driver: Apache Drill JDBC Driver (version 1.0)
> > > Autocommit status: true
> > > Transaction isolation: TRANSACTION_REPEATABLE_READ
> > > sqlline version ??? by Marc Prud'hommeaux
> > >
> > >
> > >
> > > And query execution gives:
> > >
> > > 0: jdbc:drill:schema=parquet-local> select * from
> > > "sample-data/region.parquet";
> > > Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure
> > while
> > > running query.[error_id: "e91fcd81-50af-4904-b3a3-eef3c45a13c6"
> > > endpoint {
> > >   address: "localhost"
> > >   user_port: 31010
> > >   bit_port: 31011
> > > }
> > > error_type: 0
> > > message: "Failure while converting logical plan to physical plan. <
> > > OptimizerException:[ Failure while attempting to retrieve storage
> > engine. ]
> > > < FileNotFoundException:[ File sample-data/region.parquet does not
> > exist. ]"
> > > ]
> > > java.lang.RuntimeException: org.apache.drill.exec.rpc.RpcException:
> > Remote
> > > failure while running query.[error_id:
> > > "e91fcd81-50af-4904-b3a3-eef3c45a13c6"
> > > endpoint {
> > >   address: "localhost"
> > >   user_port: 31010
> > >   bit_port: 31011
> > > }
> > > error_type: 0
> > > message: "Failure while converting logical plan to physical plan. <
> > > OptimizerException:[ Failure while attempting to retrieve storage
> > engine. ]
> > > < FileNotFoundException:[ File sample-data/region.parquet does not
> > exist. ]"
> > > ]
> > > at
> > >
> >
> org.apache.drill.sql.client.full.ResultEnumerator.moveNext(ResultEnumerator.java:61)
> > > at
> > >
> >
> net.hydromatic.optiq.runtime.ObjectEnumeratorCursor.next(ObjectEnumeratorCursor.java:44)
> > >  at
> > net.hydromatic.optiq.jdbc.OptiqResultSet.next(OptiqResultSet.java:162)
> > > at sqlline.SqlLine$BufferedRows.<init>(SqlLine.java:2499)
> > >  at sqlline.SqlLine.print(SqlLine.java:1886)
> > > at sqlline.SqlLine$Commands.execute(SqlLine.java:3835)
> > > at sqlline.SqlLine$Commands.sql(SqlLine.java:3738)
> > >  at sqlline.SqlLine.dispatch(SqlLine.java:882)
> > > at sqlline.SqlLine.begin(SqlLine.java:717)
> > > at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:460)
> > >  at sqlline.SqlLine.main(SqlLine.java:443)
> > > Caused by: org.apache.drill.exec.rpc.RpcException: Remote failure while
> > > running query.[error_id: "e91fcd81-50af-4904-b3a3-eef3c45a13c6"
> > > endpoint {
> > >   address: "localhost"
> > >   user_port: 31010
> > >   bit_port: 31011
> > > }
> > > error_type: 0
> > > message: "Failure while converting logical plan to physical plan. <
> > > OptimizerException:[ Failure while attempting to retrieve storage
> > engine. ]
> > > < FileNotFoundException:[ File sample-data/region.parquet does not
> > exist. ]"
> > > ]
> > > at
> > >
> >
> org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:72)
> > > at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:79)
> > >  at
> > >
> >
> org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:48)
> > > at
> > >
> >
> org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:33)
> > >  at
> > > org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:142)
> > > at
> > org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:127)
> > >  at
> > >
> >
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
> > > at
> > >
> >
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
> > >  at
> > >
> >
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
> > > at
> > >
> >
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> > >  at
> > >
> >
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
> > > at
> > >
> >
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
> > >  at
> > >
> >
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:173)
> > > at
> > >
> >
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
> > >  at
> > >
> >
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
> > > at
> > >
> >
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
> > >  at
> > >
> >
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:100)
> > > at
> > >
> >
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:497)
> > >  at
> > >
> >
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:465)
> > > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:359)
> > >  at
> > >
> >
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
> > > at java.lang.Thread.run(Thread.java:724)
> > > 0: jdbc:drill:schema=parquet-local>
> > >
> > >
> > > On Thu, Dec 12, 2013 at 5:42 AM, Rajika Kumarasiri <
> > > rajika.kumarasiri@gmail.com> wrote:
> > >
> > >> Thank you for your reply.
> > >>
> > >> Yes, I am using the binary inside distribution/target,
> > >> apache-drill-1.0.0-m2-incubating-SNAPSHOT-binary-release.tar.gz.
> > >>
> > >> It does have the file region.parquet inside the sample folder.
> > >>
> > >> Rajika
> > >>
> > >>
> > >> On Thu, Dec 12, 2013 at 12:25 AM, Jacques Nadeau <jacques@apache.org
> > >wrote:
> > >>
> > >>> It looks like you are having an issue where your distribution folder
> > >>> isn't set up right.  Did you go into the distribution/target folder and
> > >>> use that for your experimentation?  It looks like you're missing the
> > >>> parquet sample data files (or they are in the wrong location).
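> > >>>
> > >>> Roughly something like this (a sketch only; it assumes the unpacked
> > >>> directory layout shown in your logback path, and that the relative path
> > >>> in the query resolves from the directory you start sqlline in):
> > >>>
> > >>>   cd distribution/target/apache-drill-1.0.0-m2-incubating-SNAPSHOT
> > >>>   ls sample-data/region.parquet    # confirm the sample data is really here
> > >>>   bin/sqlline -u jdbc:drill:schema=parquet-local -n admin -p admin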
> > >>>
> > >>> The logging errors shouldn't impact function.  The important line is:
> > >>> File sample-data/region.parquet does not exist.
> > >>>
> > >>> There is a way to change the location of the log file.  I think it is in
> > >>> one of the files in the conf directory.  I'm thinking logback.xml, but
> > >>> I'm not 100% sure.  Maybe Steven can remind us.
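> > >>>
> > >>> If it is logback.xml, the relevant appender probably looks roughly like
> > >>> this (a sketch from memory, not the exact shipped file; the
> > >>> /home/rajika/drill-logs paths are only an example of a writable location):
> > >>>
> > >>>   <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
> > >>>     <file>/home/rajika/drill-logs/sqlline.log</file>
> > >>>     <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
> > >>>       <fileNamePattern>/home/rajika/drill-logs/sqlline.%i.log</fileNamePattern>
> > >>>       <minIndex>1</minIndex>
> > >>>       <maxIndex>10</maxIndex>
> > >>>     </rollingPolicy>
> > >>>     <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
> > >>>       <maxFileSize>100MB</maxFileSize>
> > >>>     </triggeringPolicy>
> > >>>     <encoder>
> > >>>       <pattern>%date %level [%thread] %logger - %msg%n</pattern>
> > >>>     </encoder>
> > >>>   </appender>
> > >>>
> > >>> Editing the <file> and <fileNamePattern> entries there should move the
> > >>> log away from /var/log/drill.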
> > >>>
> > >>> Jacques
> > >>>
> > >>>
> > >>> On Wed, Dec 11, 2013 at 3:10 PM, Rajika Kumarasiri <
> > >>> rajika.kumarasiri@gmail.com> wrote:
> > >>>
> > >>> > Any pointers to solve the issue?
> > >>> >
> > >>> > Thank you.
> > >>> >
> > >>> > Rajika
> > >>> >
> > >>> >
> > >>> > On Wed, Dec 11, 2013 at 2:03 AM, Rajika Kumarasiri <
> > >>> > rajika.kumarasiri@gmail.com> wrote:
> > >>> >
> > >>> > > 1. Yeah, I tried:
> > >>> > >
> > >>> > > 0: jdbc:drill:schema=parquet-local> select * from
> > >>> > > "sample-data/region.parquet";
> > >>> > > Query failed: org.apache.drill.exec.rpc.RpcException: Remote
> > failure
> > >>> > while
> > >>> > > running query.[error_id: "8ad8adcf-d926-49f5-9a6e-9089ac155f77"
> > >>> > > endpoint {
> > >>> > >   address: "localhost"
> > >>> > >   user_port: 31010
> > >>> > >   bit_port: 31011
> > >>> > > }
> > >>> > > error_type: 0
> > >>> > > message: "Failure while converting logical plan to physical
> plan. <
> > >>> > > OptimizerException:[ Failure while attempting to retrieve
storage
> > >>> > engine. ]
> > >>> > > < FileNotFoundException:[ File sample-data/region.parquet
does
> not
> > >>> > exist. ]"
> > >>> > > ]
> > >>> > > java.lang.RuntimeException:
> org.apache.drill.exec.rpc.RpcException:
> > >>> > Remote
> > >>> > > failure while running query.[error_id:
> > >>> > > "8ad8adcf-d926-49f5-9a6e-9089ac155f77"
> > >>> > > endpoint {
> > >>> > >   address: "localhost"
> > >>> > >   user_port: 31010
> > >>> > >   bit_port: 31011
> > >>> > > }
> > >>> > > error_type: 0
> > >>> > > message: "Failure while converting logical plan to physical
> plan. <
> > >>> > > OptimizerException:[ Failure while attempting to retrieve
storage
> > >>> > engine. ]
> > >>> > > < FileNotFoundException:[ File sample-data/region.parquet
does
> not
> > >>> > exist. ]"
> > >>> > > ]
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> org.apache.drill.sql.client.full.ResultEnumerator.moveNext(ResultEnumerator.java:61)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> net.hydromatic.optiq.runtime.ObjectEnumeratorCursor.next(ObjectEnumeratorCursor.java:44)
> > >>> > >  at
> > >>> >
> > net.hydromatic.optiq.jdbc.OptiqResultSet.next(OptiqResultSet.java:162)
> > >>> > > at sqlline.SqlLine$BufferedRows.<init>(SqlLine.java:2499)
> > >>> > >  at sqlline.SqlLine.print(SqlLine.java:1886)
> > >>> > > at sqlline.SqlLine$Commands.execute(SqlLine.java:3835)
> > >>> > > at sqlline.SqlLine$Commands.sql(SqlLine.java:3738)
> > >>> > >  at sqlline.SqlLine.dispatch(SqlLine.java:882)
> > >>> > > at sqlline.SqlLine.begin(SqlLine.java:717)
> > >>> > > at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:460)
> > >>> > >  at sqlline.SqlLine.main(SqlLine.java:443)
> > >>> > > Caused by: org.apache.drill.exec.rpc.RpcException: Remote
failure
> > >>> while
> > >>> > > running query.[error_id: "8ad8adcf-d926-49f5-9a6e-9089ac155f77"
> > >>> > > endpoint {
> > >>> > >   address: "localhost"
> > >>> > >   user_port: 31010
> > >>> > >   bit_port: 31011
> > >>> > > }
> > >>> > > error_type: 0
> > >>> > > message: "Failure while converting logical plan to physical
> plan. <
> > >>> > > OptimizerException:[ Failure while attempting to retrieve
storage
> > >>> > engine. ]
> > >>> > > < FileNotFoundException:[ File sample-data/region.parquet
does
> not
> > >>> > exist. ]"
> > >>> > > ]
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:72)
> > >>> > > at
> > >>> org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:79)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:48)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:33)
> > >>> > >  at
> > >>> > >
> > >>>
> org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:142)
> > >>> > > at
> > >>> >
> > org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:127)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:173)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:100)
> > >>> > > at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:497)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:465)
> > >>> > > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:359)
> > >>> > >  at
> > >>> > >
> > >>> >
> > >>>
> >
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
> > >>> > > at java.lang.Thread.run(Thread.java:724)
> > >>> > > 0: jdbc:drill:schema=parquet-local>
> > >>> > >
> > >>> > >
> > >>> > > 2. Where can I configure that log path instead of the default
> > >>> > > /var/log/drill?
> > >>> > >
> > >>> > >
> > >>> > > There are more errors:
> > >>> > >
> > >>> > > ls: cannot access
> > >>> /home/rajika/project/accumulo/hadoop-1.0.4/lib/*jar: No
> > >>> > > such file or directory
> > >>> > >
> > >>> > > Rajika
> > >>> > >
> > >>> > >
> > >>> > >
> > >>> > > On Wed, Dec 11, 2013 at 1:53 AM, Harri Kinnunen <
> > >>> > > Harri.Kinnunen@hitsaamo.fi> wrote:
> > >>> > >
> > >>> > >> No access to drill right now... But:
> > >>> > >>
> > >>> > >> 1) Have you tried to run a query at the prompt anyway? (select *
> > >>> > >> from "sample-data/region.parquet"; for example)
> > >>> > >> 2) It says there:
> > >>> > >> java.io.FileNotFoundException: /var/log/drill/sqlline.log (No such
> > >>> > >> file or directory)
> > >>> > >> --> Have you checked the file/directory permissions? A rough sketch
> > >>> > >> follows.
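> > >>> > >>
> > >>> > >> For the permissions, something like this might be enough (a rough
> > >>> > >> sketch; it needs root and keeps the default /var/log/drill location):
> > >>> > >>
> > >>> > >>   sudo mkdir -p /var/log/drill
> > >>> > >>   sudo chown "$USER" /var/log/drill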
> > >>> > >>
> > >>> > >> Cheers,
> > >>> > >> -Harri
> > >>> > >>
> > >>> > >>
> > >>> > >> -----Original Message-----
> > >>> > >> From: Rajika Kumarasiri [mailto:rajika.kumarasiri@gmail.com]
> > >>> > >> Sent: 11 December 2013 8:42
> > >>> > >> To: drill-user@incubator.apache.org
> > >>> > >> Subject: quick start guide
> > >>> > >>
> > >>> > >> I built the binary distribution from the source. When I tried the
> > >>> > >> guide at
> > >>> > >> https://cwiki.apache.org/confluence/display/DRILL/Running+Queries
> > >>> > >> it gave me the errors below. It seems I didn't complete some
> > >>> > >> prerequisites. Where can I read about them?
> > >>> > >>
> > >>> > >> Rajika
> > >>> > >>
> > >>> > >>
> > >>> > >> [rajika@localhost bin]$ ./sqlline -u
> > >>> jdbc:drill:schema=parquet-local -n
> > >>> > >> admin -p admin
> > >>> > >> ls: cannot access
> > >>> /home/rajika/project/accumulo/hadoop-1.0.4/lib/*jar:
> > >>> > No
> > >>> > >> such file or directory
> > >>> > >>
> > >>> > >> Loaded singnal handler: SunSignalHandler
> > >>> > >> /home/rajika/.sqlline/sqlline.properties (No such file
or
> > directory)
> > >>> > scan
> > >>> > >> complete in 39ms
> > >>> > >> 01:32:55,640 |-INFO in
> > >>> ch.qos.logback.classic.LoggerContext[default] -
> > >>> > >> Could NOT find resource [logback.groovy]
> > >>> > >> 01:32:55,641 |-INFO in
> > >>> ch.qos.logback.classic.LoggerContext[default] -
> > >>> > >> Could NOT find resource [logback-test.xml]
> > >>> > >> 01:32:55,641 |-INFO in
> > >>> ch.qos.logback.classic.LoggerContext[default] -
> > >>> > >> Found resource [logback.xml] at
> > >>> > >>
> > >>> >
> > >>>
> >
> [file:/home/rajika/project/apache/drill/distribution/target/apache-drill-1.0.0-m2-incubating-SNAPSHOT/conf/logback.xml]
> > >>> > >> 01:32:55,805 |-INFO in
> > >>> > >> ch.qos.logback.classic.joran.action.ConfigurationAction
- debug
> > >>> > attribute
> > >>> > >> not set
> > >>> > >> 01:32:55,838 |-INFO in
> > >>> ch.qos.logback.core.joran.action.AppenderAction -
> > >>> > >> About to instantiate appender of type
> > >>> > >>
> > [de.huxhorn.lilith.logback.appender.ClassicMultiplexSocketAppender]
> > >>> > >> 01:32:55,854 |-INFO in
> > >>> ch.qos.logback.core.joran.action.AppenderAction -
> > >>> > >> Naming appender as [SOCKET]
> > >>> > >> 01:32:55,884 |-INFO in
> > >>> > >>
> > >>> >
> > >>>
> > de.huxhorn.lilith.logback.appender.ClassicMultiplexSocketAppender[SOCKET]
> > >>> > >> - Waiting 1s to establish connections.
> > >>> > >> 01:32:56,884 |-INFO in
> > >>> > >>
> > >>> >
> > >>>
> > de.huxhorn.lilith.logback.appender.ClassicMultiplexSocketAppender[SOCKET]
> > >>> > >> - Started
> > >>> > >>
> > >>> >
> > >>>
> > de.huxhorn.lilith.logback.appender.ClassicMultiplexSocketAppender[SOCKET]
> > >>> > >> 01:32:56,884 |-INFO in
> > >>> ch.qos.logback.core.joran.action.AppenderAction -
> > >>> > >> About to instantiate appender of type
> > >>> > [ch.qos.logback.core.ConsoleAppender]
> > >>> > >> 01:32:56,893 |-INFO in
> > >>> ch.qos.logback.core.joran.action.AppenderAction -
> > >>> > >> Naming appender as [STDOUT]
> > >>> > >> 01:32:56,900 |-INFO in
> > >>> > >> ch.qos.logback.core.joran.action.NestedComplexPropertyIA
-
> > Assuming
> > >>> > >> default type
> [ch.qos.logback.classic.encoder.PatternLayoutEncoder]
> > >>> for
> > >>> > >> [encoder] property
> > >>> > >> 01:32:56,979 |-INFO in
> > >>> ch.qos.logback.core.joran.action.AppenderAction -
> > >>> > >> About to instantiate appender of type
> > >>> > >> [ch.qos.logback.core.rolling.RollingFileAppender]
> > >>> > >> 01:32:56,981 |-INFO in
> > >>> ch.qos.logback.core.joran.action.AppenderAction -
> > >>> > >> Naming appender as [FILE]
> > >>> > >> 01:32:56,996 |-INFO in
> > >>> > >> ch.qos.logback.core.rolling.FixedWindowRollingPolicy@2ed6ddda
-
> > No
> > >>> > >> compression will be used
> > >>> > >> 01:32:57,003 |-INFO in
> > >>> > >> ch.qos.logback.core.joran.action.NestedComplexPropertyIA
-
> > Assuming
> > >>> > >> default type
> [ch.qos.logback.classic.encoder.PatternLayoutEncoder]
> > >>> for
> > >>> > >> [encoder] property
> > >>> > >> 01:32:57,004 |-INFO in
> > >>> > >> ch.qos.logback.core.rolling.RollingFileAppender[FILE]
- Active
> log
> > >>> file
> > >>> > >> name: /var/log/drill/sqlline.log
> > >>> > >> 01:32:57,004 |-INFO in
> > >>> > >> ch.qos.logback.core.rolling.RollingFileAppender[FILE]
- File
> > >>> property is
> > >>> > >> set to [/var/log/drill/sqlline.log]
> > >>> > >> 01:32:57,005 |-ERROR in
> > >>> > >> ch.qos.logback.core.rolling.RollingFileAppender[FILE]
- Failed
> to
> > >>> create
> > >>> > >> parent directories for [/var/log/drill/sqlline.log]
> > >>> > >> 01:32:57,005 |-ERROR in
> > >>> > >> ch.qos.logback.core.rolling.RollingFileAppender[FILE]
-
> > >>> > >> openFile(/var/log/drill/sqlline.log,true) call failed.
> > >>> > >> java.io.FileNotFoundException: /var/log/drill/sqlline.log
(No
> such
> > >>> file
> > >>> > or
> > >>> > >> directory)
> > >>> > >> at java.io.FileNotFoundException: /var/log/drill/sqlline.log
(No
> > >>> such
> > >>> > >> file or directory) at at java.io.FileOutputStream.open(Native
> > >>> Method)
> > >>> > at at
> > >>> > >> java.io.FileOutputStream.<init>(FileOutputStream.java:212)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.recovery.ResilientFileOutputStream.<init>(ResilientFileOutputStream.java:28)
> > >>> > >> at at
> > >>> ch.qos.logback.core.FileAppender.openFile(FileAppender.java:149)
> > >>> > >> at at
> > ch.qos.logback.core.FileAppender.start(FileAppender.java:108)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.rolling.RollingFileAppender.start(RollingFileAppender.java:86)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:96)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:317)
> > >>> > >> at at
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:196)
> > >>> > >> at at
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:182)
> > >>> > >> at at
> > >>> > ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:149)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:135)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:99)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:49)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:148)
> > >>> > >> at at
> > >>> org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:85)
> > >>> > >> at at
> > >>> > >>
> > >>>
> org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
> > >>> > >> at at org.slf4j.LoggerFactory.bind(LoggerFactory.java:128)
> > >>> > >> at at
> > >>> > >>
> > >>> org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:107)
> > >>> > >> at at
> > >>> org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:295)
> > >>> > >> at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:269)
> > >>> > >> at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
> > >>> > >> at at
> > >>> org.apache.drill.jdbc.DrillHandler.<clinit>(DrillHandler.java:51)
> > >>> > >> at at
> > >>> org.apache.drill.jdbc.RefDriver.createHandler(RefDriver.java:65)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> net.hydromatic.optiq.jdbc.UnregisteredDriver.<init>(UnregisteredDriver.java:52)
> > >>> > >> at at org.apache.drill.jdbc.RefDriver.<init>(RefDriver.java:32)
> > >>> > >> at at
> org.apache.drill.jdbc.RefDriver.<clinit>(RefDriver.java:38)
> > >>> > >> at at
> > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> > >>> > >> Method) at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > >>> > >> at at
> > >>> java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> > >>> > >> at at java.lang.Class.newInstance(Class.java:374)
> > >>> > >> at at sqlline.SqlLine.scanDrivers(SqlLine.java:1763)
> > >>> > >> at at sqlline.SqlLine.scanForDriver(SqlLine.java:1687)
> > >>> > >> at at sqlline.SqlLine.access$2300(SqlLine.java:58)
> > >>> > >> at at sqlline.SqlLine$Commands.connect(SqlLine.java:4069)
> > >>> > >> at at sqlline.SqlLine$Commands.connect(SqlLine.java:4003)
> > >>> > >> at at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> > >>> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >>> > >> at at
> > >>> > >>
> > >>> > >>
> > >>> >
> > >>>
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >>> > >> at at java.lang.reflect.Method.invoke(Method.java:606)
> > >>> > >> at at
> > >>> > sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2964)
> > >>> > >> at at sqlline.SqlLine.dispatch(SqlLine.java:878)
> > >>> > >> at at sqlline.SqlLine.initArgs(SqlLine.java:652)
> > >>> > >> at at sqlline.SqlLine.begin(SqlLine.java:699)
> > >>> > >> at at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:460)
> > >>> > >> at at sqlline.SqlLine.main(SqlLine.java:443)
> > >>> > >> 01:32:57,006 |-INFO in
> > >>> ch.qos.logback.classic.joran.action.LoggerAction
> > >>> > -
> > >>> > >> Setting additivity of logger [org.apache.drill] to false
> > >>> > >> 01:32:57,006 |-INFO in
> > >>> ch.qos.logback.classic.joran.action.LevelAction -
> > >>> > >> org.apache.drill level set to INFO
> > >>> > >> 01:32:57,006 |-INFO in
> > >>> > ch.qos.logback.core.joran.action.AppenderRefAction
> > >>> > >> - Attaching appender named [FILE] to Logger[org.apache.drill]
> > >>> > >> 01:32:57,006 |-INFO in
> > >>> ch.qos.logback.classic.joran.action.LoggerAction
> > >>> > -
> > >>> > >> Setting additivity of logger [org.apache.drill] to false
> > >>> > >> 01:32:57,007 |-INFO in
> > >>> ch.qos.logback.classic.joran.action.LevelAction -
> > >>> > >> org.apache.drill level set to DEBUG
> > >>> > >> 01:32:57,007 |-INFO in
> > >>> > ch.qos.logback.core.joran.action.AppenderRefAction
> > >>> > >> - Attaching appender named [SOCKET] to Logger[org.apache.drill]
> > >>> > >> 01:32:57,007 |-INFO in
> > >>> ch.qos.logback.classic.joran.action.LevelAction -
> > >>> > >> ROOT level set to ERROR
> > >>> > >> 01:32:57,007 |-INFO in
> > >>> > ch.qos.logback.core.joran.action.AppenderRefAction
> > >>> > >> - Attaching appender named [STDOUT] to Logger[ROOT]
> > >>> > >> 01:32:57,007 |-INFO in
> > >>> > >> ch.qos.logback.classic.joran.action.ConfigurationAction
- End of
> > >>> > >> configuration.
> > >>> > >> 01:32:57,007 |-INFO in
> > >>> > >> ch.qos.logback.classic.joran.JoranConfigurator@75a2edef
-
> > >>> Registering
> > >>> > >> current configuration as safe fallback point
> > >>> > >>
> > >>> > >> scan complete in 3196ms
> > >>> > >> Connecting to jdbc:drill:schema=parquet-local Connected
to:
> Drill
> > >>> > >> (version 1.0)
> > >>> > >> Driver: Apache Drill JDBC Driver (version 1.0) Autocommit
> status:
> > >>> true
> > >>> > >> Transaction isolation: TRANSACTION_REPEATABLE_READ sqlline
> version
> > >>> ???
> > >>> > by
> > >>> > >> Marc Prud'hommeaux
> > >>> > >> 0: jdbc:drill:schema=parquet-local>
> > >>> > >> 0: jdbc:drill:schema=parquet-local>
> > >>> > >>
> > >>> > >
> > >>> > >
> > >>> >
> > >>>
> > >>
> > >>
> > >
> >
>
