From: Jason Altekruse
Date: Fri, 5 Feb 2016 10:09:07 -0800
Subject: Re: Apache Drill querying IGFS-accelerated (H)DFS?
To: dev
Cc: user@ignite.apache.org

Hello Vladimir,

I am not certain whether Drill will source the core-site.xml file from the Hadoop directory, but I know that you can provide one in the Drill conf/ directory.

That being said, I did not think a new core-site.xml entry was needed to add a filesystem implementation; I thought the JARs simply needed to be added to the classpath. Have you added the JARs containing the Ignite FileSystem implementation to the Drill classpath? The easiest way I know to do this is to copy them into the jars/3rdparty directory of the Drill installation.

- Jason

On Fri, Feb 5, 2016 at 9:55 AM, Vladimir Ozerov wrote:

> *Peter,*
>
> I created a ticket in Ignite JIRA. I hope someone from the community will be
> able to take a look at it soon:
> https://issues.apache.org/jira/browse/IGNITE-2568
> Please keep an eye on it.
>
> Cross-posting the issue to the Drill dev list.
>
> *Dear Drill folks,*
>
> We have our own implementation of the Hadoop FileSystem here in Ignite. It has
> a unique URI prefix ("igfs://") and is normally registered in Hadoop's
> core-site.xml like this:
>
>   <property>
>     <name>fs.igfs.impl</name>
>     <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
>   </property>
>
> However, when we try to use this file system as a data source in Drill, an
> exception is thrown (see the stack trace below). I suspect that the default Hadoop
> core-site.xml is somehow not taken into consideration by Drill. Could you
> please give us a hint on how to properly configure a custom Hadoop FileSystem
> implementation in your system?
>
> Thank you!
>
> Vladimir.
>
> Stack trace:
>
> java.io.IOException: No FileSystem for scheme: igfs
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:92) ~[drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213) ~[drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210) ~[drill-java-exec-1.4.0.jar:1.4.0]
>   at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_40-ea]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea]
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
>   at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:78) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.planner.sql.DrillSqlWorker.<init>(DrillSqlWorker.java:93) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907) [drill-java-exec-1.4.0.jar:1.4.0]
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244) [drill-java-exec-1.4.0.jar:1.4.0]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40-ea]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40-ea]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]
>
>
> On Fri, Feb 5, 2016 at 4:18 PM, pshomov wrote:
>
> > Hi Vladimir,
> >
> > My bad about that ifgs://, fixed it but it changed nothing.
> >
> > I don't think Drill cares much about Hadoop settings. It never asked me to
> > point it to an installation or configuration of Hadoop.
> > I believe they have
> > their own storage plugin mechanism, and one of their built-in plugins
> > happens to be the HDFS one.
> >
> > Here is (part of) the Drill log:
> >
> > 2016-02-05 13:14:03,507 [294b5fe3-8f63-2134-67e0-42f7111ead44:foreman]
> > ERROR o.a.d.exec.util.ImpersonationUtil - Failed to create DrillFileSystem
> > for proxy user: No FileSystem for scheme: igfs
> > java.io.IOException: No FileSystem for scheme: igfs
> >   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:92) ~[drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213) ~[drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210) ~[drill-java-exec-1.4.0.jar:1.4.0]
> >   at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_40-ea]
> >   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea]
> >   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
> >   at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:78) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.planner.sql.DrillSqlWorker.<init>(DrillSqlWorker.java:93) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244) [drill-java-exec-1.4.0.jar:1.4.0]
> >   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40-ea]
> >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40-ea]
> >   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]
> > 2016-02-05 13:14:03,556 [294b5fe3-8f63-2134-67e0-42f7111ead44:foreman]
> > ERROR o.a.drill.exec.work.foreman.Foreman - SYSTEM ERROR: IOException: No
> > FileSystem for scheme: igfs
> >
> > [Error Id: 6c95179a-6d26-498c-905f-dc18509c1651 on 192.168.1.42:31010]
> > org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
> > IOException: No FileSystem for scheme: igfs
> >
> > I copied the same Ignite jars that go into Hadoop to Drill just in case,
> > but that did not help either.
> > I think the only way is to write a Drill storage plugin for Ignite, or
> > somehow make the Ignite caching happen inside Hadoop and be totally
> > transparent to Drill.
> >
> > Thank you for the detailed help; any further ideas are as always welcome ;)
> >
> > Best regards,
> >
> > Petar
> >
> > ------------------------------
> > View this message in context: Re: Apache Drill querying IGFS-accelerated
> > (H)DFS?
> > <http://apache-ignite-users.70518.x6.nabble.com/Apache-Drill-querying-IGFS-accelerated-H-DFS-tp2840p2859.html>
> > Sent from the Apache Ignite Users mailing list archive
> > <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
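
[Editor's sketch] Putting Jason's suggestion together with Vladimir's registration entry, the fix has two parts: (1) copy the JARs containing org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem (and their dependencies) into jars/3rdparty of the Drill installation, and (2) give Drill a core-site.xml that maps the igfs scheme to that class. A minimal core-site.xml for Drill's conf/ directory might look like the fragment below. This is a sketch only: whether Drill picks up a core-site.xml from conf/, and the exact set of Ignite JARs required, are assumptions to verify against your installation.

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Hadoop resolves an unknown URI scheme via the fs.<scheme>.impl
       property; without this mapping (or an equivalent ServiceLoader
       registration on the classpath) FileSystem.getFileSystemClass()
       throws "No FileSystem for scheme: igfs", as in the traces above. -->
  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
</configuration>
```

After restarting the drillbit, a dfs-type storage plugin whose connection points at an igfs:// URI should then be able to resolve the scheme, provided the Ignite JARs are actually on Drill's classpath.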