Date: Tue, 21 Jul 2015 17:51:05 +0000 (UTC)
From: "Daniel Barclay (Drill) (JIRA)" <jira@apache.org>
To: issues@drill.apache.org
Reply-To: dev@drill.apache.org
Subject: [jira] [Updated] (DRILL-3530) Query : Regarding drill jdbc with big csv file.

[ https://issues.apache.org/jira/browse/DRILL-3530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Barclay (Drill) updated DRILL-3530:
------------------------------------------
    Component/s:     (was: Client - JDBC)

> Query : Regarding drill jdbc with big csv file.
> -----------------------------------------------
>
>                 Key: DRILL-3530
>                 URL: https://issues.apache.org/jira/browse/DRILL-3530
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>         Environment: MapR-Sandbox-For-Apache-Drill-1.0.0-4.1.0.ova
>                      OR
>                      CentOS 6, RAM 16 GB, HDD 1 TB
>            Reporter: kunal
>            Assignee: Daniel Barclay (Drill)
>            Priority: Blocker
>
> I am using "MapR-Sandbox-For-Apache-Drill-1.0.0-4.1.0.ova" and copied a 10 GB CSV file (220 columns, 5 million rows) into /mapr/demo.mapr.com/data.
> My application runs on an Apache Tomcat server and connects to the Drillbit through JDBC. The query fails with an OutOfMemoryError (stack trace below).
> I also see a lot of output (e.g. JSON objects) being written to the Tomcat console. I would like to remove that from the console. Is there any configuration option available for this? If yes, how?
>
> con = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010", "admin", "admin");
> or
> con = DriverManager.getConnection("jdbc:drill:zk=localhost:2181");
>
> Query:
> select
> columns[0] col1, columns[1] col2, columns[2] col3, columns[3] col4, columns[4] col5, columns[5] col6, columns[6] col7, columns[7] col8, columns[8] col9, columns[9] col10,
> columns[10] col11, columns[11] col12, columns[12] col13, columns[13] col14, columns[14] col15, columns[15] col16, columns[16] col17, columns[17] col18, columns[18] col19, columns[19] col20,
> columns[20] col21, columns[21] col22, columns[22] col23, columns[23] col24, columns[24] col25, columns[25] col26, columns[26] col27, columns[27] col28, columns[28] col29, columns[29] col30,
> columns[30] col31, columns[31] col32, columns[32] col33, columns[33] col34, columns[34] col35, columns[35] col36, columns[36] col37, columns[37] col38, columns[38] col39, columns[39] col40,
> columns[40] col41, columns[41] col42, columns[42] col43, columns[43] col44, columns[44] col45, columns[45] col46, columns[46] col47, columns[47] col48, columns[48] col49, columns[49] col50,
> columns[50] col51, columns[51] col52, columns[52] col53, columns[53] col54, columns[54] col55, columns[55] col56, columns[56] col57, columns[57] col58, columns[58] col59, columns[59] col60,
> columns[60] col61, columns[61] col62, columns[62] col63, columns[63] col64, columns[64] col65, columns[65] col66, columns[66] col67, columns[67] col68, columns[68] col69, columns[69] col70,
> columns[70] col71, columns[71] col72, columns[72] col73, columns[73] col74, columns[74] col75, columns[75] col76, columns[76] col77, columns[77] col78, columns[78] col79, columns[79] col80,
> columns[80] col81, columns[81] col82, columns[82] col83, columns[83] col84, columns[84] col85, columns[85] col86, columns[86] col87, columns[87] col88, columns[88] col89, columns[89] col90,
> columns[90] col91, columns[91] col92, columns[92] col93, columns[93] col94, columns[94] col95, columns[95] col96, columns[96] col97, columns[97] col98, columns[98] col99, columns[99] col100,
> columns[100] col101, columns[101] col102, columns[102] col103, columns[103] col104, columns[104] col105, columns[105] col106, columns[106] col107, columns[107] col108, columns[108] col109, columns[109] col110,
> columns[110] col111, columns[111] col112, columns[112] col113, columns[113] col114, columns[114] col115, columns[115] col116, columns[116] col117, columns[117] col118, columns[118] col119, columns[119] col120,
> columns[120] col121, columns[121] col122, columns[122] col123, columns[123] col124, columns[124] col125, columns[125] col126, columns[126] col127, columns[127] col128, columns[128] col129, columns[129] col130,
> columns[130] col131, columns[131] col132, columns[132] col133, columns[133] col134, columns[134] col135, columns[135] col136, columns[136] col137, columns[137] col138, columns[138] col139, columns[139] col140,
> columns[140] col141, columns[141] col142, columns[142] col143, columns[143] col144, columns[144] col145, columns[145] col146, columns[146] col147, columns[147] col148, columns[148] col149, columns[149] col150,
> columns[150] col151, columns[151] col152, columns[152] col153, columns[153] col154, columns[154] col155, columns[155] col156, columns[156] col157, columns[157] col158, columns[158] col159, columns[159] col160,
> columns[160] col161, columns[161] col162, columns[162] col163, columns[163] col164, columns[164] col165, columns[165] col166, columns[166] col167, columns[167] col168, columns[168] col169, columns[169] col170,
> columns[170] col171, columns[171] col172, columns[172] col173, columns[173] col174, columns[174] col175, columns[175] col176, columns[176] col177, columns[177] col178, columns[178] col179, columns[179] col180,
> columns[180] col181, columns[181] col182, columns[182] col183, columns[183] col184, columns[184] col185, columns[185] col186, columns[186] col187, columns[187] col188, columns[188] col189, columns[189] col190,
> columns[190] col191, columns[191] col192, columns[192] col193, columns[193] col194, columns[194] col195, columns[195] col196, columns[196] col197, columns[197] col198, columns[198] col199, columns[199] col200,
> columns[200] col201, columns[201] col202, columns[202] col203, columns[203] col204, columns[204] col205, columns[205] col206, columns[206] col207, columns[207] col208, columns[208] col209, columns[209] col210,
> columns[210] col211, columns[211] col212, columns[212] col213, columns[213] col214, columns[214] col215, columns[215] col216, columns[216] col217, columns[217] col218, columns[218] col219, columns[219] col220
> from dfs.root.`SampleData220_Cols.csv`
>
> io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct buffer memory
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [drill-jdbc-all-1.1.0.jar:na]
> at
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [drill-jdbc-all-1.1.0.jar:na]
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [drill-jdbc-all-1.1.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_45]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.7.0_45]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) ~[na:1.7.0_45]
> at io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:443) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:187) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:165) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.PoolArena.reallocate(PoolArena.java:280) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:110) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:849) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:841) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:831) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.WrappedByteBuf.writeBytes(WrappedByteBuf.java:600) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.buffer.UnsafeDirectLittleEndian.writeBytes(UnsafeDirectLittleEndian.java:28) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:92) ~[drill-jdbc-all-1.1.0.jar:na]
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:227) ~[drill-jdbc-all-1.1.0.jar:na]
> ... 13 common frames omitted

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
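[Editor's note: the trace shows Netty in the JDBC client exhausting JVM *direct* (off-heap) memory, which `-Xmx` does not govern; and the console noise question is typically a logging-configuration matter. The fragment below is a hedged sketch of how both might be addressed for a Tomcat deployment like the reporter's. The file path, the `8g` value, and the assumption that the application resolves Logback as its SLF4J binding are illustrative, not taken from this report.]

```shell
# Hypothetical fragment for Tomcat's bin/setenv.sh -- values are examples only.

# Raise the cap on JVM direct memory, which backs the Drill JDBC client's
# Netty buffers; java.nio.Bits.reserveMemory throws the OOM seen above when
# this limit is exhausted. 8g is an illustrative size, not a recommendation.
CATALINA_OPTS="$CATALINA_OPTS -XX:MaxDirectMemorySize=8g"

# If the application logs through SLF4J/Logback (an assumption here), point it
# at a configuration whose root level is WARN or higher to quiet the console.
CATALINA_OPTS="$CATALINA_OPTS -Dlogback.configurationFile=/path/to/quiet-logback.xml"

export CATALINA_OPTS
```

A `quiet-logback.xml` for this purpose would set something like `<root level="WARN">` with a console appender; fetching fewer columns per query, where the application allows it, would also shrink the record batches the client must buffer.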