From: Todd Lipcon
Date: Sun, 13 May 2018 15:09:29 -0700
To: user@kudu.apache.org
Subject: Re: Reply: Issue in data loading in Impala + Kudu

What version of Kudu?

Also, just one disk on these nodes? It sounds like it is not able to keep up
flushing at the rate you are inserting, and then memory is filling up. I would
double-check that your disks are appropriate for the workload.

-Todd

On Sat, May 12, 2018 at 1:46 AM, Geetika Gupta wrote:

> Hi community,
>
> We were trying to load 500GB of TPCH data into the lineitem table using the
> following query:
>
> insert into LINEITEM select L_ORDERKEY, L_LINENUMBER, L_PARTKEY,
> L_SUPPKEY, L_SHIPDATE, L_RECEIPTDATE, L_SHIPMODE,
> L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
> L_COMMITDATE, L_SHIPINSTRUCT, L_COMMENT from PARQUETIMPALA500.LINEITEM
>
> but the query gives us the following exception:
>
> Status: Kudu error(s) reported, first error: Timed out: Failed to write
> batch of 51973 ops to tablet 2b1e63c335b646f2859ba583d736f109 after 337
> attempt(s): Failed to write to server: (no server available): Write(tablet:
> 2b1e63c335b646f2859ba583d736f109, num_ops: 51973, num_attempts: 337)
> passed its deadline: Remote error: Service unavailable: Soft memory limit
> exceeded (at 99.66% of capacity)
>
> We are using the default configuration properties for Kudu. The values of
> some configuration parameters are as follows:
>
> --memory_limit_soft_percentage=80
> --memory_limit_hard_bytes=0
>
> We are executing the queries on an Impala cluster.
> Below are the configurations of the nodes:
>
> Cluster: 8-node cluster (48 GB RAM, 8 CPU cores and 2 TB hard disk each,
> Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz)
>
> We also looked into the tablet servers tab in the Kudu master UI; all the
> tablet servers were active there, so we could not figure out the actual
> reason for the exception.
>
> On Mon, May 7, 2018 at 11:30 AM, helifu wrote:
>
>> Hi Geetika,
>>
>> It would be better to ask this question on the Impala user mailing list.
>> Here is the Impala community: https://impala.apache.org/community.html
>>
>> 何李夫
>>
>> 2018-05-07 13:56:02
>>
>> From: user-return-1353-hzhelifu=corp.netease.com@kudu.apache.org
>> on behalf of Geetika Gupta
>> Sent: May 7, 2018 13:42
>> To: user@kudu.apache.org
>> Subject: Issue in data loading in Impala + Kudu
>>
>> Hi community,
>>
>> I was trying to load 500GB of TPCH data into a Kudu table using the
>> following query:
>>
>> insert into lineitem select * from PARQUETIMPALA500.LINEITEM
>>
>> After executing for around 17 hrs, the query got cancelled because the
>> impalad process on that machine aborted. Here are the logs of the
>> impalad process.
>>
>> impalad.ERROR
>>
>> Log file created at: 2018/05/06 13:40:34
>> Running on machine: slave2
>> Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
>> E0506 13:40:34.097759 28730 logging.cc:121] stderr will be logged to this file.
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in [jar:file:/root/softwares/impala/fe/target/dependency/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in [jar:file:/root/softwares/impala/testdata/target/dependency/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> 18/05/06 13:40:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 18/05/06 13:40:36 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
>> tcmalloc: large alloc 1073741824 bytes == 0x484434000 @ 0x4135176 0x7fd9e9fc3929
>> tcmalloc: large alloc 2147483648 bytes == 0x7fd540f18000 @ 0x4135176 0x7fd9e9fc3929
>> F0507 09:46:12.673912 29258 error-util.cc:148] Check failed: log_entry.count > 0 (-1831809966 vs. 0)
>> *** Check failure stack trace: ***
>> @ 0x3fc0c0d google::LogMessage::Fail()
>> @ 0x3fc24b2 google::LogMessage::SendToLog()
>> @ 0x3fc05e7 google::LogMessage::Flush()
>> @ 0x3fc3bae google::LogMessageFatal::~LogMessageFatal()
>> @ 0x1bbcb31 impala::PrintErrorMap()
>> @ 0x1bbcd07 impala::PrintErrorMapToString()
>> @ 0x2decbd7 impala::Coordinator::GetErrorLog()
>> @ 0x1a8d634 impala::ImpalaServer::UnregisterQuery()
>> @ 0x1b29264 impala::ImpalaServer::CloseOperation()
>> @ 0x2c5ce86 apache::hive::service::cli::thrift::TCLIServiceProcessor::process_CloseOperation()
>> @ 0x2c56b8c apache::hive::service::cli::thrift::TCLIServiceProcessor::dispatchCall()
>> @ 0x2c2fcb1 impala::ImpalaHiveServer2ServiceProcessor::dispatchCall()
>> @ 0x16fdb20 apache::thrift::TDispatchProcessor::process()
>> @ 0x18ea6b3 apache::thrift::server::TAcceptQueueServer::Task::run()
>> @ 0x18e2181 impala::ThriftThread::RunRunnable()
>> @ 0x18e3885 boost::_mfi::mf2<>::operator()()
>> @ 0x18e371b boost::_bi::list3<>::operator()<>()
>> @ 0x18e3467 boost::_bi::bind_t<>::operator()()
>> @ 0x18e337a boost::detail::function::void_function_obj_invoker0<>::invoke()
>> @ 0x192761c boost::function0<>::operator()()
>> @ 0x1c3ebf7 impala::Thread::SuperviseThread()
>> @ 0x1c470cd boost::_bi::list5<>::operator()<>()
>> @ 0x1c46ff1 boost::_bi::bind_t<>::operator()()
>> @ 0x1c46fb4 boost::detail::thread_data<>::run()
>> @ 0x2eedb4a thread_proxy
>> @ 0x7fda1dbb16ba start_thread
>> @ 0x7fda1d8e741d clone
>>
>> Wrote minidump to /tmp/minidumps/impalad/a9113d9b-bc3d-488a-1feebf9b-47b42022.dmp
>>
>> impalad.FATAL
>>
>> Log file created at: 2018/05/07 09:46:12
>> Running on machine: slave2
>> Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
>> F0507 09:46:12.673912 29258 error-util.cc:148] Check failed: log_entry.count > 0 (-1831809966 vs. 0)
>>
>> Impalad.INFO
>>
>> edentials={real_user=root}} blocked reactor thread for 34288.6us
>> I0507 09:38:14.943245 29882 outbound_call.cc:288] RPC callback for RPC call kudu.tserver.TabletServerService.Write -> {remote=136.243.74.42:7050 (slave5), user_credentials={real_user=root}} blocked reactor thread for 35859.8us
>> I0507 09:38:15.942150 29882 outbound_call.cc:288] RPC callback for RPC call kudu.tserver.TabletServerService.Write -> {remote=136.243.74.42:7050 (slave5), user_credentials={real_user=root}} blocked reactor thread for 40664.9us
>> I0507 09:38:17.495046 29882 outbound_call.cc:288] RPC callback for RPC call kudu.tserver.TabletServerService.Write -> {remote=136.243.74.42:7050 (slave5), user_credentials={real_user=root}} blocked reactor thread for 49514.6us
>> I0507 09:46:12.664149 4507 coordinator.cc:783] Release admission control resources for query_id=3e4a4c646800e1d9:c859bb7f00000000
>> F0507 09:46:12.673912 29258 error-util.cc:148] Check failed: log_entry.count > 0 (-1831809966 vs. 0)
>> Wrote minidump to /tmp/minidumps/impalad/a9113d9b-bc3d-488a-1feebf9b-47b42022.dmp
>>
>> Note:
>>
>> We are executing the queries on an 8-node cluster with the following
>> configuration:
>>
>> Cluster: 8-node cluster (48 GB RAM, 8 CPU cores and 2 TB hard disk each,
>> Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz)
>>
>> --
>> Regards,
>> Geetika Gupta
>
> --
> Regards,
> Geetika Gupta

--
Todd Lipcon
Software Engineer, Cloudera
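[Archive note: Todd's diagnosis, that flushes cannot keep up and memory fills until the soft limit starts rejecting writes, is governed by the tablet-server flags Geetika quoted. As background for readers, here is a sketch of a tablet-server flags file with an explicit memory budget and extra maintenance (flush/compaction) threads. Every path and value below is an illustrative assumption, not a setting from this thread; verify each flag against the configuration reference for your Kudu release.]

```shell
# Illustrative tablet-server flags file (hypothetical paths/values).
# --memory_limit_hard_bytes: absolute memory budget for the tserver
#   (0, the default, auto-sizes it); here capped at 16 GiB of a 48 GB node.
# --memory_limit_soft_percentage: fraction of the hard limit at which
#   Kudu begins rejecting writes with "Soft memory limit exceeded";
#   lowering it applies back-pressure earlier.
# --maintenance_manager_num_threads: parallel flush/compaction threads;
#   raising it only helps if the disks can absorb the extra I/O.
cat > /tmp/tserver.gflagfile <<'EOF'
--fs_wal_dir=/data/kudu/wal
--fs_data_dirs=/data/kudu/data
--memory_limit_hard_bytes=17179869184
--memory_limit_soft_percentage=60
--maintenance_manager_num_threads=2
EOF
```

The file would then be passed as `kudu-tserver --flagfile=/tmp/tserver.gflagfile`. Note that more maintenance threads do not fix a single slow disk, which is exactly what Todd suggests checking first.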
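[Archive note: a workaround sometimes used for this failure mode, offered here as an assumption rather than advice from the thread, is to split one huge INSERT into key-range batches so Kudu can flush between batches instead of accumulating the whole load against its memory limit. The key bounds and step size below are hypothetical; check the actual L_ORDERKEY range of your data.]

```shell
# Hypothetical chunked load: emit the 500 GB insert as a series of
# L_ORDERKEY-range statements. Pipe the output to impala-shell
# (e.g. `... | impala-shell -f /dev/stdin`) to execute the batches.
STEP=150000000   # orderkey span per batch (illustrative)
MAX=600000000    # upper bound of L_ORDERKEY (illustrative)
for ((lo = 0; lo < MAX; lo += STEP)); do
  hi=$((lo + STEP))
  echo "insert into LINEITEM select * from PARQUETIMPALA500.LINEITEM" \
       "where L_ORDERKEY >= ${lo} and L_ORDERKEY < ${hi};"
done
```

Each batch then carries far fewer ops per tablet than the single 51973-op writes that timed out, at the cost of scanning the source table once per range unless the source is partitioned on the same key.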