From: Mostafa Mokhtar
Date: Wed, 9 May 2018 22:03:19 -0700
Subject: Re: Issue in data loading in Impala + Kudu
To: user@impala.apache.org

Can you share the query profile for the successful insert query?

Thanks
Mostafa

> On May 9, 2018, at 9:55 PM, Geetika Gupta wrote:
>
> Thanks, Jeszy.
>
> We built Impala again with the -release flag and the data load was successful after that.
>
> But now we are facing another issue: the table into which we loaded the data has fewer rows than the source. We executed the following command:
>
> insert into LINEITEM select * from PARQUETIMPALA500.LINEITEM
>
> The query was successful, but when we ran count(*) on both tables, the row counts were different:
>
> 0: jdbc:hive2://slave2:21050/default> select count(*) from lineitem;
> 536870912
>
> 0: jdbc:hive2://slave2:21050/default> select count(*) from parquetimpala500.lineitem;
> 3000028242
>
> Do you have any idea about this issue?
>
>> On Mon, May 7, 2018 at 12:06 PM, Jeszy wrote:
>> Impala doesn't store the data itself, so you can switch versions
>> without rewriting data. But you don't have to do that; you would just
>> have to build Impala using the -release flag (of buildall.sh) and run
>> it using the release binaries (versus the debug ones).
>> If you will be
>> looking at performance, using the release version is highly
>> recommended anyway.
>>
>> On 7 May 2018 at 08:30, Geetika Gupta wrote:
>> > Hi Jeszy,
>> >
>> > Currently, we are using the Apache Impala GitHub master branch code. We
>> > tried using the released version, but we encountered some errors related to
>> > downloading of dependencies and could not complete the installation.
>> >
>> > The current version of Impala we are using: 2.12
>> >
>> > We can't retry with the new release as we have already loaded 500GB of TPCH
>> > data on our cluster.
>> >
>> > On Mon, May 7, 2018 at 11:43 AM, Jeszy wrote:
>> >>
>> >> What version of Impala are you using?
>> >> DCHECKs won't be triggered if you run a release build. Looking at the
>> >> code, it should work with bad values if not for the DCHECK. Can you
>> >> try using a release build?
>> >>
>> >> On 7 May 2018 at 08:04, Geetika Gupta wrote:
>> >> > Hi community,
>> >> >
>> >> > I was trying to load 500GB of TPCH data into a Kudu table using the
>> >> > following query:
>> >> >
>> >> > insert into lineitem select * from PARQUETIMPALA500.LINEITEM
>> >> >
>> >> > After executing for around 17 hrs, the query was cancelled because the
>> >> > impalad process on that machine aborted. Here are the logs of the impalad
>> >> > process.
>> >> >
>> >> > impalad.ERROR
>> >> >
>> >> > Log file created at: 2018/05/06 13:40:34
>> >> > Running on machine: slave2
>> >> > Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
>> >> > E0506 13:40:34.097759 28730 logging.cc:121] stderr will be logged to this
>> >> > file.
>> >> > SLF4J: Class path contains multiple SLF4J bindings.
>> >> > SLF4J: Found binding in
>> >> > [jar:file:/root/softwares/impala/fe/target/dependency/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> > SLF4J: Found binding in
>> >> > [jar:file:/root/softwares/impala/testdata/target/dependency/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> >> > explanation.
>> >> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> >> > 18/05/06 13:40:34 WARN util.NativeCodeLoader: Unable to load native-hadoop
>> >> > library for your platform... using builtin-java classes where applicable
>> >> > 18/05/06 13:40:36 WARN shortcircuit.DomainSocketFactory: The short-circuit
>> >> > local reads feature cannot be used because libhadoop cannot be loaded.
>> >> > tcmalloc: large alloc 1073741824 bytes == 0x484434000 @ 0x4135176
>> >> > 0x7fd9e9fc3929
>> >> > tcmalloc: large alloc 2147483648 bytes == 0x7fd540f18000 @ 0x4135176
>> >> > 0x7fd9e9fc3929
>> >> > F0507 09:46:12.673912 29258 error-util.cc:148] Check failed:
>> >> > log_entry.count > 0 (-1831809966 vs. 0)
>> >> > *** Check failure stack trace: ***
>> >> >     @ 0x3fc0c0d google::LogMessage::Fail()
>> >> >     @ 0x3fc24b2 google::LogMessage::SendToLog()
>> >> >     @ 0x3fc05e7 google::LogMessage::Flush()
>> >> >     @ 0x3fc3bae google::LogMessageFatal::~LogMessageFatal()
>> >> >     @ 0x1bbcb31 impala::PrintErrorMap()
>> >> >     @ 0x1bbcd07 impala::PrintErrorMapToString()
>> >> >     @ 0x2decbd7 impala::Coordinator::GetErrorLog()
>> >> >     @ 0x1a8d634 impala::ImpalaServer::UnregisterQuery()
>> >> >     @ 0x1b29264 impala::ImpalaServer::CloseOperation()
>> >> >     @ 0x2c5ce86 apache::hive::service::cli::thrift::TCLIServiceProcessor::process_CloseOperation()
>> >> >     @ 0x2c56b8c apache::hive::service::cli::thrift::TCLIServiceProcessor::dispatchCall()
>> >> >     @ 0x2c2fcb1 impala::ImpalaHiveServer2ServiceProcessor::dispatchCall()
>> >> >     @ 0x16fdb20 apache::thrift::TDispatchProcessor::process()
>> >> >     @ 0x18ea6b3 apache::thrift::server::TAcceptQueueServer::Task::run()
>> >> >     @ 0x18e2181 impala::ThriftThread::RunRunnable()
>> >> >     @ 0x18e3885 boost::_mfi::mf2<>::operator()()
>> >> >     @ 0x18e371b boost::_bi::list3<>::operator()<>()
>> >> >     @ 0x18e3467 boost::_bi::bind_t<>::operator()()
>> >> >     @ 0x18e337a boost::detail::function::void_function_obj_invoker0<>::invoke()
>> >> >     @ 0x192761c boost::function0<>::operator()()
>> >> >     @ 0x1c3ebf7 impala::Thread::SuperviseThread()
>> >> >     @ 0x1c470cd boost::_bi::list5<>::operator()<>()
>> >> >     @ 0x1c46ff1 boost::_bi::bind_t<>::operator()()
>> >> >     @ 0x1c46fb4 boost::detail::thread_data<>::run()
>> >> >     @ 0x2eedb4a thread_proxy
>> >> >     @ 0x7fda1dbb16ba start_thread
>> >> >     @ 0x7fda1d8e741d clone
>> >> > Wrote minidump to
>> >> > /tmp/minidumps/impalad/a9113d9b-bc3d-488a-1feebf9b-47b42022.dmp
>> >> >
>> >> > impalad.FATAL
>> >> >
>> >> > Log file created at: 2018/05/07 09:46:12
>> >> > Running on machine: slave2
>> >> > Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
>> >> > F0507 09:46:12.673912 29258 error-util.cc:148] Check failed:
>> >> > log_entry.count > 0 (-1831809966 vs. 0)
>> >> >
>> >> > impalad.INFO (tail)
>> >> > ...edentials={real_user=root}} blocked reactor thread for 34288.6us
>> >> > I0507 09:38:14.943245 29882 outbound_call.cc:288] RPC callback for RPC call
>> >> > kudu.tserver.TabletServerService.Write -> {remote=136.243.74.42:7050
>> >> > (slave5), user_credentials={real_user=root}} blocked reactor thread for
>> >> > 35859.8us
>> >> > I0507 09:38:15.942150 29882 outbound_call.cc:288] RPC callback for RPC call
>> >> > kudu.tserver.TabletServerService.Write -> {remote=136.243.74.42:7050
>> >> > (slave5), user_credentials={real_user=root}} blocked reactor thread for
>> >> > 40664.9us
>> >> > I0507 09:38:17.495046 29882 outbound_call.cc:288] RPC callback for RPC call
>> >> > kudu.tserver.TabletServerService.Write -> {remote=136.243.74.42:7050
>> >> > (slave5), user_credentials={real_user=root}} blocked reactor thread for
>> >> > 49514.6us
>> >> > I0507 09:46:12.664149  4507 coordinator.cc:783] Release admission control
>> >> > resources for query_id=3e4a4c646800e1d9:c859bb7f00000000
>> >> > F0507 09:46:12.673912 29258 error-util.cc:148] Check failed:
>> >> > log_entry.count > 0 (-1831809966 vs. 0)
>> >> > Wrote minidump to
>> >> > /tmp/minidumps/impalad/a9113d9b-bc3d-488a-1feebf9b-47b42022.dmp
>> >> >
>> >> > Note:
>> >> > We are executing the queries on an 8-node cluster with the following
>> >> > configuration:
>> >> > Cluster: 8 nodes (48 GB RAM, 8 CPU cores and 2 TB hard disk each,
>> >> > Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz)
>> >> >
>> >> > --
>> >> > Regards,
>> >> > Geetika Gupta
>> >
>> > --
>> > Regards,
>> > Geetika Gupta
>
> --
> Regards,
> Geetika Gupta
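[Archive note] The failed check above (`log_entry.count > 0` observing -1831809966) has the signature of a signed 32-bit counter wrapping past INT32_MAX, which is plausible for a load where the source table holds about 3 billion rows. The sketch below is purely illustrative (it is not Impala's code, and the thread does not confirm which counter overflowed); it only shows how a count above INT32_MAX appears as a large negative number:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def to_int32(n):
    """Wrap an arbitrary integer into the signed 32-bit two's-complement
    range, mimicking overflow of a C/C++ int32 counter."""
    return (n - INT32_MIN) % 2**32 + INT32_MIN

# Counts near 3 billion exceed INT32_MAX (2147483647), so an int32
# counter holding them wraps negative -- one plausible way a
# "count > 0" check can see a large negative value.
print(to_int32(3_000_028_242))  # -1294939054
```

This does not reproduce the exact value from the crash, only the mechanism; the actual counter and its history are internal to the failing impalad.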