Date: Tue, 14 Jun 2016 20:56:30 +0000 (UTC)
From: "Deneche A. Hakim (JIRA)"
To: issues@drill.apache.org
Subject: [jira] [Commented] (DRILL-4723) GC Memory Allocation Issues

[ https://issues.apache.org/jira/browse/DRILL-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330579#comment-15330579 ]

Deneche A. Hakim commented on DRILL-4723:
-----------------------------------------

{noformat}
1819.137: [Full GC (Allocation Failure) 23G->23G(24G), 4.0657064 secs]
{noformat}

This line means the JVM had to start a Full GC because it couldn't allocate more heap (Allocation Failure). The Full GC took about 4 seconds and couldn't reduce heap usage from 23G, which means all objects in the heap are still referenced. This is most likely a heap memory leak in Drill. If this node is still up and running you should get a heap dump; you can then follow [this blog|http://www.techpaste.com/2015/07/how-to-analyse-large-heap-dumps/] to run a general analysis of the heap dump right on the server.

> GC Memory Allocation Issues
> ---------------------------
>
> Key: DRILL-4723
> URL: https://issues.apache.org/jira/browse/DRILL-4723
> Project: Apache Drill
> Issue Type: Bug
> Components: Server
> Affects Versions: 1.6.0
> Environment: 5 Drill bits, MapR 1.6 release. 84GB of Direct Memory, 24GB of Heap
> Reporter: John Omernik
> Labels: memory, stability
> Fix For: Future
>
>
> This issue is reposted from the Drill User List. More logs are available on request in the comments (please specify which logs you may want).
> High level: the Drill cluster gets into a bad state when this occurs.
> This is what I have thus far... I can provide more complete logs on a one-on-one basis.
> The cluster was completely mine, with fresh logs. I ran a CTAS query on a large table that has over 100 fields. This query works well in other cases; however, I was working with the block size, both in MapR-FS and Drill Parquet. I had successfully tested 512m on each, but this case was different.
> Here are the facts in this setup:
> - No compression in MapR-FS - using standard Parquet Snappy compression
> - MapR block size 1024m
> - Parquet block size 1024m
> - The query ends up disappearing from the profiles
> - The UI page listing bits only shows 4 bits even though 5 are running (the node 03 process is running, but it is no longer shown in the bit list)
> - Error (copied below) from the REST API
> - No output in stdout or stderr on node 3. Only two nodes actually had "Parquet Writing" logs; the other three did not show any issues/errors in stdout.
> - I have the standard log files, gclogs, the profile.json (before it disappeared), and the physical plan. Only the components that looked possibly at issue are included here.
> - The node 3 GC log shows a bunch of "Full GC Allocation Failures" that take 4 seconds or more (when I've seen this in other cases, this time can balloon to 8 secs or more)
> - The node 3 output log shows some really long RPC times. Perhaps the GCs prevent RPC communication and create a snowball loop effect?
> Other logs can be provided upon request if people are interested; I just didn't want to flood the whole list with all the logs.
> Thanks!
> John
> REST error:
> ./load_day.py 2016-05-09
> Drill Rest Endpoint: https://drillprod.marathonprod.zeta:20000
> Day: 2016-05-09
> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:769: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
> InsecureRequestWarning)
> Authentication successful
> Error encountered: 500
> {
> "errorMessage" : "SYSTEM ERROR: ForemanException: One more more nodes lost connectivity during query. Identified nodes were [atl1ctuzeta03:20001].\n\n\n[Error Id: d7dd0120-f7c0-44ef-ac54-29c746b26354 on atl1ctuzeta01:20001"
> }
> Possible issue in Node3 Log:
> 2016-06-14 17:25:27,860 [289fc208-7266-6a81-73a1-709efff6c412:frag:1:90] INFO o.a.d.e.w.f.FragmentStatusReporter - 289fc208-7266-6a81-73a1-709efff6c412:1:90: State to report: RUNNING
> 2016-06-14 17:25:27,871 [289fc208-7266-6a81-73a1-709efff6c412:frag:1:70] INFO o.a.d.e.w.fragment.FragmentExecutor - 289fc208-7266-6a81-73a1-709efff6c412:1:70: State change requested AWAITING_ALLOCATION --> RUNNING
> 2016-06-14 17:25:27,871 [289fc208-7266-6a81-73a1-709efff6c412:frag:1:70] INFO o.a.d.e.w.f.FragmentStatusReporter - 289fc208-7266-6a81-73a1-709efff6c412:1:70: State to report: RUNNING
> 2016-06-14 17:43:41,869 [BitServer-4] WARN o.a.d.exec.rpc.control.ControlClient - Message of mode RESPONSE of rpc type 1 took longer than 500ms. Actual duration was 4192ms.
> 2016-06-14 17:45:36,720 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 289fc208-7266-6a81-73a1-709efff6c412:1:0: State change requested RUNNING --> CANCELLATION_REQUESTED
> 2016-06-14 17:45:45,740 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 289fc208-7266-6a81-73a1-709efff6c412:1:0: State to report: CANCELLATION_REQUESTED
> 2016-06-14 17:46:15,318 [BitServer-3] WARN o.a.d.exec.rpc.control.ControlServer - Message of mode REQUEST of rpc type 6 took longer than 500ms. Actual duration was 55328ms.
> 2016-06-14 17:46:36,057 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 289fc208-7266-6a81-73a1-709efff6c412:1:5: State change requested RUNNING --> CANCELLATION_REQUESTED
> 2016-06-14 17:46:44,620 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 289fc208-7266-6a81-73a1-709efff6c412:1:5: State to report: CANCELLATION_REQUESTED
> 2016-06-14 17:47:01,393 [BitServer-3] WARN o.a.d.exec.rpc.control.ControlServer - Message of mode REQUEST of rpc type 6 took longer than 500ms. Actual duration was 29781ms.
> 2016-06-14 17:47:09,463 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 289fc208-7266-6a81-73a1-709efff6c412:1:10: State change requested RUNNING --> CANCELLATION_REQUESTED
> 2016-06-14 17:47:26,967 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 289fc208-7266-6a81-73a1-709efff6c412:1:10: State to report: CANCELLATION_REQUESTED
> 2016-06-14 17:47:55,593 [BitServer-3] WARN o.a.d.exec.rpc.control.ControlServer - Message of mode REQUEST of rpc type 6 took longer than 500ms. Actual duration was 46130ms.
> 2016-06-14 17:48:04,497 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 289fc208-7266-6a81-73a1-709efff6c412:1:15: State change requested RUNNING --> CANCELLATION_REQUESTED
> 2016-06-14 17:48:12,742 [BitServer-4] WARN o.a.d.exec.rpc.control.ControlClient - Message of mode RESPONSE of rpc type 1 took longer than 500ms. Actual duration was 4236ms.
> 2016-06-14 17:48:42,328 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 289fc208-7266-6a81-73a1-709efff6c412:1:15: State to report: CANCELLATION_REQUESTED
> 2016-06-14 17:49:36,351 [BitServer-4] WARN o.a.d.exec.rpc.control.ControlClient - Message of mode RESPONSE of rpc type 1 took longer than 500ms. Actual duration was 4260ms.
> 2016-06-14 17:49:36,351 [BitServer-3] WARN o.a.d.exec.rpc.control.ControlServer - Message of mode REQUEST of rpc type 6 took longer than 500ms. Actual duration was 91854ms.
> 2016-06-14 17:50:35,273 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 289fc208-7266-6a81-73a1-709efff6c412:1:20: State change requested RUNNING --> CANCELLATION_REQUESTED
> 2016-06-14 17:50:39,322 [289fc208-7266-6a81-73a1-709efff6c412:frag:1:15] INFO o.a.d.e.w.fragment.FragmentExecutor - 289fc208-7266-6a81-73a1-709efff6c412:1:15: State change requested CANCELLATION_REQUESTED --> FAILED
> 2016-06-14 17:50:51,546 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 289fc208-7266-6a81-73a1-709efff6c412:1:20: State to report: CANCELLATION_REQUESTED
> 2016-06-14 17:51:36,905 [BitServer-4] WARN o.a.d.exec.rpc.control.ControlClient - Message of mode RESPONSE of rpc type 1 took longer than 500ms. Actual duration was 4426ms.
> 2016-06-14 17:52:30,805 [BitServer-3] WARN o.a.d.exec.rpc.control.ControlServer - Message of mode REQUEST of rpc type 6 took longer than 500ms. Actual duration was 98767ms.
> 2016-06-14 17:52:47,042 [BitServer-4] WARN o.a.d.exec.rpc.control.ControlClient - Message of mode RESPONSE of rpc type 1 took longer than 500ms. Actual duration was 12041ms.
> Possible issues in Node3 gclog:
> 1819.137: [Full GC (Allocation Failure) 23G->23G(24G), 4.0657064 secs]
> 1823.205: [GC concurrent-mark-abort]
> 1823.221: [GC pause (G1 Evacuation Pause) (young)-- 23G->23G(24G), 0.0382934 secs]
> 1823.271: [GC pause (G1 Evacuation Pause) (young) (initial-mark) 23G->23G(24G), 0.0250215 secs]
> 1823.296: [GC concurrent-root-region-scan-start]
> 1823.296: [GC concurrent-root-region-scan-end, 0.0000105 secs]
> 1823.296: [GC concurrent-mark-start]
> 1823.308: [Full GC (Allocation Failure) 23G->23G(24G), 4.3719713 secs]
> 1827.694: [GC pause (G1 Evacuation Pause) (young)-- 23G->23G(24G), 0.0312666 secs]
> 1827.727: [GC concurrent-mark-abort]
> 1827.735: [GC pause (G1 Evacuation Pause) (young) 23G->23G(24G), 0.0212359 secs]
> 1827.766: [Full GC (Allocation Failure) 23G->23G(24G), 3.9308980 secs]
> 1831.710: [GC pause (G1 Evacuation Pause) (young)-- 23G->23G(24G), 0.0298229 secs]
> 1831.750: [GC pause (G1 Evacuation Pause) (young) (initial-mark) 23G->23G(24G), 0.0268410 secs]
> 1831.777: [GC concurrent-root-region-scan-start]
> 1831.777: [GC concurrent-root-region-scan-end, 0.0000212 secs]
> 1831.777: [GC concurrent-mark-start]
> 1831.789: [Full GC (Allocation Failure) 23G->23G(24G), 3.9250410 secs]
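As a quick first-pass cross-check of the leak signature called out in the comment above (back-to-back Full GCs that leave the heap at 23G of a 24G total), here is a minimal sketch that scans a gclog in the format shown and flags Full GC events that reclaimed little or no heap. The default file name "drillbit.gc" and the assumption that sizes are reported in whole gigabytes ("23G->23G(24G)") come from this excerpt, not from Drill itself; the heap-dump analysis suggested in the comment is still what identifies which objects are actually holding the memory.

{noformat}
#!/usr/bin/env python
# Minimal sketch (not part of Drill): scan a G1 gclog in the format shown
# above and flag Full GC events that reclaim little or no heap -- the
# "all objects still referenced" signature described in the comment.
# Assumptions: heap sizes appear in whole gigabytes, e.g. "23G->23G(24G)",
# as in this excerpt; "drillbit.gc" is a placeholder file name.
import re
import sys

FULL_GC = re.compile(
    r"^(?P<ts>[\d.]+): \[Full GC \((?P<cause>[^)]+)\) "
    r"(?P<before>\d+)G->(?P<after>\d+)G\((?P<total>\d+)G\), (?P<secs>[\d.]+) secs\]"
)

def scan(path):
    stuck = 0          # Full GCs that freed (almost) nothing
    total_pause = 0.0  # cumulative Full GC pause time in seconds
    with open(path) as gclog:
        for line in gclog:
            m = FULL_GC.match(line.strip())
            if m is None:
                continue
            before = int(m.group("before"))
            after = int(m.group("after"))
            secs = float(m.group("secs"))
            total_pause += secs
            if after >= before:
                stuck += 1
                print("%ss: Full GC (%s) %dG->%dG of %sG heap, paused %.2fs"
                      % (m.group("ts"), m.group("cause"), before, after,
                         m.group("total"), secs))
    print("stuck Full GCs: %d, total Full GC pause: %.1fs" % (stuck, total_pause))

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "drillbit.gc")
{noformat}

Run against the node 3 gclog (e.g. "python gc_check.py /path/to/gclog", script name hypothetical), a long run of stuck Full GCs with multi-second pauses would also explain the 29-98 second RPC durations in the node 3 output log, since the JVM is effectively paused during each Full GC.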