Date: Wed, 19 Sep 2018 17:47:00 +0000 (UTC)
From: "Xiaoyu Yao (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDDS-507) eventQueue should be shutdown on SCM shutdown

     [ https://issues.apache.org/jira/browse/HDDS-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HDDS-507:
----------------------------
    Attachment:     (was: HDDS-507-ozone-0.2.001.patch)

> eventQueue should be shutdown on SCM shutdown
> ---------------------------------------------
>
>                 Key: HDDS-507
>                 URL: https://issues.apache.org/jira/browse/HDDS-507
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>            Priority: Major
>         Attachments: HDDS-507-ozone-0.2.001.patch
>
>
> This can be reproduced by running TestNodeFailure multiple times. Jenkins sometimes also hits this.
>
> {code}
> Current thread (0x00007fbe6f018800):  JavaThread "EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, id=58639, stack(0x0000700018009000,0x0000700018109000)]
>
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x000000040000001d
>
>
>
> Stack: [0x0000700018009000,0x0000700018109000],  sp=0x0000700018108128,  free space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
> C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8
> C  [librocksdbjni6372054043595793813.jnilib+0x228368]  rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&)+0x58
> C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&)+0xe
> C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, rocksdb::DB**)+0x2a4
> C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, int)+0x137
> j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0
> j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17
> j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10
> j  org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222
> j  org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75
> j  org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18
> j  org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+5
> j  org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+6
> J 5844 C1 org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(Lorg/apache/hadoop/hdds/server/events/EventHandler;Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V (41 bytes) @ 0x0000000115c80bc4 [0x0000000115c80aa0+0x124]
> J 5670 C1 org.apache.hadoop.hdds.server.events.SingleThreadExecutor$$Lambda$143.run()V (20 bytes) @ 0x00000001168f625c [0x00000001168f61c0+0x9c]
> j  java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
> J 3226 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 0x0000000116356e44 [0x0000000116356d40+0x104]
> J 3107 C1 java.lang.Thread.run()V (17 bytes) @ 0x0000000115d7b0c4 [0x0000000115d7af80+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0x6ae
> V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x164
> V  [libjvm.dylib+0x2efb46]  JavaCalls::call_virtual(JavaValue*, Handle, KlassHandle, Symbol*, Symbol*, Thread*)+0x4a
> V  [libjvm.dylib+0x34a46d]  thread_entry(JavaThread*, Thread*)+0x7c
> V  [libjvm.dylib+0x56eb0f]  JavaThread::thread_main_inner()+0x9b
> V  [libjvm.dylib+0x57020a]  JavaThread::run()+0x1c2
> V  [libjvm.dylib+0x48d4a6]  java_start(Thread*)+0xf6
> C  [libsystem_pthread.dylib+0x3661]  _pthread_body+0x154
> C  [libsystem_pthread.dylib+0x350d]  _pthread_body+0x0
> C  [libsystem_pthread.dylib+0x2bf9]  thread_start+0xd
> C  0x0000000000000000
> {code}
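
The stack above shows the "EventQueue-PipelineCloseForPipelineCloseHandler" daemon thread still delivering a pipeline-close event, and therefore still calling RocksDBStore.put(), while SCM is shutting down; the SIGSEGV is consistent with the native RocksDB handle having already been closed underneath it. Below is a minimal sketch of the shutdown ordering the issue title asks for: drain the event-queue executor before closing the backing store. SimpleEventQueue, MetadataStore, and stopScm are hypothetical stand-ins for illustration, not the actual HDDS classes or the attached patch.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ScmShutdownSketch {

  /** Hypothetical stand-in for the real EventQueue: one daemon worker thread. */
  static class SimpleEventQueue implements AutoCloseable {
    private final ExecutorService executor =
        Executors.newSingleThreadExecutor(r -> {
          Thread t = new Thread(r, "EventQueue-PipelineCloseForPipelineCloseHandler");
          t.setDaemon(true);
          return t;
        });

    void fireEvent(Runnable handler) {
      executor.submit(handler);
    }

    /** Stop accepting new events and wait for in-flight handlers to finish. */
    @Override
    public void close() throws InterruptedException {
      executor.shutdown();
      if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
        executor.shutdownNow();
      }
    }
  }

  /** Hypothetical stand-in for the RocksDB-backed metadata store. */
  interface MetadataStore extends AutoCloseable {
    void put(byte[] key, byte[] value) throws Exception;
  }

  /** On SCM stop: drain the event queue first, close the store last. */
  static void stopScm(SimpleEventQueue eventQueue, MetadataStore store) throws Exception {
    eventQueue.close();  // after this returns, no handler can still touch the store
    store.close();       // now it is safe to close the native RocksDB handle
  }
}
{code}

The essential point is the ordering in stopScm: the event-queue executor is shut down and drained before the store is closed, so no late event handler can race with the teardown of the native handle.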