Date: Mon, 26 Jun 2017 12:44:00 +0000 (UTC)
From: "Brahma Reddy Battula (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Reopened] (HDFS-9473) access standby namenode slow

     [ https://issues.apache.org/jira/browse/HDFS-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula reopened HDFS-9473:
----------------------------------------

> access standby namenode slow
> ----------------------------
>
>            Key: HDFS-9473
>            URL: https://issues.apache.org/jira/browse/HDFS-9473
>        Project: Hadoop HDFS
>     Issue Type: Bug
>       Reporter: wei.he
>
> Accessing the standby NameNode is slow.
> We have a Hadoop cluster with 200 nodes and use HA NameNodes:
> nn1: hadoop109 (active)
> nn2: hadoop110 (standby)
>
> After we switch over the NameNodes:
> hadoop110 (nn2) is active
> hadoop109 (nn1) is standby
>
> When we access hdfs://hadoop109:8020 (hdfs dfs -ls hdfs://hadoop109:8020), the response is sometimes fast and sometimes slow.
> I tuned the RPC parameters dfs.namenode.handler.count and dfs.namenode.service.handler.count, raising the value from 105 to 150 (>= 20*ln(datanodes)), but the problem still occurred.
> Does anyone have the same problem, and could you give some suggestions?
>
> Take a look at the debug log:
>
> 15/11/27 19:37:46 DEBUG util.Shell: setsid exited with exit code 0
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing URL jar:file:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0.jar!/core-default.xml
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@720c653f
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing URL file:/usr/local/hadoop-2.4.0/etc/hadoop/core-site.xml
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@7038ce7b
> 15/11/27 19:37:47 DEBUG security.Groups: Creating new Groups object
> 15/11/27 19:37:47 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
> 15/11/27 19:37:47 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
> 15/11/27 19:37:47 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
> 15/11/27 19:37:47 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> 15/11/27 19:37:47 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: hadoop login
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: hadoop login commit
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hdfs
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: UGI loginUser:hdfs (auth:SIMPLE)
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = true
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
> 15/11/27 19:37:47 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
> 15/11/27 19:37:48 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@739820e5
> 15/11/27 19:37:48 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:37:49 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$1@7116778b: starting with interruptCheckPeriodMs = 60000
> 15/11/27 19:37:49 DEBUG hdfs.BlockReaderLocal: The short-circuit local reads feature is enabled.
> 15/11/27 19:37:49 DEBUG ipc.Client: The ping interval is 60000 ms.
> 15/11/27 19:37:49 DEBUG ipc.Client: Connecting to hadoop109:8020
> 15/11/27 19:37:49 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs: starting, having connections 1
> .........# response 4 mins #...........
> 15/11/27 19:37:49 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs sending #0
> 15/11/27 19:41:16 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs got value #0
> ......................
> ls: Operation category READ is not supported in state standby
> 15/11/27 19:41:16 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:41:16 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:41:16 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:41:16 DEBUG ipc.Client: Stopping client
> 15/11/27 19:41:16 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs: closed
> 15/11/27 19:41:16 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs: stopped, remaining connections 0
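Editor's note: the gap the reporter marks with "# response 4 mins #" can be read directly off the two timestamps around RPC call #0 in the log above: the request is sent at 19:37:49 and the reply only arrives at 19:41:16, after which the client surfaces the "Operation category READ is not supported in state standby" error. A minimal Python sketch of that subtraction (the timestamps are copied from the log; nothing else is assumed):

    from datetime import datetime

    # Timestamps copied from the debug log above ("sending #0" / "got value #0").
    fmt = "%y/%m/%d %H:%M:%S"
    sent = datetime.strptime("15/11/27 19:37:49", fmt)
    got = datetime.strptime("15/11/27 19:41:16", fmt)
    print(got - sent)   # -> 0:03:27, roughly the "4 mins" the reporter notes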
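A quick check of the sizing rule of thumb the reporter quotes (>= 20*ln(datanodes)): for the 200-node cluster in this report the guideline works out to roughly 106 handlers, so the values tried (105 and 150) already sit at or above it. A minimal Python sketch of that arithmetic (the node count is taken from the report; the rest is illustrative):

    import math

    # Handler-count guideline from the report: >= 20 * ln(number of DataNodes).
    datanodes = 200                        # cluster size stated in the report
    guideline = 20 * math.log(datanodes)   # math.log is the natural logarithm
    print(round(guideline))                # -> 106; the tried values 105 and 150 bracket this

That the ~3.5-minute delay persisted even at 150 handlers suggests the slowness is not simply a matter of dfs.namenode.handler.count or dfs.namenode.service.handler.count being undersized.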
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org