From: Dejan Menges
Date: Wed, 20 Jan 2016 11:26:38 +0000
Subject: Re: HDFS short-circuit tokens expiring
To: Nick Dimiduk, user@hadoop.apache.org
Cc: hbase-user

Hi Nick,

I had exactly the same case, and in ours it turned out the tokens were expiring too quickly.
What we increased was dfs.client.read.shortcircuit.streams.cache.size and
dfs.client.read.shortcircuit.streams.cache.expiry.ms.

Hope this helps.

Best,
Dejan

On Wed, Jan 20, 2016 at 12:15 AM Nick Dimiduk wrote:

> Hi folks,
>
> This looks like it sits at the intersection of HDFS and HBase. My region
> server logs are flooding with messages like "SecretManager$InvalidToken:
> access control error while attempting to set up short-circuit access
> to ... is expired" [0].
>
> These logs correspond with responseTooSlow WARNings from the region server.
>
> Maybe I have misconfigured short-circuit reads? Such an expiration seems
> like something the client, or the client's consumer, should handle by
> re-negotiating.
>
> Thanks a lot,
> -n
>
> [0]
>
> 2016-01-19 22:10:14,432 INFO
> [B.defaultRpcServer.handler=4,queue=1,port=16020]
> shortcircuit.ShortCircuitCache: ShortCircuitCache(0x71bdc547): could not
> load 1074037633_BP-1145309065-XXX-1448053136416 due to InvalidToken
> exception.
> org.apache.hadoop.security.token.SecretManager$InvalidToken: access
> control error while attempting to set up short-circuit access to <path>
> token with block_token_identifier (expiryDate=1453194430724,
> keyId=1508822027, userId=hbase,
> blockPoolId=BP-1145309065-XXX-1448053136416, blockId=1074037633,
> access modes=[READ]) is expired.
> at org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:591)
> at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:490)
> at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:782)
> at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:716)
> at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
> at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:618)
> at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:678)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1372)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1591)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
> at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:259)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:614)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:267)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:181)
> at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:256)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:817)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:792)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:621)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5486)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5637)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5424)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5410)
> at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$1.next(GroupedAggregateRegionObserver.java:510)
> at org.apache.phoenix.coprocessor.BaseRegionScanner.next(BaseRegionScanner.java:40)
> at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:60)
> at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
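
P.S. The two properties mentioned above are client-side settings, so they belong in the hdfs-site.xml read by the DFS client, which in this setup is the HBase region server process. A sketch of what the change might look like; the values below are purely illustrative, not tuned recommendations (the stock defaults are 256 cached streams and 300000 ms):

<!-- hdfs-site.xml on the DFS client hosts (here: the region servers).
     Values are illustrative only; tune for your workload. -->
<property>
  <!-- Number of short-circuit stream descriptors kept in the client cache -->
  <name>dfs.client.read.shortcircuit.streams.cache.size</name>
  <value>4096</value>
</property>
<property>
  <!-- How long an unused cached stream lives before being purged, in ms -->
  <name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name>
  <value>900000</value>
</property>

The region servers have to be restarted to pick up client-side config changes.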