Date: Sat, 20 May 2017 11:28:04 +0000 (UTC)
From: "Xiang Li (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-17997) jruby-complete-1.6.8.jar is in cached_classpath.txt

    [ https://issues.apache.org/jira/browse/HBASE-17997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16018402#comment-16018402 ]

Xiang Li commented on HBASE-17997:
----------------------------------

Hi [~ted_yu], I am unable to reproduce the exception described in this issue. Here is what I did: I ran mvn assembly:single against the latest master branch, deployed a standalone HBase from the resulting tar.gz, and then updated hbase-site.xml to set hbase.rootdir to an s3a location. The standalone HBase ran successfully. Would you please tell me more about how you produced the error?

By the way, the specification is for the usage of the Maven dependency plugin itself; I am not sure whether it is related to the error you met.

Also, regarding:

bq. This issue is for dev environment - we should allow developers to start hbase processes using s3a as storage.

Would you please describe this in more detail?
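A minimal sketch (not from the original comment) of the kind of standalone check described above. The class name and bucket are hypothetical; it assumes an hbase-site.xml with hbase.rootdir pointing at something like s3a://my-bucket/hbase is on the classpath, together with the hadoop-aws and AWS SDK jars:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical sanity check, not part of the HBase code base: confirms that the
// client configuration picks up an s3a hbase.rootdir and that the s3a filesystem
// can be instantiated from the same classpath HBase uses.
public class S3aRootdirCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
    String rootdir = conf.get("hbase.rootdir");       // assumed to be e.g. s3a://my-bucket/hbase
    if (rootdir == null) {
      System.err.println("hbase.rootdir is not set; is hbase-site.xml on the classpath?");
      return;
    }
    System.out.println("hbase.rootdir = " + rootdir);
    // Instantiating the filesystem exercises the hadoop-aws / aws-sdk classpath,
    // which is where the Joda-Time conflict in the stack trace below surfaces.
    try (FileSystem fs = new Path(rootdir).getFileSystem(conf)) {
      System.out.println("FileSystem implementation = " + fs.getClass().getName());
    }
  }
}
{code}

If this prints org.apache.hadoop.fs.s3a.S3AFileSystem without the Joda-Time warning, the classpath in use is likely free of the conflict described below.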
> jruby-complete-1.6.8.jar is in cached_classpath.txt
> ---------------------------------------------------
>
>                 Key: HBASE-17997
>                 URL: https://issues.apache.org/jira/browse/HBASE-17997
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>
> HBASE-15199 moves jruby-complete-1.6.8.jar to the lib/ruby directory.
> However, jruby-complete-1.6.8.jar still appears in cached_classpath.txt.
> This means that the user would see an exception similar to the following when starting hbase in standalone mode with s3a as rootdir:
> {code}
> 2017-05-04 16:41:32,854 WARN  [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=38659] internal.S3MetadataResponseHandler: Unable to parse last modified date: Thu, 04 May 2017 16:27:09 GMT
> java.lang.IllegalStateException: Joda-time 2.2 or later version is required, but found version: null
> at com.amazonaws.util.DateUtils.handleException(DateUtils.java:149)
> at com.amazonaws.util.DateUtils.parseRFC822Date(DateUtils.java:195)
> at com.amazonaws.services.s3.internal.ServiceUtils.parseRfc822Date(ServiceUtils.java:78)
> at com.amazonaws.services.s3.internal.AbstractS3ResponseHandler.populateObjectMetadata(AbstractS3ResponseHandler.java:115)
> at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:52)
> at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:30)
> at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:1072)
> at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:746)
> at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
> at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1191)
> at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:148)
> at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:281)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:364)
> at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:722)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1420)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1677)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:439)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:904)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:267)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:169)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:2132)
> at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2122)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5687)
> at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2679)
> at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2665)
> at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2647)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6906)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6885)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2007)
> {code}
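As a hedged aside, not from the original thread: the "found version: null" message above is consistent with the Joda-Time classes being resolved from a copy bundled inside jruby-complete-1.6.8.jar, whose manifest does not advertise a Joda-Time Implementation-Version, rather than from the standalone joda-time jar. A minimal, hypothetical diagnostic along the following lines can show where org.joda.time is actually loaded from on a given classpath; the class name is illustrative:

{code}
import org.joda.time.DateTime;

// Hypothetical diagnostic, not part of HBase: prints which jar served
// org.joda.time.DateTime and what Implementation-Version that jar's manifest
// reports for it. A copy bundled inside another jar (for example
// jruby-complete) typically reports null here.
public class JodaTimeSourceCheck {
  public static void main(String[] args) {
    Class<DateTime> clazz = DateTime.class;
    // Location of the jar (or directory) the class was loaded from.
    System.out.println("Loaded from: "
        + clazz.getProtectionDomain().getCodeSource().getLocation());
    // Manifest Implementation-Version seen for the org.joda.time package;
    // null matches the message in the stack trace above.
    System.out.println("Implementation-Version: "
        + clazz.getPackage().getImplementationVersion());
  }
}
{code}

Running it with the same classpath entries that appear in cached_classpath.txt should show whether the jruby-complete jar is the one serving Joda-Time.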