From: Jean-Marc Spaggiari
Date: Thu, 2 May 2013 16:23:03 -0400
Subject: Re: Premature EOF: no length prefix available
To: user@hbase.apache.org

Hi Andrew,

No, this AWS instance is configured with instance stores too. What do you
mean by "ephemeral"?

JM

2013/5/2 Andrew Purtell

> Oh, I have faced issues with Hadoop on AWS personally. :-) But not this
> one. I use instance-store aka "ephemeral" volumes for DataNode block
> storage. Are you by chance using EBS?
>
> On Thu, May 2, 2013 at 1:10 PM, Jean-Marc Spaggiari <
> jean-marc@spaggiari.org> wrote:
>
>> But that's weird. This instance is running on AWS. If there were issues
>> with Hadoop and AWS, I think some other people would have faced them
>> before me.
>>
>> OK. I will move the discussion to the Hadoop mailing list, since it
>> seems to be more related to Hadoop vs. the OS.
>>
>> Thanks,
>>
>> JM
>>
>> 2013/5/2 Andrew Purtell
>>
>>> > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient:
>>> > Exception in createBlockOutputStream java.io.EOFException:
>>> > Premature EOF: no length prefix available
>>>
>>> The DataNode aborted the block transfer.
>>>
>>> > 2013-05-02 14:02:41,063 ERROR
>>> > org.apache.hadoop.hdfs.server.datanode.DataNode:
>>> > ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error
>>> > processing WRITE_BLOCK operation src: /10.238.38.193:39831 dest:
>>> > /10.238.38.193:50010 java.io.FileNotFoundException:
>>> > /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta
>>> > (Invalid argument)
>>> >     at java.io.RandomAccessFile.open(Native Method)
>>> >     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>>
>>> This looks like the native (OS level) side of RAF got EINVAL back from
>>> create() or open(). Go from there.
>>>
>>> On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari <
>>> jean-marc@spaggiari.org> wrote:
>>>
>>>> Hi,
>>>>
>>>> Any idea what can be the cause of a "Premature EOF: no length prefix
>>>> available" error?
>>>>
>>>> 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
>>>> java.io.EOFException: Premature EOF: no length prefix available
>>>>     at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>>>>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
>>>>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
>>>>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
>>>> 2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
>>>> 2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
>>>>
>>>> I'm getting that on a server start. Logs are split correctly,
>>>> coprocessors are deployed correctly, and then I'm getting this
>>>> exception. It's excluding the datanode, and because of that almost
>>>> everything remaining is failing.
>>>>
>>>> There is only one server in this "cluster"... But even so, it should
>>>> be working. There is one master, one RS, one NN and one DN, on an AWS
>>>> host.
>>>>
>>>> At the same time, on the Hadoop DataNode side, I'm getting this:
>>>>
>>>> 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950 received exception java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
>>>> 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.238.38.193:39831 dest: /10.238.38.193:50010
>>>> java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
>>>>     at java.io.RandomAccessFile.open(Native Method)
>>>>     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>>>     at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.createStreams(ReplicaInPipeline.java:187)
>>>>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:199)
>>>>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:457)
>>>>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
>>>>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
>>>>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>>>>     at java.lang.Thread.run(Thread.java:662)
>>>>
>>>> Does this sound more like a Hadoop issue than an HBase one?
>>>>
>>>> JM
>
> --
> Best regards,
>
>   - Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
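[Editor's note] Andrew's diagnosis above hinges on how java.io.RandomAccessFile surfaces OS-level errors: the stack trace shows the DataNode creating the block's .meta file through the RandomAccessFile constructor, and when the native open()/create() fails (here with EINVAL), Java reports it as a FileNotFoundException whose message carries the errno text in parentheses — the "(Invalid argument)" seen in the log. The sketch below is not Hadoop code; it is a minimal standalone probe, with a hypothetical file name, that one could point at the DataNode volume (e.g. somewhere under /mnt/dfs/dn) to check whether the filesystem itself rejects creating such a file:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;

// Minimal probe (not HBase/Hadoop code): open a file the same way the
// DataNode does when creating a replica's .meta file -- via the
// RandomAccessFile constructor in "rw" mode, which creates the file if
// it does not exist. A native open()/create() failure surfaces as a
// FileNotFoundException with the OS errno text appended in parentheses.
public class MetaOpenProbe {
    public static void main(String[] args) {
        // Hypothetical name; on the failing host, point this at the actual
        // DataNode volume instead of java.io.tmpdir.
        File meta = new File(System.getProperty("java.io.tmpdir"),
                "blk_probe_0000000000000000000_0000000.meta");
        try (RandomAccessFile raf = new RandomAccessFile(meta, "rw")) {
            System.out.println("open ok: " + meta.getName());
        } catch (FileNotFoundException e) {
            // On the broken volume, this message would end with the same
            // "(Invalid argument)" errno text seen in the DataNode log.
            System.out.println("open failed: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("close failed: " + e.getMessage());
        } finally {
            meta.delete();
        }
    }
}
```

If the probe fails on the DataNode volume but succeeds elsewhere, the problem is in the filesystem or mount (as Andrew suggests), not in HBase or HDFS.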