From: Niranjan Subramanian
To: user@hadoop.apache.org
Subject: Exception while appending to an existing file in HDFS
Date: Fri, 30 Oct 2015 21:50:33 +0530

Hi guys,

I'm trying to append data to an already existing file in HDFS. This is the exception I get:

03:49:54,456 WARN org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run:628 DataStreamer Exception
java.lang.NullPointerException
    at com.google.protobuf.AbstractMessageLite$Builder.checkForNullValues(AbstractMessageLite.java:336)
    at com.google.protobuf.AbstractMessageLite$Builder.addAll(AbstractMessageLite.java:323)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$UpdatePipelineRequestProto$Builder.addAllStorageIDs(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:842)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1238)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
My replication factor is 1, and I'm using version 2.5.0 of Apache's Hadoop distribution. This is the code snippet I'm using to create the file if it doesn't exist, or open it in append mode if it does:

String url = getHadoopUrl() + fileName;
Path file = new Path(url);
try {
    if (append) {
        if (hadoopFileSystem.exists(file))
            fsDataOutputStream = hadoopFileSystem.append(file);
        else
            fsDataOutputStream = hadoopFileSystem.create(file);
    } else {
        fsDataOutputStream = hadoopFileSystem.create(file);
    }
} catch (IOException e) {
    // error handling trimmed from this snippet; the write actually fails
    // later, in the DataStreamer thread (see the stack trace above)
    e.printStackTrace();
}
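
For reference, here is a minimal standalone sketch of what I'm doing, simplified from my actual code. The hdfs://localhost:9000 URI and the /tmp/append-test.txt path are placeholders, not my real values; in my real code the URL comes from getHadoopUrl():

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // placeholder URI; substitute your own NameNode address
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
        Path file = new Path("/tmp/append-test.txt");

        // create on the first run, append on every run after that
        FSDataOutputStream out = fs.exists(file) ? fs.append(file) : fs.create(file);
        out.writeBytes("some data\n");
        out.close();
        fs.close();
    }
}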
Here is the stack trace that I found in the datanode's log:

2015-10-30 16:19:54,435 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1012136337-192.168.123.103-1411103100884:blk_1073742239_1421 src: /127.0.0.1:54160 dest: /127.0.0.1:50010
2015-10-30 16:19:54,435 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Appending to FinalizedReplica, blk_1073742239_1421, FINALIZED
  getNumBytes()     = 812
  getBytesOnDisk()  = 812
  getVisibleLength()= 812
  getVolume()       = /Users/niranjan/hadoop/hdfs/datanode/current
  getBlockFile()    = /Users/niranjan/hadoop/hdfs/datanode/current/BP-1012136337-192.168.123.103-1411103100884/current/finalized/blk_1073742239
  unlinked          = false
2015-10-30 16:19:54,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1012136337-192.168.123.103-1411103100884:blk_1073742239_1422
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:435)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:693)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:569)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
    at java.lang.Thread.run(Thread.java:745)

It's not clear to me what is causing this exception. Also, after reading various sources, I'm still confused about whether appending is supported in HDFS or not. Please let me know what I'm missing here.
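
In case it's relevant: as I understand it, append support is governed by the dfs.support.append flag, which I believe already defaults to true on Hadoop 2.x, so setting it is probably a no-op; the sketch below only shows how one could set it explicitly on the client Configuration to rule it out (I haven't verified that this affects the exception at all):

Configuration conf = new Configuration();
// dfs.support.append should already default to true on Hadoop 2.x;
// setting it explicitly here is just to rule it out
conf.setBoolean("dfs.support.append", true);
FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf); // placeholder URI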

Regards,
Niranjan