Date: Tue, 8 Jul 2014 13:29:04 +0000 (UTC)
From: "Brahma Reddy Battula (JIRA)"
To: hdfs-dev@hadoop.apache.org
Subject: [jira] [Created] (HDFS-6641) [ HDFS- File Concat ] Concat will fail when block is not full

Brahma Reddy Battula created HDFS-6641:
------------------------------------------

             Summary: [ HDFS- File Concat ] Concat will fail when block is not full
                 Key: HDFS-6641
                 URL: https://issues.apache.org/jira/browse/HDFS-6641
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 2.4.1
            Reporter: Brahma Reddy Battula


Usually we can't ensure the last block is always full... please let me know the purpose of the following check:

    long blockSize = trgInode.getPreferredBlockSize();
    // check the end block to be full
    final BlockInfo last = trgInode.getLastBlock();
    if (blockSize != last.getNumBytes()) {
      throw new HadoopIllegalArgumentException("The last block in " + target
          + " is not full; last block size = " + last.getNumBytes()
          + " but file block size = " + blockSize);
    }

If this is an issue, I'll file a JIRA. Following is the trace:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.HadoopIllegalArgumentException): The last block in /Test.txt is not full; last block size = 14 but file block size = 134217728
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInternal(FSNamesystem.java:1887)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInt(FSNamesystem.java:1833)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:1795)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:704)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:512)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

--
This message was sent by Atlassian JIRA
(v6.2#6252)
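
A minimal reproduction sketch for the scenario described above. It is not part of the original report: it assumes fs.defaultFS points at an HDFS cluster (so FileSystem.get returns a DistributedFileSystem, which implements concat), and the paths, file contents, and 128 MB default block size are illustrative assumptions chosen to match the quoted trace.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ConcatRepro {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // assumes fs.defaultFS points at an HDFS cluster
        FileSystem fs = FileSystem.get(conf);       // concat() requires DistributedFileSystem

        Path target = new Path("/Test.txt");        // target file name taken from the quoted trace
        Path src = new Path("/Test-src.txt");       // hypothetical source file

        // Write 14 bytes, so the target's only block is far from the 128 MB
        // preferred block size, i.e. the last block is not full.
        try (FSDataOutputStream out = fs.create(target, true)) {
          out.writeBytes("14 bytes here.");
        }
        try (FSDataOutputStream out = fs.create(src, true)) {
          out.writeBytes("source data");
        }

        // Expected to fail with RemoteException(HadoopIllegalArgumentException):
        // "The last block in /Test.txt is not full; last block size = 14 ..."
        fs.concat(target, new Path[] { src });
      }
    }

With the check quoted from FSNamesystem in place, the concat call is rejected by the namenode whenever the target's last block is shorter than the file's preferred block size, which is what the reported trace shows.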