From: hairong@apache.org
To: hdfs-commits@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: svn commit: r888525 - in /hadoop/hdfs/branches/branch-0.21: ./ .eclipse.templates/.launches/ src/contrib/ src/contrib/hdfsproxy/ src/java/ src/java/org/apache/hadoop/hdfs/ src/java/org/apache/hadoop/hdfs/protocol/ src/java/org/apache/hadoop/hdfs/server...
Date: Tue, 08 Dec 2009 19:06:28 -0000
Message-Id: <20091208190629.B20752388962@eris.apache.org>

Author: hairong
Date: Tue Dec 8 19:06:27 2009
New Revision: 888525

URL: http://svn.apache.org/viewvc?rev=888525&view=rev
Log:
Merge -r 888507 and 888519 to move the change of HDFS-793 from trunk to branch 0.21.
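HDFS-793 changes the write-pipeline ack protocol so that a datanode reads the entire ack message from its downstream peer before it constructs and sends its own ack, instead of relaying one status code at a time. The snippet below is a minimal, self-contained sketch of that aggregation idea only; it is not the PipelineAck code introduced by this patch, and the names (AckSketch, AckAggregationDemo) and the plain short status codes are invented for illustration.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

/** Illustrative stand-in for an aggregated pipeline ack (not the HDFS class). */
class AckSketch {
  long seqno;        // sequence number of the acked packet
  short[] statuses;  // one status per datanode; 0 stands in for SUCCESS here

  /** Read a complete ack: seqno, reply count, then every reply. */
  void readFrom(DataInput in) throws IOException {
    seqno = in.readLong();
    short n = in.readShort();
    statuses = new short[n];
    for (int i = 0; i < n; i++) {
      statuses[i] = in.readShort();
    }
  }

  /** Write the complete ack in the same order it is read. */
  void writeTo(DataOutput out) throws IOException {
    out.writeLong(seqno);
    out.writeShort(statuses.length);
    for (short s : statuses) {
      out.writeShort(s);
    }
  }
}

public class AckAggregationDemo {
  public static void main(String[] args) throws IOException {
    // Downstream datanode's ack for packet 42 with one SUCCESS reply.
    AckSketch downstream = new AckSketch();
    downstream.seqno = 42;
    downstream.statuses = new short[] {0};

    ByteArrayOutputStream wire = new ByteArrayOutputStream();
    downstream.writeTo(new DataOutputStream(wire));

    // This datanode first reads the *whole* downstream ack ...
    AckSketch received = new AckSketch();
    received.readFrom(new DataInputStream(new ByteArrayInputStream(wire.toByteArray())));

    // ... then prepends its own status and forwards one combined message upstream.
    AckSketch forwarded = new AckSketch();
    forwarded.seqno = received.seqno;
    forwarded.statuses = new short[received.statuses.length + 1];
    forwarded.statuses[0] = 0; // this node's own SUCCESS
    System.arraycopy(received.statuses, 0, forwarded.statuses, 1, received.statuses.length);

    System.out.println("forwarded seqno=" + forwarded.seqno
        + " statuses=" + Arrays.toString(forwarded.statuses));
  }
}

Bundling every reply into one message lets each node, and ultimately the client, see the status of the whole downstream pipeline at once, which is what the new DATA_TRANSFER_VERSION 19 format in the diff below encodes as seqno, numberOfReplies, reply0, reply1, ...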
Modified:
    hadoop/hdfs/branches/branch-0.21/   (props changed)
    hadoop/hdfs/branches/branch-0.21/.eclipse.templates/.launches/   (props changed)
    hadoop/hdfs/branches/branch-0.21/CHANGES.txt   (contents, props changed)
    hadoop/hdfs/branches/branch-0.21/build.xml   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/contrib/build.xml   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/contrib/hdfsproxy/   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/java/   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/DFSClient.java
    hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/protocol/DataTransferProtocol.java
    hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/protocol/RecoveryInProgressException.java   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
    hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/test/aop/org/apache/hadoop/hdfs/protocol/   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
    hadoop/hdfs/branches/branch-0.21/src/test/hdfs/   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/test/hdfs/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
    hadoop/hdfs/branches/branch-0.21/src/webapps/datanode/   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/webapps/hdfs/   (props changed)
    hadoop/hdfs/branches/branch-0.21/src/webapps/secondary/   (props changed)

Propchange: hadoop/hdfs/branches/branch-0.21/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs:713112
 /hadoop/hdfs/branches/HDFS-265:796829-820463
-/hadoop/hdfs/trunk:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/.eclipse.templates/.launches/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1 +1 @@
-/hadoop/hdfs/trunk/.eclipse.templates/.launches:817853-817863,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/.eclipse.templates/.launches:817853-817863,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Modified: hadoop/hdfs/branches/branch-0.21/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/branch-0.21/CHANGES.txt?rev=888525&r1=888524&r2=888525&view=diff
==============================================================================
--- hadoop/hdfs/branches/branch-0.21/CHANGES.txt (original)
+++ hadoop/hdfs/branches/branch-0.21/CHANGES.txt Tue Dec 8 19:06:27 2009
@@ -508,6 +508,9 @@
     HDFS-723. Fix deadlock in DFSClient#DFSOutputStream. (hairong)
 
+    HDFS-793. Data node should receive the whole packet ack message before it
+    constructs and sends its own ack message for the packet. (hairong)
+
 Release 0.20.1 - 2009-09-01
 
   IMPROVEMENTS

Propchange: hadoop/hdfs/branches/branch-0.21/CHANGES.txt
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/CHANGES.txt:713112
 /hadoop/hdfs/branches/HDFS-265/CHANGES.txt:796829-820463
-/hadoop/hdfs/trunk/CHANGES.txt:817853-817863,818294-818298,818801,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/CHANGES.txt:817853-817863,818294-818298,818801,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/build.xml
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,4 +1,4 @@
 /hadoop/core/branches/branch-0.19/hdfs/build.xml:713112
 /hadoop/core/trunk/build.xml:779102
 /hadoop/hdfs/branches/HDFS-265/build.xml:796829-820463
-/hadoop/hdfs/trunk/build.xml:817853-817863,818294-818298,818801,824552,824944,825229,826149,828116,828926,829258,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/build.xml:817853-817863,818294-818298,818801,824552,824944,825229,826149,828116,828926,829258,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/src/contrib/build.xml
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/contrib/build.xml:713112
 /hadoop/hdfs/branches/HDFS-265/src/contrib/build.xml:796829-820463
-/hadoop/hdfs/trunk/src/contrib/build.xml:817853-817863,818294-818298,818801,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/contrib/build.xml:817853-817863,818294-818298,818801,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/src/contrib/hdfsproxy/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,4 +1,4 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/contrib/hdfsproxy:713112
 /hadoop/core/trunk/src/contrib/hdfsproxy:776175-784663
 /hadoop/hdfs/branches/HDFS-265/src/contrib/hdfsproxy:796829-820463
-/hadoop/hdfs/trunk/src/contrib/hdfsproxy:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/contrib/hdfsproxy:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/src/java/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,4 +1,4 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/java:713112
 /hadoop/core/trunk/src/hdfs:776175-785643,785929-786278
 /hadoop/hdfs/branches/HDFS-265/src/java:796829-820463
-/hadoop/hdfs/trunk/src/java:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/java:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Modified: hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/DFSClient.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/DFSClient.java?rev=888525&r1=888524&r2=888525&view=diff
==============================================================================
--- hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/DFSClient.java (original)
+++ hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/DFSClient.java Tue Dec 8 19:06:27 2009
@@ -85,6 +85,7 @@
 import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
 import org.apache.hadoop.hdfs.protocol.NSQuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.DataTransferProtocol.BlockConstructionStage;
+import org.apache.hadoop.hdfs.protocol.DataTransferProtocol.PipelineAck;
 import org.apache.hadoop.hdfs.security.BlockAccessToken;
 import org.apache.hadoop.hdfs.security.InvalidAccessTokenException;
 import org.apache.hadoop.hdfs.server.common.HdfsConstants;
@@ -2852,15 +2853,20 @@
       public void run() {
 
         this.setName("ResponseProcessor for block " + block);
+        PipelineAck ack = new PipelineAck();
 
         while (!responderClosed && clientRunning && !isLastPacketInBlock) {
           // process responses from datanodes.
           try {
-            // verify seqno from datanode
-            long seqno = blockReplyStream.readLong();
-            LOG.debug("DFSClient received ack for seqno " + seqno);
+            // read an ack from the pipeline
+            ack.readFields(blockReplyStream);
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("DFSClient " + ack);
+            }
+
+            long seqno = ack.getSeqno();
             Packet one = null;
-            if (seqno == -1) {
+            if (seqno == PipelineAck.HEART_BEAT.getSeqno()) {
               continue;
             } else if (seqno == -2) {
               // no nothing
@@ -2877,20 +2883,9 @@
             }
             // processes response status from all datanodes.
-            String replies = null;
-            if (LOG.isDebugEnabled()) {
-              replies = "DFSClient Replies for seqno " + seqno + " are";
-            }
-            for (int i = 0; i < targets.length && clientRunning; i++) {
-              final DataTransferProtocol.Status reply
-                  = DataTransferProtocol.Status.read(blockReplyStream);
-              if (LOG.isDebugEnabled()) {
-                replies += " " + reply;
-              }
+            for (int i = ack.getNumOfReplies()-1; i >=0 && clientRunning; i--) {
+              final DataTransferProtocol.Status reply = ack.getReply(i);
               if (reply != SUCCESS) {
-                if (LOG.isDebugEnabled()) {
-                  LOG.debug(replies);
-                }
                 errorIndex = i; // first bad datanode
                 throw new IOException("Bad response " + reply +
                                       " for block " + block +
@@ -2899,10 +2894,6 @@
               }
             }
 
-            if (LOG.isDebugEnabled()) {
-              LOG.debug(replies);
-            }
-
             if (one == null) {
               throw new IOException("Panic: responder did not receive " +
                                     "an ack for a packet: " + seqno);

Modified: hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/protocol/DataTransferProtocol.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/protocol/DataTransferProtocol.java?rev=888525&r1=888524&r2=888525&view=diff
==============================================================================
--- hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/protocol/DataTransferProtocol.java (original)
+++ hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/protocol/DataTransferProtocol.java Tue Dec 8 19:06:27 2009
@@ -26,6 +26,7 @@
 
 import org.apache.hadoop.hdfs.security.BlockAccessToken;
 import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableUtils;
 
 /**
@@ -39,12 +40,11 @@
  * when protocol changes. It is not very obvious.
  */
 /*
- * Version 17:
- *    Change the block write protocol to support pipeline recovery.
- *    Additional fields, like recovery flags, new GS, minBytesRcvd,
- *    and maxBytesRcvd are included.
+ * Version 19:
+ *    Change the block packet ack protocol to include seqno,
+ *    numberOfReplies, reply0, reply1, ...
  */
-  public static final int DATA_TRANSFER_VERSION = 17;
+  public static final int DATA_TRANSFER_VERSION = 19;
 
   /** Operation */
   public enum Op {
@@ -453,4 +453,94 @@
       return t;
     }
   }
+
+  /** reply **/
+  public static class PipelineAck implements Writable {
+    private long seqno;
+    private Status replies[];
+    final public static PipelineAck HEART_BEAT = new PipelineAck(-1, new Status[0]);
+
+    /** default constructor **/
+    public PipelineAck() {
+    }
+
+    /**
+     * Constructor
+     * @param seqno sequence number
+     * @param replies an array of replies
+     */
+    public PipelineAck(long seqno, Status[] replies) {
+      this.seqno = seqno;
+      this.replies = replies;
+    }
+
+    /**
+     * Get the sequence number
+     * @return the sequence number
+     */
+    public long getSeqno() {
+      return seqno;
+    }
+
+    /**
+     * Get the number of replies
+     * @return the number of replies
+     */
+    public short getNumOfReplies() {
+      return (short)replies.length;
+    }
+
+    /**
+     * get the ith reply
+     * @return the ith reply
+     */
+    public Status getReply(int i) {
+      return replies[i];
+    }
+
+    /**
+     * Check if this ack contains error status
+     * @return true if all statuses are SUCCESS
+     */
+    public boolean isSuccess() {
+      for (Status reply : replies) {
+        if (reply != Status.SUCCESS) {
+          return false;
+        }
+      }
+      return true;
+    }
+
+    /**** Writable interface ****/
+    @Override // Writable
+    public void readFields(DataInput in) throws IOException {
+      seqno = in.readLong();
+      short numOfReplies = in.readShort();
+      replies = new Status[numOfReplies];
+      for (int i=0; i<numOfReplies; i++) {
+        replies[i] = Status.read(in);
+      }
+    }
+
+    @Override // Writable
+    public void write(DataOutput out) throws IOException {
+      out.writeLong(seqno);
+      out.writeShort(getNumOfReplies());
+      for (Status reply : replies) {
+        reply.write(out);
+      }
+    }
+  }

Modified: hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java?rev=888525&r1=888524&r2=888525&view=diff
==============================================================================
--- hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java (original)
+++ hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java Tue Dec 8 19:06:27 2009
             if (now - lastHeartbeat > datanode.socketTimeout/2) {
-              replyOut.writeLong(-1); // send heartbeat
+              PipelineAck.HEART_BEAT.write(replyOut); // send heart beat
               replyOut.flush();
+              if (LOG.isDebugEnabled()) {
+                LOG.debug("PacketResponder " + numTargets +
+                          " for block " + block +
+                          " sent a heartbeat");
+              }
               lastHeartbeat = now;
             }
           }
@@ -843,7 +848,7 @@
             lastPacket = true;
           }
 
-          ackReply(expected);
+          new PipelineAck(expected, new Status[]{SUCCESS}).write(replyOut);
           replyOut.flush();
           // remove the packet from the ack queue
           removeAckHead();
@@ -870,14 +875,6 @@
                " for block " + block +
                " terminating");
     }
 
-    // This method is introduced to facilitate testing. Otherwise
-    // there was a little chance to bind an AspectJ advice to such a sequence
-    // of calls
-    private void ackReply(long expected) throws IOException {
-      replyOut.writeLong(expected);
-      SUCCESS.write(replyOut);
-    }
-
     /**
      * Thread to process incoming acks.
     * @see java.lang.Runnable#run()
@@ -896,24 +893,23 @@
         boolean isInterrupted = false;
         try {
-          DataTransferProtocol.Status op = SUCCESS;
           boolean didRead = false;
           Packet pkt = null;
           long expected = -2;
+          PipelineAck ack = new PipelineAck();
           try {
-            // read seqno from downstream datanode
-            long seqno = mirrorIn.readLong();
+            // read an ack from downstream datanode
+            ack.readFields(mirrorIn);
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("PacketResponder " + numTargets + " got " + ack);
+            }
+            long seqno = ack.getSeqno();
             didRead = true;
-            if (seqno == -1) {
-              replyOut.writeLong(-1); // send keepalive
+            if (seqno == PipelineAck.HEART_BEAT.getSeqno()) {
+              ack.write(replyOut);
              replyOut.flush();
-              LOG.debug("PacketResponder " + numTargets + " got -1");
              continue;
-            } else if (seqno == -2) {
-              LOG.debug("PacketResponder " + numTargets + " got -2");
-            } else {
-              LOG.debug("PacketResponder " + numTargets + " got seqno = " +
-                        seqno);
+            } else if (seqno >= 0) {
               synchronized (this) {
                 while (running && datanode.shouldRun && ackQueue.size() == 0) {
                   if (LOG.isDebugEnabled()) {
@@ -931,7 +927,6 @@
                 }
                 pkt = ackQueue.getFirst();
                 expected = pkt.seqno;
-                LOG.debug("PacketResponder " + numTargets + " seqno = " + seqno);
                 if (seqno != expected) {
                   throw new IOException("PacketResponder " + numTargets +
                                         " for block " + block +
@@ -964,10 +959,6 @@
             continue;
           }
 
-          if (!didRead) {
-            op = ERROR;
-          }
-
          // If this is the last packet in block, then close block
          // file and finalize the block before responding success
          if (lastPacketInBlock) {
@@ -990,54 +981,42 @@
            }
          }
 
-          // send my status back to upstream datanode
-          ackReply(expected);
-
-          LOG.debug("PacketResponder " + numTargets +
-                    " for block " + block +
-                    " responded my status " +
-                    " for seqno " + expected);
-
-          boolean success = true;
-          // forward responses from downstream datanodes.
-          for (int i = 0; i < numTargets && datanode.shouldRun; i++) {
-            try {
-              if (op == SUCCESS) {
-                op = Status.read(mirrorIn);
-                if (op != SUCCESS) {
-                  success = false;
-                  LOG.debug("PacketResponder for block " + block +
-                            ": error code received from downstream " +
-                            " datanode[" + i + "] " + op);
-                }
-              }
-            } catch (Throwable e) {
-              op = ERROR;
-              success = false;
+          // construct my ack message
+          Status[] replies = null;
+          if (!didRead) { // no ack is read
+            replies = new Status[2];
+            replies[0] = SUCCESS;
+            replies[1] = ERROR;
+          } else {
+            replies = new Status[1+ack.getNumOfReplies()];
+            replies[0] = SUCCESS;
+            for (int i=0; i<ack.getNumOfReplies(); i++) {
+              replies[i+1] = ack.getReply(i);
+            }
+          }
+          PipelineAck replyAck = new PipelineAck(expected, replies);
+          // send my ack back to upstream datanode
+          replyAck.write(replyOut);
+          replyOut.flush();
-              if (success && pkt.lastByteInBlock>replicaInfo.getBytesAcked()) {
+              if (replyAck.isSuccess() &&
+                  pkt.lastByteInBlock>replicaInfo.getBytesAcked()) {
                 replicaInfo.setBytesAcked(pkt.lastByteInBlock);
               }
             }
-          // If we were unable to read the seqno from downstream, then stop.
-          if (expected == -2) {
-            running = false;
-          }
          // If we forwarded an error response from a downstream datanode
          // and we are acting on behalf of a client, then we quit. The
          // client will drive the recovery mechanism.
-          if (op == ERROR && receiver.clientName.length() > 0) {
+          if (!replyAck.isSuccess() && receiver.clientName.length() > 0) {
            running = false;
          }
        } catch (IOException e) {

Propchange: hadoop/hdfs/branches/branch-0.21/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -3,4 +3,4 @@
 /hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DatanodeBlockInfo.java:776175-785643,785929-786278
 /hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java:776175-785643,785929-786278
 /hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java:796829-820463
-/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/src/test/aop/org/apache/hadoop/hdfs/protocol/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1 +1 @@
-/hadoop/hdfs/trunk/src/test/aop/org/apache/hadoop/hdfs/protocol:817853-817863,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/test/aop/org/apache/hadoop/hdfs/protocol:817853-817863,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Modified: hadoop/hdfs/branches/branch-0.21/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/branch-0.21/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj?rev=888525&r1=888524&r2=888525&view=diff
==============================================================================
--- hadoop/hdfs/branches/branch-0.21/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj (original)
+++ hadoop/hdfs/branches/branch-0.21/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj Tue Dec 8 19:06:27 2009
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.server.datanode;
 
 import java.io.DataInput;
+import java.io.DataOutput;
 import java.io.IOException;
 import java.io.OutputStream;
 
@@ -31,6 +32,7 @@
 import org.apache.hadoop.hdfs.PipelinesTestUtil.PipelinesTest;
 import org.apache.hadoop.hdfs.PipelinesTestUtil.NodeBytes;
 import org.apache.hadoop.hdfs.protocol.DataTransferProtocol.Status;
+import org.apache.hadoop.hdfs.protocol.DataTransferProtocol.PipelineAck;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
 import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;
 
@@ -140,7 +142,7 @@
   }
 
   pointcut preventAckSending () :
-    call (void ackReply(long))
+    call (void PipelineAck.write(DataOutput))
     && within (PacketResponder);
 
   static int ackCounter = 0;
@@ -193,7 +195,7 @@
   }
 
   pointcut pipelineAck(BlockReceiver.PacketResponder packetresponder) :
-    call (Status Status.read(DataInput))
+    call (void PipelineAck.readFields(DataInput))
     && this(packetresponder);
 
   after(BlockReceiver.PacketResponder packetresponder) throws IOException

Propchange: hadoop/hdfs/branches/branch-0.21/src/test/hdfs/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,4 +1,4 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/test/hdfs:713112
 /hadoop/core/trunk/src/test/hdfs:776175-785643
 /hadoop/hdfs/branches/HDFS-265/src/test/hdfs:796829-820463
-/hadoop/hdfs/trunk/src/test/hdfs:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/test/hdfs:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Modified: hadoop/hdfs/branches/branch-0.21/src/test/hdfs/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/branch-0.21/src/test/hdfs/org/apache/hadoop/hdfs/TestDataTransferProtocol.java?rev=888525&r1=888524&r2=888525&view=diff
==============================================================================
--- hadoop/hdfs/branches/branch-0.21/src/test/hdfs/org/apache/hadoop/hdfs/TestDataTransferProtocol.java (original)
+++ hadoop/hdfs/branches/branch-0.21/src/test/hdfs/org/apache/hadoop/hdfs/TestDataTransferProtocol.java Tue Dec 8 19:06:27 2009
@@ -19,6 +19,8 @@
 import static org.apache.hadoop.hdfs.protocol.DataTransferProtocol.Op.READ_BLOCK;
 import static org.apache.hadoop.hdfs.protocol.DataTransferProtocol.Op.WRITE_BLOCK;
+import static org.apache.hadoop.hdfs.protocol.DataTransferProtocol.PipelineAck;
+import static org.apache.hadoop.hdfs.protocol.DataTransferProtocol.Status;
 import static org.apache.hadoop.hdfs.protocol.DataTransferProtocol.Status.ERROR;
 import static org.apache.hadoop.hdfs.protocol.DataTransferProtocol.Status.SUCCESS;
 
@@ -157,9 +159,8 @@
 
     //ok finally write a block with 0 len
     SUCCESS.write(recvOut);
-    Text.writeString(recvOut, ""); // first bad node
-    recvOut.writeLong(100); // sequencenumber
-    SUCCESS.write(recvOut);
+    Text.writeString(recvOut, "");
+    new PipelineAck(100, new Status[]{SUCCESS}).write(recvOut);
     sendRecvData(description, false);
   }
 
@@ -381,9 +382,8 @@
     // bad data chunk length
     sendOut.writeInt(-1-random.nextInt(oneMil));
     SUCCESS.write(recvOut);
-    Text.writeString(recvOut, ""); // first bad node
-    recvOut.writeLong(100); // sequencenumber
-    ERROR.write(recvOut);
+    Text.writeString(recvOut, "");
+    new PipelineAck(100, new Status[]{ERROR}).write(recvOut);
 
     sendRecvData("negative DATA_CHUNK len while writing block " + newBlockId,
                  true);
@@ -406,9 +406,8 @@
     sendOut.flush();
     //ok finally write a block with 0 len
     SUCCESS.write(recvOut);
-    Text.writeString(recvOut, ""); // first bad node
-    recvOut.writeLong(100); // sequencenumber
-    SUCCESS.write(recvOut);
+    Text.writeString(recvOut, "");
+    new PipelineAck(100, new Status[]{SUCCESS}).write(recvOut);
     sendRecvData("Writing a zero len block blockid " + newBlockId, false);
 
     /* Test OP_READ_BLOCK */

Propchange: hadoop/hdfs/branches/branch-0.21/src/webapps/datanode/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,4 +1,4 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/webapps/datanode:713112
 /hadoop/core/trunk/src/webapps/datanode:776175-784663
 /hadoop/hdfs/branches/HDFS-265/src/webapps/datanode:796829-820463
-/hadoop/hdfs/trunk/src/webapps/datanode:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/webapps/datanode:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/src/webapps/hdfs/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,4 +1,4 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/webapps/hdfs:713112
 /hadoop/core/trunk/src/webapps/hdfs:776175-784663
 /hadoop/hdfs/branches/HDFS-265/src/webapps/hdfs:796829-820463
-/hadoop/hdfs/trunk/src/webapps/hdfs:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/webapps/hdfs:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519

Propchange: hadoop/hdfs/branches/branch-0.21/src/webapps/secondary/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 8 19:06:27 2009
@@ -1,4 +1,4 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/webapps/secondary:713112
 /hadoop/core/trunk/src/webapps/secondary:776175-784663
 /hadoop/hdfs/branches/HDFS-265/src/webapps/secondary:796829-820463
-/hadoop/hdfs/trunk/src/webapps/secondary:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084
+/hadoop/hdfs/trunk/src/webapps/secondary:817853-817863,818294-818298,824552,824944,826149,828116,828926,829880,829894,830003,831436,831455-831490,832043,833499,835728,880971,881014,881017,884432,888084,888507,888519
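For completeness, here is a similarly hedged, standalone sketch of how a writer could consume such a combined ack to locate a failing datanode, loosely modeled on the DFSClient hunk above (which walks the replies from the last index down). The names and plain int status codes are invented for illustration; the real client works with DataTransferProtocol.Status values and its own pipeline-recovery state.

import java.util.Arrays;

/**
 * Standalone sketch (invented names): given the replies from a combined
 * pipeline ack, find the index of a datanode that did not report success,
 * walking from the last reply toward the first as the DFSClient hunk does.
 */
public class AckScanDemo {
  static final int SUCCESS = 0; // stand-in for DataTransferProtocol.Status.SUCCESS

  /** Returns the index of a non-success reply, or -1 if every datanode succeeded. */
  static int findBadDatanode(int[] replies) {
    for (int i = replies.length - 1; i >= 0; i--) {
      if (replies[i] != SUCCESS) {
        return i; // the writer would mark this datanode and trigger pipeline recovery
      }
    }
    return -1;
  }

  public static void main(String[] args) {
    int[] allGood = {SUCCESS, SUCCESS, SUCCESS};
    int[] oneBad  = {SUCCESS, 2 /* illustrative error code */, SUCCESS};

    System.out.println(Arrays.toString(allGood) + " -> " + findBadDatanode(allGood)); // -1
    System.out.println(Arrays.toString(oneBad)  + " -> " + findBadDatanode(oneBad));  // 1
  }
}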