From: Íñigo Goiri (JIRA)
To: hdfs-issues@hadoop.apache.org
Date: Thu, 19 Oct 2017 20:15:00 +0000 (UTC)
Subject: [jira] [Commented] (HDFS-12620) Backporting HDFS-10467 to branch-2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

    [ https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211660#comment-16211660 ]

Íñigo Goiri commented on HDFS-12620:
------------------------------------

I got the following:

{code}
Total Elapsed time: 925m 35s

-1 overall

 _____     _ _                 _
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)

| Vote |  Subsystem |  Runtime  | Comment
============================================================================
|  0   | shellcheck |  0m 0s    | Shellcheck was not available.
|  0   | findbugs   |  0m 0s    | Findbugs executables are not available.
|  +1  | @author    |  0m 0s    | The patch does not contain any @author tags.
|  +1  | test4tests |  0m 0s    | The patch appears to include 25 new or modified test files.
|  +1  | mvninstall |  5m 53s   | branch-2 passed
|  +1  | compile    |  0m 37s   | branch-2 passed
|  +1  | checkstyle |  0m 28s   | branch-2 passed
|  +1  | mvnsite    |  0m 45s   | branch-2 passed
|  +1  | mvneclipse |  0m 14s   | branch-2 passed
|  +1  | javadoc    |  0m 50s   | branch-2 passed
|  +1  | mvninstall |  0m 38s   | the patch passed
|  +1  | compile    |  0m 36s   | the patch passed
|  +1  | cc         |  0m 36s   | the patch passed
|  +1  | javac      |  0m 36s   | the patch passed
|  -1  | checkstyle |  0m 27s   | hadoop-hdfs-project/hadoop-hdfs: The patch generated 11 new + 624 unchanged - 0 fixed = 635 total (was 624)
|  +1  | mvnsite    |  0m 44s   | the patch passed
|  +1  | mvneclipse |  0m 11s   | the patch passed
|  +1  | shelldocs  |  0m 3s    | There were no new shelldocs issues.
|  +1  | whitespace |  0m 0s    | The patch has no whitespace issues.
|  +1  | xml        |  0m 1s    | The patch has no ill-formed XML file.
|  +1  | javadoc    |  0m 54s   | the patch passed
|  -1  | unit       | 891m 25s  | hadoop-hdfs in the patch failed.
|  +1  | asflicense | 20m 29s   | The patch does not generate ASF License warnings.
|      |            | 925m 35s  |

| Reason | Tests |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodePeerMetrics |
| | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress |
| | hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement |
| Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | org.apache.hadoop.hdfs.TestRestartDFS |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | org.apache.hadoop.hdfs.server.datanode.TestBlockCountersInPendingIBR |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetricsLogger |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeFaultInjector |
| | org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones |
| | org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean |
| | org.apache.hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
| | org.apache.hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
| | org.apache.hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
| | org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
| | org.apache.hadoop.hdfs.server.datanode.TestBlockScanner |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline |
| | org.apache.hadoop.hdfs.server.datanode.TestDiskError |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeReconfiguration |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
| | org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeTransferSocketSize |
| | org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat |
| | org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes |
| | org.apache.hadoop.hdfs.server.datanode.TestTransferRbw |
| | org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeMetrics |
| | org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
| | org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit |
| | org.apache.hadoop.hdfs.server.datanode.TestBatchIbr |
| | org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
| | org.apache.hadoop.hdfs.server.datanode.TestHSync |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
| | org.apache.hadoop.hdfs.server.datanode.TestCachingStrategy |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeECN |
| | org.apache.hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
| | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| | org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
| | org.apache.hadoop.hdfs.server.datanode.TestStorageReport |
| | org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement |

|| Subsystem || Report/Notes ||
============================================================================
| JIRA Issue | HDFS-12620 |
| Optional Tests | asflicense xml compile javac javadoc mvninstall mvnsite unit shellcheck shelldocs findbugs checkstyle cc |
| uname | Linux cisl-linux-002 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/hadoop/hadoop-oss/patchprocess/yetus-0.3.0/lib/precommit/personality/hadoop.sh |
| git revision | branch-2 / 0aa1b62 |
| Default Java | 1.7.0_151 |
| checkstyle | /tmp/yetus-32367.13477/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | /tmp/yetus-32367.13477/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit test logs | /tmp/yetus-32367.13477/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |
{code}

The unit test failures don't seem realistic, though: I ran all of them one by one and they all passed. The checkstyle warnings are the ones I've already mentioned.

I'm trying Jenkins one last time with version 011, but if it doesn't go through, I'll stick with this.
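As a side note on the branch-2 constraints listed in the issue description, branch-2 builds with Java 7 (Default Java 1.7.0_151 above), which has no method references. A minimal hypothetical sketch, not code from the patch, of how such a backport rewrite looks ({{String::compareToIgnoreCase}} and the class name are illustrative only):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Hypothetical illustration: rewriting a Java 8 method reference
// into Java 7-compatible code for branch-2.
public class MethodRefBackport {

    static List<String> sortNames(List<String> names) {
        // Java 8 (trunk) could write: names.sort(String::compareToIgnoreCase);
        // Java 7 (branch-2) needs an explicit anonymous Comparator instead:
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareToIgnoreCase(b);
            }
        });
        return names;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("beta", "Alpha", "gamma"));
        System.out.println(sortNames(names)); // prints [Alpha, beta, gamma]
    }
}
```

The anonymous class is semantically equivalent to the method reference, so the rewrite is mechanical; the same pattern applies to any lambda or method reference the trunk patch introduced.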
> Backporting HDFS-10467 to branch-2
> ----------------------------------
>
>                 Key: HDFS-12620
>                 URL: https://issues.apache.org/jira/browse/HDFS-12620
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>         Attachments: HDFS-10467-branch-2.001.patch, HDFS-10467-branch-2.002.patch, HDFS-10467-branch-2.003.patch, HDFS-10467-branch-2.patch, HDFS-12620-branch-2.000.patch, HDFS-12620-branch-2.004.patch, HDFS-12620-branch-2.005.patch, HDFS-12620-branch-2.006.patch, HDFS-12620-branch-2.007.patch, HDFS-12620-branch-2.008.patch, HDFS-12620-branch-2.009.patch, HDFS-12620-branch-2.010.patch, HDFS-12620-branch-2.011.patch
>
>
> When backporting HDFS-10467, a few things changed:
> * {{bin/hdfs}}
> * {{ClientProtocol}}
> * Java 7 not supporting method references
> * {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is {{org.mortbay.util.ajax.JSON}}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org