From hdfs-issues-return-260293-archive-asf-public=cust-asf.ponee.io@hadoop.apache.org Mon Apr 15 04:41:02 2019
Mailing-List: contact hdfs-issues-help@hadoop.apache.org; run by ezmlm
Delivered-To: mailing list hdfs-issues@hadoop.apache.org
Date: Mon, 15 Apr 2019 04:41:00 +0000 (UTC)
From: "Hadoop QA (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

    [ https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16817553#comment-16817553 ]

Hadoop QA commented on HDFS-14117:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 26s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. |
|| || || || HDFS-13891 Compile Tests ||
| +1 | mvninstall | 20m 18s | HDFS-13891 passed |
| +1 | compile | 0m 32s | HDFS-13891 passed |
| +1 | checkstyle | 0m 21s | HDFS-13891 passed |
| +1 | mvnsite | 0m 37s | HDFS-13891 passed |
| +1 | shadedclient | 13m 5s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 0s | HDFS-13891 passed |
| +1 | javadoc | 0m 38s | HDFS-13891 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 33s | the patch passed |
| +1 | compile | 0m 30s | the patch passed |
| +1 | javac | 0m 30s | the patch passed |
| +1 | checkstyle | 0m 18s | the patch passed |
| +1 | mvnsite | 0m 33s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 23s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 2s | the patch passed |
| +1 | javadoc | 0m 35s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 24m 46s | hadoop-hdfs-rbf in the patch failed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
|    |      | 80m 13s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterTrash |
|                    | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
|                    | hadoop.hdfs.server.federation.router.TestRouterRpc |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14117 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12965898/HDFS-14117-HDFS-13891.018.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 62cc7e27ffbf 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / e508ab9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26632/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26632/testReport/ |
| Max. process+thread count | 1006 (vs. ulimit of 10000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf  U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26632/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.


> RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled
> -------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: venkata ramkumar
> Assignee: venkata ramkumar
> Priority: Major
> Labels: RBF
> Attachments: HDFS-14117-HDFS-13891.001.patch, HDFS-14117-HDFS-13891.002.patch, HDFS-14117-HDFS-13891.003.patch, HDFS-14117-HDFS-13891.004.patch, HDFS-14117-HDFS-13891.005.patch, HDFS-14117-HDFS-13891.006.patch, HDFS-14117-HDFS-13891.007.patch, HDFS-14117-HDFS-13891.008.patch, HDFS-14117-HDFS-13891.009.patch, HDFS-14117-HDFS-13891.010.patch, HDFS-14117-HDFS-13891.011.patch, HDFS-14117-HDFS-13891.012.patch, HDFS-14117-HDFS-13891.013.patch, HDFS-14117-HDFS-13891.014.patch, HDFS-14117-HDFS-13891.015.patch, HDFS-14117-HDFS-13891.016.patch, HDFS-14117-HDFS-13891.017.patch, HDFS-14117-HDFS-13891.018.patch, HDFS-14117.001.patch, HDFS-14117.002.patch, HDFS-14117.003.patch, HDFS-14117.004.patch, HDFS-14117.005.patch
>
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount the trash dir /user of the subcluster ns1 to the global path /user. Then we can delete files or dirs of ns1, but deleting the files or dirs of another subcluster, such as hacluster, fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
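> For reference, a mount table like the one above would normally be built with hdfs dfsrouteradmin -add. The commands below are an illustrative reconstruction taken from the table entries only (the actual setup commands are not part of this report):
> {noformat}
> # Illustrative sketch, reconstructed from the mount table above.
> hdfs dfsrouteradmin -add /test hacluster2 /test
> hdfs dfsrouteradmin -add /tmp hacluster1 /tmp
> # /user is mounted to both subclusters with HASH ordering
> hdfs dfsrouteradmin -add /user hacluster2,hacluster1 /user -order HASH
> {noformat}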
> commands:
> {noformat}
> 1. /opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup   8081 2018-11-30 10:56 /test/hdfs.cmd
> 2. /opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3. /opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4. /opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 'hdfs://router/test/hdfs.cmd' to trash at: hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org