hadoop-hdfs-issues mailing list archives

From "Bharat Viswanadham (Jira)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-2043) "VOLUME_NOT_FOUND" exception thrown while listing volumes
Date Tue, 27 Aug 2019 17:39:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916909#comment-16916909 ]

Bharat Viswanadham commented on HDDS-2043:
------------------------------------------

Hi,

Thanks for reporting this issue.

 

I tried this on the latest trunk:

bash-4.2$ ozone sh volume create /vol1 --user hrt_qa
2019-08-27 17:35:55 INFO  RpcClient:293 - Creating Volume: vol1, with hrt_qa as owner.
bash-4.2$ ozone sh volume create /vol1 --user hrt_qa
2019-08-27 17:36:00 INFO  RpcClient:293 - Creating Volume: vol1, with hrt_qa as owner.
VOLUME_ALREADY_EXISTS

bash-4.2$ ozone sh volume list --user hrt_qa
[ {
  "owner" : {
    "name" : "hrt_qa"
  },
  "quota" : {
    "unit" : "TB",
    "size" : 1048576
  },
  "volumeName" : "vol1",
  "createdOn" : "Tue, 27 Aug 2019 17:35:55 GMT",
  "createdBy" : "hrt_qa"
} ]

 

I see it is working fine.

For the test, I used a docker-compose-based Ozone cluster.
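As a side note, the JSON that `ozone sh volume list` prints above can be checked programmatically when scripting around the CLI. A minimal sketch in Python, using the exact fields shown in the output above (the field names are taken from this one sample, not from a documented API contract):

```python
import json

# Sample output of `ozone sh volume list --user hrt_qa`, copied from the
# transcript above; in practice this would be read from the command's stdout.
volume_list_json = """
[ {
  "owner" : { "name" : "hrt_qa" },
  "quota" : { "unit" : "TB", "size" : 1048576 },
  "volumeName" : "vol1",
  "createdOn" : "Tue, 27 Aug 2019 17:35:55 GMT",
  "createdBy" : "hrt_qa"
} ]
"""

# Parse the list of volume objects and pull out the fields of interest.
volumes = json.loads(volume_list_json)
names = [v["volumeName"] for v in volumes]
owners = {v["volumeName"]: v["owner"]["name"] for v in volumes}

print(names)   # ['vol1']
print(owners)  # {'vol1': 'hrt_qa'}
```

This is handy for asserting in a test that a freshly created volume actually shows up in the listing.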

 

This kind of issue is fixed by HDDS-1926, after which the non-HA code path does not use the HA code path at all. Let me know if you see any more issues.

 

> "VOLUME_NOT_FOUND" exception thrown while listing volumes
> ---------------------------------------------------------
>
>                 Key: HDDS-2043
>                 URL: https://issues.apache.org/jira/browse/HDDS-2043
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone CLI, Ozone Manager
>            Reporter: Nilotpal Nandi
>            Priority: Blocker
>
> The ozone list volume command throws an OMException:
> bin/ozone sh volume list --user root
> VOLUME_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Volume info not found for vol-test-putfile-1566902803
>  
> With DEBUG logging enabled, here is the console output:
>  
>  
> {noformat}
> bin/ozone sh volume create /nnnnn1 ; echo $?
> 2019-08-27 11:47:16 DEBUG ThriftSenderFactory:33 - Using the UDP Sender to send spans to the agent.
> 2019-08-27 11:47:16 DEBUG SenderResolver:86 - Using sender UdpSender()
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[GetGroups])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures since startup])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures since last successful login])
> 2019-08-27 11:47:16 DEBUG MetricsSystemImpl:231 - UgiMetrics, User and group related metrics
> 2019-08-27 11:47:16 DEBUG SecurityUtil:124 - Setting hadoop.security.token.service.use_ip to true
> 2019-08-27 11:47:16 DEBUG Shell:821 - setsid exited with exit code 0
> 2019-08-27 11:47:16 DEBUG Groups:449 - Creating new Groups object
> 2019-08-27 11:47:16 DEBUG Groups:151 - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:254 - hadoop login
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:187 - hadoop login commit
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:215 - using local user:UnixPrincipal: root
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:221 - Using user: "UnixPrincipal: root" with name root
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:235 - User entry: "root"
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:766 - UGI loginUser:root (auth:SIMPLE)
> 2019-08-27 11:47:16 DEBUG OzoneClientFactory:287 - Using org.apache.hadoop.ozone.client.rpc.RpcClient as client protocol.
> 2019-08-27 11:47:16 DEBUG Server:280 - rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@710f4dc7
> 2019-08-27 11:47:16 DEBUG Client:63 - getting client out of cache: org.apache.hadoop.ipc.Client@24313fcc
> 2019-08-27 11:47:16 DEBUG Client:487 - The ping interval is 60000 ms.
> 2019-08-27 11:47:16 DEBUG Client:785 - Connecting to nnandi-1.gce.cloudera.com/172.31.117.213:9862
> 2019-08-27 11:47:16 DEBUG Client:1064 - IPC Client (580871917) connection to nnandi-1.gce.cloudera.com/172.31.117.213:9862 from root: starting, having connections 1
> 2019-08-27 11:47:16 DEBUG Client:1127 - IPC Client (580871917) connection to nnandi-1.gce.cloudera.com/172.31.117.213:9862 from root sending #0 org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol.submitRequest
> 2019-08-27 11:47:17 DEBUG Client:1181 - IPC Client (580871917) connection to nnandi-1.gce.cloudera.com/172.31.117.213:9862 from root got value #0
> 2019-08-27 11:47:17 DEBUG ProtobufRpcEngine:249 - Call: submitRequest took 230ms
> 2019-08-27 11:47:17 DEBUG Client:63 - getting client out of cache: org.apache.hadoop.ipc.Client@24313fcc
> 2019-08-27 11:47:17 DEBUG Groups:312 - GroupCacheLoader - load.
> 2019-08-27 11:47:17 INFO RpcClient:288 - Creating Volume: nnnnn1, with root as owner.
> 2019-08-27 11:47:17 DEBUG Client:1127 - IPC Client (580871917) connection to nnandi-1.gce.cloudera.com/172.31.117.213:9862 from root sending #1 org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol.submitRequest
> 2019-08-27 11:47:17 DEBUG Client:1181 - IPC Client (580871917) connection to nnandi-1.gce.cloudera.com/172.31.117.213:9862 from root got value #1
> 2019-08-27 11:47:17 DEBUG ProtobufRpcEngine:249 - Call: submitRequest took 83ms
> 2019-08-27 11:47:17 DEBUG OzoneClient:55 - Call: public abstract void org.apache.hadoop.ozone.client.protocol.ClientProtocol.createVolume(java.lang.String,org.apache.hadoop.ozone.client.VolumeArgs) throws java.io.IOException took 114 ms
> 0
>  
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

