hadoop-hdfs-issues mailing list archives

From "Xiaoyu Yao (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDDS-1326) putkey operation failed with java.lang.ArrayIndexOutOfBoundsException
Date Sat, 23 Mar 2019 16:41:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao resolved HDDS-1326.
------------------------------
    Resolution: Duplicate

> putkey operation failed with java.lang.ArrayIndexOutOfBoundsException
> ---------------------------------------------------------------------
>
>                 Key: HDDS-1326
>                 URL: https://issues.apache.org/jira/browse/HDDS-1326
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Nilotpal Nandi
>            Assignee: Shashikant Banerjee
>            Priority: Blocker
>
> Steps taken:
> -------------------
>  # Attempted to write a key on a 40-node cluster.
>  # The write failed.
> client output
> -------------------
>  
> {noformat}
> e530-491c-ab03-3b1c34d1a751:c80390, 974a806d-bf7d-4f1b-adb4-d51d802d368a:c80390, 469bd8c4-5da2-43bb-bc4b-7edd884931e5:c80390]
> 2019-03-22 10:56:19,592 [main] WARN - Encountered exception {}
> java.io.IOException: Unexpected Storage Container Exception: java.util.concurrent.ExecutionException:
java.util.concurrent.CompletionException: org.apache.ratis.protocol.StateMachineException:
org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server
5d3eb91f-e530-491c-ab03-3b1c34d1a751: Container 1269 in CLOSED state
>  at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:511)
>  at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:144)
>  at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:565)
>  at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:329)
>  at org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:273)
>  at org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>  at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:96)
>  at org.apache.hadoop.ozone.web.ozShell.keys.PutKeyHandler.call(PutKeyHandler.java:111)
>  at org.apache.hadoop.ozone.web.ozShell.keys.PutKeyHandler.call(PutKeyHandler.java:53)
>  at picocli.CommandLine.execute(CommandLine.java:919)
>  at picocli.CommandLine.access$700(CommandLine.java:104)
>  at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
>  at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
>  at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
>  at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
>  at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
>  at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.execute(Shell.java:82)
>  at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:93)
> Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException:
org.apache.ratis.protocol.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException
from Server 5d3eb91f-e530-491c-ab03-3b1c34d1a751: Container 1269 in CLOSED state
>  at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>  at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:529)
>  at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:481)
>  at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:496)
>  ... 19 more
> Caused by: java.util.concurrent.CompletionException: org.apache.ratis.protocol.StateMachineException:
org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException from Server
5d3eb91f-e530-491c-ab03-3b1c34d1a751: Container 1269 in CLOSED state
>  at org.apache.ratis.client.impl.RaftClientImpl.handleStateMachineException(RaftClientImpl.java:402)
>  at org.apache.ratis.client.impl.RaftClientImpl.lambda$sendAsync$3(RaftClientImpl.java:198)
>  at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>  at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>  at org.apache.ratis.client.impl.RaftClientImpl$PendingAsyncRequest.setReply(RaftClientImpl.java:95)
>  at org.apache.ratis.client.impl.RaftClientImpl$PendingAsyncRequest.setReply(RaftClientImpl.java:75)
>  at org.apache.ratis.util.SlidingWindow$RequestMap.setReply(SlidingWindow.java:127)
>  at org.apache.ratis.util.SlidingWindow$Client.receiveReply(SlidingWindow.java:279)
>  at org.apache.ratis.client.impl.RaftClientImpl.lambda$sendRequestAsync$13(RaftClientImpl.java:344)
>  at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>  at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.lambda$onNext$0(GrpcClientProtocolClient.java:262)
>  at java.util.Optional.ifPresent(Optional.java:159)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.handleReplyFuture(GrpcClientProtocolClient.java:314)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.access$100(GrpcClientProtocolClient.java:247)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:262)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:250)
>  at org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:421)
>  at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onMessage(ForwardingClientCallListener.java:33)
>  at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onMessage(ForwardingClientCallListener.java:33)
>  at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:519)
>  at org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.ratis.protocol.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException
from Server 5d3eb91f-e530-491c-ab03-3b1c34d1a751: Container 1269 in CLOSED state
>  at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.validateContainerCommand(HddsDispatcher.java:399)
>  at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.startTransaction(ContainerStateMachine.java:269)
>  at org.apache.ratis.server.impl.RaftServerImpl.submitClientRequestAsync(RaftServerImpl.java:546)
>  at org.apache.ratis.server.impl.RaftServerProxy.lambda$submitClientRequestAsync$7(RaftServerProxy.java:325)
>  at org.apache.ratis.server.impl.RaftServerProxy.lambda$null$5(RaftServerProxy.java:320)
>  at org.apache.ratis.util.JavaUtils.callAsUnchecked(JavaUtils.java:109)
>  at org.apache.ratis.server.impl.RaftServerProxy.lambda$submitRequest$6(RaftServerProxy.java:320)
>  at java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:981)
>  at java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2124)
>  at org.apache.ratis.server.impl.RaftServerProxy.submitRequest(RaftServerProxy.java:319)
>  at org.apache.ratis.server.impl.RaftServerProxy.submitClientRequestAsync(RaftServerProxy.java:325)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolService$RequestStreamObserver.processClientRequest(GrpcClientProtocolService.java:151)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolService$AppendRequestStreamObserver.processClientRequest(GrpcClientProtocolService.java:251)
>  at org.apache.ratis.util.SlidingWindow$Server.processRequestsFromHead(SlidingWindow.java:391)
>  at org.apache.ratis.util.SlidingWindow$Server.receivedRequest(SlidingWindow.java:383)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolService$AppendRequestStreamObserver.processClientRequest(GrpcClientProtocolService.java:257)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolService$RequestStreamObserver.onNext(GrpcClientProtocolService.java:171)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolService$RequestStreamObserver.onNext(GrpcClientProtocolService.java:118)
>  at org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  ... 5 more
> Caused by: org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException:
Container 1269 in CLOSED state
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at org.apache.ratis.util.ReflectionUtils.instantiateException(ReflectionUtils.java:222)
>  at org.apache.ratis.client.impl.ClientProtoUtils.wrapStateMachineException(ClientProtoUtils.java:286)
>  at org.apache.ratis.client.impl.ClientProtoUtils.toRaftClientReply(ClientProtoUtils.java:238)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:255)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:250)
>  at org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:421)
>  at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onMessage(ForwardingClientCallListener.java:33)
>  at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onMessage(ForwardingClientCallListener.java:33)
>  at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:519)
>  ... 5 more
> 2019-03-22 10:56:19,595 [main] INFO - The last committed block length is 0, uncommitted
data length is 67108864
> 2019-03-22 10:56:19,666 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-03A022B396EA->f55d3aba-ffa1-4d82-9fe6-d754fa3d504a@group-F12EE47F2861,
cid=49, seq=0 RW, org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$64/209360730@28486680
> 2019-03-22 10:56:19,706 INFO client.GrpcClientProtocolClient: client-03A022B396EA->f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:
receive RaftClientReply:client-03A022B396EA->f55d3aba-ffa1-4d82-9fe6-d754fa3d504a@group-F12EE47F2861,
cid=49, FAILED org.apache.ratis.protocol.NotLeaderException: Server f55d3aba-ffa1-4d82-9fe6-d754fa3d504a
is not the leader (d491e500-4742-4de3-8730-dd3763bc7b76:172.27.12.145:9858). Request must
be sent to leader., logIndex=0, commits[f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80395, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80395,
d491e500-4742-4de3-8730-dd3763bc7b76:c80395]
> 2019-03-22 10:56:20,214 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-03A022B396EA->42e1cfcf-b223-4afe-9855-45c99dc76583@group-F12EE47F2861,
cid=49, seq=0 RW, org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$64/209360730@28486680
> 2019-03-22 10:56:20,265 INFO client.GrpcClientProtocolClient: client-03A022B396EA->42e1cfcf-b223-4afe-9855-45c99dc76583:
receive RaftClientReply:client-03A022B396EA->42e1cfcf-b223-4afe-9855-45c99dc76583@group-F12EE47F2861,
cid=49, FAILED org.apache.ratis.protocol.NotLeaderException: Server 42e1cfcf-b223-4afe-9855-45c99dc76583
is not the leader (d491e500-4742-4de3-8730-dd3763bc7b76:172.27.12.145:9858). Request must
be sent to leader., logIndex=0, commits[42e1cfcf-b223-4afe-9855-45c99dc76583:c80395, f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80395,
d491e500-4742-4de3-8730-dd3763bc7b76:c80395]
> 2019-03-22 10:56:20,767 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=49, seq=0 RW, org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$64/209360730@28486680
> 2019-03-22 10:56:20,901 INFO client.GrpcClientProtocolClient: client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76:
receive RaftClientReply:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=49, SUCCESS, logIndex=80396, commits[d491e500-4742-4de3-8730-dd3763bc7b76:c80396, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80395,
f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80395]
> 2019-03-22 10:56:20,906 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=50, seq=1 RW, org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$64/209360730@8ba84fa
> 2019-03-22 10:56:20,911 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=51, seq=2 RW, org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$64/209360730@64f5c2f3
> 2019-03-22 10:56:20,918 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=52, seq=3 RW, org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$64/209360730@e0f3906
> 2019-03-22 10:56:20,918 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=53, seq=4 RW, org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$64/209360730@360873cb
> 2019-03-22 10:56:21,031 INFO client.GrpcClientProtocolClient: client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76:
receive RaftClientReply:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=50, SUCCESS, logIndex=80398, commits[d491e500-4742-4de3-8730-dd3763bc7b76:c80398, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80395,
f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80397]
> 2019-03-22 10:56:21,119 INFO client.GrpcClientProtocolClient: client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76:
receive RaftClientReply:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=51, SUCCESS, logIndex=80400, commits[d491e500-4742-4de3-8730-dd3763bc7b76:c80400, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80395,
f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80399]
> 2019-03-22 10:56:21,165 INFO client.GrpcClientProtocolClient: client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76:
receive RaftClientReply:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=52, SUCCESS, logIndex=80401, commits[d491e500-4742-4de3-8730-dd3763bc7b76:c80403, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80395,
f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80400]
> 2019-03-22 10:56:21,166 INFO client.GrpcClientProtocolClient: client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76:
receive RaftClientReply:client-03A022B396EA->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=53, SUCCESS, logIndex=80402, commits[d491e500-4742-4de3-8730-dd3763bc7b76:c80403, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80395,
f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80400]
> 2019-03-22 10:56:21,170 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-A1DE55D84234->f55d3aba-ffa1-4d82-9fe6-d754fa3d504a@group-F12EE47F2861,
cid=54, seq=-1 Watch-ALL_COMMITTED(80402), null
> 2019-03-22 10:56:21,176 INFO client.GrpcClientProtocolClient: client-A1DE55D84234->f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:
receive RaftClientReply:client-A1DE55D84234->f55d3aba-ffa1-4d82-9fe6-d754fa3d504a@group-F12EE47F2861,
cid=54, FAILED org.apache.ratis.protocol.NotLeaderException: Server f55d3aba-ffa1-4d82-9fe6-d754fa3d504a
is not the leader (d491e500-4742-4de3-8730-dd3763bc7b76:172.27.12.145:9858). Request must
be sent to leader., logIndex=0, commits[f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80402, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80395,
d491e500-4742-4de3-8730-dd3763bc7b76:c80402]
> 2019-03-22 10:56:21,680 INFO client.GrpcClientProtocolClient: schedule 3000ms timeout
check for RaftClientRequest:client-A1DE55D84234->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=54, seq=-1 Watch-ALL_COMMITTED(80402), null
> 2019-03-22 10:56:21,684 INFO client.GrpcClientProtocolClient: client-A1DE55D84234->d491e500-4742-4de3-8730-dd3763bc7b76:
receive RaftClientReply:client-A1DE55D84234->d491e500-4742-4de3-8730-dd3763bc7b76@group-F12EE47F2861,
cid=54, SUCCESS, logIndex=0, commits[d491e500-4742-4de3-8730-dd3763bc7b76:c80405, 42e1cfcf-b223-4afe-9855-45c99dc76583:c80402,
f55d3aba-ffa1-4d82-9fe6-d754fa3d504a:c80405]
> 2019-03-22 10:56:21,711 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889).
Trying to failover immediately.
> 2019-03-22 10:56:21,712 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 1 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,712 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 2 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,712 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 3 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,713 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 4 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,713 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 5 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,714 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 6 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,714 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 7 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,714 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 8 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,715 INFO retry.RetryInvocationHandler: com.google.protobuf.ServiceException:
java.lang.ArrayIndexOutOfBoundsException: 1, while invoking $Proxy14.submitRequest over null(ctr-e139-1542663976389-88823-01-000009.hwx.site:9889)
after 9 failover attempts. Trying to failover immediately.
> 2019-03-22 10:56:21,716 [main] ERROR - Failed to connect to OM. Attempted 10 retries
and 10 failovers
> 2019-03-22 10:56:21,717 [main] ERROR - Try to allocate more blocks for write failed,
already allocated 0 blocks for this write.
> com.google.protobuf.ServiceException: java.lang.ArrayIndexOutOfBoundsException: 1
>  
>  
>  
> {noformat}
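The first failure in the client output is a ContainerNotOpenException: container 1269 transitioned to CLOSED between block allocation and the write. The expected client behavior in that situation is to exclude the closed container and retry on a freshly allocated block. A minimal sketch of that retry pattern (hypothetical, not Ozone's actual KeyOutputStream code; the allocator and open-check are stand-ins):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class ClosedContainerRetrySketch {
    static class ContainerNotOpenException extends RuntimeException {
        ContainerNotOpenException(String msg) { super(msg); }
    }

    /**
     * Allocate a block, and if its container turns out to be CLOSED,
     * exclude that container and retry on a new allocation.
     * Returns the container id that accepted the write.
     */
    static long writeWithRetry(Function<List<Long>, Long> allocateBlock,
                               Function<Long, Boolean> isOpen,
                               int maxRetries) {
        List<Long> excluded = new ArrayList<>();
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            long containerId = allocateBlock.apply(excluded);
            if (isOpen.apply(containerId)) {
                return containerId; // the write can proceed here
            }
            // Container closed under us: exclude it and allocate elsewhere.
            excluded.add(containerId);
        }
        throw new ContainerNotOpenException("out of retries");
    }

    public static void main(String[] args) {
        // Container 1269 is CLOSED (as in the log above); 1300 is OPEN.
        long ok = writeWithRetry(
            excluded -> excluded.contains(1269L) ? 1300L : 1269L,
            id -> id != 1269L,
            3);
        System.out.println("wrote to container " + ok);
    }
}
```

In the log this first retry does succeed (the subsequent SUCCESS replies on cid=49..53); the blocker is the later ArrayIndexOutOfBoundsException when the client goes back to the OM for more blocks.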
>  
> ozone.log
> -----------------
> {noformat}
> 2019-03-22 10:56:19,595 [main] INFO (KeyOutputStream.java:405) - The last committed block
length is 0, uncommitted data length is 67108864
> 2019-03-22 10:56:20,891 [grpc-default-executor-238] DEBUG (ContainerStateMachine.java:388)
- writeChunk writeStateMachineData : blockId containerID: 1291
> localID: 101793934825535032
> blockCommitSequenceId: 0
>  logIndex 79616 chunkName b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1
> 2019-03-22 10:56:20,891 [pool-7-thread-8] DEBUG (ChunkManagerImpl.java:91) - writing
chunk:b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1
chunk stage:WRITE_DATA chunk file:/tmp/hadoop-root/dfs/data/hdds/b0db11bc-f9d2-4faf-890f-9c93823a5688/current/containerDir2/1291/chunks/b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1
tmp chunk file:/tmp/hadoop-root/dfs/data/hdds/b0db11bc-f9d2-4faf-890f-9c93823a5688/current/containerDir2/1291/chunks/b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1.tmp.6.79616
> 2019-03-22 10:56:20,895 [pool-7-thread-8] DEBUG (ChunkUtils.java:138) - Write Chunk completed
for chunkFile: /tmp/hadoop-root/dfs/data/hdds/b0db11bc-f9d2-4faf-890f-9c93823a5688/current/containerDir2/1291/chunks/b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1.tmp.6.79616,
size 1048576
> 2019-03-22 10:56:20,896 [pool-7-thread-8] DEBUG (ContainerStateMachine.java:395) - writeChunk
writeStateMachineData completed: blockId containerID: 1291
> localID: 101793934825535032
> blockCommitSequenceId: 0
>  logIndex 79616 chunkName b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1
> 2019-03-22 10:56:20,897 [pool-9-thread-1] DEBUG (ChunkManagerImpl.java:91) - writing
chunk:b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1
chunk stage:COMMIT_DATA chunk file:/tmp/hadoop-root/dfs/data/hdds/b0db11bc-f9d2-4faf-890f-9c93823a5688/current/containerDir2/1291/chunks/b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1
tmp chunk file:/tmp/hadoop-root/dfs/data/hdds/b0db11bc-f9d2-4faf-890f-9c93823a5688/current/containerDir2/1291/chunks/b504206fede1e0360f34b17c6e0ff813_stream_e5f7a41e-97a5-4c8d-b343-445b924c8a28_chunk_1.tmp.6.79616
> 2019-03-22 10:56:20,904 [grpc-default-executor-238] DEBUG (GrpcXceiverService.java:91)
- {}: ContainerCommand send completed
> 2019-03-22 10:56:20,918 [pool-9-thread-1] DEBUG (BlockManagerImpl.java:124) - Block conID:
1291 locID: 101793934825535032 bcId: 79618 successfully committed with bcsId 79618 chunk size
1
> 2019-03-22 10:56:21,711 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,712 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,712 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,713 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,713 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,713 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,714 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,714 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,714 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,715 [main] DEBUG (OMFailoverProxyProvider.java:215) - Failing over
OM proxy to index: 0, nodeId: omNodeIdDummy
> 2019-03-22 10:56:21,716 [main] ERROR (OzoneManagerProtocolClientSideTranslatorPB.java:235)
- Failed to connect to OM. Attempted 10 retries and 10 failovers
> 2019-03-22 10:56:21,717 [main] ERROR (KeyOutputStream.java:292) - Try to allocate more
blocks for write failed, already allocated 0 blocks for this write.{noformat}
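The repeated "java.lang.ArrayIndexOutOfBoundsException: 1" while RetryInvocationHandler fails over the OM proxy is the pattern produced by indexing a proxy list without wrapping: with a single configured OM, the second attempt reads element 1 of a length-1 array. The sketch below is hypothetical and not the actual OMFailoverProxyProvider code; it only illustrates how an unwrapped failover index produces exactly this exception, and how a modulo fixes it:

```java
public class FailoverIndexSketch {
    // Buggy: the failover counter is used directly as an index, so with a
    // single OM the second attempt reads proxies[1] and throws
    // java.lang.ArrayIndexOutOfBoundsException: 1.
    static String nextProxyBuggy(String[] proxies, int failovers) {
        return proxies[failovers];
    }

    // Safe: wrap the index so repeated failovers cycle through the list.
    static String nextProxySafe(String[] proxies, int failovers) {
        return proxies[failovers % proxies.length];
    }

    public static void main(String[] args) {
        String[] oms = { "om-host:9889" }; // one OM, as in a non-HA setup
        System.out.println(nextProxySafe(oms, 1)); // wraps to om-host:9889
        try {
            nextProxyBuggy(oms, 1);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("buggy lookup threw: " + e);
        }
    }
}
```

This would also explain why every failover lands back on "index: 0, nodeId: omNodeIdDummy" once wrapped correctly: with one OM there is nowhere else to fail over to.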



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

