hadoop-hdfs-issues mailing list archives

From "Yiqun Lin (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-12565) Ozone: Put key operation concurrent executes failed on Windows
Date Fri, 29 Sep 2017 11:59:02 GMT

     [ https://issues.apache.org/jira/browse/HDFS-12565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-12565:
-----------------------------
    Description: 
When creating a batch of keys under a specified bucket, the following error occurs on Windows.
The error was found by running the test {{TestOzoneShell#testListKey}}.
{noformat}
org.apache.hadoop.scm.container.common.helpers.StorageContainerException: org.apache.hadoop.scm.container.common.helpers.StorageContainerException:
Invalid write size found. Size: 1768160 Expected: 10
	at org.apache.hadoop.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:373)
	at org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeChunk(ContainerProtocolCalls.java:175)
	at org.apache.hadoop.scm.storage.ChunkOutputStream.writeChunkToContainer(ChunkOutputStream.java:224)
	at org.apache.hadoop.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:154)
	at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:265)
	at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:174)
	at org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:58)
	at org.apache.hadoop.ozone.web.storage.DistributedStorageHandler.commitKey(DistributedStorageHandler.java:405)
	at org.apache.hadoop.ozone.web.handlers.KeyHandler$2.doProcess(KeyHandler.java:196)
	at org.apache.hadoop.ozone.web.handlers.KeyProcessTemplate.handleCall(KeyProcessTemplate.java:91)
	at org.apache.hadoop.ozone.web.handlers.KeyHandler.putKey(KeyHandler.java:199)
{noformat}

The relevant code ({{ChunkUtils#writeData}}):
{code}
 public static void writeData(File chunkFile, ChunkInfo chunkInfo,
      byte[] data) throws
      StorageContainerException, ExecutionException, InterruptedException,
      NoSuchAlgorithmException {
    ...

    try {
      file =
          AsynchronousFileChannel.open(chunkFile.toPath(),
              StandardOpenOption.CREATE,
              StandardOpenOption.WRITE,
              StandardOpenOption.SPARSE,
              StandardOpenOption.SYNC);
      lock = file.lock().get();
      if (chunkInfo.getChecksum() != null &&
          !chunkInfo.getChecksum().isEmpty()) {
        verifyChecksum(chunkInfo, data, log);
      }
      int size = file.write(ByteBuffer.wrap(data), chunkInfo.getOffset()).get();
      if (size != data.length) { <===== error was thrown
        log.error("Invalid write size found. Size:{}  Expected: {} ", size,   
            data.length);
        throw new StorageContainerException("Invalid write size found. " +
            "Size: " + size + " Expected: " + data.length, INVALID_WRITE_SIZE);
      }
...
{code}
However, if we put only a single key, it runs fine.
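
To exercise the failing check in isolation, here is a minimal, self-contained sketch of the same write path. The class name {{WriteSizeCheck}} and the helper {{writeAndVerify}} are mine for illustration, not from the Ozone code; it opens the channel with the same {{StandardOpenOption}} set, takes the same exclusive file lock, and applies the same size check as {{ChunkUtils#writeData}}. A single uncontended 10-byte write should report size 10; the failure in this issue shows up only when several keys are written concurrently on Windows.

```java
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.FileLock;
import java.nio.file.StandardOpenOption;

public class WriteSizeCheck {

  // Mirrors the size check in ChunkUtils#writeData: write `data` at `offset`
  // through an AsynchronousFileChannel and verify the reported write size.
  static int writeAndVerify(File chunkFile, byte[] data, long offset)
      throws Exception {
    AsynchronousFileChannel file = AsynchronousFileChannel.open(
        chunkFile.toPath(),
        StandardOpenOption.CREATE,
        StandardOpenOption.WRITE,
        StandardOpenOption.SPARSE,
        StandardOpenOption.SYNC);
    try {
      // Exclusive lock on the whole file, as in the original code. Note that
      // on Windows file locks are mandatory rather than advisory, so a
      // concurrent writer behaves differently than on POSIX systems.
      FileLock lock = file.lock().get();
      try {
        int size = file.write(ByteBuffer.wrap(data), offset).get();
        if (size != data.length) {
          throw new IllegalStateException("Invalid write size found. Size: "
              + size + " Expected: " + data.length);
        }
        return size;
      } finally {
        lock.release();
      }
    } finally {
      file.close();
    }
  }

  public static void main(String[] args) throws Exception {
    File tmp = File.createTempFile("chunk", ".bin");
    tmp.deleteOnExit();
    System.out.println(writeAndVerify(tmp, "0123456789".getBytes(), 0));
  }
}
```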

  was:
When creating a batch size of key under specified bucket, then the error happens on Windows.
This error was found by executing {{TestOzoneShell#testListKey()}}.
{noformat}
org.apache.hadoop.scm.container.common.helpers.StorageContainerException: org.apache.hadoop.scm.container.common.helpers.StorageContainerException:
Invalid write size found. Size: 1768160 Expected: 10
	at org.apache.hadoop.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:373)
	at org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeChunk(ContainerProtocolCalls.java:175)
	at org.apache.hadoop.scm.storage.ChunkOutputStream.writeChunkToContainer(ChunkOutputStream.java:224)
	at org.apache.hadoop.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:154)
	at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:265)
	at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:174)
	at org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:58)
	at org.apache.hadoop.ozone.web.storage.DistributedStorageHandler.commitKey(DistributedStorageHandler.java:405)
	at org.apache.hadoop.ozone.web.handlers.KeyHandler$2.doProcess(KeyHandler.java:196)
	at org.apache.hadoop.ozone.web.handlers.KeyProcessTemplate.handleCall(KeyProcessTemplate.java:91)
	at org.apache.hadoop.ozone.web.handlers.KeyHandler.putKey(KeyHandler.java:199)
{noformat}

The related codes(ChunkUtils#writeData):
{code}
 public static void writeData(File chunkFile, ChunkInfo chunkInfo,
      byte[] data) throws
      StorageContainerException, ExecutionException, InterruptedException,
      NoSuchAlgorithmException {
    ...

    try {
      file =
          AsynchronousFileChannel.open(chunkFile.toPath(),
              StandardOpenOption.CREATE,
              StandardOpenOption.WRITE,
              StandardOpenOption.SPARSE,
              StandardOpenOption.SYNC);
      lock = file.lock().get();
      if (chunkInfo.getChecksum() != null &&
          !chunkInfo.getChecksum().isEmpty()) {
        verifyChecksum(chunkInfo, data, log);
      }
      int size = file.write(ByteBuffer.wrap(data), chunkInfo.getOffset()).get();
      if (size != data.length) { <===== error was thrown
        log.error("Invalid write size found. Size:{}  Expected: {} ", size,   
            data.length);
        throw new StorageContainerException("Invalid write size found. " +
            "Size: " + size + " Expected: " + data.length, INVALID_WRITE_SIZE);
      }
...
{code}
But if we only put one single key, it runs well.


> Ozone: Put key operation concurrent executes failed on Windows
> --------------------------------------------------------------
>
>                 Key: HDFS-12565
>                 URL: https://issues.apache.org/jira/browse/HDFS-12565
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Yiqun Lin
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

