hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14201) Fix some failing tests on windows
Date Sun, 19 Mar 2017 17:10:42 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931828#comment-15931828 ]

Steve Loughran commented on HADOOP-14201:
-----------------------------------------

{code}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.555 sec <<< FAILURE! - in org.apache.hadoop.fs.TestFsShellList
testList(org.apache.hadoop.fs.TestFsShellList)  Time elapsed: 0.095 sec  <<< ERROR!
org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory name, or volume label syntax is incorrect.

        at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:556)
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:231)
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:221)
        at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:319)
        at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:308)
        at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:339)
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:399)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:462)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:441)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:928)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:806)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:795)
        at org.apache.hadoop.fs.TestFsShellList.createFile(TestFsShellList.java:57)
        at org.apache.hadoop.fs.TestFsShellList.testList(TestFsShellList.java:69)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestDU
Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 17.349 sec <<< FAILURE! - in org.apache.hadoop.fs.TestDU

testDU(org.apache.hadoop.fs.TestDU)  Time elapsed: 11.058 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Invalid on-disk size
        at junit.framework.Assert.fail(Assert.java:57)
        at junit.framework.Assert.assertTrue(Assert.java:22)
        at junit.framework.TestCase.assertTrue(TestCase.java:192)
        at org.apache.hadoop.fs.TestDU.testDU(TestDU.java:87)

testDUSetInitialValue(org.apache.hadoop.fs.TestDU)  Time elapsed: 6.084 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Usage didn't get updated
        at junit.framework.Assert.fail(Assert.java:57)
        at junit.framework.Assert.assertTrue(Assert.java:22)
        at junit.framework.TestCase.assertTrue(TestCase.java:192)
        at org.apache.hadoop.fs.TestDU.testDUSetInitialValue(TestDU.java:133)
        
        
testAppendBlockCompression(org.apache.hadoop.io.TestSequenceFileAppend)  Time elapsed: 0.028 sec  <<< ERROR!
java.io.IOException: not a gzip file
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:496)
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:257)
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:186)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:92)
        at java.io.DataInputStream.readByte(DataInputStream.java:265)
        at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
        at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2612)
        at org.apache.hadoop.io.TestSequenceFileAppend.verify2Values(TestSequenceFileAppend.java:362)
        at org.apache.hadoop.io.TestSequenceFileAppend.testAppendBlockCompression(TestSequenceFileAppend.java:194)

testAppendSort(org.apache.hadoop.io.TestSequenceFileAppend)  Time elapsed: 0.024 sec  <<< ERROR!
java.io.IOException: not a gzip file
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:496)
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:257)
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:186)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:92)
        at java.io.DataInputStream.readByte(DataInputStream.java:265)
        at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
        at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
        at org.apache.hadoop.io.SequenceFile$Reader.nextRaw(SequenceFile.java:2516)
        at org.apache.hadoop.io.SequenceFile$Sorter$SortPass.run(SequenceFile.java:2947)
        at org.apache.hadoop.io.SequenceFile$Sorter.sortPass(SequenceFile.java:2885)
        at org.apache.hadoop.io.SequenceFile$Sorter.sort(SequenceFile.java:2833)
        at org.apache.hadoop.io.SequenceFile$Sorter.sort(SequenceFile.java:2874)
        at org.apache.hadoop.io.TestSequenceFileAppend.testAppendSort(TestSequenceFileAppend.java:353)

testAppendRecordCompression(org.apache.hadoop.io.TestSequenceFileAppend)  Time elapsed: 0.017 sec  <<< ERROR!
java.io.IOException: not a gzip file
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:496)
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:257)
        at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:186)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:92)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2303)
        at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2596)
        at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2606)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1319)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at org.apache.hadoop.io.serializer.JavaSerialization$JavaSerializationDeserializer.deserialize(JavaSerialization.java:59)
        at org.apache.hadoop.io.serializer.JavaSerialization$JavaSerializationDeserializer.deserialize(JavaSerialization.java:40)
        at org.apache.hadoop.io.SequenceFile$Reader.deserializeValue(SequenceFile.java:2343)
        at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2316)
        at org.apache.hadoop.io.TestSequenceFileAppend.verify2Values(TestSequenceFileAppend.java:363)
        at org.apache.hadoop.io.TestSequenceFileAppend.testAppendRecordCompression(TestSequenceFileAppend.java:160)


Tests run: 37, Failures: 1, Errors: 1, Skipped: 1, Time elapsed: 149.693 sec <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
testInsecureVersionMismatch(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 0.02 sec  <<< ERROR!
java.io.IOException: Failed on local exception: java.io.IOException: An established connection was aborted by the software in your host machine; Host Details : local host is: "morzine/192.168.1.24"; destination host is: "0.0.0.0":53790;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:785)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1485)
        at org.apache.hadoop.ipc.Client.call(Client.java:1427)
        at org.apache.hadoop.ipc.TestIPC.call(TestIPC.java:155)
        at org.apache.hadoop.ipc.TestIPC.call(TestIPC.java:148)
        at org.apache.hadoop.ipc.TestIPC.checkVersionMismatch(TestIPC.java:1450)
        at org.apache.hadoop.ipc.TestIPC.testInsecureVersionMismatch(TestIPC.java:1415)
Caused by: java.io.IOException: An established connection was aborted by the software in your host machine
        at sun.nio.ch.SocketDispatcher.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
        at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:553)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1786)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1155)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1052)

testConnectionIdleTimeouts(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 4.181 sec  <<< FAILURE!
java.lang.AssertionError: expected:<7> but was:<4>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.junit.Assert.assertEquals(Assert.java:555)
        at org.junit.Assert.assertEquals(Assert.java:542)
        at org.apache.hadoop.ipc.TestIPC.testConnectionIdleTimeouts(TestIPC.java:949)

Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.726 sec - in org.apache.hadoop.metrics2.impl.TestSinkQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0

Running org.apache.hadoop.metrics2.impl.TestStatsDMetrics
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.791 sec <<< FAILURE! - in org.apache.hadoop.metrics2.impl.TestStatsDMetrics
testPutMetrics2(org.apache.hadoop.metrics2.impl.TestStatsDMetrics)  Time elapsed: 0.552 sec  <<< ERROR!
org.apache.hadoop.metrics2.MetricsException: Error writing metric to StatsD
        at java.net.TwoStacksPlainDatagramSocketImpl.send(Native Method)
        at java.net.DatagramSocket.send(DatagramSocket.java:693)
        at org.apache.hadoop.metrics2.sink.StatsDSink$StatsD.write(StatsDSink.java:203)
        at org.apache.hadoop.metrics2.sink.StatsDSink.writeMetric(StatsDSink.java:151)
        at org.apache.hadoop.metrics2.sink.StatsDSink.putMetrics(StatsDSink.java:144)
        at org.apache.hadoop.metrics2.impl.TestStatsDMetrics.testPutMetrics2(TestStatsDMetrics.java:109)

testPutMetrics(org.apache.hadoop.metrics2.impl.TestStatsDMetrics)  Time elapsed: 0.005 sec  <<< ERROR!
org.apache.hadoop.metrics2.MetricsException: Error writing metric to StatsD
        at java.net.TwoStacksPlainDatagramSocketImpl.send(Native Method)
        at java.net.DatagramSocket.send(DatagramSocket.java:693)
        at org.apache.hadoop.metrics2.sink.StatsDSink$StatsD.write(StatsDSink.java:203)
        at org.apache.hadoop.metrics2.sink.StatsDSink.writeMetric(StatsDSink.java:151)
        at org.apache.hadoop.metrics2.sink.StatsDSink.putMetrics(StatsDSink.java:144)
        at org.apache.hadoop.metrics2.impl.TestStatsDMetrics.testPutMetrics(TestStatsDMetrics.java:74)


testProxyUserFromEnvironment(org.apache.hadoop.security.TestProxyUserFromEnv)  Time elapsed: 0.58 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[a]dministrator> but was:<[A]dministrator>
        at org.junit.Assert.assertEquals(Assert.java:115)
        at org.junit.Assert.assertEquals(Assert.java:144)
        at org.apache.hadoop.security.TestProxyUserFromEnv.testProxyUserFromEnvironment(TestProxyUserFromEnv.java:54)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0

Running org.apache.hadoop.security.TestShellBasedUnixGroupsMapping
Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 0.367 sec <<< FAILURE! - in org.apache.hadoop.security.TestShellBasedUnixGroupsMapping
testGetNumericGroupsResolvable(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)  Time elapsed: 0.206 sec  <<< FAILURE!
java.lang.AssertionError: null
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testGetNumericGroupsResolvable(TestShellBasedUnixGroupsMapping.java:160)

testGetGroupsNotResolvable(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)  Time elapsed: 0.003 sec  <<< FAILURE!
java.lang.AssertionError: null
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testGetGroupsNotResolvable(TestShellBasedUnixGroupsMapping.java:112)

testGetGroupsResolvable(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)  Time elapsed: 0.002 sec  <<< FAILURE!
java.lang.AssertionError: null
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testGetGroupsResolvable(TestShellBasedUnixGroupsMapping.java:206)


Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.506 sec - in org.apache.hadoop.test.TestJUnitSetup
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.test.TestLambdaTestUtils
Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.959 sec <<< FAILURE! - in org.apache.hadoop.test.TestLambdaTestUtils
testAwaitAlwaysFalse(org.apache.hadoop.test.TestLambdaTestUtils)  Time elapsed: 0.076 sec  <<< FAILURE!
java.lang.AssertionError: null
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.apache.hadoop.test.TestLambdaTestUtils.testAwaitAlwaysFalse(TestLambdaTestUtils.java:141)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.197 sec <<< FAILURE! - in org.apache.hadoop.test.TestMultithreadedTestUtil
testRepeatingThread(org.apache.hadoop.test.TestMultithreadedTestUtil)  Time elapsed: 4.522 sec  <<< FAILURE!
java.lang.AssertionError: Test took 4500ms
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.apache.hadoop.test.TestMultithreadedTestUtil.testRepeatingThread(TestMultithreadedTestUtil.java:132)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.686 sec - in org.apache.hadoop.test.TestTimedOutTestsListener

Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.502 sec <<< FAILURE! - in org.apache.hadoop.util.TestWinUtils
testTaskCreateWithLimits(org.apache.hadoop.util.TestWinUtils)  Time elapsed: 0.814 sec  <<< FAILURE!
java.lang.AssertionError: Failed to get Shell.ExitCodeException with insufficient memory
  at org.junit.Assert.fail(Assert.java:88)
  at org.apache.hadoop.util.TestWinUtils.testTaskCreateWithLimits(TestWinUtils.java:605)

testNodeHealthScript(org.apache.hadoop.util.TestNodeHealthScriptRunner)  Time elapsed: 0.292 sec  <<< ERROR!
java.lang.NullPointerException: null
        at org.apache.hadoop.util.TestNodeHealthScriptRunner.writeNodeHealthScriptFile(TestNodeHealthScriptRunner.java:68)
        at org.apache.hadoop.util.TestNodeHealthScriptRunner.testNodeHealthScript(TestNodeHealthScriptRunner.java:112)

{code}

> Fix some failing tests on windows
> ---------------------------------
>
>                 Key: HADOOP-14201
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14201
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: test
>    Affects Versions: 2.8.0
>         Environment: Windows Server 2012.
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>
> Some of the 2.8.0 tests are failing locally, without much in the way of diagnostics. They
> may be false alarms related to the system, VM setup, or performance, or they may be a sign
> of a real problem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

