db-derby-dev mailing list archives

From "Knut Anders Hatlen (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (DERBY-5801) Sub-processes should write EMMA coverage data to separate files
Date Thu, 07 Jun 2012 11:34:23 GMT

     [ https://issues.apache.org/jira/browse/DERBY-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Knut Anders Hatlen updated DERBY-5801:
--------------------------------------

    Attachment: d5801-1a.diff

Attaching a new patch that adds comments to the code added in BaseTestCase and also updates the emma-report target so that it picks up all of the coverage files.
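To illustrate what "picking up all of the coverage files" could look like in an Ant build, here is a minimal sketch of an emma-report target. The target, property, and directory names are assumptions for illustration, not taken from the actual Derby build files; the point is the wildcard include that merges every per-process `.ec` file into one report.

```xml
<!-- Hedged sketch only: property and path names are illustrative. -->
<target name="emma-report">
  <emma>
    <report sourcepath="${derby.source.dir}">
      <!-- Metadata plus every per-process coverage data file. -->
      <fileset dir="${basedir}">
        <include name="coverage.em"/>
        <include name="coverage*.ec"/>
      </fileset>
      <html outfile="${basedir}/coverage/index.html"/>
    </report>
  </emma>
</target>
```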
                
> Sub-processes should write EMMA coverage data to separate files
> ---------------------------------------------------------------
>
>                 Key: DERBY-5801
>                 URL: https://issues.apache.org/jira/browse/DERBY-5801
>             Project: Derby
>          Issue Type: Improvement
>          Components: Test
>    Affects Versions: 10.10.0.0
>            Reporter: Knut Anders Hatlen
>            Assignee: Knut Anders Hatlen
>            Priority: Minor
>         Attachments: d5801-1a.diff, separate.diff
>
>
> When generating EMMA coverage reports after running suites.All, I frequently see that coverage.ec is corrupted, and no report is produced. Typical failures look like this:
> java.io.UTFDataFormatException: malformed input around byte 52
> 	at java.io.DataInputStream.readUTF(DataInputStream.java:656)
> 	at java.io.DataInputStream.readUTF(DataInputStream.java:564)
> 	at com.vladium.emma.data.ClassDescriptor.readExternal(ClassDescriptor.java:171)
> 	at com.vladium.emma.data.MetaData.readExternal(MetaData.java:228)
> 	at com.vladium.emma.data.DataFactory.readEntry(DataFactory.java:770)
> 	at com.vladium.emma.data.DataFactory.mergeload(DataFactory.java:461)
> 	at com.vladium.emma.data.DataFactory.load(DataFactory.java:56)
> 	at com.vladium.emma.report.ReportProcessor._run(ReportProcessor.java:175)
> 	at com.vladium.emma.Processor.run(Processor.java:54)
> 	at com.vladium.emma.report.reportCommand.run(reportCommand.java:130)
> 	at emma.main(emma.java:40)
> or
> Exception in thread "main" com.vladium.emma.EMMARuntimeException: unexpected failure:

>         at com.vladium.emma.Command.exit(Command.java:237)
>         at com.vladium.emma.report.reportCommand.run(reportCommand.java:145)
>         at emma.main(emma.java:40)
> Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
>         at java.util.HashMap.<init>(HashMap.java:181)
>         at java.util.HashMap.<init>(HashMap.java:193)
>         at com.vladium.emma.data.MetaData.readExternal(MetaData.java:223)
>         at com.vladium.emma.data.DataFactory.readEntry(DataFactory.java:770)
>         at com.vladium.emma.data.DataFactory.mergeload(DataFactory.java:461)
>         at com.vladium.emma.data.DataFactory.load(DataFactory.java:56)
>         at com.vladium.emma.report.ReportProcessor._run(ReportProcessor.java:175)
>         at com.vladium.emma.Processor.run(Processor.java:54)
>         at com.vladium.emma.report.reportCommand.run(reportCommand.java:130)
>         ... 1 more
> I suspect the problem is that all sub-processes spawned by the main test process write to the same file, sometimes with multiple processes running at the same time, and that the file gets corrupted because there is no coordination between the processes when they write to it.
> Experiments I have run also indicate that making the sub-processes write to different files helps (I haven't yet managed to reproduce the corruption with that change), so I suggest we make that change.
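The fix described above amounts to giving each spawned JVM its own EMMA output file. A minimal sketch of that idea, assuming EMMA's standard mechanism of overriding the output file via the emma.coverage.out.file system property; the class name, counter-based naming scheme, and sub-process main class are illustrative, not what the Derby patch actually does.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class EmmaCoverageArgs {
    // One counter per parent JVM keeps the generated file names unique,
    // so concurrent sub-processes never share a coverage file.
    private static final AtomicInteger counter = new AtomicInteger();

    /** Builds the extra JVM argument for one spawned sub-process. */
    public static String nextCoverageArg() {
        return "-Demma.coverage.out.file=coverage-"
                + counter.incrementAndGet() + ".ec";
    }

    public static void main(String[] args) {
        // Hypothetical command line for one sub-process; the main class
        // here is a placeholder, not a real Derby test class.
        List<String> cmd = new ArrayList<>();
        cmd.add("java");
        cmd.add(nextCoverageArg());
        cmd.add("org.example.SubProcessMain");
        System.out.println(String.join(" ", cmd));
    }
}
```

Because each sub-process writes to its own coverage-N.ec file, the report step then merges all of them instead of reading one shared, possibly corrupted file.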

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
