Subject: Re: CAS serializationWithCompression
From: Marshall Schor <msa@schor.com>
To: user@uima.apache.org
Date: Wed, 13 Jan 2016 17:27:09 -0500

Great! Glad to see some use is being made of JSON :-).

-Marshall

On 1/13/2016 2:05 PM, D. Heinze wrote:
> Found the problem by serializing the CAS to JSON.
> The CAS sofaText was acting like a pushdown stack, accumulating the
> full text of each successive document, due to an input stream and
> buffer not getting properly closed/cleared between iterations.
>
> Thanks / Dan
>
> -----Original Message-----
> From: D. Heinze [mailto:dheinze@gnoetics.com]
> Sent: Tuesday, January 12, 2016 2:13 PM
> To: user@uima.apache.org
> Subject: RE: CAS serializationWithCompression
>
> Thanks, Marshall. Will do. I just completed upgrading from UIMA 2.6.0
> to 2.8.1, just to make sure there were no issues there. Will now get
> back to the CAS serialization issue. Yes, I've been trying to think of
> where there could be retained junk that is getting added back into the
> CAS with each iteration.
>
> -Dan
>
> -----Original Message-----
> From: Marshall Schor [mailto:msa@schor.com]
> Sent: Tuesday, January 12, 2016 11:56 AM
> To: user@uima.apache.org
> Subject: Re: CAS serializationWithCompression
>
> Hmmm, seems like unusual behavior.
>
> It would help a lot in diagnosing this if you could construct a small
> test case - one which perhaps creates a CAS, fills it with a bit of
> data, does the compressed serialization, resets the CAS, and loops -
> and see if that produces "expanding" serializations.
>
> -- If it does, please post the test case to a Jira and we'll
> diagnose / fix this :-)
>
> -- If it doesn't, then you have to get closer to your actual use case
> and iterate until you see what it is that you last added that starts
> making it serialize ever-expanding instances. That will be a big clue,
> I think.
>
> -Marshall
>
> On 1/12/2016 10:54 AM, D. Heinze wrote:
>> The CAS.size() starts out larger than the serializedWithCompression
>> version, but eventually the serializedWithCompression version grows
>> to be larger than the CAS.size().
>>
>> The overall process is:
>> * Create a new CAS.
>> * Read in an XML document and store the structure and content in the CAS.
>> * Tokenize and parse the document and store that info in the CAS.
>> * Run a number of lexical engines and ConceptMapper engines on the
>> data and store that info in the CAS.
>> * Produce an XML document with the content of the original input
>> document marked up with the analysis results; write it out to a file
>> and also store it in the CAS.
>> * serializeWithCompression to a FileOutputStream.
>> * cas.reset()
>> * Iterate on the next input document.
>>
>> All the work other than creating the CAS and cas.reset() is done
>> using the JCas. Even though the output CASes keep getting larger,
>> they seem to deserialize just fine and are usable.
>>
>> Thanks / Dan
>>
>> -----Original Message-----
>> From: Richard Eckart de Castilho [mailto:rec@apache.org]
>> Sent: Tuesday, January 12, 2016 2:45 AM
>> To: user@uima.apache.org
>> Subject: Re: CAS serializationWithCompression
>>
>> Is the CAS.size() larger than the serialized version, or smaller?
>> What are you actually doing to the CAS? Just serializing/deserializing
>> a couple of times in a row, or do you actually add feature structures?
>> The sample code you show doesn't give any hint about where the CAS
>> comes from or what is being done with it.
>>
>> -- Richard
>>
>>> On 12.01.2016, at 03:06, D. Heinze wrote:
>>>
>>> I'm having a problem with CAS serializationWithCompression. I am
>>> processing a few million text documents on an IBM P8 with 16
>>> physical SMT-8 CPUs, 200 GB RAM, Ubuntu 14.04.3 LTS, and IBM Java 1.8.
>>>
>>> I run 55 UIMA pipelines concurrently. I'm using UIMA 2.6.0.
>>>
>>> I use serializeWithCompression to save the final state of the
>>> processing on each document to a file for later processing.
>>>
>>> However, the size of the serialized CAS just keeps growing. The size
>>> of the CAS is stable, but the serialized CASes just keep getting
>>> bigger. I even went to creating a new CAS for each process instead
>>> of using cas.reset().
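[The "pushdown stack" accumulation Dan eventually found can be reproduced with plain Java, independent of UIMA: a buffer shared across loop iterations keeps the bytes of every earlier document unless it is explicitly cleared. A minimal sketch with hypothetical names, no UIMA dependency:]

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class BufferReuseDemo {

    // Reads one "document" into a buffer that is shared across
    // iterations. Without reset(), the buffer still holds the bytes of
    // every earlier document, so each successive result grows - the
    // same accumulation effect described in this thread.
    static String readDocument(ByteArrayOutputStream shared, String doc,
                               boolean clearBetweenDocs) {
        if (clearBetweenDocs) {
            shared.reset();  // discard bytes left over from earlier iterations
        }
        byte[] bytes = doc.getBytes(StandardCharsets.UTF_8);
        shared.write(bytes, 0, bytes.length);
        return new String(shared.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteArrayOutputStream shared = new ByteArrayOutputStream();
        System.out.println(readDocument(shared, "doc1", false)); // doc1
        System.out.println(readDocument(shared, "doc2", false)); // doc1doc2  <- accumulation bug
        System.out.println(readDocument(shared, "doc2", true));  // doc2      <- buffer cleared first
    }
}
```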
>>> I have also tried writing the serialized CAS to a byte-array output
>>> stream first and then to a file, but it is the
>>> serializeWithCompression that causes the size problem, not writing
>>> the file.
>>>
>>> Here's what the code looks like. Flushing or not flushing does not
>>> make a difference. Closing or not closing the file output stream
>>> does not make a difference (other than leaking memory). I've also
>>> tried doing serializeWithCompression with type filtering. I wanted
>>> to try using a Marker, but cannot see how to do that. The problem
>>> exists regardless of running 1 or 55 pipelines concurrently.
>>>
>>> File fout = new File(documentPath);
>>> fos = new FileOutputStream(fout);
>>> org.apache.uima.cas.impl.Serialization.serializeWithCompression(cas, fos);
>>> fos.flush();
>>> fos.close();
>>> logger.info("serializedCas size " + cas.size() + " ToFile " + documentPath);
>>>
>>> Suggestions will be appreciated.
>>>
>>> Thanks / Dan
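[One way to rule out leaked file handles in a loop like the one above is try-with-resources, which guarantees the stream is closed (and therefore flushed) even if serialization throws. A sketch under stated assumptions: the Serializer interface here is a stand-in, not a UIMA type; in the real code its body would be the serializeWithCompression call from the snippet above.]

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SerializeLoop {

    // Stand-in for the real call, e.g.
    // org.apache.uima.cas.impl.Serialization.serializeWithCompression(cas, out);
    interface Serializer {
        void serialize(OutputStream out) throws IOException;
    }

    // Writes one serialized document per file. try-with-resources closes
    // the stream on every exit path from the method, so no handle
    // survives into the next iteration of the document loop.
    static void writeDocument(Path file, Serializer serializer) throws IOException {
        try (OutputStream out = Files.newOutputStream(file)) {
            serializer.serialize(out);
        }
    }
}
```

[On the Marker question: the Serialization class has several serializeWithCompression overloads; the exact delta-serialization signature should be checked against the UIMA 2.x Javadoc rather than inferred from this sketch.]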