cocoon-dev mailing list archives

From "Pier Fumagalli (JIRA)" <>
Subject [jira] Commented: (COCOON-1658) Somewhere output is held...
Date Tue, 25 Oct 2005 23:57:59 GMT

Pier Fumagalli commented on COCOON-1658:

That did the trick. I'm now seeing the XML being sent as soon as it's serialized.

May I ask why there's an unlimited buffer for the output? Setting a buffer of "0" or of "4096"
didn't change anything in my environment (as I assume Jetty already does some buffering on
its own), while not setting it (unlimited, I'd suppose) saved me 1 or 2 seconds over 30. Not
a big deal...

If we consider that "normally" it takes about the same amount of time to process a page as
to deliver it, overall I'm seeing a huge improvement when the content is streamed as it comes
out of the pipeline (normally my clients can download at around 500 kilobytes/sec).

So, my question is: why is the buffer currently set to "unlimited"? Is there a specific reason
for this?

The problem is also that once the request is done, all my 100 megabytes of buffered output
need to be garbage collected (and that takes time), occasionally locking up the VM while
object relationships are checked...

Wouldn't it be more sensible to set the buffer to something more conservative like 4, 8, 16
or 32 kilobytes?
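For illustration, here is a minimal sketch (not Cocoon code; the class names and sizes are hypothetical) of the difference between buffering the whole response in memory and using a small fixed buffer that flushes to the client as it fills:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BufferSketch {

    /** Counts bytes as they "reach the client". */
    static final class CountingSink extends OutputStream {
        long seen = 0;
        @Override public void write(int b) { seen++; }
        @Override public void write(byte[] b, int off, int len) { seen += len; }
    }

    /** Unbounded buffering: nothing reaches the sink until the very end. */
    static long unboundedSeenBeforeFlush() throws IOException {
        CountingSink sink = new CountingSink();
        ByteArrayOutputStream all = new ByteArrayOutputStream(); // grows without limit
        byte[] chunk = new byte[1024];
        for (int i = 0; i < 1000; i++) all.write(chunk); // ~1 MB piles up in memory
        long seen = sink.seen; // still 0: the client has seen nothing yet
        all.writeTo(sink);     // "25 seconds of silence, then one big lump"
        return seen;
    }

    /** 8 KiB buffer: bytes stream out every time the buffer fills. */
    static long boundedSeenBeforeFlush() throws IOException {
        CountingSink sink = new CountingSink();
        OutputStream out = new BufferedOutputStream(sink, 8192);
        byte[] chunk = new byte[1024];
        for (int i = 0; i < 1000; i++) out.write(chunk);
        long seen = sink.seen; // almost everything already delivered
        out.flush();
        return seen;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("unbounded, seen before flush: " + unboundedSeenBeforeFlush());
        System.out.println("bounded,   seen before flush: " + boundedSeenBeforeFlush());
    }
}
```

In a servlet environment the equivalent knob is HttpServletResponse.setBufferSize(int), which containers like Jetty honor; with a small buffer the client starts receiving bytes as soon as the first buffer-full is serialized, and the server never holds the whole response in memory.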

> Somewhere output is held...
> ---------------------------
>          Key: COCOON-1658
>          URL:
>      Project: Cocoon
>         Type: Bug
>   Components: * Cocoon Core
>     Versions: 2.1.8-dev (Current SVN)
>     Reporter: Pier Fumagalli
>     Priority: Critical

> Cocoon standard, as of right now, built without any blocks.
> I modify the default sitemap adding one simple entry:
>     <map:match pattern="bigtest">
>       <map:generate src="bigtest.xml"/>
>       <map:serialize type="xml"/>
>     </map:match>
> The file "bigtest.xml" is a 100 MB XML file that I simply want to generate and serialize
> (minimal test, no transformers that can do anything weird).
> I then open my terminal and do a "curl http://localhost:8888/bigtest > /dev/null",
> to have an idea of the throughput for this file.
> Apparently, the output is held for roughly 25 seconds: nothing comes out, no bytes are
> serialized. All of a sudden, the entire 100 megabytes are serialized in one big lump (and
> it takes 5 seconds to do so).
> This happens whether the pipeline is configured as "caching" or "noncaching" (nothing changes).
> In the first 25 seconds, the JVM running Cocoon uses 100% of my processor (so, it's doing
something), and the TOP shows something _really_ strange.
> My JVM grows by roughly 200 megabytes in size (note: I start Cocoon, post the big request,
> close Cocoon).
> This is a trace from my TOP:
> -----------------------------------------------------------------------------
> 12498 java         0.1%  0:03.01  19   357   240  25.1M  28.7M  25.4M   735M
> 12498 java        87.2%  0:06.22  19   403   242  54.2M+ 28.7M  55.1M+  735M-
> 12498 java        75.7%  0:10.88  19   403   242  78.3M  28.7M  79.2M   735M
> 12498 java        80.2%  0:14.78  19   403   242   129M  28.7M   130M   735M
> 12498 java        84.3%  0:19.77  19   403   242   168M+ 28.7M   169M+  735M
> 12498 java        77.4%  0:23.67  19   403   242   231M  28.7M   232M   735M
> 12498 java        40.7%  0:27.92  19   403   242   231M+ 28.7M   232M+  735M+
> 12498 java         0.1%  0:28.18  20   408   245   231M  28.7M   232M   735M
> Something tells me that we are indeed caching all the content in a big char[] (100 megabytes
> of US-ASCII text are 200 megabytes when stored in a char[]).
> Any clue on where this can happen? It's impairing our ability to serve bigger feeds (aka,
> 2 gigs! :-P)
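The char[] arithmetic in the report is easy to verify: a Java char is a UTF-16 code unit of two bytes, so US-ASCII content doubles in size when decoded into a char[]. A small sketch (the sample string is hypothetical):

```java
import java.nio.charset.StandardCharsets;

public class CharSize {

    /** Bytes needed to hold the given US-ASCII bytes as a char[]. */
    static long asCharArrayBytes(byte[] ascii) {
        char[] chars = new String(ascii, StandardCharsets.US_ASCII).toCharArray();
        return (long) chars.length * Character.BYTES; // Character.BYTES == 2
    }

    public static void main(String[] args) {
        byte[] ascii = "a 100 MB feed, in miniature".getBytes(StandardCharsets.US_ASCII);
        System.out.println(ascii.length + " ASCII bytes -> "
                + asCharArrayBytes(ascii) + " bytes as char[]");
        // Scaled up: 100 MB of ASCII held in one char[] needs ~200 MB,
        // matching the ~200 MB growth visible in the top trace above.
    }
}
```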

This message is automatically generated by JIRA.
