incubator-kato-spec mailing list archives

From Steve Poole <spoole...@googlemail.com>
Subject Re: JSR 326 and Apache Kato - A "state of the nation" examination
Date Mon, 25 Jan 2010 10:21:31 GMT
Hi Alois - I waited a few days to see what other people might say.

Thanks for your input. I've responded inline below.

On Wed, Jan 20, 2010 at 2:18 PM, Alois Reitbauer <
alois.reitbauer@dynatrace.com> wrote:

> Everybody,
>
> here is my perspective - the perspective of a performance tool vendor.
> The motivation for us to be part of the JSR  is to improve the
> possibilities of working with memory dumps, specifically for larger
> 64bit JVMs.  We see that the JVMTI API has reached its limits regarding
> performance and usability. It was a good API at the time created,
> however times and applications change.
>

Are you against improving JVMTI if we determined that it was a sensible
option? I ask this because there is a scenario where that would be the
case. The scenario is tied up with thoughts I have about how we collect the
data required to go into a snapshot. I won't write them up here, but I will
start a new thread on the snapshot API design.


> The major use case we have to support is memory analysis of large JVMs
> as well as complex production environments. By complex I am mainly
> talking about their size, like environments with up to a couple of
> hundred JVMs. The usage of memory profiling goes beyond the resolution
> of an out-of-memory error. We see more and more companies that want to
> optimize their memory footprints. In addition to creating a single dump
> after a JVM has crashed, we see analysis cases where you want to compare
> a number of dumps over time, which requires efficient dumping
> mechanisms.
>
How important is tracking individual objects across dumps over time?


> We are not using vendor-specific dump formats at all, but have our own
> implementation consisting of an agent and a server part. Working with dump
> files is not practical in many use cases. Specifically, in production the
> logistics of working with files introduces unnecessary complexity -
> especially when fast problem analysis is required.  Nearly every tool
> vendor uses his own dump format. Maybe not initially, but after
> preprocessing. In order to work efficiently with large dumps you need to
> perform operations like indexing, etc.
>
> However, this does not mean that I propose an API at the abstraction
> level of JVMTI. I really like the idea of having a kind of query
> language, getting binary data back and then processing it in a
> standardized way. Every JVM vendor should be free in how he implements
> the data collection process. At the same time I want to be able to
> either write this information to disk or stream it somewhere for further
> processing. So we also need a streaming-type interface which allows us to
> process the information as a stream rather than as a whole file.


"Streaming" that is a scary concept.  Though I suppose if the design of the
API has a visitor type pattern then it would not be too scary.
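
To make that concrete, here is a minimal sketch of the shape I have in
mind - purely illustrative, with invented names (DumpStreamReader,
DumpRecordVisitor) that are not part of Kato or JSR 326 today. The reader
decodes the binary stream and pushes records to a visitor, so a tool never
has to hold the whole dump in memory or as a file on disk:

// Hypothetical sketch only - these types do not exist in Kato or JSR 326.
public interface DumpStreamReader {

    // Callbacks invoked as records are decoded from the stream.
    interface DumpRecordVisitor {
        void visitClassRecord(String className, long instanceCount, long totalBytes);
        void visitObjectRecord(long objectId, String className, long sizeInBytes);
        void visitEnd();
    }

    // Reads the (possibly remote) binary stream and calls back into the
    // visitor record by record; nothing is buffered beyond the current record.
    void accept(java.io.InputStream dumpStream, DumpRecordVisitor visitor)
            throws java.io.IOException;
}

With that shape a tool can decide per record whether to index, aggregate or
discard, which is what would make streaming tolerable.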


> Rethinking
> these requirements, we need a protocol/API to communicate with the JVM to
> issue our queries and an API to process the binary stream we get back -
> a very similar approach to how JDBC works.
>
Agree - that's the sort of pattern I see too.
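
Spelling out the JDBC analogy, the client-facing side could look roughly
like this - again just a sketch with invented names, not a proposal for the
actual API: a connection to the JVM, a prepared query describing what to
collect, and a forward-only result stream.

// Sketch following the JDBC analogy; none of these types exist in Kato or
// in any JVM today, and the query text mentioned in comments is invented.
public interface DumpConnection {

    // Prepare a query describing which data the JVM should collect,
    // e.g. something like "instances of com.example.* with sizes".
    DumpQuery prepareQuery(String queryText);

    void close() throws java.io.IOException;

    interface DumpQuery {
        // Executes the query; the binary result comes back as a forward-only stream.
        DumpResultStream execute() throws java.io.IOException;
    }

    interface DumpResultStream {
        boolean next() throws java.io.IOException;   // advance to the next record
        String getString(String fieldName);
        long getLong(String fieldName);
        void close() throws java.io.IOException;
    }
}

The point of the analogy is that a tool would write to interfaces like
these once and work against any compliant JVM, while each vendor stays free
in how the data is actually collected.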


> The protocol part is important because I have to specify in advance
> which information I want. I do not necessarily want to get back all the
> information. Monitoring the number of objects of a specific class in
> most cases requires creating a whole heap dump. I know that there are
> already other ways to do that; however, none of them works for a
> create-dump-then-analyze approach.
>
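
That selective collection could amount to a small request description sent
over the protocol before any data flows. Here is a sketch of what "specify
in advance" might mean, with invented names and no relation to any existing
Kato or JSR 326 type:

// Invented illustration only: the tool describes exactly what it wants
// before anything is collected, so counting instances of one class pattern
// does not force a full heap dump.
public final class DumpRequest {

    public enum Detail { INSTANCE_COUNTS, OBJECT_GRAPH, FULL_HEAP }

    private final Detail detail;
    private final String classNamePattern;

    public DumpRequest(Detail detail, String classNamePattern) {
        this.detail = detail;
        this.classNamePattern = classNamePattern;
    }

    // e.g. new DumpRequest(Detail.INSTANCE_COUNTS, "com.example.*") asks
    // only for per-class instance counts matching the pattern.
    public Detail getDetail() { return detail; }
    public String getClassNamePattern() { return classNamePattern; }
}
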
> From my perspective we need to support the diagnosis of
> application-level problems as our primary goal. The end-users of dumps
> will always be developers - who else is able to make sense of the data?
> However, the environments range from a small development JVM to a large
> 32 GB production JVM. Tools for the latter are very rare.
>
> Don't get me wrong. The work done in the KATO project is great. It is a
> great showcase and reference on the similarities and differences in
> vendor-specific dump formats.  I am wondering who you see as the users
> of KATO? We as a tool vendor will still require our own dump solution,
> for the technical reasons stated above.
>
We have to look at this effort as a multi-step approach.  Our final goal
has to be to improve the diagnostic capabilities for end-user customers
(those who run the Java applications).  That means improved tools, and that
means tools vendors writing said tools. Tools vendors do not want to write
tools for a single JVM, so we have to provide them with a standard
interface.


> I also see joint efforts with JVM vendors as mandatory as otherwise we
> are not able to make a significant technological improvement here.
>

Agree entirely.  I'm going to hold off until the Sun/Oracle acquisition has
completed and then I will ask the Sun and Oracle EG reps to present their
position on this JSR.

> Starting with the OpenJDK project is a good first step. However at the
> end all vendors have to provide implementations. The new features of
> JVMTI for Java6 also show that there is activity and willingness to
> contribute.
>
> I am happy that there is now increased momentum again and I am looking
> forward to the future of JSR 326.  First we have to agree on what this
> future should look like.
>
> Best
>
> Alois
>
>


-- 
Steve
