incubator-kato-commits mailing list archives

Subject svn commit: r797411 - /incubator/kato/trunk/org.apache.kato/
Date Fri, 24 Jul 2009 10:34:44 GMT
Author: spoole
Date: Fri Jul 24 10:34:44 2009
New Revision: 797411

updated introduction in spec


Modified: incubator/kato/trunk/org.apache.kato/
--- incubator/kato/trunk/org.apache.kato/ (original)
+++ incubator/kato/trunk/org.apache.kato/ Fri Jul 24 10:34:44 2009
@@ -11,12 +11,47 @@
   <section xml:id="introduction.jsr326">
     <title>What is JSR 326?</title>
-	<para>Places to visit</para>
-    <para><uri xlink:href=""   xlink:type="simple">Kato
+	<para>JSR 326 is intended to be a Java API specification for standardising how and
what can be retrieved from the contents of post-mortem artefacts - 
+	typically process and JVM dumps.</para><para>Unusually for new APIs, this project
will endeavour to encompass the old and the new. A diagnostic solution that only works when
users move to the latest release does little to improve diagnosability in the short term.
This project will consume existing dump artefacts as far as possible while developing an
API that can address the emerging trends in JVM and application directions. The most obvious
of these trends are the exploitation of very large heaps, alternative languages and, paradoxically
for Java, the increased use of native memory through vehicles such as NIO.</para>
+	 </section>   
+     <section xml:id="introduction.kato">
+    <title>What is Apache Kato?</title>
+    <para>Project Kato is intended to be the place where the Specification, Reference
implementation (RI) and Technology Compatibility Kit (TCK) are openly created. The intention
is that the Specification and RI will be developed in tight unison, guided by a user-story-focused
approach to ensure that real-world problems drive the project from the beginning.</para>
+    <para>This project is about bringing together people and ideas to create a common,
cross-industry API, and we can't think of a better place to do that than in Apache.</para>
+    </section>
+	<section xml:id="introduction.background">
+    <title>Background</title>
+    <para>Post-mortem versus Live Monitoring: It's worth noting that the term "post
mortem" is used loosely. It does not just imply dead Java Virtual Machines and applications;
JSR 326 also covers living, breathing applications where the dump artefacts are deliberately
produced as part of live monitoring activity. Live monitoring generally means tracing, profiling,
debugging, or even bytecode monitoring and diagnosis by agents via the java.lang.instrument
API. It can also mean analysis of dumps to look for trends and gather statistics. 
+    The live-monitoring diagnostic space is well served except for this last area, which
is where JSR 326 can help.</para>
+    <para>IBM developed an API called DTFJ ("Diagnostic Tooling and Framework for Java") as a means
of providing its support teams with a basis on which to write tools to diagnose Java SDK and Java
application faults. It consists of a native JVM-specific component and the DTFJ API, which
was written in pure Java.</para>
+    <para>In 2009 IBM donated the implementation-dependent portions of DTFJ to the
Apache Kato project.</para>
+    </section>
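+    <!-- The live monitoring described above can be illustrated with the standard library
alone. The sketch below is not part of JSR 326 or DTFJ; it is a minimal, self-contained
example of the kind of in-process inspection (thread snapshots, deadlock detection) that
the standard java.lang.management API already offers in the "well served" live space.
java.lang.instrument, named above, requires packaging an agent JAR, so the management
beans are used here instead. Class and variable names are illustrative only.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class LiveMonitorSketch {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Snapshot every live thread in this JVM -- the kind of data a
        // post-mortem artefact would instead capture at dump time.
        ThreadInfo[] snapshot = threads.dumpAllThreads(false, false);
        for (ThreadInfo info : snapshot) {
            System.out.println(info.getThreadName()
                    + " state=" + info.getThreadState());
        }

        // Deadlock detection, one of the "point tool" problems mentioned
        // in the rationale; a null result means no deadlock was found.
        long[] deadlocked = threads.findDeadlockedThreads();
        System.out.println("deadlocked threads: "
                + (deadlocked == null ? 0 : deadlocked.length));
    }
}
```

The same data is only reachable this way while the JVM is alive; JSR 326's aim is a
comparable, standard view over dump artefacts after the process has gone. -->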
+    <section xml:id="introduction.rationale">
+    <title>Rationale</title>
+    <para>JSR 326 exists because of the widely acknowledged limitations in diagnosing
Java application problems after the fact. 
+    There are many good ways to understand and diagnose problems while they happen, but few
credible or pervasive tools exist for helping resolve problems when all has gone suddenly
and horribly wrong.
+    Outside of "live monitoring" there is no standard way to provide diagnostic information,
and hence no standard tools. 
+    Each tool writer has to figure out how to access the data individually and specifically
for each JVM vendor and operating system. 
+    This sparsity of tools has meant that users have limited options in diagnosing their
own problems, especially unexpected or intermittent failures. 
+    Consequently these users turn to the providers of their software to work out what is wrong.
+    Application, middleware, and JVM vendors are spending increasing time supporting customers
in problem diagnosis. Emerging trends indicate that this is going to get worse. 
+    </para><para>
+   Today JVM heap sizes are measured in small numbers of gigabytes, processors on desktops
come in twos or fours, 
+    and most applications running on a JVM are written in Java. To help analyse problems
in these configurations, 
+    we use a disparate set of diagnostic tools and artefacts. If the problem can't be reproduced
in a debugger, 
+    then things quickly get complicated. There are point tools for problems like deadlock
analysis or the ubiquitous Java out-of-memory problems, 
+    but overall the Java diagnostic tools arena is fragmented and JVM- or OS-specific. Tool
writers have to choose their place in this matrix. 
+    We want to change that by removing the need for tool writers to make a choice. By enabling
tool writers to easily target all the major JVM vendors and operating systems,
+    we expect the number and capability of diagnostic tools to greatly increase. 
+    Tomorrow it gets harder; heap sizes hit hundreds of gigabytes, processors come packaged
in powers of 16, and the JVM commonly executes a wide range of language environments. 
+    We can't tackle tomorrow's problems until we have a platform to address today's.
+	</para>
+    </section>
+  <section xml:id="introduction.links">
+    <title>Places to visit</title>
+    <para><uri xlink:href="" xlink:type="simple">Kato</uri></para>
+   </section>
\ No newline at end of file
