incubator-kato-commits mailing list archives

From spo...@apache.org
Subject svn commit: r800434 - /incubator/kato/trunk/org.apache.kato/kato.docs/src/docbkx/chapters/Introduction.xml
Date Mon, 03 Aug 2009 15:51:25 GMT
Author: spoole
Date: Mon Aug  3 15:51:25 2009
New Revision: 800434

URL: http://svn.apache.org/viewvc?rev=800434&view=rev
Log:
more words in the edr

Modified:
    incubator/kato/trunk/org.apache.kato/kato.docs/src/docbkx/chapters/Introduction.xml

Modified: incubator/kato/trunk/org.apache.kato/kato.docs/src/docbkx/chapters/Introduction.xml
URL: http://svn.apache.org/viewvc/incubator/kato/trunk/org.apache.kato/kato.docs/src/docbkx/chapters/Introduction.xml?rev=800434&r1=800433&r2=800434&view=diff
==============================================================================
--- incubator/kato/trunk/org.apache.kato/kato.docs/src/docbkx/chapters/Introduction.xml (original)
+++ incubator/kato/trunk/org.apache.kato/kato.docs/src/docbkx/chapters/Introduction.xml Mon Aug  3 15:51:25 2009
@@ -1,57 +1,167 @@
 <?xml version="1.0" encoding="UTF-8"?>
-<chapter version="5.0" xml:id="introduction"
-         xmlns="http://docbook.org/ns/docbook"
-         xmlns:xlink="http://www.w3.org/1999/xlink"
-         xmlns:xi="http://www.w3.org/2001/XInclude"
-         xmlns:svg="http://www.w3.org/2000/svg"
-         xmlns:mml="http://www.w3.org/1998/Math/MathML"
-         xmlns:html="http://www.w3.org/1999/xhtml"
-         xmlns:db="http://docbook.org/ns/docbook">
-  <title>Introduction</title>
+<chapter version="5.0" xml:id="introduction" xmlns="http://docbook.org/ns/docbook"
+	xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xi="http://www.w3.org/2001/XInclude"
+	xmlns:svg="http://www.w3.org/2000/svg" xmlns:mml="http://www.w3.org/1998/Math/MathML"
+	xmlns:html="http://www.w3.org/1999/xhtml" xmlns:db="http://docbook.org/ns/docbook">
+	<title>Introduction</title>
+
+	<section xml:id="introduction.jsr326">
+		<title>What is JSR 326?</title>
+		<para>JSR 326 is intended to be a Java API specification for
+			standardising how and what can be retrieved from the contents of
+			post-mortem artefacts -
+			typically process and JVM dumps.</para>
+		<para>Unusually for new APIs, this project will endeavour to
+			encompass the old and the new. A diagnostic solution that only works
+			when users move to the latest release does little to improve
+			diagnosability in the short term. This project will consume existing
+			dump artefacts as well as possible while developing an API that can
+			address the emerging trends in JVM and application directions. The
+			most obvious of these trends are the exploitation of very large
+			heaps, alternative languages and, paradoxically for Java, the
+			increased use of native memory through vehicles such as NIO.</para>
+	</section>
+
+	<section xml:id="introduction.kato">
+		<title>What is Apache Kato?</title>
+		<para>Project Kato is intended to be the place where the
+			Specification, Reference implementation (RI) and Technology
+			Compatibility Kit (TCK) are openly created. The intention is that the
+			Specification and RI will be developed in tight unison, guided by a
+			user-story-focused approach to ensure that real-world problems drive
+			the project from the beginning.</para>
+		<para>This project is about bringing together people and ideas to
+			create a common, cross industry API, and we can't think of a better
+			place to do that than in Apache.</para>
+	</section>
 
-  <section xml:id="introduction.jsr326">
-    <title>What is JSR 326?</title>
-	<para>JSR 326 is intended to be a Java API specification for standardising how and what can be retrieved from the contents of post-mortem artefacts - 
-	typically process and JVM dumps.</para><para>Unusually for new APIs, this project will endeavour to encompass the old and the new. A diagnostic solution that only works when users move to the latest release does little to improve diagnosability in the short term. This project will consume existing dump artefacts as well as possible while developing an API that can address the emerging trends in JVM and application directions. The most obvious of these trends are the exploitation of very large heaps, alternative languages and, paradoxically for Java, the increased use of native memory through vehicles such as NIO.</para>
-	 </section>   
-     <section xml:id="introduction.kato">
-    <title>What is Apache Kato?</title>
-    <para>Project Kato is intended to be the place where the Specification, Reference implementation (RI) and Technology Compatibility Kit (TCK) are openly created. The intention is that the Specification and RI will be developed in tight unison, guided by a user-story-focused approach to ensure that real-world problems drive the project from the beginning.</para>
-    <para>This project is about bringing together people and ideas to create a common, cross industry API, and we can't think of a better place to do that than in Apache.</para>
-    </section>
-    
 	<section xml:id="introduction.background">
-    <title>Background</title>
-    <para>Post-mortem versus Live Monitoring: It's worth noting that the term "post mortem" is used loosely. It does not just imply dead Java Virtual Machines and applications; JSR 326 also covers living, breathing applications where the dump artefacts are deliberately produced as part of live monitoring activity. Live monitoring generally means tracing, profiling, debugging, or even bytecode monitoring and diagnosis by agents via the java.lang.instrument API . It can also mean analysis of dumps to look for trends and gather statistics. 
-    The live-monitoring diagnostic space is well served except for this last area, which is where JSR 326 can help.
-    IBM developed an API called DTFJ ("Diagnostic Tooling and Framework for Java") as a means of providing its support teams a basis on which to write tools to diagnose Java SDK and Java application faults. It consists of a native JVM-specific component and the DTFJ API, which was written in pure Java. 
+		<title>Background</title>
+		<para>Post-mortem versus Live Monitoring: It's worth noting
+			that the term "post mortem" is used loosely. It does not just imply
+			dead Java Virtual Machines and applications; JSR 326 also covers
+			living, breathing applications where the dump artefacts are
+			deliberately produced as part of live monitoring activity. Live
+			monitoring generally means tracing, profiling, debugging, or even
+			bytecode monitoring and diagnosis by agents via the
+			java.lang.instrument API. It can also mean analysis of dumps to look
+			for trends and gather statistics.
+			The live-monitoring diagnostic space
+			is well served except for this last
+			area, which is where JSR 326 can
+			help.
+			IBM developed an API called DTFJ ("Diagnostic Tooling and
+			Framework for
+			Java") as a means of providing its support teams a basis
+			on which to
+			write tools to diagnose Java SDK and Java application
+			faults. It
+			consists of a native JVM-specific component and the DTFJ
+			API, which
+			was written in pure Java. 
     </para>
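
For readers meeting the java.lang.instrument API for the first time, the minimal agent sketch below shows what "live monitoring" by an agent looks like in practice. The premain entry point and the Instrumentation interface are the standard java.lang.instrument mechanism; the class name LoadedClassAgent is illustrative only, and nothing here is part of JSR 326 or Kato:

    import java.lang.instrument.Instrumentation;

    // Minimal live-monitoring agent, loaded with -javaagent:agent.jar
    // (the jar's manifest must name this class as Premain-Class).
    public class LoadedClassAgent {
        public static void premain(String agentArgs, Instrumentation inst) {
            // Instrumentation gives an agent a live view of the running JVM;
            // JSR 326, by contrast, works on dump artefacts written to disk.
            System.out.println("Classes loaded at startup: "
                    + inst.getAllLoadedClasses().length);
        }
    }
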
-    <para>In 2009 IBM donated the implementation dependent portions of DTFJ to the Apache Kato project</para>
-    </section>
-    <section xml:id="introduction.rationale">
-    <title>Rationale</title>
-    <para>JSR 326 exists because of the widely acknowledged limitations in diagnosing Java application problems after the fact. 
-    There are many good ways to understand and diagnose problems while they happen, but few credible or pervasive tools exist for helping resolve problems when all has gone suddenly and horribly wrong.
-    Outside of "live monitoring" there is no standard way to provide diagnostics information, and hence no standard tools. 
-    Each tool writer has to figure out how to access the data individually and specifically for each JVM vendor and operating system. 
-    This sparsity of tools has meant that users have limited options in diagnosing their own problems, especially unexpected or intermittent failures. 
-    Consequently these users turn to the providers of their software to work out what is happening. 
-    Application, middleware, and JVM vendors are spending increasing time supporting customers in problem diagnosis. Emerging trends indicate that this is going to get worse. 
-    </para><para>
-   Today JVM heap sizes are measured in small numbers of gigabytes, processors on desktops come in twos or fours, 
-    and most applications running on a JVM are written in Java. To help analyse problems in these configurations, 
-    we use a disparate set of diagnostic tools and artefacts. If the problem can't be reproduced in a debugger, 
-    then things quickly get complicated. There are point tools for problems like deadlock analysis or the ubiquitous Java out-of-memory problems, 
-    but overall the Java diagnostic tools arena is fragmented and JVM- or OS-specific. Tool writers have to choose their place in this matrix. 
-    We want to change that by removing the need for tool writers to make a choice. By enabling tool writers to easily target all the major JVM vendors and operating systems,
-    we expect the number and capability of diagnostic tools to greatly increase. 
-    Tomorrow it gets harder; heap sizes hit 100's of gigabytes, processors come packaged in powers of 16, and the JVM commonly executes a wide range of language environments. 
-    We can't tackle tomorrow's problems until we have a platform to address today's.
+		<para>In 2009 IBM donated the implementation-dependent portions of
+			DTFJ to the Apache Kato project.</para>
+	</section>
+	<section xml:id="introduction.rationale">
+		<title>Rationale</title>
+		<para>JSR 326 exists because of the widely acknowledged limitations
+			in diagnosing Java application problems after the fact.
+			There are many
+			good ways to understand and diagnose problems while they
+			happen, but
+			few credible or pervasive tools exist for helping resolve
+			problems
+			when all has gone suddenly and horribly wrong.
+			Outside of "live
+			monitoring" there is no standard way to provide diagnostics
+			information, and hence no standard tools.
+			Each tool writer has to
+			figure out how to access the data individually
+			and specifically for
+			each JVM vendor and operating system.
+			This sparsity of tools has meant
+			that users have limited options in
+			diagnosing their own problems,
+			especially unexpected or intermittent
+			failures.
+			Consequently these
+			users turn to the providers of their software to work out what
+			is
+			happening.
+			Application, middleware, and JVM vendors are spending
+			increasing time supporting
+			customers in problem diagnosis. Emerging
+			trends indicate that this is
+			going to get worse. 
+    </para>
+		<para>
+			Today JVM heap sizes are measured in small numbers of gigabytes,
+			processors on desktops come in twos or fours,
+			and most applications
+			running on a JVM are written in Java. To help
+			analyse problems in
+			these configurations,
+			we use a disparate set of diagnostic tools and
+			artefacts. If the
+			problem can't be reproduced in a debugger,
+			then
+			things quickly get complicated. There are point tools for problems
+			like deadlock analysis or the ubiquitous Java out-of-memory problems,
+			but overall the Java diagnostic tools arena is fragmented and JVM- or
+			OS-specific. Tool writers have to choose their place in this matrix.
+			We want to change that by removing the need for tool writers to make
+			a choice. By enabling tool writers to easily target all the major JVM
+			vendors and operating systems,
+			we expect the number and capability of
+			diagnostic tools to greatly
+			increase.
+			Tomorrow it gets harder; heap
+			sizes hit hundreds of gigabytes, processors come
+			packaged in powers of
+			16, and the JVM commonly executes a wide range
+			of language
+			environments.
+			We can't tackle tomorrow's problems until we have a
+			platform to address
+			today's.
 	</para>
 	</section>
-  <section xml:id="introduction.links">
-    <title>Places to visit</title>
-    <para><uri xlink:href="http://cwiki.apache.org/KATO/"   xlink:type="simple">Kato Wiki</uri>
-   </para>
-   </section> 
+	<section>
+		<title>Tell me more about "Diagnostic Artefacts"</title>
+		<para>
+			Simply put, when something goes wrong you'd like to know why.
+			A diagnostic artefact is whatever material is left when your
+			application or JVM fails.
+			Sometimes it's a message to the console, or a record in a log file.
+			Hopefully you'll get enough information to figure out what happened and fix
+			the problem.
+		</para>
+		<para>
+			Unfortunately there are many cases where you don't get to see the
+			obvious
+			<quote>smoking gun</quote>.
+		</para>
+		<para>
+			In those situations you need access to more information so you can
+			dig into the causes of your problem.
+			Historically the sorts of artefact you need are split into two types: those that
+			show a time element
+			and those that are a snapshot of a working system. The former of these
+			types is of course a trace, while the
+			latter comes under the term
+			<quote>dump</quote>
+			or
+			<quote>core file</quote>.</para>
+			<para>It's this latter type that JSR 326 is designed to consume.
+		</para>
+	</section>
+	<section>
+		<title>What types of Dump are supported by this API?</title>
+		<para>The Apache Kato incubator project is developing the reference implementation for JSR 326.
+		That work includes an implementation that can read standard binary HPROF files and an experimental new dump format
+		 that uses JVMTI to expose more information than is currently in an HPROF file.
+		</para>
+		<para>Other JVM vendors can develop implementations to support the API for their own dumps.</para>
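
As background to the HPROF support mentioned above, the sketch below shows one common way to produce a binary HPROF heap dump from a running HotSpot JVM. HotSpotDiagnosticMXBean is a HotSpot-specific, non-standard interface and is not part of JSR 326; it appears here only to illustrate where such dump artefacts come from, and the class name HeapDumper is our own:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Writes a binary HPROF heap dump of the current JVM to disk.
    // The resulting file is the kind of artefact the Kato reference
    // implementation is being developed to read.
    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    server, "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            diag.dumpHeap("snapshot.hprof", true); // true = dump live objects only
        }
    }

A tool written against the JSR 326 API would then open snapshot.hprof (or a system core file) and navigate the captured heap, threads, and classes from there.
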
+	</section>
 </chapter>
\ No newline at end of file


