From: Adam Pilkington <pilkington.adam@googlemail.com>
To: kato-spec@incubator.apache.org
Date: Thu, 19 Feb 2009 11:46:34 +0000
Subject: Re: Design guidelines

I would like to add another area for consideration to this list, and that
is the approach that we should use for logging. I think that we should be
looking to define the following

   - The logging package to use - is this going to be java.util.logging or
   do we want to use Log4j or something else?
   - The namespace conventions, so that the logging output from one
   component does not become corrupted by other components.
   - Some guidance on the logging levels that are to be used, and the type
   of events that occur within those levels. This should mean that if we
   turn on just error logging we are not swamped with other messages not
   related to errors.

2009/2/17 Adam Pilkington

> Hi, I would like to add the following comments to this,
>
> Section 2.6 : I don't necessarily believe that checked exceptions are the
> best way to handle this.
> I think that runtime exceptions should be thrown
> when the user does not have a realistic chance to correct the underlying
> cause of the error. Checked exceptions should be used when there is a
> chance to correct the underlying cause of the exception, or to take some
> other form of action to allow the operation to continue. I also believe
> that exceptions are bounded by the architecture layer in which they
> occur. For example, I was discussing this with Steve, and if we look at
> the data access layer then almost all operations on the dump can throw an
> IOException. I don't believe that this should be bubbled up through the
> layers, but handled by the data access layer. This could take the form of
> wrapping the original exception in a runtime or other checked exception
> and then re-throwing that. One advantage of this approach is that it
> allows additional information to be conveyed with the exception, i.e. not
> just that you tried to read beyond the end of the file, but details of
> the context in which the operation was being carried out.
>
> I also think that this is an issue that cannot be resolved through API
> design alone, but is part of the reference implementation that we
> publish. I would allow Kato to throw almost exclusively runtime
> exceptions, but in the RI implement an error handling strategy, which
> could be as simple as logging the message to a console window, or more
> sophisticated, such as interpreting the error and presenting an option to
> the user to fix it. That doesn't mean to say that classes cannot catch
> these runtime exceptions, it just means that they will only do so if they
> intend to either add more information or transform the error again. I
> think that this will make the code a lot easier to read, and is a better
> choice than either checked exceptions or error number checking.
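[A minimal sketch of the wrap-and-rethrow approach described above. All class and method names here are hypothetical illustrations, not part of any proposed Kato API; the point is that the data access layer catches the IOException and re-throws it with the operational context attached.]

```java
import java.io.IOException;

public class DumpReaderSketch {

    /** Unchecked exception carrying the context of the failed dump operation. */
    static class DumpAccessException extends RuntimeException {
        DumpAccessException(String operation, long offset, IOException cause) {
            super(String.format("%s failed at offset 0x%x", operation, offset), cause);
        }
    }

    /** Simulated low-level read that fails beyond the end of the dump. */
    static byte[] rawRead(long offset, int length) throws IOException {
        if (offset > 0x1000L) {
            throw new IOException("read past end of file");
        }
        return new byte[length];
    }

    /** Data access layer: catches the IOException and re-throws with context. */
    static byte[] readThreadRecord(long offset) {
        try {
            return rawRead(offset, 64);
        } catch (IOException e) {
            throw new DumpAccessException("reading thread record", offset, e);
        }
    }

    public static void main(String[] args) {
        try {
            readThreadRecord(0x2000L);
        } catch (DumpAccessException e) {
            // The caller sees the context, not just the raw I/O failure;
            // the original cause is still available for diagnostics.
            System.out.println(e.getMessage());
            System.out.println(e.getCause().getMessage());
        }
    }
}
```

[Layers above the data access layer would then only catch DumpAccessException when they can add information or act on it, as argued above.]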
>
> Section 4.0 : For the data access models I think that the following
> should also be added for consideration in the layer that directly
> accesses the dump
>
>    - Established patterns for iterating over lists
>    - Marking reference points within the dump to provide jump
>    capabilities to known sections and areas
>    - Most recently used caches
>    - Most frequently used caches
>    - Intelligent block reading of data
>
>
> 2009/2/17 Carmine Cristallo
>
> The main purpose of this email is to outline some of the design
>> considerations to be taken into account during the development of the
>> API, and to stimulate the discussion about them.
>>
>> Some of the following sections will be better understood after a quick
>> look at the IBM DTFJ API, which will constitute the seed of the IBM
>> contribution to the Apache Kato project. Such sections will be clearly
>> marked.
>>
>> 1 General Principles
>>
>> The following principles could be used as overall quality criteria for
>> the design of the API.
>>
>> 1.1 User Story driven: the design of the API will be driven by user
>> stories. As a general statement, no information should be exposed if
>> there is no user story justifying the need for it.
>>
>> 1.2 Consumability: users of the API should be able to easily
>> understand how to use the API from the API description and from common
>> repeated design patterns. The amount of boilerplate code necessary to
>> get at any useful information needs to be monitored. The user stories
>> supporting the API will help keep the boilerplate down, but it's
>> important to state that the more understandable the API is, the easier
>> its adoption will be.
>>
>> 1.3 Consistency: common guidelines and patterns should be followed when
>> designing the API. For example, all the calls returning multiple
>> values must have a common way of doing it (i.e.
>> Lists, Iterators or arrays).
>>
>> 1.4 Common tasks should be easy to implement: care should be taken to
>> design the API in such a way that common user stories have a simple
>> implementation scenario. For example, in the DTFJ API, in most cases
>> there will be only one JavaRuntime per Image, and there should be a
>> more direct way of getting it than iterating through the AddressSpaces
>> and Processes.
>>
>> 1.5 Backward compatibility: client code written against a given
>> release of the API should remain source code compatible with any
>> future release of the API.
>>
>>
>> 2 Exception handling model
>>
>> In the domain of postmortem analysis, the following types of
>> exceptions can occur:
>>
>> 2.1 File Access Exceptions: reported when an error occurs opening,
>> seeking, reading, writing or closing any of the files which constitute
>> the dump, or which are generated as a result of processing the dump.
>> Applications are expected to respond to this type of exception by
>> informing their users that they should correct the problem with their
>> file system (e.g. getting the file name right).
>>
>> 2.2 File Format Exceptions: reported when data is correctly read from
>> an encoded file, but that data is not compatible with the encoding
>> rules or syntax. Applications are expected to respond to these
>> exceptions by informing their users that the file is corrupt and
>> further processing is impossible.
>>
>> 2.3 Operation Not Supported Exceptions: the type of dump file being
>> analysed does not support the invoked API.
>>
>> 2.4 Memory Access Exceptions: thrown when the address and length of a
>> data read request do not lie entirely within one of the valid
>> address ranges.
>>
>> 2.5 Corrupt Data Exceptions: reported when data is correctly read from
>> a file but has a value incompatible with its nature.
>> Corruption is to be considered a normal event in processing postmortem
>> dumps, therefore such exceptions are not to be treated as error
>> conditions.
>>
>> 2.6 Data Not Available Exceptions: reported when the requested
>> information is not contained within the specific dump being analysed.
>> As in the previous case, this is not to be seen as an error
>> condition.
>>
>> Exception handling in DTFJ is a major source of struggle. Almost every
>> call to the DTFJ API throws one of the last two exception types, or
>> both. There's no question about the fact that such events are
>> definitely better handled with checked exceptions rather than with
>> unchecked ones. On the other hand, the fact that some objects
>> retrieved from a dump can be corrupted or not available is a
>> condition intrinsic to every API call. Handling such conditions with
>> checked exceptions would put the burden of handling them onto the
>> client code, leading to almost every API call being wrapped in a
>> try/catch block. As a side effect, it has been noted from past
>> experience that in such situations client code tends to take a form
>> like:
>>
>> public void clientMethod1() {
>>     try {
>>         katoObject1.methodA();
>>         katoObject2.methodB();
>>         katoObject3.methodC();
>>     } catch (KatoException ke) {
>>         ...
>>     }
>> }
>>
>> rather than:
>>
>> public void clientMethod2() {
>>     try {
>>         katoObject1.methodA();
>>     } catch (KatoException ke1) {
>>         ...
>>     }
>>     try {
>>         katoObject2.methodB();
>>     } catch (KatoException ke2) {
>>         ...
>>     }
>>     try {
>>         katoObject3.methodC();
>>     } catch (KatoException ke3) {
>>         ...
>>     }
>> }
>>
>> and this can lead to poor debuggability of the client code.
>>
>> It is also true that in very few cases will the client code need to
>> implement different behaviours for the Data Unavailable and the
>> Corrupt Data cases: most of the time they will be treated in the same
>> way, and the corrupt data, when available, will just be ignored.
>> It would make sense, therefore, to group the two cases under a single
>> name: let's define "Invalid Data" as a situation where either the data
>> is not available or it is corrupted. So the key questions become:
>> "Does it make sense to think of a way to handle the Invalid Data case
>> without the use of exceptions? If yes, how?"
>>
>> One possible solution to this problem could be to reserve the null
>> return value, in every API call, for Invalid Data: an API call returns
>> null if and only if the data being requested is either unavailable or
>> corrupted. To discriminate between the two cases, the client code could
>> call a specific errno-like API which returns the corrupt data of the
>> latest API call, or null if the data was unavailable. Most of the time,
>> the client code would therefore look similar to:
>>
>> public void clientMethod1() {
>>     KatoThing value;
>>     value = katoObject1.methodA();
>>     if (value == null) {
>>         // handle the invalid data
>>     }
>> }
>>
>> although, in a small number of cases, the code might be more similar to
>> this:
>>
>> public void clientMethod2() {
>>     KatoThing value;
>>     value = katoObject1.methodA();
>>     if (value == null) {
>>         CorruptData cd = KatoHelper.getLastCorruptData();
>>         if (cd == null) {
>>             // handle the data unavailable case
>>         } else {
>>             // handle the corrupt data case
>>         }
>>     }
>> }
>>
>> As a side effect, this solution would imply that primitive types
>> cannot be used as return values; their corresponding object
>> wrappers would need to be used instead.
>>
>>
>> 3.0 Optionality
>>
>> The Kato API will be designed to support different types of dump
>> formats. Examples of dump formats are HPROF for SUN VMs, and system
>> dumps, Javacores and PHD for IBM VMs, etc.
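[Since different formats support different subsets of the API, one way such per-format optionality could surface in client code is a capability-style lookup. This is purely an illustrative sketch; the interface and class names are hypothetical, and java.util.Optional is used only for convenience in the example.]

```java
import java.util.Optional;

public class ViewSketch {

    interface ProcessView { String commandLine(); }
    interface JavaHeapView { long objectCount(); }

    /** A dump advertises which views its format can support. */
    interface Dump {
        <T> Optional<T> getView(Class<T> viewType);
    }

    /** A PHD-like dump: it supports a heap view but no process view. */
    static class HeapOnlyDump implements Dump {
        public <T> Optional<T> getView(Class<T> viewType) {
            if (viewType == JavaHeapView.class) {
                JavaHeapView v = () -> 42_000L;   // dummy heap data
                return Optional.of(viewType.cast(v));
            }
            return Optional.empty();              // view not supported
        }
    }

    public static void main(String[] args) {
        Dump dump = new HeapOnlyDump();
        // Supported view: present, usable without try/catch.
        dump.getView(JavaHeapView.class)
            .ifPresent(h -> System.out.println("objects: " + h.objectCount()));
        // Unsupported view: absent, no Operation Not Supported Exception needed.
        System.out.println("process view? "
            + dump.getView(ProcessView.class).isPresent());
    }
}
```

[Whether unsupported views are reported by an absent value, as here, or by Operation Not Supported Exceptions is exactly the kind of design choice discussed in this thread.]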
>>
>> Different dump formats expose different information, so if we design
>> the API as a monolithic block, there will be cases in which some parts
>> of it - more or less large, depending on the dump format - may not be
>> implemented.
>> Although the "Operation Not Supported Exception" case described above
>> does provide some support for these cases, we certainly need a better
>> mechanism to support optionality.
>> One possible solution lies in the consideration that we don't really
>> need to design for optionality at method level: normally, dump
>> formats tend to focus on one or more "views" of the process that
>> generated them. Examples of these views are:
>>
>> 3.1 Process view: formats that support this view expose information
>> like command line, environment, native threads and locks, stack
>> frames, loaded libraries, memory layout, symbols, etc. System dumps
>> normally expose nearly all of this data.
>>
>> 3.2 Java Runtime view: formats supporting this view expose information
>> like VM args, Java Threads, Java Monitors, classloaders, heaps, heap
>> roots, compiled methods, etc. HPROF is an example of a format that
>> supports this view.
>>
>> 3.3 Java Heap view: formats supporting this view expose Java classes,
>> objects and their relationships. IBM PHD is an example of a dump
>> format supporting this view, as is SUN's HPROF.
>>
>> The API should be designed in order for a given file format to support
>> one or more of these views, as well as to allow new views to be
>> plugged in. Inside each view, it could be reasonable to provide a
>> further level of granularity involving Operation Not Supported
>> Exceptions.
>>
>>
>> 4 Data access models
>>
>> In designing the data access models, care should be taken over the
>> fact that the API may have to deal with dumps whose size is vastly
>> greater than available memory.
>> Therefore - and this holds especially
>> for Java objects in the heap view - creating all the objects in memory
>> at the moment the dump is opened may not be a good idea.
>> In this context, user stories will dictate the way data is accessed
>> from the dump. If it turns out that heap browsing starting from
>> the roots, or dominator tree browsing, will be major use cases, for
>> example, it makes sense to think about loading the children of a node
>> object lazily at the moment the parent object is first displayed, and
>> not any earlier. A first summary of the ways of accessing objects
>> could be the following:
>>
>> 4.1 retrieve the Java object located at a given address in memory (if
>> memory is available in the dump, i.e. if the dump supports a Process
>> view);
>> 4.2 retrieve all the heap root objects;
>> 4.3 for any given object, retrieve the objects referenced by it;
>> 4.4 retrieve all objects satisfying a given query (e.g. all objects of
>> class java.lang.String, or all objects of any class having a field
>> named "value"). This will involve having a query language of some form
>> built into the API.
>>
>> (more to figure out...)
>>
>>
>> Please feel free to share your comments about all the items above, and
>> to add more....
>>
>>
>>
>> Carmine
>>
>
>
> --
> Regards
>
> Adam Pilkington
>

-- 
Regards

Adam Pilkington