incubator-kato-spec mailing list archives

From Stuart Monteith <stuk...@stoo.me.uk>
Subject Re: Kato API javadoc
Date Mon, 06 Apr 2009 22:16:48 GMT
Hi,
	The TCK harness that's checked in finds all of the necessary classes,  
executes all the "configure*" methods it finds, then calls a method  
that is configured to get the JVM to generate a system dump, javacore,  
or whatever. In one use case we get the JVM to SIGSEGV. Then the  
harness executes another JVM that runs the JUnit test cases for the  
same set of classes, making available an instance of the Image and  
JavaRuntime for the dump that was generated by the previous JVM. The  
trick is really down to generating the dump programmatically -  
something I'd like to do with hprof at some point.
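To make the discovery step concrete, here is a minimal sketch of how the harness's reflective "configure*" phase could work. This is a hypothetical illustration, not the checked-in harness code; the class name ConfigureRunner and the return value are my own invention.

```java
import java.lang.reflect.Method;

// Hypothetical sketch (not the actual harness code): find and invoke
// every public no-arg method whose name starts with "configure" on a
// test instance, before the harness triggers the dump.
class ConfigureRunner {
    public static int runConfigureMethods(Object testInstance) throws Exception {
        int invoked = 0;
        for (Method m : testInstance.getClass().getMethods()) {
            if (m.getName().startsWith("configure")
                    && m.getParameterTypes().length == 0) {
                m.setAccessible(true); // tolerate non-public declaring classes
                m.invoke(testInstance);
                invoked++;
            }
        }
        return invoked;
    }
}
```

After this phase completes, the harness would trigger the dump and hand the resulting artifact to a second JVM for the JUnit run.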

I assume -target 1.4 would just complain. However, a build step to  
make two classes from one is interesting - we would just need to  
decide on a mechanism.

Regards,
	Stuart


On 6 Apr 2009, at 21:34, Nicholas Sterling wrote:

> Hi, Stuart.
>
> Interesting.  Yes, that is convenient being able to keep it all  
> together that way -- it would be a shame to lose that.  Hmmm.  I  
> suppose you could have some build step that rips out testXXX methods  
> and compiles for the target VM??
>
> I don't think "-target 1.4" would help (?).
>
> In any case, as you say, it's not the end of the world.
>
> Nicholas
>
> p.s.  Will coordination of the two VMs complicate things?  I suppose  
> you just have to run the target to completion before starting the  
> client?
>
>
> Stuart Monteith wrote:
>> Hi Nicholas,
>>   You are definitely on the same page. One thing we need to rethink  
>> is the TCK.
>>
>> The TCK tests each look something like this:
>>
>> /**
>>  * Testcase to find a Java thread called "quack".
>>  */
>> public class DuckTests extends TestCase {
>>    String threadName = "quack";
>>
>>    public void configure() {
>>       Thread thread = new Thread(runnable, threadName);
>>       thread.start();
>>    }
>>
>>    public void testQuack() {
>>       JavaRuntime jr = getJavaRuntime();
>>
>>       Iterator threads = jr.getThreads();
>>       while (threads.hasNext()) {
>>          JavaThread next = (JavaThread) threads.next();
>>
>>          if (threadName.equals(next.getName())) {
>>             // success
>>             return;
>>          }
>>       }
>>
>>       fail("Couldn't find thread \"" + threadName + "\"");
>>    }
>> }
>>
>> The TCK would first run the configure() method, then cause the  
>> JVM to generate a dump that can be read through our API. It would  
>> then run jUnit and execute the testQuack() method. This way we  
>> have a nice, self-contained test that includes the setup of a JVM  
>> and the test of the API to read the resultant dump in the same  
>> place. You'll notice that the constant for the name of the thread  
>> is shared by the JVM dumping and the JVM that would be running  
>> the testcases, which helps consistency enormously.
>>
>> It makes sense to do it this way, but only when the API is at the  
>> same level as the JVM that is being configured to generate a dump.
>>
>> If Kato is to support 1.4.2, the TCK will need to support it, but  
>> that is something that will just have to be fixed as the API evolves.
>>
>> I don't believe it would be an enormous problem.
>> For example, we could have a class to configure and another to run  
>> the tests, each compiled to different levels:
>>
>> /**
>>  * Configuration for DuckTests
>>  */
>> public class DuckConfiguration extends TestCase {
>>    String threadName = "quack";
>>
>>    public void configure() {
>>       Thread thread = new Thread(runnable, threadName);
>>       thread.start();
>>    }
>> }
>>
>> /**
>>  * Testcase to find Duck thread with API.
>>  */
>> public class DuckTests extends DuckConfiguration {
>>
>>    public void testQuack() {
>>       JavaRuntime jr = getJavaRuntime();
>>
>>       Iterator<JavaThread> threads = jr.getThreads().iterator();
>>       while (threads.hasNext()) {
>>          JavaThread next = threads.next();
>>
>>          if (threadName.equals(next.getName())) {
>>             // success
>>             return;
>>          }
>>       }
>>
>>       fail("Couldn't find thread \"" + threadName + "\"");
>>    }
>> }
>>
>> So, in summary, my view is that there is nothing stopping us from  
>> using Java 5.0 language features; we just need to structure the  
>> TCK to take this into account.
>>
>> Regards,
>>   Stuart
>>
>>
>> Nicholas Sterling wrote:
>>>
>>>
>>> Steve Poole wrote:
>>>>> Also, I'm seeing methods that return Iterator with no type (e.g.  
>>>>> in
>>>>> JavaMethod).  Is that just a temporary placeholder which will  
>>>>> ultimately get
>>>>> a type?
>>>>>
>>>>
>>>>
>>>> That's an interesting question.   The reason for there being no  
>>>> type info is
>>>> that the API was designed to compile and run on 1.4.2.
>>>> We need to decide if that still makes sense.   I know that 1.4 is  
>>>> out of
>>>> support by Sun and IBM.    What about Oracle?
>>>>
>>> Steve, I was hoping that we do *not* need to give people the  
>>> ability to write such tools in 1.4.2, but we *do* need to  give  
>>> people the ability to analyze 1.4.2 dump artifacts using a tool  
>>> written in Java 6.  Since no 1.4.2-based code would be linked  
>>> against the API, the API would be free to use generics and other  
>>> features (enums would help a lot, I suspect) that came later.   
>>> That is, a provider would have to understand 1.4.2 dump artifacts,  
>>> but it could be compiled under JDK 6.
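A minimal sketch of what that could look like, assuming hypothetical interface shapes modelled loosely on the Kato javadoc (the names JavaRuntime and JavaThread exist in the API, but the typed getThreads() signature and the FixedRuntime stub below are illustrative assumptions, not the actual API):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: the API itself is compiled under JDK 6 and can
// use generics, even though a provider behind it may be reading
// 1.4.2-era dump artifacts.  Simplified for illustration.
interface JavaThread {
    String getName();
}

interface JavaRuntime {
    List<JavaThread> getThreads(); // typed, so callers need no casts
}

// A trivial in-memory provider, just to show the caller's side.
class FixedRuntime implements JavaRuntime {
    public List<JavaThread> getThreads() {
        JavaThread quack = new JavaThread() {
            public String getName() { return "quack"; }
        };
        return Arrays.asList(quack);
    }
}
```

With signatures like these, client code gets compile-time type checking while the provider remains responsible for decoding older dump formats.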
>>>
>>> Or maybe I'm not on the same page...
>>>
>>> Nicholas
>>>
>>>>
>>>>> Nicholas
>>>>>
>>>>>
>>>>>
>>>>> Steve Poole wrote:
>>>>>
>>>>>
>>>>>> Well at last! - we actually have the API javadoc available -  
>>>>>> it's here:
>>>>>> http://hudson.zones.apache.org/hudson/view/Kato/job/kato.api-head/javadoc/
>>>>>>
>>>>>> I'm certainly not going to hold this up as the greatest  
>>>>>> javadoc in the world, but it's a good place to start.  I do  
>>>>>> feel that we have finally arrived :-)
>>>>>>
>>>>>> The API has lots of "DTFJ"ness to it that needs to go, but I'm  
>>>>>> really interested in initial reactions to the javadoc - is the  
>>>>>> form of the API what you expected?
>>>>>>
>>>>>>
>>>>>> Moving on - there is still code needed to make the API work  
>>>>>> (we need to get the hprof support working), but we can make  
>>>>>> progress in the interim.  I want to move quickly towards  
>>>>>> having a regular heartbeat where we are moving through the  
>>>>>> use cases that we have.  To do that we need to get up to  
>>>>>> speed with the API shape as it stands today.  Stuart has  
>>>>>> published some info on the API but it's not really sufficient  
>>>>>> for educational needs :-)
>>>>>>
>>>>>> Is it worth holding a conference call so that we can walk  
>>>>>> through the API to explain why it's the shape it is, or is  
>>>>>> everyone comfortable with just more doc?
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>> Steve
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>>

