To: commits@harmony.apache.org
From: xli@apache.org
Reply-To: dev@harmony.apache.org
Subject: svn commit: r606876 [1/6] - in /harmony/enhanced/drlvm/trunk/vm/gc_gen/src: common/ finalizer_weakref/ gen/ jni/ los/ mark_compact/ mark_sweep/ semi_space/ thread/ trace_forward/ utils/ verify/
Date: Wed, 26 Dec 2007 10:17:15 -0000
Message-Id: <20071226101728.A13841A9832@eris.apache.org>

Author: xli
Date: Wed Dec 26 02:17:10 2007
New Revision: 606876

URL: http://svn.apache.org/viewvc?rev=606876&view=rev
Log:
HARMONY-4325: [drlvm][gc] Tick project repository. This patch implements concurrent sweep in the concurrent GC, so that there is almost no pause in GC except for the rootset enumeration phase. The patch also implements two new concurrent GC algorithms: mostly-concurrent and on-the-fly DLG (Doligez-Leroy-Gonthier).
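Before the file lists, a note on the two write-barrier styles the new algorithms rely on. The diff below installs the real barrier variants via gc_set_barrier_function(WRITE_BARRIER_REM_OBJ_SNAPSHOT / WRITE_BARRIER_REM_OLD_VAR / WRITE_BARRIER_REM_SOURCE_OBJ); the following is only a minimal sketch of the idea, with hypothetical helper names (Object, in_concurrent_mark_phase, mutator_dirty_set_add are illustrative, not the DRLVM API):

    /* On-the-fly (snapshot) style, as in OTF_REM_OBJ_SNAPSHOT_ALGO: when a
       reference field is overwritten during concurrent marking, remember the
       OLD target, so everything reachable at mark start still gets traced. */
    void write_barrier_rem_obj_snapshot(Object** p_slot, Object* p_target)
    {
      if(in_concurrent_mark_phase()){
        Object* p_old = *p_slot;
        if(p_old) mutator_dirty_set_add(p_old);  /* markers drain this set */
      }
      *p_slot = p_target;  /* no final marking pause is needed */
    }

    /* Mostly-concurrent style, as in MOSTLY_CONCURRENT_ALGO: remember the
       MODIFIED source object instead; a short final stop-the-world pause
       rescans the remembered (dirty) objects, which is why
       gc_finish_concurrent_mark() below has a re-enumeration path. */
    void write_barrier_rem_source_obj(Object* p_src, Object** p_slot, Object* p_target)
    {
      if(in_concurrent_mark_phase())
        mutator_dirty_set_add(p_src);
      *p_slot = p_target;
    }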
Added:
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/hashcode.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace.h   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_alloc.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_alloc.h   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_chunk.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_chunk.h   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_compact.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_concurrent_gc_stats.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_fallback_mark.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_mark.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_mark_mostly_concurrent.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_mark_otf_concurrent.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_mark_sweep.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_mark_sweep.h   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_sweep.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_sweep_concurrent.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_verify.cpp   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/wspace_verify.h   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/semi_space/
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/semi_space/sspace.h   (with props)
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/trace_forward/sspace_temp.cpp   (with props)
Removed:
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_alloc.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_alloc.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_chunk.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_chunk.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_fallback_mark.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_mark.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_mark_sweep.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_mark_sweep.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_sweep.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_verify.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_verify.h
Modified:
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/fix_repointed_refs.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_class.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_vm.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_platform.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_space.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/hashcode.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/mark_scan_pool.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/object_status.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/space_tuner.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/weak_roots.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/finalizer_weakref/finalizer_weakref.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/finalizer_weakref/finalizer_weakref.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/finalizer_weakref/finalizer_weakref_metadata.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/gen/gc_for_barrier.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/gen/gc_for_barrier.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/gen/gen.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/gen/gen.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/gen/gen_adapt.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/jni/java_natives.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/los/lspace_alloc_collect.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_compact/fallback_mark_scan.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_compact/mspace.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_compact/mspace_alloc.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_compact/mspace_extend_compact.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_compact/mspace_slide_compact.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_compact/space_tune_mark_scan.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/gc_ms.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/gc_ms.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_compact.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/mark_sweep/sspace_mark_concurrent.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/collector.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/collector.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/collector_alloc.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/gc_thread.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/marker.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/marker.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/mutator.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/mutator.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/thread/mutator_alloc.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/trace_forward/fspace.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/trace_forward/fspace.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/trace_forward/fspace_alloc.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/trace_forward/fspace_gen_forward_pool.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/trace_forward/fspace_nongen_forward_pool.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/utils/vector_block.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/verify/verifier_common.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/verify/verifier_common.h
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/verify/verifier_scanner.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/verify/verify_gc_effect.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/verify/verify_live_heap.cpp
    harmony/enhanced/drlvm/trunk/vm/gc_gen/src/verify/verify_live_heap.h

Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.cpp
URL:
http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.cpp?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.cpp (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.cpp Wed Dec 26 02:17:10 2007 @@ -18,9 +18,11 @@ #include "gc_common.h" #include "../gen/gen.h" #include "../mark_sweep/gc_ms.h" -#include "../mark_sweep/sspace.h" +#include "../mark_sweep/wspace.h" #include "collection_scheduler.h" #include "gc_concurrent.h" +#include "../verify/verify_live_heap.h" + static int64 time_delay_to_start_mark = 0; void collection_scheduler_initialize(GC* gc) @@ -43,10 +45,10 @@ Boolean gc_need_start_concurrent_mark(GC* gc) { - if(!USE_CONCURRENT_GC) return FALSE; + if(!USE_CONCURRENT_MARK) return FALSE; //FIXME: GEN mode also needs the support of starting mark after thread resume. #ifdef USE_MARK_SWEEP_GC - if(gc_is_concurrent_mark_phase() ) return FALSE; + if(gc_is_concurrent_mark_phase() || gc_mark_is_concurrent()) return FALSE; int64 time_current = time_now(); if( time_current - get_collection_end_time() > time_delay_to_start_mark) @@ -62,25 +64,50 @@ #endif } +Boolean gc_need_start_concurrent_sweep(GC* gc) +{ + if(!USE_CONCURRENT_SWEEP) return FALSE; + + if(gc_sweep_is_concurrent()) return FALSE; + + /*if mark is concurrent and STW GC has not started, we should start concurrent sweep*/ + if(gc_mark_is_concurrent() && !gc_is_concurrent_mark_phase(gc)) + return TRUE; + else + return FALSE; +} + +Boolean gc_need_reset_status(GC* gc) +{ + if(gc_sweep_is_concurrent() && !gc_is_concurrent_sweep_phase(gc)) + return TRUE; + else + return FALSE; +} +Boolean gc_need_prepare_rootset(GC* gc) +{ + /*TODO: support on-the-fly root set enumeration.*/ + return FALSE; +} void gc_update_collection_scheduler(GC* gc, int64 mutator_time, int64 mark_time) { - //FIXME: GEN GC should be supportted. + //FIXME: support GEN GC. #ifdef USE_MARK_SWEEP_GC Collection_Scheduler* collection_scheduler = gc->collection_scheduler; Space* space = NULL; - space = (Space*) gc_get_sspace(gc); + space = (Space*) gc_get_wspace(gc); - Space_Statistics* sspace_stat = space->space_statistic; + Space_Statistics* space_stat = space->space_statistic; unsigned int slot_index = collection_scheduler->last_slot_index_in_window; unsigned int num_slot = collection_scheduler->num_slot_in_window; - collection_scheduler->num_obj_traced_window[slot_index] = sspace_stat->num_live_obj; - collection_scheduler->size_alloced_window[slot_index] = sspace_stat->last_size_free_space; + collection_scheduler->num_obj_traced_window[slot_index] = space_stat->num_live_obj; + collection_scheduler->size_alloced_window[slot_index] = space_stat->last_size_free_space; int64 time_mutator = mutator_time; int64 time_mark = mark_time; @@ -90,7 +117,7 @@ collection_scheduler->trace_rate_window[slot_index] = time_mark == 0 ? 0 : (float)collection_scheduler->num_obj_traced_window[slot_index] / time_mark; - + collection_scheduler->num_slot_in_window = num_slot >= STATISTICS_SAMPLING_WINDOW_SIZE ? 
num_slot : (++num_slot); collection_scheduler->last_slot_index_in_window = (++slot_index)% STATISTICS_SAMPLING_WINDOW_SIZE; @@ -110,19 +137,60 @@ if(average_alloc_rate == 0 || average_trace_rate == 0){ time_delay_to_start_mark = 0; }else{ - float expected_time_alloc = sspace_stat->size_free_space / average_alloc_rate; - float expected_time_trace = sspace_stat->num_live_obj / average_trace_rate; + float time_alloc_expected = space_stat->size_free_space / average_alloc_rate; + float time_trace_expected = space_stat->num_live_obj / average_trace_rate; - if(expected_time_alloc > expected_time_trace) - collection_scheduler->time_delay_to_start_mark = (int64)((expected_time_alloc - expected_time_trace)*0.7); - else - collection_scheduler->time_delay_to_start_mark = 0; + if(time_alloc_expected > time_trace_expected){ + if(gc_concurrent_match_algorithm(OTF_REM_OBJ_SNAPSHOT_ALGO)||gc_concurrent_match_algorithm(OTF_REM_NEW_TARGET_ALGO)){ + collection_scheduler->time_delay_to_start_mark = (int64)((time_alloc_expected - time_trace_expected)*0.65); + }else if(gc_concurrent_match_algorithm(MOSTLY_CONCURRENT_ALGO)){ + collection_scheduler->time_delay_to_start_mark = (int64)(mutator_time * 0.6); + } + + }else{ + collection_scheduler->time_delay_to_start_mark = 0; + } time_delay_to_start_mark = collection_scheduler->time_delay_to_start_mark; - } + } + //[DEBUG] set to 0 for debugging. + //time_delay_to_start_mark = 0; #endif return; } + +Boolean gc_try_schedule_collection(GC* gc, unsigned int gc_cause) +{ + gc_check_concurrent_phase(gc); + + if(gc_need_prepare_rootset(gc)){ + /*TODO:Enable concurrent rootset enumeration.*/ + assert(0); + } + + if(gc_need_start_concurrent_mark(gc)){ + gc_start_concurrent_mark(gc); + return TRUE; + } + + if(gc_need_start_concurrent_sweep(gc)){ + gc->num_collections++; + gc_start_concurrent_sweep(gc); + return TRUE; + } + + if(gc_need_reset_status(gc)){ + int disable_count = hythread_reset_suspend_disable(); + gc_reset_after_concurrent_collection(gc); + vm_resume_threads_after(); + hythread_set_suspend_disable(disable_count); + } + + return FALSE; + +} + + Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/collection_scheduler.h Wed Dec 26 02:17:10 2007 @@ -41,8 +41,11 @@ void collection_scheduler_destruct(GC* gc); void gc_update_collection_scheduler(GC* gc, int64 mutator_time, int64 mark_time); +Boolean gc_try_schedule_collection(GC* gc, unsigned int gc_cause); Boolean gc_need_start_concurrent_mark(GC* gc); #endif + + Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/fix_repointed_refs.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/fix_repointed_refs.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/fix_repointed_refs.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/fix_repointed_refs.h Wed Dec 26 02:17:10 2007 @@ -32,24 +32,36 @@ Partial_Reveal_Object* p_obj = read_slot(p_ref); if(!p_obj) return; +#ifdef USE_UNIQUE_MOVE_COMPACT_GC + p_obj = obj_get_fw_in_table(p_obj); + 
assert(obj_belongs_to_gc_heap(p_obj)); + write_slot(p_ref, p_obj); + return; + +#endif + if(IS_MOVE_COMPACT){ /* This condition is removed because we do los sliding compaction at every major compaction after add los minor sweep. */ //if(obj_is_moved(p_obj)) /*Fixme: los_boundery ruined the modularity of gc_common.h*/ if(p_obj < los_boundary){ - write_slot(p_ref, obj_get_fw_in_oi(p_obj)); + p_obj = obj_get_fw_in_oi(p_obj); }else{ - *p_ref = obj_get_fw_in_table(p_obj); + p_obj = obj_get_fw_in_table(p_obj); } - }else{ + + write_slot(p_ref, p_obj); + + }else{ /* slide compact */ if(obj_is_fw_in_oi(p_obj)){ /* Condition obj_is_moved(p_obj) is for preventing mistaking previous mark bit of large obj as fw bit when fallback happens. * Because until fallback happens, perhaps the large obj hasn't been marked. So its mark bit remains as the last time. * This condition is removed because we do los sliding compaction at every major compaction after add los minor sweep. * In major collection condition obj_is_fw_in_oi(p_obj) can be omitted, * since those which can be scanned in MOS & NOS must have been set fw bit in oi. */ - assert((POINTER_SIZE_INT)obj_get_fw_in_oi(p_obj) > DUAL_MARKBITS); - write_slot(p_ref, obj_get_fw_in_oi(p_obj)); + p_obj = obj_get_fw_in_oi(p_obj); + assert(obj_belongs_to_gc_heap(p_obj)); + write_slot(p_ref, p_obj); } } Added: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.cpp URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.cpp?rev=606876&view=auto ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.cpp (added) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.cpp Wed Dec 26 02:17:10 2007 @@ -0,0 +1,304 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include "gc_space.h" + +void space_init_blocks(Blocked_Space* space) +{ + Block* blocks = (Block*)space->heap_start; + Block_Header* last_block = (Block_Header*)blocks; + unsigned int start_idx = space->first_block_idx; + for(unsigned int i=0; i < space->num_managed_blocks; i++){ + Block_Header* block = (Block_Header*)&(blocks[i]); + block_init(block); + block->block_idx = i + start_idx; + last_block->next = block; + last_block = block; + } + last_block->next = NULL; + space->blocks = blocks; + + return; +} + +void space_desturct_blocks(Blocked_Space* space) +{ + Block* blocks = (Block*)space->heap_start; + unsigned int i=0; + for(; i < space->num_managed_blocks; i++){ + Block_Header* block = (Block_Header*)&(blocks[i]); + block_destruct(block); + } +} + +void blocked_space_shrink(Blocked_Space* space, unsigned int changed_size) +{ + unsigned int block_dec_count = changed_size >> GC_BLOCK_SHIFT_COUNT; + void* new_base = (void*)&(space->blocks[space->num_managed_blocks - block_dec_count]); + + void* decommit_base = (void*)round_down_to_size((POINTER_SIZE_INT)new_base, SPACE_ALLOC_UNIT); + + assert( ((Block_Header*)decommit_base)->block_idx >= space->free_block_idx); + + void* old_end = (void*)&space->blocks[space->num_managed_blocks]; + POINTER_SIZE_INT decommit_size = (POINTER_SIZE_INT)old_end - (POINTER_SIZE_INT)decommit_base; + assert(decommit_size && !(decommit_size%GC_BLOCK_SIZE_BYTES)); + + Boolean result = vm_decommit_mem(decommit_base, decommit_size); + assert(result == TRUE); + + space->committed_heap_size = (POINTER_SIZE_INT)decommit_base - (POINTER_SIZE_INT)space->heap_start; + space->num_managed_blocks = (unsigned int)(space->committed_heap_size >> GC_BLOCK_SHIFT_COUNT); + + Block_Header* new_last_block = (Block_Header*)&space->blocks[space->num_managed_blocks - 1]; + space->ceiling_block_idx = new_last_block->block_idx; + new_last_block->next = NULL; +} + +void blocked_space_extend(Blocked_Space* space, unsigned int changed_size) +{ + unsigned int block_inc_count = changed_size >> GC_BLOCK_SHIFT_COUNT; + + void* old_base = (void*)&space->blocks[space->num_managed_blocks]; + void* commit_base = (void*)round_down_to_size((POINTER_SIZE_INT)old_base, SPACE_ALLOC_UNIT); + unsigned int block_diff_count = (unsigned int)(((POINTER_SIZE_INT)old_base - (POINTER_SIZE_INT)commit_base) >> GC_BLOCK_SHIFT_COUNT); + block_inc_count += block_diff_count; + + POINTER_SIZE_INT commit_size = block_inc_count << GC_BLOCK_SHIFT_COUNT; + void* result = vm_commit_mem(commit_base, commit_size); + assert(result == commit_base); + + void* new_end = (void*)((POINTER_SIZE_INT)commit_base + commit_size); + space->committed_heap_size = (POINTER_SIZE_INT)new_end - (POINTER_SIZE_INT)space->heap_start; + /*Fixme: For_Heap_Adjust, but need fix if static mapping.*/ + space->heap_end = new_end; + /* init the grown blocks */ + Block_Header* block = (Block_Header*)commit_base; + Block_Header* last_block = (Block_Header*)((Block*)block -1); + unsigned int start_idx = last_block->block_idx + 1; + unsigned int i; + for(i=0; block < new_end; i++){ + block_init(block); + block->block_idx = start_idx + i; + last_block->next = block; + last_block = block; + block = (Block_Header*)((Block*)block + 1); + } + last_block->next = NULL; + space->ceiling_block_idx = last_block->block_idx; + space->num_managed_blocks = (unsigned int)(space->committed_heap_size >> GC_BLOCK_SHIFT_COUNT); +} + +void blocked_space_block_iterator_init(Blocked_Space *space) +{ space->block_iterator = (Block_Header*)space->blocks; } + +void 
blocked_space_block_iterator_init_free(Blocked_Space *space) +{ space->block_iterator = (Block_Header*)&space->blocks[space->free_block_idx - space->first_block_idx]; } + +Block_Header *blocked_space_block_iterator_get(Blocked_Space *space) +{ return (Block_Header*)space->block_iterator; } + +Block_Header *blocked_space_block_iterator_next(Blocked_Space *space) +{ + Block_Header *cur_block = (Block_Header*)space->block_iterator; + + while(cur_block != NULL){ + Block_Header *next_block = cur_block->next; + + Block_Header *temp = (Block_Header*)atomic_casptr((volatile void **)&space->block_iterator, next_block, cur_block); + if(temp != cur_block){ + cur_block = (Block_Header*)space->block_iterator; + continue; + } + return cur_block; + } + /* run out space blocks */ + return NULL; +} + +/* ================================================ */ + +#ifdef USE_32BITS_HASHCODE +Partial_Reveal_Object* block_get_first_marked_object_extend(Block_Header* block, void** start_pos) +{ + Partial_Reveal_Object* cur_obj = (Partial_Reveal_Object*)block->base; + Partial_Reveal_Object* block_end = (Partial_Reveal_Object*)block->free; + + Partial_Reveal_Object* first_marked_obj = next_marked_obj_in_block(cur_obj, block_end); + if(!first_marked_obj) + return NULL; + + *start_pos = obj_end_extend(first_marked_obj); + + return first_marked_obj; +} +#endif /* #ifdef USE_32BITS_HASHCODE */ + +Partial_Reveal_Object* block_get_first_marked_object(Block_Header* block, void** start_pos) +{ + Partial_Reveal_Object* cur_obj = (Partial_Reveal_Object*)block->base; + Partial_Reveal_Object* block_end = (Partial_Reveal_Object*)block->free; + + Partial_Reveal_Object* first_marked_obj = next_marked_obj_in_block(cur_obj, block_end); + if(!first_marked_obj) + return NULL; + + *start_pos = obj_end(first_marked_obj); + + return first_marked_obj; +} + +Partial_Reveal_Object* block_get_next_marked_object(Block_Header* block, void** start_pos) +{ + Partial_Reveal_Object* cur_obj = *(Partial_Reveal_Object**)start_pos; + Partial_Reveal_Object* block_end = (Partial_Reveal_Object*)block->free; + + Partial_Reveal_Object* next_marked_obj = next_marked_obj_in_block(cur_obj, block_end); + if(!next_marked_obj) + return NULL; + + *start_pos = obj_end(next_marked_obj); + + return next_marked_obj; +} + +Partial_Reveal_Object *block_get_first_marked_obj_prefetch_next(Block_Header *block, void **start_pos) +{ + Partial_Reveal_Object *cur_obj = (Partial_Reveal_Object *)block->base; + Partial_Reveal_Object *block_end = (Partial_Reveal_Object *)block->free; + + Partial_Reveal_Object *first_marked_obj = next_marked_obj_in_block(cur_obj, block_end); + if(!first_marked_obj) + return NULL; + + Partial_Reveal_Object *next_obj = obj_end(first_marked_obj); + *start_pos = next_obj; + + if(next_obj >= block_end) + return first_marked_obj; + + Partial_Reveal_Object *next_marked_obj = next_marked_obj_in_block(next_obj, block_end); + + if(next_marked_obj){ + if(next_marked_obj != next_obj) + obj_set_prefetched_next_pointer(next_obj, next_marked_obj); + } else { + obj_set_prefetched_next_pointer(next_obj, 0); + } + return first_marked_obj; +} + +Partial_Reveal_Object *block_get_first_marked_obj_after_prefetch(Block_Header *block, void **start_pos) +{ +#ifdef USE_32BITS_HASHCODE + return block_get_first_marked_object_extend(block, start_pos); +#else + return block_get_first_marked_object(block, start_pos); +#endif +} + +Partial_Reveal_Object *block_get_next_marked_obj_prefetch_next(Block_Header *block, void **start_pos) +{ + Partial_Reveal_Object *cur_obj = 
*(Partial_Reveal_Object **)start_pos; + Partial_Reveal_Object *block_end = (Partial_Reveal_Object *)block->free; + + if(cur_obj >= block_end) + return NULL; + + Partial_Reveal_Object *cur_marked_obj; + + if(obj_is_marked_in_vt(cur_obj)) + cur_marked_obj = cur_obj; + else + cur_marked_obj = (Partial_Reveal_Object *)obj_get_prefetched_next_pointer(cur_obj); + + if(!cur_marked_obj) + return NULL; + + Partial_Reveal_Object *next_obj = obj_end(cur_marked_obj); + *start_pos = next_obj; + + if(next_obj >= block_end) + return cur_marked_obj; + + Partial_Reveal_Object *next_marked_obj = next_marked_obj_in_block(next_obj, block_end); + + if(next_marked_obj){ + if(next_marked_obj != next_obj) + obj_set_prefetched_next_pointer(next_obj, next_marked_obj); + } else { + obj_set_prefetched_next_pointer(next_obj, 0); + } + + return cur_marked_obj; +} + +Partial_Reveal_Object *block_get_next_marked_obj_after_prefetch(Block_Header *block, void **start_pos) +{ + Partial_Reveal_Object *cur_obj = (Partial_Reveal_Object *)(*start_pos); + Partial_Reveal_Object *block_end = (Partial_Reveal_Object *)block->free; + + if(cur_obj >= block_end) + return NULL; + + Partial_Reveal_Object *cur_marked_obj; + + if(obj_is_marked_in_vt(cur_obj)) + cur_marked_obj = cur_obj; + else if (obj_is_fw_in_oi(cur_obj)) + /* Why we need this obj_is_fw_in_oi(cur_obj) check: it's because of one source block with two dest blocks. + In that case, the second half of the block might have been copied by another thread, so the live objects' + mark bits are cleared. When the thread for the first half reaches the first object of the second half, + it finds there is no mark bit in the vt. But it still wants to get the forward pointer of this object, + and it figures out that the target address (forward pointer) is in a different dest block from the current one; + so it knows it has finished its part of the source block, and will get the next source block. + Without this check, it would read the prefetch pointer from the oi and find the forward pointer (with fwd_bit set). + That is not the next live object, so it would be wrong.
+ + Now I simply let it return NULL when it finds !obj_is_marked_in_vt(cur_obj) but obj_is_fw_in_oi(cur_obj). + This is logically the same but clearer. + Change from original code: + if(obj_is_marked_in_vt(cur_obj) || obj_is_fw_in_oi(cur_obj) ) + cur_marked_obj = cur_obj; + else + cur_marked_obj = obj_get_prefetched_next_pointer(cur_obj); + + To current code: + if(obj_is_marked_in_vt(cur_obj) ) + cur_marked_obj = cur_obj; + else if (obj_is_fw_in_oi(cur_obj) ) + return NULL; + else + cur_marked_obj = obj_get_prefetched_next_pointer(cur_obj); + */ + return NULL; + else + cur_marked_obj = obj_get_prefetched_next_pointer(cur_obj); + + if(!cur_marked_obj) + return NULL; + +#ifdef USE_32BITS_HASHCODE + Partial_Reveal_Object *next_obj = obj_end_extend(cur_marked_obj); +#else + Partial_Reveal_Object *next_obj = obj_end(cur_marked_obj); +#endif + + *start_pos = next_obj; + + return cur_marked_obj; +} Propchange: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.cpp ------------------------------------------------------------------------------ svn:eol-style = native Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_block.h Wed Dec 26 02:17:10 2007 @@ -47,14 +47,23 @@ void* ceiling; void* new_free; /* used only during compaction */ unsigned int block_idx; +#ifdef USE_UNIQUE_MOVE_COMPACT_GC + unsigned int num_multi_block; /*number of blocks in a large block*/ +#endif volatile unsigned int status; + + /* the following three fields are used only in parallel sliding compaction */ volatile unsigned int dest_counter; + Partial_Reveal_Object* src; + Partial_Reveal_Object* next_src; + #ifdef USE_32BITS_HASHCODE Hashcode_Buf* hashcode_buf; /*hash code entry list*/ #endif - Partial_Reveal_Object* src; - Partial_Reveal_Object* next_src; Block_Header* next; +#ifdef USE_UNIQUE_MOVE_COMPACT_GC + Block_Header* next_large_block; /*used to link free super-large blocks in gc_mc*/ +#endif POINTER_SIZE_INT table[1]; /* entry num == OFFSET_TABLE_SIZE_WORDS */ }Block_Header; @@ -95,12 +104,18 @@ #define ADDRESS_OFFSET_TO_BLOCK_HEADER(addr) ((unsigned int)((POINTER_SIZE_INT)addr&GC_BLOCK_LOW_MASK)) #define ADDRESS_OFFSET_IN_BLOCK_BODY(addr) ((unsigned int)(ADDRESS_OFFSET_TO_BLOCK_HEADER(addr)- GC_BLOCK_HEADER_SIZE_BYTES)) +#ifdef USE_UNIQUE_MOVE_COMPACT_GC +#define NUM_BLOCKS_PER_LARGE_OBJECT(size) (((size)+GC_BLOCK_HEADER_SIZE_BYTES+ GC_BLOCK_SIZE_BYTES-1)/GC_BLOCK_SIZE_BYTES) +#endif inline void block_init(Block_Header* block) { block->free = (void*)((POINTER_SIZE_INT)block + GC_BLOCK_HEADER_SIZE_BYTES); block->ceiling = (void*)((POINTER_SIZE_INT)block + GC_BLOCK_SIZE_BYTES); block->base = block->free; block->new_free = block->free; +#ifdef USE_UNIQUE_MOVE_COMPACT_GC + block->num_multi_block = 0; +#endif block->status = BLOCK_FREE; block->dest_counter = 0; block->src = NULL; @@ -139,20 +154,46 @@ } #endif -inline void obj_set_prefetched_next_pointer(Partial_Reveal_Object* obj, Partial_Reveal_Object* raw_prefetched_next){ - /*Fixme: em64t: This may be not necessary!*/ - if(raw_prefetched_next == 0){ - *((POINTER_SIZE_INT*)obj + 1) = 0; - return; - } - REF ref = obj_ptr_to_ref(raw_prefetched_next); - *( (REF*)((POINTER_SIZE_INT*)obj + 1) ) = ref; -} - -inline Partial_Reveal_Object* obj_get_prefetched_next_pointer(Partial_Reveal_Object* obj){ - /*Fixme: em64t: This may be not necessary!*/ - assert(obj); - 
return read_slot( (REF*)((POINTER_SIZE_INT*)obj + 1) ); +/*FIXME: We use a native pointer and put it into oi, assuming VT and OI take two POINTER_SIZE_INTs. + This is untrue if: + 1. compressed_ref && compressed_vt (vt and oi: 32bit. this can be true when heap<4G on a 64bit machine) + Fortunately, the current real design gives both vt (padded) and oi 64 bit even in this case. + + This is true on a 64bit machine if either vt or oi is not compressed: + 2. compressed_ref && ! compressed_vt (vt: 64bit, oi: 64bit) + 3. !compressed_ref && ! compressed_vt (vt: 64bit, oi: 64bit) + 4. !compressed_ref && compressed_vt (with padded vt) (vt: 64bit, oi: 64bit) + + When compressed_ref is true, REF is 32bit, which doesn't work in case 2. + So we always use native pointers for both. + If the case 1 design is changed in the future to use 32bit for both, we need to reconsider. + + Using REF here has another implication: since it's a random number, it can have 1s in its LSBs, which might be mistaken for FWD_BIT or MARK_BIT. + We need to ensure that, if we use REF, we never check this prefetch pointer for FWD_BIT or MARK_BIT. + +*/ + +inline void obj_set_prefetched_next_pointer(Partial_Reveal_Object* p_obj, Partial_Reveal_Object* raw_prefetched_next) +{ + Partial_Reveal_Object** p_ref = (Partial_Reveal_Object**)p_obj + 1; + *p_ref = raw_prefetched_next; + +/* see comments above. + REF* p_ref = (REF*)p_obj + 1; + write_slot(p_ref, raw_prefetched_next); +*/ +} + +inline Partial_Reveal_Object* obj_get_prefetched_next_pointer(Partial_Reveal_Object* p_obj) +{ + assert(p_obj); + Partial_Reveal_Object** p_ref = (Partial_Reveal_Object**)p_obj + 1; + return *p_ref; + +/* see comments above. + REF* p_ref = (REF*)p_obj + 1; + return read_slot(p_ref); +*/ } inline Partial_Reveal_Object *next_marked_obj_in_block(Partial_Reveal_Object *cur_obj, Partial_Reveal_Object *block_end) @@ -166,158 +207,13 @@ return NULL; } -#ifdef USE_32BITS_HASHCODE -inline Partial_Reveal_Object* block_get_first_marked_object_extend(Block_Header* block, void** start_pos) -{ - Partial_Reveal_Object* cur_obj = (Partial_Reveal_Object*)block->base; - Partial_Reveal_Object* block_end = (Partial_Reveal_Object*)block->free; - - Partial_Reveal_Object* first_marked_obj = next_marked_obj_in_block(cur_obj, block_end); - if(!first_marked_obj) - return NULL; - - *start_pos = obj_end_extend(first_marked_obj); - - return first_marked_obj; -} -#endif - -inline Partial_Reveal_Object* block_get_first_marked_object(Block_Header* block, void** start_pos) -{ - Partial_Reveal_Object* cur_obj = (Partial_Reveal_Object*)block->base; - Partial_Reveal_Object* block_end = (Partial_Reveal_Object*)block->free; - - Partial_Reveal_Object* first_marked_obj = next_marked_obj_in_block(cur_obj, block_end); - if(!first_marked_obj) - return NULL; - - *start_pos = obj_end(first_marked_obj); - - return first_marked_obj; -} - -inline Partial_Reveal_Object* block_get_next_marked_object(Block_Header* block, void** start_pos) -{ - Partial_Reveal_Object* cur_obj = *(Partial_Reveal_Object**)start_pos; - Partial_Reveal_Object* block_end = (Partial_Reveal_Object*)block->free; - - Partial_Reveal_Object* next_marked_obj = next_marked_obj_in_block(cur_obj, block_end); - if(!next_marked_obj) - return NULL; - - *start_pos = obj_end(next_marked_obj); - - return next_marked_obj; -} - -inline Partial_Reveal_Object *block_get_first_marked_obj_prefetch_next(Block_Header *block, void **start_pos) -{ - Partial_Reveal_Object *cur_obj = (Partial_Reveal_Object *)block->base; - Partial_Reveal_Object *block_end = (Partial_Reveal_Object
*)block->free; - - Partial_Reveal_Object *first_marked_obj = next_marked_obj_in_block(cur_obj, block_end); - if(!first_marked_obj) - return NULL; - - Partial_Reveal_Object *next_obj = obj_end(first_marked_obj); - *start_pos = next_obj; - - if(next_obj >= block_end) - return first_marked_obj; - - Partial_Reveal_Object *next_marked_obj = next_marked_obj_in_block(next_obj, block_end); - - if(next_marked_obj){ - if(next_marked_obj != next_obj) - obj_set_prefetched_next_pointer(next_obj, next_marked_obj); - } else { - obj_set_prefetched_next_pointer(next_obj, 0); - } - return first_marked_obj; -} - -inline Partial_Reveal_Object *block_get_first_marked_obj_after_prefetch(Block_Header *block, void **start_pos) -{ -#ifdef USE_32BITS_HASHCODE - return block_get_first_marked_object_extend(block, start_pos); -#else - return block_get_first_marked_object(block, start_pos); -#endif -} - -inline Partial_Reveal_Object *block_get_next_marked_obj_prefetch_next(Block_Header *block, void **start_pos) -{ - Partial_Reveal_Object *cur_obj = *(Partial_Reveal_Object **)start_pos; - Partial_Reveal_Object *block_end = (Partial_Reveal_Object *)block->free; - - if(cur_obj >= block_end) - return NULL; - - Partial_Reveal_Object *cur_marked_obj; - - if(obj_is_marked_in_vt(cur_obj)) - cur_marked_obj = cur_obj; - else - cur_marked_obj = (Partial_Reveal_Object *)obj_get_prefetched_next_pointer(cur_obj); - - if(!cur_marked_obj) - return NULL; - - Partial_Reveal_Object *next_obj = obj_end(cur_marked_obj); - *start_pos = next_obj; - - if(next_obj >= block_end) - return cur_marked_obj; - - Partial_Reveal_Object *next_marked_obj = next_marked_obj_in_block(next_obj, block_end); - - if(next_marked_obj){ - if(next_marked_obj != next_obj) - obj_set_prefetched_next_pointer(next_obj, next_marked_obj); - } else { - obj_set_prefetched_next_pointer(next_obj, 0); - } - - return cur_marked_obj; -} - -inline Partial_Reveal_Object *block_get_next_marked_obj_after_prefetch(Block_Header *block, void **start_pos) -{ - Partial_Reveal_Object *cur_obj = (Partial_Reveal_Object *)(*start_pos); - Partial_Reveal_Object *block_end = (Partial_Reveal_Object *)block->free; - - if(cur_obj >= block_end) - return NULL; - - Partial_Reveal_Object *cur_marked_obj; - - if(obj_is_marked_in_vt(cur_obj) || obj_is_fw_in_oi(cur_obj)) - cur_marked_obj = cur_obj; - else - cur_marked_obj = obj_get_prefetched_next_pointer(cur_obj); - - if(!cur_marked_obj) - return NULL; - -#ifdef USE_32BITS_HASHCODE - Partial_Reveal_Object *next_obj = obj_end_extend(cur_marked_obj); -#else - Partial_Reveal_Object *next_obj = obj_end(cur_marked_obj); -#endif - - *start_pos = next_obj; - - return cur_marked_obj; -} - -inline REF obj_get_fw_in_table(Partial_Reveal_Object *p_obj) +inline Partial_Reveal_Object* obj_get_fw_in_table(Partial_Reveal_Object *p_obj) { /* only for inter-sector compaction */ unsigned int index = OBJECT_INDEX_TO_OFFSET_TABLE(p_obj); Block_Header *curr_block = GC_BLOCK_HEADER(p_obj); Partial_Reveal_Object* new_addr = (Partial_Reveal_Object *)(((POINTER_SIZE_INT)p_obj) - curr_block->table[index]); - REF new_ref = obj_ptr_to_ref(new_addr); - return new_ref; + return new_addr; } inline void block_clear_table(Block_Header* block) @@ -328,6 +224,19 @@ } #ifdef USE_32BITS_HASHCODE +Partial_Reveal_Object* block_get_first_marked_object_extend(Block_Header* block, void** start_pos); +#endif + +Partial_Reveal_Object* block_get_first_marked_object(Block_Header* block, void** start_pos); +Partial_Reveal_Object* block_get_next_marked_object(Block_Header* block, void** start_pos); 
+Partial_Reveal_Object *block_get_first_marked_obj_prefetch_next(Block_Header *block, void **start_pos); +Partial_Reveal_Object *block_get_first_marked_obj_after_prefetch(Block_Header *block, void **start_pos); +Partial_Reveal_Object *block_get_next_marked_obj_prefetch_next(Block_Header *block, void **start_pos); +Partial_Reveal_Object *block_get_next_marked_obj_after_prefetch(Block_Header *block, void **start_pos); + + +/* <-- blocked_space hashcode_buf ops */ +#ifdef USE_32BITS_HASHCODE inline Hashcode_Buf* block_set_hashcode_buf(Block_Header *block, Hashcode_Buf* new_hashcode_buf) { Hashcode_Buf* old_hashcode_buf = block->hashcode_buf; @@ -351,9 +260,13 @@ Hashcode_Buf* hashcode_buf = block_get_hashcode_buf(GC_BLOCK_HEADER(p_obj)); return hashcode_buf_lookup(p_obj,hashcode_buf); } -#endif + +#endif //#ifdef USE_32BITS_HASHCODE +/* blocked_space hashcode_buf ops --> */ #endif //#ifndef _BLOCK_H_ + + Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.cpp URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.cpp?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.cpp (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.cpp Wed Dec 26 02:17:10 2007 @@ -43,8 +43,6 @@ extern POINTER_SIZE_INT INIT_LOS_SIZE; extern Boolean FORCE_FULL_COMPACT; -extern Boolean MINOR_ALGORITHM; -extern Boolean MAJOR_ALGORITHM; extern unsigned int NUM_MARKERS; extern unsigned int NUM_COLLECTORS; @@ -91,7 +89,7 @@ assert(property_name); char *value = get_property(property_name, VM_PROPERTIES); if (NULL == value){ - DIE2("gc.base","Warning: property value "<generate_barrier = TRUE; + } + } + + if (is_property_set("gc.concurrent_enumeration", VM_PROPERTIES) == 1){ + USE_CONCURRENT_ENUMERATION= get_boolean_property("gc.concurrent_enumeration"); + if(USE_CONCURRENT_ENUMERATION){ + USE_CONCURRENT_GC = TRUE; + gc->generate_barrier = TRUE; + } + } + + if (is_property_set("gc.concurrent_mark", VM_PROPERTIES) == 1){ + USE_CONCURRENT_MARK= get_boolean_property("gc.concurrent_mark"); + if(USE_CONCURRENT_MARK){ + USE_CONCURRENT_GC = TRUE; + gc->generate_barrier = TRUE; + } + } + + if (is_property_set("gc.concurrent_sweep", VM_PROPERTIES) == 1){ + USE_CONCURRENT_SWEEP= get_boolean_property("gc.concurrent_sweep"); + if(USE_CONCURRENT_SWEEP){ + USE_CONCURRENT_GC = TRUE; + } + } + + char* concurrent_algo = NULL; + + if (is_property_set("gc.concurrent_algorithm", VM_PROPERTIES) == 1) { + concurrent_algo = get_property("gc.concurrent_algorithm", VM_PROPERTIES); } + gc_decide_concurrent_algorithm(gc, concurrent_algo); + #if defined(ALLOC_ZEROING) && defined(ALLOC_PREFETCH) if(is_property_set("gc.prefetch",VM_PROPERTIES) ==1) { - PREFETCH_ENABLED=get_boolean_property("gc.prefetch"); + PREFETCH_ENABLED=get_boolean_property("gc.prefetch"); } if(is_property_set("gc.prefetch_distance",VM_PROPERTIES)==1) { PREFETCH_DISTANCE = get_size_property("gc.prefetch_distance"); if(!PREFETCH_ENABLED) { - WARN2("gc.prefetch_distance","Warning: Prefetch distance set with Prefetch disabled!"); - } + WARN2("gc.prefetch_distance","Warning: Prefetch distance set with Prefetch disabled!"); + } } if(is_property_set("gc.prefetch_stride",VM_PROPERTIES)==1) { @@ -319,11 +354,34 @@ void gc_assign_free_area_to_mutators(GC* gc) { -#ifndef USE_MARK_SWEEP_GC +#if !defined(USE_MARK_SWEEP_GC) && !defined(USE_UNIQUE_MOVE_COMPACT_GC) 
gc_gen_assign_free_area_to_mutators((GC_Gen*)gc); #endif } +void gc_init_collector_alloc(GC* gc, Collector* collector) +{ +#ifndef USE_MARK_SWEEP_GC + gc_gen_init_collector_alloc((GC_Gen*)gc, collector); +#else + gc_init_collector_free_chunk_list(collector); +#endif +} + +void gc_reset_collector_alloc(GC* gc, Collector* collector) +{ +#if !defined(USE_MARK_SWEEP_GC) && !defined(USE_UNIQUE_MOVE_COMPACT_GC) + gc_gen_reset_collector_alloc((GC_Gen*)gc, collector); +#endif +} + +void gc_destruct_collector_alloc(GC* gc, Collector* collector) +{ +#ifndef USE_MARK_SWEEP_GC + gc_gen_destruct_collector_alloc((GC_Gen*)gc, collector); +#endif +} + void gc_copy_interior_pointer_table_to_rootset(); /*used for computing collection time and mutator time*/ @@ -333,6 +391,27 @@ int64 get_collection_end_time() { return collection_end_time; } +void gc_prepare_rootset(GC* gc) +{ + if(!USE_CONCURRENT_GC){ + gc_metadata_verify(gc, TRUE); +#ifndef BUILD_IN_REFERENT + gc_finref_metadata_verify((GC*)gc, TRUE); +#endif + } + + /* Stop the threads and collect the roots. */ + lock(gc->enumerate_rootset_lock); + INFO2("gc.process", "GC: stop the threads and enumerate rootset ...\n"); + gc_clear_rootset(gc); + gc_reset_rootset(gc); + vm_enumerate_root_set_all_threads(); + gc_copy_interior_pointer_table_to_rootset(); + gc_set_rootset(gc); + unlock(gc->enumerate_rootset_lock); + +} + void gc_reclaim_heap(GC* gc, unsigned int gc_cause) { INFO2("gc.process", "\nGC: GC start ...\n"); @@ -367,28 +446,33 @@ gc_set_rootset(gc); unlock(gc->enumerate_rootset_lock); - if(USE_CONCURRENT_GC && gc_mark_is_concurrent()){ - gc_finish_concurrent_mark(gc); - } - - gc->in_collection = TRUE; - - /* this has to be done after all mutators are suspended */ - gc_reset_mutator_context(gc); + if(USE_CONCURRENT_GC && gc_sweep_is_concurrent()){ + if(gc_is_concurrent_sweep_phase()) + gc_finish_concurrent_sweep(gc); + }else{ + if(USE_CONCURRENT_GC && gc_is_concurrent_mark_phase()){ + gc_finish_concurrent_mark(gc, TRUE); + } - if(!IGNORE_FINREF ) gc_set_obj_with_fin(gc); + gc->in_collection = TRUE; + + /* this has to be done after all mutators are suspended */ + gc_reset_mutator_context(gc); + + if(!IGNORE_FINREF ) gc_set_obj_with_fin(gc); -#ifndef USE_MARK_SWEEP_GC - gc_gen_reclaim_heap((GC_Gen*)gc, collection_start_time); +#if defined(USE_MARK_SWEEP_GC) + gc_ms_reclaim_heap((GC_MS*)gc); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) + gc_mc_reclaim_heap((GC_MC*)gc); #else - gc_ms_reclaim_heap((GC_MS*)gc); + gc_gen_reclaim_heap((GC_Gen*)gc, collection_start_time); #endif + } - gc_reset_interior_pointer_table(); - collection_end_time = time_now(); -#ifndef USE_MARK_SWEEP_GC +#if !defined(USE_MARK_SWEEP_GC)&&!defined(USE_UNIQUE_MOVE_COMPACT_GC) gc_gen_collection_verbose_info((GC_Gen*)gc, collection_end_time - collection_start_time, mutator_time); gc_gen_space_verbose_info((GC_Gen*)gc); #endif @@ -397,11 +481,15 @@ int64 mark_time = 0; if(USE_CONCURRENT_GC && gc_mark_is_concurrent()){ - gc_reset_concurrent_mark(gc); mark_time = gc_get_concurrent_mark_time(gc); + gc_reset_concurrent_mark(gc); + } + + if(USE_CONCURRENT_GC && gc_sweep_is_concurrent()){ + gc_reset_concurrent_sweep(gc); } -#ifndef USE_MARK_SWEEP_GC +#if !defined(USE_MARK_SWEEP_GC)&&!defined(USE_UNIQUE_MOVE_COMPACT_GC) if(USE_CONCURRENT_GC && gc_need_start_concurrent_mark(gc)) gc_start_concurrent_mark(gc); #endif @@ -432,6 +520,10 @@ vm_reclaim_native_objs(); gc->in_collection = FALSE; + + gc_reset_collector_state(gc); + + gc_clear_dirty_set(gc); vm_resume_threads_after(); 
assert(hythread_is_suspend_enabled()); @@ -439,5 +531,7 @@ INFO2("gc.process", "GC: GC end\n"); return; } + + Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_common.h Wed Dec 26 02:17:10 2007 @@ -38,6 +38,10 @@ #include "../gen/gc_for_barrier.h" +/* +#define USE_MARK_SWEEP_GC //define it to only use Mark-Sweep GC (no NOS, no LOS). +*/ +//#define USE_UNIQUE_MOVE_COMPACT_GC //define it to only use Move-Compact GC (no NOS, no LOS). #define GC_GEN_STATS #define null 0 @@ -77,6 +81,7 @@ #define USE_32BITS_HASHCODE +/* define it to use only mark-sweep GC for entire heap management */ //#define USE_MARK_SWEEP_GC typedef void (*TaskType)(void*); @@ -84,17 +89,15 @@ enum Collection_Algorithm{ COLLECTION_ALGOR_NIL, - /*minor nongen collection*/ + MINOR_GEN_FORWARD_POOL, MINOR_NONGEN_FORWARD_POOL, - /* minor gen collection */ - MINOR_GEN_FORWARD_POOL, + MINOR_GEN_SEMISPACE_POOL, + MINOR_NONGEN_SEMISPACE_POOL, - /* major collection */ MAJOR_COMPACT_SLIDE, MAJOR_COMPACT_MOVE, - MAJOR_MARK_SWEEP - + MAJOR_MARK_SWEEP }; /* Possible combinations: @@ -110,6 +113,7 @@ /* Two main kinds: generational GC and mark-sweep GC; this is decided at compiling time */ GEN_GC = 0x1, MARK_SWEEP_GC = 0x2, + MOVE_COMPACT_NO_LOS = 0x4, /* Mask of bits standing for two basic kinds */ GC_BASIC_KIND_MASK = ~(unsigned int)0x7, @@ -124,7 +128,8 @@ /* Sub-kinds of mark-sweep GC use the 12~15th LSB */ MS_COLLECTION = 0x1002, /* 0x1000 & MARK_SWEEP_GC */ - MS_COMPACT_COLLECTION = 0x2002 /* 0x2000 & MARK_SWEEP_GC */ + MS_COMPACT_COLLECTION = 0x2002, /* 0x2000 & MARK_SWEEP_GC */ + MC_COLLECTION = 0x1004 }; extern Boolean IS_FALLBACK_COMPACTION; /* only for mark/fw bits debugging purpose */ @@ -133,7 +138,8 @@ GC_CAUSE_NIL, GC_CAUSE_NOS_IS_FULL, GC_CAUSE_LOS_IS_FULL, - GC_CAUSE_SSPACE_IS_FULL, + GC_CAUSE_COS_IS_FULL, + GC_CAUSE_WSPACE_IS_FULL, GC_CAUSE_RUNTIME_FORCE_GC }; @@ -375,6 +381,20 @@ return TRUE; } +extern volatile Boolean obj_alloced_live; +inline Boolean is_obj_alloced_live() +{ return obj_alloced_live; } + +inline void gc_enable_alloc_obj_live() +{ + obj_alloced_live = TRUE; +} + +inline void gc_disenable_alloc_obj_live() +{ + obj_alloced_live = FALSE; +} + /* all GCs inherit this GC structure */ struct Marker; struct Mutator; @@ -433,7 +453,8 @@ SpinLock concurrent_mark_lock; SpinLock enumerate_rootset_lock; - + SpinLock concurrent_sweep_lock; + /* system info */ unsigned int _system_alloc_unit; @@ -456,6 +477,8 @@ return (addr >= gc_heap_base(gc) && addr < gc_heap_ceiling(gc)); } +Boolean obj_belongs_to_gc_heap(Partial_Reveal_Object* p_obj); + /* gc must match exactly that kind if returning TRUE */ inline Boolean gc_match_kind(GC *gc, unsigned int kind) { @@ -472,10 +495,15 @@ return (Boolean)(gc->collect_kind & multi_kinds); } +inline void gc_reset_collector_state(GC* gc) +{ gc->num_active_collectors = 0;} + inline unsigned int gc_get_processor_num(GC* gc) { return gc->_num_processors; } void gc_parse_options(GC* gc); void gc_reclaim_heap(GC* gc, unsigned int gc_cause); +void gc_prepare_rootset(GC* gc); + int64 get_collection_end_time(); @@ -498,6 +526,10 @@ #endif /* STATIC_NOS_MAPPING */ +void gc_init_collector_alloc(GC* gc, Collector* collector); 
+void gc_reset_collector_alloc(GC* gc, Collector* collector); +void gc_destruct_collector_alloc(GC* gc, Collector* collector); + FORCE_INLINE Boolean addr_belongs_to_nos(void* addr) { return addr >= nos_boundary; } @@ -513,5 +545,5 @@ inline Boolean obj_is_moved(Partial_Reveal_Object* p_obj) { return ((p_obj >= los_boundary) || (*p_global_lspace_move_obj)); } -extern Boolean VTABLE_TRACING; +extern Boolean TRACE_JLC_VIA_VTABLE; #endif //_GC_COMMON_H_ Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.cpp URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.cpp?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.cpp (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.cpp Wed Dec 26 02:17:10 2007 @@ -14,23 +14,32 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - #include "gc_common.h" #include "gc_metadata.h" #include "../thread/mutator.h" #include "../thread/marker.h" +#include "../thread/collector.h" #include "../finalizer_weakref/finalizer_weakref.h" #include "../gen/gen.h" #include "../mark_sweep/gc_ms.h" #include "interior_pointer.h" #include "collection_scheduler.h" #include "gc_concurrent.h" +#include "../gen/gc_for_barrier.h" + +Boolean USE_CONCURRENT_GC = FALSE; +Boolean USE_CONCURRENT_ENUMERATION = FALSE; +Boolean USE_CONCURRENT_MARK = FALSE; +Boolean USE_CONCURRENT_SWEEP = FALSE; -Boolean USE_CONCURRENT_GC = FALSE; +volatile Boolean concurrent_mark_phase = FALSE; +volatile Boolean mark_is_concurrent = FALSE; +volatile Boolean concurrent_sweep_phase = FALSE; +volatile Boolean sweep_is_concurrent = FALSE; -volatile Boolean concurrent_mark_phase = FALSE; -volatile Boolean mark_is_concurrent = FALSE; +volatile Boolean gc_sweeping_global_normal_chunk = FALSE; +unsigned int CONCURRENT_ALGO = 0; static void gc_check_concurrent_mark(GC* gc) { @@ -39,7 +48,13 @@ #ifndef USE_MARK_SWEEP_GC gc_gen_start_concurrent_mark((GC_Gen*)gc); #else - gc_ms_start_concurrent_mark((GC_MS*)gc, MAX_NUM_MARKERS); + if(gc_concurrent_match_algorithm(OTF_REM_OBJ_SNAPSHOT_ALGO)){ + gc_ms_start_concurrent_mark((GC_MS*)gc, MIN_NUM_MARKERS); + }else if(gc_concurrent_match_algorithm(OTF_REM_NEW_TARGET_ALGO)){ + gc_ms_start_concurrent_mark((GC_MS*)gc, MIN_NUM_MARKERS); + }else if(gc_concurrent_match_algorithm(MOSTLY_CONCURRENT_ALGO)){ + //ignore. 
+ } #endif unlock(gc->concurrent_mark_lock); } @@ -48,12 +63,14 @@ static void gc_wait_concurrent_mark_finish(GC* gc) { wait_mark_finish(gc); + gc_set_barrier_function(WRITE_BARRIER_REM_NIL); gc_set_concurrent_status(gc,GC_CONCURRENT_STATUS_NIL); - gc_reset_snaptshot(gc); } void gc_start_concurrent_mark(GC* gc) { + int disable_count; + if(!try_lock(gc->concurrent_mark_lock) || gc_mark_is_concurrent()) return; /*prepare rootset*/ @@ -61,6 +78,7 @@ lock(gc->enumerate_rootset_lock); gc_metadata_verify(gc, TRUE); gc_reset_rootset(gc); + disable_count = hythread_reset_suspend_disable(); vm_enumerate_root_set_all_threads(); gc_copy_interior_pointer_table_to_rootset(); gc_set_rootset(gc); @@ -79,21 +97,77 @@ #ifndef USE_MARK_SWEEP_GC gc_gen_start_concurrent_mark((GC_Gen*)gc); #else - gc_ms_start_concurrent_mark((GC_MS*)gc, MIN_NUM_MARKERS); + if(gc_concurrent_match_algorithm(OTF_REM_OBJ_SNAPSHOT_ALGO)){ + gc_set_barrier_function(WRITE_BARRIER_REM_OBJ_SNAPSHOT); + gc_ms_start_concurrent_mark((GC_MS*)gc, MIN_NUM_MARKERS); + }else if(gc_concurrent_match_algorithm(MOSTLY_CONCURRENT_ALGO)){ + gc_set_barrier_function(WRITE_BARRIER_REM_SOURCE_OBJ); + gc_ms_start_most_concurrent_mark((GC_MS*)gc, MIN_NUM_MARKERS); + }else if(gc_concurrent_match_algorithm(OTF_REM_NEW_TARGET_ALGO)){ + gc_set_barrier_function(WRITE_BARRIER_REM_OLD_VAR); + gc_ms_start_concurrent_mark((GC_MS*)gc, MIN_NUM_MARKERS); + } #endif if(TRUE){ unlock(gc->enumerate_rootset_lock); vm_resume_threads_after(); + assert(hythread_is_suspend_enabled()); + hythread_set_suspend_disable(disable_count); } unlock(gc->concurrent_mark_lock); } -void gc_finish_concurrent_mark(GC* gc) +void wspace_mark_scan_mostly_concurrent_reset(); +void wspace_mark_scan_mostly_concurrent_terminate(); + +void gc_finish_concurrent_mark(GC* gc, Boolean is_STW) { gc_check_concurrent_mark(gc); + + if(gc_concurrent_match_algorithm(MOSTLY_CONCURRENT_ALGO)) + wspace_mark_scan_mostly_concurrent_terminate(); + gc_wait_concurrent_mark_finish(gc); + + if(gc_concurrent_match_algorithm(MOSTLY_CONCURRENT_ALGO)){ + /*If the GC uses the mostly-concurrent algorithm, there's a final marking pause.
+ Suspend the mutators once again and finish the marking phase.*/ + int disable_count; + if(!is_STW){ + /*suspend the mutators.*/ + lock(gc->enumerate_rootset_lock); + gc_metadata_verify(gc, TRUE); + gc_reset_rootset(gc); + disable_count = hythread_reset_suspend_disable(); + vm_enumerate_root_set_all_threads(); + gc_copy_interior_pointer_table_to_rootset(); + gc_set_rootset(gc); + } + + /*prepare dirty objects*/ + gc_prepare_dirty_set(gc); + + gc_set_weakref_sets(gc); + + /*start STW mark*/ +#ifndef USE_MARK_SWEEP_GC + assert(0); +#else + gc_ms_start_final_mark_after_concurrent((GC_MS*)gc, MIN_NUM_MARKERS); +#endif + + wspace_mark_scan_mostly_concurrent_reset(); + gc_clear_dirty_set(gc); + if(!is_STW){ + unlock(gc->enumerate_rootset_lock); + vm_resume_threads_after(); + assert(hythread_is_suspend_enabled()); + hythread_set_suspend_disable(disable_count); + } + } + gc_reset_dirty_set(gc); } void gc_reset_concurrent_mark(GC* gc) @@ -112,8 +186,142 @@ if(marker->time_mark > time_mark){ time_mark = marker->time_mark; } + marker->time_mark = 0; } return time_mark; +} + +void gc_start_concurrent_sweep(GC* gc) +{ + + if(!try_lock(gc->concurrent_sweep_lock) || gc_sweep_is_concurrent()) return; + + /*FIXME: enable finref*/ + if(!IGNORE_FINREF ){ + gc_set_obj_with_fin(gc); + Collector* collector = gc->collectors[0]; + collector_identify_finref(collector); +#ifndef BUILD_IN_REFERENT + }else{ + gc_set_weakref_sets(gc); + gc_update_weakref_ignore_finref(gc); +#endif + } + + gc_set_concurrent_status(gc, GC_CONCURRENT_SWEEP_PHASE); + + gc_set_weakref_sets(gc); + + /*Note: we assume that adding entries to weakroot_pool happens during STW rootset enumeration. + If this assumption changes, we should modify the function below.*/ + gc_identify_dead_weak_roots(gc); + + /*start concurrent sweep*/ +#ifndef USE_MARK_SWEEP_GC + assert(0); +#else + gc_ms_start_concurrent_sweep((GC_MS*)gc, MIN_NUM_MARKERS); +#endif + + unlock(gc->concurrent_sweep_lock); +} + +void gc_reset_concurrent_sweep(GC* gc) +{ + gc->num_active_collectors = 0; + gc_sweep_unset_concurrent(); +} + +void gc_wait_concurrent_sweep_finish(GC* gc) +{ + wait_collection_finish(gc); + gc_set_concurrent_status(gc,GC_CONCURRENT_STATUS_NIL); +} + +void gc_finish_concurrent_sweep(GC * gc) +{ + gc_wait_concurrent_sweep_finish(gc); +} + +void gc_check_concurrent_phase(GC * gc) +{ + /*Note: we do not finish concurrent mark here if we do not want to start concurrent sweep.*/ + if(gc_is_concurrent_mark_phase(gc) && is_mark_finished(gc) && USE_CONCURRENT_SWEEP){ + /*Even when all the conditions above are satisfied, we cannot guarantee that concurrent marking is finished, + because sometimes the concurrent marking has not started yet. We check the concurrent mark lock + here to handle this occasional case.*/ + if(try_lock(gc->concurrent_mark_lock)){ + unlock(gc->concurrent_mark_lock); + gc_finish_concurrent_mark(gc, FALSE); + } + } + + if(gc_is_concurrent_sweep_phase(gc) && is_collector_finished(gc)){ + //The reason is the same as for concurrent mark above.
+ if(try_lock(gc->concurrent_sweep_lock)){ + unlock(gc->concurrent_sweep_lock); + gc_finish_concurrent_sweep(gc); + } + } +} + + +void gc_reset_after_concurrent_collection(GC* gc) +{ + /*FIXME: enable concurrent GEN mode.*/ + gc_reset_interior_pointer_table(); + if(gc_is_gen_mode()) gc_prepare_mutator_remset(gc); + + /* Clear rootset pools here rather than in each collection algorithm */ + gc_clear_rootset(gc); + + if(!IGNORE_FINREF ){ + INFO2("gc.process", "GC: finref process after collection ...\n"); + gc_put_finref_to_vm(gc); + gc_reset_finref_metadata(gc); + gc_activate_finref_threads((GC*)gc); +#ifndef BUILD_IN_REFERENT + } else { + gc_clear_weakref_pools(gc); +#endif + } + +#ifdef USE_MARK_SWEEP_GC + gc_ms_update_space_statistics((GC_MS*)gc); +#endif + + gc_clear_dirty_set(gc); + + vm_reclaim_native_objs(); + gc->in_collection = FALSE; + + gc_reset_collector_state(gc); + + if(USE_CONCURRENT_GC && gc_mark_is_concurrent()){ + gc_reset_concurrent_mark(gc); + } + + if(USE_CONCURRENT_GC && gc_sweep_is_concurrent()){ + gc_reset_concurrent_sweep(gc); + } +} + +void gc_decide_concurrent_algorithm(GC* gc, char* concurrent_algo) +{ + if(!concurrent_algo){ + CONCURRENT_ALGO = OTF_REM_OBJ_SNAPSHOT_ALGO; + }else{ + string_to_upper(concurrent_algo); + + if(!strcmp(concurrent_algo, "OTF_OBJ")){ + CONCURRENT_ALGO = OTF_REM_OBJ_SNAPSHOT_ALGO; + + }else if(!strcmp(concurrent_algo, "MOSTLY_CON")){ + CONCURRENT_ALGO = MOSTLY_CONCURRENT_ALGO; + }else if(!strcmp(concurrent_algo, "OTF_SLOT")){ + CONCURRENT_ALGO = OTF_REM_NEW_TARGET_ALGO; + } + } } Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_concurrent.h Wed Dec 26 02:17:10 2007 @@ -22,11 +22,46 @@ enum GC_CONCURRENT_STATUS{ GC_CONCURRENT_STATUS_NIL = 0x00, GC_CONCURRENT_MARK_PHASE = 0x01, + GC_CONCURRENT_MARK_FINAL_PAUSE_PHASE = 0x11, // for mostly concurrent only. 
+ GC_CONCURRENT_SWEEP_PHASE = 0x02 }; +enum HANDSHAKE_SINGAL{ + HANDSHAKE_NIL = 0x00, + + /*mutator to collector*/ + ENABLE_COLLECTOR_SWEEP_LOCAL_CHUNKS = 0x01, + DISABLE_COLLECTOR_SWEEP_LOCAL_CHUNKS = 0x02, + + + ENABLE_COLLECTOR_SWEEP_GLOBAL_CHUNKS = 0x03, + DISABLE_COLLECTOR_SWEEP_GLOBAL_CHUNKS = 0x04 +// /*collector to mutator*/ +// ENABLE_MUTATOR_ALLOC_BARRIER = 0x03, +// DISABLE_MUTATOR_ALLOC_BARRIER = 0x04 +}; + +extern Boolean USE_CONCURRENT_GC; +extern Boolean USE_CONCURRENT_ENUMERATION; +extern Boolean USE_CONCURRENT_MARK; +extern Boolean USE_CONCURRENT_SWEEP; + extern volatile Boolean concurrent_mark_phase; extern volatile Boolean mark_is_concurrent; -extern Boolean USE_CONCURRENT_GC; +extern volatile Boolean concurrent_sweep_phase; +extern volatile Boolean sweep_is_concurrent; +extern unsigned int CONCURRENT_ALGO; + +enum CONCURRENT_MARK_ALGORITHM{ + OTF_REM_OBJ_SNAPSHOT_ALGO = 0x01, + OTF_REM_NEW_TARGET_ALGO = 0x02, + MOSTLY_CONCURRENT_ALGO = 0x03 +}; + +inline Boolean gc_concurrent_match_algorithm(unsigned int concurrent_algo) +{ + return CONCURRENT_ALGO == concurrent_algo; +} inline Boolean gc_mark_is_concurrent() { @@ -35,11 +70,15 @@ inline void gc_mark_set_concurrent() { + if(gc_concurrent_match_algorithm(OTF_REM_OBJ_SNAPSHOT_ALGO) + ||gc_concurrent_match_algorithm(OTF_REM_NEW_TARGET_ALGO)) + gc_enable_alloc_obj_live(); mark_is_concurrent = TRUE; } inline void gc_mark_unset_concurrent() { + gc_disenable_alloc_obj_live(); mark_is_concurrent = FALSE; } @@ -53,21 +92,77 @@ return gc->gc_concurrent_status == GC_CONCURRENT_MARK_PHASE; } +inline Boolean gc_sweep_is_concurrent() +{ + return sweep_is_concurrent; +} + +inline void gc_sweep_set_concurrent() +{ + sweep_is_concurrent = TRUE; +} + +inline void gc_sweep_unset_concurrent() +{ + sweep_is_concurrent = FALSE; +} + +inline Boolean gc_is_concurrent_sweep_phase() +{ + return concurrent_sweep_phase; +} + +inline Boolean gc_is_concurrent_sweep_phase(GC* gc) +{ + return gc->gc_concurrent_status == GC_CONCURRENT_SWEEP_PHASE; +} + inline void gc_set_concurrent_status(GC*gc, unsigned int status) { + /*Reset status*/ + concurrent_mark_phase = FALSE; + concurrent_sweep_phase = FALSE; + gc->gc_concurrent_status = status; - if(status == GC_CONCURRENT_MARK_PHASE){ - concurrent_mark_phase = TRUE; - gc_mark_set_concurrent(); - }else{ - concurrent_mark_phase = FALSE; + switch(status){ + case GC_CONCURRENT_MARK_PHASE: + concurrent_mark_phase = TRUE; + gc_mark_set_concurrent(); + break; + case GC_CONCURRENT_SWEEP_PHASE: + concurrent_sweep_phase = TRUE; + gc_sweep_set_concurrent(); + break; + default: + assert(!concurrent_mark_phase && !concurrent_sweep_phase); } + + return; } void gc_reset_concurrent_mark(GC* gc); void gc_start_concurrent_mark(GC* gc); -void gc_finish_concurrent_mark(GC* gc); +void gc_finish_concurrent_mark(GC* gc, Boolean is_STW); int64 gc_get_concurrent_mark_time(GC* gc); +void gc_start_concurrent_sweep(GC* gc); +void gc_finish_concurrent_sweep(GC * gc); + +void gc_reset_after_concurrent_collection(GC* gc); +void gc_check_concurrent_phase(GC * gc); + +void gc_decide_concurrent_algorithm(GC* gc, char* concurrent_algo); + +void gc_reset_concurrent_sweep(GC* gc); + +extern volatile Boolean gc_sweeping_global_normal_chunk; + +inline Boolean gc_is_sweeping_global_normal_chunk() +{ return gc_sweeping_global_normal_chunk; } + +inline void gc_set_sweeping_global_normal_chunk() +{ gc_sweeping_global_normal_chunk = TRUE; } +inline void gc_unset_sweeping_global_normal_chunk() +{ gc_sweeping_global_normal_chunk = FALSE; } #endif 
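For orientation, the control flow introduced in gc_concurrent.cpp/gc_concurrent.h above amounts to a small state machine: NIL -> concurrent mark -> (for MOSTLY_CONCURRENT_ALGO only, a short stop-the-world final remark) -> concurrent sweep -> NIL. The standalone sketch below models only that flow; MiniGC, the boolean finished flags, and the printf markers are illustrative stand-ins, not the patch's GC structure, marker/collector threads, or locks.

// Standalone sketch of the phase machine this patch introduces.  All types
// are simplified stand-ins; the real code uses GC*, try_lock() and
// dedicated marker/collector threads.
#include <cstdio>

enum ConcurrentStatus {
  STATUS_NIL  = 0x00,
  MARK_PHASE  = 0x01,
  SWEEP_PHASE = 0x02
};

enum MarkAlgo { OTF_OBJ, MOSTLY_CON, OTF_SLOT };

struct MiniGC {
  ConcurrentStatus status;
  MarkAlgo algo;
  bool mark_finished;    // set by marker threads in the real code
  bool sweep_finished;   // set by collector threads in the real code
};

// Mirrors the role of gc_check_concurrent_phase(): poll for a finished
// phase and advance the state machine; called from a scheduler, not from
// mutators.
static void check_concurrent_phase(MiniGC* gc) {
  if (gc->status == MARK_PHASE && gc->mark_finished) {
    if (gc->algo == MOSTLY_CON) {
      // Mostly-concurrent needs one short stop-the-world remark pass to
      // rescan objects dirtied while concurrent marking ran.
      printf("final remark pause\n");
    }
    gc->status = SWEEP_PHASE;   // gc_start_concurrent_sweep() in the patch
    printf("concurrent sweep started\n");
  } else if (gc->status == SWEEP_PHASE && gc->sweep_finished) {
    gc->status = STATUS_NIL;    // gc_reset_after_concurrent_collection()
    printf("collection done\n");
  }
}

int main() {
  MiniGC gc = { MARK_PHASE, MOSTLY_CON, true, false };
  check_concurrent_phase(&gc);  // mark done -> remark pause -> sweep
  gc.sweep_finished = true;
  check_concurrent_phase(&gc);  // sweep done -> idle
  return 0;
}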
Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_class.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_class.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_class.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_class.h Wed Dec 26 02:17:10 2007 @@ -25,9 +25,6 @@ #include "open/types.h" #include "gc_platform.h" -#ifndef FORCE_INLINE -#define FORCE_INLINE inline -#endif /* CONST_MARK_BIT is used in mark_scan in vt, no matter MARK_BIT_FLIPPING used or not. MARK_BIT_FLIPPING is used in oi for marking and forwarding in non-gen nursery forwarding (the marking is for those objects not in nos.) @@ -158,23 +155,23 @@ extern POINTER_SIZE_INT vtable_base; #ifdef COMPRESS_VTABLE -FORCE_INLINE VT compress_vt(Partial_Reveal_VTable* vt) +FORCE_INLINE VT encode_vt(Partial_Reveal_VTable* vt) { assert(vt); return (VT)((POINTER_SIZE_INT)vt - vtable_base); } -FORCE_INLINE Partial_Reveal_VTable* uncompress_vt(VT vt) +FORCE_INLINE Partial_Reveal_VTable* decode_vt(VT vt) { assert(vt); return (Partial_Reveal_VTable*)((POINTER_SIZE_INT)vt + vtable_base); } #else/*ifdef COMPRESS_VTABLE*/ -FORCE_INLINE VT compress_vt(Partial_Reveal_VTable* vt) +FORCE_INLINE VT encode_vt(Partial_Reveal_VTable* vt) { return (VT)vt; } -FORCE_INLINE Partial_Reveal_VTable* uncompress_vt(VT vt) +FORCE_INLINE Partial_Reveal_VTable* decode_vt(VT vt) { return (Partial_Reveal_VTable*) vt; } #endif @@ -226,13 +223,13 @@ FORCE_INLINE GC_VTable_Info *obj_get_gcvt_raw(Partial_Reveal_Object *obj) { - Partial_Reveal_VTable *vtable = uncompress_vt(obj_get_vt(obj)); + Partial_Reveal_VTable *vtable = decode_vt(obj_get_vt(obj)); return vtable_get_gcvt_raw(vtable); } FORCE_INLINE GC_VTable_Info *obj_get_gcvt(Partial_Reveal_Object *obj) { - Partial_Reveal_VTable* vtable = uncompress_vt(obj_get_vt(obj) ); + Partial_Reveal_VTable* vtable = decode_vt(obj_get_vt(obj) ); return vtable_get_gcvt(vtable); } @@ -244,7 +241,7 @@ FORCE_INLINE Boolean object_has_ref_field_before_scan(Partial_Reveal_Object *obj) { - Partial_Reveal_VTable *vt = uncompress_vt(obj_get_vt_raw(obj)); + Partial_Reveal_VTable *vt = decode_vt(obj_get_vt_raw(obj)); GC_VTable_Info *gcvt = vtable_get_gcvt_raw(vt); return (Boolean)((POINTER_SIZE_INT)gcvt & GC_CLASS_FLAG_REFS); } @@ -313,6 +310,8 @@ } #endif //#ifndef _GC_TYPES_H_ + + Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_vm.cpp URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_vm.cpp?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_vm.cpp (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_for_vm.cpp Wed Dec 26 02:17:10 2007 @@ -63,7 +63,6 @@ #endif vm_helper_register_magic_helper(VM_RT_NEW_RESOLVED_USING_VTABLE_AND_SIZE, "org/apache/harmony/drlvm/gc_gen/GCHelper", "alloc"); vm_helper_register_magic_helper(VM_RT_NEW_VECTOR_USING_VTABLE, "org/apache/harmony/drlvm/gc_gen/GCHelper", "allocArray"); - vm_helper_register_magic_helper(VM_RT_GC_HEAP_WRITE_REF, "org/apache/harmony/drlvm/gc_gen/GCHelper", "write_barrier_slot_rem"); } int gc_init() @@ -73,10 +72,12 @@ vm_gc_lock_init(); -#ifndef USE_MARK_SWEEP_GC - unsigned int gc_struct_size = sizeof(GC_Gen); +#if defined(USE_MARK_SWEEP_GC) +unsigned int gc_struct_size = 
sizeof(GC_MS); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) +unsigned int gc_struct_size = sizeof(GC_MC); #else - unsigned int gc_struct_size = sizeof(GC_MS); +unsigned int gc_struct_size = sizeof(GC_Gen); #endif GC* gc = (GC*)STD_MALLOC(gc_struct_size); assert(gc); @@ -85,10 +86,16 @@ gc_parse_options(gc); -// iberezhniuk: compile-time switching is now used in both VM and GC +#ifdef BUILD_IN_REFERENT + if( ! IGNORE_FINREF){ + INFO2(" gc.init" , "finref must be ignored, since BUILD_IN_REFERENT is defined." ); + IGNORE_FINREF = TRUE; + } +#endif + +/* VT pointer compression is a compile-time option, reference compression and vtable compression are orthogonal */ #ifdef COMPRESS_VTABLE assert(vm_vtable_pointers_are_compressed()); - // ppervov: reference compression and vtable compression are orthogonal vtable_base = vm_get_vtable_base(); #endif @@ -98,10 +105,12 @@ gc_metadata_initialize(gc); /* root set and mark stack */ -#ifndef USE_MARK_SWEEP_GC - gc_gen_initialize((GC_Gen*)gc, min_heap_size_bytes, max_heap_size_bytes); -#else +#if defined(USE_MARK_SWEEP_GC) gc_ms_initialize((GC_MS*)gc, min_heap_size_bytes, max_heap_size_bytes); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) + gc_mc_initialize((GC_MC*)gc, min_heap_size_bytes, max_heap_size_bytes); +#else + gc_gen_initialize((GC_Gen*)gc, min_heap_size_bytes, max_heap_size_bytes); #endif set_native_finalizer_thread_flag(!IGNORE_FINREF); @@ -132,11 +141,13 @@ INFO2("gc.process", "GC: call GC wrapup ...."); GC* gc = p_global_gc; -#ifndef USE_MARK_SWEEP_GC +#if defined(USE_MARK_SWEEP_GC) + gc_ms_destruct((GC_MS*)gc); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) + gc_mc_destruct((GC_MC*)gc); +#else gc_gen_wrapup_verbose((GC_Gen*)gc); gc_gen_destruct((GC_Gen*)gc); -#else - gc_ms_destruct((GC_MS*)gc); #endif gc_metadata_destruct(gc); /* root set and mark stack */ @@ -205,7 +216,7 @@ Boolean gc_supports_class_unloading() { - return VTABLE_TRACING; + return TRACE_JLC_VIA_VTABLE; } void gc_add_weak_root_set_entry(Managed_Object_Handle *ref, Boolean is_pinned, Boolean is_short_weak) @@ -253,29 +264,35 @@ int64 gc_free_memory() { -#ifndef USE_MARK_SWEEP_GC - return (int64)gc_gen_free_memory_size((GC_Gen*)p_global_gc); -#else +#if defined(USE_MARK_SWEEP_GC) return (int64)gc_ms_free_memory_size((GC_MS*)p_global_gc); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) + return (int64)gc_mc_free_memory_size((GC_MC*)p_global_gc); +#else + return (int64)gc_gen_free_memory_size((GC_Gen*)p_global_gc); #endif } /* java heap size.*/ int64 gc_total_memory() { -#ifndef USE_MARK_SWEEP_GC - return (int64)((POINTER_SIZE_INT)gc_gen_total_memory_size((GC_Gen*)p_global_gc)); -#else +#if defined(USE_MARK_SWEEP_GC) return (int64)((POINTER_SIZE_INT)gc_ms_total_memory_size((GC_MS*)p_global_gc)); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) + return (int64)((POINTER_SIZE_INT)gc_mc_total_memory_size((GC_MC*)p_global_gc)); +#else + return (int64)((POINTER_SIZE_INT)gc_gen_total_memory_size((GC_Gen*)p_global_gc)); #endif } int64 gc_max_memory() { -#ifndef USE_MARK_SWEEP_GC - return (int64)((POINTER_SIZE_INT)gc_gen_total_memory_size((GC_Gen*)p_global_gc)); +#if defined(USE_MARK_SWEEP_GC) + return (int64)((POINTER_SIZE_INT)gc_ms_total_memory_size((GC_MS*)p_global_gc)); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) + return (int64)((POINTER_SIZE_INT)gc_mc_total_memory_size((GC_MC*)p_global_gc)); #else - return (int64)((POINTER_SIZE_INT)gc_ms_total_memory_size((GC_MS*)p_global_gc)); + return (int64)((POINTER_SIZE_INT)gc_gen_total_memory_size((GC_Gen*)p_global_gc)); #endif } @@ -343,7 +360,7 @@ #else 
//USE_32BITS_HASHCODE int32 gc_get_hashcode(Managed_Object_Handle p_object) { -#ifdef USE_MARK_SWEEP_GC +#if defined(USE_MARK_SWEEP_GC) || defined(USE_UNIQUE_MOVE_COMPACT_GC) return (int32)0;//p_object; #endif @@ -403,10 +420,12 @@ // data structures in not consistent for heap iteration if (!JVMTI_HEAP_ITERATION) return; -#ifndef USE_MARK_SWEEP_GC - gc_gen_iterate_heap((GC_Gen *)p_global_gc); +#if defined(USE_MARK_SWEEP_GC) + gc_ms_iterate_heap((GC_MS*)p_global_gc); +#elif defined(USE_UNIQUE_MOVE_COMPACT_GC) + gc_mc_iterate_heap((GC_MC*)p_global_gc); #else - gc_ms_iterate_heap((GC_MS*)p_global_gc); + gc_gen_iterate_heap((GC_Gen *)p_global_gc); #endif } @@ -419,3 +438,9 @@ mutator_need_block = FALSE; return old_flag; } + +Boolean obj_belongs_to_gc_heap(Partial_Reveal_Object* p_obj) +{ + return address_belongs_to_gc_heap(p_obj, p_global_gc); +} + Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.cpp URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.cpp?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.cpp (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.cpp Wed Dec 26 02:17:10 2007 @@ -76,7 +76,7 @@ gc_metadata.mutator_remset_pool = sync_pool_create(); gc_metadata.collector_remset_pool = sync_pool_create(); gc_metadata.collector_repset_pool = sync_pool_create(); - gc_metadata.dirty_obj_snaptshot_pool = sync_pool_create(); + gc_metadata.gc_dirty_set_pool = sync_pool_create(); gc_metadata.weakroot_pool = sync_pool_create(); #ifdef USE_32BITS_HASHCODE gc_metadata.collector_hashcode_pool = sync_pool_create(); @@ -99,7 +99,7 @@ sync_pool_destruct(metadata->mutator_remset_pool); sync_pool_destruct(metadata->collector_remset_pool); sync_pool_destruct(metadata->collector_repset_pool); - sync_pool_destruct(metadata->dirty_obj_snaptshot_pool); + sync_pool_destruct(metadata->gc_dirty_set_pool); sync_pool_destruct(metadata->weakroot_pool); #ifdef USE_32BITS_HASHCODE sync_pool_destruct(metadata->collector_hashcode_pool); @@ -188,10 +188,17 @@ //if(obj_is_moved(p_obj)) /*Fixme: los_boundery ruined the modularity of gc_common.h*/ if(p_obj < los_boundary){ - write_slot(p_ref, obj_get_fw_in_oi(p_obj)); + p_obj = obj_get_fw_in_oi(p_obj); }else{ - *p_ref = obj_get_fw_in_table(p_obj); + p_obj = obj_get_fw_in_table(p_obj); } + + write_slot(p_ref, p_obj); + + }else if(gc_match_kind(gc, MC_COLLECTION)){ + p_obj = obj_get_fw_in_table(p_obj); + write_slot(p_ref, p_obj); + }else{ if(obj_is_fw_in_oi(p_obj)){ /* Condition obj_is_moved(p_obj) is for preventing mistaking previous mark bit of large obj as fw bit when fallback happens. 
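The write_slot() hunk above follows forwarding information when fixing a reference slot: if the referenced object has moved, the slot is rewritten with the forwarding pointer taken either from the old copy's header (obj_get_fw_in_oi) or from a side table (obj_get_fw_in_table). Below is a minimal self-contained illustration of the header-word variant; Obj, FORWARD_BIT, and fix_slot are invented names standing in for Partial_Reveal_Object, the real mark/forward bits, and the actual fix-up code.

// Sketch of slot fixing through a forwarding pointer stored in the old
// copy's header word.  The layout is illustrative only.
#include <cassert>
#include <cstdint>

struct Obj {
  uintptr_t header;  // low bit doubles as the "forwarded" flag here
};

const uintptr_t FORWARD_BIT = 0x1;

static bool obj_is_forwarded(Obj* o) { return (o->header & FORWARD_BIT) != 0; }

static Obj* obj_get_forwarding(Obj* o) {
  return (Obj*)(o->header & ~FORWARD_BIT);
}

// Analogue of the write_slot(p_ref, p_obj) calls above: read the slot,
// follow the forwarding pointer if present, and store the new location.
static void fix_slot(Obj** p_ref) {
  Obj* p_obj = *p_ref;
  if (p_obj && obj_is_forwarded(p_obj))
    *p_ref = obj_get_forwarding(p_obj);
}

int main() {
  Obj new_copy = { 0 };
  Obj old_copy = { (uintptr_t)&new_copy | FORWARD_BIT };
  Obj* slot = &old_copy;      // a heap slot still pointing at the old copy
  fix_slot(&slot);
  assert(slot == &new_copy);  // slot now points at the moved object
  return 0;
}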
@@ -415,52 +422,51 @@ Boolean obj_is_mark_black_in_table(Partial_Reveal_Object* p_obj); #endif -void gc_reset_snaptshot(GC* gc) +void gc_reset_dirty_set(GC* gc) { GC_Metadata* metadata = gc->metadata; - /*reset mutator local snapshot block*/ Mutator *mutator = gc->mutator_list; while (mutator) { - Vector_Block* local_snapshot = mutator->dirty_obj_snapshot; - assert(local_snapshot); - if(!vector_block_is_empty(local_snapshot)){ + Vector_Block* local_dirty_set = mutator->dirty_set; + assert(local_dirty_set); + if(!vector_block_is_empty(local_dirty_set)){ #ifdef _DEBUG - POINTER_SIZE_INT* iter = vector_block_iterator_init(local_snapshot); - while(!vector_block_iterator_end(local_snapshot,iter)){ + POINTER_SIZE_INT* iter = vector_block_iterator_init(local_dirty_set); + while(!vector_block_iterator_end(local_dirty_set,iter)){ Partial_Reveal_Object* p_obj = (Partial_Reveal_Object*) *iter; - iter = vector_block_iterator_advance(local_snapshot, iter); + iter = vector_block_iterator_advance(local_dirty_set, iter); #ifdef USE_MARK_SWEEP_GC assert(obj_is_mark_black_in_table(p_obj)); #endif } #endif - vector_block_clear(mutator->dirty_obj_snapshot); + vector_block_clear(mutator->dirty_set); } mutator = mutator->next; } - /*reset global snapshot pool*/ - Pool* global_snapshot = metadata->dirty_obj_snaptshot_pool; + /*reset global dirty set pool*/ + Pool* global_dirty_set_pool = metadata->gc_dirty_set_pool; - if(!pool_is_empty(global_snapshot)){ - Vector_Block* snapshot_block = pool_get_entry(global_snapshot); - while(snapshot_block != NULL){ - if(!vector_block_is_empty(snapshot_block)){ + if(!pool_is_empty(global_dirty_set_pool)){ + Vector_Block* dirty_set = pool_get_entry(global_dirty_set_pool); + while(dirty_set != NULL){ + if(!vector_block_is_empty(dirty_set)){ #ifdef _DEBUG - POINTER_SIZE_INT* iter = vector_block_iterator_init(snapshot_block); - while(!vector_block_iterator_end(snapshot_block,iter)){ + POINTER_SIZE_INT* iter = vector_block_iterator_init(dirty_set); + while(!vector_block_iterator_end(dirty_set,iter)){ Partial_Reveal_Object* p_obj = (Partial_Reveal_Object*) *iter; - iter = vector_block_iterator_advance(snapshot_block, iter); + iter = vector_block_iterator_advance(dirty_set, iter); #ifdef USE_MARK_SWEEP_GC assert(obj_is_mark_black_in_table(p_obj)); #endif } #endif } - vector_block_clear(snapshot_block); - pool_put_entry(metadata->free_set_pool,snapshot_block); - snapshot_block = pool_get_entry(global_snapshot); + vector_block_clear(dirty_set); + pool_put_entry(metadata->free_set_pool,dirty_set); + dirty_set = pool_get_entry(global_dirty_set_pool); } } @@ -468,8 +474,39 @@ } +void gc_prepare_dirty_set(GC* gc) +{ + GC_Metadata* metadata = gc->metadata; + Pool* gc_dirty_set_pool = metadata->gc_dirty_set_pool; + lock(gc->mutator_list_lock); + + Mutator *mutator = gc->mutator_list; + while (mutator) { + //FIXME: temporary solution for mostly concurrent. 
+ lock(mutator->dirty_set_lock); + pool_put_entry(gc_dirty_set_pool, mutator->dirty_set); + mutator->dirty_set = free_set_pool_get_entry(metadata); + unlock(mutator->dirty_set_lock); + mutator = mutator->next; + } + unlock(gc->mutator_list_lock); +} + +void gc_clear_dirty_set(GC* gc) +{ + gc_prepare_dirty_set(gc); + + GC_Metadata* metadata = gc->metadata; + + Vector_Block* dirty_set = pool_get_entry(metadata->gc_dirty_set_pool); + while(dirty_set){ + vector_block_clear(dirty_set); + pool_put_entry(metadata->free_set_pool, dirty_set); + dirty_set = pool_get_entry(metadata->gc_dirty_set_pool); + } +} Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_metadata.h Wed Dec 26 02:17:10 2007 @@ -49,7 +49,7 @@ Pool* collector_hashcode_pool; #endif - Pool* dirty_obj_snaptshot_pool; + Pool* gc_dirty_set_pool; }GC_Metadata; @@ -64,12 +64,15 @@ void gc_clear_rootset(GC* gc); void gc_fix_rootset(Collector* collector, Boolean double_fix); void gc_clear_remset(GC* gc); -void gc_reset_snaptshot(GC* gc); +void gc_reset_dirty_set(GC* gc); void gc_identify_dead_weak_roots(GC *gc); void gc_update_weak_roots(GC *gc, Boolean double_fix); void gc_clear_remset(GC* gc); +void gc_clear_dirty_set(GC* gc); + +void gc_prepare_dirty_set(GC* gc); inline void gc_task_pool_clear(Pool* task_pool) { @@ -136,21 +139,20 @@ assert(mutator->rem_set); } -inline void mutator_snapshotset_add_entry(Mutator* mutator, Partial_Reveal_Object* p_obj) +inline void mutator_dirtyset_add_entry(Mutator* mutator, Partial_Reveal_Object* p_obj) { - Vector_Block* dirty_obj_snapshot = mutator->dirty_obj_snapshot; - vector_block_add_entry(dirty_obj_snapshot, (POINTER_SIZE_INT)p_obj); + Vector_Block* dirty_set = mutator->dirty_set; + vector_block_add_entry(dirty_set, (POINTER_SIZE_INT)p_obj); - if( !vector_block_is_full(dirty_obj_snapshot) ) return; + if( !vector_block_is_full(dirty_set) ) return; - vector_block_set_full(dirty_obj_snapshot); + vector_block_set_full(dirty_set); - if(vector_block_set_exclusive(dirty_obj_snapshot)){ - //?vector_block_set_full(dirty_obj_snapshot); //ynhe - pool_put_entry(gc_metadata.dirty_obj_snaptshot_pool, dirty_obj_snapshot); + if(vector_block_set_exclusive(dirty_set)){ + pool_put_entry(gc_metadata.gc_dirty_set_pool, dirty_set); } - mutator->dirty_obj_snapshot = free_set_pool_get_entry(&gc_metadata); + mutator->dirty_set = free_set_pool_get_entry(&gc_metadata); } inline void collector_repset_add_entry(Collector* collector, Partial_Reveal_Object** p_ref) Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_platform.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_platform.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_platform.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_platform.h Wed Dec 26 02:17:10 2007 @@ -55,13 +55,11 @@ #ifdef _WINDOWS_ #define FORCE_INLINE __forceinline -#else - -#ifdef __linux__ +#elif defined (__linux__) #define FORCE_INLINE inline __attribute__((always_inline)) -#endif - -#endif +#else +#define FORCE_INLINE inline 
+#endif /* _WINDOWS_ */ #define ABS_DIFF(x, y) (((x)>(y))?((x)-(y)):((y)-(x))) #define USEC_PER_SEC INT64_C(1000000) @@ -116,6 +114,11 @@ return (int)hythread_create_ex(ret_thread, get_gc_thread_group(), stacksize, priority, NULL, (hythread_entrypoint_t)func, data); +} + +inline int vm_thread_is_suspend_enable() +{ + return hythread_is_suspend_enabled(); } inline void *atomic_casptr(volatile void **mem, void *with, const void *cmp) Modified: harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_space.h URL: http://svn.apache.org/viewvc/harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_space.h?rev=606876&r1=606875&r2=606876&view=diff ============================================================================== --- harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_space.h (original) +++ harmony/enhanced/drlvm/trunk/vm/gc_gen/src/common/gc_space.h Wed Dec 26 02:17:10 2007 @@ -128,117 +128,15 @@ inline unsigned int blocked_space_free_mem_size(Blocked_Space *space){ return GC_BLOCK_SIZE_BYTES * (space->ceiling_block_idx - space->free_block_idx + 1); } inline Boolean blocked_space_used_mem_size(Blocked_Space *space){ return GC_BLOCK_SIZE_BYTES * (space->free_block_idx - space->first_block_idx); } -inline void space_init_blocks(Blocked_Space* space) -{ - Block* blocks = (Block*)space->heap_start; - Block_Header* last_block = (Block_Header*)blocks; - unsigned int start_idx = space->first_block_idx; - for(unsigned int i=0; i < space->num_managed_blocks; i++){ - Block_Header* block = (Block_Header*)&(blocks[i]); - block_init(block); - block->block_idx = i + start_idx; - last_block->next = block; - last_block = block; - } - last_block->next = NULL; - space->blocks = blocks; - - return; -} +void space_init_blocks(Blocked_Space* space); +void space_desturct_blocks(Blocked_Space* space); -inline void space_desturct_blocks(Blocked_Space* space) -{ - Block* blocks = (Block*)space->heap_start; - unsigned int i=0; - for(; i < space->num_managed_blocks; i++){ - Block_Header* block = (Block_Header*)&(blocks[i]); - block_destruct(block); - } -} +void blocked_space_shrink(Blocked_Space* space, unsigned int changed_size); +void blocked_space_extend(Blocked_Space* space, unsigned int changed_size); -inline void blocked_space_shrink(Blocked_Space* space, unsigned int changed_size) -{ - unsigned int block_dec_count = changed_size >> GC_BLOCK_SHIFT_COUNT; - void* new_base = (void*)&(space->blocks[space->num_managed_blocks - block_dec_count]); - - void* decommit_base = (void*)round_down_to_size((POINTER_SIZE_INT)new_base, SPACE_ALLOC_UNIT); - - assert( ((Block_Header*)decommit_base)->block_idx >= space->free_block_idx); - - void* old_end = (void*)&space->blocks[space->num_managed_blocks]; - POINTER_SIZE_INT decommit_size = (POINTER_SIZE_INT)old_end - (POINTER_SIZE_INT)decommit_base; - assert(decommit_size && !(decommit_size%GC_BLOCK_SIZE_BYTES)); - - Boolean result = vm_decommit_mem(decommit_base, decommit_size); - assert(result == TRUE); - - space->committed_heap_size = (POINTER_SIZE_INT)decommit_base - (POINTER_SIZE_INT)space->heap_start; - space->num_managed_blocks = (unsigned int)(space->committed_heap_size >> GC_BLOCK_SHIFT_COUNT); - - Block_Header* new_last_block = (Block_Header*)&space->blocks[space->num_managed_blocks - 1]; - space->ceiling_block_idx = new_last_block->block_idx; - new_last_block->next = NULL; -} - -inline void blocked_space_extend(Blocked_Space* space, unsigned int changed_size) -{ - unsigned int block_inc_count = changed_size >> GC_BLOCK_SHIFT_COUNT; - - void* old_base = 
(void*)&space->blocks[space->num_managed_blocks]; - void* commit_base = (void*)round_down_to_size((POINTER_SIZE_INT)old_base, SPACE_ALLOC_UNIT); - unsigned int block_diff_count = (unsigned int)(((POINTER_SIZE_INT)old_base - (POINTER_SIZE_INT)commit_base) >> GC_BLOCK_SHIFT_COUNT); - block_inc_count += block_diff_count; - - POINTER_SIZE_INT commit_size = block_inc_count << GC_BLOCK_SHIFT_COUNT; - void* result = vm_commit_mem(commit_base, commit_size); - assert(result == commit_base); - - void* new_end = (void*)((POINTER_SIZE_INT)commit_base + commit_size); - space->committed_heap_size = (POINTER_SIZE_INT)new_end - (POINTER_SIZE_INT)space->heap_start; - /*Fixme: For_Heap_Adjust, but need fix if static mapping.*/ - space->heap_end = new_end; - /* init the grown blocks */ - Block_Header* block = (Block_Header*)commit_base; - Block_Header* last_block = (Block_Header*)((Block*)block -1); - unsigned int start_idx = last_block->block_idx + 1; - unsigned int i; - for(i=0; block < new_end; i++){ - block_init(block); - block->block_idx = start_idx + i; - last_block->next = block; - last_block = block; - block = (Block_Header*)((Block*)block + 1); - } - last_block->next = NULL; - space->ceiling_block_idx = last_block->block_idx; - space->num_managed_blocks = (unsigned int)(space->committed_heap_size >> GC_BLOCK_SHIFT_COUNT); -} - -inline void blocked_space_block_iterator_init(Blocked_Space *space) -{ space->block_iterator = (Block_Header*)space->blocks; } - -inline void blocked_space_block_iterator_init_free(Blocked_Space *space) -{ space->block_iterator = (Block_Header*)&space->blocks[space->free_block_idx - space->first_block_idx]; } - -inline Block_Header *blocked_space_block_iterator_get(Blocked_Space *space) -{ return (Block_Header*)space->block_iterator; } - -inline Block_Header *blocked_space_block_iterator_next(Blocked_Space *space) -{ - Block_Header *cur_block = (Block_Header*)space->block_iterator; - - while(cur_block != NULL){ - Block_Header *next_block = cur_block->next; - - Block_Header *temp = (Block_Header*)atomic_casptr((volatile void **)&space->block_iterator, next_block, cur_block); - if(temp != cur_block){ - cur_block = (Block_Header*)space->block_iterator; - continue; - } - return cur_block; - } - /* run out space blocks */ - return NULL; -} +void blocked_space_block_iterator_init(Blocked_Space *space); +void blocked_space_block_iterator_init_free(Blocked_Space *space); +Block_Header *blocked_space_block_iterator_get(Blocked_Space *space); +Block_Header *blocked_space_block_iterator_next(Blocked_Space *space); #endif //#ifndef _GC_SPACE_H_
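The block iterator that this hunk moves out of the header (blocked_space_block_iterator_next above) lets multiple collector threads claim blocks from one shared cursor without a lock: each thread reads the cursor, computes the successor, and installs it with a compare-and-swap, retrying if another thread won the race. Below is a standalone sketch under that reading, with std::atomic substituted for the VM's atomic_casptr() purely to keep the example self-contained.

// Lock-free shared block iterator: each block is handed to exactly one
// of the competing threads.
#include <atomic>
#include <cstdio>

struct BlockHeader {
  int block_idx;
  BlockHeader* next;
};

static std::atomic<BlockHeader*> block_iterator;  // space->block_iterator

// Analogue of blocked_space_block_iterator_next(): advance the shared
// cursor with CAS; on contention, reload and retry; NULL means exhausted.
static BlockHeader* iterator_next() {
  BlockHeader* cur = block_iterator.load();
  while (cur != NULL) {
    BlockHeader* next = cur->next;
    // On failure, compare_exchange_weak reloads cur and the loop retries.
    if (block_iterator.compare_exchange_weak(cur, next))
      return cur;  // this thread now owns cur
  }
  return NULL;     // ran out of space blocks
}

int main() {
  BlockHeader blocks[3] = { {0, &blocks[1]}, {1, &blocks[2]}, {2, NULL} };
  block_iterator.store(&blocks[0]);
  for (BlockHeader* b = iterator_next(); b != NULL; b = iterator_next())
    printf("claimed block %d\n", b->block_idx);
  return 0;
}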