Thanks!
 
Switching to Java 1.6.0_18 seems to have gotten past the 2 GB file boundary. I now have a new ring token for the first node in my cluster.
 
Can I run a "loadbalance" on nodes 2-6 to achieve better data and token balancing?
 
Should I perform a cleanup operation on node 1?
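 
In case the exact invocations matter, here is what I have in mind (assuming the 0.5 nodeprobe syntax; "node1"/"node2" are placeholders for my hostnames, and the flag spelling is from memory, so please correct me if it's off):
 
    bin/nodeprobe -host node2 loadbalance   # repeat for nodes 3-6, one node at a time
    bin/nodeprobe -host node1 cleanup       # drop data node 1 no longer owns after its token moved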
 
During the loadbalance operation, the following changes occurred:
 
node 1: token value changed; data size changed from 5.7 GB to 8.2 GB (loadbalance was performed on this node)
node 2: no token ring changes; data size remained at 5.7 GB; found the following warning(s) in the log file again
 
java.io.IOException: Reached an EOL or something bizzare occured. Reading from: /node1 BufferSizeRemaining: 16
        at org.apache.cassandra.net.io.StartState.doRead(StartState.java:44)
        at org.apache.cassandra.net.io.ProtocolState.read(ProtocolState.java:39)
        at org.apache.cassandra.net.io.TcpReader.read(TcpReader.java:95)
        at org.apache.cassandra.net.TcpConnection$ReadWorkItem.run(TcpConnection.java:445)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
 
node 3: no token ring changes; data size remained at 5.7 GB
node 4: no token ring changes; data size changed from 3 KB to 5.7 GB (Cassandra selected this node as the loadbalance target)
node 5: no token ring changes; data size remained at 3 KB
node 6: no token ring changes; data size remained at 3 KB
 
Thanks again, I really appreciate your help on this.
Jon
--------------------------------------------------------------------------------------------------------------------

On Tue, Mar 2, 2010 at 9:24 AM, Jon Graham <sjcloud22@gmail.com> wrote:
Thanks Jonathan,
 
My 32-bit Java version is 1.6.0_13-b03; I'll try a Java upgrade.
This tracks well with the -tmp- Data file size stopping at exactly MaxInt.
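 
In case it helps anyone else confirm the same boundary, a small check along these lines (the file path is made up; point it at the actual stalled -tmp- Data file) shows whether the file stopped at exactly Integer.MAX_VALUE bytes:
 
    // Hypothetical helper, not part of Cassandra: compare a file's length
    // against Integer.MAX_VALUE (2147483647 bytes, i.e. the 2GB boundary).
    import java.io.File;
 
    public class TmpFileSizeCheck {
        public static void main(String[] args) {
            // Replace with the real stalled -tmp- Data file; this path is invented.
            File f = new File(args.length > 0 ? args[0]
                    : "/var/lib/cassandra/data/Keyspace1/Standard1-tmp-Data.db");
            long size = f.length();
            System.out.println(size + " bytes (Integer.MAX_VALUE = " + Integer.MAX_VALUE + ")");
            System.out.println("Stopped exactly at MaxInt: " + (size == Integer.MAX_VALUE));
        }
    }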
 
Jon

On Tue, Mar 2, 2010 at 9:15 AM, Jonathan Ellis <jbellis@gmail.com> wrote:
Doing some googling, this is a different JRE bug than the one addressed
by 795: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6253145.
It is marked fixed in JDK 6u18, so try upgrading to that.
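 
For anyone stuck on an older JRE, the usual mitigation for transferTo() dying on files past the 2 GB mark is to cap each call and loop. This is only a sketch of that idea (it is not Cassandra's code or the actual CASSANDRA-795 patch; the class and constant names are made up):
 
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.WritableByteChannel;
 
    public class ChunkedFileStream {
        // Keep each transferTo() request well under Integer.MAX_VALUE bytes.
        private static final long CHUNK = 64L * 1024 * 1024; // 64 MB per call (arbitrary)
 
        public static void stream(String path, WritableByteChannel target) throws IOException {
            FileChannel src = new FileInputStream(path).getChannel();
            try {
                long pos = 0;
                long size = src.size();
                while (pos < size) {
                    // transferTo() may move fewer bytes than asked; advance by what it reports.
                    pos += src.transferTo(pos, Math.min(CHUNK, size - pos), target);
                }
            } finally {
                src.close();
            }
        }
    }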

-Jonathan

On Tue, Mar 2, 2010 at 10:46 AM, Jon Graham <sjcloud22@gmail.com> wrote:
> Hello,
>
> I am running 32-bit Linux, kernel 2.6.27.24. My original data set was
> copied from a 64-bit cassandra cluster to a 32-bit cassandra cluster. I am
> trying to load balance the data on a 32-bit cluster.
>
> Is the CASSANDRA-795 issue applicable to 32-bit Linux too for the 0.5.0
> release?
>
> Thanks,
> Jon
> On Mon, Mar 1, 2010 at 4:55 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
>>
>> On Mon, Mar 1, 2010 at 5:39 PM, Jon Graham <sjcloud22@gmail.com> wrote:
>> > Reached an EOL or something bizzare occured. Reading from: /192.168.2.13
>> > BufferSizeRemaining: 16
>>
>> This one is harmless
>>
>> > java.io.IOException: Value too large for defined data type
>> >     at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>> >     at sun.nio.ch.FileChannelImpl.transferToDirectly(Unknown Source)
>> >     at sun.nio.ch.FileChannelImpl.transferTo(Unknown Source)
>> >     at
>> > org.apache.cassandra.net.TcpConnection.stream(TcpConnection.java:226)
>> >     at
>> > org.apache.cassandra.net.FileStreamTask.run(FileStreamTask.java:55)
>> >     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown
>> > Source)
>> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
>> > Source)
>> >     at java.lang.Thread.run(Unknown Source)
>>
>> This one is killing you.
>>
>> Are you on Windows? If so
>> https://issues.apache.org/jira/browse/CASSANDRA-795 should fix it.
>> That's in both 0.5.1 and 0.6 beta.
>>
>> -Jonathan
>
>