db-derby-user mailing list archives

From "Michael Segel" <mse...@mycingular.blackberry.net>
Subject Re: maximum file size
Date Fri, 10 Nov 2006 15:49:48 GMT
Sorry for top post, using my crack berry.

Not a good idea to use multiple files.
Why not go all the way and make Derby into an MPP database? All you would have to do is preprocess
the inbound query, send it off to all of the nodes, collect the result set(s) into a local
temp file, and then post-process and return the results...  :-)

Actually, a better idea would be to rewrite how Derby stores data.

Introduce chunks, table spaces, index spaces, blob spaces... and all that
goes with it.

But then you've increased your footprint and lost a large portion of your audience.

You could then decide to rewrite the core Derby structure to handle either situation, but
now you have a "storm cloud".

The short simple answer is to choose a different database that is better suited to your task.

Try IBM's IDS 10.

But hey, what do I know? I'm back to working with a client using Oracle 9 Spatial...
Sent via BlackBerry.

-Mike Segel
312 952 8175

-----Original Message-----
From: dmclean62@comcast.net
Date: Fri, 10 Nov 2006 14:12:35 
To:"Derby Discussion" <derby-user@db.apache.org>
Subject: Re: maximum file size

You could use multiple tables to get around the file size limit.

Decide how many rows would go in each table, and then use some mechanism for assigning a unique
ID to each row. You would then be able to determine which table a specific row is in with
integer division:

table # = <global row #> / <rows per table>

local row # = <global row #> % <rows per table>
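A minimal sketch of that routing arithmetic in Java (the ROWS_PER_TABLE value and the
DATA_n table-name pattern are illustrative assumptions, not anything built into Derby):

```java
// Route a global row ID to one of several fixed-size tables, as
// described above. Table-per-chunk keeps each backing file below the
// file-system size limit.
public class RowRouter {
    // Rows stored in each physical table; choose this so one table's
    // backing file stays safely under the file-system limit.
    static final long ROWS_PER_TABLE = 10_000_000L;

    /** Which table (0-based) holds the given global row ID. */
    static long tableIndex(long globalRowId) {
        return globalRowId / ROWS_PER_TABLE;
    }

    /** The row's position within that table. */
    static long localRow(long globalRowId) {
        return globalRowId % ROWS_PER_TABLE;
    }

    /** Hypothetical table-name pattern, e.g. "DATA_12". */
    static String tableName(long globalRowId) {
        return "DATA_" + tableIndex(globalRowId);
    }

    public static void main(String[] args) {
        long id = 123_456_789L;
        // Global row 123,456,789 lands in DATA_12, local row 3,456,789.
        System.out.println(tableName(id) + " local row " + localRow(id));
    }
}
```

The application would then issue its SELECT/INSERT against tableName(id), using localRow(id)
(or the original global ID) as the key within that table.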

The Telemetry Data Warehouse for the Hubble Space Telescope divides the data up by time -
each data table contains one calendar year's worth of telemetry data.

Just a couple of ideas.


 -------------- Original message ----------------------
From: Suresh Thalamati <suresh.thalamati@gmail.com>
> redcloud wrote:
> > Hi! I need to build a SQL table containing 1000000000 (!!!) rows. But I
> > filled up a table with 20000000 rows (table file size 4GB) and my
> > filesystem refused to keep filling up the table. My question is: can
> > Derby build an "infinite"-size table by chunking it into multiple files?
> > 
> No. Currently a table maps to a single file in Derby. Table size is 
> limited by the size of the file that can be created on a file system.