db-derby-user mailing list archives

From Michael Segel <de...@segel.com>
Subject Re: advice for client/server application
Date Fri, 24 Mar 2006 20:48:34 GMT
On Friday 24 March 2006 12:44 am, Veaceslav Chicu wrote:
> if the problem is to move files, you can do this with blobs?
>
> you can have stored procedures that will save and read the file on the server
>
>
> best regards,
> Slavic
>

I was wondering about that.
Derby does support I/O streams, so you could do that.
But that raises the question: why store the file in the database at all?
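
For completeness, if you did go the BLOB route, something along these lines 
would work on the client side. This is only a sketch; the table ("docs") and 
column names are invented for illustration:

  import java.io.File;
  import java.io.FileInputStream;
  import java.io.InputStream;
  import java.sql.Connection;
  import java.sql.PreparedStatement;

  // Assumes a table like: CREATE TABLE docs (id INT PRIMARY KEY, content BLOB(100M))
  void storeFile(Connection conn, int id, File f) throws Exception {
      PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO docs (id, content) VALUES (?, ?)");
      InputStream in = new FileInputStream(f);
      try {
          ps.setInt(1, id);
          // stream the file contents rather than loading them into memory
          ps.setBinaryStream(2, in, (int) f.length());
          ps.executeUpdate();
      } finally {
          in.close();
          ps.close();
      }
  }

The point is that setBinaryStream() lets the driver stream the bytes, so the 
file never has to sit in memory as one big array.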

If you look at content management solutions from companies like IBM, they are 
built around storing the file's metadata within the database.

If you take that design approach, you'll find that you can separate your 
TFTP/FTP concerns from the JDBC layer.
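
As a rough sketch of that approach (the table and column names below are 
invented, not taken from any particular product), only the descriptive data 
goes into Derby and the bytes stay on the filesystem or the FTP side:

  import java.io.File;
  import java.sql.Connection;
  import java.sql.PreparedStatement;

  // Hypothetical metadata table:
  //   CREATE TABLE file_meta (
  //       id          INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  //       file_name   VARCHAR(255) NOT NULL,
  //       remote_path VARCHAR(1024) NOT NULL,
  //       size_bytes  BIGINT)
  void recordFile(Connection conn, File f, String remotePath) throws Exception {
      PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO file_meta (file_name, remote_path, size_bytes) " +
          "VALUES (?, ?, ?)");
      try {
          // the row only describes the file; the transfer itself happens
          // over FTP/TFTP, completely outside of JDBC
          ps.setString(1, f.getName());
          ps.setString(2, remotePath);
          ps.setLong(3, f.length());
          ps.executeUpdate();
      } finally {
          ps.close();
      }
  }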

But that still doesn't answer the question of why you would want to store the 
file within the database in the first place.

Going back up the thread... the original poster writes:
"Things have been working just fine, except that when I run queries, the
>>>>>server process does all the work and returns the results as a vector of
>>>>>string arrays. It's never sat well with me--and as you can imagine--now
>>>>>that the dataset is getting pretty big (120.000-4KB rows returned for
>>>>>some queries), I'm using too much memory.
>>>>>
>>>>>What I'd like to do is get my jdbc connection object onto the client so
>>>>>I don't have to "package" everything up when returning resultsets. The
>>>>>question is how?  My first idea was to just use derby's network server
>>>>>and write the file protocol separately, but I'd prefer to stick with
>>>>>just one socket if I can.
"

This may not make sense... 4KB objects just to hold a file's metadata? 
(The author says the app is half FTP, half database.)

IMHO, there is a design issue that needs to be worked out prior to trying to 
code up the application.  It could be that you're trying to fit a square peg 
in a round hole....
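
If, as the original post suggests, the client ends up talking JDBC directly to 
Derby's Network Server, the query side gets simple: the client can walk the 
ResultSet a row at a time instead of having the server package everything into 
a Vector of String[], and the file transfer can live on its own connection. A 
rough sketch (the host, port, database name and query are made up; the driver 
class and URL form are the standard Derby client ones):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class ClientQuery {
      public static void main(String[] args) throws Exception {
          Class.forName("org.apache.derby.jdbc.ClientDriver");
          Connection conn = DriverManager.getConnection(
              "jdbc:derby://dbhost:1527/myDB");
          Statement st = conn.createStatement();
          ResultSet rs = st.executeQuery(
              "SELECT file_name, remote_path FROM file_meta");
          while (rs.next()) {
              // process one row at a time -- nothing is materialized up front
              System.out.println(rs.getString(1) + " -> " + rs.getString(2));
          }
          rs.close();
          st.close();
          conn.close();
      }
  }

That does mean two sockets rather than one, but it's the kind of separation I'm 
suggesting above.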


> Ryan P Bobko wrote:
> > Thanks for the advice, but I'm not sure if that will help. I now realize
> > what makes me uneasy about my architecture is that I feel like I'm
> > rewriting JDBC little by little just so I can have my file-moving piece
> > on top of it. JDBC works great for my purposes, so my preference would be
> > to remove my custom protocol whenever possible. Is it feasible to extend
> > the JDBC API? Or will that just be more trouble than it's worth?
> >
> > I realize I just changed my question. Any more advice?
> >
> > ry
> >
> > On Wednesday 22 March 2006 07:23 pm, David W. Van Couvering wrote:
> >> A common way client applications working with large result sets have
> >> handled the "too much memory" problem that I've seen is to send the
> >> results over in chunks.  Instead of sending all 120,000 records in one
> >> response, just send 100 or 1,000.  The client processes those 1,000
> >> records, throws them away, and gets the next 1,000.
> >>
> >> Would that work for you?
> >>
> >> David
> >>
> >> Ryan P Bobko wrote:
> >>> Hi List,
> >>> First of all, I can't say enough how impressed I've been with Derby.
> >>> Every time I've thought this embedded database wouldn't be able to do something
> >>> I expect from a "full-blown" database (nested selects, correlated
> >>> subqueries, stored procedures, you name it), it's surprised me. I love
> >>> it.
> >>>
> >>> This isn't strictly a Derby question, but I'm hoping for some advice or
> >>> suggestions on how to proceed. I've been working on an application
> >>> that is a sort of half-database, half-FTP client/server setup. The
> >>> protocol I've implemented between the client and server lets the app do
> >>> things like run queries, but also move files around based on those
> >>> results. Or insert rows into the database based on where files have
> >>> moved to. Files can be moved from the server to client and vice versa.
> >>>
> >>> Things have been working just fine, except that when I run queries, the
> >>> server process does all the work and returns the results as a vector of
> >>> string arrays. It's never sat well with me--and as you can imagine--now
> >>> that the dataset is getting pretty big (120,000 ~4KB rows returned for
> >>> some queries), I'm using too much memory.
> >>>
> >>> What I'd like to do is get my jdbc connection object onto the client so
> >>> I don't have to "package" everything up when returning resultsets. The
> >>> question is how?  My first idea was to just use derby's network server
> >>> and write the file protocol separately, but I'd prefer to stick with
> >>> just one socket if I can.
> >>>
> >>> Advice? Thanks for your time.
> >>> ry

-- 
Michael Segel
Principal 
Michael Segel Consulting Corp.
derby@segel.com
(312) 952-8175 [mobile]
