river-dev mailing list archives

From: Mike McGrady <mmcgr...@topiatechnology.com>
Subject: Re: datastructure classes
Date: Fri, 17 Dec 2010 16:20:41 GMT
I concur.

Sent from my iPhone

Michael McGrady
Principal investigator AF081_028 SBIR
Chief Architect
Topia Technology, Inc
Work 1.253.572.9712
Cell 1.253.720.3365

On Dec 17, 2010, at 8:00 AM, Gregg Wonderly <gregg@wonderly.org> wrote:

> On 12/16/2010 8:09 AM, Sim IJskes - QCG wrote:
>> On 16-12-10 14:55, Patricia Shanahan wrote:
>>>> However, we should be able to do, say, hundreds of millions of
>>>> transactions in a day in real-time critical systems such as the FAA
>>>> or the stock market with data affinity and integrity and all the
>>>> other "ilities". If Outrigger cannot do this, it is of no interest
>>>> to us.
>>> 
>>> The current record for a relational database doing simple transactions
>>> is 30 million transactions per minute (Oracle/SPARC TPC-C). Your mileage
>>> may vary, but there is no inherent limit on relational database scaling
>>> that puts a few hundred thousand transactions per minute out of reach.
>> 
>> Apart from that, it would be very interesting to see how a COTS-DB-backed
>> javaspace would behave in practice. And it could be the first step towards
>> producing alternative persistence mechanisms. In the early stage it would be
>> comforting to know we don't have to prove the correctness of a COTS DB. In a
>> later stage we can always look at lifting the transaction-based block-storage
>> layer from Derby or another Java-based DB, for instance.
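
As a rough illustration of the COTS-DB-backed JavaSpace persistence described above, here
is a minimal sketch (not Outrigger's actual code): entries are serialized and written
through plain JDBC inside a transaction, so the database's transaction log provides the
durability. The class name, table name, and embedded Derby URL are invented for
illustration and assume the table has already been created.

// Hypothetical sketch of a COTS-DB-backed entry store; names are placeholders.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import net.jini.core.entry.Entry;

public class JdbcEntryStore {
    private final Connection conn;

    public JdbcEntryStore(String jdbcUrl) throws SQLException {
        // e.g. "jdbc:derby:spaceStore;create=true" for embedded Derby (assumed URL)
        conn = DriverManager.getConnection(jdbcUrl);
        conn.setAutoCommit(false);               // group writes into explicit transactions
    }

    /** Serialize the entry and store it; the DB's transaction log gives durability. */
    public void store(Entry e) throws SQLException, IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(e);                  // Entry extends Serializable
        }
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO entries (class_name, blob_form) VALUES (?, ?)")) {
            ps.setString(1, e.getClass().getName());
            ps.setBytes(2, bytes.toByteArray());
            ps.executeUpdate();
        }
        conn.commit();                           // one round trip per stored entry
    }
}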
> 
> One of the primary issues with bandwidth through any system is latency.  While
> multiprocessor/multi-core and distributed computing can provide huge bandwidth
> possibilities, the underlying issue is per-transaction latency.
> 
> If you look simply at the JDBC model, for example, the act of converting JDBC
> activity to network traffic and back (marshal/unmarshal) is one of the primary
> "time-consuming operations".  On top of that there is the additional overhead of
> how each database authenticates and manages its "network" traffic.  I have no
> exact numbers to demonstrate this, but it is something I have long experience
> dealing with, over the years, with my broker (built before JMS existed) and with
> catching up hundreds of thousands of transactions into databases that have gone
> down for maintenance.
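
To illustrate the round-trip cost being described, here is a hedged sketch of replaying a
backlog through JDBC, assuming a generic driver: each executeUpdate() costs roughly one
marshal/unmarshal round trip, while addBatch()/executeBatch() amortizes that cost across
many rows. The table name and URL are placeholders, not anything from River.

// Illustrative only: batching reduces the number of marshal/unmarshal round trips
// when catching a database up on a large backlog of transactions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BacklogReplay {
    public static void replay(String url, byte[][] backlog) throws SQLException {
        try (Connection c = DriverManager.getConnection(url)) {
            c.setAutoCommit(false);
            try (PreparedStatement ps = c.prepareStatement(
                    "INSERT INTO backlog (payload) VALUES (?)")) {   // placeholder table
                for (byte[] row : backlog) {
                    ps.setBytes(1, row);
                    ps.addBatch();          // queued locally, no network round trip yet
                }
                ps.executeBatch();          // one (or few) round trips for the whole batch
            }
            c.commit();
        }
    }
}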
> 
> JDBC only allows one transaction per connection at a time, so you have context
> overhead involved there.  With each transaction coming across a separate TNS
> Listener process in Oracle, you have OS context-switching issues that inject latency.
> 
> Overall, the bandwidth can be very large, but per-transaction latency is probably
> the biggest reason that SQL databases are not always the best choice for some
> types of performance needs.
> 
> Gregg Wonderly
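
A rough back-of-envelope on that point, assuming (purely for illustration) 1 ms of
round-trip latency per synchronous transaction: a single connection then tops out near
1,000 transactions per second, about 60,000 per minute. Hitting something like the
30 million per minute Patricia cites (500,000 per second) at that latency would take on
the order of 500 concurrent connections, which is exactly where the context-switching
and listener overhead described above starts to dominate.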
