From: Michael Segel <michael_segel@hotmail.com>
To: dev@hbase.apache.org
Subject: Re: [DISCUSS] Multi-Cluster HBase Client
Date: Tue, 30 Jun 2015 08:36:52 -0700

Guys,

You really don't want to do this. (Fault tolerance across a single cluster pair…)

What I didn't say in my other email is that you're not being specific as to what constitutes a failure. Read: how does your client know that it has lost its connection to the primary cluster?

What you might as well do is create a load-balancing server that then manages the connection to one of N clusters in your replication group.
And even then you'll want to make it redundant.

Really?

How often do you have a problem with your client connection? If it happens often… get a new HBase admin, or switch to MapRDB, because you have a stability problem…

In terms of a generic client that wants to manage multiple connections… yeah, that's a pretty straightforward problem to solve. But keep in mind that your cluster isn't stretched across multiple data centers; rather, you have multiple clusters.

Of course… maybe you're all on a single cluster and you're using Slider… ;-)

Again, please think before you pound code.

But hey! What do I know? I don't own my IP :-( ;-P

> On Jun 30, 2015, at 6:24 AM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>
> Cool, let me know. If we position HBase.MCC correctly, maybe we can hit two birds with one stone, at least for the client part. It would be nice to have a configurable client in core that would support use cases like this.
>
> On Tue, Jun 30, 2015 at 9:18 AM, ramkrishna vasudevan <ramkrishna.s.vasudevan@gmail.com> wrote:
>
>> Thanks Ted.
>>
>> Yes, as you said, the idea is to solve a bigger use case where there is a globally distributed cluster but the data is local to each cluster, i.e. the data that we write and read is local to that geography or cluster. Cross-Site Big Table helps you read and write to such a cluster transparently, just by differentiating the clusters with a cluster id.
>>
>> But the other subset of the problem, the one HBase.MCC solves, can also be achieved, because the failover switching during writes/reads happens based on the replication setup that is available in that local cluster.
>>
>> As for the state of CSBT: I need to check the latest status, but it was earlier discussed that CSBT would not be part of the hbase package but rather a standalone tool. I can get an update on that.
>>
>> Regards
>> Ram
>>
>> On Tue, Jun 30, 2015 at 5:05 PM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>>
>>> Hey Ramkrishna,
>>>
>>> I think you're right that there are some things that are the same. The difference is the problem they are trying to solve, and the scope.
>>>
>>> The HBase.MCC design is only about cluster failover and keeping 100% uptime in the case of a single-site failure. Cross-Site Big Table looks to have some of that too, but it is also more complex, because it has the requirement that data be local to a single cluster, so you need to see all the clusters to get all the data.
>>>
>>> Maybe I'm wrong, but they are not solving the same problem. Also, because of HBase.MCC's limited scope, it is far easier to implement and maintain.
>>>
>>> Now, although I agree that Cross-Site Big Table has a valid use case, the use case for HBase.MCC is more about leveling the ground with Cassandra in the marketplace: allowing us to fall back to eventual consistency in the case of a single-site failure, with configs to determine what thresholds must be passed before accepting those eventually consistent reads and writes.
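As a rough illustration of the threshold idea above, here is a sketch of how such knobs might be set from client code. The property names are hypothetical stand-ins for illustration only, not the actual HBase.MCC keys (those are in the design doc):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MccConfigSketch {
      public static Configuration sketch() {
        Configuration conf = HBaseConfiguration.create();
        // Hypothetical knobs, illustrative only (not the real HBase.MCC keys):
        // ms to wait on the primary before sending a speculative get to a failover
        conf.setLong("hbase.mcc.wait.time.before.trying.failover.get", 100L);
        // ms a failed primary is penalized before the client tries it again
        conf.setLong("hbase.mcc.wait.time.before.retrying.primary", 30000L);
        // ordered list of failover clusters consulted after the primary
        conf.setStrings("hbase.mcc.failover.clusters", "clusterB", "clusterC");
        return conf;
      }
    }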
>>> This will allow HBase to better compete for use cases that involve near-real-time streaming. This is important because the hot new trend in the market today is to move batch workloads to near real time. I think HBase is the best solution out there today for this, but for the fact that on site or region server failure we lose functionality (read and write on site failure, and write on region server failure).
>>>
>>> In the end, HBase.MCC's scope is what hopefully should make it exciting. All we need to do is make a new client and update the connection factory to give you the multi-cluster client when it is requested through the configs. No updates to ZK or HBase core would have to be touched.
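A minimal sketch of the kind of factory hook described above, assuming a hypothetical opt-in flag and pass-through wrapper; none of these names are the committed HBase.MCC API:

    import java.io.IOException;
    import java.lang.reflect.Proxy;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public final class MccConnectionFactorySketch {
      // Hypothetical opt-in flag: callers enable the multi-cluster client
      // through configuration alone, with no code changes.
      static final String MCC_ENABLED = "hbase.mcc.enabled";

      public static Connection create(Configuration primary, Configuration... failovers)
          throws IOException {
        if (!primary.getBoolean(MCC_ENABLED, false) || failovers.length == 0) {
          // Stock behavior: callers who never set the flag see no difference.
          return ConnectionFactory.createConnection(primary);
        }
        final Connection primaryConn = ConnectionFactory.createConnection(primary);
        final Connection[] spares = new Connection[failovers.length];
        for (int i = 0; i < failovers.length; i++) {
          spares[i] = ConnectionFactory.createConnection(failovers[i]);
        }
        // Sketch only: wrap the primary in a pass-through proxy. A real client
        // would route getTable() to a Table wrapper that retries against spares.
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, args) -> method.invoke(primaryConn, args));
      }
    }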
>>> Side note: because of the flexibility in the HBase.MCC configs, there is a way to reach a good majority of the Cross-Site Big Table goals with just HBase.MCC.
>>>
>>> Last question: what became of Cross-Site Big Table?
>>>
>>> Let me know if you find this correct.
>>> Thanks
>>> Ted Malaska
>>>
>>> On Tue, Jun 30, 2015 at 12:42 AM, ramkrishna vasudevan <ramkrishna.s.vasudevan@gmail.com> wrote:
>>>
>>>> Hi Ted
>>>>
>>>> I think the idea here is very similar to the Cross-Site Big Table project that was presented at HBaseCon 2014. Please find the slide link below:
>>>> http://www.slideshare.net/HBaseCon/ecosystem-session-3
>>>> That project also adds client-side wrappers, so that the client can internally fail over when a cluster goes down, automatically switching to the replicated clusters based on the configuration. Let us know if you find this interesting.
>>>>
>>>> Regards
>>>> Ram
>>>>
>>>> On Tue, Jun 30, 2015 at 4:01 AM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>>>>
>>>>> lol, I did. Sorry, this is the right doc:
>>>>> https://github.com/tmalaska/HBase.MCC/blob/master/MultiHBaseClientDesignDoc.docx.pdf
>>>>>
>>>>> On Mon, Jun 29, 2015 at 6:30 PM, Andrew Purtell wrote:
>>>>>
>>>>>> I think you may have put up the wrong document? That link goes to product doc.
>>>>>>
>>>>>> On Mon, Jun 29, 2015 at 3:24 PM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>>>>>>
>>>>>>> Here is the PDF link:
>>>>>>> https://github.com/tmalaska/HBase.MCC/blob/master/MultiClusterAndEDH_Latest.docx.pdf
>>>>>>>
>>>>>>> On Mon, Jun 29, 2015 at 6:09 PM, Sean Busbey <busbey@cloudera.com> wrote:
>>>>>>>
>>>>>>>> Michael,
>>>>>>>>
>>>>>>>> This is the dev list; no sound-bite pitch is needed. We have plenty of features whose nuances take time to explain. Please either engage with the complexity of the topic, or wait for the feature to land and get user-accessible documentation. We all get busy from time to time, but that's no reason to push a higher burden onto those who are currently engaged with a particular effort, especially this early in development.
>>>>>>>>
>>>>>>>> That said, the first paragraph gives a suitable brief motivation (slightly rephrased below):
>>>>>>>>
>>>>>>>>> Some applications require response and availability SLAs that a single HBase cluster cannot meet alone. Particularly at high percentiles, queries to a single cluster can be delayed by e.g. GC pauses, individual server process failure, or maintenance activity. By providing clients with a transparent multi-cluster configuration option, we can avoid these outlier conditions by masking such failures from applications that are tolerant of weaker consistency guarantees than HBase provides out of the box.
>>>>>>>>
>>>>>>>> Ted,
>>>>>>>>
>>>>>>>> Thanks for writing this up! We'd prefer to keep discussion of it on the mailing list, so please avoid moving to private webexes.
>>>>>>>>
>>>>>>>> Would you mind if I or one of the other community members converted the design doc to PDF so that it's more accessible?
>>>>>>>>
>>>>>>>> On Mon, Jun 29, 2015 at 4:52 PM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>>>>>>>>
>>>>>>>>> Why don't we set up a webex to talk out the details? What times are you open to talk this week?
>>>>>>>>>
>>>>>>>>> But to answer your questions: this is for active-active and active-failover clusters. There is a primary and N number of failovers per client. This is for gets and puts.
>>>>>>>>>
>>>>>>>>> There are a number of configs in the doc to define how to fail over. The options allow a couple of different use cases. There is a lot of detail in the doc, and I just didn't want to put it all in the email.
>>>>>>>>>
>>>>>>>>> But honestly, I put a lot of time into the doc. I would love to know what you think.
>>>>>>>>>
>>>>>>>>> On Jun 29, 2015 5:46 PM, "Michael Segel" <michael_segel@hotmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Ted,
>>>>>>>>>>
>>>>>>>>>> If you can't do a 30-second pitch, then it's not worth the effort. ;-)
>>>>>>>>>>
>>>>>>>>>> Look, when someone says that they want to have a single client talk to multiple HBase clusters, that could mean two very different things.
>>>>>>>>>>
>>>>>>>>>> First, you could mean that you want a single client to connect to an active/active pair of HBase clusters that replicate to each other. (Active/passive would also be implied, but then you have the issue of when the passive cluster goes active.)
>>>>>>>>>>
>>>>>>>>>> Then you have the issue of someone wanting to talk to multiple different clusters so that they can query the data and create local data sets which they wish to join, combining data from various sources.
>>>>>>>>>>
>>>>>>>>>> The second is a different problem from the first.
>>>>>>>>>>
>>>>>>>>>> -Mike
>>>>>>>>>>
>>>>>>>>>>> On Jun 29, 2015, at 3:38 PM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hey Michael,
>>>>>>>>>>>
>>>>>>>>>>> Read the doc, please. It goes through everything at a low level.
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>> Ted Malaska
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jun 29, 2015 at 4:36 PM, Michael Segel <michael_segel@hotmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> No downtime?
>>>>>>>>>>>>
>>>>>>>>>>>> So you want a client to go against a pair of active/active HBase instances on tied clusters?
>>>>>>>>>>>>
>>>>>>>>>>>>> On Jun 29, 2015, at 3:20 PM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hey Michael,
>>>>>>>>>>>>>
>>>>>>>>>>>>> The use case is simple: "no-downtime use cases", even in the case of site failure.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Now, on this statement: "Why not simply manage each connection/context via a threaded child?"
>>>>>>>>>>>>>
>>>>>>>>>>>>> That is the point: to make that simple, tested, easy, and transparent for HBase users.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ted Malaska
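To picture what "simple and transparent" might mean here, a toy sketch of a speculative get that races a primary Table against a single failover Table. All names are local to this example; the real failover logic lives in the design doc.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;

    public final class SpeculativeGetSketch {
      private final ExecutorService pool = Executors.newCachedThreadPool();

      // Try the primary first; if it exceeds its threshold, fall back to the
      // failover cluster and accept a possibly stale, eventually consistent read.
      public Result get(Table primary, Table failover, Get get, long primaryWaitMs)
          throws Exception {
        Future<Result> fromPrimary = pool.submit(() -> primary.get(get));
        try {
          return fromPrimary.get(primaryWaitMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException slowPrimary) {
          return failover.get(get);
        }
      }
    }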
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Jun 29, 2015 at 4:11 PM, Michael Segel <michael_segel@hotmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> So if I understand your goal, you want a client that can connect to one or more HBase clusters at the same time…
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ok, so let's walk through this and help me understand a couple of use cases for it…
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Why not simply manage each connection/context via a threaded child?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Jun 29, 2015, at 1:48 PM, Ted Malaska <ted.malaska@cloudera.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hey Dev List,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> My name is Ted Malaska, long-time lover and user of HBase. I would like to discuss adding a multi-cluster client to HBase. Here is the link to the design doc:
>>>>>>>>>>>>>>> https://github.com/tmalaska/HBase.MCC/blob/master/MultiHBaseClientDesignDoc.docx%20(1).docx
>>>>>>>>>>>>>>> I have pulled some parts into this main e-mail to give you a high-level understanding of its scope.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *Goals*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The proposed solution is a multi-cluster HBase client that relies on the existing HBase replication functionality to provide an eventually consistent solution in cases of primary-cluster downtime.
>>>>>>>>>>>>>>> https://github.com/tmalaska/HBase.MCC/blob/master/FailoverImage.png
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Additional goals are:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - Be able to switch from a single HBase cluster to the multi-cluster client with limited or no code changes. This means using the HConnectionManager, Connection, and Table interfaces to hide the complexity from the developer (Connection and Table are the new classes for HConnection and HTableInterface in HBase version 0.99); see the usage sketch after this list.
>>>>>>>>>>>>>>> - Offer thresholds to allow developers to decide between degrees of strong consistency and eventual consistency.
>>>>>>>>>>>>>>> - Support N number of linked HBase clusters.
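To make the "limited or no code changes" goal concrete, application code would keep reading exactly like stock single-cluster HBase; only the Configuration handed to the factory would differ. A sketch, with made-up table and column names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MccUsageSketch {
      public static void main(String[] args) throws Exception {
        // Whether this talks to one cluster or many is decided purely by config.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("t1"))) {
          table.put(new Put(Bytes.toBytes("row1"))
              .addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v")));
          Result r = table.get(new Get(Bytes.toBytes("row1")));
          System.out.println(
              Bytes.toString(r.getValue(Bytes.toBytes("f"), Bytes.toBytes("q"))));
        }
      }
    }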
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *Read-Replicas*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Also note this is in alignment with Read-Replicas and can work with them: this client makes us multi-cluster, where Read-Replicas help us be multi-region-server.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *Replication*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You will also see in the document that this works with current replication and requires no changes to it.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *Only a Client change*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You will also see in the doc that this is only a new client, which means no extra code for the end developer, only additional configs to set it up.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *Github*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> There is a GitHub project that shows this works: https://github.com/tmalaska/HBase.MCC
>>>>>>>>>>>>>>> Note this is only a prototype. When adding it to HBase we will use it as a starting point, but there will be changes.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *Initial Results*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Red is where our primary cluster has failed, and you will see from the bottom two graphs that our puts, deletes, and gets are not interrupted:
>>>>>>>>>>>>>>> https://github.com/tmalaska/HBase.MCC/blob/master/AveragePutTimeWithMultiRestartsAndShutDowns.png
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>> Ted Malaska
>>>>>>>>
>>>>>>>> --
>>>>>>>> Sean
>>>>>>
>>>>>> --
>>>>>> Best regards,
>>>>>>
>>>>>>   - Andy
>>>>>>
>>>>>> Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)
The opinions expressed here are mine; while they may reflect a cognitive thought, that is purely accidental.
Use at your own risk.

Michael Segel
michael_segel (AT) hotmail.com