From: Keith Turner
Date: Tue, 31 Oct 2017 17:14:17 -0400
Subject: Re: fluo accumulo table tablet servers not keeping up with application
To: fluo-dev

On Tue, Oct 31, 2017 at 2:22 PM, Meier, Caleb wrote:
> Hey Keith,
>
> Just following up on your last message. After looking at the worker ScanTask logs, it seems like the workers are conducting scans as frequently as the min sleep time permits. That is, if the min sleep time is set to 10s, a ScanTask is being executed every 10s. In addition, running the Fluo wait command indicates that the number of outstanding notifications steadily increases or is held constant (depending on the number of workers). Based on your comments below, it seems like the workers should be scanning at a lower rate given that the notification work queue is constantly increasing in size. Another thing that we tried was reducing the number of workers and increasing the min sleep time. This lowered the scan burden on the tablet server, but unsurprisingly our processing rate plummeted. We also tried lowering the ingest rate for a fixed number of workers (lowering the notification rate for each worker thread). While it took longer for the TabletServer to become saturated, it still became overwhelmed.
>
> In general, for the queries that we are benchmarking, our notification:data ratio is about 7:1 (i.e. each piece of ingested data generates about 7 notifications on the way to being entirely processed). I think that this is our primary culprit, but I think that our application-specific scans are also part of the problem (I'm still in the process of trying to determine what portion of the scans that we are seeing is specific to our observers and what portion is specific to notification discovery - any suggestions here would be appreciated). One reason that I think notification discovery is the culprit is that we implemented an in-memory cache for the metadata, and that didn't seem to affect the scan rate too much (metadata seeks constitute about 30% of our seeks/scans).
>
> Going forward, we're going to shard our data and look into ways to cut down on scans. Any other suggestions about how to improve performance would be appreciated.

In 1.0.0 each worker scans all tablets for notifications. In 1.1.0 tablets and workers are split into groups; you can adjust the worker group size [1], which defaults to 7. If you are using 1.1.0, I would recommend experimenting with this.

If you have 70 workers, then you will have 10 groups. The tablets will also be divided into 10 groups. Each worker will scan all of the tablets in its group. Notifications are hash partitioned within a group.

If you lower the group size, then you will have less scanning. But as you lower the group size you increase the chance of work being spread unevenly. For example, with a group size of 7, at most 7 workers will scan a given tablet. It also means the notifications in a tablet can only be processed by 7 workers. In the worst case, if one tablet has all of the notifications, then only 7 workers will process those notifications. If the notifications in the table are evenly spread across tablets, then you could probably decrease the group size to 2 or 3.
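To make that arithmetic concrete, here is a rough sketch of the grouping idea (this is not Fluo's actual code, and the tablet/notification values are made up):

    // Rough sketch of the 1.1.0 grouping described above -- not Fluo's actual
    // implementation.  With 70 workers and a group size of 7 there are 10
    // groups; each tablet maps to one group, and a notification's row/column
    // hash picks one of the 7 workers in that group.
    public class GroupingSketch {
      public static void main(String[] args) {
        int numWorkers = 70;
        int groupSize = 7;                        // worker group size, defaults to 7
        int numGroups = numWorkers / groupSize;   // 10 groups of workers and of tablets

        // hypothetical tablet end row and notification row/column
        int tabletGroup = Math.floorMod("tablet-endrow-q".hashCode(), numGroups);
        int workerInGroup = Math.floorMod("row123:ntfy-col".hashCode(), groupSize);

        System.out.printf("tablet -> group %d, notification -> worker %d in that group%n",
            tabletGroup, workerInGroup);
      }
    }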
There are two possible ways to get a sense of what the scans are up to via sampling. One is to sample the listscans command in the accumulo shell and see what iterators are in use; transactions and notification scanning will use different iterators. You could also sample jstacks on some tservers and look at which iterators are used.

Another thing to look into would be how many deleted notifications there are. Using the command

  fluo scan --raw -c ntfy

you should be able to see notifications and deletes for notifications. I am curious how many deletes there are. When a table is flushed/minor compacted, some notifications will be GC'd by an iterator. A full compaction will do more. These deletes have to be filtered at scan time. If you have a chance I would be interested in the following numbers (or ratios for the three numbers).

 * How many deleted notifications are there? How many notifications are there?
 * Flush the table
 * How many deleted notifications are there? How many notifications are there?
 * Compact the table
 * How many deleted notifications are there? How many notifications are there?

Keith

[1]: https://github.com/apache/fluo/blob/rel/fluo-1.1.0-incubating/modules/core/src/main/java/org/apache/fluo/core/impl/FluoConfigurationImpl.java#L30

>
> Thanks,
> Caleb
>
> Caleb A. Meier, Ph.D.
> Senior Software Engineer ♦ Analyst
> Parsons Corporation
> 1911 N. Fort Myer Drive, Suite 800 ♦ Arlington, VA 22209
> Office: (703)797-3066
> Caleb.Meier@Parsons.com ♦ www.parsons.com
>
> -----Original Message-----
> From: Keith Turner [mailto:keith@deenlo.com]
> Sent: Friday, October 27, 2017 12:17 PM
> To: fluo-dev
> Subject: Re: fluo accumulo table tablet servers not keeping up with application
>
> On Fri, Oct 27, 2017 at 11:03 AM, Meier, Caleb wrote:
>> Hey Keith,
>>
>> Our benchmark consists of a single query that is a join of two statement patterns (essentially patterns that incoming data matches, where a unit of data is a statement). We are ingesting 50 pairs of statements a minute (100 total), where each statement in the pair matches one of the statement patterns. Because the data is being ingested at a constant rate, the statement pattern Observers and Join Observers are constantly working. One thing that is worth mentioning is that we changed the property fluo.implScanTask.maxSleep from 5 min to 10 seconds. Based on the constant ingest rate, your comments below, and our low maxSleep, it seems like the workers would constantly be scanning for new notifications.
>>
>>> Once a worker scans all tablets and finds a list of notifications, it does not scan again until half of those notifications are processed.
>>
>> How does the maxSleep property work in conjunction with this? If the max sleep time elapses before a worker processes half of the notifications, will it scan?
>
> I don't think it will scan again until the # of queued notifications is cut in half. I looked in 1.0.0 and 1.1.0, and I think the while loops linked below should hold off on the scan until the queue halves.
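> In sketch form, the behavior looks something like this (not the actual ScanTask code; the real loops are at the links below):
>
>     // Sketch of the back-off described above, not the real ScanTask code.
>     // After a scan queues N notifications, the next notification scan is
>     // held off until the queue drains to N/2.
>     import java.util.concurrent.atomic.AtomicInteger;
>
>     class QueueHalvingSketch {
>       static void waitForQueueToHalve(AtomicInteger queued, int foundLastScan)
>           throws InterruptedException {
>         while (queued.get() > foundLastScan / 2) {
>           Thread.sleep(100);  // the real code blocks on the queue rather than polling
>         }
>         // the caller starts the next scan loop here, still subject to
>         // the min/max sleep settings
>       }
>     }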
>
> https://github.com/apache/fluo/blob/rel/fluo-1.0.0-incubating/modules/core/src/main/java/org/apache/fluo/core/worker/finder/hash/ScanTask.java#L85
> https://github.com/apache/fluo/blob/rel/fluo-1.1.0-incubating/modules/core/src/main/java/org/apache/fluo/core/worker/finder/hash/ScanTask.java#L88
>
> Were you able to find the ScanTask debug messages in the worker logs?
> Below are the log messages in the code to give a sense of what to look for.
>
> https://github.com/apache/fluo/blob/rel/fluo-1.0.0-incubating/modules/core/src/main/java/org/apache/fluo/core/worker/finder/hash/ScanTask.java#L130
> https://github.com/apache/fluo/blob/rel/fluo-1.1.0-incubating/modules/core/src/main/java/org/apache/fluo/core/worker/finder/hash/ScanTask.java#L146
>
> IIRC, if notifications were found in a tablet during the last scan, then it will always scan it during the next scan loop. When notifications are not found in a tablet, that tablet's next scan time doubles, up to fluo.implScanTask.maxSleep.
>
> So it's possible that all notifications found are being processed quickly and then the workers are scanning for more. The debug messages would show this.
>
> There is also a minSleep time. This property determines the minimum amount of time it will sleep between scan loops, and it seems to default to 5 secs. Could try increasing this.
>
> Looking at the props, it seems the prop names for min and max sleep changed between 1.0.0 and 1.1.0.
>
>>
>> Caleb A. Meier, Ph.D.
>> Senior Software Engineer ♦ Analyst
>> Parsons Corporation
>> 1911 N. Fort Myer Drive, Suite 800 ♦ Arlington, VA 22209
>> Office: (703)797-3066
>> Caleb.Meier@Parsons.com ♦ www.parsons.com
>>
>> -----Original Message-----
>> From: Keith Turner [mailto:keith@deenlo.com]
>> Sent: Thursday, October 26, 2017 6:20 PM
>> To: fluo-dev
>> Subject: Re: fluo accumulo table tablet servers not keeping up with application
>>
>> On Thu, Oct 26, 2017 at 5:47 PM, Meier, Caleb wrote:
>>> Hey Keith,
>>>
>>> We'll rerun the benchmarks tomorrow and track the outstanding notifications. We'll also see if compacting at some point during ingest helps with the scan rate. Have you observed such high scan rates for such a small amount of data in any of your benchmarking? What would account for the huge disparity in results read vs. results returned? It seems like our scans are extremely inefficient for some reason.
>>> Our tablet servers are becoming overwhelmed even before data gets flushed to disk.
>>
>> Oh, I never saw your attachment; it may not be possible to attach stuff on the mailing list.
>>
>> It's possible that what you are seeing is the workers scanning for notifications. If you look in the worker logs, do you see messages about scanning for notifications? If so, what do they look like?
>>
>> In 1.0.0 each worker scans all tablets in random order. When it scans, it has an iterator that uses hash+mod to select a subset of notifications. The iterator also suppresses deleted notifications. So the selection and suppression by that iterator could explain the read vs returned. It does exponential back off on tablets where it does not find data. Once a worker scans all tablets and finds a list of notifications, it does not scan again until half of those notifications are processed.
>>
>> In the beginning, would you have a lot of notifications? If so, I would expect a lot of scanning, and then it should slow down once the workers get a list of notifications to process.
>>
>> In 1.1.0 the workers divide up the tablets (so workers no longer scan all tablets; groups of workers share groups of tablets). If the table is split after the workers start, it may take them a bit to execute the distributed algorithm that divvies tablets among workers.
>>
>> Anyway, the debug messages about scanning for notifications in the workers should provide some insight into this.
>>
>> If it's not notification scanning, then it could be that the application is scanning over a lot of data that was deleted or something like that.
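>>
>> To illustrate the read vs returned point above, the selection is conceptually like this toy sketch (not the real Fluo iterator):
>>
>>     // Toy model of the notification-finding filter described above -- not
>>     // the real Fluo iterator.  It looks at every notification entry but only
>>     // returns the ones that hash to this worker and are not deleted, which
>>     // is why the monitor can show far more entries read than returned.
>>     import java.util.List;
>>     import java.util.stream.Collectors;
>>
>>     class NotificationSelectSketch {
>>       static List<String> select(List<String> ntfyRows, int worker, int numWorkers) {
>>         return ntfyRows.stream()
>>             .filter(r -> !r.endsWith(":DEL"))                               // suppress deletes
>>             .filter(r -> Math.floorMod(r.hashCode(), numWorkers) == worker) // hash+mod selection
>>             .collect(Collectors.toList());
>>       }
>>     }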
>>>
>>> Caleb A. Meier, Ph.D.
>>> Senior Software Engineer ♦ Analyst
>>> Parsons Corporation
>>> 1911 N. Fort Myer Drive, Suite 800 ♦ Arlington, VA 22209
>>> Office: (703)797-3066
>>> Caleb.Meier@Parsons.com ♦ www.parsons.com
>>>
>>> -----Original Message-----
>>> From: Keith Turner [mailto:keith@deenlo.com]
>>> Sent: Thursday, October 26, 2017 5:36 PM
>>> To: fluo-dev
>>> Subject: Re: fluo accumulo table tablet servers not keeping up with application
>>>
>>> On Thu, Oct 26, 2017 at 2:50 PM, Meier, Caleb wrote:
>>>> Hey Keith,
>>>>
>>>> Thanks for the reply. Regarding our benchmark, I've attached some screenshots of our Accumulo UI that were taken while the benchmark was running. Basically, our ingest rate is pretty low (about 150 entries/s), but our scan rate is off the charts -- approaching 6 million entries/s! Also, notice the disparity between reads and returned in the Scan chart. That disparity would suggest that we're possibly doing full table scans somewhere, which is strange given that all of our scans are RowColumn constrained. Perhaps we are building our Scanner incorrectly. In an effort to maximize the number of TabletServers, we split the Fluo table into 5MB tablets. Also, the data is not well balanced -- the tablet servers do take turns being maxed out while others are idle. We're considering possible sharding strategies.
>>>>
>>>> Given that our TabletServers are getting saturated so quickly for such a low ingest rate, it seems like we definitely need to cut down on the number of scans as a first line of attack to see what that buys us. Then we'll look into tuning Accumulo and Fluo. Does this seem like a reasonable approach to you? Does the scan rate of our application strike you as extremely high? When you look at the Rya Observers, can you pay attention to how we are building our scans to make sure that we're not inadvertently doing full table scans? Also, what exactly do you mean by "are the 6 lookups in the transaction done sequentially"?
>>>
>>> Regarding the scan rate, there are a few things I am curious about.
>>>
>>> Fluo workers scan for notifications in addition to the scanning done by your apps. I made some changes in 1.1.0 to reduce the amount of scanning needed to find notifications, but this should not make much of a difference on a small number of nodes. Details about this are in the 1.1.0 release notes. I am not sure what the best way is to determine how much of the scanning you are seeing is app vs notification finding. Can you run the fluo wait command to see how many outstanding notifications there are?
>>>
>>> Transactions leave a paper trail behind, and compactions clean this up (Fluo has a garbage collection iterator). This is why I asked what effect compacting the table had. Compactions will also clean up deleted notifications.
>>>
>>>>
>>>> Thanks,
>>>> Caleb
>>>>
>>>> Caleb A. Meier, Ph.D.
>>>> Senior Software Engineer ♦ Analyst
>>>> Parsons Corporation
>>>> 1911 N. Fort Myer Drive, Suite 800 ♦ Arlington, VA 22209
>>>> Office: (703)797-3066
>>>> Caleb.Meier@Parsons.com ♦ www.parsons.com
>>>>
>>>> -----Original Message-----
>>>> From: Keith Turner [mailto:keith@deenlo.com]
>>>> Sent: Thursday, October 26, 2017 1:39 PM
>>>> To: fluo-dev
>>>> Subject: Re: fluo accumulo table tablet servers not keeping up with application
>>>>
>>>> Caleb
>>>>
>>>> What, if any, tuning have you done? The following tune-able Accumulo parameters impact performance.
>>>>
>>>> * Write ahead log sync settings (this can have huge performance implications)
>>>> * Files per tablet
>>>> * Tablet server cache sizes
>>>> * Accumulo data block sizes
>>>> * Tablet server client thread pool size
>>>>
>>>> For Fluo the following tune-able parameters are important.
>>>>
>>>> * Commit memory (this determines how many transactions are held in memory while committing)
>>>> * Threads running transactions
>>>>
>>>> What does the load (CPU and memory) on the cluster look like? I'm curious how even it is. For example, if one tserver is at 100% CPU while others are idle, this could be caused by uneven data access patterns.
>>>>
>>>> Would it be possible for me to see or run the benchmark? I am going to take a look at the Rya observers; let me know if there is anything in particular I should look at.
>>>>
>>>> Are the 6 lookups in the transaction done sequentially?
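>>>>
>>>> If they are done one at a time, batching them might help. If I remember the API correctly, SnapshotBase has a get() that takes a collection of RowColumns and does the lookups together instead of one by one -- roughly like this sketch (row/column names are made up, not Rya's schema):
>>>>
>>>>     // Hedged sketch: batch the metadata lookups instead of doing them one
>>>>     // at a time.  Row/column names here are placeholders.
>>>>     import java.util.Arrays;
>>>>     import java.util.Collection;
>>>>     import java.util.Map;
>>>>     import org.apache.fluo.api.client.TransactionBase;
>>>>     import org.apache.fluo.api.data.Bytes;
>>>>     import org.apache.fluo.api.data.Column;
>>>>     import org.apache.fluo.api.data.RowColumn;
>>>>
>>>>     class BatchLookupSketch {
>>>>       static Map<RowColumn, Bytes> readMetadata(TransactionBase tx, String nodeId) {
>>>>         Column meta = new Column("metadata", "joinInfo");
>>>>         Collection<RowColumn> keys = Arrays.asList(
>>>>             new RowColumn(nodeId + ":left", meta),
>>>>             new RowColumn(nodeId + ":right", meta));
>>>>         return tx.get(keys);  // one batched call instead of N sequential gets
>>>>       }
>>>>     }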
>>>>
>>>> Keith
>>>>
>>>> On Thu, Oct 26, 2017 at 11:34 AM, Meier, Caleb wrote:
>>>>> Hello Fluo Devs,
>>>>>
>>>>> We have implemented an incremental query evaluation service for Apache Rya that leverages Apache Fluo. We've been doing some benchmarking and we've found that the Accumulo tablet servers for the Fluo table are falling behind pretty quickly for our application. We've tried splitting the Accumulo table so that we have more tablet servers, but that doesn't really buy us too much. Our application is fairly scan intensive -- we have a metadata framework in place that allows us to pass query results through the query tree, and each observer needs to look up metadata to determine which observer to route its data to after processing. To give you some indication of our scan rates, our Join Observer does about 6 lookups, builds a scanner to do one RowColumn restricted scan, and then does many writes. So an obvious way to alleviate the burden on the TabletServer is to cut down on the number of scans.
>>>>>
>>>>> One approach that we are considering is to import all of our metadata into memory. Essentially, each Observer would need access to an in-memory metadata cache. We're considering using the Observer context, but this cache needs to be mutable because a user needs to be able to register new queries. Is it possible to update the context, or would we need to restart the application to do that? I guess other options would be to create a static cache for each Observer that stores the metadata, or to store it in Zookeeper. Have any of you devs ever had to create a solution to share state between Observers that doesn't rely on the Fluo table?
>>>>>
>>>>> In addition to cutting down on the scan rate, are there any other approaches that you would consider? I assume that the problem lies primarily with how we've implemented our application, but I'm also wondering if there is anything we can do from a configuration point of view to reduce the burden on the tablet servers. Would reducing the number of workers/worker threads to cut down on the number of times a single observation is processed be helpful? It seems like this approach would cut out some redundant scans as well, but it might be more of a second order optimization. In general, any insight that you might have on this problem would be greatly appreciated.
>>>>>
>>>>> Sincerely,
>>>>> Caleb Meier
>>>>>
>>>>> Caleb A. Meier, Ph.D.
>>>>> Senior Software Engineer ♦ Analyst
>>>>> Parsons Corporation
>>>>> 1911 N. Fort Myer Drive, Suite 800 ♦ Arlington, VA 22209
>>>>> Office: (703)797-3066
>>>>> Caleb.Meier@Parsons.com ♦ www.parsons.com