Subject: Re: Error with fastgen input
From: "Edward J. Yoon" <edwardyoon@apache.org>
To: dev@hama.apache.org
Date: Thu, 14 Mar 2013 20:03:30 +0900

P.S., These kinds of comments are never helpful in developing a community:

"before you run riot all along the codebase: Suraj is currently working on that stuff - don't make it more difficult for him by making him rebase all his patches the whole time. He has the plan that we made to make the stuff work; his part is currently missing. So don't try to muddle around there, it will make this take longer than already needed."

On Thu, Mar 14, 2013 at 7:57 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
> In my opinion, our best action is to 1) explain the plans and edit them together on the wiki, and then 2) break the implementation tasks down as small as possible, so that the available people can try them in parallel. Then you can use the available people. Do you remember that I asked you to write down your plan here? - http://wiki.apache.org/hama/SpillingQueue If you have some time, please do it for me. I'll help you in my free time.
>
> Regarding branches: maybe we all are not familiar with online collaboration (or don't want to collaborate anymore). If we want to walk our own ways, why do we need to be here together?
>
> On Thu, Mar 14, 2013 at 7:13 PM, Suraj Menon <surajsmenon@apache.org> wrote:
>> Three points:
>>
>> Firstly, apologies, because this conversation partly emanates from the delay in providing the set of patches. I was not able to slice out as much time as I was hoping.
>>
>> Second, I think I/we can work on separate branches. Since most of these concerns can only be answered by future patches, a decision can be made then. We can decide whether an svn revert is needed during the process on trunk. (This is a general comment and not related to a particular JIRA.)
>>
>> Third, please feel free to slice a release if it is really important.
>>
>> Thanks,
>> Suraj
>>
>> On Thu, Mar 14, 2013 at 5:39 AM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>
>>> To reduce arguing, I'm appending my opinions.
>>>
>>> In HAMA-704, I wanted to remove only the message map to reduce memory consumption. I still don't want to talk about disk-based vertices and the Spilling Queue at the moment. With this, I wanted to release 0.6.1 as a 'partitioning issue fixed and quick executable examples' version ASAP. That's why I scheduled the Spilling Queue in the 0.7 roadmap.
>>>
>>> As you can see, issues are happening one right after another. I don't think we have to clean up all the never-ending issues. We can improve step-by-step.
>>>
>>> 1. http://wiki.apache.org/hama/RoadMap
>>>
>>> On Thu, Mar 14, 2013 at 6:22 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>> Typos ;)
>>>>
>>>>> except YARN integration tasks. If you leave here, I have to take cover YARN tasks. Should I wait someone? Am I touching core module
>>>>
>>>> I have to cover YARN tasks instead of you.
>>>>
>>>> On Thu, Mar 14, 2013 at 6:12 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>> Hmm, here are my opinions:
>>>>>
>>>>> As you know, we have a problem with a lack of team members and contributors. So we should break every task down as small as possible. Our best action is improving step-by-step. And every Hama-x.x.x should run well, even if it's at a baby-cart level.
>>>>>
>>>>> And tech should be developed out of necessity. So I think we need to cut releases as often as possible; therefore I volunteered to manage releases. Actually, I wanted to work only on QA (quality assurance) related tasks, because your code is better than mine and I have a cluster.
>>>>>
>>>>> However, we are currently not doing it like that. I guess there are many reasons. We're all not full-time open sourcers (except me).
>>>>>
>>>>>> You have 23 issues assigned. Why do you need to work on that?
>>>>>
>>>>> I don't know what you mean exactly. But the 23 issues are almost all examples, except YARN integration tasks. If you leave here, I have to take cover YARN tasks. Should I wait someone? Am I touching the core module aggressively?
>>>>>
>>>>>> Otherwise Suraj and I branch those issues away and you can play around in trunk how you like.
>>>>>
>>>>> I also don't know what you mean exactly, but if you want to, please do.
>>>>>
>>>>> By the way, can you answer this question: is it really a technical conflict, or an emotional conflict?
>>>>>
>>>>> On Thu, Mar 14, 2013 at 5:32 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>> You have 23 issues assigned. Why do you need to work on that?
>>>>>> Otherwise Suraj and I branch those issues away and you can play around in trunk how you like.
>>>>>> On 14.03.2013 09:04, "Edward J. Yoon" wrote:
>>>>>>
>>>>>>> P.S., Please don't say things like that.
>>>>>>>
>>>>>>> No decisions have been made yet. And if someone has a question or missed something, you have to try to explain it here, because this is open source. No one can say "don't touch trunk because I'm working on it".
>>>>>>>
>>>>>>> On Thu, Mar 14, 2013 at 4:37 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>> Sorry for my quick-and-dirty style small patches.
>>>>>>>>
>>>>>>>> However, we should work together in parallel. Please share progress here if there is any.
>>>>>>>>
>>>>>>>> On Thu, Mar 14, 2013 at 3:46 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>>>>> Hi Edward,
>>>>>>>>>
>>>>>>>>> before you run riot all along the codebase: Suraj is currently working on that stuff - don't make it more difficult for him by making him rebase all his patches the whole time.
>>>>>>>>> He has the plan that we made to make the stuff work; his part is currently missing. So don't try to muddle around there, it will make this take longer than already needed.
>>>>>>>>>
>>>>>>>>> 2013/3/14 Edward J. Yoon <edwardyoon@apache.org>
>>>>>>>>>
>>>>>>>>>> Personally, I would like to solve this issue by touching DiskVerticesInfo. If we write sorted sub-sets of the vertices into multiple files, we can avoid huge memory consumption.
>>>>>>>>>>
>>>>>>>>>> If we want to sort the partitioned data using the messaging system, ideas should be collected.
>>>>>>>>>>
>>>>>>>>>> On Thu, Mar 14, 2013 at 10:31 AM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>> Oh, now I get how iterate() works. HAMA-704 is nicely written.
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Mar 14, 2013 at 12:02 AM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>> I'm reading the changes of HAMA-704 again. As a result of adding DiskVerticesInfo, the vertices list needs to be sorted. I'm not sure, but I think this approach will bring more disadvantages than advantages.
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Mar 13, 2013 at 11:09 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>>>> in loadVertices? Maybe the feature for coupling storage in user space with BSP Messaging [HAMA-734] can avoid double reads and writes. This way, partitioned or non-partitioned by the partitioner, we can keep vertices sorted with a single read and a single write on every peer.
>>>>>>>>>>>>>
>>>>>>>>>>>>> And, as I commented on the JIRA ticket, I think we can't use the messaging system for sorting vertices within partition files.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Mar 13, 2013 at 11:00 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>>>> P.S., (number of splits = number of partitions) is really confusing to me. Even though the number of blocks is equal to the desired number of tasks, the data should be re-partitioned again.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Mar 13, 2013 at 10:36 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>>>>> Indeed. If there are already partitioned input files (unsorted) and the user wants to skip the pre-partitioning phase, it should be handled in the GraphJobRunner BSP program. Actually, I still don't know why the re-partitioned files need to be sorted. It's only about the GraphJobRunner.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> partitioning. (This is outside the scope of graphs. We can have a dedicated partitioning superstep for graph applications).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Sorry, I don't understand exactly yet. Do you mean just a partitioning job based on the superstep API?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> By default, 100 tasks will be assigned for the partitioning job. The partitioning job will create 1,000 partitions. Thus, we can execute the graph job with 1,000 tasks.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Let's assume that an input sequence file is 20GB (100 blocks). If I want to run with 1,000 tasks, what happens?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Mar 13, 2013 at 9:49 PM, Suraj Menon <surajsmenon@apache.org> wrote:
>>>>>>>>>>>>>>>> I am responding on this thread because of better continuity for the conversation. We cannot expect the partitions to be sorted every time. When the number of splits = the number of partitions and partitioning is switched off by the user [HAMA-561], the partitions would not be sorted. Can we do this in loadVertices? Maybe the feature for coupling storage in user space with BSP Messaging [HAMA-734] can avoid double reads and writes. This way, partitioned or non-partitioned by the partitioner, we can keep vertices sorted with a single read and a single write on every peer.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Just clearing up confusion, if any, regarding superstep injection for partitioning. (This is outside the scope of graphs. We can have a dedicated partitioning superstep for graph applications.)
>>>>>>>>>>>>>>>> Say there are x splits and y tasks configured by the user.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> if x > y
>>>>>>>>>>>>>>>> The y tasks are scheduled with each of them holding one of the x splits, and the remaining splits have no resource local to them. Then the partitioning superstep redistributes the partitions among them to create local partitions. Now the question is: can we re-initialize a peer's input based on this new local part of the partition?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> if y > x
>>>>>>>>>>>>>>>> works as it works today.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Just putting my points in brainstorming.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> -Suraj
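To make Suraj's "single read and single write" point above concrete, here is a rough plain-Java sketch of the idea: each peer reads its split once, routes every vertex to the owning partition, and the owner keeps vertices in a sorted structure, so the final local write is already in vertex-ID order. All names here are made up for illustration; this is not our actual API.

    import java.util.*;

    // Sketch only: "ownerOf" mimics what a hash partitioner would do,
    // and a sorted set per peer stands in for sorted messaging.
    public class PartitionRoutingSketch {
      static int ownerOf(String vertexId, int numPeers) {
        return (vertexId.hashCode() & Integer.MAX_VALUE) % numPeers;
      }

      public static void main(String[] args) {
        int numPeers = 2;
        List<String> split = Arrays.asList("50", "3", "98", "1", "61");

        // one sorted "inbox" per peer
        List<TreeSet<String>> inbox = new ArrayList<>();
        for (int i = 0; i < numPeers; i++) inbox.add(new TreeSet<>());

        for (String id : split)                      // single read...
          inbox.get(ownerOf(id, numPeers)).add(id);  // ...routed to its owner

        for (int p = 0; p < numPeers; p++)           // single sorted write
          System.out.println("peer " + p + " writes " + inbox.get(p));
      }
    }

Whether our messaging layer can play the role of the sorted inbox at scale is exactly the open question here.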
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Mar 11, 2013 at 7:39 AM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>>>>>>> I just filed it here: https://issues.apache.org/jira/browse/HAMA-744
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Mar 11, 2013 at 7:35 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>>>>>>>> Additionally,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> spilling queue and sorted spilling queue, can we inject the partitioning superstep as the first superstep and use local memory?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Can we execute a different number of tasks per superstep?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Mar 11, 2013 at 6:56 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>>>>>>>>>> For graph processing, the partitioned files that result from the partitioning job must be sorted. Currently only the partition files in
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I see.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> For other partitionings and with regard to our superstep API, Suraj's idea of injecting a preprocessing superstep that partitions the stuff into our messaging system is actually the best.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> BTW, if some garbage objects can accumulate in the partitioning step, a separate partitioning job may not be a bad idea. Is there some special reason?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Mar 6, 2013 at 6:15 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>> For graph processing, the partitioned files that result from the partitioning job must be sorted. Currently only the partition files in themselves are sorted, so more tasks result in unsorted data in the completed file. This only applies to the graph processing package.
>>>>>>>>>>>>>>>>>>>> So, as Suraj said, it would be much simpler to solve this via messaging once it is scalable (it will be very, very scalable!). The GraphJobRunner can then partition the stuff with a single superstep in setup(), as it did ages ago. The messaging must be sorted anyway for the algorithm, so this is a nice side effect and saves us the partitioning job for graph processing.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> For other partitionings and with regard to our superstep API, Suraj's idea of injecting a preprocessing superstep that partitions the stuff into our messaging system is actually the best.
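The "sorted within each partition file, unsorted in the concatenated result" problem Thomas describes is the classic k-way merge situation. As a rough illustration (plain line-oriented readers stand in for the real SequenceFile readers; all names are made up):

    import java.io.*;
    import java.util.*;

    // Sketch: instead of appending the per-task partition files (which
    // destroys the global order), repeatedly emit the smallest head
    // element across all sorted inputs via a priority queue.
    public class MergeSortedPartitions {
      public static void merge(List<BufferedReader> parts, Writer out)
          throws IOException {
        // heap entries are [vertexId, indexOfSourceFile]
        PriorityQueue<String[]> heap =
            new PriorityQueue<>((a, b) -> a[0].compareTo(b[0]));
        for (int i = 0; i < parts.size(); i++) {
          String line = parts.get(i).readLine();
          if (line != null) heap.add(new String[] { line, String.valueOf(i) });
        }
        while (!heap.isEmpty()) {
          String[] head = heap.poll();
          out.write(head[0] + "\n");  // globally sorted output
          String next = parts.get(Integer.parseInt(head[1])).readLine();
          if (next != null) heap.add(new String[] { next, head[1] });
        }
      }
    }

The same pattern works no matter how many part files the partitioning produced; only the heap size grows with the number of files.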
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> 2013/3/6 Suraj Menon <surajsmenon@apache.org>
>>>>>>>>>>>>>>>>>>>>> No, the partitions we write locally need not be sorted. Sorry for the confusion. The superstep injection is possible with the Superstep API. There are a few enhancements needed to make it simpler since I last worked on it. We can then look into the partitioning superstep being executed before the setup of the first superstep of the submitted job. I think it is feasible.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Tue, Mar 5, 2013 at 5:48 AM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>>>>>>>>>>>>>>>>>>>>>>> spilling queue and sorted spilling queue, can we inject the partitioning superstep as the first superstep and use local memory?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Actually, I wanted to add something before calling the BSP.setup() method, to avoid executing an additional BSP job. But, in my opinion, the current way is enough. I think we need to collect more experience with input partitioning in large environments. I'll do that.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> BTW, I still don't know why it needs to be sorted?! MR-like?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Feb 28, 2013 at 11:20 PM, Suraj Menon <surajsmenon@apache.org> wrote:
>>>>>>>>>>>>>>>>>>>>>>> Sorry, I am increasing the scope here to outside the graph module. When we have a spilling queue and a sorted spilling queue, can we inject the partitioning superstep as the first superstep and use local memory?
>>>>>>>>>>>>>>>>>>>>>>> Today we have a partitioning job within a job, and we are creating two copies of the data on HDFS. This could be really costly. Is it possible to create or redistribute the partitions in local memory and initialize the record reader there?
>>>>>>>>>>>>>>>>>>>>>>> The user can run a separate job, given in the examples area, to explicitly repartition the data on HDFS. The deployment question is: how much disk space gets allocated for local memory usage? Would it be a safe approach with the limitations?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> -Suraj
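A rough sketch of the sorted-spilling idea (buffer vertices in memory, sort and spill a run file to local disk whenever the buffer fills up, then k-way merge the runs as sketched above). Everything here is illustrative plain Java, not the actual spilling queue code:

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    // Sketch: each spilled run is sorted on its own; the global order is
    // produced later by merging the runs.
    public class SpillingRunWriter {
      private final List<String> buffer = new ArrayList<>();
      private final List<Path> runs = new ArrayList<>();
      private final int limit;

      public SpillingRunWriter(int limit) { this.limit = limit; }

      public void add(String vertexId) throws IOException {
        buffer.add(vertexId);
        if (buffer.size() >= limit) spill();
      }

      public List<Path> close() throws IOException {
        if (!buffer.isEmpty()) spill();
        return runs;  // each run is sorted; merge them next
      }

      private void spill() throws IOException {
        Collections.sort(buffer);  // sort only the in-memory chunk
        Path run = Files.createTempFile("vertices-run-", ".txt");
        Files.write(run, buffer);
        runs.add(run);
        buffer.clear();
      }
    }

Combined with the merge sketch above, this is just a plain external sort over the local disk.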
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Thu, Feb 28, 2013 at 7:05 AM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>> yes. Once Suraj has added merging of sorted files, we can add this to the partitioner pretty easily.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Edward J. Yoon <edwardyoon@apache.org>
>>>>>>>>>>>>>>>>>>>>>>>>> Eh,..... btw, does the re-partitioned data really need to be sorted?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Feb 28, 2013 at 7:48 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>> Now I get how the partitioning works; obviously, if you merge n sorted files by just appending them to each other, this will result in totally unsorted data ;-)
>>>>>>>>>>>>>>>>>>>>>>>>>> Why didn't you solve this via messaging?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Thomas Jungblut <thomas.jungblut@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Seems that they are not correctly sorted:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 50
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 52
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 54
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 56
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 58
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 61
>>>>>>>>>>>>>>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 78
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 81
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 83
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 85
>>>>>>>>>>>>>>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 94
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 96
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 98
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 1
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 10
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 12
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 14
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 16
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 18
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 21
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 23
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 25
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 27
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 29
>>>>>>>>>>>>>>>>>>>>>>>>>>> vertexID: 3
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> So this won't work correctly then...
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Thomas Jungblut <thomas.jungblut@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> sure, have fun on your holidays.
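If it helps the debugging: within each block of the dump above, the order (1, 10, 12, ..., 29, 3) looks like plain text ordering of numeric IDs, with the two part files appended after one another. A tiny check of text vs. numeric ordering (plain Java, just to illustrate the observation):

    import java.util.*;

    // Prints [1, 10, 12, 29, 3, 50, 98] (text order) and then
    // [1, 3, 10, 12, 29, 50, 98] (numeric order).
    public class IdOrderDemo {
      public static void main(String[] args) {
        List<String> ids = Arrays.asList("1", "3", "10", "12", "29", "50", "98");
        Collections.sort(ids);  // lexicographic, like Text keys
        System.out.println(ids);
        ids.sort(Comparator.comparingInt(Integer::parseInt));
        System.out.println(ids);
      }
    }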
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Edward J. Yoon <edwardyoon@apache.org>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sure, but if you can fix it quickly, please do. March 1 is a holiday[1], so I'll appear next week.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1. http://en.wikipedia.org/wiki/Public_holidays_in_South_Korea
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Feb 28, 2013 at 6:36 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Maybe 50 is missing from the file; I didn't observe whether all items were added.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> As far as I remember, I copy/pasted the logic of the ID into the fastgen; want to have a look into it?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Edward J. Yoon <edwardyoon@apache.org>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess it's a bug of fastgen when it generates an adjacency matrix into multiple files.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Feb 28, 2013 at 6:29 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You have two files, are they partitioned correctly?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Edward J. Yoon <edwardyoon@apache.org>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like a bug.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> edward@udanax:~/workspace/hama-trunk$ ls -al /tmp/randomgraph/
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> total 44
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> drwxrwxr-x  3 edward edward  4096 2월 28 18:03 .
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> drwxrwxrwt 19 root   root   20480 2월 28 18:04 ..
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rwxrwxrwx  1 edward edward  2243 2월 28 18:01 part-00000
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rw-rw-r--  1 edward edward    28 2월 28 18:01 .part-00000.crc
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rwxrwxrwx  1 edward edward  2251 2월 28 18:01 part-00001
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rw-rw-r--  1 edward edward    28 2월 28 18:01 .part-00001.crc
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> drwxrwxr-x  2 edward edward  4096 2월 28 18:03 partitions
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> edward@udanax:~/workspace/hama-trunk$ ls -al /tmp/randomgraph/partitions/
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> total 24
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> drwxrwxr-x 2 edward edward 4096 2월 28 18:03 .
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> drwxrwxr-x 3 edward edward 4096 2월 28 18:03 ..
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rwxrwxrwx 1 edward edward 2932 2월 28 18:03 part-00000
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rw-rw-r-- 1 edward edward   32 2월 28 18:03 .part-00000.crc
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rwxrwxrwx 1 edward edward 2955 2월 28 18:03 part-00001
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -rw-rw-r-- 1 edward edward   32 2월 28 18:03 .part-00001.crc
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> edward@udanax:~/workspace/hama-trunk$
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Feb 28, 2013 at 5:27 PM, Edward <edward@udanax.org> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> yes i'll check again
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 28, 2013, at 5:18 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify an observation for me please?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2 files are created from fastgen, part-00000 and part-00001, both ~2.2kb in size.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> In the partition directory below, there is only a single 5.56kb file.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Is it intended for the partitioner to write a single file if you configured two?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It even reads it as two files, strange huh?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Thomas Jungblut <thomas.jungblut@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Will have a look into it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> gen fastgen 100 10 /tmp/randomgraph 1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pagerank /tmp/randomgraph /tmp/pageout
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> did work for me the last time I profiled; maybe the partitioning doesn't partition correctly with the input, or it's something else.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/2/28 Edward J. Yoon <edwardyoon@apache.org>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fastgen input seems not to work for the graph examples.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> edward@edward-virtualBox:~/workspace/hama-trunk$ bin/hama jar examples/target/hama-examples-0.7.0-SNAPSHOT.jar gen fastgen 100 10 /tmp/randomgraph 2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:03 INFO bsp.BSPJobClient: Running job: job_localrunner_0001
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:03 INFO bsp.LocalBSPRunner: Setting up a new barrier for 2 tasks!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient: Current supersteps number: 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient: The total number of supersteps: 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient: Counters: 3
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient:   org.apache.hama.bsp.JobInProgress$JobCounter
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient:     SUPERSTEPS=0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient:     LAUNCHED_TASKS=2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient:   org.apache.hama.bsp.BSPPeerImpl$PeerCounter
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:06 INFO bsp.BSPJobClient:     TASK_OUTPUT_RECORDS=100
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Job Finished in 3.212 seconds
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> edward@edward-virtualBox:~/workspace/hama-trunk$ bin/hama jar examples/target/hama-examples-0.7.0-SNAPSHOT
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hama-examples-0.7.0-SNAPSHOT-javadoc.jar  hama-examples-0.7.0-SNAPSHOT.jar
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> edward@edward-virtualBox:~/workspace/hama-trunk$ bin/hama jar examples/target/hama-examples-0.7.0-SNAPSHOT.jar pagerank /tmp/randomgraph /tmp/pageour
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:29 INFO bsp.FileInputFormat: Total input paths to process : 2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:29 INFO bsp.FileInputFormat: Total input paths to process : 2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:30 INFO bsp.BSPJobClient: Running job: job_localrunner_0001
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:30 INFO bsp.LocalBSPRunner: Setting up a new barrier for 2 tasks!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient: Current supersteps number: 1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient: The total number of supersteps: 1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient: Counters: 6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:   org.apache.hama.bsp.JobInProgress$JobCounter
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:     SUPERSTEPS=1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:     LAUNCHED_TASKS=2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:   org.apache.hama.bsp.BSPPeerImpl$PeerCounter
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:     SUPERSTEP_SUM=4
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:     IO_BYTES_READ=4332
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:     TIME_IN_SYNC_MS=14
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient:     TASK_INPUT_RECORDS=100
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.FileInputFormat: Total input paths to process : 2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.BSPJobClient: Running job: job_localrunner_0001
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO bsp.LocalBSPRunner: Setting up a new barrier for 2 tasks!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO graph.GraphJobRunner: 50 vertices are loaded into local:1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 INFO graph.GraphJobRunner: 50 vertices are loaded into local:0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 13/02/28 10:32:33 ERROR bsp.LocalBSPRunner: Exception during BSP execution!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> java.lang.IllegalArgumentException: Messages must never be behind the vertex in ID! Current Message ID: 1 vs. 50
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at org.apache.hama.graph.GraphJobRunner.iterate(GraphJobRunner.java:279)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at org.apache.hama.graph.GraphJobRunner.doSuperstep(GraphJobRunner.java:225)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at org.apache.hama.graph.GraphJobRunner.bsp(GraphJobRunner.java:129)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at org.apache.hama.bsp.LocalBSPRunner$BSPRunner.run(LocalBSPRunner.java:256)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at org.apache.hama.bsp.LocalBSPRunner$BSPRunner.call(LocalBSPRunner.java:286)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at org.apache.hama.bsp.LocalBSPRunner$BSPRunner.call(LocalBSPRunner.java:211)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         at java.lang.Thread.run(Thread.java:722)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Best Regards, Edward J. Yoon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> @eddieyoon

--
Best Regards, Edward J. Yoon
@eddieyoon