hadoop-common-user mailing list archives

From "Hyunsik Choi" <hyunsikc...@korea.ac.kr>
Subject Re: A Scale-Out RDF Store for Distributed Processing on Map/Reduce
Date Tue, 21 Oct 2008 02:36:04 GMT
Hi Colin,

I'm one of the members behind the RDF proposal. I have one question
about Metaweb: do you (or your company) have any plans to open-source
this system?

Hyunsik Choi

-----------------------------------------------------------------
Hyunsik Choi (Ph.D. Student)

Laboratory of Prof. Yon Dohn Chung
Database & Information Systems Group
Dept. of Computer Science & Engineering, Korea University
1, 5-ga, Anam-dong, Seongbuk-gu, Seoul, 136-713, Republic of Korea

TEL : +82-2-3290-3580
-----------------------------------------------------------------

On Tue, Oct 21, 2008 at 10:23 AM, Colin Evans <colin@metaweb.com> wrote:
> Hi Edward,
> At Metaweb, we're experimenting with storing raw triples in HDFS flat files,
> and have written a simple query language and planner that executes the
> queries with chained map-reduce jobs.  This approach works well for
> warehousing triple data, and doesn't require HBase.  Queries may take a few
> minutes to execute, but the system scales for very large datasets and result
> sets because it doesn't try to resolve queries in memory.  We're currently
> testing with more than 150 million triples and have been happy with the results.
>
> -Colin
>
>
> Edward J. Yoon wrote:
>>
>> Hi all,
>>
>> It has been a good long while since this RDF proposal first came up, and
>> we'd like to settle down to research it again. I've attached our proposal;
>> we'd love to hear your feedback and stories!
>>
>> Thanks.
>>
>
>
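
For readers unfamiliar with the pattern Colin describes, below is a minimal,
hypothetical sketch of one stage of such a pipeline. It is not Metaweb's code:
it assumes triples live in HDFS as tab-separated subject/predicate/object lines,
and it simply selects triples matching one predicate and groups the matches by
subject. The class name, the "query.predicate" setting, and the input layout are
all illustrative; a query planner would chain several jobs like this, with each
stage's output directory becoming the next stage's input.

// Hypothetical single stage of a chained MapReduce query pipeline over flat
// triple files in HDFS (one "subject<TAB>predicate<TAB>object" per line).
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PredicateSelect {

  public static class SelectMapper extends Mapper<Object, Text, Text, Text> {
    private String wantedPredicate;

    @Override
    protected void setup(Context ctx) {
      // The predicate to select is passed through the job configuration.
      wantedPredicate = ctx.getConfiguration().get("query.predicate");
    }

    @Override
    public void map(Object offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      String[] spo = line.toString().split("\t", 3);
      if (spo.length == 3 && spo[1].equals(wantedPredicate)) {
        // Emit subject -> object so the shuffle groups matches by subject.
        ctx.write(new Text(spo[0]), new Text(spo[2]));
      }
    }
  }

  public static class GroupReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text subject, Iterable<Text> objects, Context ctx)
        throws IOException, InterruptedException {
      // Write out all objects seen for this subject; a later stage could join
      // this output against another selection keyed on the same subject.
      for (Text object : objects) {
        ctx.write(subject, object);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("query.predicate", args[2]);   // e.g. <http://example.org/knows>
    Job job = Job.getInstance(conf, "predicate-select");
    job.setJarByClass(PredicateSelect.class);
    job.setMapperClass(SelectMapper.class);
    job.setReducerClass(GroupReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // next stage's input
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Joins fall out of the same shuffle mechanism: map two such selections onto a
shared join key and combine them in the reducer, spilling intermediate results
to HDFS between stages rather than resolving the query in memory, which is why
the approach scales to very large result sets at the cost of per-query latency.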
