oodt-dev mailing list archives

From: Chris Mattmann <chris.mattm...@gmail.com>
Subject: Re: CAS maturity
Date: Wed, 23 Apr 2014 17:39:11 GMT
Dear Chadi,

Thanks for your question! Yes, the Apache OODT CAS currently handles
large amounts of data, from many hundreds of terabytes up into the
petabyte range. The reality, though, is that it's not necessarily the
CAS doing all of the heavy lifting for the data management pieces in
those environments. CAS works side by side with distributed file
systems (HDFS, NFS, Lustre, Gluster, etc.), along with replicated
databases, catalogs, and search engines (RDBMS, Solr, Lucene,
Cassandra, HBase, Hive, etc.) to manage Big Data, while adding very
limited storage and computational overhead beyond the ingestion,
metadata extraction, data movement, and other step-by-step
orchestration that it performs.
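
To make that concrete, here's a minimal sketch of what a single-file
ingestion looks like through the CAS File Manager's StdIngester. The
File Manager URL, product type, and file path below are assumptions;
adjust them for your own deployment and policy:

import java.io.File;
import java.net.URL;

import org.apache.oodt.cas.filemgr.ingest.StdIngester;
import org.apache.oodt.cas.metadata.Metadata;

public class IngestExample {
    public static void main(String[] args) throws Exception {
        // Local data transfer assumes the client and the File Manager
        // share a file system; use RemoteDataTransferFactory otherwise.
        StdIngester ingester = new StdIngester(
            "org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory");

        // Metadata recorded in the catalog alongside the file; the
        // ProductType value must match a type defined in your policy.
        Metadata met = new Metadata();
        met.addMetadata("ProductType", "GenericFile");

        // Ingest one staged file; the returned string is the product ID
        // the catalog assigned to it.
        String productId = ingester.ingest(
            new URL("http://localhost:9000"),            // File Manager endpoint (assumed)
            new File("/data/staging/obs-20140422.dat"),  // hypothetical file
            met);

        System.out.println("Ingested as product " + productId);
    }
}

From there, the Versioner configured for the product type decides where
the file lands in the archive, which is where the versioning piece you
mention plugs in.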

I would be very happy to discuss any specific use case or concern you
have. Thank you for considering CAS!

Cheers,
Chris

------------------------
Chris Mattmann
chris.mattmann@gmail.com




-----Original Message-----
From: chadi jaber <chadijaber986@hotmail.com>
Reply-To: <user@oodt.apache.org>
Date: Tuesday, April 22, 2014 2:00 PM
To: "user@oodt.apache.org" <user@oodt.apache.org>
Subject: CAS maturity

>Hello,
>I am planning to use the CAS component to implement a file archiving
>facility with versioning. The OODT component seems to have all the
>features I need, but I still have concerns about its ability to cope
>with big data volumes (reaching petabytes). Can I have some feedback
>about OODT CAS use at that scale? Is it mature enough to be used in a
>real-life project?
>Sorry if my questions are too direct :) but the stakes are high
>for me.
>
>Thanks in advance for your help.
>Chadi


