metamodel-dev mailing list archives

From Ashish Mukherjee <>
Subject Composite Data Context on Big Data
Date Mon, 16 Feb 2015 09:53:08 GMT

I was thinking of a specific scenario of using CompositeDataContext with Big Data sources.

I understand that MetaModel performs a number of functions in-memory after
querying the respective data sources. However, if the intermediate
data-sets are large, this operation could be memory-intensive and slow. Has
any thought been given to tackling such a scenario through a clustered
approach in some future release?
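To make the concern concrete, here is a small conceptual sketch (plain Python, not MetaModel's actual code) of what a composite data context has to do when a query spans two sources: fetch rows from each source separately, then join them client-side. The function and source names (`fetch_orders`, `fetch_payments`, `hash_join`) are illustrative assumptions; the point is that the intermediate row sets are fully materialized in the client's memory.

```python
# Conceptual sketch (not MetaModel code): a cross-source query must
# be joined client-side, holding intermediate rows in memory.

def fetch_orders():
    # Stands in for querying source A (e.g. a CSV file).
    return [(1, "alice"), (2, "bob"), (3, "carol")]

def fetch_payments():
    # Stands in for querying source B (e.g. a JDBC database).
    return [(1, 100), (1, 50), (3, 75)]

def hash_join(left, right):
    # Every left-side row is indexed in memory before probing --
    # with large intermediate data-sets this in-memory step is
    # exactly the bottleneck described above.
    index = {}
    for key, name in left:
        index.setdefault(key, []).append(name)
    joined = []
    for key, amount in right:
        for name in index.get(key, []):
            joined.append((key, name, amount))
    return joined

result = hash_join(fetch_orders(), fetch_payments())
print(result)  # [(1, 'alice', 100), (1, 'alice', 50), (3, 'carol', 75)]
```

A clustered approach would partition this join step across machines instead of running it in a single JVM heap, which is the scenario the question raises.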

If that is not on the roadmap, what classes should one look at in order to work on such a feature?

