cassandra-commits mailing list archives

From Andrés de la Peña (JIRA) <>
Subject [jira] [Commented] (CASSANDRA-12245) initial view build can be parallel
Date Wed, 23 Aug 2017 11:13:00 GMT


Andrés de la Peña commented on CASSANDRA-12245:

[Here|] is a new
version of the patch addressing the review comments. The updated dtests are [here|].

bq. It would be nice to maybe try to reuse the {{Splitter}} methods if possible, so we can
reuse tests, or if that's not straightforward maybe put the methods on splitter and add some
tests to make sure it's working correctly.

I have moved the [methods to split the ranges|]
to the {{Splitter}}, reusing its [{{valueForToken}}|]
method. Tests [here|].
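For illustration, the kind of equal-width splitting that {{valueForToken}}-style interpolation enables can be sketched over plain longs. This is a hedged sketch, not the actual {{Splitter}} API: the {{Range}} record, the {{split}} method, and the use of longs instead of tokens are all illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of splitting a token range (modeled as plain longs) into N
// contiguous, roughly equal subranges. Names/types are illustrative.
public class RangeSplitSketch
{
    /** A [left, right) token range over longs. */
    record Range(long left, long right) {}

    static List<Range> split(Range range, int parts)
    {
        List<Range> result = new ArrayList<>(parts);
        long span = range.right() - range.left();
        long start = range.left();
        for (int i = 1; i <= parts; i++)
        {
            // valueForToken-style interpolation: the i-th boundary sits at
            // left + span * i / parts, spreading the remainder evenly.
            long end = range.left() + span * i / parts;
            result.add(new Range(start, end));
            start = end;
        }
        return result;
    }

    public static void main(String[] args)
    {
        System.out.println(split(new Range(0, 100), 3));
    }
}
```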

bq. Can probably remove the generation field from the builds in progress table and remove
this comment


bq. {{views_builds_in_progress_v2}} sounds a bit hacky, so perhaps we should call it {{system.view_builds_in_progress}}
(remove the s) and also add a NOTICE entry informing the previous table was replaced and data
files can be removed.

Renamed to {{system.view_builds_in_progress}}. Added a [NEWS.txt entry|]
informing about the replacement.

bq. I'm a bit concerned about starving the compaction executor for a long period during view
build of large base tables, so we should probably have another option like {{concurrent_view_builders}}
with a conservative default and perhaps control the concurrency at the {{ViewBuilderController}}.

Agree. I have added a new dedicated [executor|]
in the {{CompactionManager}}, similar to the executors used for validation and cache cleanup.
The concurrency of this executor is determined by the new config property [{{concurrent_materialized_view_builders}}|],
which defaults to a perhaps overly conservative value of {{1}}. This property can be modified
through both JMX and the new [{{setconcurrentviewbuilders}}|]
and [{{getconcurrentviewbuilders}}|]
nodetool commands. These commands are tested [here|].
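For illustration, a pool whose concurrency can be resized at runtime (the way the JMX/nodetool setter works) can be sketched with a plain {{ThreadPoolExecutor}}. The class and method names below are hypothetical stand-ins, not Cassandra's actual code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of a dedicated view-build executor whose concurrency can be
// changed at runtime without a restart.
public class ViewBuildExecutorSketch
{
    // Conservative default of 1, matching the config property's default.
    private final ThreadPoolExecutor executor =
        new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

    /** JMX/nodetool-style setter: resize the pool on the fly. */
    public void setConcurrentViewBuilders(int n)
    {
        if (n <= 0)
            throw new IllegalArgumentException("concurrent view builders must be > 0");
        // Grow max before core; shrink core before max, so the invariant
        // core <= max holds at every step.
        if (n > executor.getMaximumPoolSize())
        {
            executor.setMaximumPoolSize(n);
            executor.setCorePoolSize(n);
        }
        else
        {
            executor.setCorePoolSize(n);
            executor.setMaximumPoolSize(n);
        }
    }

    public int getConcurrentViewBuilders()
    {
        return executor.getMaximumPoolSize();
    }
}
```

The two-step resize ordering matters because {{ThreadPoolExecutor}} rejects a core size larger than the current maximum.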

I'm not sure whether it still makes sense for the builder task to extend {{CompactionInfo.Holder}}.
If so, I'm also not sure how to use {{prevToken.size(range.right)}} (which returns a {{double}})
to create {{CompactionInfo}} objects. WDYT?

bq. ViewBuilder seems to be reimplementing some of the logic of PartitionRangeReadCommand,
so I wonder if we should take this chance to simplify and use that instead of manually constructing
the commands via ReducingKeyIterator and multiple SinglePartitionReadCommands? We can totally
do this in other ticket if you prefer.

I would prefer to do this in another ticket.

bq. Perform view marking on ViewBuilderController instead of ViewBuilder

I have moved the marking of system tables (and the retries in case of failure) from the {{ViewBuilderTask}}
to the {{ViewBuilder}}, using a callback to do the marking. I think the code is clearer this way.

bq. Updating the view built status at every key is perhaps a bit inefficient and unnecessary,
so perhaps we should update it every 1000 keys or so.

Done, it is updated [every 1000 keys|].
It doesn't seem to make a significant difference in the small benchmarks that I have run.
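The batching itself is simple; below is a minimal self-contained sketch. The {{markProgress}} callback is a hypothetical stand-in for the system-table update, not the actual API:

```java
import java.util.function.LongConsumer;

// Sketch of batched progress marking: persist the "built up to N keys"
// marker only every 1000 keys rather than on every key, plus once more
// when the build completes.
public class BatchedProgressSketch
{
    static final int KEYS_PER_UPDATE = 1000;

    private long keysProcessed = 0;
    private final LongConsumer markProgress; // persists the progress marker

    BatchedProgressSketch(LongConsumer markProgress)
    {
        this.markProgress = markProgress;
    }

    void onKeyBuilt()
    {
        keysProcessed++;
        if (keysProcessed % KEYS_PER_UPDATE == 0)
            markProgress.accept(keysProcessed);
    }

    /** Always persist the final count so no progress is lost at the end. */
    void onBuildComplete()
    {
        markProgress.accept(keysProcessed);
    }
}
```

Skipping intermediate updates only risks redoing at most the last batch of keys on restart, which is why the coarser granularity is safe.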

bq. Would be nice to update the {{interrupt_build_process_test}} to stop halfway through the
build (instead of the start of the build) and verify it's being resumed correctly with
the new changes.

Updated [here|].
It also uses [a byteman script|]
to make sure that the MV build hasn't finished before the cluster is stopped, which otherwise
would be likely to happen.

CI results seem ok, there are no failures in unit tests and the failing dtests seem unrelated.

> initial view build can be parallel
> ----------------------------------
>                 Key: CASSANDRA-12245
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Materialized Views
>            Reporter: Tom van der Woerdt
>            Assignee: Andrés de la Peña
>             Fix For: 4.x
> On a node with lots of data (~3TB) building a materialized view takes several weeks,
which is not ideal. It's doing this in a single thread.
> There are several potential ways this can be optimized:
>  * do vnodes in parallel, instead of going through the entire range in one thread
>  * just iterate through sstables, not worrying about duplicates, and include the timestamp
of the original write in the MV mutation. since this doesn't exclude duplicates it does increase
the amount of work and could temporarily surface ghost rows (yikes) but I guess that's why
they call it eventual consistency. doing it this way can avoid holding references to all tables
on disk, allows parallelization, and removes the need to check other sstables for existing
data. this is essentially the 'do a full repair' path
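The first option above (processing local ranges in parallel) is roughly the approach taken by the patch; a minimal sketch of the idea, with hypothetical names ({{buildRange}} stands in for the per-range view build):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of "do vnodes in parallel": submit one build task per local
// token range instead of walking the whole ring in a single thread.
public class ParallelViewBuildSketch
{
    static void buildAll(List<String> localRanges, int concurrency) throws Exception
    {
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        try
        {
            List<Future<?>> futures = localRanges.stream()
                                                 .map(r -> pool.submit(() -> buildRange(r)))
                                                 .toList();
            for (Future<?> f : futures)
                f.get(); // propagate any per-range failure
        }
        finally
        {
            pool.shutdown();
        }
    }

    static void buildRange(String range)
    {
        // placeholder: read the base table for this range and apply MV mutations
        System.out.println("built " + range);
    }
}
```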

This message was sent by Atlassian JIRA
