cassandra-commits mailing list archives

From "Aleksey Yeschenko (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-8032) User based request scheduler
Date Sun, 05 Oct 2014 18:42:33 GMT


Aleksey Yeschenko commented on CASSANDRA-8032:

bq. Is the removal of Thrift definite for cassandra-3.0?

It's not being removed (yet). But Thrift is considered 'frozen' now (see the announcement
somewhere in the dev mailing list, a few months ago). That is, we won't accept any patches
for new Thrift-related functionality, or improvements to existing Thrift-related features.
Only major bug fixes.

> User based request scheduler
> ----------------------------
>                 Key: CASSANDRA-8032
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: mck
>            Assignee: mck
>            Priority: Minor
>              Labels: patch
>         Attachments: v1-0001-CASSANDRA-8032-User-based-request-scheduler.txt
> Today only a keyspace based request scheduler exists.
> Post CASSANDRA-4898 it could be possible to implement a request_scheduler based on users
> (from system_auth.credentials) rather than keyspaces. This could offer a finer granularity
> of control, from read-only vs read-write users on keyspaces, to application-dedicated vs
> ad-hoc users. Alternatively it could also offer a granularity larger and easier to work
> with than per keyspace.
> The request scheduler is a useful concept, but I think that setups with enough nodes often
> favour separate clusters rather than either creating separate virtual datacenters or using
> the request scheduler. Giving the request scheduler another, more flexible, implementation
> could especially help those users who don't yet have enough nodes to warrant separate
> clusters, or even separate virtual datacenters. On such smaller clusters Cassandra can
> still be seen as an unstable technology, because poor consumers/schemas can easily affect,
> even bring down, a whole cluster.
> I haven't looked into the feasibility of this within the code, but it comes to mind as
> rather simple, and I would be interested in offering a patch if the idea has merit.
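As a rough illustration of the proposal above, a user-keyed scheduler could throttle concurrent requests per authenticated user rather than per keyspace. The sketch below is hypothetical Java: the class name and the queue/release shape only echo the style of Cassandra's request-scheduler interface, and are not the attached patch or the project's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch of a per-user request scheduler: each authenticated
// user gets a fixed number of concurrent request slots, so one misbehaving
// consumer cannot starve the rest of the cluster's clients.
class UserRequestScheduler
{
    private final int concurrentPerUser;
    private final Map<String, Semaphore> slots = new ConcurrentHashMap<>();

    UserRequestScheduler(int concurrentPerUser)
    {
        this.concurrentPerUser = concurrentPerUser;
    }

    // Block until this user has a free slot, or time out.
    void queue(String user, long timeoutMs) throws TimeoutException, InterruptedException
    {
        // Fair semaphore so waiters for the same user are served in order.
        Semaphore s = slots.computeIfAbsent(user, u -> new Semaphore(concurrentPerUser, true));
        if (!s.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS))
            throw new TimeoutException("request scheduler timeout for user " + user);
    }

    // Return the slot once the request completes.
    void release(String user)
    {
        slots.get(user).release();
    }
}
```

The granularity question raised in the description maps directly onto the map key here: keying by user instead of keyspace means the limit follows the credential, whether that credential is an application account or an ad-hoc human user.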

This message was sent by Atlassian JIRA
