[ https://issues.apache.org/jira/browse/FLINK-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:commenttabpanel&focusedCommentId=14662173#comment-14662173 ]
ASF GitHub Bot commented on FLINK-1901:

Github user sachingoel0101 commented on the pull request:
https://github.com/apache/flink/pull/949#issuecomment128781613
I have worked on this problem before. The idea is to divide the data into blocks and compute,
for each block, the probability that an element is selected from it.
Thus, suppose there are blocks B_1, B_2, ..., B_N with probabilities P_1, P_2, ..., P_N.
You then sample k points by first sampling from the distribution {P_1, P_2, ..., P_N} to
determine how many elements you require from each block. After that, you select the required
number of points from each block and take the union.
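In a shared-memory setting, a minimal sketch of this two-phase scheme could look like the
following (plain Scala; the names, and drawing uniformly with replacement inside each block,
are illustrative choices of mine, not the PR's code):

```scala
import scala.util.Random

object TwoPhaseSampling {
  // Sample k elements (with replacement) from `blocks`; the probability of drawing
  // from block i is P_i = |B_i| / sum_j |B_j|. Blocks are assumed non-empty.
  def sample[T](blocks: Seq[IndexedSeq[T]], k: Int, seed: Long): Seq[T] = {
    val rng = new Random(seed)
    val sizes = blocks.map(_.size.toDouble)
    val total = sizes.sum
    // Cumulative distribution over blocks: P_1, P_1 + P_2, ..., 1.0
    val cdf = sizes.map(_ / total).scanLeft(0.0)(_ + _).tail

    // Phase 1: draw k block indices from {P_1, ..., P_N} and count how many
    // elements each block has to contribute.
    val perBlock = Array.fill(blocks.size)(0)
    for (_ <- 1 to k) {
      val r = rng.nextDouble()
      val i = cdf.indexWhere(r <= _)
      // Guard against the last cumulative value falling just short of 1.0.
      perBlock(if (i >= 0) i else blocks.size - 1) += 1
    }

    // Phase 2: select the required number of points from each block and take the union.
    blocks.zipWithIndex.flatMap { case (block, i) =>
      Seq.fill(perBlock(i))(block(rng.nextInt(block.size)))
    }
  }

  def main(args: Array[String]): Unit = {
    val blocks = Seq(IndexedSeq(1, 2, 3), IndexedSeq(4, 5, 6, 7), IndexedSeq(8, 9))
    println(sample(blocks, k = 5, seed = 42L))
  }
}
```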
This is pretty easy to implement in a shared-memory system (see the sketch above), but it is
harder in a distributed setting. I tried the following approach a while ago, although I didn't
quite finish working on it:
blockedData = data -> (block_id, data)
blockNumbers = (block_id, data) -> (block_id, count)
(1...k) -> (list of block ids we'll be sampling from)
After this, I tried broadcasting the list and selecting the required number of elements
from each block, which can be done quite easily. But what if k is very large?
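A rough sketch of that pipeline against the DataSet API could look like the following. This is
not the pull request's implementation: the use of the parallel subtask index as the block id,
the broadcast variable name, uniform selection with replacement inside each block, and all other
names are illustrative assumptions. One way to keep the broadcast independent of k is to ship
(block_id, quota) pairs instead of the raw list of k block ids, which is what the sketch does.

```scala
import org.apache.flink.api.common.functions.RichMapPartitionFunction
import org.apache.flink.api.scala._
import org.apache.flink.util.Collector

import scala.collection.JavaConverters._
import scala.util.Random

object DistributedBlockSampling {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val data: DataSet[Int] = env.fromCollection(1 to 1000)
    val k = 10
    val seed = 42L

    // blockedData = data -> (block_id, data): use the parallel subtask index as the block id.
    val blockedData: DataSet[(Int, Int)] = data.mapPartition(
      new RichMapPartitionFunction[Int, (Int, Int)] {
        override def mapPartition(values: java.lang.Iterable[Int],
                                  out: Collector[(Int, Int)]): Unit = {
          val blockId = getRuntimeContext.getIndexOfThisSubtask
          val it = values.iterator()
          while (it.hasNext) out.collect((blockId, it.next()))
        }
      })

    // blockNumbers = (block_id, data) -> (block_id, count)
    val blockNumbers: DataSet[(Int, Long)] =
      blockedData.map(t => (t._1, 1L)).groupBy(0).sum(1)

    // Draw k block ids from {P_1, ..., P_N} on a single node, but emit them as
    // (block_id, quota) pairs so the broadcast stays at one entry per block.
    val blockQuotas: DataSet[(Int, Int)] = blockNumbers.reduceGroup {
      (counts: Iterator[(Int, Long)], out: Collector[(Int, Int)]) =>
        val rng = new Random(seed)
        val c = counts.toIndexedSeq
        val total = c.map(_._2).sum.toDouble
        val cdf = c.map(_._2 / total).scanLeft(0.0)(_ + _).tail
        val quota = Array.fill(c.size)(0)
        for (_ <- 1 to k) {
          val r = rng.nextDouble()
          val i = cdf.indexWhere(r <= _)
          quota(if (i >= 0) i else c.size - 1) += 1
        }
        c.indices.foreach(i => out.collect((c(i)._1, quota(i))))
    }

    // Broadcast the quotas and let every block select its own share. This assumes the
    // (block_id, data) pairs were not repartitioned, so each partition holds one block.
    val sampled: DataSet[Int] = blockedData.mapPartition(
      new RichMapPartitionFunction[(Int, Int), Int] {
        override def mapPartition(values: java.lang.Iterable[(Int, Int)],
                                  out: Collector[Int]): Unit = {
          val quotas = getRuntimeContext
            .getBroadcastVariable[(Int, Int)]("blockQuotas").asScala.toMap
          val rng = new Random(seed + getRuntimeContext.getIndexOfThisSubtask)
          val elems = values.asScala.toIndexedSeq
          elems.headOption.foreach { case (blockId, _) =>
            for (_ <- 1 to quotas.getOrElse(blockId, 0))
              out.collect(elems(rng.nextInt(elems.size))._2)
          }
        }
      }).withBroadcastSet(blockQuotas, "blockQuotas")

    sampled.print()
  }
}
```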
> Create sample operator for Dataset
> 
>
> Key: FLINK-1901
> URL: https://issues.apache.org/jira/browse/FLINK-1901
> Project: Flink
> Issue Type: Improvement
> Components: Core
> Reporter: Theodore Vasiloudis
> Assignee: Chengxiang Li
>
> In order to be able to implement Stochastic Gradient Descent and a number of other machine
> learning algorithms we need to have a way to take a random sample from a Dataset.
> We need to be able to sample with or without replacement from the Dataset, choose the
> relative size of the sample, and set a seed for reproducibility.

This message was sent by Atlassian JIRA
(v6.3.4#6332)
