I am new to Spark ML.
I have a project where I am given a mathematical model and need to compute an optimized solution for it.
This looks very similar to a typical Spark MLlib application. However, the key problem for me is that the given model does not obviously belong to any of the model families
provided in Spark ML (classification, regression, clustering, collaborative filtering, dimensionality reduction)...
For a specific application, I think the most important step is to find the proper model for it among the ones MLlib already provides; after that, everything follows the standard
workflow, since the optimizer is already implemented inside MLlib.
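As a toy illustration of what "converting a problem into a given model" can look like (plain Python, no Spark, all names my own): an exponential model y = a·e^(b·x) is not one of the built-in model families, but taking logarithms turns it into ln y = ln a + b·x, which is ordinary linear regression and could then be fitted by a library model such as Spark ML's LinearRegression.

```python
import math

# Toy noise-free data generated from y = a * exp(b * x) with a = 2.0, b = 0.5
a_true, b_true = 2.0, 0.5
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [a_true * math.exp(b_true * x) for x in xs]

# Transform: ln(y) = ln(a) + b * x -- now an ordinary linear regression,
# which a predefined library model could fit directly.
ts = [math.log(y) for y in ys]

# Closed-form simple linear regression on (xs, ts)
n = len(xs)
mx = sum(xs) / n
mt = sum(ts) / n
slope = sum((x - mx) * (t - mt) for x, t in zip(xs, ts)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = mt - slope * mx

# Map the fitted line back to the original model's parameters
a_est, b_est = math.exp(intercept), slope
print(a_est, b_est)  # recovers a = 2.0, b = 0.5 on this noise-free data
```

The point is only the reformulation step: once the problem is rewritten in a form a built-in model understands, the library's own optimizer does the rest.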
However, my question is: how should one proceed, in general, when the specific application does not exactly match any of the models provided in MLlib? Is it usually feasible to
decompose the problem's specific background and convert it into one of the given models?
What is the general way to apply MLlib to such specific problem settings?
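For context on my question: I understand that the older RDD-based mllib package also exposes its optimizers directly (e.g. org.apache.spark.mllib.optimization.GradientDescent and LBFGS, which accept a user-supplied Gradient), so in principle one can minimize a custom differentiable objective without matching any predefined model. A plain-Python sketch of that pattern (my own hypothetical names, no Spark dependency), just to show the shape of the idea:

```python
from typing import Callable, List

def gradient_descent(grad: Callable[[List[float]], List[float]],
                     w0: List[float],
                     step: float = 0.1,
                     iterations: int = 200) -> List[float]:
    """Generic optimizer: it only needs a gradient function, not a model.
    This mirrors the pattern of MLlib's low-level optimizers, which take
    a user-supplied Gradient implementation."""
    w = list(w0)
    for _ in range(iterations):
        g = grad(w)
        w = [wi - step * gi for wi, gi in zip(w, g)]
    return w

# A custom objective that belongs to none of the built-in model families:
# f(w) = (w[0] - 3)^2 + (w[1] + 1)^2, minimized at (3, -1).
def my_gradient(w: List[float]) -> List[float]:
    return [2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)]

w_opt = gradient_descent(my_gradient, [0.0, 0.0])
print(w_opt)  # converges close to [3.0, -1.0]
```

So a sharper version of my question may be: is writing a custom Gradient like this the intended route when nothing in the model zoo fits, or is reformulating the problem into a built-in model generally preferred?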