flink-user mailing list archives

From Jörn Franke <jornfra...@gmail.com>
Subject Re: Complexity of Flink
Date Sat, 14 Apr 2018 12:23:31 GMT
That is the complexity of the source code, which is easy to obtain: just fork it on GitHub and run it through SonarQube or Codacy cloud. I am not sure whether the Flink project does this already. I do this for my own open source libraries (hadoopcryptoledger and hadoopoffice), which also provide Flink modules.
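As a minimal stand-in for the kind of statistical metrics tools such as SonarQube or cloc report, a short script can walk a cloned source tree and tally file counts and line counts per extension. This is a hedged sketch, not what any of those tools actually do internally; the function name `source_stats` and the default extensions are illustrative assumptions.

```python
import os
from collections import Counter

def source_stats(root, extensions=(".java", ".scala")):
    """Tally number of files and total lines per extension under root.

    A rough proxy for 'number and size of source files'; it does not
    distinguish code from comments or blank lines the way cloc does.
    """
    files = Counter()
    lines = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1]
            if ext in extensions:
                files[ext] += 1
                path = os.path.join(dirpath, name)
                # errors="ignore" so odd encodings don't abort the count
                with open(path, encoding="utf-8", errors="ignore") as f:
                    lines[ext] += sum(1 for _ in f)
    return files, lines
```

Running it over a checkout of the Flink repository (e.g. `source_stats("flink")`) would give per-language file and line counts; finer-grained metrics such as methods per class or cyclomatic complexity are better left to the dedicated tools mentioned above.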

> On 14. Apr 2018, at 14:17, Esa Heikkinen <esa.heikkinen@student.tut.fi> wrote:
> Yes, you are right.
> But what if I only focus on the statistical complexity of the Flink sources? E.g. the number of
libraries, functions/classes/methods, the number and size of source files, and so on?
> How easy is it to get this information?
> Best, Esa
> -----Original Message-----
> From: Jörn Franke <jornfranke@gmail.com> 
> Sent: Saturday, April 14, 2018 1:43 PM
> To: Esa Heikkinen <esa.heikkinen@student.tut.fi>
> Cc: user@flink.apache.org
> Subject: Re: Complexity of Flink
> I think this always depends. I found Flink cleaner than other Big Data platforms,
and with some experience it is rather easy to deploy.
> However, how do you measure complexity? How do you plan to account for other components
(e.g. deploying in the cloud, deploying locally in a Hadoop cluster, etc.)?
> Then, how do you take into account the experience of the team leader and the people deploying
it, issues with unqualified external service providers, contracts, etc.?
> Those are the variables that you need to define and then validate (case study and/or
>> On 14. Apr 2018, at 12:24, Esa Heikkinen <heikkin2@student.tut.fi> wrote:
>> Hi
>> I am writing a scientific article, that is related to deployment of Flink.
>> I would be very interested to know how to measure the complexity of the Flink platform
or framework.
>> Does anyone know of good articles about that?
>> I think it is not always so simple to deploy and use.
>> Best, Esa
