ignite-dev mailing list archives

From: dmitrievanthony@gmail.com
Subject: What is the best approach to extend Thin Client functionality?
Date: Mon, 17 Dec 2018 10:02:48 GMT
Currently, the ML/TensorFlow module requires the ability to expose some functionality
for use in C++ code.

As far as I understand, Ignite can currently be accessed from C++ only through the Thin
Client, and the list of operations it supports is very limited. What is the best approach
to making additional Ignite functionality (like ML/TensorFlow) available from C++ code?

I see several ways we can do it:
1. Extend the list of Thin Client operations. Unfortunately, this will lead to overgrowth
of the API, and as a result it will be harder to implement and maintain Thin Clients for
different languages.
2. Use the Thin Client as a "transport layer" and make Ignite functionality calls by
putting commands into and getting responses from a cache (like the command pattern); a
rough C++ sketch of this approach follows the list. It looks a bit confusing to use a
cache with put/get operations as a transport.
3. Add a custom endpoint that listens on a specific port and processes custom commands.
This will introduce a new endpoint and a new protocol.
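
To illustrate option 2, here is a minimal sketch of what the command-pattern approach
could look like from the C++ side. It assumes the existing C++ thin client cache API;
the cache names "ml-requests" and "ml-responses", the request-id scheme, and the command
encoding are all hypothetical, and some server-side listener is assumed to execute the
commands and write responses back:

#include <string>
#include <chrono>
#include <thread>

#include <ignite/thin/ignite_client.h>
#include <ignite/thin/ignite_client_configuration.h>

using namespace ignite::thin;

int main()
{
    IgniteClientConfiguration cfg;
    cfg.SetEndPoints("127.0.0.1:10800");

    IgniteClient client = IgniteClient::Start(cfg);

    // Commands and responses are plain strings here for simplicity.
    cache::CacheClient<std::string, std::string> requests =
        client.GetOrCreateCache<std::string, std::string>("ml-requests");
    cache::CacheClient<std::string, std::string> responses =
        client.GetOrCreateCache<std::string, std::string>("ml-responses");

    // "Call" saveModel by putting a serialized command keyed by a request id.
    std::string reqId("req-1");
    requests.Put(reqId, "saveModel:myModel:<serialized model>");

    // Poll until the server-side command processor writes a response.
    while (!responses.ContainsKey(reqId))
        std::this_thread::sleep_for(std::chrono::milliseconds(100));

    std::string result = responses.Get(reqId);

    return 0;
}

The polling loop and the need to emulate request/response semantics on top of key-value
operations are exactly why this option feels like an abuse of the cache API to me.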

What do you think about these approaches? Could you suggest any other ways?

To make the discussion more concrete, let's say we need two functions available from C++:
"saveModel(name, model)" and "getModel(name)", which are already implemented in Ignite ML
and available via the Java API.
