ignite-issues mailing list archives

From "Alexey Zinoviev (Jira)" <j...@apache.org>
Subject [jira] [Commented] (IGNITE-10286) [ML] Umbrella: Model serving
Date Thu, 03 Oct 2019 10:19:00 GMT

    [ https://issues.apache.org/jira/browse/IGNITE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943487#comment-16943487 ]

Alexey Zinoviev commented on IGNITE-10286:

Yes, we can do that without any issues if the removal of IGFS is frozen.

> [ML] Umbrella: Model serving
> ----------------------------
>                 Key: IGNITE-10286
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10286
>             Project: Ignite
>          Issue Type: New Feature
>          Components: ml
>            Reporter: Yury Babak
>            Assignee: Yury Babak
>            Priority: Major
>             Fix For: 2.8
> We want to have a convenient API for model serving. That means we need a mechanism for
storing models and running inference on them inside Apache Ignite.
> For now, I see two important features: distributed storage for arbitrary models, and inference.
> From my point of view, we could use a built-in (predefined) cache as the model storage,
and use the Service Grid for model inference. We could implement a "ModelService" that provides
access to this storage, returns the list of all suitable models (including model metrics and other
information about each model), lets the caller choose one (or several), and runs inference through the service.
> Models imported from TF should also use the same mechanisms for storage and inference.
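
The design described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual Ignite ML API: the class and method names (`StoredModel`, `ModelService`, `listModels`, `infer`) are invented for this sketch, a plain `HashMap` stands in for the proposed built-in distributed cache, and a local object stands in for a Service Grid service.

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical sketch: a model kept in storage together with its metrics.
class StoredModel {
    final String name;
    final double accuracy;                  // example metric stored alongside the model
    final Function<double[], Double> infer; // the trained model itself

    StoredModel(String name, double accuracy, Function<double[], Double> infer) {
        this.name = name;
        this.accuracy = accuracy;
        this.infer = infer;
    }
}

// Hypothetical stand-in for the proposed "ModelService" deployed on the Service Grid.
class ModelService {
    // In the proposal this would be a predefined Ignite cache; here a plain map.
    private final Map<String, StoredModel> storage = new HashMap<>();

    void save(StoredModel m) {
        storage.put(m.name, m);
    }

    // Return all stored models with their metrics so a caller can choose one.
    List<StoredModel> listModels() {
        return new ArrayList<>(storage.values());
    }

    // Run inference with a chosen model directly through the service.
    double infer(String modelName, double[] features) {
        return storage.get(modelName).infer.apply(features);
    }
}
```

A caller would then list the models, pick one by its metrics (e.g. the highest accuracy), and invoke inference through the service, which matches the "receive the list, choose one, infer" flow described in the issue.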

This message was sent by Atlassian Jira
