ignite-user mailing list archives

From Dmitry Pavlov <dpavlov....@gmail.com>
Subject Re: Inserting data into Ignite got stuck when memory is full with persistent store enabled.
Date Tue, 17 Oct 2017 12:56:57 GMT
Hi Ray,

I’m also trying to reproduce this behaviour, but for 20M entries it
works fine on Ignite 2.2.

It is expected that in-memory-only mode works faster, because memory write
speed is several orders of magnitude higher than disk write speed.

What type of disk is installed in the servers? Is it an HDD or an SSD? What
is its random write speed?

What if we estimate the key (6 fields) + value (12 fields) size, add 300
bytes of overhead for each entry, and then multiply by 1.3 for index
overhead? Multiplying that by 550M entries gives the approximate number of
bytes required. Dividing this planned capacity by the disk write speed then
gives an estimate in hours for such an upload. Of course, this estimate
will not be realistic, since the indexes need to be rewritten a large
number of times; the real upload time will be much longer than estimated.
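
For illustration, here is a rough sketch of that arithmetic in Java. The
per-field sizes and the disk write speed below are assumptions, not your
actual numbers; please substitute the real values:

    // Back-of-the-envelope estimate of raw upload volume and time.
    // All field sizes and the disk speed here are assumptions.
    public class CapacityEstimate {
        public static void main(String[] args) {
            long keyBytes = 6 * 8;        // 6 key fields, ~8 bytes each (assumption)
            long valBytes = 12 * 16;      // 12 value fields, ~16 bytes each (assumption)
            long overhead = 300;          // per-entry overhead in bytes
            double indexFactor = 1.3;     // extra volume for indexes
            long entries = 550_000_000L;  // total records to ingest

            double totalBytes = (keyBytes + valBytes + overhead) * indexFactor * entries;
            double diskMBps = 100.0;      // assumed disk write speed, MB/s

            double hours = totalBytes / (diskMBps * 1024 * 1024) / 3600;
            System.out.printf("~%.0f GB to write, ~%.1f hours at %.0f MB/s%n",
                totalBytes / (1024 * 1024 * 1024), hours, diskMBps);
        }
    }

Even this lower bound ignores the repeated index rewrites mentioned above,
so the real time will be noticeably higher.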

Could you please provide estimates of the field lengths in the key and value?

Sincerely,
Dmitriy Pavlov


Tue, 17 Oct 2017 at 7:57, Ray <rayliu@cisco.com>:

> The above log was captured when the data ingestion slowed down, not when
> it got stuck completely.
> The job has been running for two and a half hours now, and the total
> number of records to be ingested is 550 million.
> During the last ten minutes, fewer than one million records have been
> ingested into Ignite.
>
> Write performance with IgniteDataStreamer is really slow with the
> persistent store enabled (see the sketch after this quote).
> I also ran this test on another cluster without the persistent store
> enabled; it took about forty minutes to save all 550 million records
> using the same code.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
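
For reference, a minimal IgniteDataStreamer usage sketch of the kind
presumably used here. The cache name, key/value types, config path, and
tuning values are assumptions, not the actual ingestion code:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class IngestExample {
        public static void main(String[] args) {
            // Assumes a node config with the target cache already defined.
            try (Ignite ignite = Ignition.start("ignite-config.xml")) {
                try (IgniteDataStreamer<Long, String> streamer =
                         ignite.dataStreamer("recordsCache")) { // hypothetical cache name
                    streamer.perNodeBufferSize(1024);           // entries buffered per node
                    streamer.perNodeParallelOperations(8);      // concurrent batches per node

                    for (long i = 0; i < 550_000_000L; i++)
                        streamer.addData(i, "record-" + i);
                } // close() flushes any remaining buffered entries
            }
        }
    }

With persistence enabled, the streamed batches must also reach the disk
(write-ahead log and checkpointed pages), which is why the disk's write
speed matters so much here.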
