Hi, I have a pretty big (20 GB+) database that grows quickly, and it has become pretty slow on some of the bigger tables.
So we have been thinking about creating an archive DB for the data that's hardly used (data is mostly accessed during its first week; after that it's rarely touched again, but it does happen occasionally, so we can't delete it).
The plan was to copy the old data to a second, slower database server with cheaper disk space. But the more I work on it, the worse the idea feels: all the foreign keys and auto-generated keys have to be dealt with somehow.
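To make the plan concrete, here is a minimal sketch of the copy-then-delete idea, using Python's stdlib `sqlite3` as a stand-in for the real servers. The `events` table and its columns are hypothetical; the key point is that rows are copied with their original primary keys (rather than re-inserted with fresh auto-generated ones), so any foreign keys that reference them stay valid in the archive.

```python
import sqlite3

def archive_old_rows(live: sqlite3.Connection,
                     archive: sqlite3.Connection,
                     cutoff: str) -> int:
    """Copy rows older than `cutoff` into the archive DB, keeping the
    original primary keys so foreign keys remain consistent, then
    delete them from the live DB. Returns the number of rows moved."""
    rows = live.execute(
        "SELECT id, payload, created_at FROM events WHERE created_at < ?",
        (cutoff,),
    ).fetchall()
    with archive:  # one transaction on the archive side
        archive.executemany(
            "INSERT OR IGNORE INTO events (id, payload, created_at) "
            "VALUES (?, ?, ?)",
            rows,
        )
    with live:  # delete only after the copy has committed
        live.execute("DELETE FROM events WHERE created_at < ?", (cutoff,))
    return len(rows)
```

In a real setup the two connections would point at different servers, and child tables would have to be archived in the same pass so no foreign key is left dangling, which is exactly the bookkeeping that makes the whole idea feel fragile.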
We are pretty bad about rebuilding indexes and other routine maintenance, so maybe if we just did that more regularly we wouldn't need the archive DB at all? Or is there a good way to do this? I'm pretty sure I'm not the first one with this problem…
Also, we have had problems with compressing tables: afterwards the rows no longer seem to get the same generated keys, which breaks all the foreign keys. Is there a solution for this?