I know many of you database engineers out there work with very large databases. The problem with such databases is that fetching data from them can be slow.
Why not partition them?
Whatever database technology you use, there is almost certainly some way of partitioning the data in it.
I'm currently partitioning data by certain fields of the database. We thought the performance was good enough, but then I tried partitioning the existing partitions again by date, and performance improved radically!
Try it. Partitions of partitions may not be an easy thing to set up; you might have to modify a lot of other things, like the way you write into them, so keep that in mind before you try it. Don't blame me if you tried what I suggested and it ended up being too much work. I did warn you :P
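Just to make the idea concrete, here's a minimal sketch of what "partitions of partitions" can look like when you roll it by hand: data is split first by a field (here a made-up "region" column) and then by month of a date column, and the write path has to pick the right sub-partition. The table layout, the region field, the monthly granularity, and the use of SQLite are all illustrative assumptions, not my actual setup; engines like PostgreSQL, MySQL, or Oracle have declarative sub-partitioning that handles most of this routing for you.

```python
# Hand-rolled two-level partitioning: by a field (region), then by month.
# Purely an illustration -- real databases offer declarative sub-partitioning.
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")

def partition_name(region: str, day: date) -> str:
    # First level: the field (region). Second level: the month of the date.
    return f"events_{region}_{day:%Y%m}"

def ensure_partition(region: str, day: date) -> str:
    name = partition_name(region, day)
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {name} ("
        "  id INTEGER PRIMARY KEY,"
        "  occurred_on TEXT NOT NULL,"
        "  payload TEXT)"
    )
    return name

def insert_event(region: str, day: date, payload: str) -> None:
    # The write path now has to route each row to the right sub-partition --
    # this is the extra work I warned about above.
    name = ensure_partition(region, day)
    conn.execute(
        f"INSERT INTO {name} (occurred_on, payload) VALUES (?, ?)",
        (day.isoformat(), payload),
    )

def read_month(region: str, day: date):
    # A query that knows the region and the month touches one small table
    # instead of scanning everything.
    name = partition_name(region, day)
    return conn.execute(f"SELECT * FROM {name}").fetchall()

insert_event("eu", date(2011, 5, 3), "login")
insert_event("us", date(2011, 5, 4), "purchase")
print(read_month("eu", date(2011, 5, 1)))
```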