Broadly, database workloads split into two areas: OLAP (reporting) and OLTP (the transactional day-to-day stuff).
What tends to happen with the more organically grown systems is that they start out as a few tables for transactional work; then, as the database grows, it gets hammered with the BI stick by well-meaning department heads wanting meaningful reports to help manage and grow the business.
A quick win for database performance can be to start a warehouse project: shovel (ETL) the data into the warehouse and run the reports from there. The sell is that the warehouse keeps time slices (snapshots) of the business, so it can provide richer reports. The catch is that warehouses are huge and consume acres of disk space rapidly, so management may struggle to justify the cost.
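To make the "time slice" idea concrete, here's a minimal sketch (SQLite standing in for whatever RDBMS you run; table and column names are made up for illustration): each ETL run stamps the copied rows with a snapshot date, so older slices are kept for trend reports rather than overwritten.

```python
import sqlite3

# Live transactional DB with a stock table (illustrative names).
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE stock (sku TEXT, on_hand INTEGER)")
live.executemany("INSERT INTO stock VALUES (?, ?)", [("A1", 10), ("B2", 4)])

# Warehouse keeps every slice, keyed by snapshot date.
wh = sqlite3.connect(":memory:")
wh.execute(
    "CREATE TABLE stock_snapshot (snapshot_date TEXT, sku TEXT, on_hand INTEGER)"
)

def take_snapshot(src, dst, when):
    """Copy today's live rows into the warehouse, stamped with the date."""
    rows = src.execute("SELECT sku, on_hand FROM stock").fetchall()
    dst.executemany(
        "INSERT INTO stock_snapshot VALUES (?, ?, ?)",
        [(when,) + row for row in rows],
    )
    dst.commit()

take_snapshot(live, wh, "2024-01-01")
live.execute("UPDATE stock SET on_hand = 7 WHERE sku = 'A1'")  # business moves on
take_snapshot(live, wh, "2024-01-02")

# Two slices now coexist, so a report can show how A1 changed over time.
slice_count = wh.execute(
    "SELECT COUNT(DISTINCT snapshot_date) FROM stock_snapshot"
).fetchone()[0]
history = wh.execute(
    "SELECT on_hand FROM stock_snapshot WHERE sku = 'A1' ORDER BY snapshot_date"
).fetchall()
```

This append-only pattern is also why warehouses eat disk so quickly: nothing is ever deleted, every run adds another full slice.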
Or just stage the data for the reports elsewhere: run a nightly hose-down and a fresh ETL. You get no time slices, but you do get a freshly staged reporting DB, and you could sell it as a pre-production environment or a DR system in the making.
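The "hose down and reload" variant is even simpler; a hypothetical sketch (again SQLite, with invented table names) of a nightly refresh that truncates the staging copy and bulk-copies the live rows across:

```python
import sqlite3

# Live transactional DB (illustrative schema).
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
live.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("alice", 10.0), ("bob", 25.5), ("carol", 7.25)],
)
live.commit()

# Staging DB with the same shape, refreshed from scratch each night.
staging = sqlite3.connect(":memory:")
staging.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

def nightly_refresh(src, dst):
    """Hose down the staging table, then copy the live rows across."""
    dst.execute("DELETE FROM orders")  # the nightly hose-down
    rows = src.execute("SELECT id, customer, total FROM orders").fetchall()
    dst.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    dst.commit()

nightly_refresh(live, staging)
staged_count = staging.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Reports then hit `staging` instead of `live`, which is the whole point: no history, but no reporting load on the transactional system either.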
Either way, the important point is that you get the reporting load off your DB.
Then you can run a profiler trace to identify slow-performing queries, and get your DBA to help you decide where to daub on some indexes, or which queries to rewrite so they perform better.
Of course, rebuilding indexes and updating statistics is bread and butter, and it sounds very much like you're already doing that. The key here (badoom tish) is that just because a table is large (deep) doesn't mean it has to perform badly. This isn't 1970.
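For completeness, the SQLite spellings of that bread and butter (your RDBMS will have its own, e.g. `ALTER INDEX ... REBUILD` and `UPDATE STATISTICS` on SQL Server; names below are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE big_deep_table (id INTEGER PRIMARY KEY, payload TEXT)")
db.execute("CREATE INDEX ix_payload ON big_deep_table(payload)")
db.executemany(
    "INSERT INTO big_deep_table (payload) VALUES (?)",
    [("row-%d" % i,) for i in range(500)],
)

db.execute("REINDEX ix_payload")  # rebuild the index from scratch
db.execute("ANALYZE")             # update the planner's statistics

# ANALYZE writes its gathered statistics into sqlite_stat1.
stat_rows = db.execute("SELECT COUNT(*) FROM sqlite_stat1").fetchone()[0]
```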