I am currently making a relatively deep (read: nerdy) study of how to optimally load a huge data warehouse.
Based on my experience with ETL tools, SQL tuning and index optimization, I know there are several approaches one may take when loading a billion-row data warehouse with million-row dimension tables. However, very few of them are optimal or even viable…
One question that must be answered to find the optimal strategy is the following:
Let us assume we have an optimally configured SQL Server system (all sp_configure settings and trace flags optimized for our hardware).
Furthermore, assume we have the following:
- A 1 Gbit full-duplex network card with the best drivers available
- The best disk system money can buy (Real spindles – not Solid state)
- The fastest CPU of each class: x86, x64 or IA-64 architecture (there is no parallel execution happening here – yet)
- Plenty of RAM allocated to sqlservr.exe
Now, let us run these three simple statements against our warehouse database:
- SELECT SurrogateKey FROM DimensionTable WHERE EntityKey = @ek
- UPDATE DimensionTable SET EntityKey = @ek WHERE SurrogateKey = @sk
- INSERT INTO DimensionTable (SurrogateKey, EntityKey) VALUES (@sk, @ek)
These are (simplified) versions of the queries used to perform, respectively, key lookups, type 1 changes and type 2 changes against a dimension.
Assume the following about the execution of the above statements:
- We are using the OLE DB provider to communicate with SQL Server
- Optimal indexes are in place to support all queries
- No page splitting occurs
- The fill factor is 100% on all indexes
- The query plan for all statements is in the plan cache
- The SELECT can be serviced from the buffer pool by looking at only 3 pages in the index B-tree (optimistic guess)
- The UPDATE and the INSERT each require only one I/O operation (optimistic guess)
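Under these assumptions, a rough server-side cost model per statement can be sketched as follows. This is only a back-of-envelope estimate: the two latency constants are my own guesses, not measurements from any real system.

```python
# Rough server-side cost per statement under the assumptions above.
# PAGE_LOOKUP_US and RANDOM_IO_US are assumed figures, not measurements.
PAGE_LOOKUP_US = 1.0     # assumed cost of touching one cached B-tree page
RANDOM_IO_US = 5000.0    # assumed ~5 ms for one random I/O on a spindle

select_us = 3 * PAGE_LOOKUP_US   # three cached page lookups (per the assumption)
update_us = RANDOM_IO_US         # one physical I/O (per the assumption)
insert_us = RANDOM_IO_US         # likewise one physical I/O

for name, us in [("SELECT", select_us), ("UPDATE", update_us), ("INSERT", insert_us)]:
    print(f"{name}: ~{us:,.0f} us/row -> ~{1e6 / us:,.0f} rows/sec ceiling")
```

Even this optimistic model shows the asymmetry: the cached SELECT is orders of magnitude cheaper than the two statements that must touch the disk.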
For our warehouse architecture we can consider two viable scenarios:
- Statements executed from another machine on the network (crossing the 1 Gbit full-duplex link)
- Statements executed on the server itself (using in-memory transport where possible)
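To get a feel for the difference between the two scenarios in a row-by-row load, here is a rough comparison. Both round-trip figures are assumptions chosen for illustration, not measured values:

```python
# Rough comparison of the two scenarios for a row-by-row load of one
# million dimension rows. Both round-trip latencies are assumed figures.
ROWS = 1_000_000
NETWORK_RTT_US = 500.0   # assumed round trip per statement over the 1 Gbit link
LOCAL_RTT_US = 50.0      # assumed round trip via local in-memory transport

net_hours = ROWS * NETWORK_RTT_US / 1e6 / 3600
local_hours = ROWS * LOCAL_RTT_US / 1e6 / 3600
print(f"Network scenario: {net_hours:.2f} h for {ROWS:,} rows")
print(f"Local scenario:   {local_hours:.2f} h for {ROWS:,} rows")
```

Whatever the exact numbers turn out to be, the point stands: in a row-by-row load it is round-trip latency, not raw bandwidth, that dominates.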
Now my question is: How fast can these statements be if we have the best software, tuning and hardware available?…
Or in Data Warehouse terms:
"What is the absolute lowest time used pr. row when loading a dimension table using a naïve (read: straightforward) approach to dimension loading"
BTW: (An example of such a naïve approach is the one employed by the SSIS 2005 SCD transformation)
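For concreteness, the naïve per-row pattern that the three statements implement can be sketched as a toy in-memory model. The dicts and function names below are purely illustrative and have nothing to do with SSIS internals:

```python
# Toy in-memory model of the naive per-row dimension-loading pattern.
# The dicts stand in for DimensionTable and its indexes; all names are
# illustrative, not from SSIS or any real API.
dimension = {}   # SurrogateKey -> (EntityKey, attributes)
by_entity = {}   # EntityKey -> current SurrogateKey (the lookup index)
next_sk = [1]    # surrogate key generator

def key_lookup(ek):
    """The SELECT: find the current surrogate key for an entity key."""
    return by_entity.get(ek)

def type1_change(sk, attrs):
    """The UPDATE: overwrite attributes in place, keeping the same key."""
    ek, _ = dimension[sk]
    dimension[sk] = (ek, attrs)

def type2_change(ek, attrs):
    """The INSERT: add a new row version under a fresh surrogate key."""
    sk = next_sk[0]
    next_sk[0] += 1
    dimension[sk] = (ek, attrs)
    by_entity[ek] = sk
    return sk
```

Every incoming source row triggers a key_lookup followed by either a type1_change or a type2_change, i.e. one or two round trips per row; that per-row chattiness is exactly what makes the naïve approach slow.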
In a later post I will explore a non-naïve approach which I consider optimal for loading dimensions of any size.