
Transforming Source Keys to Real Keys – Part 2: Using Maps To Fix Key Problems

In part 1 of this post, I introduced the idea of map tables. These tables serve as an abstraction between the source systems and the entities in the data warehouse. In this post, I will describe how you can use the maps to correct for source key errors.

Using the taxonomy of key pathologies described earlier, I will walk you through some examples of map usage.

Continue reading…


Transforming Source Keys to Real Keys – Part 1: Introducing Map tables

I have ranted enough about the quality, or lack thereof, of keys in source systems. It is time to look at how we will practically go about transforming the dirty source data into the final DW model. What I shall propose is a de-coupling of the source entities from the data warehouse entities. As will be shown, this leads to an interesting ETL pattern that can either address, or properly encapsulate, a variety of source problems.

Continue reading…


An Overview of Source Key Pathologies

I previously made the point that source systems cannot be trusted to generate good keys. In this post, I will explore the pollution you may encounter as you dip your feet into the oil-slicked ocean of data quality.

Let us look at what can go wrong with keys:

Composite Keys

This is perhaps the most common pathology. It occurs when a programmer believes it is better to use a composite key than a single column key. This is rarely motivated by thoughts of sharding or of worries that the 64 bit integer space may run out. Normally, it is driven by nothing but laziness.

Composite keys have several problems:

  • They are difficult for junior SQL coders to use, especially in join conditions and IN / NOT IN clauses in filters. Wrong usage can lead to wrong results that go unnoticed (a sketch follows after this list).
  • They typically perform much worse than single column keys, which matters in a data warehouse.
  • It is difficult to create good statistics on composite keys to serve queries well. This can lead to poor query plans and unpredictable performance.

There ARE cases where composite keys make sense in a source system, but given the trouble they cause, the burden of proof rests on the designer of the composite key to show why it is superior to a single column key. As the ETL developer, you typically just have to accept the status quo.
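To make the first bullet concrete, here is a sketch of how a composite key trips up an anti-join (the table and column names are made up for illustration). T-SQL has no tuple-valued NOT IN, so a common but incorrect workaround filters each key column independently:

-- Hypothetical tables keyed on (OrderId, LineNo)
-- WRONG: treats each key column independently and silently excludes too many rows
SELECT s.OrderId, s.LineNo
FROM staging.OrderLines AS s
WHERE s.OrderId NOT IN (SELECT OrderId FROM dw.OrderLines)
  AND s.LineNo  NOT IN (SELECT LineNo  FROM dw.OrderLines);

-- Correct: compare the full composite key with NOT EXISTS
SELECT s.OrderId, s.LineNo
FROM staging.OrderLines AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dw.OrderLines AS d
                  WHERE d.OrderId = s.OrderId
                    AND d.LineNo  = s.LineNo);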

Duplicate Keys

Mentioned for completeness. Even though good algorithms (both distributed and single machine) have existed for ensuring key uniqueness since the early 1970s, there are still source systems which have “keys” that are not real keys. If your key is not unique, it is not a key!

However, these “keys” are more common than you might think, especially when the source system delivers extracts that are the result of joins (some programmers may throw a DISTINCT in there for good measure).
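A cheap sanity check in the extract step catches these before they fan out in joins further downstream. A minimal sketch, assuming a hypothetical staging table and key column:

-- Any row returned here means the “key” is not actually a key
SELECT CustomerKey, COUNT(*) AS Occurrences
FROM staging.Customers
GROUP BY CustomerKey
HAVING COUNT(*) > 1;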

Zombie Keys

These keys die and then rise from the grave (Christians may prefer another term). This is perhaps best illustrated with an example:

Assume an inventory system that uses EAN as a key. When a product is in stock, the product table of the inventory system contains the EAN key. If a product goes out of stock, a cleanup job may run that removes all unused EAN keys. From the perspective of the ETL developer, it looks like the key just died, and we are left with the question of what to do about it. Later, when the product comes back in stock and the EAN number is inserted into the inventory product table again, the key has risen from the grave.
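One way to spot candidate zombies is to compare consecutive extracts: a key that was present yesterday but is gone today has “died”, and a key that shows up again after being marked dead has risen. A minimal sketch, assuming hypothetical snapshot tables and a map table with a deletion flag:

-- Keys present in yesterday's extract but missing from today's: candidate deaths
SELECT y.EAN
FROM staging.Products_Yesterday AS y
WHERE NOT EXISTS (SELECT 1
                  FROM staging.Products_Today AS t
                  WHERE t.EAN = y.EAN);

-- Keys arriving again after being flagged dead in the map: risen from the grave
SELECT t.EAN
FROM staging.Products_Today AS t
JOIN map.Products AS m ON m.SourceKey = t.EAN
WHERE m.IsDeleted = 1;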

Unfortunately, zombie keys can be hard to tell apart from…

Dolly Keys

Named after the cloned sheep, because sheep are stupid and clones are hard to tell apart, even though they are two different entities. Dolly keys happen when a source deletes a key and later re-uses it to name a different entity.

Take social security numbers as an example. In Denmark, where I have a social security number, this key has the format YYMMDD-XXXX, with the first 6 digits being my birthday and the last four making the key unique (this, by the way, includes a checksum on the last digit). The quick reader may notice that this only leaves 10,000 keys for people born on a given day (ignoring the change of centuries). This is fast becoming a problem in Denmark, where we are now considering the use of letters in the keys, or re-using keys. This amounts to a lot of trouble for systems storing social security numbers. When it comes to keys, just use a value that is big enough to hold all instances – the 32 or 64 bit integer space should suffice for all but the most extreme cases. 32-bit values were good enough for running the Internet; they are good enough for you!

Multi Keys

It happens that a source system has more than one key for the same entity. This can be the result of historical corrections and may represent a valid scenario in the source.

For example, a source may have two rows for the same customer, because the duplicate entry was not discovered at the point of data entry. When the error is discovered, both keys are set to point to the same entity in the source. When we extract the data, we are faced with correcting this error in the customer dimension.
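A map table (see the posts on map tables above) is one way to encapsulate this correction: several source keys are allowed to resolve to the same entity key. A minimal sketch, with hypothetical schema and names:

-- Two source keys for the same customer both resolve to one entity key
CREATE TABLE map.Customers (
    SourceKey INT NOT NULL PRIMARY KEY,  -- key as delivered by the source
    EntityKey INT NOT NULL               -- the real key used in the warehouse
);

INSERT INTO map.Customers (SourceKey, EntityKey)
VALUES (1001, 1),
       (2077, 1);                        -- 2077 was the duplicate entry

-- Resolving source rows to the customer dimension via the map
SELECT m.EntityKey, s.CustomerName
FROM staging.Customers AS s
JOIN map.Customers AS m ON m.SourceKey = s.CustomerKey;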

Summary

In this post I have provided an overview of typical source system key pathologies. It now rests on my shoulders to describe how to handle these pathologies in the ETL code.


The Ascending Key Problem in Fact Tables – Part two: Stat Job!

In my last post I described a common problem with statistics in data warehouses. I also directed you to a trace flag that partially solves the problem. There is an issue with this trace flag: at the time of writing this blog, it does not work on partitioned tables. Considering that large tables tend to be partitioned, this could be an issue.

Continue reading…


The Ascending Key Problem in Fact Tables – Part one: Pain!

Time for another deep dive into SQL Server. In the last couple of days, I have been having afternoon coffee with Joe Chang, whom some of you may know. He is visiting Sweden a lot, by the way, so shoot him an email if you are interested in having him present. Joe and I ended up talking about an interesting problem in data warehousing: the ascending key problem. I think it is best illustrated by an example.

Continue reading…


Latch and Spinlock Papers Published on Microsoft

I am happy to announce that my teammates, Ewan Fairweather and Mike Ruthruff, have published two excellent whitepapers on latch and spinlock diagnosis. You can find them here:

Pay special attention to the spinlock scripts; you will find an interesting trace flag in there that sheds more light on my intentionally vague dodging of the call stack subject in my blog entry here. I am sorry that I did not provide more details in the blog, but I did not want to give the plot away :)

I am very proud to have Ewan take over the OLTP tuning on the SQLCAT team here in EMEA. Buy this guy a drink next time you meet him at a conference and have a chat with him about scalability.

Strategy Guide to Rebuild 2

In this blog post I will describe strategies for the game Rebuild 2 for the iPad – a cute little pastime that I have spent some hours on lately while traveling between airports. Like software systems, I like to optimize games, and every now and again I do an off-topic post on my blog about it.

The following will contain spoilers; don’t read on if you don’t want them.

Continue reading…


DL980 Configuration Guidance published

The Asian CAT team, together with Jimmy May (the Aspiring Geek himself), the MS PFE SAP team, and HP, has published a very interesting blog about DL980 configuration:

If you plan to run on that machine, I highly recommend you read up on the great information in the blog.

Whenever you play around with an HP server, I generally recommend you download and skim through the QuickSpec. It gives you the details of how the machine is laid out – the PCI card speeds and placement come in quite handy when you configure the machine with FusionIO.

Another good resource for tuning large machines is the full disclosure reports on TPC-E runs.


Diagnosing and fixing SOS_OBJECT_STORE spins for Singleton INSERTS

Following up on my previous post, my next target for “optimization”, while I am waiting for an even faster I/O system, is the SOS_OBJECT_STORE spin.

Recall that I am seeing high waits for WRITELOG, but still 100% CPU, which indicates that spins may be the culprit. The big spin on the system is still LOGCACHE_ACCESS – but until we get hardware to work on that, we might as well stay greedy and learn a bit about SQL Server in the process. We just got rid of the OPT_IDX_STATS spin by running TF2330.
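If you want to see which spinlocks dominate on your own system, the spinlock DMV is a reasonable starting point (a sketch using sys.dm_os_spinlock_stats; deltas sampled over an interval are more telling than the raw totals):

-- Top spinlocks by spin count since the last restart (or stats clear)
SELECT TOP (10)
       name,
       collisions,
       spins,
       spins_per_collision,
       backoffs
FROM sys.dm_os_spinlock_stats
ORDER BY spins DESC;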

Unfortunately, the documentation available on our next spin, SOS_OBJECT_STORE, is rather sparse. It is one of the SQLOS internal data structures used in many places inside SQL Server. But there are ways, even for the public (which is why I can share it here), to get more information about what is going on. You can capture the call stacks of SQL Server when it does this spin and use publicly available symbols to look up the function names inside the code.

One way to do this is to run an Xperf trace of sqlservr.exe; another is with WinDbg. Anything that can collect and bucketize call stacks can help you. I will not get into more details here, but follow the links in this paragraph to learn more. I also have an article on SQLCAT that should help you get started on setting public symbol paths.

Suffice to say that I got hold of the sqlservr.pdb file (the publicly available symbols) and had a look at the call stacks that lead to SOS_OBJECT_STORE spins:

SpinlockBase::Sleep
LockManager::GetLockBlocks
lck_lockInternal
GetLock
PageContext::AcquireLock
IndexDataSetSession::AcquireLocksForInsert
IndexDataSetSession::InsertSmallRecord
IndexDataSetSession::InsertRowInternal
DatasetSession::InsertRow
RowsetNewSS::InsertRow

Aha! So this is related to the lock manager acquiring a lock on a page. Now, you may ask: how can we influence this? Surely we cannot control the way locks are acquired.

Well, as a matter of fact, we DO have a knob that gives us a tiny bit of influence. How about building the index like this:

CREATE CLUSTERED INDEX MyBigTable_cl
ON dbo.MyBigTable (c1)
WITH (ALLOW_PAGE_LOCKS = OFF)

That should get rid of one level of the lock hierarchy (ROW/PAGE/TABLE), restricting us to either table level locks or row locks. Since we are running an OLTP system here – who needs page locks anyway? A total leftover from old times :)… (I am only half kidding here)
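If the index already exists, the same knob can be flipped in place, without a rebuild. A minimal sketch against the same hypothetical table:

-- Disable PAGE level locks on an existing index
ALTER INDEX MyBigTable_cl ON dbo.MyBigTable
SET (ALLOW_PAGE_LOCKS = OFF);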

Interestingly, this was just what was needed – the SOS_OBJECT_STORE spin is now gone:

(screenshot: spinlock statistics with SOS_OBJECT_STORE no longer on the list)

But throughput has not changed at all. This is not surprising, given the much larger number of spins on LOGCACHE_ACCESS. But we learned something new: disabling PAGE level locks can save some CPU cycles by eliminating some of the code paths – we can speculate that this might lead to increased throughput once other bottlenecks are gone.

At this time, before I move to a faster transaction log, these are my waits:

(screenshot: wait statistics, with WRITELOG and SOS_SCHEDULER_YIELD at the top)

Notice the high SOS_SCHEDULER_YIELD up there right after the WRITELOG? I have a feeling those spins are not yet done tormenting us….
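For reference, a wait overview like the one in the screenshot can be pulled with a query along these lines (a sketch over sys.dm_os_wait_stats; the filter on benign idle waits is just an example and far from complete):

-- Top waits by accumulated wait time since the last restart (or stats clear)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR')
ORDER BY wait_time_ms DESC;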
