An Approach to Database Maintenance with Multi-Tenant Architectures

Have you ever watched a toddler eat when he was first learning to use a spoon?  For example, my son was very entertaining to watch.  He would pick up a grain of rice and then put it on the spoon and eat it.  Needless to say, some of his meals would take a long time.  But, he eventually learned that the spoon had a greater capacity, and he could put more rice grains on the spoon or just scoop up a bunch of rice and then he could eat faster.

Interestingly, the maintenance we do on databases can be quite similar.  Often we run a maintenance process against just one database at a time.  While this may be acceptable in a small environment, it begins to cause problems in larger environments, because the maintenance time grows linearly as new databases are continually added.  This became a huge problem in my current database environment when dealing with our integrity checks (DBCC CHECKDB).

A few of the applications I work with employ a multi-tenant data architecture, with a separate database for each project.  So, the more projects (i.e. business) the company has, the more databases we have and the longer the maintenance takes collectively across all of them.  But then I had a revelation.  Just like a mostly empty spoon holding only one grain of rice, I realized that our servers had a lot more capacity, or resources, that I could use.

How was I able to take advantage of more resources with our database maintenance processes?

The key to answering this question was to add more grains of rice — that is, to run more database maintenance processes at one time.  And this really proved to be easier than it sounds.

For our integrity checks, we use Ola Hallengren’s maintenance scripts.  Unfortunately, they don’t come with an out-of-the-box way to run checks against multiple databases in parallel.  However, a person could write a wrapper around them that uses a queue table and a worker stored procedure.  The basic idea is to put all the databases you want to do maintenance on in the queue table and then run multiple worker processes against it.  Hopefully, something like that is built into his solution at some point.
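A minimal sketch of that queue-table idea might look like the following.  The table and procedure names (MaintenanceQueue, WorkerIntegrityCheck) are my own hypothetical names, and it assumes Ola Hallengren’s DatabaseIntegrityCheck procedure is already installed in the same database:

```sql
-- Hypothetical queue table: one row per database awaiting a check
CREATE TABLE dbo.MaintenanceQueue
(
    database_name SYSNAME NOT NULL PRIMARY KEY,
    started_at    DATETIME NULL        -- NULL = not yet claimed by a worker
);
GO

-- Each worker claims one unclaimed database at a time and checks it.
-- READPAST lets concurrent workers skip rows another worker has locked.
CREATE PROCEDURE dbo.WorkerIntegrityCheck
AS
BEGIN
    DECLARE @db SYSNAME;

    WHILE 1 = 1
    BEGIN
        SET @db = NULL;

        -- Atomically claim the next unprocessed database
        UPDATE TOP (1) dbo.MaintenanceQueue WITH (ROWLOCK, READPAST)
        SET started_at = GETDATE(),
            @db = database_name
        WHERE started_at IS NULL;

        IF @db IS NULL BREAK;   -- queue is empty, this worker is done

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = @db,
            @LogToTable = 'Y';
    END;
END;
```

To use it, you would populate MaintenanceQueue from sys.databases at the start of the maintenance window, then start several SQL Agent jobs that each call WorkerIntegrityCheck; the workers drain the queue concurrently and exit when it is empty.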

But, I wanted to do it even simpler.  I decided to call his stored procedure multiple times from different SQL Agent jobs, which would be similar to having multiple workers.  Next, I needed to determine how to divide up the databases, and I went with the modulo (%) operator.  Since I was planning to use 4 jobs, I would need to break the databases into 4 groups.  Using database_id, the expression database_id % @group_count returns a value of 0 to 3 for each database.  Then, I would just add the matching databases to a list and pass it to the stored procedure.  I have attached the completed code below for one of the SQL Agent jobs.  The only difference between the jobs would be passing a different @database_group to each process or job.  Additionally, this code could also easily be made into a stored procedure or even a function with two parameters.

-- parameters
DECLARE @database_list VARCHAR(8000)
DECLARE @database_group INT
DECLARE @group_count INT

SELECT @database_group = 1  -- set to 1, 2, 3, or 4 in each of the four jobs
SELECT @group_count = 4     -- total number of concurrent jobs

-- build database list
SELECT @database_list = ISNULL(@database_list + ',', '') + name
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
AND database_id % @group_count = @database_group - 1

-- run maintenance
EXECUTE [dbo].[DatabaseIntegrityCheck] @Databases = @database_list, @LogToTable = 'Y'
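As mentioned above, this script can easily be turned into a stored procedure with two parameters.  A sketch of that wrapper follows; the procedure name DatabaseIntegrityCheckGroup is my own invention:

```sql
CREATE PROCEDURE dbo.DatabaseIntegrityCheckGroup
    @database_group INT,   -- which group this job handles (1..@group_count)
    @group_count    INT    -- total number of concurrent jobs
AS
BEGIN
    DECLARE @database_list VARCHAR(8000);

    -- Same modulo split as above:
    -- database_id % @group_count yields 0..@group_count-1
    SELECT @database_list = ISNULL(@database_list + ',', '') + name
    FROM sys.databases
    WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
      AND database_id % @group_count = @database_group - 1;

    EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = @database_list,
        @LogToTable = 'Y';
END;
```

Each SQL Agent job would then contain a single line, for example: EXECUTE dbo.DatabaseIntegrityCheckGroup @database_group = 1, @group_count = 4;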

So, this was an approach that worked for my environment.  Before implementing it, the database integrity checks were taking over 24 hours to run.  Running 4 concurrent processes increased the resource usage on the server, but it cut the total maintenance time by about 50%.
