How to check definition of views in a database

During preparation of replication from remote site to central office I stumbled upon a need to check definitions of all views in a database. The script is simple, yet useful:

select sv.[name], sc.text 
from sys.views sv 
	inner join sys.syscomments sc on sv.object_id = sc.id 
where sv.is_ms_shipped = 0 

You get all the non-system views – if you’re looking for specific text in a view definition, just add another condition to the WHERE clause and you’re done. I – for instance – was able to locate all views referencing linked servers.
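A minimal sketch of such a search – note that sys.sql_modules is the modern replacement for syscomments, and MYLINKEDSRV is a hypothetical linked server name you would substitute with your own:

```sql
-- Find non-system views whose definition mentions a given linked server.
-- 'MYLINKEDSRV' is a placeholder - put your own linked server name here.
select v.[name], m.[definition]
from sys.views v
	inner join sys.sql_modules m on v.object_id = m.object_id
where v.is_ms_shipped = 0
	and m.[definition] like '%MYLINKEDSRV%';
```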

[Story] Terrible performance problem

Hello again and welcome to a new series, categorized informally as “DBA stories”. Here is what it means: stories from real life depicting situations that are strange, bizarre, sometimes funny (sometimes not), but somehow interesting to a DBA. The choice is arbitrary – as long as I have my own stories, I will try not to include stories posted by someone else. And remember, these things really happened, so they might as well happen to you!

OK, so for a start, a situation that once kept me up all night. Around 11pm I get a phone call from a help desk agent saying that one of our mission-critical systems is suffering from poor performance. OK, I say, let me see – I log on and there it is: CPU load at 100%, with SQL Server barely responding. The system uses transactional replication to maintain a warm standby, and I started thinking about invoking business continuity procedures. Then I saw that a 10 GB database had 40 GB of log, growing by 1 GB every half an hour. So it’s a database problem, and I start digging.

Service restart for SQL Server didn’t help, of course – it didn’t remove the source of the problem. So I started looking for the source with Profiler and then I found it.

The culprit: one of the components had received a patch with new functionality which – from my (DBA) point of view – read all records in one of the tables and, for each of them, updated the entire table (every iteration issued an UPDATE without a WHERE clause). There were about 5000 records in this table, so each execution of this procedure updated ~25 million records, and that’s not the end of it: the procedure ran once a minute! That’s why the log was growing so fast and why replication could not keep up either.

Once the problem was found, one quick call to the application support team with a request to disable this functionality solved it immediately. “How did it pass QA?”, you might ask. Well, there are two morals:

  1. Keep your QA environment as similar to production as possible.
  2. Don’t trust your developers – have it checked two or three times.

Some common problems with users and logins and their solution

After a few posts devoted to indexes, it’s time to look at something else. Let’s see some common problems with users and logins, and methods of overcoming them. These are the basic and most frequently occurring ones.

  1. Unable to connect to server – cannot open user default database (error 4064).

    The message appears if the default database for a login does not exist or is unavailable. An example of the former would be a migration, when you move some databases and logins between servers. The latter is a little trickier – imagine a server which participates in database mirroring; the mirror database is in recovery all the time, so any attempt to log into it would be rejected.
    The solution is easy – set the default database for the login to an existing one which is always available. A good practice is to use master for this purpose – master always exists and all users can connect to it. You can use SSMS for it or – if you prefer typing:
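The statement itself is missing from the original post; a minimal sketch (login_name is a placeholder):

```sql
-- Point the login at master, which always exists and is always online.
ALTER LOGIN [login_name] WITH DEFAULT_DATABASE = [master];
```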

  2. Unable to map login to user in a database – user, group or role already exists in the current database (error 15023).


    This message can also appear during a migration, if you copy (or move) your database (using the Copy Database Wizard or detach/attach) but forget to create the respective logins first, create them only after the database is brought online, and then try to map them to database users.

    The solution is also simple – but only T-SQL this time. You have to execute it in the database context and it works only for SQL logins (see Books Online for details of sp_change_users_login):

    EXEC sp_change_users_login 'Update_One', 'user_name', 'login_name'

    There is also a note in the mentioned BOL article saying that as of SQL Server 2008 the sp_change_users_login stored procedure is deprecated and will be removed in a future version. Instead, it is recommended to use the ALTER USER statement:

    ALTER USER user_name WITH LOGIN = login_name
  3. Orphaned database (no owner set).

    I can’t think of any reasonable explanation for this one, but since I have experienced it recently, it’s worth mentioning. You restore a database from backup and it turns out that the dbo user is mapped to a login that does not exist. You can fix it from SSMS (Database Properties –> Files –> set the database owner) or using T-SQL:

    EXEC sp_changedbowner 'login_name'

    If you look for sp_changedbowner in Books Online, you’ll notice that it is marked as deprecated as well. The suggested solution is to use ALTER AUTHORIZATION instead:
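The statement is missing from the original post; a minimal sketch (both names are placeholders):

```sql
-- Make login_name the owner of database_name.
ALTER AUTHORIZATION ON DATABASE::[database_name] TO [login_name];
```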


A few words about the privileges required to complete these actions. ALTER LOGIN requires the ALTER ANY LOGIN permission, but a principal can change the default database for its own login (this may fail if tried from SSMS, since SSMS also sets CHECK_EXPIRATION and CHECK_POLICY, which require CONTROL SERVER permission – at least that’s what I observed on SQL Server 2005). To remap a user with ALTER USER, you need CONTROL permission on the database. sp_change_users_login requires membership in the db_owner database role. Finally, ALTER AUTHORIZATION requires TAKE OWNERSHIP on the database (on the object, in general) and might require IMPERSONATE, if you’re not logged in with the login taking ownership. sp_changedbowner might need CONTROL SERVER in addition to that, if the new owner doesn’t have a user in the database. Since a DBA is probably a sysadmin, this is not a problem, but I have seen users in live databases holding CONTROL, which might make us wonder: do they really need it?

Well, this would be all about users and logins for now. If you have questions, suggestions or corrections, feel free to comment and ask. I’ll be happy to respond.

How to find potential missing indexes in a database

In previous posts (here and here), I described the methods I use to monitor and maintain indexes. This is the next chapter – let’s try to find candidates for new indexes, or hints for extending existing ones. There are a few methods to achieve this goal:

  1. Find a candidate query and analyze it in Management Studio (applies to SQL Server 2008 or newer). This reduces to another problem – how to find a suitable query? You can use Profiler, but you have to spend some time on filtering, and you can miss the query if you’re not patient enough. You can also use reports in Management Studio – right-click on the server, select Reports –> Standard Reports and choose Performance – Top Queries by Average IO and Top Queries by Total IO. Queries in Top Queries by Average IO are the best candidates for optimization, especially if they are frequently used.
    As for the analysis – you can do it manually or take advantage of a new SSMS feature in 2008: if you display the execution plan, it will mention an index which could be helpful, along with the projected benefit of creating it.
  2. If you have a query or a bunch of them, you can provide them as input to Database Engine Tuning Advisor and get the recommendation.
  3. You can also check the sys.dm_db_missing_index_details DMV to get information on all possible missing indexes in a database. This is actually the method used by SSMS (mentioned in step 1) – if the optimizer stumbles upon a scan or a seek which would benefit from an additional index, it updates the stats behind the DMV and returns the information to the user. If you want to find all missing indexes, you can use the following script:
    	select
    	mid.statement as [Table name], 
    	isnull(migs.unique_compiles, 0) as [Compile count], 
    	isnull(migs.user_seeks, 0) as [User seeks], 
    	isnull(migs.user_scans, 0) as [User scans]
    	from sys.dm_db_missing_index_details mid
    	inner join sys.dm_db_missing_index_groups mig on mid.index_handle = mig.index_handle
    	left join sys.dm_db_missing_index_group_stats migs on mig.index_group_handle = migs.group_handle
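Going one step further – a sketch (not from the original post) that composes candidate CREATE INDEX statements from the same DMVs; the generated index names are arbitrary, so review the output before running anything it produces:

```sql
-- Build a candidate CREATE INDEX statement for each missing-index entry.
select
	'CREATE INDEX [IX_missing_' + convert(varchar(10), mid.index_handle) + ']'
	+ ' ON ' + mid.statement
	+ ' (' + isnull(mid.equality_columns, '')
	+ case when mid.equality_columns is not null
	       and mid.inequality_columns is not null then ', ' else '' end
	+ isnull(mid.inequality_columns, '') + ')'
	+ isnull(' INCLUDE (' + mid.included_columns + ')', '') as create_statement
from sys.dm_db_missing_index_details mid;
```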

Compare it also with the Books Online entries for the sys.dm_db_missing_index_columns dynamic management function (there is an example similar to this one in that article) and the sys.dm_db_missing_index_details dynamic management view for more information. And – as usual – please let me know if you have any comments or suggestions.

A script for index maintenance based on fragmentation

In my previous post regarding indexes I presented a method of inspecting all indexes in a database using sys.dm_db_index_physical_stats. Let us develop it a little bit more.

Suppose you want to loop through all indexes, rebuild those whose fragmentation exceeds 30 percent, and reorganize those whose fragmentation is between 5 and 30 percent. Rebuilding or reorganizing all indexes is not a problem – you can prepare a maintenance plan and incorporate the appropriate task. But that will not solve the problem, because this way you can apply only one action to all the indexes in a database. You may say there is no harm in rebuilding all indexes, but there is: there is no point rebuilding those which are barely fragmented, since it is a waste of resources (disk space, CPU and I/O), extra log space is used to record all the operations, and it may not give any performance boost at all. On the other hand, reorganizing some indexes may give no performance bonus either, especially at high levels of fragmentation, when it’s easier (and better) to rebuild.

So the problem stated in the beginning may be solved this way:

  1. Extract information about all indexes to be maintained (note that the scope can be narrowed to a single table) – you need index name, table name and fragmentation.
  2. For each index perform rebuild or reorganization based on fragmentation.
  3. (optional) Place it in a maintenance plan (in Execute T-SQL Statement Task) or SQL Server Agent job to run it periodically.

First step is something already mentioned:

DECLARE @IndexName varchar(255)
DECLARE @TableName varchar(255)
DECLARE @Frag float

DECLARE TableCursor CURSOR FOR
SELECT si.[name] as index_name,
	sdm.avg_fragmentation_in_percent,
	so.[name] as table_name
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) sdm
	inner join sys.indexes si on sdm.object_id = si.object_id and si.index_id = sdm.index_id
	inner join sys.objects so on so.object_id = si.object_id
WHERE si.[name] IS NOT NULL -- skip heaps

Notice variable declarations and a cursor for a future loop. Since I already described this part of the query, let’s fast forward to second step. Here’s the code:

OPEN TableCursor 

FETCH NEXT FROM TableCursor INTO @IndexName, @Frag, @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
	print @TableName + ' - ' + @IndexName + '...'
	if @Frag > 5 and @Frag <= 30
	begin
		print ' REORGANIZE '
		exec ('ALTER INDEX [' + @IndexName + '] ON [' + @TableName + '] REORGANIZE')
	end
	else if @Frag > 30
	begin
		print ' REBUILD '
		exec ('ALTER INDEX [' + @IndexName + '] ON [' + @TableName + '] REBUILD')
	end
	print 'done' + char(13)
	FETCH NEXT FROM TableCursor INTO @IndexName, @Frag, @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor

Using a cursor, we loop through all the indexes and choose the appropriate action based on the value of the @Frag variable. In addition, you get a trace of the actions in the script output.

The whole script is available for download here. As usual, I await your comments and suggestions.

How to check fragmentation of all indexes in a database

It has been said many times how important index maintenance is for a healthy database. Here’s one of my tricks – a quick script to check all indexes within a database. It uses the DMF sys.dm_db_index_physical_stats, which has been available since SQL Server 2005. A quick glance into Books Online gives the basic syntax:

	sys.dm_db_index_physical_stats ( 
		{ database_id | NULL | 0 | DEFAULT }, 
		{ object_id | NULL | 0 | DEFAULT }, 
		{ index_id | NULL | 0 | -1 | DEFAULT }, 
		{ partition_number | NULL | 0 | DEFAULT },
		{ mode | NULL | DEFAULT }
	)

However, the output provided by this DMF is not easily readable (see for yourself :)), so it would be good to make it more user-friendly. Here’s my proposition:

SELECT
	s.[name] AS [Schema], 
	o.[name] AS [Table], 
	i.[name] AS [Index Name], 
	ips.index_type_desc AS [Index Type], 
	i.is_primary_key AS [Primary Key], 
	i.is_unique AS [Unique], 
	i.fill_factor AS [Fill factor], 
	ips.avg_fragmentation_in_percent AS [Fragmentation (%)]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ips
	INNER JOIN sys.objects o ON ips.object_id = o.object_id
	INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
	INNER JOIN sys.indexes i ON i.object_id = ips.object_id 
	AND i.index_id = ips.index_id

And a sample result you can get by running it against Adventure Works:


You can narrow it down by providing an object ID as the second parameter to sys.dm_db_index_physical_stats, but the table name must be preceded with the schema name (e.g. OBJECT_ID(N'Person.Address')). Or, if you want to check several tables, add a WHERE clause and filter them. Finally, you can also use it in your index maintenance routine to select which indexes should be rebuilt, which should be reorganized and which need no maintenance at all, instead of simply rebuilding them all.
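For example, narrowing the check down to a single table could look like this (a sketch assuming the AdventureWorks Person.Address table):

```sql
-- Physical stats for the indexes of one table only.
SELECT ips.index_id, ips.index_type_desc, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
	DB_ID(), OBJECT_ID(N'Person.Address'), NULL, NULL, NULL) ips;
```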

Setting a new collation for SQL Server instance

Collation can be a little bit tricky sometimes. Changing it is not a problem (theoretically) if you want to do it at the column, table or database level – you issue the respective ALTER statement and that’s it.
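For instance, at the database level it could look like this (MyDatabase and the target collation are placeholders; the database must have no active connections for the statement to succeed):

```sql
-- Change the default collation of a single database.
ALTER DATABASE [MyDatabase] COLLATE Polish_CI_AS;
```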

Another thing is changing default server collation. You set it during installation and if you want to change it, you have to rebuild the system databases. Here’s the procedure:

  1. Log on to an administrative account.
  2. Run command prompt (as administrator, if you’re on Windows Vista or later).
  3. Go to folder with your SQL Server installation source.
  4. Run the following statement:
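The command itself is missing from the original post. For SQL Server 2008 and later, the documented form of the rebuild is roughly the following (instance name, admin accounts, password and collation are placeholders; SQL Server 2005 setup used slightly different switches):

```
setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=domain\admin_group /SAPWD=strong_password /SQLCOLLATION=Polish_CI_AS
```

Note that /SAPWD is needed only when the instance uses mixed-mode authentication.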

And that’s it. Tested on SQL Server 2005 and SQL Server 2008 R2. The important thing is that it preserves the service pack level (for example, you don’t have to reapply SP3 on SQL Server 2005).

Hello, world! :) and off we go – with PowerShell for DBA

First, let me express I am really happy to post here. I was following BeyondRelational for quite some time and learned a lot (especially from T-SQL Challenges, I guess) and I have finally decided to contribute a bit. I hope I am able to deliver some quality, so any comments are welcome.

So, let’s start with something not really obvious. Everyone knows about PowerShell, but who uses it for database administration? Of course, since SQL Server 2008 there is an option of running PowerShell from within SSMS, but there are situations when it comes in really handy – for example, when running queries against multiple servers and/or databases.

The idea is taken from one of the blogs I came across once. It states that PowerShell exposes useful methods for database developers, so let’s put it to good use for administrative work.
In this scenario you get a list of servers from somewhere (it may be a text file or Active Directory), connect to each of them using SMO and get the information you want. For testing purposes, let’s grab the name, compatibility level, recovery model, size and some other options. For details or extensions, the MSDN website is a good place to start.

Let’s break it into bits. First we get the servers we want to query, then we’ll work on getting the information.

The simplest way to get the list of servers to be queried is to place them in a file, then read it. You can fetch it with one line:

$serverList = Get-Content 'filename'

The downside is that if your server list changes, you have to remember to update your configuration file. And since you are probably in a domain, this information is already out there somewhere, just waiting for you to reach it. If you have a domain group containing all the servers, this gets really easy. Just remember to either run the Active Directory Module for PowerShell, or include Import-Module ActiveDirectory before this:

$serverList = Get-ADComputer -Filter {MemberOf -eq "CN=group_name,OU=Groups,DC=contoso,DC=com"} | select Name

Normally, Get-ADComputer returns more information than we really need, so first we get the data and then take just what’s necessary – in this case, Name (note that the MemberOf filter expects the group’s distinguished name). It’s possible, however, that your situation differs and you don’t have a group but a separate OU instead. In that case, point the query at the proper OU with the -SearchBase parameter and provide a dummy filter (Get-ADComputer requires the Filter parameter):

$serverList = Get-ADComputer -Filter {Name -like "*"} -SearchBase "ou=Servers,ou=Computers,dc=contoso,dc=com" | select Name

Now that we have our list of servers, we can start querying them. We have to load the Microsoft.SqlServer.SMO (Server Management Objects) assembly first. Note that SMO does not support SQL Server versions below 2000, but those should be pretty rare by now.

After we have the namespace ready, we will loop through all the servers. For each of the servers, we’ll create an instance of Microsoft.SqlServer.Management.Smo.Server class, and then we’ll look into Databases collection. To keep the output nicely formatted, we’ll store the results in temporary variable $out, which is displayed using Out-GridView at the end.

$out = @()
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
foreach ($server in $serverList)
{
	$s = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $server
	$dbs = $s.Databases
	$dbs | select Name, CompatibilityLevel, RecoveryModel, AutoClose, AutoShrink, Owner, Size, LogFiles | Add-Member -Name "Server" -Value $server -MemberType NoteProperty -PassThru -Force -OutVariable +out | Out-Null
}
$out | ogv -Title "Database information"

There are more options starting from here, for instance you can list all permissions in all databases (again, refer to MSDN for detailed description):

$out = @()
foreach ($db in $dbs)
{
	$db.EnumObjectPermissions() | where {$_.ObjectSchema -ne "sys" -and $_.ObjectSchema -ne "INFORMATION_SCHEMA"} | select @{Name="dbName";Expression={$db.Name}}, * -OutVariable +out | Out-Null
}
$out | ogv -Title "Permissions on all objects in all databases"

I hope you find it interesting how you can build a bridge between development and administration – it’s a way to do administrative work programmatically. Anyway, I hope you enjoy my first post here, and again – feel free to comment.