Linked server antipatterns

After my previous post about using Windows Authentication for linked servers, I realized there are more common issues with linked servers that might be identified as antipatterns. I’ll describe them here together with suggestions for solving them.

  1. Incorrect security settings.

    This situation often takes place when an ad-hoc connection is made to use remote data that is not available locally, for example for reporting purposes. It involves the following:

    • The security option is set to “Be made using this security context” and the remote login has a lot of privileges – like db_owner in some databases or even sysadmin. This might allow a user to elevate their permissions on the remote server and perform actions they are not supposed to perform.
    • The security option is set to “Be made without using a security context” and the remote server allows anonymous logins to perform some actions. This leads to users performing actions without being audited. It also requires the remote server to have guest accounts enabled, or to include some securables in the public role in some databases, which causes uncontrolled access to these databases.

    The solution is to follow the Principle of Least Privilege – ensure only the required access is granted on the remote server. I recommend either using security option 3, “Be made using the login’s security context”, or creating a dedicated login and granting it minimum rights together with option 4 (the downside of the latter method is that it grants exactly the same rights on the remote server regardless of the login used on the local server).
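    One way to implement the dedicated-login variant is via sp_addlinkedsrvlogin – a minimal sketch, in which the linked server name, remote login and password are placeholders:

```sql
-- Map all local logins to one dedicated, minimally privileged remote login
-- instead of a highly privileged shared security context.
exec sp_addlinkedsrvlogin
    @rmtsrvname  = N'Server2',
    @useself     = N'false',
    @locallogin  = null,            -- null = applies to all local logins
    @rmtuser     = N'linked_reader',
    @rmtpassword = N'<strong password>';
```

    On the remote server, grant linked_reader only the SELECT rights (or similar) that the reporting queries actually need.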

  2. Joins of two linked server queries.

    I noticed this in reporting queries that had to aggregate data from several data sources; the idea was to use a query like this (executed on Server1):

    select columns
    from Server1.DB1.dbo.Table1 t1
    join Server1.DB1.dbo.Table2 t2 on …
    join Server2.DB1.dbo.Table1 t3 on …
    join Server3.DB1.dbo.Table1 t4 on …

    There are several problems with this query:

    • This query requires a lot of tempdb resources on Server1 – Server2 and Server3 only provide data, while the actual execution takes place on Server1. Before the join is executed, data must be transferred from the remote servers and placed in temporary tables.
    • Following from the point above – the network might become a bottleneck if the volume of data to be transferred is huge.
    • Even if the remote databases are optimized and properly indexed, since the query is executed on Server1 it is nearly impossible to tune, as it uses temporary worktables in tempdb.
    • Depending on the generated execution plan, the selected physical join operator may be a nested loops join, which triggers a number of queries executed on the remote server equal to the row count in the pipeline.

    As queries like that are mostly used for reporting purposes, the perfect solution would be to split the query into several datasets and let the join be processed on the client side. I’m not aware of such a feature in Reporting Services up to the 2012 version. My workaround involves one of two options:

    • Use OPENQUERY to group tables by remote server, thus limiting the resources (remote server calls, tempdb space, pipeline objects etc.) needed to execute the query. It will require a lot of rework on the query, though.
    • Set up replication or an AlwaysOn readable secondary to consolidate the data on a single machine.
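    The OPENQUERY option can be sketched as follows – table and column names are hypothetical:

```sql
-- Each OPENQUERY pass-through executes entirely on its remote server;
-- only the result set is transferred and joined locally on Server1.
select t1.SomeKey, t1.LocalValue,
       r2.RemoteValue as Value2,
       r3.RemoteValue as Value3
from DB1.dbo.Table1 t1
join openquery(Server2,
        'select SomeKey, RemoteValue from DB1.dbo.Table1') r2
    on r2.SomeKey = t1.SomeKey
join openquery(Server3,
        'select SomeKey, RemoteValue from DB1.dbo.Table1') r3
    on r3.SomeKey = t1.SomeKey;
```

    If the remote queries also aggregate or filter, even less data crosses the network.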
  3. Linked server chaining.

    This takes place when server A connects to server B, which in turn connects to server C, effectively allowing users on server A to retrieve data from server C. My suggestion in this case is to avoid the proliferation of linked servers and to set up replication or AlwaysOn instead. This ensures better control over data access.

This is the third post in a row about linked servers and I am out of ideas on the topic for now. However, if I missed something or something requires correction, or maybe you’d like to ask something, please let me know.

Remote stored procedure execution through linked server

I have recently been doing a lot of experiments with linked servers. The last one included enabling remote stored procedure calls, and it’s really quick – it only requires the RPC Out option to be set to true in the linked server definition. You can also use sp_serveroption to set this value.
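For scripted setups, the sp_serveroption call looks like this (the linked server name is a placeholder):

```sql
-- Enable remote stored procedure execution on an existing linked server.
exec sp_serveroption
    @server   = N'LinkedSrv',
    @optname  = N'rpc out',
    @optvalue = N'true';
```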

As it turns out, the RPC setting does not do anything – see for example here. Remember about granting rights on the remote server – giving too much access may lead to data leaks or uncontrolled data modifications on the remote machine. As usual, we want to have as limited access to remote resources as possible.

I also observed strange behaviour. I used two servers, SQL12T1 and SQL12T2, in this demo. I first created a linked server on T1 pointing to T2 using the SQL Server provider, and then I tried to create another one on T2 pointing back to T1. However, I used the default provider (Microsoft OLE DB Provider for SQL Server) without specifying any data connection, and my newly created linked server pointed to localhost (take a look below – at first I thought that T1 was redirecting me to T2 somehow, but then it became clear it was connecting to localhost). Keep an eye on it – make sure you choose the SQL Server provider if you want to enter a server name only, or provide a valid connection string for OLE DB.

SQL12T1_2 and SQL12T2

SQL Server upgrade from 2005 to 2008 R2 – more notes

I have already put down some notes about upgrading SQL Server from 2005 to 2008 R2, but in the few weeks since then I have gained some more useful experience. I’ll divide it into two parts – what works and what doesn’t – so it’s pretty much a standard lessons-learnt document. It’s all pretty fresh: I had two upgrades in May, both successful, but not without hassle (self-inflicted, of course).

What works:

  1. Copying the installation source locally instead of installing from a network share – network disruptions then have no impact on the upgrade. My upgrade took place at the same time as LAN maintenance, which had minimal impact on the SQL Server upgrade.
  2. Having a local account with administrative rights – in case you can’t use the built-in Administrator account, consider creating a local account and adding it to the local Administrators group, as it will free you from network/login issues. Just remember to clean up afterwards.
  3. Upgrading SQL Server 2005 databases at compatibility level 90 – straightforward; I never met a single one that had problems after setting the compatibility level to 100.
  4. Migrating Reporting Services between SQL Servers (both 2005) by means of backup/restore – yes, but in 2005 I had to install SSRS, initialize its database and overwrite it on restore. I never tried it in 2008+, but I presume it works exactly the same. When I tried to restore to a new database and then reconfigure SSRS, it wasn’t able to initialize. However, there’s a trick with deleting encryption keys which might also work in this case – worth checking in free time.

What doesn’t work:

  1. Upgrading a read-only database – this renders it unusable. I did it (nothing to be proud of) and to fix it I had to put the database in emergency mode, detach it, reattach it and set it back to read-only. Please note this is just what I did and what worked for me – I checked with DBCC CHECKDB afterwards and the database was fine. Why emergency mode, you might ask: the database startup fails during the recovery phase, because the database requires an upgrade (the server version has changed) but cannot be upgraded while read-only. So you end up with a 2008 R2 server using databases with internal version 661, while the read-only database remains at internal version 611; the server reports that the data files are inaccessible and you can’t even detach it. Emergency mode is used here to suppress recovery and allow detaching – I wasn’t brave enough to try setting the database to read-write with emergency mode on, which is also worth checking in free time to see if it works and what happens to the database. The alternative is to restore the latest backup (since the database is read-only, it should not have changed) to another SQL Server 2005 box, set it to read-write and copy it to the 2008 R2 instance. Lucky me, the whole database was read-only – if it were only a filegroup inside, that would be a bigger issue (see Paul Randal’s post about it).
  2. Upgrading SQL Server 2000 databases to 2008 R2 – some of them contain deprecated features like old-style joins, and it turns out that you can upgrade the server but have to leave the database at compatibility level 80. I’ve seen this happen a few times, and the only solution is to review the database structure and remove all the deprecated features. The thing is, you might not always have the time or resources to do it, but this is where SQL Server Upgrade Advisor comes in – its reports can help you track down where the problems are.
  3. This is me grumbling about a thing that is by design – the number of restarts required to upgrade your SQL Server, especially on Windows Server 2003. If you haven’t rebooted since the last patching session – restart. If you don’t have Windows Installer 4.5 (and I don’t, since I’m starting from 2005) – next restart. Upgrade SQL Server – next restart. After the Service Pack – another restart. Now multiply this by the number of instances being upgraded on the server, add some time for preparation before and clean-up after, plan for some contingency, and you can barely squeeze two instances into 6 hours. Luckily this isn’t one of the new HP ProLiant DL380 G7 servers that take 10 minutes to reboot, spending 9 minutes on power and thermal calibration. This is also an important point – when planning your maintenance window, consider all the activities. If the installation takes 30 minutes and updates another 15, both require a restart which takes 15 minutes, plus one restart upfront, then your downtime is not 45 minutes but 90! It is crucial, because if you miscalculate you might not fit in the maintenance window and then you have a service down; not to mention that if something goes wrong, you need time for the rollback plan (which cannot fail – but imagine that the tape drives are busy with your 70%-complete Exchange backup with 2 TB still to go, and you have to wait till it’s done to restore a 10 GB database).

Now, I might have another set of comments pretty soon – a round of upgrades is waiting for me in June, plus a retirement. If you have any comments or questions, feel free to share.

SQL Server 2012 beta exams – my feelings

Earlier this month I had the opportunity to participate in beta tests of the SQL Server 2012 certification exams. I have already described which exams are being prepared and what the new certification path is in one of my previous posts, so let me briefly describe what the exams looked like.

First of all, it’s a standard Prometric test. You come to the testing center with two IDs (both with a signature, one of them with a photo) half an hour before the exam, sign all the paperwork, leave all unnecessary stuff in a locker and wait till the test administrator calls you. I understand that the exam rules are pretty standard, but I love the part which says “no weapons are allowed at the Prometric test center”. Well, perhaps there are places in the world where you have to fight your way to the test; fortunately, Poland is not one of them.

I signed up for two beta exams (71-461 and 71-462) on one day, mostly due to time pressure. I am currently heavily loaded at work due to new system implementations and didn’t have much time for real preparation. I had a bunch of VMs set up in my home PC lab and spent some time installing and configuring SQL Server 2012, but didn’t actually go deeper into it – for instance, I had planned to set up an AlwaysOn HA solution as practice, but it ended with reading Books Online the day before the exams. I had no idea then what the certifications would be, apart from the recertification requirement, so it was a blind shot made mostly for fun – certificate-wise, it was either all five exams or none.

There were two other guys waiting with me to take their tests and, as usual, I had bad luck – the computer assigned to me was out of date and spent half an hour applying Prometric updates. It did not impact my test much, but I got a bit anxious at one moment and had to rebuild my focus. Then I left the room to catch my breath and prepare for the second exam, but it turned out that the other computer I was supposed to use hung and required assistance. All in all, I spent almost 6 hours at the testing center and finally took the wrong door while leaving.

The query exam (71-461) was an interesting one. I was expecting a lot of CRUD-like stuff. Instead, there were questions about database design and fundamental concepts such as keys and referential integrity; there was also a bit of DDL, all of course focused on features new to SQL Server 2012. If I were preparing for the 70-461 exam, I would revisit:

  • new built-in functions, such as LEAD, LAG, TRY_PARSE
  • data types, paying special interest to date/time
  • DDL stuff including triggers vs stored procedures
  • locking and isolation levels and their impact on query results
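As a quick refresher on the new built-in functions from the first bullet, a small self-contained example (sample data is made up):

```sql
-- LEAD/LAG read the next/previous row without a self-join (new in 2012).
declare @Sales table (SaleDate date, Amount money);
insert @Sales values ('2012-01-01', 100), ('2012-02-01', 150), ('2012-03-01', 90);

select SaleDate, Amount,
       lag(Amount)  over (order by SaleDate) as PrevAmount,
       lead(Amount) over (order by SaleDate) as NextAmount
from @Sales;

-- TRY_PARSE returns NULL instead of raising an error on bad input.
select try_parse('2012-06-12' as date using 'en-US') as GoodDate,
       try_parse('not a date' as date using 'en-US') as BadDate;
```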

Those are the things which probably impacted my result. What I didn’t like, and strongly argued about in the comments, were questions related to cursors – a feature I dislike and try to avoid as much as possible. With CTEs being available since SQL Server 2005 I would expect cursors to lose their usefulness, but the test could make me think otherwise.

When it came to 70-462 – the administration exam – I felt absolutely confident. This is by all means the part of SQL Server that I am most exposed to (my development skills have declined lately) and I find it most interesting. If you are preparing for it, pay special attention to:

  • backup and restore strategies, with reference to recovery models
  • extended events – this is a diagnostic feature introduced in SQL Server 2008, so this is the first opportunity to be tested on it
  • AlwaysOn – there is a big hype about it with SQL Server 2012 and you want to know everything about it
  • contained databases – what are they and how to convert standard database to a contained one
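For the last bullet, converting a standard database to a partially contained one takes two steps (the database name is hypothetical):

```sql
-- Containment must first be enabled at the instance level.
exec sp_configure 'contained database authentication', 1;
reconfigure;

-- Then switch the database to partial containment.
alter database SampleDB set containment = partial;
```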

I had a few questions about database mirroring, which I found strange given that it is considered deprecated in upcoming versions of SQL Server. I even argued about one of them, which was a single-choice question where, from what I understood, two options were 100% correct.

Speaking of exam quality – after reading a few sources I expected a lot of bugs and holes. I was surprised (in a positive way) to find only one error and a number of inconsistencies. The results are going to be available when the real exams go live, around June 12. I am looking forward to them – it was a very interesting experience.

SQL Server 2005 SP3 to 2008 R2 upgrade – planning and issues

As part of my DBA activities I was tasked with performing a feasibility study for an upgrade of SQL Server 2005 to 2008 R2, which produced a nice roadmap I’m now using to perform step-by-step migrations. I thought it was going to be quite smooth, but sometimes – mostly due to inconsistencies in the environment – issues unfortunately happen.

First of all, if you happen to lead such a migration project, be sure to run SQL Server Upgrade Advisor to identify blocking issues early and solve them in advance. This may include old-style joins using *= or =* (if you have compatibility level 80 databases). Most SQL Server 2005 and later instances are compatible, so it should be quite straightforward, but do yourself a favour – check it and be on the safe side.
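For illustration, an old-style outer join (valid only at compatibility level 80) and its ANSI rewrite, using hypothetical tables:

```sql
-- Old-style outer join: runs only at compatibility level 80.
select c.Name, o.OrderDate
from dbo.Customers c, dbo.Orders o
where c.CustomerID *= o.CustomerID;

-- ANSI equivalent required at compatibility level 90 and above.
select c.Name, o.OrderDate
from dbo.Customers c
left join dbo.Orders o on o.CustomerID = c.CustomerID;
```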

Second, when you’re approaching a migration, clean up the environment as much as possible. Here’s my case – I had a server where databases had no owner, so I fixed it. But then it turned out that DEP prevented installation of SQL Server from a remote source. If you’re on Windows Server 2003, chances are that you need .NET 3.5 and Windows Installer 4.5, and the latter requires an additional restart (that’s what happened to me). When I managed to run SQL Server Setup, the final upgrade checks returned a failed “SQL Server health check”. This was a bit tricky, because it required my account to get the “Debug programs” privilege on the local machine (careful! it needs a logout), which took another 30 minutes.

Also consider that the restarts take time. You might need 2 or 3 restarts when going from 2005 SP3 to 2008 R2 SP1, and if you’re running servers like the HP ProLiant DL380 G7, which are quick but take an awful lot of time to boot, you may count on 30 minutes for those restarts. 30 minutes in a 2-hour maintenance window is 25%, and that’s really a lot – a quarter of the downtime spent waiting for the server to come up.

If you’re responsible for the upgrade, set aside some time for such unexpected situations – if you think a maintenance window of 2 hours is sufficient, consider booking 4 hours. This will give you time to solve problems as they appear without great time pressure, and even if you have a total failure requiring a restore, you’re still on time.

Third, take all the backups needed. Take a backup of the user databases and master prior to the upgrade, take another one after you complete the upgrade, and if you apply further SPs and CUs, back up after them as well. If you use a tape drive, do a full backup of your whole server after you complete the migration. And don’t forget to verify each backup when you take it.
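A minimal sketch of the pre-upgrade backups – paths and the user database name are placeholders; WITH CHECKSUM plus RESTORE VERIFYONLY covers the verification advice:

```sql
backup database master
    to disk = N'D:\Backup\master_preupgrade.bak'
    with checksum, init;

backup database UserDB
    to disk = N'D:\Backup\UserDB_preupgrade.bak'
    with checksum, init;

-- Verify that the backup file is readable and complete.
restore verifyonly
    from disk = N'D:\Backup\UserDB_preupgrade.bak'
    with checksum;
```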

All those tips could be briefly wrapped up in a PM proverb “if you fail to plan, you plan to fail”. You can’t argue with that kind of wisdom.

[Story] Deployment with missing indexes and how to deal with that

Well, this is the kind of story that happens to every DBA once in a while. You must have experienced it if you’ve had an application developed by a third party and poorly load-tested. The symptoms: you test the application in a QA environment and confirm its functionality on a small data sample, but once you go live the performance suddenly suffers badly. You check server performance and SQL Server is eating 100% of CPU, the HDDs are screaming, disk queues are growing longer and all the important queries time out.

As you may expect, indexes are missing. If that’s the case, you will go down – how fast depends on your load, the number of connections and transactions, the average user count and some other factors. The heavier the workload, the quicker the crash. Luckily, you can defend against it.

First of all, if you don’t have any kind of warning system in place that notifies you when queries start to run longer, you can only react. Ideally you should prevent such cases – have your queries monitored with a SQL trace filtering for queries with high CPU, Reads and Writes. You may also try setting up a SQL Server performance alert or use extended events for that purpose, but a trace is the easiest; plus you get data you can feed to the Database Engine Tuning Advisor as input and have a quick fix in a short time.

What if you don’t have query monitoring? You’re not completely doomed. Use the missing index DMVs to quickly find poorly performing queries and tables, but be careful – they keep at most 500 entries, so you may not get all the results you need. The best solution might be to use Profiler to record a workload and then optimize it, but since performance is already suffering, you have to decide whether to run it or not. You may use the new Activity Monitor and its Recent Expensive Queries feature to probe for candidates, but again, it is recommended to run DTA on the queries.
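A common way to query those missing index DMVs – the ranking expression is a frequently used heuristic, not an official scoring formula:

```sql
-- Rank missing-index suggestions by estimated impact (at most 500 groups
-- are kept by the server, and they reset on instance restart).
select top 25
       mid.statement as TableName,
       migs.avg_total_user_cost * migs.avg_user_impact
           * (migs.user_seeks + migs.user_scans) as ImprovementMeasure,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns
from sys.dm_db_missing_index_group_stats migs
join sys.dm_db_missing_index_groups mig
    on mig.index_group_handle = migs.group_handle
join sys.dm_db_missing_index_details mid
    on mid.index_handle = mig.index_handle
order by ImprovementMeasure desc;
```

Treat the output as candidates to review, not as indexes to create blindly.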

I had this situation a while ago when introducing a solution which monitors factory equipment and records its status. Initial tests completed successfully, but shortly after going live with the whole production floor, a new functionality, highly anticipated by users, suddenly caused the system to stop, threatening to stop the business. We made the call to run Profiler in the production environment and tuned the queries manually, which also involved some assembly reverse engineering with the help of Red Gate’s Reflector to identify some of the queries in order to optimize them later. The whole situation was fixed within 3 days, but the second and third days were not looking so bright.

[Story] Database collation change issues

So another story is coming… This one disturbs the chronological order, as I have already posted a story about replication which was ultimately caused by the actions described here.

A little background – there is a database which is about 10 years old and had a refresh just recently. During development it turned out that in order to keep reporting working, a collation change was required, mostly due to data interchange via linked servers. It was easier than working with collation at the object level and trying to identify all the queries and modify them to align collations. As you might expect, a simple default database collation change was not possible, due to check constraints depending on text columns. I had to drop and recreate those check constraints after the collation change, but then it also turned out that applying the database_default collation to columns in some tables couldn’t be completed, simply due to indexes on those columns. So those indexes had to be dropped prior to the table alteration, just to be recreated afterwards. A lot of scripting was involved and the whole operation took about 8 hours to complete.
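The sequence can be sketched as follows – all names are hypothetical, and a real script must enumerate every dependent constraint and index:

```sql
-- 1. Drop the objects that block the collation change.
alter table dbo.SampleTable drop constraint CK_SampleTable_Code;
drop index IX_SampleTable_Code on dbo.SampleTable;

-- 2. Change the database default collation (needs exclusive access).
alter database SampleDB collate Latin1_General_CI_AS;

-- 3. Re-collate existing text columns to the new database default.
alter table dbo.SampleTable
    alter column Code varchar(20) collate database_default;

-- 4. Recreate the dropped objects.
create index IX_SampleTable_Code on dbo.SampleTable (Code);
alter table dbo.SampleTable
    add constraint CK_SampleTable_Code check (Code <> '');
```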

One more recollection – since it was a virtual machine environment, low disk space occurred in the meantime. So if you come across such a massive collation change, keep in mind that a lot of disk space is required for logging. In this case I needed about 4 times the database size for logging, not to mention the backups before and after.