Building a clustered test lab with Windows Server 2012 RC and SQL Server 2012 RTM Evaluation

I have already described what my test lab at home looks like, but I find it interesting enough to explain how to set it up. This post is the first in a series and covers planning and preparing the environment. The next posts will be about:

  • installing Windows Server 2012 in a VHD and enabling the Hyper-V role
  • preparing a virtual machine template and setting up the virtual machines
  • preparing the environment for setting up a failover cluster
  • installing SQL Server on a 2-node cluster

I am using the concepts shown in a blog post by Jose Barreto, which describes this setup with Windows Server 2008 R2 and SQL Server 2008 R2. We will use Windows Server 2012 and SQL Server 2012, but the main idea is the same. To build this lab you need:

  • a PC which:
    • has a CPU that supports virtualization in hardware (for Intel you want a CPU supporting VT-x technology, for AMD – AMD-V) – most non-low-end CPUs do
    • has a minimum of 4 GB of RAM (8 GB or more recommended)
    • has about 50 GB of free HDD space (more won’t hurt)
  • some software – all available for free:
    • Windows Server 2012 RC or Hyper-V Server 2012 RC installation media (I will use the “full” Windows Server, as I haven’t tried this setup purely from the command line/PowerShell on Hyper-V Server) – I don’t recommend using the VHD provided by Microsoft, since it expands from 8 GB to 40 GB at first boot and you’d need a lot more disk space;
    • SQL Server 2012 Evaluation – valid for 180 days, which is more than enough to start with (you might use the Developer Edition if you have one – then you don’t have to worry about expiration)
  • the whole procedure takes about 8 hours (depending on your system speed, mainly I/O, so if you’re blessed with RAID 0 on SSD drives you might actually do it much faster) – be warned!

What we’re going to do is install a fresh Windows Server 2012 operating system in a VHD (so as not to mess with the existing setup – you may install on a physical disk if you don’t mind, the performance difference is negligible). I assume the main OS on the PC is Windows 7/Server 2008 R2 or newer, because XP does not support native boot from VHD (I’m not sure about Vista/Server 2008, but does anyone use those anymore? :P). Within this newly installed Server 2012 we will enable the Hyper-V role and set up 4 virtual machines using one base template and four differencing disks.
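In case you haven’t done the differencing-disk part from PowerShell before, here is a minimal sketch – the paths, VM name and switch name are placeholders of my own, so adjust to taste:

    # Enable the Hyper-V role on the freshly installed Server 2012 host
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # Create a differencing disk on top of the sysprepped base template...
    New-VHD -Path C:\VM\DC1.vhdx -ParentPath C:\VM\Base.vhdx -Differencing

    # ...and a VM that boots from it (LabSwitch is an assumed virtual switch)
    New-VM -Name DC1 -MemoryStartupBytes 1GB -VHDPath C:\VM\DC1.vhdx -SwitchName LabSwitch

Then we will configure those machines like this: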

  1. The first machine is a domain controller and a DNS server – clustering requires a domain, and Active Directory requires DNS (see the sketch after this list).
  2. The second machine is the storage server – in the post by Jose (mentioned above) a separate download of the iSCSI Software Target is required for Windows Server 2008 R2, but it’s incorporated in Server 2012 – we don’t have to install anything, only enable the additional OS features and configure them (also sketched below).
  3. The third machine is the active node of the SQL Server failover cluster.
  4. The fourth machine is the passive node of the SQL Server failover cluster.
  5. You may add more nodes to the cluster or consider setting up AlwaysOn.
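Both of the first two roles boil down to a few PowerShell lines on Server 2012. A hedged sketch – the domain name, target name, initiator IQN and disk path below are all made up for illustration:

    # On the first VM: domain controller + DNS (DNS is installed along with the forest)
    Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "lab.local"

    # On the second VM: the built-in iSCSI target replaces the separate 2008 R2 download
    Install-WindowsFeature -Name FS-iSCSITarget-Server
    New-IscsiServerTarget -TargetName "SqlCluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql1.lab.local"
    New-IscsiVirtualDisk -Path C:\iSCSI\Quorum.vhd -SizeBytes 1GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "SqlCluster" -Path C:\iSCSI\Quorum.vhd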

This makes our lab completely standalone, so it can operate even with no connection to the outside world. If you don’t need it this way – for example, you already have a domain controller – you may skip creating the first VM. If you have a NAS/SAN, you may use it and skip the second machine. You may also consider combining the first two machines into one (if you’re low on resources and don’t want to spawn a horde of machines) – AD services for a domain this small will not be very demanding, and the storage workload is definitely not going to impact a domain controller significantly. On the other hand, you may also try setting up additional machines to see how it all works – I set up some with System Center 2012 RC, but you could also build a load-balanced SharePoint farm with a clustered database engine.

When it comes to the network – no additional configuration is needed, as clustering in Windows Server 2008 removed the need for a dedicated heartbeat network, so as long as our VMs can see each other we’re fine (you may try to include a DHCP server in this setup, but keep it contained so that it does not hand out addresses in every network it can find). Just remember – a live system may not have exactly the same architecture, and whatever works for our homemade cluster may not be enough when deployed into production (but hey, it’s a lab we’re building!).

When it comes to resource assignment for the VMs, I use the following pattern (I have a Phenom II X4 CPU and 16 GB of RAM in my rig; the PowerShell equivalent is sketched after the list):

  1. For the domain controller – 2 CPUs and 768–1024 MB of RAM.
  2. For the storage server – 2 CPUs and 768 MB of RAM.
  3. For the SQL Servers – 4 CPUs and 2048–3072 MB of RAM each.
  4. Rest – whatever I feel like/need.
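For reference, the same assignment via the Hyper-V cmdlets – the VM names are placeholders:

    # Domain controller: 2 vCPUs, 1 GB of RAM
    Set-VMProcessor -VMName DC1 -Count 2
    Set-VMMemory -VMName DC1 -StartupBytes 1GB

    # SQL nodes: 4 vCPUs, 3 GB each
    "SQL1", "SQL2" | ForEach-Object {
        Set-VMProcessor -VMName $_ -Count 4
        Set-VMMemory -VMName $_ -StartupBytes 3GB
    }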

When I’m setting this up on a laptop with memory limitations (4 GB of RAM), I cut the first two VMs down to 512 MB of RAM and the SQL Servers to 1024–1280 MB. This keeps everything operable, although it sacrifices a lot of performance – hence the recommendation for 8 GB of RAM.

And this concludes our preparation. The next post will walk us through the host setup – we will install Windows Server 2012 in a VHD and make it bootable, so stay sharp!

SQL Server upgrade from 2005 to 2008 R2 – more notes

I have already put down some notes about upgrading SQL Server from 2005 to 2008 R2, but in the few weeks since, I have gained some more useful experience. I’ll divide it into two parts – what works and what doesn’t – so it’s pretty much a standard lessons-learnt document. It’s all quite fresh: I had two upgrades in May, both successful, though not without hassle (self-inflicted, of course).

What works:

  1. Copying the installation source locally instead of installing from a network share – network disruptions then have no impact on the upgrade. One of my upgrades ran at the same time as LAN maintenance and it had minimal impact on the SQL Server upgrade.
  2. Having a local account with administrative rights – if you can’t use the built-in Administrator account, consider creating a local account and adding it to the local Administrators group, as it will free you from network/login issues. Just remember to clean up afterwards.
  3. Upgrading SQL Server 2005 databases at compatibility level 90 – straightforward; I’ve never met a single one that had problems after setting the compatibility level to 100 (see the snippet after this list).
  4. Migrating Reporting Services between SQL Servers (both 2005) by means of backup/restore – yes, but in 2005 I had to install SSRS, initialize its database and overwrite it on restore. I’ve never tried it in 2008+, but I presume it works exactly the same. When I tried to restore to a new database and then reconfigure SSRS, it wasn’t able to initialize. However, there’s a trick with deleting the encryption keys which might also work in this case – worth checking when I have some free time.
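For completeness, bumping the compatibility level after the upgrade is a one-liner – the database name is a placeholder and I’m running it through sqlcmd against the local default instance:

    # Hypothetical database name; run on the upgraded 2008 R2 instance
    sqlcmd -S . -Q "ALTER DATABASE [MyAppDb] SET COMPATIBILITY_LEVEL = 100"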

What doesn’t work:

  1. Upgrading a read-only database – renders it unusable. I did it (nothing to be proud of) and to fix it I had to put the database in emergency mode, detach it, reattach it, and set it back to read-only (the sequence is sketched after this list). Please note this is simply what I did and it worked for me – I checked with DBCC CHECKDB afterwards and the database was fine. Why emergency mode, you might ask – database startup fails during the recovery phase, because the database needs to be upgraded (the server version has changed) but can’t be, since it is read-only. So you end up with a 2008 R2 server using databases with internal version 661 while the read-only database remains at internal version 611, and the server reports that the data files are inaccessible – you can’t even detach it. Emergency mode is used here to suppress recovery and allow detaching. I wasn’t brave enough to try setting the database to read-write while in emergency mode – also worth checking in some free time to see if it works and what happens to the database. An alternative: restore the latest backup (since the database is read-only, it should not have changed) to another SQL Server 2005 box, set it to read-write and copy it to the 2008 R2 instance. Lucky me, the whole database was read-only – if it were only a filegroup inside, that would be a bigger issue (see Paul Randal’s post at SQLMag.com about it).
  2. Upgrading SQL Server 2000 databases to 2008 R2 – some of them contain deprecated features like old-style joins, and it turns out you can upgrade the server but have to leave the database at compatibility level 80. I’ve seen this happen a few times, and the only solution is to review the database structure and remove all the deprecated features. The thing is – you might not always have the time or resources to do it, but that’s where SQL Server Upgrade Advisor comes in: its reports can help you track down where the problems are.
  3. This is me grumbling about a thing that is by design – the number of restarts required to upgrade your SQL Server, especially on Windows Server 2003. If you haven’t rebooted since the last patching session, restart. If you don’t have Windows Installer 4.5 (and I don’t, since I’m starting from 2005), next restart. Upgrade SQL Server, next restart. After the Service Pack, another restart. Now multiply that by the number of instances being upgraded on the server, add some time for preparations before and clean-up after, plan for some contingency, and you can barely squeeze two instances into 6 hours. Luckily this isn’t one of the new HP ProLiant DL380 G7 servers that take 10 minutes to reboot, spending 9 of them on power and thermal calibration. This is also an important point – when planning your maintenance window, consider all the activities. If the installation takes 30 minutes and the updates another 15, and each requires a 15-minute restart, plus one restart upfront, then your downtime is not 45 minutes but 90 (30 + 15 of work plus three 15-minute reboots). It is crucial, because if you miscalculate you might not fit in the maintenance window, and then you have a service down – not to mention that if something goes wrong, you need time for the rollback plan (which cannot fail – but imagine the tape drives are busy with your 70%-complete Exchange backup, 2 TB still to go, and you have to wait until it’s done to restore a 10 GB database).
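Since item 1 is the nastiest, here is the sequence from it as a sketch – the database name and file path are placeholders, and again this is merely what worked for me, not official guidance:

    # Run against the 2008 R2 instance holding the stuck read-only database
    sqlcmd -S . -Q "ALTER DATABASE [StuckDb] SET EMERGENCY"
    sqlcmd -S . -Q "EXEC sp_detach_db 'StuckDb'"
    # Reattaching lets the server upgrade the internal version (611 -> 661)
    sqlcmd -S . -Q "CREATE DATABASE [StuckDb] ON (FILENAME = 'D:\Data\StuckDb.mdf') FOR ATTACH"
    sqlcmd -S . -Q "ALTER DATABASE [StuckDb] SET READ_ONLY"
    # Verify the result
    sqlcmd -S . -Q "DBCC CHECKDB ('StuckDb')"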

Now, I might have another set of comments pretty soon – a round of upgrades is waiting for me in June, plus a retirement. If you have any comments or questions, feel free to share.

SQL Server 2005 SP3 to 2008 R2 upgrade – planning and issues

As part of my DBA activities I was tasked with a feasibility study for upgrading SQL Server 2005 to 2008 R2, which produced a nice roadmap I’m now using to perform the step-by-step migrations. I thought it was going to be quite smooth, but sometimes – mostly due to inconsistencies in the environment – issues happen, unfortunately.

First of all, if you happen to lead such a migration project, be sure to run SQL Server Upgrade Advisor to identify blocking issues early and solve them in advance. These may include old-style joins using *= or =* (if you have compatibility level 80 databases). Most SQL Server 2005 and later instances are compatible, so it should be quite straightforward, but do yourself a favour: check it and be on the safe side.
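To illustrate the kind of thing Upgrade Advisor flags – an old-style outer join and its ANSI rewrite, on made-up tables:

    # Old-style outer join, legal only at compatibility level 80:
    #   SELECT c.Name, o.Total FROM Customers c, Orders o WHERE c.Id *= o.CustomerId
    # The ANSI rewrite that keeps working at level 100:
    sqlcmd -S . -Q "SELECT c.Name, o.Total FROM Customers c LEFT OUTER JOIN Orders o ON c.Id = o.CustomerId"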

Second, when you’re approaching a migration, clean up the environment as much as possible. Here’s my case – I had a server where some databases had no owner, so I fixed that. But then it turned out that DEP prevented installing SQL Server from a remote source. If you’re on Windows Server 2003, chances are you need .NET 3.5 and Windows Installer 4.5, which requires an additional restart (that’s what happened to me). When I finally managed to run SQL Server Setup, the final upgrade checks reported the “SQL Server health check” as failed. This was a bit tricky, because it required granting my account the “Debug programs” privilege on the local machine (careful! it needs a logout), which took another 30 minutes.
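The ownerless databases, by the way, are a one-liner each to fix – the database name is a placeholder, and sa is simply the owner I chose:

    # Give an ownerless database a valid owner (works on SQL Server 2005 and later)
    sqlcmd -S . -Q "ALTER AUTHORIZATION ON DATABASE::[MyAppDb] TO sa"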

Consider that the restarts also take time. You might need 2 or 3 of them going from 2005 SP3 to 2008 R2 SP1, and if you’re running servers like the HP ProLiant DL380 G7 – quick once up, but taking an awfully long time to boot – you can count on 30 minutes just for those restarts. 30 minutes of a 2-hour maintenance window is 25%, and that’s really a lot when a quarter of your downtime is spent waiting for the server to come up.

If you’re responsible for the upgrade, set some time aside for such unexpected situations – if you think a maintenance window of 2 hours is sufficient, consider booking 4. This gives you time to solve problems as they appear without great time pressure, and even if you suffer a total failure that requires a restore, you’re still on time.

Third, take all the backups you need. Take a backup of the user databases and master prior to the upgrade, take another after you complete the upgrade, and if you apply further SPs and CUs, back up after those as well. If you use a tape drive, do a full backup of the whole server after you complete the migration. And don’t forget to verify each backup when you take it.
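The per-database part of that, sketched with a placeholder name and path:

    # Full backup with checksums, before (and again after) the upgrade
    sqlcmd -S . -Q "BACKUP DATABASE [MyAppDb] TO DISK = 'D:\Backup\MyAppDb_pre.bak' WITH CHECKSUM, INIT"
    # Verify that the backup is readable and restorable
    sqlcmd -S . -Q "RESTORE VERIFYONLY FROM DISK = 'D:\Backup\MyAppDb_pre.bak' WITH CHECKSUM"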

All those tips could be briefly wrapped up in a PM proverb “if you fail to plan, you plan to fail”. You can’t argue with that kind of wisdom.