Posted by Szymon Wójcik on May 30, 2012
I have already put down some notes about upgrading SQL Server from 2005 to 2008 R2, but in the few weeks since then I have gained some useful experience. I'll divide it into two parts – what works and what doesn't – so it's pretty much a standard lessons-learnt document. It's all fresh: I had two upgrades in May, both successful, but not without hassle (self-inflicted, of course).
What works
- Copying the installation source locally instead of installing from a network share – network disruptions then have no impact on the upgrade. My upgrade coincided with LAN maintenance and it barely affected the SQL Server upgrade.
- Having a local account with administrative rights – if you can't use the built-in Administrator account, consider creating a local account and adding it to the local Administrators group, as it frees you from network/login issues. Just remember to clean up afterwards.
- Upgrading SQL Server 2005 databases at compatibility level 90 – straightforward; I never met a single one that had problems after setting the compatibility level to 100.
- Migrating Reporting Services between SQL Servers (both 2005) by means of backup\restore – yes, but in 2005 I had to install SSRS, initialize its database and overwrite it on restore. I never tried it on 2008+, but I presume it works exactly the same. When I tried to restore to a new database and then reconfigure SSRS, it wasn't able to initialize. However, there's a trick with deleting the encryption keys which might also work in this case – worth checking in free time.
What doesn’t work
- Upgrading a read-only database – renders it unusable. I did it (nothing to be proud of) and to fix it I had to put the database in emergency mode, detach it, reattach it and set it back to read-only. Please note this is just what I did and what worked for me – I checked with DBCC CHECKDB afterwards and the database was fine. Why emergency mode, you might ask – database startup fails during the recovery phase, because the database requires an upgrade (the server version has changed) but cannot be upgraded while it's read-only. So you end up with a 2008 R2 server whose databases use internal version 661, while the read-only database remains at internal version 611; the server reports that the data files are inaccessible and you can't even detach it. Emergency mode suppresses recovery and allows detaching – I wasn't brave enough to try setting the database to read-write while in emergency mode; that's also worth checking in free time, to see whether it works and what happens to the database. The alternative: restore the latest backup (since the database is read-only, it should not have changed) to another SQL Server 2005 box, set it to read-write and copy it to the 2008 R2 instance. Lucky me, the whole database was read-only – if it were only a filegroup inside, that would be a bigger issue (see Paul Randal's post at SQLMag.com about it).
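For reference, the sequence described above boils down to a few statements. This is only a sketch of what I did, not a guaranteed fix – the database name and file paths are hypothetical, and you should verify with DBCC CHECKDB afterwards:

```sql
-- Sketch of the recovery sequence; adjust names and paths to your environment
ALTER DATABASE ReadOnlyDB SET EMERGENCY;        -- suppress recovery
EXEC sp_detach_db @dbname = N'ReadOnlyDB';      -- now detaching works

-- Reattach on the 2008 R2 instance; the attach performs the internal upgrade
CREATE DATABASE ReadOnlyDB
ON (FILENAME = N'D:\Data\ReadOnlyDB.mdf'),
   (FILENAME = N'D:\Data\ReadOnlyDB_log.ldf')
FOR ATTACH;

ALTER DATABASE ReadOnlyDB SET READ_ONLY;        -- back to read-only
DBCC CHECKDB (N'ReadOnlyDB');                   -- verify consistency
```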
- Upgrading SQL Server 2000 databases to 2008 R2 – some of them contain deprecated features like old-style joins, and it turns out you can upgrade the server but have to leave the database at compatibility level 80. I've seen this happen a few times, and the only real solution is to review the database structure and remove all the deprecated features. The thing is – you might not always have the time or resources to do it, but this is where SQL Server Upgrade Advisor comes in: its reports can help you track down where the problems are.
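Checking and raising the compatibility level is a one-liner per database – a sketch, with LegacyDB as a hypothetical database name:

```sql
-- Check current compatibility levels across the instance
SELECT name, compatibility_level FROM sys.databases;

-- Raise a single database to the 2008 R2 level (100);
-- old-style joins (*= / =*) will start failing at this point
ALTER DATABASE LegacyDB SET COMPATIBILITY_LEVEL = 100;
```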
- This is me grumbling about a thing that is by design – the number of restarts required to upgrade your SQL Server, especially on Windows Server 2003. If you haven't rebooted since the last patching session, restart. If you don't have Windows Installer 4.5 (and I don't, since I'm starting from 2005), another restart. Upgrade SQL Server, another restart. After the Service Pack, another restart. Now multiply that by the number of instances on the server being upgraded, add some time for preparation before and cleanup after, plan for some contingency, and you can barely squeeze two instances into 6 hours. Luckily this isn't one of the new HP ProLiant DL380 G7 servers that take 10 minutes to reboot, spending 9 minutes on power and thermal calibration. This is also an important point – when planning your maintenance window, consider all the activities. If the installation takes 30 minutes and updates another 15, both require a restart which takes 15 minutes, plus one upfront, then your downtime is not 45 minutes but 90! This is crucial, because if you miscalculate you might not fit in the maintenance window and then you have a service down – not to mention that if something goes wrong, you need time for the rollback plan (which cannot fail – but imagine that the tape drives are busy with your 70%-completed Exchange backup with 2 TB still to go, and you have to wait until it's done to restore a 10 GB database).
Now, I might have another set of comments pretty soon – a round of upgrades is waiting for me in June, plus a retirement. If you have any comments or questions, feel free to share.
Posted in DBA stories, SQL Server | Tagged: maintenance, planning, read-only database, Reporting Services, SQL Server 2005, SQL Server 2008 R2, upgrade | Leave a Comment »
Posted by Szymon Wójcik on May 24, 2012
This time I’m going to write about something less SQL Server related. It’s about my home lab and I’m planning to do with it in the next couple of weeks.
First, a few words about the hardware. Since I'm always tight on budget, I've been on team AMD for 10 years, and while they did a marvellous job in the Athlon XP and Athlon 64 era, the last five years have been completely dominated by Intel. The only field where AMD remained competitive is price – the past tense is not accidental, because I consider last year's release of the AMD FX CPUs (also known as Bulldozer) a marketing move without the necessary (or promised) technical advantage. I use a 3-year-old Phenom II X2 550 with 4 cores unlocked and I'm extremely satisfied with the performance of this little $100 CPU. However, if I were to buy a new computer at the moment, I would definitely go for Intel's i5-2500 or i7-3770 – their performance is simply mind-blowing, not to mention their overclockability (they easily reach 4.5 GHz). Looking at the performance charts (for example at hwbot.org) you will see that today's desktops have higher performance ratings than three-year-old dual-socket servers.
The Phenom II uses the AMD AM3 socket, which is backwards compatible with AM2+ motherboards; accordingly, the CPU has two integrated memory controllers – one for DDR2, the second for DDR3. I tested both and found that two sticks of either DDR2 or DDR3 are no problem for it; however, it has some issues with 4 DDR3 sticks – it requires dropping the memory frequency and loosening the timings to work correctly. I use 16 GB (4×4) of DDR3 RAM and I'm quite happy with it – it really allows me to have fun with virtualization.
Speaking of virtualization, I use virtual machines a lot thanks to a wonderful guide by Jose Barreto – before reading it I had three separate machines, creeping in size and hardly manageable. After reading it and applying the techniques inside I became resource-hungry, since my VMs started to consume all the RAM I had, so within one year I expanded from 4 to 16 GB. It also helped me get some basic acquaintance with Active Directory, clustering, DFS and System Center. I recommend setting up an environment like this to every Windows administrator – it's fantastic for educational purposes. It really does not cost a lot – you might consider using your laptop for it, and if you don't like to mess with another OS on it, install Windows Server 2008 R2 on a USB drive or even a flash stick (I used PWBoot together with a Windows image – confirmed to work with Windows 7, Windows Server 2008 R2 and Hyper-V Server 8 Beta).
In short, here is how this environment looks. You need Windows Server 2008 R2 SP1 Enterprise or Datacenter Edition – you may even use the Evaluation, which is available for free from Microsoft. SP1 is highly recommended – it significantly extends the Hyper-V role features, adding for example the Dynamic Memory option. You install it on a physical machine, configure it to get it working, apply the necessary patches and enable the Hyper-V role. Inside Hyper-V, you need one base virtual disk for each operating system you are going to use and a number of differencing disks pointing to the base disks. A base virtual disk is created using the following procedure:
- Create a new machine with an empty dynamically expanding VHD
- Adjust VM parameters as needed (CPU, RAM), mount OS image and install it
- Apply latest Service Pack and security updates
- Perform sysprep with generalize option and shutdown the machine
After those steps the base VHD is ready and you can start spawning VMs by creating differencing VHDs linked to it. Then you create virtual machines, assign an empty differencing VHD to each VM, configure the parameters (CPU, RAM, network) and start it. This way you can have 5 new VMs operable in about an hour – it saves a lot of time if you install the OS only once. By the way, sysprep is supported in every system since Windows NT 4.0, so if you want to do a lot of experiments you can build a base virtual disk library at the expense of less than 100 GB and then just create ready-to-use VMs with a few mouse clicks.
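The generalize-and-link steps above come down to two commands – a sketch, with hypothetical paths (D:\VHD\Base2008R2.vhd for the base, D:\VHD\VM1.vhd for the child); you can also do the second part from the New Virtual Hard Disk wizard in Hyper-V Manager:

```
rem Inside the template VM: generalize the installation and shut down
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

rem On the host: create a differencing VHD pointing at the base disk
diskpart
create vdisk file="D:\VHD\VM1.vhd" parent="D:\VHD\Base2008R2.vhd"
exit
```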
My home lab contains following VMs at the moment:
- Domain controller
- Storage server – using iSCSI Target
- 2 clustered file servers
- 3 clustered SQL Servers (using 2008 R2 and 2012 RTM – it’s a multi-instance cluster)
- System Center Virtual Machine Manager 2012 RC
- System Center Operations Manager 2012 RC
I rarely have all of them running – no need for that, but this setup fits just fine in 16 GB – if I wanted to add more VMs to this, I would definitely require more RAM or build another Hyper-V host and play with Quick/Live Migration.
Since there are two factors limiting me at the moment – RAM and disk space – I'm thinking of doing something spectacular with this setup, but I don't have too many options. At first I considered building a massive NT 4.0 environment, but since NT 4 allows only 2 servers in a cluster, it's not that interesting (on the other hand, spawning 50 NT 4s in an hour seems feasible with help from SCVMM, though it's of little use apart from pure fun). I might also consider Windows 2000, but Advanced Server requires 128 MB RAM, the same as Windows Server 2003 Enterprise Edition. So it remains to be decided – maybe an 8-node failover cluster of SQL Server 2005 on Windows Server 2003 is not that far away.
This lab definitely needs a facelift – an upgrade from Windows Server 2008 R2 to 2012 once the RC comes out at the end of June. This will also allow me to rebuild the whole environment, since it got a bit messy during the last year. I might scrap SCOM in favor of SharePoint 2010 and have some fun with the new BI features of SQL Server 2012. I'm also considering playing with big data by distributing the load between a number of virtual computers.
Well, that’s it for now, so if you have some comments on a home lab design or want to ask something, please comment, I’m awaiting responses.
Posted in Windows Server | Tagged: amd, cpu, Hyper-V, intel, lab, phenom II, virtual machine, windows server 2008 r2, windows server 2012 | 1 Comment »
Posted by Szymon Wójcik on May 16, 2012
As with every SQL Server release, 2012 also has a white paper which covers in detail the process of upgrading to SQL Server 2012. I'm sure that by now a lot of people have already tried it in their labs, but it's good to have a document that describes the whole scenario with its risks and possible caveats.
The document is available here. I strongly recommend reading it not only for the technical content (which is obviously useful to have in one place), but also for the project management-related information it contains. It also features ready-to-use checklists you might want to incorporate into your upgrade projects.
Posted in SQL Server | Tagged: project management, SQL Server 2012, technical, upgrade | Leave a Comment »
Posted by Szymon Wójcik on May 8, 2012
Now it’s time for something more useful These pieces of T-SQL come in handy when auditing permissions, especially tracking extra db_owner members. There is also a stored procedure shipped by Microsoft named sp_HelpRoleMember but it returns additional server-level principal SID, which is not always required.
List all members of a database role in one database
select r.name as role_name, m.name as member_name from sys.database_role_members rm
inner join sys.database_principals r on rm.role_principal_id = r.principal_id
inner join sys.database_principals m on rm.member_principal_id = m.principal_id
-- where r.name = 'db_owner' and m.name != 'dbo' -- you may want to uncomment this line
List all members of a database role in all databases
To get results from all databases you may try a slightly adapted version (remember about the sp_MSforeachdb issues!):
create table ##RolesMembers ([Database] sysname, RoleName sysname, MemberName sysname)
exec dbo.sp_MSforeachdb 'insert into ##RolesMembers select ''[?]'', ''['' + r.name + '']'', ''['' + m.name + '']''
from [?].sys.database_role_members rm
inner join [?].sys.database_principals r on rm.role_principal_id = r.principal_id
inner join [?].sys.database_principals m on rm.member_principal_id = m.principal_id
-- where r.name = ''db_owner'' and m.name != ''dbo'' -- you may want to uncomment this line';
select * from ##RolesMembers
order by [Database], [RoleName]
drop table ##RolesMembers
List all members with their associated logins using sp_helprolemember
create table ##RolesMembers (DbRole sysname, MemberName sysname, MemberSID varbinary(85))
insert into ##RolesMembers exec sp_helprolemember 'db_owner'
select rm.DbRole as RoleName, rm.MemberName, sp.name as LoginName from ##RolesMembers rm
inner join sys.server_principals sp on rm.MemberSID = sp.sid
drop table ##RolesMembers
Those are the methods I use to check database role membership quickly. If you have some of your own and want to share them, I would love to see them.
Posted in SQL Server | Tagged: database roles, database users, db_owner, role membership, sp_helprolemember | Leave a Comment »
Posted by Szymon Wójcik on May 1, 2012
SQL Server 2012 features a lot of improvements in all of its aspects. Microsoft boasts about cloud readiness in all its new products – Windows Server, System Center and SQL Server as well. This post describes one of the Azure-like features – contained databases.
A contained database, as stated in Books Online, is a database which is isolated from other databases and from the SQL Server instance. This means that the metadata required for database operation is included in the database itself. In SQL Server 2012 two containment models are introduced:
- full containment – all the metadata exists in a database and there is no dependence on SQL Server instance
- partial containment – some of the metadata exists in a database and some on the instance-level
At this moment SQL Server 2012 supports only partial containment, which also implies that uncontained entities are merely identified, whereas in full containment mode they could be enforced against (see Aaron Bertrand's post about containment). There are two ways of identifying uncontained entities in a database – the sys.dm_db_uncontained_entities DMV and the database_uncontained_usage Extended Event. Beware – both show only those entities which are identified at run time.
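A quick way to experiment with this – a sketch, with AppDB as a hypothetical database name; note that the instance-level "contained database authentication" option has to be enabled first:

```sql
-- Allow contained databases on the instance
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Switch an existing database to partial containment
ALTER DATABASE AppDB SET CONTAINMENT = PARTIAL;

-- List entities that would break containment
USE AppDB;
SELECT class_desc, major_id, statement_type, feature_name
FROM sys.dm_db_uncontained_entities;
```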
A term closely related to containment level is the database boundary, which separates elements that exist in a database (the database model) from those on the instance level (the management model). We can say that contained elements exist within a database, whereas uncontained elements cross the database boundary. Of course, being uncontained is transitive – if an entity references or depends on another entity which is uncontained, it is uncontained as well.
There are several benefits of contained databases:
- probably the most straightforward is easy movement of contained databases – since the metadata is stored inside the database, it is easier to move around. Until now you could do backup\restore, import\export or detach\copy\attach, but none of these methods takes care of login mappings, server-level security, linked servers, Database Mail settings or SQL Server Agent jobs;
- following from the previous is the usefulness for database mirroring or AlwaysOn failover – if a database is contained, all you have to do after a failover is point your connection strings to the new location, because the database is already there with all the information required to operate;
- database development – a developer might not have all the details required to properly set up a database at the initial stage, and may focus on the job at hand instead of worrying about all possible environmental impacts;
- simplified database administration – in certain cases a database owner is required to have elevated permissions in server-level roles; this may be mitigated with database containment.
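The portability benefits above rest largely on contained users – users whose password is stored in the database instead of being mapped to a server login. A minimal sketch, assuming a partially contained database AppDB (names and password are hypothetical):

```sql
USE AppDB;

-- A contained user authenticates at the database boundary;
-- no server-level login is required
CREATE USER app_user WITH PASSWORD = N'Str0ng!Passw0rd';
ALTER ROLE db_datareader ADD MEMBER app_user;
```

After moving the database, app_user can connect right away (with the database specified as the initial catalog in the connection string), with no login re-mapping needed.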
Of course, there is always a downside – since a database is supposed to be contained, collation dependencies also had to be redesigned. For a non-contained database there are three collations to consider:
- DATABASE_DEFAULT – for user data (default)
- tempdb collation – for temporary data (default)
- instance default collation – for variables, cursor names and goto labels
In a contained database we are not supposed to rely on instance information or tempdb, hence the CATALOG_DEFAULT collation, which is used by system metadata and temporary objects in the database (equal to Latin1_General_100_CI_AS_WS_KS_SC). There are only two collations to consider:
- DATABASE_DEFAULT – for user data and temporary data (default)
- CATALOG_DEFAULT – for metadata, temporary metadata, variables, cursor names and goto labels.
See more details in Books Online article on contained database collations.
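The practical difference shows up when comparing user data with metadata – a sketch, assuming a contained database whose default collation differs from the catalog collation (the table name is hypothetical):

```sql
DECLARE @table_name sysname = N'Orders';  -- hypothetical table name

-- Object names in system views use CATALOG_DEFAULT in a contained
-- database; the explicit COLLATE clause keeps the comparison valid
-- regardless of the database default collation
SELECT t.name
FROM sys.tables AS t
WHERE t.name = @table_name COLLATE CATALOG_DEFAULT;
```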
Lastly, there are several limitations of contained databases, which follow from the separation from system databases and from the collation changes (taken from Books Online):
- replication, change tracking and change data capture are not permitted
- numbered procedures are not allowed
- schema-bound objects must not depend on built-in functions affected by the collation changes
- binding changes resulting from the collation changes
- temporary stored procedures are currently permitted but they are marked for deprecation in future versions of contained databases
Wrapping it all up – database containment is an interesting option and it will surely be a hot subject in future versions of SQL Server. It will enhance database portability, but there are some points to consider when implementing it (see the collation and supported-features notes above). I for certain am looking forward to working with it.
Posted in SQL Server, Uncategorized | Tagged: collation, contained databases, database boundary, partial containment, SQL Server 2012, sys.dm_db_uncontained_entities | Leave a Comment »