Miscellaneous linked server creation – aliases, other database providers and Excel

After publishing my recent posts regarding linked servers I noticed there are still some related topics that are interesting but not so straightforward. So, I decided to describe them in detail to cover these points fully. This will involve a lot of messing with connection strings – I recommend using http://www.connectionstrings.com as a reference.

Here we go:

  1. Create linked server with different name and Windows Authentication.

  2. This requires using the OLE DB provider for SQL Server – we will use a different name (alias) for the linked server and provide a full connection string like:

    Provider=SQLNCLI10;Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=True;

    Don’t forget to choose the correct security option – I vote for number two or three! :)
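The same alias setup can be scripted instead of clicking through SSMS – a sketch with placeholder server, alias and database names:

```sql
-- Create a linked server named SQL2 (an alias) pointing at myServerAddress.
-- @datasrc is omitted because the full connection string goes into @provstr.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'SQL2',     -- the alias you want to use
    @srvproduct = N'',         -- must be empty when @provider is specified
    @provider   = N'SQLNCLI10',
    @provstr    = N'Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=True;';

-- Security option "two": connect using the login's current security context.
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'SQL2',
    @useself    = N'True';
```

After that, four-part queries like SELECT * FROM SQL2.myDataBase.dbo.sometable work against the alias.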

  3. Create linked server with automatic failover (for use with AlwaysOn/database mirroring).

  4. This is an extension of the idea above – we will use the OLE DB provider and a different name (alias) for the linked server, and provide a full connection string with a definition of the failover partner, like:

    Provider=SQLNCLI10;Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=True;Failover Partner=mirrorServer

    Adding a Failover Partner parameter to the connection string allows the client to attempt automatic reconnection in case of automatic failover, but you have to keep the mirror server up-to-date with server objects, such as logins or Agent jobs, otherwise your application might behave erratically.
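The failover-aware variant differs from the plain alias only in the provider string – a sketch, again with placeholder names:

```sql
-- Linked server whose connection string carries the mirroring failover partner.
-- Note: the Failover Partner value only helps if the initial connection
-- reaches the principal, which then hands the partner name to the client.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'SQL2_MIRRORED',
    @srvproduct = N'',
    @provider   = N'SQLNCLI10',
    @provstr    = N'Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=True;Failover Partner=mirrorServer;';
```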

  5. Create linked server to MySQL Server.

  6. This one is trickier. There is no official OLE DB provider for MySQL, so we have to use ODBC instead. The driver has to be downloaded from the MySQL site and installed, and then we need to define a System DSN before we can connect to MySQL. After that it’s just a matter of providing the right connection string, for example:

    Driver={MySQL ODBC 5.1 Driver};Server=myServerAddress;Database=myDataBase;User=myUsername;Password=myPassword;Option=3;

    Let’s try it. Installation of the driver is straightforward, so the fun starts with the System DSN creation. I have a MySQL server ready and waiting at mysql5.beton.test, with one database and a table inside that contains two records:

    mysql> use beton
    Database changed
    mysql> select * from table1;
    +----------+-----------+
    | idTable1 | table1col |
    +----------+-----------+
    |        1 | aaaaa     |
    |        2 | acbd      |
    +----------+-----------+
    2 rows in set (0.00 sec)

    First I have to create an ODBC data source (through Administrative Tools -> ODBC Data Sources):

    – since I am on a 64-bit system I am using the 64-bit driver – check whether you need 32-bit or 64-bit;
    – I set up System DSN so that SQL Server can use it;
    – I choose Unicode driver and then I provide connection details – data source name, description, target server and port, user and password to connect to MySQL server, and startup database;
    – upon test connection it says “Connection successful”.

    After that initial setup I can proceed with linked server creation in SQL Server. In SSMS, I go to New Linked Server and provide the following parameters:

    – Linked Server Name: MYSQL5 (it’s an alias I want to use);
    – Provider – Microsoft OLE DB Provider for ODBC Drivers;
    – Data Source – target server name, mysql5.beton.test in my case;
    – test connection – it should succeed at this point.
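The same linked server can be created in T-SQL; here MYSQL5 is the alias and MySQL5 is the System DSN name set up earlier – adjust both, plus the credentials, to your environment:

```sql
-- MSDASQL is the Microsoft OLE DB Provider for ODBC Drivers.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'MYSQL5',    -- alias used in queries
    @srvproduct = N'MySQL',
    @provider   = N'MSDASQL',
    @datasrc    = N'MySQL5';    -- System DSN name

-- Map all local logins to one MySQL account (see the notes below on why
-- this account should have minimal privileges).
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname  = N'MYSQL5',
    @useself     = N'False',
    @locallogin  = NULL,        -- applies to every local login
    @rmtuser     = N'myUsername',
    @rmtpassword = N'myPassword';
```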

    A few notes about it:

    1. I explicitly mentioned user/password in the connection string. This causes all connections to be done in the context of this user. I couldn’t manage to get the connection working when using user mapping or domain accounts (might be related to ODBC setup). I’ll work on that but I am also open to any ideas about how to get it done.
    2. This does not allow for the catalog browsing we are used to with a remote SQL Server – you can see the tables, but you can’t simply do SELECT col1, col2 FROM table1. It is caused by incompatibility of the metadata in information_schema, so you have to prepare your queries in MySQL syntax and then use OPENQUERY, which has to be enabled first via sp_configure:

      sp_configure 'ad hoc distributed queries', 1
      reconfigure with override

      Then you can query your MySQL server like

      select * from OPENQUERY([MYSQL5], 'select * from table1;')

      This is where the warning light should start flashing. Since any user can use OPENQUERY, and the permissions on the remote server are determined by the ODBC/linked server definition, you really want to give the account used to connect the least privileges possible – otherwise be prepared to face uncontrolled data modifications. Also, ad hoc distributed queries is a server-wide setting – once you enable it, everyone can use it in all sorts of ways, so keep that in mind if you want to maintain control over your data (you can also try the “Disallow ad hoc access” option of the linked server provider, but remember that it affects all linked servers using this provider).
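On the MySQL side, limiting the blast radius can look like this – a sketch with a hypothetical account name, granting read-only access to just the one schema the linked server needs:

```sql
-- Run on the MySQL server: a dedicated, minimally privileged account
-- for the linked server connection.
CREATE USER 'sql_linked'@'%' IDENTIFIED BY 'a_strong_password';
GRANT SELECT ON beton.* TO 'sql_linked'@'%';
FLUSH PRIVILEGES;
```

With only SELECT granted, an OPENQUERY-based UPDATE or DELETE from the SQL Server side will simply fail on the MySQL side.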

      With both PostgreSQL and MySQL, you don’t see the structure of the remote database in Object Explorer until you enable “Allow inprocess”. When this option is disabled, it should protect SQL Server from errors caused by the provider. However, I observed SQL Server itself crashing when trying to use an ODBC linked server provider with “Allow inprocess” disabled. Steps to reproduce:

      1. Install ODBC driver – I used official drivers (64-bit version) for MySQL and for PostgreSQL.
      2. Set up a System DSN and confirm it works successfully using test connection option.
      3. In SSMS, create a linked server for MySQL/PostgreSQL. Check that “Allow inprocess” option is unchecked in MSDASQL provider properties.
      4. Open a new query window and run a query similar to SELECT * FROM OPENQUERY(linked_server, 'SELECT * FROM table'), where linked_server is the name of the newly created linked server and table is any table. This will give an error like:
      5. Msg 7399, Level 16, State 1, Line 8
        The OLE DB provider "MSDASQL" for linked server "MYSQL5" reported an error. Access denied.
        Msg 7350, Level 16, State 2, Line 8
        Cannot get the column information from OLE DB provider "MSDASQL" for linked server "MYSQL5".
      6. Run this query two more times – the first run gives exactly the same error, but after the second SQL Server crashes:
        Faulting application name: sqlservr.exe, version: 2011.110.3000.0, time stamp: 0x5082086a
        Faulting module name: ntdll.dll, version: 6.2.9200.16384, time stamp: 0x5010acd2
        Exception code: 0xc0000374
        Fault offset: 0x00000000000ea2b9
        Faulting process id: 0xdbc
        Faulting application start time: 0x01cec279fec9e95b
        Faulting application path: C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
        Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
        Report Id: 665792a8-2e6d-11e3-93fd-00155d040513
        Faulting package full name: 
        Faulting package-relative application ID: 
      7. I experienced this on SQL Server 2012 SP1 builds 3000 and 3128, and I suspect it’s a bug in 2012 SP1, as I haven’t found it mentioned in the changelists of any 2012 SP1 CUs. I also found a similar topic on the SQLServerCentral forums which also mentions build 3128. At first I suspected GDR update 3128 to be the cause, but the issue persists after uninstalling it and going back to 2012 SP1 RTM.
      8. The workaround is to enable “Allow inprocess” on the MSDASQL linked server provider – after enabling it, all those problems go away.
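If you prefer not to click through SSMS, the provider option can also be flipped with the sp_MSset_oledb_prop procedure – it is undocumented but commonly used, so treat this as a sketch:

```sql
-- Enable "Allow inprocess" for the MSDASQL provider (requires sysadmin).
EXEC master.dbo.sp_MSset_oledb_prop N'MSDASQL', N'AllowInProcess', 1;
```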

  7. Create linked server to PostgreSQL Server.

  8. Linking to PostgreSQL is similar to MySQL: first you have to install the ODBC driver for PostgreSQL (available here), set up a System DSN and create a linked server referencing the ODBC definition. Then query the linked server using OPENQUERY or a four-part name.

  9. Create linked server to Excel/Access.

  10. There is a comprehensive guide to linking to Excel/Access available here. In short, you have to install the data provider (Microsoft.Jet.OLEDB.4.0 for Excel up to 2003, or Microsoft.ACE.OLEDB.12.0 for 2007 and newer) and then use it when defining the new linked server. Sample T-SQL code:

    EXEC sp_addlinkedserver
        @server = 'ExcelServer2',
        @srvproduct = 'Excel', 
        @provider = data_provider_name,
        @datasrc = filename,
        @provstr = 'excel_version;IMEX=1;HDR=YES;'

    In the pseudocode above, data_provider_name is the provider to be used (based on the Excel/Access version) and excel_version is Excel 8.0 for 2003 or Excel 12.0 for 2007 and above. For more details I recommend visiting the link mentioned above.
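For instance, a concrete version of the pseudocode for a 2007+ workbook, with a hypothetical file path, plus a sample query:

```sql
EXEC sp_addlinkedserver
    @server     = 'ExcelServer2',
    @srvproduct = 'Excel',
    @provider   = 'Microsoft.ACE.OLEDB.12.0',
    @datasrc    = 'C:\Data\MyWorkbook.xlsx',   -- hypothetical path
    @provstr    = 'Excel 12.0;IMEX=1;HDR=YES;';

-- Query a worksheet via a four-part name; Sheet1$ is the worksheet name.
SELECT * FROM ExcelServer2...[Sheet1$];
```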

Summary – in this post I described how to use aliases for linked SQL Servers, how to set up a linked server for MySQL/PostgreSQL and Excel. I also showed an error I spotted for ODBC linked server provider in SQL Server 2012 SP1. As usual, I await comments/questions.

I’m an MCSA: SQL Server 2012, it’s official! :)

As of April 29, 2013 I am a Microsoft Certified Solutions Associate (MCSA): SQL Server 2012. I passed the last of the three exams (70-463) required for this title.
It was a difficult one – especially since I hadn’t had much SSIS experience in the last 18 months other than creating maintenance plans. I had to spend a few weekends exercising with Evaluation Edition in order to feel comfortable with my knowledge.

For all of you preparing for this exam, I recommend going through all the SSIS components mentioned in the course curriculum and exercising the various concepts. The theoretical basis is also important, but the main focus is on the practical aspects of development in SSIS, so do your due diligence.

I still plan to complete the remaining two exams required to become MCSE: Data Platform this year, so hold on – I will keep posting about that.

Building a clustered test lab with Windows Server 2012 RC and SQL Server 2012 RTM Evaluation

I have already described what my test lab at home looks like; however, I find it interesting enough to provide details on how to set it up. This post is the first in a series and covers planning and preparing the environment. The next posts will be about:

  • installing Windows Server 2012 in a VHD and enabling the Hyper-V role
  • preparing virtual machine template and setting up virtual machines
  • preparing the environment for setting up a failover cluster
  • installing SQL Server on 2-node cluster

I am using concepts shown in a blog post by Jose Barreto describing this setup with Windows Server 2008 R2 and SQL Server 2008 R2. We will use Windows Server 2012 and SQL Server 2012, but the main idea is the same. To build this lab you require:

  • a PC which:
    • has a CPU that supports hardware virtualization (for Intel you want a CPU supporting VT-x technology, for AMD – AMD-V) – most non-low-end CPUs do
    • has minimum of 4 GB RAM (8 GB or more recommended)
    • has 50 GB free HDD space (up to you)
  • some software – all available for free:
    • Windows Server 2012 RC or Hyper-V Server 2012 RC installation media (I will use “full” Windows Server as I didn’t manage it using the command line/PowerShell in Hyper-V Server) – I don’t recommend using the VHD provided by Microsoft, since it expands from 8 to 40 GB during boot and you’d need a lot more disk space;
    • SQL Server 2012 Evaluation – valid for 180 days, which is more than enough for a start (you might use Developer Edition if you have one – then you don’t have to worry about expiration)
  • the whole procedure takes about 8 hours (depending on your system speed, mainly I/O, so if you’re blessed with RAID 0 on SSD drives you might actually do it much faster) – be warned!

What we’re going to do is install a fresh Windows Server 2012 operating system in a VHD (so as not to mess with the existing setup – you may install on a physical disk if you don’t mind; performance-wise the difference is negligible). I assume that the main OS on the PC we’re using is Windows 7/Server 2008 R2 or newer, because XP does not support native boot from VHD (not sure about Vista/Server 2008, but does anyone use those anymore? :P). Within this newly installed Server 2012 we will enable the Hyper-V role and set up 4 virtual machines using one base template and four differencing disks. Then we will configure those machines like this:

  1. First machine is a domain controller and a DNS server – clustering requires a domain, and Active Directory requires DNS.
  2. Second machine is a storage controller – in the post by Jose (mentioned above) a separate download of the iSCSI Target is required for Windows Server 2008 R2; however, it’s incorporated in Server 2012, so we don’t have to install anything, only enable additional OS features and configure them.
  3. Third machine is an active node of SQL Server failover cluster.
  4. Fourth machine is a passive node of SQL Server failover cluster.
  5. You may add more nodes to the cluster or consider setting up AlwaysOn.

This makes our lab completely standalone, so it can operate even with no connection to the outside world. If you don’t need it this way – for example, you already have a domain controller – you may skip creating the first VM. If you have a NAS/SAN, you may use it and skip the second machine. You may also consider combining the first two machines into one (if you’re low on resources and don’t want to spawn a horde of machines) – AD services for a domain as “huge” as this one will not be very demanding, and the storage workload is definitely not going to impact a domain controller significantly. On the other hand, you may also try setting up additional machines to see how it works – I made some with System Center 2012 RC, but you could also build a load-balanced SharePoint farm with a clustered database engine.

When it comes to the network – no additional configuration will be done, as clustering since Windows Server 2008 removed the need for a dedicated heartbeat, so as long as our VMs see each other it will be fine (you may try to include a DHCP server in this setup, but remember to keep it contained so that it does not try to provide addresses in every network it may find). Just remember – a live system may not have exactly the same architecture, and whatever works for our homemade cluster may not be enough when deployed into production (but hey, it’s a lab we’re building!).

When it comes to resource assignment for VMs I use the following pattern (I have Phenom II X4 CPU and 16 GB in my rig):

  1. For domain controller – 2 CPUs and 768 – 1024 MB RAM.
  2. For storage server – 2 CPUs and 768 MB RAM.
  3. For SQL Servers – 4 CPUs and 2048 – 3072 MB RAM.
  4. Rest – whatever I feel like/need.

When I set this up on a laptop with memory limitations (4 GB RAM), I cut the first two VMs down to 512 MB RAM and the SQL Servers to 1024 – 1280 MB. This keeps things operable, although it sacrifices a lot of performance. Therefore 8 GB RAM is recommended.

And this concludes our preparation. The next post will take us through host setup – we will install Windows Server 2012 in a VHD and make it bootable, so stay sharp!

SQL Server 2012 – 70-461 and 70-462 exams passed

Yesterday I found some interesting information on the Prometric site – I passed the 71-461 and 71-462 exams (they will count as the regular 70-461/462). I don’t know the exact scores, but I did it and I am really happy about it. I only wish I had taken more of the beta exams – I might already be an MCSA/MCSE by now.
Results will appear on my transcript later, probably by the time I get the official confirmation e-mail from Microsoft, which takes up to a few weeks.
Time to prepare for 70-463 and then upgrade to MCSE: DP. I wonder if it’s possible to do it this way – upgrade my MCITP DBA from SQL Server 2005 to 2008 (70-453), then earn MCSA: SQL Server 2008 (70-448), pass 70-463 to get MCSA: SQL Server 2012, and finally upgrade the MCITP to MCSE: DP with 70-459.

SQL Server 2012 Upgrade Technical Guide white paper released

As with every SQL Server release, 2012 also has a white paper which covers in detail the process of upgrading to SQL Server 2012. I’m sure that by now a lot of people have already tried it in their labs, but it’s good to have a document that describes the whole scenario with its risks and possible caveats.

The document is available here. I strongly recommend reading it, not only for the technical content (which is obviously useful to have in one place), but for the project management-related information it contains. It also features ready-to-use checklists you might want to incorporate into your upgrade projects.

Contained databases in SQL Server 2012 overview

SQL Server 2012 features a lot of new improvements in all of its aspects. Microsoft boasts their cloud readiness in all the new products – Windows Server, System Center and SQL Server as well. This post will describe one of the Azure-like features – contained databases.

A contained database, as stated in Books Online, is a database which is isolated from other databases and from the SQL Server instance that hosts it. This means that part of the metadata required for database operation is included in the database itself. In SQL Server 2012 two containment models are introduced:

  • full containment – all the metadata exists in a database and there is no dependence on SQL Server instance
  • partial containment – some of the metadata exists in the database and some at the instance level

At this moment SQL Server 2012 supports only partial containment, which also implies that uncontained entities are merely identified, whereas in full containment mode containment could be enforced (see Aaron Bertrand’s post about containment). There are two ways of identifying uncontained entities in a database – the sys.dm_db_uncontained_entities DMV and the database_uncontained_usage Extended Event. Beware – both show only those entities which are identified at run time.
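The DMV route is a one-liner; a sketch of what to look at (column list trimmed to the essentials):

```sql
-- Lists entities in the current database that cross the database boundary,
-- as identified at run time.
SELECT class_desc,
       OBJECT_NAME(major_id) AS object_name,   -- NULL for non-object classes
       feature_name
FROM sys.dm_db_uncontained_entities;
```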

A term closely related to the containment level is the database boundary, which separates elements that exist in the database (the database model) from those at the instance level (the management model). We can say that contained elements exist within a database, whereas uncontained elements cross the database boundary. Of course, being uncontained is transitive – if an entity references or depends on another entity which is uncontained, it is uncontained as well.

There are several benefits of contained databases:

  • probably the most straightforward is easy movement of contained databases – since the metadata is stored inside the database, it is easier to move it around. Until now, you could do backup/restore, import/export or detach/copy/attach, but none of these methods takes care of login mappings, server-level security, linked servers, Database Mail settings or SQL Server Agent jobs;
  • following from the previous point is the usefulness for database mirroring or AlwaysOn failover – if a database is contained, all you have to do after failover is point your connection strings to the new location, because the database is already there with all the information required to operate;
  • database development – a developer might not have all the details required to properly set up a database at the initial stage, and may focus on the job at hand instead of worrying about all possible environmental dependencies;
  • simplified database administration – in certain cases a database owner is required to have elevated permissions via server-level roles; this may be mitigated with database containment
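Turning partial containment on for an existing database is short – a sketch with hypothetical database and user names:

```sql
-- Containment must first be allowed at the instance level.
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Switch the database to partial containment.
ALTER DATABASE MyDatabase SET CONTAINMENT = PARTIAL;

-- A contained user authenticates at the database, not via a server login.
USE MyDatabase;
CREATE USER ContainedUser WITH PASSWORD = 'a_strong_password';
```

Connection strings for contained users must then specify the database explicitly (Initial Catalog), since authentication happens inside it.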

Of course, there is always a downside – since a database is supposed to be contained, collation handling also had to be redesigned. For a non-contained database there are three collations to consider:

  • DATABASE_DEFAULT – for user data (default)
  • tempdb collation – for temporary data (default)
  • instance default collation – for variables, cursor names and goto labels

In a contained database we are not supposed to rely on instance information or tempdb, hence the CATALOG_DEFAULT collation, which is used by system metadata and temporary objects in the database (fixed at Latin1_General_100_CI_AS_WS_KS_SC). There are only two collations to consider:

  • DATABASE_DEFAULT – for user data and temporary data (default)
  • CATALOG_DEFAULT – for metadata, temporary metadata, variables, cursor names and goto labels.

See more details in Books Online article on contained database collations.

Lastly, there are several limitations of contained databases which follow from separation from system databases and collation changes (taken from Books Online):

  • replication, change tracking and change data capture are not permitted
  • numbered procedures are not allowed
  • schema-bound objects that depend on built-in functions with collation changes
  • binding change resulting from collation changes
  • temporary stored procedures are currently permitted but they are marked for deprecation in future versions of contained databases

Wrapping it all up – database containment is an interesting option and it will surely be a hot subject in future versions of SQL Server. It will enhance database portability, but there will be some points to consider when implementing it (see the collation and supported-features notes above). I for certain am looking forward to working with it.

Database mirroring rolling upgrade

I was testing a database mirroring rolling upgrade in my lab when upgrading from SQL Server 2012 RC0 to RTM shortly after RTM became available. One of the side results was proven incompatibility of internal database versions between RC0 and RTM of SQL Server 2012 (705 and 706, respectively). In this post I would like to focus on the main activity that was performed.

Imagine you have database mirroring with automatic failover implemented in your environment and you want to perform an upgrade of the server instances while minimizing possible downtime. It does not seem very challenging if you follow the right procedure:

  1. Disable witness.
  2. Perform maintenance on the mirror.
  3. Fail over.
  4. Since the old principal is now the mirror, perform maintenance on it.
  5. Fail over again.
  6. Enable witness.

If you look at it closely, you’ll notice a close resemblance to a failover cluster rolling upgrade, where you upgrade passive nodes one by one. In this case, steps 1 and 6 ensure that no unexpected failover occurs that would disrupt the progress of the upgrade.

Keep in mind that, as always with failover in database mirroring, it’s quick but there always is a downtime, whose length is determined by several factors:

  • length of REDO queue on the mirror;
  • processing power of the mirror server and the remaining workload it has to deal with;
  • SQL Server edition on the mirror.

Whereas the first two points are quite straightforward – failover requires the roll-forward to complete, which is when the REDO queue is processed – the last one is connected to them indirectly, because the roll-forward process uses one thread in the following situations:

  • Standard Edition is installed or
  • Enterprise Edition is installed and the instance has up to 4 processor cores available

If you have Enterprise Edition installed and more than 4 cores are available for SQL Server, roll-forward uses one thread for every 4 cores the server utilizes.
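The REDO queue can be watched from the mirror through the mirroring performance counters, for example:

```sql
-- Run on the mirror: redo queue size (KB) per mirrored database.
SELECT instance_name AS database_name,
       cntr_value    AS redo_queue_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name = 'Redo Queue KB';
```

A large redo queue right before a planned failover translates directly into a longer outage, so it is worth checking before step 6 of the plan below.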

In addition, you should remember to protect your data, which means not only running your standard backup routine prior to maintenance, but also switching from high-performance mirroring to high-safety mode. This helps ensure that all transactions on the principal have been delivered to the mirror, minimizing the risk of data loss. You should also check your database(s) for consistency before and after failovers.

This translates into a detailed action plan like this (S1 is the principal, S2 is the mirror):

  1. Run DBCC CHECKDB on all databases on S1.
  2. Perform a full backup of all databases on S1.
  3. If running mirroring in high-performance mode, switch to high-safety.
  4. Remove witness from mirroring session.
  5. Perform upgrade on S2.
  6. Fail over all databases to S2.
  7. Run DBCC CHECKDB on all databases on S2.
  8. Perform upgrade on S1.
  9. Fail over all databases to S1.
  10. Run DBCC CHECKDB on all databases on S1.
  11. Perform full backup of all databases on S1.
  12. If mirroring was initially set to high-performance mode, switch mirroring modes back.
  13. Upgrade witness (actually you can do it any time).
  14. Add witness to mirroring session.
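The mirroring-specific steps in the plan above map to a handful of ALTER DATABASE commands – a sketch with placeholder database and witness names; run each on the partner indicated:

```sql
-- Step 3 (on the principal): switch high-performance mirroring to high-safety.
ALTER DATABASE MyDatabase SET PARTNER SAFETY FULL;

-- Step 4 (on the principal): remove the witness from the session.
ALTER DATABASE MyDatabase SET WITNESS OFF;

-- Steps 6 and 9 (on the current principal): manual failover.
ALTER DATABASE MyDatabase SET PARTNER FAILOVER;

-- Step 12 (on the principal): back to high-performance mode, if that is the normal setup.
ALTER DATABASE MyDatabase SET PARTNER SAFETY OFF;

-- Step 14 (on the principal): re-add the witness.
ALTER DATABASE MyDatabase SET WITNESS = 'TCP://witnessServer:5022';
```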

I did the upgrade from 2012 RC0 to RTM this way and it took me about 60 minutes on my lab PC at home. The application using those database servers was System Center Configuration Manager 2012 RC0, and I had no perceivable downtime during the upgrade.

One thing I still have to verify – in the case of an upgrade from SQL Server 2008 R2 to 2012, is it possible to have a witness on 2008 R2 with partners on 2012? If anyone knows, please share it with me.

SQL Server 2012 beta exams – my feelings

Earlier this month I had the opportunity to participate in the beta tests of the SQL Server 2012 certification exams. I have already described which exams are being prepared and what the new certification path is in one of my previous posts, so let me briefly describe what the exams looked like.

First of all, it’s a standard Prometric test. You come to the testing center with two IDs (both with a signature, one of them with a photo of you) half an hour before the exam, sign all the paperwork, leave all unnecessary stuff in a locker and wait till the test administrator calls you. I understand that the exam rules are pretty standard, but I love the part which says “no weapons are allowed at the Prometric test center”. Well, perhaps there are places in the world where you have to fight your way to the test; fortunately, Poland is not one of them.

I signed up for two beta exams (71-461 and 71-462) on one day, mostly due to time pressure. I am currently heavily loaded at work due to new system implementations and I didn’t have much time for real preparation. I had a bunch of VMs set up in my home PC lab and spent some time installing and configuring SQL Server 2012, but didn’t actually go deeper into it – for instance, I had planned to set up an AlwaysOn HA solution as practice, but it ended with reading Books Online a day before the exams. I had no idea then what the certifications would be, apart from the recertification requirement, so it was a blind shot made mostly for fun – certificate-wise it was all five exams or none anyway.

There were two other guys waiting with me to take their tests, and as usual I had bad luck – the computer assigned to me was out-of-date, and applying the Prometric updates took half an hour. It did not impact my test much, but I got a bit anxious at one moment and had to rebuild my focus. Then I left the room to catch my breath and prepare for the second exam, but it turned out that the other computer I was supposed to use hung and required assistance. All in all, I spent almost 6 hours at the testing center and finally took the wrong door while leaving.

The query exam (71-461) was an interesting one. I was expecting a lot of CRUD-like stuff. Instead, there were questions about database design and fundamental concepts such as keys and referential integrity; there was also a bit of DDL, all of course focused on features new to SQL Server 2012. If I were preparing for the 70-461 exam, I would revisit:

  • new built-in functions, such as LEAD, LAG, TRY_PARSE
  • data types, paying special interest to date/time
  • DDL stuff including triggers vs stored procedures
  • locking and isolation levels and their impact on query results

Those are the things which probably impacted my result. What I didn’t like, and strongly argued about in the comments, were the questions related to cursors – a feature I don’t like and try to avoid as much as possible. With CTEs available since SQL Server 2005 I would expect cursors to lose their usefulness, but the test could make me think otherwise.

When it came to 70-462 – the administration exam – I felt absolutely confident. This is by all means the part of SQL Server I am most exposed to (my development skills have declined lately) and I find it most interesting. If you prepare for it, pay special attention to:

  • backup and restore strategies, with reference to recovery models
  • extended events – this diagnostic feature was introduced in SQL Server 2008, but this is the first opportunity to be tested on it
  • AlwaysOn – there is a big hype about it with SQL Server 2012 and you want to know everything about it
  • contained databases – what are they and how to convert standard database to a contained one

I had a few questions about database mirroring, which I found strange given that the feature is deprecated going forward. I even argued about one of them – a single-choice question where, from what I understood, two options were 100% correct.

Speaking of exam quality – after reading a few sources I expected a lot of bugs and holes. I was surprised (in a positive way) to see only one error and a handful of inconsistencies. The results are going to be available when the real exams go live, which is about June 12. I am looking forward to the results – it was a very interesting experience.

SQL Server 2012 certifications – overview and comments

With SQL Server 2012 out, there is going to be another shift in SQL Server certifications. Microsoft is pushing hard on cloud solutions, and it is reflected in all the materials related to the new certifications. There’s new naming, a changed certification path and a recertification requirement. Let’s go through it one by one.

First of all, all MCTS (Technology Specialist) certs are replaced by the MCSA (Microsoft Certified Solutions Associate). It is meant to demonstrate that a person possesses a core set of skills in SQL Server 2012 and requires passing three exams: 70-461 (querying), 70-462 (administration) and 70-463 (data warehousing).

It is the first step in SQL Server 2012 certification and, as you can see, the hype about big data is reflected here in the data warehouse exam, which previously belonged to the BI track. My feeling is that this is definitely going to create confusion, even for seasoned DBAs, who will now have to learn a lot of SSIS just to keep their credentials for the product. On the other hand, the introduction of a querying exam is definitely a step towards scripting and possibly automation, as Microsoft also makes a lot of noise around PowerShell. Combining those two skills – querying and PowerShell – gives administrators a lot of options regarding remote task execution or monitoring.

The name MCSA itself reappears, as there used to be a certificate for systems administrators by that name (Microsoft Certified Systems Administrator) in the times of Windows Server 2000 and 2003, and there are lots of people holding it. This may well lead to some confusion due to the naming (“which MCSA are you? old or new?”).

Now let’s see what changes in MCITP. The IT professionals will now be MCSE (Microsoft Certified Solutions Expert) – again, a name is reused, since it used to be an Engineer, not an Expert. To become a new MCSE you either have to be an MCSA in SQL Server 2012 and pass two more exams, or an MCITP in SQL Server 2008 and pass three exams. There are only two MCSE tracks for SQL Server 2012 (compared to three MCITP flavours in SQL Server 2005/2008): Data Platform and Business Intelligence.

My first thought about Data Platform was that two separate worlds are being brought together (DBAs and developers – and don’t forget data warehouse skills :D), which is actually good. I know a few people doing DBA work who are absolutely uninterested in honing their scripting skills or getting to know even the most basic and commonly used DMVs. On the contrary, I have worked with developers who were completely oblivious to security or performance matters. I understand the purpose of one Data Platform cert – it proves that you are a well-rounded professional with experience and knowledge of both aspects of database-related work. I’m just wondering whether a generalization like that – one cert to rule them both – is not too big.

If you are an MCSE: DP, there’s no quick way of telling whether you’re a great DBA, a fantastic developer, both or – worse – neither. Previous MCITPs could give you a hint, but not proof – having a certification actually proves only that you passed the exam, yet in reality you may suck – and MCSE: DP raises the level of uncertainty even higher.

Speaking of MCITP – if you have one in SQL Server 2008, you may apply for upgrade to MCSE by passing three exams:

  • 70-457 and 70-458 – transition from MCTS to MCSA (the transition to MCSA is the same regardless of whether you’re on the DP or the BI track)
  • 70-459 – for DBAs/developers, or 70-460 for BI people

Finally, if you become an MCSE, you are bound to recertify after three years, which is actually good – you may not work with SQL Server anymore, forget most of the stuff or learn thousands of new things, and this is the way to prove it to the entire world.

Time-wise – as stated on the exam pages, all of 70-461 to 70-467 are going to be available on June 11th, and the upgrades (70-457 to 70-460) are scheduled to be available in August (no exact date yet). However, I expect that due to all those changes SQL Server 2012 certifications will not be as popular as the 2005/2008 ones, mostly because of the preparation you have to go through (3 big exams for a start, 2 more for the higher tier). Personally, I agree with Paul Randal’s opinion that it’s the MCM (soon to be the MCSM) that is the only one that truly matters, since no other SQL Server exam tests your actual knowledge.

My plan is to upgrade to MCITP DBA on SQL Server 2008 (still :D) and then seek the way up to MCSE: DP (undecided yet). I can’t procrastinate on it for too long – SQL Server 2008 certifications are gone in July 2013 (I might do it as I did with 2005 – I earned my MCITP on the penultimate day).

Database mirroring rambling follow-up

In my last post, which contained a number of comments on database mirroring, I left some points for follow-up.

First of all, when it comes to edition requirements, mirroring in SQL Server 2012 stays the same as in previous versions. However, you have to remember that for high-performance (asynchronous) mirroring you have to use Enterprise Edition. This is confirmed by the official licensing documentation.
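A quick way to verify the edition before planning asynchronous mirroring is the documented SERVERPROPERTY function – a minimal sketch:

```sql
-- High-performance (asynchronous) mirroring requires Enterprise Edition.
-- EngineEdition = 3 means the Enterprise feature set (also returned by
-- Developer and Evaluation editions).
SELECT SERVERPROPERTY('Edition')       AS Edition,
       SERVERPROPERTY('EngineEdition') AS EngineEdition;
```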

Second, you have to remember that if you set up mirroring between different database engine versions, the principal has to be of an earlier version than the mirror. The reason for this is backward incompatibility of database versions (don’t confuse this with the compatibility level), which does not allow a database from a newer server to be used with an older one. If you establish such mirroring, you will not be able to fail back, for the same reason – after a failover in this scenario the principal is of the newer database version and incompatible with the mirror. The mirroring session will enter a suspended state and will not be resumable until the mirror’s version matches the principal’s.

I noticed that in the case of SQL Server 2012 CTP0 (build 11.0.1750) and RTM (11.0.2100) the respective database version numbers are 705 and 706, so I could mirror from CTP0 to RTM, but then in order to have a failover possibility an upgrade of the CTP0 instance was required. I had in mind building a more comprehensive list of database versions; all the information I could come up with is summarized in the following table:

SQL Server version                          Internal database version
SQL Server 2012 RTM                         706
SQL Server 2012 CTP0                        705
SQL Server 2008 R2                          661
SQL Server 2008                             655
SQL Server 2005 SP2 + vardecimal enabled    612
SQL Server 2005                             611
SQL Server 2000                             539
SQL Server 7.0                              515
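If you want to check the internal database version on your own instance, one option is the undocumented DBCC DBINFO command (use with care on production systems), or the documented RESTORE HEADERONLY against a backup file – the backup path below is just a placeholder:

```sql
-- Undocumented: look for the dbi_version field in the output.
DBCC DBINFO ('master') WITH TABLERESULTS;

-- Documented alternative: the DatabaseVersion column of RESTORE HEADERONLY.
-- 'C:\backup\mydb.bak' is a hypothetical path.
RESTORE HEADERONLY FROM DISK = 'C:\backup\mydb.bak';
```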


Third point – I was wondering about FileTable in the context of FILESTREAM incompatibility with mirroring. You can’t have both – it’s either mirroring or FILESTREAM. After a short research I found out that a FileTable is actually an extension to FILESTREAM which is designed to work with AlwaysOn Availability Groups. But the basic assumptions remain the same and FILESTREAM is mutually exclusive with mirroring.
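If you want to confirm whether FILESTREAM is in play before attempting to set up mirroring, a simple sketch using the documented SERVERPROPERTY value and the sys.master_files catalog view:

```sql
-- 0 = FILESTREAM disabled at the instance level.
SELECT SERVERPROPERTY('FilestreamEffectiveLevel') AS FilestreamLevel;

-- Databases that contain FILESTREAM data files (type = 2) cannot be mirrored.
SELECT DISTINCT DB_NAME(database_id) AS DatabaseName
FROM sys.master_files
WHERE type = 2;
```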