Miscellaneous linked server creation – aliases, other database providers and Excel

After publishing recent posts regarding linked servers, I noticed there are still some topics about linked servers that are interesting but not so straightforward. So, I decided to describe them in detail to cover these points fully. This will include a lot of messing with connection strings – I recommend using http://www.connectionstrings.com as a reference.

Here we go:

  1. Create linked server with different name and Windows Authentication.

  2. This will require using OLE DB provider for SQL Server – we will use a different name (alias) for the linked server and provide a full connection string like:

    Provider=SQLNCLI10;Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=True;
    

    Don’t forget to choose the correct security option – I vote for number two or three! :)
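    The same alias setup can also be scripted instead of clicking through SSMS. A minimal sketch using sp_addlinkedserver and sp_addlinkedsrvlogin – the alias name and server address are placeholders, adjust them to your environment:

    ```sql
    -- Create a linked server under an alias; the real server name
    -- lives only in the provider string
    EXEC master.dbo.sp_addlinkedserver
        @server     = N'SQL2ALIAS',   -- placeholder alias, any name you like
        @srvproduct = N'',
        @provider   = N'SQLNCLI10',
        @provstr    = N'Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=SSPI;';

    -- Security option: connect using the login's current security context
    EXEC master.dbo.sp_addlinkedsrvlogin
        @rmtsrvname = N'SQL2ALIAS',
        @useself    = N'TRUE';
    ```

    Note that the OLE DB form of integrated security in @provstr is Integrated Security=SSPI (Integrated Security=True is accepted by some providers but SSPI is the canonical spelling for SQLNCLI).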

  3. Create linked server with automatic failover (for use with AlwaysOn/database mirroring).

  4. This is an extension of the idea above – we will use the OLE DB provider and a different name (alias) for the linked server, and provide a full connection string with a definition of the failover partner, like:

    Provider=SQLNCLI10;Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=True;Failover Partner=mirrorServer
    

    Adding a Failover Partner parameter to the connection string allows the client to attempt automatic reconnection in case of automatic failover, but you have to keep the mirror server up to date with server objects, such as logins or Agent jobs, otherwise your application might behave erratically.
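    Scripted as T-SQL, this is the same sp_addlinkedserver call as for a plain alias, with the Failover Partner parameter added to the provider string (alias and server names below are placeholders):

    ```sql
    -- Linked server alias that can follow a mirrored database after failover
    EXEC master.dbo.sp_addlinkedserver
        @server     = N'MIRRORED_DB',   -- placeholder alias
        @srvproduct = N'',
        @provider   = N'SQLNCLI10',
        @provstr    = N'Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=SSPI;Failover Partner=mirrorServer;';
    ```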

  5. Create linked server to MySQL Server.

  6. This one is trickier. There is no official OLE DB driver for MySQL, so we have to use ODBC instead. It has to be downloaded from the MySQL site and installed, and then we need to define a system ODBC data source before we can connect to MySQL. After that it’s just a matter of providing the right connection string, for example:

    Driver={MySQL ODBC 5.1 Driver};Server=myServerAddress;Database=myDataBase;User=myUsername;Password=myPassword;Option=3;
    

    Let’s try it. Installation of the driver is straightforward, so the fun starts with system ODBC creation. I have MySQL server ready and waiting at mysql5.beton.test with one database and a table inside that contains two records:

    mysql> use beton
    Database changed
    mysql> select * from table1;
    +----------+-----------+
    | idTable1 | table1col |
    +----------+-----------+
    |        1 | aaaaa     |
    |        2 | acbd      |
    +----------+-----------+
    2 rows in set (0.00 sec)
    mysql>
    

    First I have to create an ODBC data source (through Administrative Tools -> ODBC Data Sources):

    – since I am on a 64-bit system I am using the 64-bit driver – check whether you need 32-bit or 64-bit;
    – I set up a System DSN so that SQL Server can use it;
    – I choose the Unicode driver and then provide connection details – data source name, description, target server and port, user and password to connect to the MySQL server, and the startup database;
    – upon test connection it says “Connection successful”.

    After that initial setup I can proceed with linked server creation in SQL Server. In SSMS, I go to New Linked Server and provide the following parameters:

    – Linked Server Name: MYSQL5 (it’s an alias I want to use);
    – Provider – Microsoft OLE DB Provider for ODBC Drivers;
    – Data Source – target server name, mysql5.beton.test in my case;
    – test connection – it should succeed at this point.
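    If you prefer T-SQL over the SSMS dialog, the same linked server can be sketched like this – I assume here that the System DSN created earlier is named MySQL5; adjust @datasrc to whatever you named your DSN:

    ```sql
    -- ODBC-backed linked server via the Microsoft OLE DB Provider for ODBC Drivers
    EXEC master.dbo.sp_addlinkedserver
        @server     = N'MYSQL5',   -- alias used in queries
        @srvproduct = N'MySQL',
        @provider   = N'MSDASQL',
        @datasrc    = N'MySQL5';   -- System DSN name (assumption)
    ```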

    A few notes about it:

    1. I explicitly mentioned the user/password in the connection string. This causes all connections to be made in the context of this user. I couldn’t manage to get the connection working when using user mapping or domain accounts (it might be related to the ODBC setup). I’ll work on that, but I am also open to any ideas about how to get it done.
    2. This does not allow for browsing the catalog the way we are used to with a remote SQL Server – you can see the tables, but you can’t simply do SELECT col1, col2 FROM table1. It is caused by incompatibility of metadata in information_schema, so you have to prepare your queries in MySQL and then use OPENQUERY, which has to be enabled first via sp_configure:
    3. sp_configure 'ad hoc distributed queries', 1
      go
      reconfigure with override
      go
      

      Then you can query your MySQL server like

      select * from OPENQUERY([MYSQL5], 'select * from table1;')
      

      This is where the warning light should start flashing. Since any user can use OPENQUERY and the permissions on the remote server are determined by the ODBC/linked server definition, you really want to give the account you use to connect the least access rights possible, otherwise be prepared to face uncontrolled data modifications. Also, ad hoc distributed queries is a server-level setting – once you enable it, everyone can use it in all sorts of ways, so keep that in mind if you want to maintain control over your data (you can also try the “Disallow ad hoc access” option of the linked server provider, but remember that it affects all linked servers using this provider).

      With both PostgreSQL and MySQL, you don’t see the structure of the remote database in Object Explorer until you enable ‘Allow inprocess‘. If this option is disabled, it should protect SQL Server from errors caused by the provider. However, I observed strange behavior – a crash of SQL Server – when trying to use an ODBC linked server provider with the “Allow inprocess” option disabled. Steps to reproduce:

      1. Install ODBC driver – I used official drivers (64-bit version) 5.02.05.00 for MySQL and 9.02.01.00 for PostgreSQL.
      2. Set up a System DSN and confirm it works successfully using test connection option.
      3. In SSMS, create a linked server for MySQL/PostgreSQL. Check that “Allow inprocess” option is unchecked in MSDASQL provider properties.
      4. Open a new query window and run a query similar to SELECT * FROM OPENQUERY(linked_server, ‘SELECT * FROM table’), where linked_server is the name of the newly created linked server and table is any table. This will give an error like:
      5. Msg 7399, Level 16, State 1, Line 8
        The OLE DB provider "MSDASQL" for linked server "MYSQL5" reported an error. Access denied.
        Msg 7350, Level 16, State 2, Line 8
        Cannot get the column information from OLE DB provider "MSDASQL" for linked server "MYSQL5".
      6. Run this query two more times – the first run gives exactly the same error, but after the second SQL Server crashes:
        Faulting application name: sqlservr.exe, version: 2011.110.3000.0, time stamp: 0x5082086a
        Faulting module name: ntdll.dll, version: 6.2.9200.16384, time stamp: 0x5010acd2
        Exception code: 0xc0000374
        Fault offset: 0x00000000000ea2b9
        Faulting process id: 0xdbc
        Faulting application start time: 0x01cec279fec9e95b
        Faulting application path: C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
        Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
        Report Id: 665792a8-2e6d-11e3-93fd-00155d040513
        Faulting package full name: 
        Faulting package-relative application ID: 
        
      7. I experienced this on SQL Server 2012 SP1 builds 3000 and 3128, but I suspect it’s a bug in 2012 SP1, as I haven’t found it mentioned in the changelists of any 2012 SP1 CUs. I also found a similar topic on the SQLServerCentral forums which also mentions build 3128. At first I suspected GDR update 3128 to be the cause, but after uninstalling it and going back to 2012 SP1 RTM the issue persists.
      8. The workaround is to enable “Allow inprocess” on the MSDASQL linked server provider – after enabling it, all those problems go away.
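      The “Allow inprocess” flag can be flipped from T-SQL as well – a sketch using sp_MSset_oledb_prop, an undocumented but commonly used system procedure (verify it exists on your build before relying on it):

      ```sql
      -- Enable "Allow inprocess" for the MSDASQL provider
      EXEC master.dbo.sp_MSset_oledb_prop N'MSDASQL', N'AllowInProcess', 1;
      ```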

  7. Create linked server to PostgreSQL Server.

  8. Linking to PostgreSQL is similar to MySQL: first you have to install the ODBC driver for PostgreSQL (available here), set up a system ODBC data source and create a linked server referencing the ODBC definition. Then query the linked server using OPENQUERY or a four-part name.

  9. Create linked server to Excel/Access.

  10. There is a comprehensive guide to linking to Excel/Access available here. In short, you have to install a data provider (Microsoft.Jet.OLEDB.4.0 for Excel up to 2003 or Microsoft.ACE.OLEDB.12.0 for 2007 or newer) and then use it when defining the new linked server. Sample T-SQL code:

    EXEC sp_addlinkedserver
        @server = 'ExcelServer2',
        @srvproduct = 'Excel', 
        @provider = data_provider_name,
        @datasrc = filename,
        @provstr = 'excel_version;IMEX=1;HDR=YES;'
    

    In the pseudocode above, data_provider_name is the provider to be used (based on the Excel/Access version), and excel_version is Excel 8.0 for 2003 and Excel 12.0 for 2007 and above. For more details I recommend visiting the link mentioned above.
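    As a concrete (hypothetical) instance of that pseudocode – linking a 2007+ workbook at C:\Data\Book1.xlsx via the ACE provider; the file path and sheet name are placeholders:

    ```sql
    EXEC master.dbo.sp_addlinkedserver
        @server     = N'ExcelServer2',
        @srvproduct = N'Excel',
        @provider   = N'Microsoft.ACE.OLEDB.12.0',
        @datasrc    = N'C:\Data\Book1.xlsx',     -- hypothetical path
        @provstr    = N'Excel 12.0;IMEX=1;HDR=YES;';

    -- Worksheets show up as tables named after the sheet with a trailing $
    SELECT * FROM ExcelServer2...[Sheet1$];
    ```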

Summary – in this post I described how to use aliases for linked SQL Servers and how to set up a linked server for MySQL, PostgreSQL and Excel. I also showed an error I spotted in the ODBC linked server provider in SQL Server 2012 SP1. As usual, I await comments/questions.

I’m an MCSA: SQL Server 2012, it’s official! :)

As of April 29, 2013 I became a Microsoft Certified Solutions Associate (MCSA): SQL Server 2012. I passed the last of the three exams (70-463) required for this title.
It was a difficult one – especially since I had not had much SSIS experience in the last 18 months other than creating maintenance plans. I had to spend a few weekends with the Evaluation Edition exercising in order to feel quite comfortable with my knowledge.

For all of you preparing for this exam, I recommend going through all the SSIS components mentioned and exercising various concepts from the course curriculum. The theoretical basis is also important, but the main focus is on practical aspects of development in SSIS, so do your due diligence.

I still plan to complete the remaining two exams required to become MCSE: Data Platform this year, so hold on – I will keep posting about that.

Building a clustered test lab with Windows Server 2012 RC and SQL Server 2012 RTM Evaluation

I have already described what my test lab at home looks like; however, I find it interesting enough to provide details on how to set it up. This post will be the first in a series and will cover planning and preparing the environment. The next posts will be about:

  • installing Windows Server 2012 in a VHD and enabling the Hyper-V role
  • preparing a virtual machine template and setting up virtual machines
  • preparing the environment for setting up a failover cluster
  • installing SQL Server on a 2-node cluster

I am using concepts shown in a blog post by Jose Barreto describing this setup with Windows Server 2008 R2 and SQL Server 2008 R2. We will use Windows Server 2012 and SQL Server 2012, but the main idea is the same. To build this lab you need:

  • a PC which:
    • has a CPU that supports hardware virtualization (for Intel you want a CPU supporting VT-x technology, for AMD – AMD-V) – most non-low-end CPUs do
    • has minimum of 4 GB RAM (8 GB or more recommended)
    • has 50 GB free HDD space (up to you)
  • some software – all available for free:
    • Windows Server 2012 RC or Hyper-V Server 2012 RC installation media (I will use the “full” Windows Server as I didn’t do it using the command line/PowerShell in Hyper-V Server) – I don’t recommend using the VHD provided by Microsoft, since it expands from 8 to 40 GB during boot and you’d need lots more disk space;
    • SQL Server 2012 Evaluation – valid for 180 days, but that’s more than enough for a start (you might try Developer Edition if you have one – you don’t have to worry about expiration then)
  • the whole procedure takes about 8 hours (depending on your system speed, mainly I/O, so if you’re blessed with RAID 0 on SSD drives you might actually do it much faster) – be warned!

What we’re going to do is install a fresh Windows Server 2012 operating system in a VHD (so as not to mess with the existing setup – you may install on a physical disk if you don’t mind, performance-wise the difference is negligible). I assume that the main OS on the PC we’re using is Windows 7/Server 2008 R2 or newer, because XP does not support native boot from VHD (not sure about Vista/Server 2008, but does anyone use it anymore? :P). Within this newly installed Server 2012 we will enable the Hyper-V role and set up 4 virtual machines using one base template and four differencing disks. Then we will configure those machines like this:

  1. First machine is a domain controller and a DNS server – clustering requires a domain, and Active Directory requires DNS.
  2. Second machine is a storage controller – in the post by Jose (mentioned above) a separate download of iSCSI Target is required for Windows Server 2008 R2; however, it’s incorporated in Server 2012 – we don’t have to install anything, only enable additional OS features and configure them.
  3. Third machine is an active node of SQL Server failover cluster.
  4. Fourth machine is a passive node of SQL Server failover cluster.
  5. You may add more nodes to the cluster or consider setting up AlwaysOn.

This makes our lab completely standalone, so it can operate even if there is no connection to the outside. If it’s not required to have it this way – for example, you already have a domain controller – you may skip creation of the first VM. If you have a NAS/SAN, you may use it and skip the second machine. You may also consider combining the first two machines into one (if you’re low on resources and don’t want to spawn a horde of machines). AD services for a domain as “huge” as this one will not be very demanding – we’re definitely not going to have the storage workload impact the domain controller significantly. On the other hand, you may also try setting up additional machines to see how it works – I made some with System Center 2012 RC, but you can also build a load-balanced SharePoint farm with a clustered database engine.

When it comes to the network – no additional configuration will be done, as clustering in Windows Server 2008 removed the need for a dedicated heartbeat, so as long as our VMs see each other it will be fine (you may try to include a DHCP server in this setup, but remember to keep it contained so that it does not try to provide addresses in every network it may find). Just remember – a live system may not have exactly the same architecture, and whatever works for our homemade cluster may not be enough when it’s deployed into production (but hey, that’s a lab we’re building!).

When it comes to resource assignment for the VMs, I use the following pattern (I have a Phenom II X4 CPU and 16 GB in my rig):

  1. For domain controller – 2 CPUs and 768 – 1024 MB RAM.
  2. For storage server – 2 CPUs and 768 MB RAM.
  3. For SQL Servers – 4 CPUs and 2048 – 3072 MB RAM.
  4. Rest – whatever I feel like/need.

When I’m setting this up on a laptop where I have memory limitations (4 GB RAM), I cut down the first two VMs to 512 MB RAM and the SQL Servers to 1024–1280 MB. This keeps things operable, although it sacrifices a lot of performance. Therefore 8 GB RAM is recommended.

And this concludes our preparation. The next post will take us through host setup – we will install Windows Server 2012 in a VHD and make it bootable, so stay sharp!

SQL Server 2012 – 70-461 and 70-462 exams passed

Yesterday I found some interesting information on the Prometric site – I passed the 71-461 and 71-462 exams (they will count as the regular 70-461/462). I don’t know the exact scores, but I did it and I am really happy about it. I only wish I had taken more of the beta exams – I might as well be an MCSA/MCSE at the moment.
The results will appear on my transcript later, probably by the time I get the official confirmation e-mail from Microsoft, which takes up to a few weeks.
Time to prepare for 70-463 and then upgrade to MCSE: DP – I wonder if it’s possible to do it this way: upgrade my MCITP DBA from SQL Server 2005 to 2008 (70-453), then earn MCSA: SQL Server 2008 (70-448), pass 70-463 to get MCSA: SQL Server 2012 and finally upgrade the MCITP to MCSE: DP with 70-459.

SQL Server 2012 Upgrade Technical Guide white paper released

As with every SQL Server release, 2012 also has a white paper which covers in detail the process of upgrading to SQL Server 2012. I’m sure that by now a lot of people have already tried it in their labs, but it’s good to have a document that describes the whole scenario with its risks and possible caveats.

The document is available here. I strongly recommend reading it not only for the technical content (which is obviously useful to have in one place), but also for the project management-related information it contains. It also features ready-to-use checklists you might want to incorporate into your upgrade projects.

Contained databases in SQL Server 2012 overview

SQL Server 2012 features a lot of improvements in all of its aspects. Microsoft boasts of cloud readiness in all the new products – Windows Server, System Center and SQL Server alike. This post will describe one of the Azure-like features – contained databases.

A contained database, as stated in Books Online, is a database which is isolated from other databases and from the SQL Server instance. This means that the part of the metadata which is required for database operation is included in it. In SQL Server 2012 two containment models are introduced:

  • full containment – all the metadata exists in the database and there is no dependence on the SQL Server instance
  • partial containment – some of the metadata exists in the database and some at the instance level

At this moment SQL Server 2012 supports only partial containment, which also implies that uncontained entities are only identified, whereas in full mode containment could be enforced (see Aaron Bertrand’s post about containment). There are two ways of identifying uncontained entities in a database – the sys.dm_db_uncontained_entities DMV and the database_uncontained_usage Extended Event. Beware – both show only those entities which are identified at run time.
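A minimal sketch of enabling partial containment on an existing database and then checking it for uncontained entities – the database name is a placeholder:

```sql
-- Instance-level switch required before contained databases can authenticate users
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Convert an existing database to partial containment
ALTER DATABASE MyDatabase SET CONTAINMENT = PARTIAL;

-- List uncontained entities detected so far (run-time detection only,
-- so exercise your workload first)
USE MyDatabase;
SELECT * FROM sys.dm_db_uncontained_entities;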

The term closely related to containment level is the database boundary, which separates elements that exist in a database (called the database model) from those at the instance level (the management model). We can say that contained elements exist within a database, whereas uncontained elements cross the database boundary. Of course, being uncontained is transitive – if an entity references or depends on another one which is uncontained, it is uncontained as well.

There are several benefits of contained databases:

  • probably the most straightforward is easy movement of contained databases – since the metadata is stored inside the database, it is easier to move it around. Until now, you could do backup/restore, import/export or detach/copy/attach, but none of these methods takes care of login mappings, server-level security, linked servers, Database Mail settings or SQL Server Agent jobs;
  • following from the previous point is the usefulness for database mirroring or AlwaysOn failover – if a database is contained, all you have to do after failover is point your connection strings to the new location, because the database is already there with all the information required to operate
  • database development – a developer might not have all the details required to properly set up a database at the initial stage and may focus on the job at hand instead of worrying about all possible environmental impacts
  • simplified database administration – in certain cases a database owner is required to have elevated permissions in server-level roles; this may be mitigated with database containment

Of course, there is always a downside – since a database is supposed to be contained, collation dependence also had to be redesigned. For a non-contained database there are three collations to be considered:

  • DATABASE_DEFAULT – for user data (default)
  • tempdb collation – for temporary data (default)
  • instance default collation – for variables, cursor names and goto labels

In a contained database we are not supposed to rely on instance information or tempdb, hence the CATALOG_DEFAULT collation, which is used by system metadata and temporary objects in a database (by default equal to Latin1_General_100_CI_AS_WS_KS_SC). There are only two collations to be considered:

  • DATABASE_DEFAULT – for user data and temporary data (default)
  • CATALOG_DEFAULT – for metadata, temporary metadata, variables, cursor names and goto labels.

See more details in Books Online article on contained database collations.

Lastly, there are several limitations of contained databases which follow from separation from system databases and collation changes (taken from Books Online):

  • replication, change tracking and change data capture are not permitted
  • numbered procedures are not allowed
  • schema-bound objects that depend on built-in functions with collation changes
  • binding change resulting from collation changes
  • temporary stored procedures are currently permitted but they are marked for deprecation in future versions of contained databases

Wrapping it all up – database containment is an interesting option and it surely will be a hot subject in future versions of SQL Server. It will enhance database portability, but there will be some points to consider when implementing it (see collation and supported features). I for certain am looking forward to working with it.

Database mirroring rolling upgrade

I was testing database mirroring rolling upgrade in my lab when upgrading from SQL Server 2012 RC0 to RTM, shortly after RTM became available. One of the side results was proven incompatibility of internal database versions between RC0 and RTM of SQL Server 2012 (705 and 706, respectively). In this post I would like to focus on the main activity that was performed.

Imagine you have database mirroring with automatic failover implemented in your environment and you want to perform an upgrade of the server instances while minimizing possible downtime. It does not seem that challenging if you follow the right procedure:

  1. Disable witness.
  2. Perform maintenance on the mirror.
  3. Fail over.
  4. Since the former principal is now the mirror, perform maintenance on it.
  5. Fail over again.
  6. Enable witness.

If you look at it closely, you’ll notice close resemblance to failover cluster rolling upgrade, when you upgrade passive nodes one by one. In this case, steps 1 and 6 are there to ensure that no unexpected failover occurs that would disrupt the progress of upgrade.
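The steps above map to a handful of ALTER DATABASE commands – a sketch, with database and witness names as placeholders:

```sql
-- 1. Remove the witness for the duration of the upgrade
ALTER DATABASE MyDatabase SET WITNESS OFF;

-- 3./5. Manual failover - run on the current principal
ALTER DATABASE MyDatabase SET PARTNER FAILOVER;

-- 6. Re-add the witness when done
ALTER DATABASE MyDatabase SET WITNESS = 'TCP://witnessServer.domain.test:5022';
```

With multiple mirrored databases, each one needs its own set of these commands.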

Keep in mind that, as always with failover in database mirroring, it’s quick but there always is a downtime, whose length is determined by several factors:

  • length of REDO queue on the mirror;
  • processing power of the mirror server and the remaining workload it has to deal with;
  • SQL Server edition on the mirror.

Whereas the first two points are quite straightforward – failover requires roll-forward to complete, which is when the REDO queue is processed – the last one is connected to them indirectly, because the roll-forward process uses one thread in the following situations:

  • Standard Edition is installed or
  • Enterprise Edition is installed and the instance has up to 4 processor cores available

If you have Enterprise Edition installed and more than 4 cores are available for SQL Server, roll-forward uses one thread for every 4 cores the server utilizes.

In addition, you should remember to protect your data, which means not only doing your standard backup routine prior to maintenance, but also switching from high-performance mirroring to high-safety mode. This will help you ensure that all transactions on the principal have been delivered to the mirror, minimizing the risk of data loss. You should also check your database(s) for consistency before and after failovers.
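Switching between the mirroring modes is a one-liner per database, run on the principal (the database name is a placeholder):

```sql
-- High-performance -> high-safety (synchronous) before maintenance
ALTER DATABASE MyDatabase SET PARTNER SAFETY FULL;

-- Back to high-performance (asynchronous) afterwards
ALTER DATABASE MyDatabase SET PARTNER SAFETY OFF;
```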

This constitutes a detailed action plan like this (S1 is the principal, S2 is the mirror):

  1. Run DBCC CHECKDB on all databases on S1.
  2. Perform a full backup of all databases on S1.
  3. If running mirroring in high-performance mode, switch to high-safety.
  4. Remove witness from mirroring session.
  5. Perform upgrade on S2.
  6. Fail over all databases to S2.
  7. Run DBCC CHECKDB on all databases on S2.
  8. Perform upgrade on S1.
  9. Fail over all databases to S1.
  10. Run DBCC CHECKDB on all databases on S1.
  11. Perform full backup of all databases on S1.
  12. If mirroring was initially set to high-performance mode, switch the mirroring mode back.
  13. Upgrade witness (actually you can do it any time).
  14. Add witness to mirroring session.

I have done the upgrade from 2012 RC0 to RTM this way and it took me about 60 minutes on my lab PC at home. The application using those database servers is System Center Configuration Manager 2012 RC0 and I had no perceivable downtime during the upgrade.

The thing I still have to verify is – in case of an upgrade from SQL Server 2008 R2 to 2012, is it possible to have a witness on 2008 R2 with partners on 2012? If anyone knows, please share it with me.