Microsoft SQL Server 2022 is here! – What do we know so far

MS Ignite 2021 started yesterday, on November 2, and during the keynote Microsoft officially announced the SQL Server 2022 private preview. According to Microsoft, SQL Server 2022 is the most cloud-enabled release of SQL Server yet, and I think they are right.

It seems to me that the main focus points for the next major version are cloud integration, big data, and performance enhancements. I attended the SQLBits conference in 2019, where Bob Ward gave a keynote announcing SQL Server 2019, and I have to admit the announcements and focus points looked quite similar to me. Microsoft still pursues the goal of turning SQL Server from just a relational database into a hybrid meeting place where database engineers, business analysts, data engineers, and data scientists can connect and work on their tasks.

Based on the information that is publicly available at this moment, which is not abundant, I will list the most important features and enhancements that will ship with the next major version of SQL Server.

  1. Bi-directional HA/DR with Azure Managed Instance Link: link on-premises SQL Server 2022 with Azure SQL Managed Instance
  2. Built-In Query Intelligence: Parameter Sensitive Plan optimization & Query Store improvements
  3. Multi-write Peer-to-Peer Replication: a multi-write environment with automatic conflict resolution
  4. SQL Server Ledger: a blockchain-style ledger for auditing purposes
  5. Azure Synapse Link Integration
  6. Azure Purview Integration

We can group them into two main categories:

  • Cloud Integration: Bi-directional HA/DR with Azure Managed Instance Link, Azure Synapse Link integration, Azure Purview integration
  • Database Engine enhancements for performance, security, and availability: Built-In Query Intelligence, multi-write replication, and SQL Server Ledger.

Let’s take a closer look.

Bi-directional HA/DR with Azure Managed Instance Link

One of the biggest announcements is the Azure Managed Instance (MI) link. An Azure SQL Managed Instance can be attached to an on-premises SQL Server 2022 database thanks to the new Managed Instance link feature.

This works by using a built-in Distributed Availability Group (DAG) in the background to replicate data in near real time to a previously deployed Azure SQL Managed Instance acting as a DR replica. Then, whenever you need to initiate a manual failover to the cloud, you have two options: a planned failover (no data loss) or a forced failover (potential data loss).

Taken from Bob Ward’s Mechanics Video.

The good thing is, you don’t have to do any Always On configuration. All you need to do is connect your Azure account, use “Azure SQL Managed Instance Link” in SSMS, and choose your target Azure Managed Instance. It will seed your database to the cloud and deploy a Distributed Availability Group (DAG) automatically for you.

There are a few important things to note about this new feature. First, you can have a hybrid setup quite easily without actually migrating to the cloud. It used to be a hassle to migrate from on-premises to Azure Managed Instance, and migrating back from Azure Managed Instance to on-premises was not possible. Now, it is possible to restore a versionless database back to your on-premises environment.

Azure Managed Instance can be used as a disaster recovery node as well as a readable secondary replica. That means you can offload workloads to read-only secondaries on Azure to take advantage of a fully managed database platform, its performance, and its scale.

Failing over to Azure Managed Instance is quite straightforward using SSMS, but if you would like to fail back to on-premises, there is no magic button or GUI. You have to take a backup of the Azure MI database to Azure Blob Storage first and then restore it on-premises with a T-SQL command (either overwriting the existing database or restoring it as a new one). At least, that’s how Bob Ward did it in the introduction video. Maybe they will sort this out by next year.
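
For illustration, that failback path might look roughly like this in T-SQL. All names, the storage URL, the SAS token, and the file paths below are placeholders; note that user-initiated backups on Managed Instance must be COPY_ONLY:

```sql
-- On the Azure SQL Managed Instance: back up to blob storage
-- (credential name must match the container URL; SAS token is a placeholder)
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<SAS token>';

BACKUP DATABASE [MyDatabase]
    TO URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDatabase.bak'
    WITH COPY_ONLY;

-- On the on-premises SQL Server 2022 instance: restore from the same URL
-- (the same credential must exist there too)
RESTORE DATABASE [MyDatabase]
    FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDatabase.bak'
    WITH MOVE 'MyDatabase' TO 'E:\Data\MyDatabase.mdf',
         MOVE 'MyDatabase_log' TO 'E:\Log\MyDatabase_log.ldf';
```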

It’s worth mentioning that the feature has been released in a limited public preview; it is available for SQL Server 2019 CU13 and above, and for SQL Server 2022 in private preview at this time. Microsoft’s long-term plan is to enable link feature support for SQL Server 2016 and above.

Built-In Query Intelligence

Microsoft constantly tries to enhance Intelligent Query Processing (IQP) and Query Store in every new major version. SQL Server 2022 is no exception and here are the headlines:

  1. Parameter Sensitive Plan optimization: Oh boy, I am very excited about this one. Hopefully, this will address one of the most common performance issues, known as parameter sniffing. I don’t know how many times I have received a call from developers or the business side claiming that they didn’t change anything but the database is dreadfully slow all of a sudden.
    SQL Server caches the execution plan compiled for the first execution of a stored procedure. If that plan was built for a parameter value that returns a small result set, it will typically use a seek operator. If the same stored procedure is then called with a value that returns a large result set, performance degrades significantly because it runs on the wrong execution plan. SQL Server 2022 addresses this by caching multiple plans for the same stored procedure or parameterized SQL statement.
  2. Improvements to Cardinality Estimator and MAXDOP: Query Store will be used to create a feedback cycle to automatically adapt and resolve problems with common query patterns.
  3. New enhancements to memory grant feedback including persistence and new algorithms to smoothly handle memory grant fluctuations.
  4. Query Store is now on by default for new databases. I never understood why it wasn’t enabled until now; probably they were not confident enough in Query Store. No need to enable Query Store on every database anymore, thank you Microsoft!
  5. Query Store is now supported on read replicas, allowing performance analytics on your read workloads. This is quite significant for database engineers when it comes to tuning performance on readable secondary replicas, because it used to be impossible to monitor query activity there: secondary read replicas had no write access to the Query Store.
  6. Query Store hints to easily shape query plans with no code changes.
  7. T-SQL will support new JSON functions and time-series capabilities.
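
To illustrate the first point, here is the classic parameter sniffing pattern that Parameter Sensitive Plan optimization targets. The table, procedure, and data skew are made up for the sketch:

```sql
-- Hypothetical schema: dbo.Orders(CustomerId, ...) with skewed data.
-- Customer 1 has millions of rows; customer 2 has only a handful.
CREATE PROCEDURE dbo.GetOrders @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END
GO

-- Pre-2022 behavior: whichever call compiles first fixes the plan for everyone.
EXEC dbo.GetOrders @CustomerId = 2;  -- small set: a seek-style plan gets cached
EXEC dbo.GetOrders @CustomerId = 1;  -- reuses the seek plan and runs slowly

-- Classic workarounds before Parameter Sensitive Plan optimization:
EXEC sp_recompile N'dbo.GetOrders';  -- evict the cached plan
-- ...or add OPTION (RECOMPILE) / a Query Store hint to the statement
```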

Multi-write Replication with Automatic Conflict Resolution

Recently, it was announced in this blog post by Kevin Farlee that starting with SQL Server 2019 CU13, there is new functionality that facilitates peer-to-peer replication with conflict detection and resolution using a last-writer-wins strategy. This makes it possible to configure multi-write replication for the first time ever. A detailed technical explanation can be found here.

With SQL Server 2022, the last-writer-wins rule is automated. Now, when a conflict is detected, the row with the most recent modification time is chosen and persisted on all replicas.

SQL Ledger

As Microsoft states quite vocally, they focus on the cloud database first and introduce new features to Azure SQL before anywhere else. SQL Ledger is another feature that was introduced in Azure SQL Database first and then brought to SQL Server 2022. Details about SQL Ledger can be found here.

It is basically an audit record that uses immutable blockchain technology. That means even database engineers cannot change audit records, which provides more secure and reliable data for internal and external auditors and ensures that data cannot be tampered with.
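
As a sketch of what this looks like, here is the ledger table syntax Azure SQL Database uses today, which SQL Server 2022 inherits; the table names are made up and the SQL Server 2022 specifics may still change before release:

```sql
-- Append-only ledger table: rows can be inserted but never updated or deleted
CREATE TABLE dbo.AuditTrail
(
    EventId   BIGINT IDENTITY PRIMARY KEY,
    EventTime DATETIME2 NOT NULL,
    Actor     SYSNAME NOT NULL,
    Detail    NVARCHAR(4000) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));

-- Updatable ledger table: changes are allowed, but every row version is kept
-- in a history table and cryptographically chained into the database ledger
CREATE TABLE dbo.AccountBalance
(
    AccountId INT PRIMARY KEY,
    Balance   DECIMAL(19,4) NOT NULL
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountBalanceHistory),
      LEDGER = ON);
```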

Azure Synapse Link

What if I told you that you no longer need ETL pipelines to move data from your on-premises SQL Server to Azure Synapse? Azure Synapse Link for SQL Server 2022 removes the need for ETL to run analytics on your operational data. Azure Synapse Link provides automatic change feeds that capture the changes within SQL Server and send them to Azure Synapse Analytics automatically. Since it only captures the changes, and not the entire data set, it is much faster and provides near real-time analysis of operational systems.

Azure Purview Integration

Azure Purview is a cloud service that was recently announced as generally available. It is a unified data governance and management solution that helps you manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data.

Now, SQL Server has a direct integration with this cloud service. Through this integration you will be able to:

  • Automatically scan your on-premises SQL Server for free to capture metadata.
  • Classify data using built-in and custom classifiers and Microsoft Information Protection sensitivity labels.
  • Set up and control specific access rights to SQL Server.

Conclusion

SQL Server 2022 is finally here. It has several integrations with Azure that help customers design and implement hybrid environments better and more easily. It also brings great features and improvements to the core database engine. I can’t wait to get my hands on it.

I’d like to point out again that this is a private preview. So, if you are interested in exploring more, you can apply for the Early Adoption Program by filling in this form. Yet, be aware that completion of this questionnaire does not guarantee acceptance into the EAP.

Sources:

Microsoft Announcement Page: https://www.microsoft.com/en-us/sql-server/sql-server-2022

Microsoft Mechanics Video: https://www.youtube.com/watch?v=ncF-zFzBDAY

Bob Ward’s LinkedIn Post: https://www.linkedin.com/pulse/introducing-sql-server-2022-bob-ward/

Cloud Blogs post: https://cloudblogs.microsoft.com/sqlserver/2021/11/02/announcing-sql-server-2022-preview-azure-enabled-with-continued-performance-and-security-innovation/

Tech Community Post: https://techcommunity.microsoft.com/t5/azure-sql/managed-instance-link-connecting-sql-server-to-azure-reimagined/ba-p/2911614

TempDB files don’t exist/are not reachable anymore, what to do?

TempDB is one of the four (well, actually five with ‘mssqlsystemresource’, but that’s not the point of this blog post) system databases of SQL Server. It holds temporary user and system objects for various operations, such as temp tables or sorting operations.

It is a crucial item on the startup checklist of the SQL Server Database Engine service. At every restart, tempdb is recreated and starts empty to serve its role. Any problem during this creation operation will cause the database service to fail to start.

The Scenario

Let’s imagine a scenario where there are two tempdb data files defined, one on the E drive and the other on the F drive. What happens if we lose the F drive? It won’t crash the instance, but you will start seeing alerts in the SQL Agent logs such as:

“The operating system returned the error ‘2(The system cannot find the file specified.)’ while attempting ‘GetVolumePathName’ on ‘F:\temp2.ndf’ at ‘storagedmv.cpp'(443).”

Clearly, SQL Server is trying to access the defined tempdb data file but it doesn’t exist anymore, so it fails.

The first idea that comes to mind is to remove those file(s) and we should be okay. But you can’t just remove them, because SQL Server can’t remove a file that is not empty.

USE [tempdb]
GO
ALTER DATABASE [tempdb]  REMOVE FILE [temp2]
GO

Msg 5042, Level 16, State 1, Line 3
The file ‘temp2’ cannot be removed because it is not empty.

Then how about emptying the file first and then removing it? No, that will not work either, because we can’t access the file anymore.

USE [tempdb]
GO
DBCC SHRINKFILE (N'temp2', EMPTYFILE)
GO

Msg 823, Level 24, State 2, Line 3
The operating system returned error 21(The device is not ready.) to SQL Server during a read at offset 0x000001f3402000 in file ‘F:\temp2.ndf’. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

The fix

We have to tell SQL Server that we lost these files and that, during the next startup, it should create them on another available disk. How do we do that? By giving a new path for the tempdb database files using the ALTER DATABASE command.

USE [tempdb]
GO
ALTER DATABASE [tempdb] MODIFY FILE 
(name = temp2, filename = 'E:\temp2.ndf')--New available path
GO

The file “temp2” has been modified in the system catalog. The new path will be used the next time the database is started.

As can be seen in the output, SQL Server updated its system catalog and will try to create this data file on the given path at the next startup.

We can also verify this change by querying the sys.master_files system view:

SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

As long as there is enough space on that drive, SQL should start without any problem and create tempdb files as configured.

If there is not enough space, then it will create the tempdb files based on the model database configuration. This will also be logged in the error log file and SQL Agent log as:

Message
MODIFY FILE encountered operating system error 112(There is not enough space on the disk.) while attempting to expand the physical file 'E:\temp2.ndf'.

SQL Server without tempdb?

One day you may want or need to start SQL Server without tempdb, maybe because the tempdb drive is corrupt or unreachable and SQL Server doesn’t start successfully. Then you can start the Database Engine service in minimal configuration mode with the -f command-line parameter, either from the SQL Server Configuration Manager tool by setting the service properties, or from the command line:

SQLServr.exe -f

Alternatively, you can use trace flag 3608, which starts only the master database:

SQLServr.exe -T3608

After bringing the service up, you can connect to it using the SQLCMD command-line tool and change the paths of the tempdb files to a suitable folder. Then we can stop the database service and restart it, and tempdb will be created automatically on the new path without any issue.
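
As a rough sketch, the whole sequence could look like this from a command prompt; the instance name and the target file path are placeholders:

```shell
:: 1. Start the engine in minimal configuration mode (as above)
SQLServr.exe -f

:: 2. From another prompt, repoint the tempdb file with sqlcmd
sqlcmd -S localhost -E -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, FILENAME = 'E:\temp2.ndf');"

:: 3. Stop and restart the service normally; tempdb is recreated on the new path
```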

Wait, Who is taking backup of SQL Server database?

If you are a production DBA and have already spent enough years in the field, this question is probably very familiar to you.

Most of the time you will be alerted that differential backups are failing, or, if you are monitoring backup history (I hope you do), you will notice extra backups in your records.

It is crucial to identify and stop the secondary process that backs up your databases, to ensure your backup maintenance plan executes successfully and restores can be performed without error if needed in the future. Otherwise, the backup chain will be broken, restores will fail, and no DBA should find themselves in such a difficult position. Thus, you need to act ASAP and stop the pirate backup takers!

What/Who is taking these mysterious backups?

We realized that there is another process backing up our databases, so it’s time to identify it.

We will query the msdb backup history tables to get a clear picture of the backup details:

SELECT
    database_name,
    user_name,
    server_name,
    backup_start_date,
    backup_finish_date,
    CASE bs.type
        WHEN 'D' THEN 'Full'
        WHEN 'I' THEN 'Diff'
        WHEN 'L' THEN 'Log'
    END AS [Backup Type],
    bs.is_snapshot,
    physical_device_name,
    CASE device_type
        WHEN 2 THEN 'Disk'
        WHEN 5 THEN 'Tape'
        WHEN 7 THEN 'Virtual Device'
        WHEN 9 THEN 'Azure Storage'
    END AS [Device Type]
FROM msdb.dbo.backupmediafamily AS bmf
INNER JOIN msdb.dbo.backupset AS bs
    ON bmf.media_set_id = bs.media_set_id
WHERE backup_start_date >= DATEADD(DAY, -7, GETDATE())
ORDER BY backup_set_id DESC

This query will give you a nice overview of all the backups performed in the last 7 days, assuming the secondary backup process was identified quickly; you can adjust the time window based on your case and needs.

You can see which user initiated the backups, the backup type, backup path, device type, whether it was a snapshot, and so on. This should provide adequate information to resolve most cases, but if you need more, go to the bottom of the post and refer to the Microsoft documentation to see what more you can get out of these system tables.

There are various possibilities about who/what may take backups but I’d like to give one specific case I recently experienced.

GUID values shown as physical device name, what is this?

After I noticed that a new process was taking backups, I immediately queried the backup history and tried to locate the backup files. First, I saw that the physical_device_name column showed GUID values. This may be due to:

  • Third party backup software taking backups to a tape
  • VM / Snapshot backup

The device type was a virtual device and the backups were running under the Local System account. The server name was the local database host. Additionally, is_snapshot was 1, as I assumed, so this clearly indicates that there is an external process/service/application on the database host taking snapshot backups using VSS.
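
To double-check that deduction, you can narrow the backup history down to snapshot backups taken over a virtual device; a sketch:

```sql
-- Only snapshot backups written through a virtual device (VDI/VSS)
SELECT bs.database_name,
       bs.user_name,
       bs.backup_start_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.is_snapshot = 1
  AND bmf.device_type = 7   -- 7 = virtual device
ORDER BY bs.backup_start_date DESC;
```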

To prove my point, I started a trace and captured the backup events. The application name was “Microsoft SQL Server VSS Writer”, so the theory was validated. This could be a Hyper-V / VMware backup or a third-party tool such as NetBackup, Tivoli, Arcserve and so on.

Surprise, it is a Security Application!

In my case, it was none of the above-mentioned processes. It was a security application installed on the database servers, and it had decided to take full backups every 4 hours.

There is an effective command that lists VSS writer services. Open a command line and execute:

vssadmin list writers

That’s how we identified the security application. After stopping the security application’s services on the database hosts, the backups stopped and the issue was resolved.

If you have time to read more

For more details, refer to the Microsoft documentation:

backupmediafamily (Transact-SQL) – SQL Server | Microsoft Docs

backupset (Transact-SQL) – SQL Server | Microsoft Docs

SQL Server Back up Applications – Volume Shadow Copy Service (VSS) and SQL writer – SQL Server | Microsoft Docs

Microsoft Announced a Vulnerability (CVE-2021-1636) and Released a Security Fix

Two days ago, Microsoft released a security fix for ALL SQL Server versions to mitigate the risks of a privilege escalation vulnerability that leverages Extended Events.

According to Microsoft website:

How can an attacker exploit this vulnerability? An authenticated attacker can send data over a network to an affected SQL Server when configured to run an Extended Event session.

https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2021-1636

For obvious reasons, they haven’t shared a lot of information about the vulnerability yet. Make sure to plan this for your SQL Server environment and patch all SQL Servers as soon as possible!

Download Links:

  • SQL Server 2019 CU8: https://www.microsoft.com/en-us/download/details.aspx?id=102617
  • SQL Server 2019 RTM: https://www.microsoft.com/en-us/download/details.aspx?id=100442
  • SQL Server 2017 CU22: https://www.microsoft.com/en-us/download/details.aspx?id=102619
  • SQL Server 2017 RTM: https://www.microsoft.com/en-us/download/details.aspx?id=102620
  • SQL Server 2016 SP2 CU15: https://www.microsoft.com/en-us/download/details.aspx?id=102621
  • SQL Server 2016 SP2: https://www.microsoft.com/en-us/download/details.aspx?id=102622
  • SQL Server 2014 SP3 CU4: https://www.microsoft.com/en-us/download/details.aspx?id=102623
  • SQL Server 2014 SP3: https://www.microsoft.com/en-us/download/details.aspx?id=102624
  • SQL Server 2012 SP4: https://www.microsoft.com/en-us/download/details.aspx?id=102625

Make sure to download the correct patch file for your version.

SQL Server XML Conversion Error: “Conversion of one or more characters from XML to target collation impossible”

A quick and short post this time, about a SQL Server error thrown when you try to convert XML characters. The XML column contains a Unicode character that cannot be converted to the target non-Unicode collation, so the engine is not happy about it.

Let’s say you have a “Table1” which stores a “XMLColumn” and you’d like to query on that column with LIKE operator. Hence, you convert it to varchar data type first.

SELECT XMLColumn FROM Table1
WHERE CONVERT(varchar(max), XMLColumn) LIKE '%ARollingStone%'

Then you’ll get the error below:

Msg 6355, Level 16, State 1, Line 2
Conversion of one or more characters from XML to target collation impossible

To fix this, simply change varchar to nvarchar. That’s it.

SELECT XMLColumn FROM Table1
WHERE CONVERT(nvarchar(max), XMLColumn) LIKE N'%ARollingStone%'

Query Store stopped working after Windows Update on Database Host

This is not going to be a long, detailed blog post. I would like to share an issue with Query Store that I experienced today after Windows Updates on SQL Server hosts, and a quick fix for it.

Average Wait time is too high on SQL Server

I was doing a sanity check after the Windows updates and noticed severe wait times on a server, caused by a specific database. This very database has a problematic SELECT query that confuses the optimizer into choosing a wrong execution plan. Query Store helped me address the issue in the past: I forced the right plan and moved on.

I immediately went after the same query and realized it was the one causing the high wait times and lock issues. I checked Query Store and everything looked fine. My forced plan was still there and, according to the Query Store statistics, the average execution time for the problematic query was 10 milliseconds. However, I could see the query taking 10-15 seconds by checking system DMVs. Obviously the query was not running with the right execution plan. So how can that be, and what’s wrong here?

Query Store stopped collecting data

It took some time for me to realize that the Query Store statistics stopped around 7 AM, coinciding with the Windows Update window. For some reason, Query Store had stopped working and collecting data. That’s why it was not reflecting what was currently going on, and I was blind.

I checked the disk usage and the Query Store space; both were fine. Then I tried to unforce and re-force the same plan; it didn’t help either. I couldn’t restart the instance or the database, so I needed something else to provide a quick fix.

FIX – Purge Query Store Data!

I assumed, correctly as it turned out, that purging the query data would internally trigger a mechanism to restart Query Store, and decided to go for it. Voilà! The right execution plan was picked up right away by the query optimizer, all locks were released, and database performance went back to normal. I am still not sure why Query Store stopped working after the server reboots for Windows updates; identifying the relation and the underlying reason would require a proper deep-dive investigation.

You can purge the query data by right-clicking on the database, going into “Properties” and then jumping to the “Query Store” tab. There you will see a button at the bottom, as shown in the image below.

Purge Query Data!
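
If you prefer T-SQL over the GUI, the same purge can be done with a single statement; the database name is a placeholder:

```sql
-- Clears all Query Store data for the database, like the GUI button
ALTER DATABASE [MyDatabase] SET QUERY_STORE CLEAR;
```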

Adding DB to Always On Availability Group “Password required”

It is just one of those days when you are confidently getting ready to add another database to an Always On Availability Group, just like you have done countless times since 2014. Maybe you are working on a version upgrade project and have just finished a migration from an earlier version to a newer one, like me. You open SSMS and launch the “Add Database” wizard, but there is a strange error you have never seen before. The wizard says it is not possible to add the database to the Always On Availability Group because a “Password required”. But what password? You start to wonder and get confused, because this database is not encrypted; at least, not on your watch!

What is it Doc?

I’d like to explain what is going on behind the curtain for the curious, passionate database administrator before providing the easy and quick solution. If you just want to fix it and move on, you can jump to the last part.

Before moving forward, I’d like to explain my scenario. I migrated a database, which was already in an availability group, from SQL Server 2014 to 2016 and tried to add it to an availability group; then I saw this error. At that moment, I didn’t know that the database was encrypted.

Now, let’s understand why we are required to enter a password to be able to make the database highly available. When you click on the “Password required” hyperlink, it shows a dialog which clearly states that this database has some keys which need to be opened via a password:
“This database is encrypted by database master key, you need to provide valid password when adding it to the availability group.”

That’s right. The database is encrypted. But how and when?

After checking carefully, I noticed that the application database has had a symmetric key since the big bang. Okay, cool, but this database was added to an availability group on SQL Server 2014 without any issue, so why is it an issue now?

Because Microsoft introduced a new check step when adding a database to an availability group in SSMS 2016: ‘Checking password of the database master key’. It turns out this is a known issue with SSMS when you migrate a database to a new server and choose the “Full database and log backup” method for initial synchronization. According to this blog, you may skip it by choosing another initialization method.

That’s why it was just fine on SQL Server 2014 but suddenly not possible in SQL Server 2016. It was not SQL version but Management Studio version that was causing this issue.
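
If you want to check up front whether a database will trip this wizard step, the metadata shows it; the database name below is a placeholder:

```sql
-- Does this database have a master key?
USE [MyDatabase];
SELECT name, create_date
FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##';

-- Server-wide: which databases have a master key protected by the SMK
SELECT name, is_master_key_encrypted_by_server
FROM sys.databases;
```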

For those who like to reproduce issues

Here is a code block for you to reproduce this issue on a test box.

--First let's create an Empty Test DB to reproduce the scenario
CREATE DATABASE EncryptedDB;
GO
--Now create master key under database
USE [EncryptedDB];
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Db@RuL3z!'
GO

--Take a full backup to meet requirements
BACKUP DATABASE [EncryptedDB] TO  DISK = N'\\BackupPath\EncryptedDB.bak' ;
GO

--Then open "Add Database" wizard under Always On High availability

Just tell me how to fix this

To solve this problem and go back to our peaceful, calm daily DBA life (heavy sarcasm), you simply have two options.

  • Provide the password for database key and click “Refresh”
  • Drop the key

If you choose the first approach, finding where to put the password may be harder than finding the password itself, since it is not very obvious in the wizard. There is a vague, hidden text box under the “Password” column where you can enter the password, as shown in the screenshot below.

Don’t forget to hit Refresh after filling it!

If you prefer the latter, then just drop it. Here is the MS documentation for dropping a master key.

USE [EncryptedDB];
GO
DROP MASTER KEY; 
GO

Until next weirdness, au revoir!

Implement Transparent Data Encryption(TDE) in SQL Server 2019 Standard Edition

The most striking change in the feature set of SQL Server 2019 Standard Edition is by far Transparent Data Encryption (TDE) support. That means you don’t have to pay for an Enterprise license just to secure your data at rest. What does “at rest” mean? I will explain in a bit.

A Standard Edition license is much cheaper (around 4 times) than an Enterprise Edition license. Naturally, it lacks numerous cool features like Always On Availability Groups (Standard only has Basic Availability Groups), online page restore, and online index create and rebuild, to name a few. With every new version of SQL Server, Microsoft adds a couple of features to the Standard Edition set; this year it is TDE. GDPR and information security are very hot topics nowadays, so this looks like a nice and neat strategy from Microsoft. Enough talk, let’s get down to business.

What is TDE?

TDE is an encryption mechanism that secures data at rest, meaning it encrypts data on the physical disk. The encryption is transparent to the client, so they don’t have to install a new driver or change their code base. It is physical security, which means your data is still vulnerable in the buffer cache, in network traffic and so on. Nevertheless, in case of a burglary in your data center, the thieves will be sad when they realize your data is encrypted on disk. Encryption of the database files is performed at the page level: pages are encrypted before they are written to disk and decrypted when read into memory.

TDE secures data in:

  • Database Data Files
  • Database Transactional Log Files
  • Database Snapshots
  • Database Backups
  • TempDB

TDE doesn’t deal with:

  • SQL Dump Files
  • FILESTREAM data
  • Buffer Pool Extension(BPE)

TempDB may be a surprise for you. Yes, TempDB will be encrypted if any database on the instance is encrypted with TDE. According to Microsoft, this may cause performance degradation for unencrypted databases on the same server.
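
A quick way to see which databases on the instance are encrypted, including tempdb, is a query like this:

```sql
-- is_encrypted comes from sys.databases; encryption_state from the DMV
-- (3 = encrypted, 2 = encryption in progress, 1 = unencrypted)
SELECT d.name,
       d.is_encrypted,
       k.encryption_state
FROM sys.databases AS d
LEFT JOIN sys.dm_database_encryption_keys AS k
    ON d.database_id = k.database_id;
```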

How does it work?

It is all about hierarchy.

  1. The Windows Data Protection API (DPAPI) protects the Service Master Key (SMK).
  2. The SMK protects the Database Master Key (DMK).
  3. The DMK protects all objects in that database, such as certificates, keys, etc.
  4. A certificate in the master database protects the Database Encryption Key (DEK).
  5. The DEK encrypts all pages in the database that is the candidate for TDE.
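
You can see the bottom of this hierarchy in the metadata; as a sketch, this shows which certificate protects each DEK (it requires VIEW SERVER STATE permission):

```sql
-- Match each database encryption key to its protecting certificate in master
SELECT DB_NAME(k.database_id) AS database_name,
       k.encryptor_type,                 -- e.g. CERTIFICATE
       c.name AS certificate_name
FROM sys.dm_database_encryption_keys AS k
LEFT JOIN master.sys.certificates AS c
    ON k.encryptor_thumbprint = c.thumbprint;
```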

How to setup TDE?

I’d like to list steps first and explain them in detail one by one.

  1. Create Database Master Key
  2. Create Certificate
  3. Backup Certificate
  4. Create Database Encryption Key
  5. Enable Encryption

That’s it. Pretty straightforward and easy steps to implement TDE.

Lets have a closer look

Understanding how each step works and why it is needed is crucial for performing administrative tasks on databases, especially with encryption technologies. SQL Server encryption terms are not very clear and are a bit confusing for a lot of people, so I will explain each of them in detail.

1. Create Database Master Key(DMK)

First of all, this has nothing to do with the master database. The DMK is a symmetric key which protects other keys, like a King of Kings. You have to have a DMK to be able to encrypt all the encryptable objects within the database, such as:

  • Symmetric Keys
  • Asymmetric Keys
  • Certificates

A DMK can be created for any database, but for TDE, you need a DMK in the master database. The DMK is protected both with a password and with the SMK: a copy of the DMK is encrypted by the Service Master Key (SMK) so that, whenever the database is used, the service can decrypt the DMK without us entering the password.

PS: For a super secure environment, you can break this link between the SMK and the DMK so that the only way to open and use the encrypted database is for a DBA to enter the password. That is a very rare case, so I’m not going to discuss it here.

Service Master Key:

SMK is the root of SQL Server Encryption Hierarchy and it is created during SQL Server Setup and protected by Windows Data Protection API(DPAPI).

Command:

USE master;
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'VeryStrongPassword';
GO

2. Create Certificate

Now that we have a DMK in the master database, we are able to create keys and certificates and encrypt them using the DMK.

A certificate is a digitally signed security object that contains a public (and optionally a private) key for SQL Server. We are going to use a Certificate to protect Database Encryption Key(DEK) in next step.

Why do we need a certificate? Why don’t we use the DMK to secure the DEK directly?

Because when we want to migrate our database to a new server, we would have to migrate our DMK too, since it is required for decryption. If the new server already uses TDE, there will be a problem, because the new server already has its own DMK. There are additional reasons, but this is the main one.

Command:

USE master;
GO

CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE Certification';
GO

3. Backup Certificate

This is super important because if you lose your certificate, you lose your data, period.

That’s why the best practice is to take a backup immediately after you create it.

BACKUP CERTIFICATE TDECert
TO FILE = 'D:\BK\MyTDECert.cer'
WITH PRIVATE KEY
(
    FILE = 'D:\BK\CertPrivateKey.key',
    ENCRYPTION BY PASSWORD = 'VeryStrongPassword'
);
GO

We are safe now. If we need to restore or migrate our database to a new server, we need the following things to access the data:

  • Backup of the database
  • Certificate file
  • Private Key file
  • Encryption Password
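On the destination server, those pieces come together like this. A sketch, assuming the file paths above and a database named TDETest; the backup file name is an assumption:

```sql
USE master;
-- The new server needs its own DMK (skip this if it already has one).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'NewServerStrongPassword';
GO
-- Recreate the certificate from the backup files taken earlier.
CREATE CERTIFICATE TDECert
FROM FILE = 'D:\BK\MyTDECert.cer'
WITH PRIVATE KEY (
    FILE = 'D:\BK\CertPrivateKey.key',
    DECRYPTION BY PASSWORD = 'VeryStrongPassword'
);
GO
-- Now the encrypted database backup can be restored as usual.
RESTORE DATABASE TDETest FROM DISK = 'D:\BK\TDETest.bak';
```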

4. Create Database Encryption Key

It is time to create our main key, the Database Encryption Key. The certificate in the master database will protect the DEK, and the DEK will encrypt the data on every page.

You can have only one DEK per database. It is stored in the database itself, which means it moves with the database. That's why it is not included in the list of required items above.

The T-SQL command is pretty easy to grasp. I recommend using the AES_256 algorithm.

Command:

CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;

5. Enable Encryption

The last step is turning encryption on for the database. This starts a background thread that encrypts the entire database using the Database Encryption Key. In my experience, the percent_complete column in the DMV is not very precise, especially with large databases, but it gives you an idea.

TDE Scan

I am glad to announce a new feature we have in SQL Server 2019: TDE Scan. We can now suspend and resume the TDE encryption scan, as described here. During encryption, you can track the progress using the dynamic management view sys.dm_database_encryption_keys.

Commands:

ALTER DATABASE TDETest SET ENCRYPTION ON;
GO

-- Track progress
SELECT db_name(database_id) AS db_name, percent_complete, * FROM sys.dm_database_encryption_keys;
GO

-- Suspend / resume the TDE scan (SQL Server 2019+)
ALTER DATABASE TDETest SET ENCRYPTION SUSPEND;

ALTER DATABASE TDETest SET ENCRYPTION RESUME;

The database is now encrypted at the physical layer.
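To confirm, the same DMV exposes an encryption_state column; a quick check (state 3 means fully encrypted, state 2 means the scan is still in progress):

```sql
SELECT DB_NAME(database_id) AS db_name,
       encryption_state,   -- 2 = encryption in progress, 3 = encrypted
       percent_complete,
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;
```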

Summary

Let’s go over all one more time.

We have data stored in pages at the physical layer. These pages are encrypted using the Database Encryption Key. The DEK is a symmetric key, which means the same key encrypts and decrypts the data. SQL Server uses it while accessing data.

The DEK is secured by the certificate, and the certificate is encrypted using the Database Master Key. Both of them live in the master database. The DMK is encrypted using the Service Master Key, and finally the SMK is secured using Windows DPAPI.

Don't forget that database backups will also be encrypted. You will need the certificate and its private key (protected by the backup password) to be able to restore the database on another server.

Upgrading SQL Server 2019 from CTP to RTM version: Is it even possible?

We have been waiting so long for this moment. SQL Server 2019 RTM is finally out! Maybe Microsoft should have announced it as SQL Server 2020, considering that only one month is left until the new year. Anyway, better late than never.

As I stated, it took so long that we did not just refresh the Microsoft downloads website and wait with open arms; we also wanted to do something in the meantime. That's why we followed the early releases and installed SQL Server 2019 CTP 3.0 Evaluation Edition to get things going while waiting for RTM. I told the business partners that this was an Evaluation Edition which would expire in 6 months, took the sign-off and started waiting for the RTM release.

Now that the time has come, I am asking this question:

Is it possible to upgrade SQL Server 2019 CTP 3.0 version to RTM version?

Looks like it is not. 

I read the Release Notes for SQL Server 2019; there is no information about upgrading from a CTP or RC version. Then I kept searching on the internet and came across this documentation page, which clearly states that they only support upgrades from Developer Edition and Release Candidates, with a special condition of being an Early Adopter Program member: Supported Version and Edition Upgrades for SQL Server 2019

I decided to give it a shot and launched Installation Wizard. First I tried “Upgrade from a previous version” option.

UpgradeWizard1UpgradeWizard2

Unfortunately, that didn't work, because the installer detects the already installed CTP version and forces you to create a new named instance, which is not what we want. So it is a dead end.

Then it occurred to me that this is also an Edition Upgrade, from Enterprise Evaluation to Enterprise. At this point, I had nothing to lose. I took a backup of the databases and server objects and accepted the fact that I would have to uninstall the CTP and start with a clean install of RTM. Thus I decided to try the Edition Upgrade as well.

UpgradeWizard3UpgradeWizard4UpgradeWizard5UpgradeWizard6

Obviously, this didn't work either. They may say I failed, but no one can say I didn't try.

I removed everything that had '2019' in it from Programs and Features, restarted the server and finally installed a brand new SQL Server 2019 RTM Enterprise Edition.

During installation, I noticed two new tabs in the "Database Engine Configuration" section: MaxDOP and Memory, as you can see below.

NewTabsNewTabMemory

It's good to have recommended memory adjustments in the wizard; it means I no longer need the piece of code in my installation script where I set these values manually.
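For reference, this is roughly the snippet those new wizard tabs make redundant. The values below are illustrative placeholders, not recommendations:

```sql
-- Post-install configuration, now handled by the setup wizard's
-- MaxDOP and Memory tabs. Size these for your server's cores and RAM.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'max server memory (MB)', 28672; -- leave headroom for the OS
RECONFIGURE;
```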

Memory settings are crucial for the database engine, so Microsoft wants to make sure that you know what you are doing. If you don't tick the box at the bottom, the GUI gives an error:

NewTabMemory2

The rest is just usual installation wizard steps.

To sum up, it is not possible to upgrade directly from a CTP or RC version to RTM. You have to uninstall everything first and then perform a fresh installation.

I understand why this is not supported by Microsoft, but I would have expected some unofficial documentation or blog posts about this upgrade path. It looks like no one even bothered trying or documenting it.

Always On Availability Group with Mixed SQL Server versions

Upgrades and migrations are a natural part of a database administrator's life. If you look at the release cycle of SQL Server, Microsoft reduced it from roughly four years (2000, 2005, 2008, 2012) to about two (2012, 2014, 2016, 2017 and 2019). We even have 2016 and 2017 as back-to-back versions. That means almost biennial upgrade and migration projects for database administrators.

We have all been there and know how terrible this process can be: trying to identify all the applications connecting to a consolidated server; talking to application specialists and system engineers to point their configuration files to the new server; legacy applications that don't allow using new features, or that no one wants to touch because of hidden configs nobody can locate. All those hassles bring back bad memories.

Starting with SQL Server 2012, AlwaysOn Availability Groups (AG) came along as a High Availability and Disaster Recovery solution. It was an enhanced version of the Database Mirroring technology, but it allowed people to create groups of databases that can automatically fail over together. Moreover, the AG Listener receives all client connection requests and directs them to the appropriate availability replica. No need to update DNS records in case of a failover anymore! It has tons of other features, such as readable secondaries and read-only routing lists. Most companies loved this technology and used it as the primary HA solution for their SQL Server landscape.

What is going to happen when we have to migrate from SQL Server 2012 to a higher version? Of course, you can build a new environment with the new SQL Server version, let's say 2016, and migrate applications one by one in a timely fashion. But what about when you don't have that luxury and have to do it all in one go? Then you start wondering:

"Can I use AlwaysOn technology for this? Can I add new SQL 2016 servers as secondaries and fail over the entire server in a minute?"

yeswecan

Achtung!

Before jumping into the pool of joy, there are a few things to keep in mind with this approach:

  • Microsoft says this is only supported in a rolling upgrade scenario, and the mixed-version state shouldn't be kept for a long time. Upgrading Always On Availability Groups
  • Secondary servers with higher versions cannot be read. They will have a "Synchronized / In Recovery" status, which I will show below.
  • Once you fail over to the new servers with higher versions, there is no way back. By design, replicating data from a higher version to a lower one is not permitted. I guess the only reason the reverse direction (lower to higher) is allowed at all is to support this kind of upgrade scenario.

So Capua, shall I begin?

Capua

Setting up Simulation Environment

I am going to perform this test with 4 virtual machines to mimic the most common 2-node Always On Availability Group setup.

I will not cover every step in detail, such as how to create an availability group via the wizard or T-SQL scripts, because that deserves its own blog post. Instead, I am going to list the steps:

  1. Install SQL Server 2014 on Server A and Server B, then apply latest SP and CU
  2. Install SQL Server 2016 on Server C and Server D, then apply latest SP and CU
  3. Create WSFC and Add Server A and Server B as Nodes
  4. Enable Always On feature on both Server A and B
  5. Restart Database Engine Service on both Server A and B
  6. Create Always On Availability Group with a Listener
  7. Add a database into AG

At this point, we have the same setup as the current production environment: an up-and-running 2-node AG in synchronous mode. Now it is time to add the new servers to the equation and start the real testing.

To add them as secondary replicas to the existing AG, we need to enable the AG feature, and that is only possible if those servers have been added as nodes to the WSFC. Thus, first add them to the cluster.

WSFC

After enabling the AlwaysOn feature and restarting the SQL services, I added them as secondary replicas to the AG without any problem.

SSMS
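The same step can be scripted. A sketch of adding one of the 2016 servers as a synchronous secondary; the endpoint URL and domain name are assumptions based on this test setup:

```sql
-- Run on the current primary replica.
ALTER AVAILABILITY GROUP [NL-SQLDAGT-AG1]
ADD REPLICA ON 'NL-SQL16-T1'
WITH (
    ENDPOINT_URL = 'TCP://NL-SQL16-T1.domain.local:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL
);

-- Then, on the new secondary itself:
-- ALTER AVAILABILITY GROUP [NL-SQLDAGT-AG1] JOIN;
```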

Let's add the Tutorial DB to the AG. I usually don't use the wizard for this kind of operation, for plenty of valid reasons, and I recommend you avoid it too. If you do use the wizard, at least grab the script on the last screen and close the GUI. Even that is better.

AG
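In T-SQL, the operation boils down to the following sketch (the backup share path is an assumption; the wizard's generated script looks similar):

```sql
-- On the primary: add the database to the availability group.
ALTER AVAILABILITY GROUP [NL-SQLDAGT-AG1] ADD DATABASE [Tutorial];

-- On each secondary: restore the full and log backups WITH NORECOVERY first,
-- e.g. RESTORE DATABASE [Tutorial] FROM DISK = '\\share\Tutorial.bak' WITH NORECOVERY;
-- then join the database to the AG.
ALTER DATABASE [Tutorial] SET HADR AVAILABILITY GROUP = [NL-SQLDAGT-AG1];
```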

At this point, I noticed that the 2016 secondary replicas are synchronized but not readable, as stated earlier. Now, let's try to fail over to 2016 to finish this simulation.

:Connect NL-SQL16-T1

ALTER AVAILABILITY GROUP [NL-SQLDAGT-AG1] FAILOVER;

GO

As soon as the failover completes, the 2014 replicas become "Not Synchronizing", as seen below. As Wolfsheim once sang, "es gibt kein Weg zurück", which means there is no way back.

AG2

I truly believe this is a great feature for covering corner cases. On the other hand, I would rather migrate applications one by one in a more controlled way than move them all in one go. After all the years I have worked as a DBA, one thing I learned the hard way is "Never put all your eggs in one basket".