Microsoft SQL Server 2022 is here! – What do we know so far

MS Ignite 2021 started yesterday, on November 2, and during the keynote Microsoft officially announced the SQL Server 2022 private preview. According to Microsoft, SQL Server 2022 is the most cloud-enabled release of SQL Server yet, and I think they are right.

It seems to me that the main focus points for the next major version are Cloud Integration, Big Data and Performance Enhancements. I attended the SQLBits conference in 2019, where Bob Ward gave a keynote speech announcing SQL Server 2019, and I have to admit that the announcements and focus points looked quite similar to me. Microsoft still pursues the goal of converting SQL Server from just a relational database into a hybrid meeting place where database engineers, business analysts, data engineers and data scientists can connect and work on their tasks.

Based on the information that is available to the public at this moment, which is not abundant, I will list the most important features and enhancements that will ship with the next major version of SQL Server.

  1. Bi-directional HA/DR with Azure Managed Instance Link: Link On-Premise SQL Server 2022 with Azure Managed Instance
  2. Built-In Query Intelligence: Parameter sensitive Plan Optimization & Query Store Improvements
  3. Multi-write Peer-to-Peer Replication: A multi-write environment with automatic conflict resolution
  4. SQL Server Ledger: A Blockchain ledger for auditing purposes
  5. Azure Synapse Link Integration
  6. Azure Purview Integration

We can group them into two main categories:

  • Cloud Integration: Bi-directional HA/DR with Azure Managed Instance Link, Azure Synapse Link Integration, Azure Purview Integration
  • Enhancements to the Database Engine regarding Performance, Security and Availability: Built-In Query Intelligence, Multi-write Replication and SQL Server Ledger.

Let’s take a closer look.

Bi-directional HA/DR with Azure Managed Instance Link

One of the biggest pieces of news is the Azure Managed Instance (MI) link. An Azure Managed Instance can be attached to an on-premise SQL Server 2022 database thanks to the new Managed Instance link feature.

This works by using a built-in Distributed Availability Group (DAG) in the background to replicate data (near real time) to a previously deployed Azure SQL Managed Instance as a DR replica site. Then, whenever you need to, you can initiate a manual failover to the cloud, with two options: a planned failover (no data loss) or a forced failover (potential data loss).

Taken from Bob Ward’s Mechanics Video.

The good thing is, you don’t have to do any Always On configuration. All you need to do is connect your Azure account, use “Azure SQL Managed Instance Link” in SSMS and choose your target Azure Managed Instance. It will seed your database to the cloud and deploy a Distributed Availability Group (DAG) for you automatically.

There are a few important things about this new feature. First, you can have a hybrid setup quite easily without actually migrating to the cloud. It used to be a hassle to migrate from on-premise to Azure Managed Instance, and migrating back from Azure Managed Instance to on-premise was not possible at all. Now, it is possible to restore a versionless database back to your on-prem environment.

Azure Managed Instance can be used as a Disaster Recovery node as well as a readable secondary replica. That means you can offload workloads to read-only secondaries on Azure to take advantage of a fully managed database platform, performance, and scale.

Failing over to Azure Managed Instance is quite straightforward using SSMS, but if you would like to fail back to on-premise, there is no magic button or GUI. You have to take a backup of the Azure MI database to Azure Blob Storage first and then restore it on premise with a T-SQL command (either overwriting the old database or restoring as a new one). At least, that’s how Bob Ward did it in the introduction video. Maybe they will sort this out by next year.
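For reference, the failback Bob Ward demonstrated boils down to two T-SQL statements. This is only a sketch with hypothetical names, and it assumes a credential with a SAS token for the storage container already exists on both instances:

-- Run on the managed instance; MI only allows COPY_ONLY native backups
BACKUP DATABASE [MyDatabase]
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
WITH COPY_ONLY;

-- Run on the on-premise SQL Server 2022 instance;
-- omit REPLACE and change the name/MOVE clauses to restore side by side instead
RESTORE DATABASE [MyDatabase]
FROM URL = N'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
WITH REPLACE;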

It’s worth mentioning that the feature has been released in a limited public preview; at this time it is available for SQL Server 2019 with CU13 and above, and for SQL Server 2022 in private preview. Microsoft’s long-term plan is to enable link support for SQL Server 2016 and above.

Built-In Query Intelligence

Microsoft constantly tries to enhance Intelligent Query Processing (IQP) and Query Store in every new major version. SQL Server 2022 is no exception and here are the headlines:

  1. Parameter Sensitive Plan optimization: Oh boy, I am very excited about this one. Hopefully, this will address one of the most common performance issues, called parameter sniffing. I don’t know how many times I have received a call from developers or the business side claiming that they didn’t change anything but the database is dreadfully slow all of a sudden.
    SQL Server caches the execution plan compiled for the first execution of a stored procedure. If that plan is built for a value that returns a small result set, it will use a seek operator. If the same stored procedure is later called with a value that returns a big result set, performance degrades significantly because the query runs with the wrong execution plan (see the sketch after this list). SQL Server 2022 will address this by caching multiple plans for the same stored procedure or parameterized SQL statement.
  2. Improvements to Cardinality Estimator and MAXDOP: Query Store will be used to create a feedback cycle to automatically adapt and resolve problems with common query patterns.
  3. New enhancements to memory grant feedback including persistence and new algorithms to smoothly handle memory grant fluctuations.
  4. Query Store is now on by default for new databases. I never understood why it wasn’t enabled until now; probably they were not confident enough with QS. No need to enable Query Store on every database anymore, thank you Microsoft!
  5. Query Store is now supported on read replicas, allowing performance analytics on your read workloads. This is quite significant for database engineers when it comes to tuning performance on readable secondary replicas, because it was previously not possible to monitor query activity on them due to not having write access to the Query Store.
  6. Query Store hints to easily shape query plans with no code changes.
  7. T-SQL will support new JSON functions and time-series capabilities.
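To make item 1 concrete, here is the classic shape of the problem, sketched with a hypothetical Orders table where CustomerId 1 has millions of rows and CustomerId 2 only a handful:

CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
GO

EXEC dbo.GetOrders @CustomerId = 2;  -- first call: plan compiled and cached for a tiny result set (seek)
EXEC dbo.GetOrders @CustomerId = 1;  -- reuses the cached seek plan for millions of rows: dreadfully slow

Until now the workarounds were OPTION (RECOMPILE), plan guides or forcing a plan via Query Store; Parameter Sensitive Plan optimization should make these unnecessary by keeping multiple plan variants for the same statement. Regarding item 6, Query Store hints are applied through a stored procedure; in Azure SQL Database, where the feature shipped first, it looks like this (the query ID is a placeholder):

EXEC sys.sp_query_store_set_hints @query_id = 42, @query_hints = N'OPTION(RECOMPILE)';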

Multi-write Replication with Automatic Conflict Resolution

Recently, it was announced in a blog post by Kevin Farlee that starting with SQL Server 2019 CU13, there is new functionality that facilitates peer-to-peer replication with conflict detection and resolution using a last-writer-wins strategy. This makes it possible to configure multi-write replication for the first time ever. A detailed technical explanation can be found here.

With SQL Server 2022, the last-writer-wins rule is going to be automated. Now, when a conflict is detected, the modification with the most recent time will be chosen and persisted on all replicas.
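For background, conflict detection in peer-to-peer replication is configured on the publication. A sketch of what enabling the last-writer-wins policy looks like per the CU13 announcement (the parameter names should be verified against your build’s documentation, and a real publication needs more options than shown here):

EXEC sp_addpublication
    @publication = N'P2P_Publication',
    @enabled_for_p2p = N'true',
    @p2p_conflictdetection = N'true',
    @p2p_conflictdetection_policy = N'lastwriter',  -- last writer wins
    @p2p_continue_onconflict = N'true';             -- keep replicating after a conflict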

SQL Ledger

As Microsoft says quite vocally, their focus is on the cloud, and new features are introduced in Azure SQL first. SQL Ledger is another feature that was introduced in Azure SQL Database first and is now inherited by SQL Server 2022. Details about SQL Ledger can be found here.

It is basically an audit record built on immutable, blockchain-style ledger technology. That means even database engineers cannot change audit records, which provides more secure and reliable data for internal and external auditors and ensures that the data cannot be tampered with.
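The syntax, at least as it exists in Azure SQL Database today (from which SQL Server 2022 inherits the feature), is a simple table option. A sketch with a hypothetical table:

-- Append-only ledger table: rows can be inserted but never updated or deleted,
-- and every row is protected by a cryptographic hash chain
CREATE TABLE dbo.AuditEvents
(
    EventId   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    EventTime datetime2 NOT NULL,
    Detail    nvarchar(400) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));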

Azure Synapse Link

What if I told you that there is no need for ETL pipelines anymore to move data from your on-premise SQL Server to Azure Synapse? Azure Synapse Link for SQL Server 2022 removes the need for ETL to run analytics on your operational data. It provides automatic change feeds that capture changes within SQL Server and send them to Azure Synapse Analytics automatically. Since it only captures the changes, not the entire data set, it is much faster and provides near real-time analysis of operational systems.

Azure Purview Integration

Azure Purview is a cloud service that was recently announced as generally available. It is a unified data governance and management solution that helps you manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data.

Now, SQL Server has a direct integration with this cloud service. Through this integration you will be able to:

  • Automatically scan your on-premises SQL Server for free to capture metadata.
  • Classify data using built-in and custom classifiers and Microsoft Information Protection sensitivity labels.
  • Set up and control specific access rights to SQL Server.

Conclusion

SQL Server 2022 is finally here. It has several integrations with Azure that help customers design and implement hybrid environments better and more easily. It also brings great features and improvements to the core database engine. I can’t wait to get my hands on it.

I’d like to point out again that this is a private preview. So, if you are interested in exploring more, you can apply for the Early Adoption Program by filling in this form. Be aware, though, that completing the questionnaire does not guarantee acceptance into the EAP.

Sources:

Microsoft Announcement Page: https://www.microsoft.com/en-us/sql-server/sql-server-2022

Microsoft Mechanics Video: https://www.youtube.com/watch?v=ncF-zFzBDAY

Bob Ward’s LinkedIn Post: https://www.linkedin.com/pulse/introducing-sql-server-2022-bob-ward/

Cloud Blogs post: https://cloudblogs.microsoft.com/sqlserver/2021/11/02/announcing-sql-server-2022-preview-azure-enabled-with-continued-performance-and-security-innovation/

Tech Community Post: https://techcommunity.microsoft.com/t5/azure-sql/managed-instance-link-connecting-sql-server-to-azure-reimagined/ba-p/2911614

TempDB files don’t exist/are not reachable anymore, what to do?

TempDB is one of the four (well, actually five with ‘mssqlsystemresource’, but that’s not the point of this blog post) system databases of SQL Server; it holds temporary user and system objects for various operations, such as temp tables or sort operations.

It is a crucial item on the startup checklist of the SQL Server Database Engine service. At every restart, tempdb is recreated and starts empty to serve its role. Any problem you may face during this creation will cause the database service to fail to start.

The Scenario

Let’s imagine a scenario where two tempdb data files are defined, one on the E drive and the other on the F drive. What happens if we lose the F drive? It won’t crash the instance, but you will start seeing alerts in the SQL Agent logs such as:

“The operating system returned the error ‘2(The system cannot find the file specified.)’ while attempting ‘GetVolumePathName’ on ‘F:\temp2.ndf’ at ‘storagedmv.cpp'(443).”

Clearly, SQL Server is trying to access the defined tempdb data file, but it doesn’t exist anymore, so the call fails.

The first idea that comes to mind is to remove those file(s) and be done with it. But you can’t just remove them, because SQL Server can’t remove a file that is not empty.

USE [tempdb]
GO
ALTER DATABASE [tempdb] REMOVE FILE [temp2]
GO

Msg 5042, Level 16, State 1, Line 3
The file ‘temp2’ cannot be removed because it is not empty.

Then how about emptying the file first and removing it afterwards? No, that will not work either, because we can’t access the file anymore.

USE [tempdb]
GO
DBCC SHRINKFILE (N'temp2', EMPTYFILE)
GO

Msg 823, Level 24, State 2, Line 3
The operating system returned error 21(The device is not ready.) to SQL Server during a read at offset 0x000001f3402000 in file ‘F:\temp2.ndf’. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

The fix

We have to tell SQL Server that we lost these files so that, during the next startup, it creates them on another available disk. How do we do that? By giving a new path for the tempdb database files using the ALTER DATABASE command.

USE [tempdb]
GO
ALTER DATABASE [tempdb] MODIFY FILE 
(name = temp2, filename = 'E:\temp2.ndf')--New available path
GO

The file “temp2” has been modified in the system catalog. The new path will be used the next time the database is started.

As can be seen in the result output, SQL Server updated its system catalog and will try to create this data file on the given new path.

We can also verify this change by querying the sys.master_files catalog view:

SELECT name, physical_name FROM sys.master_files WHERE database_id = 2 -- database_id 2 is always tempdb

As long as there is enough space on that drive, SQL Server should start without any problem and create the tempdb files as configured.

If there is not enough space, it will create the tempdb files based on the model database configuration. This will also be logged in the error log and the SQL Agent log as:

Message
MODIFY FILE encountered operating system error 112(There is not enough space on the disk.) while attempting to expand the physical file 'E:\temp2.ndf'.

SQL Server without tempdb?

One day you may want or need to start SQL Server without tempdb, maybe because the tempdb drive is corrupt or unreachable and SQL Server doesn’t start successfully. In that case you can start the Database Engine service in minimal configuration mode with the -f command-line parameter, either from the SQL Server Configuration Manager tool (by editing the service’s startup parameters) or from the command line:

sqlservr.exe -f

Alternatively, you can use trace flag 3608, which starts only the master database:

sqlservr.exe -T3608

After bringing the service up, you can connect to it using the sqlcmd command-line tool and change the paths of the tempdb files to a suitable folder. Then stop the service and start it normally; tempdb will be created automatically on the new path without any issue.
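Putting that last step into T-SQL, here is a minimal sketch. It assumes the default logical file names (verify yours with the sys.master_files query from earlier) and a folder that already exists on the new drive:

-- connect with: sqlcmd -S YourServer -E  (adjust server name and authentication)
ALTER DATABASE [tempdb] MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLTemp\tempdb.mdf');
ALTER DATABASE [tempdb] MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLTemp\templog.ldf');
GO
-- stop the minimal-configuration instance, start the service normally,
-- and tempdb will be recreated on the new path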

Wait, who is taking backups of my SQL Server databases?

If you are a production DBA and have already spent enough years in the field, this question is probably very familiar to you.

Most of the time you will be alerted that differential backups are failing, or, if you are monitoring backup history (I hope you are), you will notice extra backups in your records.

It is crucial to identify and stop any secondary process that takes backups of your databases, to ensure that your backup maintenance plan executes successfully and that restores can be performed without error if needed in the future. Otherwise, the backup chain will be broken, restores will fail, and no DBA should find themselves in such a difficult position. Thus, you need to act asap and stop the pirate backup takers!

What/Who is taking these mysterious backups?

We have realized that another process is backing up our databases, so it’s time to identify it.

We will query the backup history tables in msdb to get a clear picture of the backup details:

SELECT
    database_name,
    user_name,
    server_name,
    backup_start_date,
    backup_finish_date,
    CASE msdb.dbo.backupset.type
        WHEN 'D' THEN 'Full'
        WHEN 'I' THEN 'Diff'
        WHEN 'L' THEN 'Log'
    END AS [Backup Type],
    msdb.dbo.backupset.is_snapshot,
    physical_device_name,
    CASE device_type
        WHEN 2 THEN 'Disk'
        WHEN 5 THEN 'Tape'
        WHEN 7 THEN 'Virtual Device'
        WHEN 9 THEN 'Azure Storage'
    END AS [Device Type]
FROM msdb.dbo.backupmediafamily
INNER JOIN msdb.dbo.backupset
    ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
WHERE CONVERT(datetime, backup_start_date, 102) >= GETDATE() - 7
ORDER BY backup_set_id DESC

This query will give you a nice overview of all backups performed in the last 7 days (assuming the rogue backup process was noticed quickly); you can adjust the time window based on your case and needs.

You can see which user initiated the backups, the backup type, the backup path, the device type, whether it was a snapshot, and so on. This should provide adequate information to resolve most cases, but if you need more, go to the bottom of the post and refer to the Microsoft documentation to see what more you can get out of these system tables.

There are various possibilities as to who or what may be taking backups, but I’d like to describe one specific case I recently experienced.

GUID values shown as physical device name, what is this?

After I noticed that a new process was taking backups, I immediately queried the backup history tables and tried to locate the backup files. First, I saw that the physical device name column showed GUID values. This may be due to:

  • Third party backup software taking backups to a tape
  • VM / Snapshot backup

The device type was a virtual device and the backups were running under the Local System account. The server name was the local database host. Additionally, “is_snapshot” was 1, as I assumed, so this clearly indicates that there is an external process/service/application on the database host that takes snapshot backups using VSS.
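If you want to zero in on these entries, the history query from earlier can be narrowed down; a quick sketch:

SELECT bs.database_name,
       bs.backup_start_date,
       bs.user_name,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.is_snapshot = 1    -- VSS snapshot backups only
  AND bmf.device_type = 7   -- virtual device
ORDER BY bs.backup_start_date DESC;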

To prove my point, I started a trace and captured the backup events. The application name was “Microsoft SQL Server VSS Writer”, so the theory was validated. This can be a Hyper-V / VMware backup or a third-party tool such as NetBackup, Tivoli, Arcserve and so on.

Surprise, it is a Security Application!

In my case, it was none of the above-mentioned processes; it was a security application installed on the database servers, and it had decided to take full backups every 4 hours.

There is a handy command that lists the VSS writer services. Open a command line and execute:

vssadmin list writers

That’s how we identified the security application. After stopping the security application’s services on the database hosts, the backups stopped and the issue was resolved.

If you have time to read more

For more details, refer to the Microsoft documentation:

backupmediafamily (Transact-SQL) – SQL Server | Microsoft Docs

backupset (Transact-SQL) – SQL Server | Microsoft Docs

SQL Server Back up Applications – Volume Shadow Copy Service (VSS) and SQL writer – SQL Server | Microsoft Docs

Microsoft Announced a Vulnerability (CVE-2021-1636) and Released a Security Fix

Two days ago, Microsoft released a security fix for ALL SQL Server versions to mitigate the risk of a privilege escalation vulnerability that leverages Extended Events.

According to the Microsoft website:

How can an attacker exploit this vulnerability? An authenticated attacker can send data over a network to an affected SQL Server when configured to run an Extended Event session.

https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2021-1636

For obvious reasons, they haven’t shared a lot of information about the vulnerability yet. Make sure to plan this in your SQL Server environment and patch all SQL Servers as soon as possible!

Download Links:

SQL Server 2019 CU8: https://www.microsoft.com/en-us/download/details.aspx?id=102617
SQL Server 2019 RTM: https://www.microsoft.com/en-us/download/details.aspx?id=100442
SQL Server 2017 CU22: https://www.microsoft.com/en-us/download/details.aspx?id=102619
SQL Server 2017 RTM: https://www.microsoft.com/en-us/download/details.aspx?id=102620
SQL Server 2016 SP2 CU15: https://www.microsoft.com/en-us/download/details.aspx?id=102621
SQL Server 2016 SP2: https://www.microsoft.com/en-us/download/details.aspx?id=102622
SQL Server 2014 SP3 CU4: https://www.microsoft.com/en-us/download/details.aspx?id=102623
SQL Server 2014 SP3: https://www.microsoft.com/en-us/download/details.aspx?id=102624
SQL Server 2012 SP4: https://www.microsoft.com/en-us/download/details.aspx?id=102625
Make sure to download the correct patch file for your version and patch level.

Disable/Enable export data option for Power BI report visuals

I wrote two blog posts in the past about similar goals for PowerBI Report Server; they can be found here:

Disable Download Button on PowerBI Report Server: https://bahtisametcoban.home.blog/2019/07/15/disable-download-button-in-power-bi-report-server/

Disable Print and Export Data Buttons for PowerBI Report Server:

https://bahtisametcoban.home.blog/2020/06/04/disable-print-and-export-data-buttons-for-powerbi-report-server/

This time, we would like to control the option to export data from PowerBI report visuals.

In the PowerBI Service, this can be achieved by changing the configuration in “Tenant Settings”. For an on-premise PowerBI Report Server, you need to connect to the Reporting Services instance using Management Studio.

 Steps:

  1. Open SSMS, choose “Reporting Services” for Server Type and connect to the Reporting Services instance as shown in Figure 2. You may also use the Web Portal/Service URL to connect to it.
  2. Right-click on Servername in Object Explorer and click Properties.
  3. Click “Advanced” Tab
  4. Under “Other” section, Set “EnablePowerBIReportExportData” to “False” to Disable data export. Set it to “True” to Enable data export.

WARNING: This field is not a dropdown list but a simple text box, which can be edited directly!

Uninstall SQL Server Reporting Services 2019

First and foremost: remove and decommission unused or idle services without delay! They consume resources such as storage, computing power, URL reservations et cetera. Nothing is free.

I wouldn’t normally write a blog post about removing SSRS. However, SQL Server Reporting Services is a separate product now, starting from SQL Server 2017. That means a few things:

  • It is not in SQL Server Installation Media anymore (Like SSMS)
  • It is a separate service now under Services
  • Hence, SSRS is an individual item under “Control Panel – Programs and Features”

Before SQL Server 2017, we used to remove components like SSRS, SSIS and SSAS by clicking the “Microsoft SQL Server 20XX (version, bitness)” item, but now, as described above, you need to find and choose “Microsoft SQL Server Reporting Services”, then click “Uninstall/Change”. The rest is Microsoft’s new interface and just one click.

SQL Server XML Conversion Error: “Conversion of one or more characters from XML to target collation impossible”

A quick and short post this time about a SQL Server error thrown when you try to convert XML characters. The XML column holds a Unicode character that cannot be converted to ASCII, so the engine is not happy about it.

Let’s say you have a “Table1” which stores an “XMLColumn”, and you’d like to query that column with the LIKE operator. Hence, you convert it to the varchar data type first.

SELECT XMLColumn FROM Table1
WHERE CONVERT(varchar(max), XMLColumn) LIKE '%ARollingStone%'

Then you’ll get the error below:

Msg 6355, Level 16, State 1, Line 2
Conversion of one or more characters from XML to target collation impossible

To fix this, you simply need to change varchar to nvarchar, that’s it.

SELECT XMLColumn FROM Table1
WHERE CONVERT(nvarchar(max), XMLColumn) LIKE '%ARollingStone%'
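If you want to see the behavior end to end, here is a small self-contained repro sketch, assuming a server collation whose code page cannot represent the character (e.g. a Latin1 collation):

DECLARE @x xml = N'<note>ARollingStone Ω</note>';  -- Ω is not representable in the Latin1 code page
SELECT CONVERT(varchar(max), @x);   -- fails with Msg 6355
SELECT CONVERT(nvarchar(max), @x);  -- works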

Query Store stopped working after Windows Update on Database Host

Firstly, this is not going to be a long, detailed blog post. I would like to share an issue with Query Store that I experienced today after Windows Updates on SQL Server hosts, and a quick fix for it.

Average Wait time is too high on SQL Server

I was doing a sanity check after the Windows updates and noticed severe wait times on a server, caused by a specific database. This very database has a problematic SELECT query that confuses the optimizer into choosing a wrong execution plan. Query Store helped me address the issue in the past: I forced the right plan and moved on.

I immediately went after the same query and realized it was the one causing the high wait times and lock issues. I checked Query Store and everything looked fine. My forced plan was still there and, according to Query Store statistics, the average execution time of the problematic query was 10 milliseconds. However, I could see from the system DMVs that the query was taking 10-15 seconds. Obviously the query was not running with the right execution plan. So how can that be, and what’s wrong here?

Query Store stopped collecting data

It took me some time to realize that the Query Store statistics stopped around 7 AM, coinciding with the Windows Update window. For some reason, Query Store had stopped working and collecting data. That’s why it was not reflecting what was currently going on, and I was blind.

I checked the disk usage and the Query Store space; both were fine. Then I tried to unforce and force the same plan, but that didn’t help either. I couldn’t restart the instance or the database, so I needed something else to provide a quick fix.
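(For reference, un-forcing and re-forcing a plan is done with the Query Store procedures below; the IDs are placeholders you would look up in sys.query_store_plan.)

EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 420;
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 420;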

FIX – Purge Query Store Data!

I correctly assumed that purging the Query Store data would internally trigger a mechanism that restarts Query Store, and decided to go for it. Voilà! The right execution plan was picked up right away by the Query Optimizer, all locks were released and database performance went back to normal. I am still not sure why Query Store stopped working after the server rebooted for Windows updates; identifying the relation and the underlying reason would require a proper deep-dive investigation.

You can purge the query data by right-clicking the database, going into “Properties” and then jumping to the “Query Store” tab. There you will see a button at the bottom, as shown in the image below.

Purge Query Data!
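If you prefer T-SQL over the GUI, the same purge can be done with a one-liner; the database name below is a placeholder:

ALTER DATABASE [YourDatabase] SET QUERY_STORE CLEAR;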

Disable Print and Export Data Buttons for PowerBI Report Server

Here we are again with another Power BI Report Server post. Last time I explained how to disable the Download button in this post. This time, we will take a look at how to disable the Print button and the Export Data options.

Disable Print Button

Just like we did for the download button, this is possible by changing server properties. Before disabling it, let me show you the “Print Button”.

 Steps:

  1. Open SSMS, choose “Reporting Services” for Server Type and connect to the Reporting Services instance as shown in Figure 2. You may also use the Web Portal/Service URL to connect to it.
  2. Right-click on Servername in Object Explorer and click Properties.
  3. Click “Advanced” Tab
  4. Under the “Other” section, set “EnableClientPrinting” to False.
It is a dropdown menu; simply click and change it from “True” to “False”.

PS: You don’t need to restart the Power BI Report Server service or your server. Just refresh the page.

Voilà! Now it is gone.

Disable Export Data Options

This one is a bit more complex. There isn’t a simple switch we can flip to hide the “Export” dropdown menu. Again, let me show you this button.

I spent quite some time looking for a way to disable this cute floppy disk button, but I couldn’t find one. Nonetheless, there is another, nastier method: hiding the options by editing the Reporting Services config file.

 Steps:

  1. Locate your configuration file “RSReportServer.config”. Normally it is under your installation path, i.e. “C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer”
  2. Open it with any text editor (WordPad, Notepad et cetera) and search for “<Render” [see screenshot #1 below]. This is the code section we are interested in. You will see various <Extension> objects under the <Render> parent tag.
  3. Add ‘ Visible=”false” ‘ at the end of each <Extension> object [see screenshot #2 below]
  4. Save the changes and close configuration file
  5. Restart the Power BI Report Server service from the Services console
Find Render section
Change Visible attribute to hide export data options

You can also copy the code below if you are too lazy.

    <Render>
      <Extension Name="WORDOPENXML" Type="Microsoft.ReportingServices.Rendering.WordRenderer.WordOpenXmlRenderer.WordOpenXmlDocumentRenderer,Microsoft.ReportingServices.WordRendering" Visible="false" />
      <Extension Name="WORD" Type="Microsoft.ReportingServices.Rendering.WordRenderer.WordDocumentRenderer,Microsoft.ReportingServices.WordRendering" Visible="false" />
      <Extension Name="EXCELOPENXML" Type="Microsoft.ReportingServices.Rendering.ExcelOpenXmlRenderer.ExcelOpenXmlRenderer,Microsoft.ReportingServices.ExcelRendering" Visible="false" />
      <Extension Name="EXCEL" Type="Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer,Microsoft.ReportingServices.ExcelRendering" Visible="false" />
      <Extension Name="PPTX" Type="Microsoft.ReportingServices.Rendering.PowerPointRendering.PptxRenderingExtension,Microsoft.ReportingServices.PowerPointRendering" Visible="false" />
      <Extension Name="PDF" Type="Microsoft.ReportingServices.Rendering.ImageRenderer.PDFRenderer,Microsoft.ReportingServices.ImageRendering" Visible="false" />
      <Extension Name="IMAGE" Type="Microsoft.ReportingServices.Rendering.ImageRenderer.ImageRenderer,Microsoft.ReportingServices.ImageRendering" Visible="false" />
      <Extension Name="MHTML" Type="Microsoft.ReportingServices.Rendering.HtmlRenderer.MHtmlRenderingExtension,Microsoft.ReportingServices.HtmlRendering" Visible="false">
        <Configuration>
          <DeviceInfo>
            <DataVisualizationFitSizing>Approximate</DataVisualizationFitSizing>
          </DeviceInfo>
        </Configuration>
      </Extension>
      <Extension Name="CSV" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering" Visible="false" />
      <Extension Name="XML" Type="Microsoft.ReportingServices.Rendering.DataRenderer.XmlDataReport,Microsoft.ReportingServices.DataRendering" Visible="false" />
      <Extension Name="ATOM" Type="Microsoft.ReportingServices.Rendering.DataRenderer.AtomDataReport,Microsoft.ReportingServices.DataRendering" Visible="false" />
      <Extension Name="NULL" Type="Microsoft.ReportingServices.Rendering.NullRenderer.NullReport,Microsoft.ReportingServices.NullRendering" Visible="false" />
      <Extension Name="RGDI" Type="Microsoft.ReportingServices.Rendering.ImageRenderer.RGDIRenderer,Microsoft.ReportingServices.ImageRendering" Visible="false" />
      <Extension Name="HTML4.0" Type="Microsoft.ReportingServices.Rendering.HtmlRenderer.Html40RenderingExtension,Microsoft.ReportingServices.HtmlRendering" Visible="false">
        <Configuration>
          <DeviceInfo>
            <DataVisualizationFitSizing>Approximate</DataVisualizationFitSizing>
          </DeviceInfo>
        </Configuration>
      </Extension>
      <Extension Name="HTML5" Type="Microsoft.ReportingServices.Rendering.HtmlRenderer.Html5RenderingExtension,Microsoft.ReportingServices.HtmlRendering" Visible="false">
        <Configuration>
          <DeviceInfo>
            <DataVisualizationFitSizing>Approximate</DataVisualizationFitSizing>
          </DeviceInfo>
        </Configuration>
      </Extension>
      <Extension Name="RPL" Type="Microsoft.ReportingServices.Rendering.RPLRendering.RPLRenderer,Microsoft.ReportingServices.RPLRendering" Visible="false" LogAllExecutionRequests="false" />
    </Render>

After restarting the “Power BI Report Server” service and refreshing your report, you will see that the button is still there, but there is no dropdown menu anymore since we disabled all the options.

Until next challenge, stay safe.

PowerBI Reporting Services: How to get a previous version of an overwritten report

It is 2020 and Microsoft still hasn’t given us version control for reports; I invite you to take a moment of silence for this nonsense.

You are working on your fancy PowerBI report to impress management (okay, yourself) and listening to “Chemical Brothers-Galvanise” because you are just a cool kid. You added new features, and it is time to update the production report and publish it on PowerBI Report Server. You are feeling good, the chorus is coming, and your finger is on the button. You can’t wait anymore and “PUSH THAT BUTTON!“. Oops, you did it again. You accidentally overwrote your report with an older version and are now wondering how you are going to get it back. You are confident that somewhere on the internet someone must have blogged about this; after all, you can’t be the only one! Well, you are an oracle.

I will give 2 PowerShell scripts for 2 different scenarios:

1- Rebuild a single RDL file: You are only interested in recovering one single RDL file

2- Rebuild all files (RDL, PBIX, RDS)

You need to perform a few steps for the first scenario and adjust them according to your needs. I will explain the general idea and the important tables, to convey the logic to readers with no prior experience in this matter.

Preparation

First of all, since you published the report and it was persisted to the database, you have to find an old backup of the Reporting Services database and restore it under a different name.

USE [master]
RESTORE DATABASE [REPORTSERVER_DATABASE_NAME_OLD]
FROM DISK = N'D:\REPORTSERVER_DATABASE_NAME.bak'
WITH FILE = 1, STATS = 5,
     MOVE N'LOGICAL_DATA_FILE_NAME' TO N'D:\REPORTSERVER_DATABASE_NAME_OLD.mdf', -- check logical names with RESTORE FILELISTONLY
     MOVE N'LOGICAL_LOG_FILE_NAME' TO N'D:\REPORTSERVER_DATABASE_NAME_OLD.ldf'
GO

Now we have our desired version of the report in the database.

Reports are stored in your PowerBI Report Server database on the Microsoft SQL Server instance, in the [Catalog] and [CatalogItemExtendedContent] tables. Feel free to check both tables’ designs and explore.
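If you want a quick look at what is inside, here is a sketch of an exploration query; the type-to-extension mapping below is the same one used in the PowerShell scripts later in this post:

SELECT ItemID,
       [Path],
       [Type],
       CASE [Type]
            WHEN 2  THEN 'Report (.rdl)'
            WHEN 5  THEN 'Data source (.rds)'
            WHEN 8  THEN 'Shared dataset (.rsd)'
            WHEN 11 THEN 'KPI (.kpi)'
            WHEN 13 THEN 'Power BI report (.pbix)'
       END AS TypeName
FROM dbo.[Catalog];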

Scenario 1

If you are working on scenario 1, you first have to identify your report’s unique item identifier and then build the SQL query that retrieves from the database tables the information necessary to rebuild it.

We will note down the [Catalog].[ItemID] of the report to use it as a unique filter in the PowerShell script.

SELECT [Catalog].[Path]
     , [Catalog].[ItemID]
FROM dbo.[Catalog]

This query will return the data we need to rebuild the specific RDL report:

SELECT [Catalog].[Path]
     , [Catalog].[Type]
     , ISNULL([CatalogItemExtendedContent].ContentType, 'SSRS') AS ContentType
     , CONVERT(varbinary(max), [Catalog].[Content]) AS RDL_BinaryContent
FROM dbo.[Catalog]
LEFT OUTER JOIN dbo.[CatalogItemExtendedContent]
    ON [Catalog].ItemID = [CatalogItemExtendedContent].ItemId
WHERE [Catalog].ItemID = 'YOUR_REPORTS_ITEM_ID'

We are now ready to use our PowerShell magic. You have to fill in 4 variables in the PS script:

  • SQL Server Instance Name
  • Reporting Service Database Name
  • Windows File Path to Export Files
  • Report Item Identifier

Be aware: the account running this code needs SELECT permission on the ReportServer database!

# Set the variables!
[string] $server   = "$SQLINSTANCENAME";    
[string] $database = "$REPORTSERVER_DATABASE_NAME";       
[string] $folder   = "$EXPORT_PATH";  
  
# Extract data from report server database
$sql = "SELECT	[Catalog].[Path]
        ,[Catalog].[Type]
	,ISNULL([CatalogItemExtendedContent].ContentType,'SSRS') as ContentType
        ,CONVERT(varbinary(max), [Catalog].[Content]) AS RDL_BinaryContent
FROM dbo.[Catalog] 
LEFT OUTER JOIN dbo.[CatalogItemExtendedContent] 
ON [Catalog].ItemID = [CatalogItemExtendedContent].ItemId
WHERE [Catalog].ItemID ='YOUR_REPORTS_ITEM_ID'";		

Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process started");
 
# Create Database Connection Object
# This script uses Windows (Kerberos) Authentication with the executing user's credentials;
# if you'd like to use SQL Authentication, adjust the connection string accordingly!
$connection = New-Object Data.SqlClient.SqlConnection;
$connection.ConnectionString = "Data Source=$server;Initial Catalog=$database;Integrated Security=True;";
$connection.Open();

Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Connection has been established with database");

# Build SQL Command Object
$command = New-Object Data.SqlClient.SqlCommand $sql, $connection;
$reader = $command.ExecuteReader();

$invalids = [System.IO.Path]::GetInvalidFileNameChars();

# Looping through all selected datasets.

While ($reader.Read())
{
    Try
    {
        $name = $reader.GetString(0);
		Write-Output "Reading $name"
        foreach ($invalid in $invalids)
           {    $name = $name.Replace($invalid, "-");    }

        If ($reader.GetInt32(1) -eq 2)
            {    $name = $name + ".rdl";    }
        ElseIf ($reader.GetInt32(1) -eq 5)
            {    $name = $name + ".rds";    }
        ElseIf ($reader.GetInt32(1) -eq 8)
            {    $name = $name + ".rsd";    }
        ElseIf ($reader.GetInt32(1) -eq 11)
            {    $name = $name + ".kpi";    }
        ElseIf ($reader.GetInt32(1) -eq 13)
	    {   $name = $name + ".pbix";    }

			
			
        Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Exporting {0}" -f $name);

        $name = [System.IO.Path]::Combine($folder, $name);

        # New BinaryWriter; existing file will be overwritten.
        $fs = New-Object System.IO.FileStream ($name), Create, Write;

        $BinaryWriter = New-Object System.IO.BinaryWriter($fs);

        # Read the binary content column
        $BinaryContent = $reader.GetSqlBinary(3).Value;

        $BinaryWriter.Write($BinaryContent, 0, $BinaryContent.Length);
        $BinaryWriter.Flush();
        $BinaryWriter.Close();
        $fs.Close();
    }
    Catch
    {
        Write-Output ($_.Exception.Message)
    }
    Finally
    {
        if ($fs) { $fs.Dispose(); }
    }
}

# Cleanup
$reader.Close();
$command.Dispose();
$connection.Close();
$connection.Dispose();

Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process Finished");

Now go and check your export folder. You have successfully restored your report and saved valuable time!

Scenario 2

Maybe you need more than one file, or you just want to be safe and rebuild everything. Then welcome to scenario 2. This is more straightforward than the first one: after restoring the database, you can execute the PowerShell script below directly. All you need to do is set the variables:

  • SQL Server Instance Name
  • Reporting Service Database Name
  • Windows File Path to Export Files

Be aware: the account running this code needs SELECT permission on the ReportServer database!

# Set the variables!
[string] $server   = "$SQLINSTANCENAME";    
[string] $database = "$REPORTSERVER_DATABASE_NAME";       
[string] $folder   = "$EXPORT_PATH";  
        
# Extract data from report server database 
$sql = "SELECT	CT.[Path]
        ,CT.[Type]
		,ISNULL(cc.ContentType,'SSRS') as ContentType
        ,CONVERT(varbinary(max), cc.[Content]) AS PBI_BinaryContent
        ,CONVERT(varbinary(max), ct.[Content]) AS RDL_BinaryContent
FROM dbo.[Catalog] AS CT
		LEFT OUTER JOIN dbo.CatalogItemExtendedContent cc
			ON ct.ItemID = cc.ItemId
WHERE CT.[Type] IN (2, 8, 5,13)
	AND ISNULL(cc.ContentType,'CatalogItem') = 'CatalogItem'";		
 
 
Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process Started");

# Create Database Connection Object
# This script uses Windows (Kerberos) Authentication with the executing user's credentials;
# if you'd like to use SQL Authentication, adjust the connection string accordingly!
$connection = New-Object Data.SqlClient.SqlConnection;
$connection.ConnectionString = "Data Source=$server;Initial Catalog=$database;Integrated Security=True;";
$connection.Open();


Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Connection has been established with database");

# Build SQL Command Object
$command = New-Object Data.SqlClient.SqlCommand $sql, $connection;
$reader = $command.ExecuteReader();

$invalids = [System.IO.Path]::GetInvalidFileNameChars();
 
# Looping through all selected datasets.

While ($reader.Read())
{
    Try
    { 
        $name = $reader.GetString(0);
		Write-Output "Reading $name"
        foreach ($invalid in $invalids)
           {    $name = $name.Replace($invalid, "-");    }

        If ($reader.GetInt32(1) -eq 2)
            {    $name = $name + ".rdl";    }
        ElseIf ($reader.GetInt32(1) -eq 5)
            {    $name = $name + ".rds";    }
        ElseIf ($reader.GetInt32(1) -eq 8)
            {    $name = $name + ".rsd";    }
        ElseIf ($reader.GetInt32(1) -eq 11)
            {    $name = $name + ".kpi";    }
        ElseIf ($reader.GetInt32(1) -eq 13)
	    {   $name = $name + ".pbix";    }

			
        Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Exporting {0}" -f $name);

        $name = [System.IO.Path]::Combine($folder, $name);

        # New BinaryWriter; existing file will be overwritten.
        $fs = New-Object System.IO.FileStream ($name), Create, Write;

        $BinaryWriter = New-Object System.IO.BinaryWriter($fs);

        # Read the binary content: 'SSRS' means classic content stored in Catalog (column 4),
        # anything else comes from CatalogItemExtendedContent (column 3)
        if ($reader.GetString(2) -eq "SSRS") {
            $BinaryContent = $reader.GetSqlBinary(4).Value;
        } else {
            $BinaryContent = $reader.GetSqlBinary(3).Value;
        }

        $BinaryWriter.Write($BinaryContent, 0, $BinaryContent.Length);
        $BinaryWriter.Flush();
        $BinaryWriter.Close();
        $fs.Close();
    }
    Catch
    {
        Write-Output ($_.Exception.Message)
    }
    Finally
    {
        if ($fs) { $fs.Dispose(); }
    }
}

# Cleanup
$reader.Close();
$command.Dispose();
$connection.Close();
$connection.Dispose();


Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process Finished");

I hope this small script will save you some valuable time.