PowerBI Reporting Services: How to get a previous version of an overwritten report

It is 2020 and Microsoft still hasn't given us version control for reports. I invite you to take a moment of silence for this nonsense.

You are working on your fancy Power BI report to impress management, listening to "The Chemical Brothers - Galvanize" because you are just a cool kid. You added new features, and it is time to update the production report and publish it on Power BI Report Server. You are feeling good, the chorus is coming, your finger is on the button. You can't wait anymore and "PUSH THE BUTTON!". Oops, you did it again. You accidentally overwrote your report with an older version, and now you are wondering how you are going to get it back. You are confident that somewhere on the internet someone must have blogged about this; after all, you can't be the only one! Well, you guessed right.

I will give you two PowerShell scripts for two different scenarios:

1- Rebuild a single RDL file: you are only interested in recovering one specific RDL file

2- Rebuild all files (RDL, RDS, RSD, PBIX): you want to play it safe and recover everything

The first scenario takes a few manual steps that you can adjust to your needs. I will explain the general idea and the important tables, so that readers with no prior experience on this matter can follow the logic.

Preparation

First of all, since you published the report and it was persisted to the database, you have to find an old backup of the Reporting Services database and restore it under a different name.

USE [master]
-- Restore the backup under a different name so you don't overwrite the live
-- ReportServer database. The logical file names below are placeholders;
-- check yours with RESTORE FILELISTONLY.
RESTORE DATABASE [REPORTSERVER_DATABASE_RESTORED]
FROM DISK = N'D:\REPORTSERVER_DATABASE_NAME.bak'
WITH FILE = 1,
     MOVE N'ReportServer' TO N'D:\Data\ReportServer_Restored.mdf',
     MOVE N'ReportServer_log' TO N'D:\Data\ReportServer_Restored_log.ldf',
     STATS = 5
GO

Now we have the desired version of the report in the restored database.

Reports are stored in your Power BI Report Server database on the Microsoft SQL Server instance, in the [Catalog] and [CatalogItemExtendedContent] tables. Feel free to check both tables' designs and explore.
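
If you would like a quick overview before diving in, a query along these lines (a sketch) shows which items live in [Catalog] and which extended-content rows they have:

-- Quick inventory of catalog items and their extended content types
SELECT c.[Path]
     , c.[Type]          -- 2 = RDL, 5 = RDS, 8 = RSD, 11 = KPI, 13 = PBIX
     , cc.[ContentType]  -- NULL for classic SSRS items
FROM dbo.[Catalog] AS c
LEFT OUTER JOIN dbo.[CatalogItemExtendedContent] AS cc
    ON c.ItemID = cc.ItemId
ORDER BY c.[Path];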

Scenario 1

If you are working on scenario 1, you first have to identify your report's unique item identifier and build the SQL query that retrieves the information needed to rebuild it from the database tables.

We will note down the [Catalog].[ItemID] of the report to use it as a unique filter in the PowerShell script.

SELECT [Catalog].[Path]
     , [Catalog].[ItemID]
FROM dbo.[Catalog]
-- Optionally narrow it down: WHERE [Path] LIKE '%YourReportName%'

The following query returns the data we need to rebuild a specific RDL report.

SELECT [Catalog].[Path]
     , [Catalog].[Type]
     , ISNULL([CatalogItemExtendedContent].ContentType, 'SSRS') AS ContentType
     , CONVERT(varbinary(max), [Catalog].[Content]) AS RDL_BinaryContent
FROM dbo.[Catalog]
LEFT OUTER JOIN dbo.[CatalogItemExtendedContent]
    ON [Catalog].ItemID = [CatalogItemExtendedContent].ItemId
WHERE [Catalog].ItemID = 'YOUR_REPORTS_ITEM_ID'

We are now ready for some PowerShell magic. You have to fill in four values in the PS script (the first three as variables, the last one inside the SQL text):

  • SQL Server Instance Name
  • Reporting Service Database Name
  • Windows File Path to Export Files
  • Report Item Identifier

Be aware: the account that runs this code needs SELECT permission on the ReportServer database!
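
If the account is missing that permission, a sketch like the following grants it on the restored copy (the user and database names are placeholders):

USE [REPORTSERVER_DATABASE_RESTORED];
-- Map the Windows login into the restored database and grant read access
CREATE USER [DOMAIN\YourAccount] FOR LOGIN [DOMAIN\YourAccount];
GRANT SELECT ON SCHEMA::dbo TO [DOMAIN\YourAccount];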

# Set the variables!
[string] $server   = "$SQLINSTANCENAME";
[string] $database = "$REPORTSERVER_DATABASE_NAME";
[string] $folder   = "$EXPORT_PATH";

# Extract data from the report server database
# (replace YOUR_REPORTS_ITEM_ID with the ItemID you noted down)
$sql = "SELECT [Catalog].[Path]
     , [Catalog].[Type]
     , ISNULL([CatalogItemExtendedContent].ContentType, 'SSRS') AS ContentType
     , CONVERT(varbinary(max), [Catalog].[Content]) AS RDL_BinaryContent
FROM dbo.[Catalog]
LEFT OUTER JOIN dbo.[CatalogItemExtendedContent]
    ON [Catalog].ItemID = [CatalogItemExtendedContent].ItemId
WHERE [Catalog].ItemID = 'YOUR_REPORTS_ITEM_ID'";

Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process started");
 
# Create Database Connection Object
# This script uses Windows (Kerberos) Authentication with the executing user's credentials.
# If you'd like to use SQL Authentication, adjust the connection string accordingly!
$connection = New-Object Data.SqlClient.SqlConnection;
$connection.ConnectionString = "Data Source=$server;Initial Catalog=$database;Integrated Security=True;";
$connection.Open();

Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Connection has been established with database");

# Build SQL Command Object
$command = New-Object Data.SqlClient.SqlCommand $sql, $connection;
$reader = $command.ExecuteReader();

$invalids = [System.IO.Path]::GetInvalidFileNameChars();

# Loop through every row returned by the query
While ($reader.Read())
{
    Try
    {
        $name = $reader.GetString(0);
        Write-Output "Reading $name"

        # Replace characters that are invalid in file names
        foreach ($invalid in $invalids)
           {    $name = $name.Replace($invalid, "-");    }

        # Map the catalog item type to a file extension
        If     ($reader.GetInt32(1) -eq 2)  {    $name = $name + ".rdl";    }
        ElseIf ($reader.GetInt32(1) -eq 5)  {    $name = $name + ".rds";    }
        ElseIf ($reader.GetInt32(1) -eq 8)  {    $name = $name + ".rsd";    }
        ElseIf ($reader.GetInt32(1) -eq 11) {    $name = $name + ".kpi";    }
        ElseIf ($reader.GetInt32(1) -eq 13) {    $name = $name + ".pbix";   }

        Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Exporting {0}" -f $name);

        $name = [System.IO.Path]::Combine($folder, $name);

        # New FileStream and BinaryWriter; an existing file will be overwritten.
        $fs = New-Object System.IO.FileStream($name, [System.IO.FileMode]::Create, [System.IO.FileAccess]::Write);
        $BinaryWriter = New-Object System.IO.BinaryWriter($fs);

        # Read the binary content and write it to disk
        $BinaryContent = $reader.GetSqlBinary(3).Value;
        $BinaryWriter.Write($BinaryContent, 0, $BinaryContent.Length);
        $BinaryWriter.Flush();
        $BinaryWriter.Close();
        $fs.Close();
    }
    Catch
    {
        Write-Output ($_.Exception.Message)
    }
    Finally
    {
        # $fs may not exist if opening the stream failed
        if ($fs) { $fs.Dispose(); }
    }
}

# Cleanup
$reader.Close();
$command.Dispose();
$connection.Close();
$connection.Dispose();

Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process Finished");

Now go and check your export folder. You have successfully restored your report and saved yourself valuable time!

Scenario 2

Maybe you need more than one file, or you just want to be safe and rebuild everything. Then welcome to scenario 2. This is more straightforward than the first one: after restoring the database, you can execute the PowerShell script below directly. All you need to do is set the variables:

  • SQL Server Instance Name
  • Reporting Service Database Name
  • Windows File Path to Export Files

Be aware: the account that runs this code needs SELECT permission on the ReportServer database!

# Set the variables!
[string] $server   = "$SQLINSTANCENAME";
[string] $database = "$REPORTSERVER_DATABASE_NAME";
[string] $folder   = "$EXPORT_PATH";

# Extract data from the report server database
$sql = "SELECT ct.[Path]
     , ct.[Type]
     , ISNULL(cc.ContentType, 'SSRS') AS ContentType
     , CONVERT(varbinary(max), cc.[Content]) AS PBI_BinaryContent
     , CONVERT(varbinary(max), ct.[Content]) AS RDL_BinaryContent
FROM dbo.[Catalog] AS ct
LEFT OUTER JOIN dbo.CatalogItemExtendedContent AS cc
    ON ct.ItemID = cc.ItemId
WHERE ct.[Type] IN (2, 5, 8, 13)
    AND ISNULL(cc.ContentType, 'CatalogItem') = 'CatalogItem'";
 
 
Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process Started");

# Create Database Connection Object
# This script uses Windows (Kerberos) Authentication with the executing user's credentials.
# If you'd like to use SQL Authentication, adjust the connection string accordingly!
$connection = New-Object Data.SqlClient.SqlConnection;
$connection.ConnectionString = "Data Source=$server;Initial Catalog=$database;Integrated Security=True;";
$connection.Open();


Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Connection has been established with database");

# Build SQL Command Object
$command = New-Object Data.SqlClient.SqlCommand $sql, $connection;
$reader = $command.ExecuteReader();

$invalids = [System.IO.Path]::GetInvalidFileNameChars();
 
# Loop through every row returned by the query
While ($reader.Read())
{
    Try
    {
        $name = $reader.GetString(0);
        Write-Output "Reading $name"

        # Replace characters that are invalid in file names
        foreach ($invalid in $invalids)
           {    $name = $name.Replace($invalid, "-");    }

        # Map the catalog item type to a file extension
        If     ($reader.GetInt32(1) -eq 2)  {    $name = $name + ".rdl";    }
        ElseIf ($reader.GetInt32(1) -eq 5)  {    $name = $name + ".rds";    }
        ElseIf ($reader.GetInt32(1) -eq 8)  {    $name = $name + ".rsd";    }
        ElseIf ($reader.GetInt32(1) -eq 11) {    $name = $name + ".kpi";    }
        ElseIf ($reader.GetInt32(1) -eq 13) {    $name = $name + ".pbix";   }

        Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Exporting {0}" -f $name);

        $name = [System.IO.Path]::Combine($folder, $name);

        # New FileStream and BinaryWriter; an existing file will be overwritten.
        $fs = New-Object System.IO.FileStream($name, [System.IO.FileMode]::Create, [System.IO.FileAccess]::Write);
        $BinaryWriter = New-Object System.IO.BinaryWriter($fs);

        # Classic SSRS items keep their content in [Catalog] (column 4),
        # Power BI items keep theirs in [CatalogItemExtendedContent] (column 3)
        if ($reader.GetString(2) -eq "SSRS") {
            $BinaryContent = $reader.GetSqlBinary(4).Value;
        } else {
            $BinaryContent = $reader.GetSqlBinary(3).Value;
        }

        $BinaryWriter.Write($BinaryContent, 0, $BinaryContent.Length);
        $BinaryWriter.Flush();
        $BinaryWriter.Close();
        $fs.Close();
    }
    Catch
    {
        Write-Output ($_.Exception.Message)
    }
    Finally
    {
        # $fs may not exist if opening the stream failed
        if ($fs) { $fs.Dispose(); }
    }
}

# Cleanup
$reader.Close();
$command.Dispose();
$connection.Close();
$connection.Dispose();


Write-Output ((Get-Date -format yyyy-MM-dd-HH:mm:ss) + ": Process Finished");

I hope this small script saves you some valuable time.

Always On Secondary Database in Reverting/In Recovery

The other day I had an interesting incident with my Always On servers. After a successful failover to the other site, the new secondary (the old primary) database was stuck in Reverting / In Recovery status for a very long time. I wondered why, did some research, and here is what I found.

PROBLEM:

We have two standalone Always On nodes in a Multi-Subnet Failover Cluster.

Availability Mode: Sync Commit

Failover Mode: Automatic Failover

We performed a failover from Node 1 to Node 2. The failover finished successfully and the database is now online on Node 2. However, Node 1 did not switch to the "Synchronized" state. Instead, it got stuck in the Reverting / In Recovery state, as in the figure below.

[Figure: secondary database stuck in the Reverting / In Recovery state]

Since it is in the "Reverting / In Recovery" state, applications and end users were not able to access the secondary.

CAUSE:

According to the MS documentation for sys.dm_hadr_database_replica_states:

3 = Reverting. Indicates the phase in the undo process when a secondary database is actively getting pages from the primary database. Caution: When a database on a secondary replica is in the REVERTING state, forcing failover to the secondary replica leaves the database in a state in which it cannot be started as a primary database. Either the database will need to reconnect as a secondary database, or you will need to apply new log records from a log backup.

It basically tells us that we initiated a failover from Node 1 to Node 2 while there were transactions that had been running on Node 1 for a long time, and they were interrupted by the failover. Naturally, Node 1 has to finish rolling back those transactions before it can accept new pages from the primary site and become available again. During this process, the secondary will be unavailable to reports, read-only routing requests, backups, and so on.

FIX:

Always check open and running transactions before initiating a failover! Especially the ones that produce tons of log, such as index maintenance, big chunks of DML operations, and so on. If you see such activity on the availability database(s), either wait for them to finish or stop them.
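
A quick pre-failover check could look like this (a sketch based on the transaction DMVs):

-- Sessions with open transactions and the log they have generated so far
SELECT s.session_id
     , s.login_name
     , dt.database_id
     , dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
    ON dt.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON st.session_id = s.session_id
ORDER BY dt.database_transaction_log_bytes_used DESC;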

Well, once it has happened, there is no fix. If you have already failed over and see the database in the reverting state, there is nothing you can do other than wait for the rollback operation to complete. The undo phase has to finish, and this may take a long time. In my case, it took 30 minutes to roll back all transactions and complete the reverting stage. Only after that did the secondary replica become available and switch to the "Synchronized" state. The secondary database replica goes through three phases during the undo process:

  1. Synchronization State: “NOT SYNCHRONIZING” ; Database State: ONLINE
  2. Synchronization State: “NOT SYNCHRONIZING” ; Database State: RECOVERING
  3. Synchronization State: “REVERTING” ; Database State: RECOVERING

These states can be viewed by executing the query below:

SELECT DB_NAME(database_id) AS DatabaseName
     , synchronization_state_desc
     , database_state_desc
FROM sys.dm_hadr_database_replica_states
WHERE is_local = 1 AND is_primary_replica = 0

[Figure: query output showing the REVERTING synchronization state]

But hey, cheer up! You can watch the third phase of the undo process by looking at this counter:

SQLServer:Database Replica Log remaining for undo

You can monitor it via the Performance Monitor tool or from a system DMV. We run Windows Server Core on our database hosts, so I had to monitor this counter from sys.dm_os_performance_counters:

SELECT [object_name]
     , [counter_name]
     , [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Database Replica%'
    AND [counter_name] = 'Log remaining for undo'

[Figure: Log remaining for undo counter values]

According to the MS documentation, this counter value shows the amount of log, in kilobytes, remaining to complete the undo phase.

When the undo is done, the database switches to Synchronized immediately.
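
If you want to keep an eye on the progress, a small polling loop (a sketch) prints the remaining amount every 30 seconds until the undo phase completes:

-- Poll the undo progress until the counter reaches zero
WHILE EXISTS (SELECT 1 FROM sys.dm_os_performance_counters
              WHERE [counter_name] = 'Log remaining for undo'
                AND [cntr_value] > 0)
BEGIN
    SELECT [instance_name], [cntr_value] AS KBRemainingForUndo
    FROM sys.dm_os_performance_counters
    WHERE [counter_name] = 'Log remaining for undo';

    WAITFOR DELAY '00:00:30';
END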

To sum up, if you have long-running transactions on the primary site before a failover occurs, the new secondary will sit in the reverting stage for a while. That's why I think it is good practice to check running processes before initiating a failover, even though the Always On Dashboard shows no data loss. If you ever encounter this, you can monitor the process by looking at the replica states on the secondary replica and watching the Log remaining for undo counter.

Always On Availability Groups with Multi-Subnet Failover Cluster

So you decided to create Always On Availability Groups on a Multi-Subnet Failover Cluster, which gives you the opportunity to fail over across the data centers you have in different regions or continents. Recently, we built this scenario for our monitoring tool with two different subnets. Along the way we faced some issues and fixed them. I prepared this post to give you a quick summary of our experience.

First of all, the Multi-Subnet Failover Cluster needs to be created and configured by the Windows or domain admins.

Command:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

New-Cluster -Name MS-SQL-FC -Node SCOMDB1, SCOMDB2 -StaticAddress x.x.x.x, y.y.y.y

In this example:

Cluster Name Object (CNO): MS-SQL-FC

Nodes: SCOMDB1 (primary), SCOMDB2 (secondary)

Once the Multi-Subnet Failover Cluster is ready, the Always On feature needs to be enabled on both SQL Server services; a service restart is required after enabling it. If you enable this feature before the cluster is fully operational and ready, you will get an error like this when you try to create the AG:

Msg 35220, Level 16, State 1, Line 92
Could not process the operation. Always On Availability Groups replica manager is waiting for the host computer to start a Windows Server Failover Clustering (WSFC) cluster and join it. Either the local computer is not a cluster node, or the local cluster node is not online. If the computer is a cluster node, wait for it to join the cluster. If the computer is not a cluster node, add the computer to a WSFC cluster. Then, retry the operation.
Disconnecting connection from HOSTNAME…
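
As for enabling the feature itself, you can use SQL Server Configuration Manager, or the SqlServer PowerShell module if you prefer scripting (a sketch; -Force restarts the service for you):

Import-Module SqlServer
# Enable the Always On feature on both instances (each service gets restarted)
Enable-SqlAlwaysOn -ServerInstance "SCOMDB1" -Force
Enable-SqlAlwaysOn -ServerInstance "SCOMDB2" -Force

Afterwards, SELECT SERVERPROPERTY('IsHadrEnabled') should return 1 on both nodes.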

Now we can continue to the Always On Availability Group setup. The tricky part here is to skip the New Availability Group Wizard and use T-SQL instead! There are three reasons:

1- You cannot create an empty Availability Group with the Wizard. Most of the time you need to set up the Availability Group prior to the application setup. In that case people (even some DBAs with 15 years of background) create dummy databases just to create the AG, which breaks my heart in two. With T-SQL you can create an empty AG.

2- When you create an Availability Group with the Wizard and add a database with the backup & restore seeding method, the user who launched the Wizard will be the owner of:

  • Endpoint
  • Availability Group
  • Database

Guess what is going to happen when that individual leaves the company? So use T-SQL scripts and specify an account with the EXECUTE AS LOGIN command! It could be sa or a dedicated service account for this purpose. In my example I will be using the sa account. (You can audit the current owners with the queries shown right after this list.)

3- It's fun, fast, and more secure.
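
Speaking of ownership, here is a sketch to audit who currently owns the endpoint and the databases:

-- Who owns the database mirroring (AG) endpoint?
SELECT e.name AS endpoint_name
     , sp.name AS owner_name
FROM sys.endpoints AS e
JOIN sys.server_principals AS sp
    ON e.principal_id = sp.principal_id
WHERE e.type_desc = 'DATABASE_MIRRORING';

-- Who owns each database?
SELECT name AS database_name
     , SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases;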

The script below should be executed on both SCOMDB1 and SCOMDB2 to create the endpoint for AG communication and the Extended Events session that monitors the AG:

USE [master]
GO
EXECUTE AS LOGIN = 'sa'
CREATE ENDPOINT [Hadr_endpoint]
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (ROLE = ALL, ENCRYPTION = REQUIRED ALGORITHM AES)
GO

IF (SELECT state FROM sys.endpoints WHERE name = N'Hadr_endpoint') <> 0
BEGIN
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED
END

USE [master]
GO
EXECUTE AS LOGIN = 'sa'
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [sa]
GO

IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'AlwaysOn_health')
BEGIN
ALTER EVENT SESSION [AlwaysOn_health] ON SERVER WITH (STARTUP_STATE = ON);
END
IF NOT EXISTS (SELECT * FROM sys.dm_xe_sessions WHERE name = 'AlwaysOn_health')
BEGIN
ALTER EVENT SESSION [AlwaysOn_health] ON SERVER STATE = START;
END
GO

Create the Always On Availability Group on SCOMDB1 (make sure to edit Failover Mode, Availability Mode, and the other options according to your business needs):

EXECUTE AS LOGIN = 'sa'

CREATE AVAILABILITY GROUP [SCOMBAG]
WITH (AUTOMATED_BACKUP_PREFERENCE = PRIMARY,
BASIC,
DB_FAILOVER = ON,
DTC_SUPPORT = NONE)
FOR
REPLICA ON N'SCOMDB1' WITH (ENDPOINT_URL = N'TCP://SCOMDB1.FQDN:5022', FAILOVER_MODE = AUTOMATIC, AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, SEEDING_MODE = AUTOMATIC, SECONDARY_ROLE(ALLOW_CONNECTIONS = NO)),
N'SCOMDB2' WITH (ENDPOINT_URL = N'TCP://SCOMDB2.FQDN:5022', FAILOVER_MODE = AUTOMATIC, AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, SEEDING_MODE = AUTOMATIC, SECONDARY_ROLE(ALLOW_CONNECTIONS = NO));

Create the Always On listener on SCOMDB1:

EXECUTE AS LOGIN = 'sa'

ALTER AVAILABILITY GROUP [SCOMBAG]
ADD LISTENER N'SCOMBAG-L' (
WITH IP
((N'10.x.x.z', N'255.255.255.0'),
(N'10.y.y.z', N'255.255.255.0')
)
, PORT = 1433);

Join the secondary host to the AG (run on SCOMDB2):

EXECUTE AS LOGIN = 'sa'
ALTER AVAILABILITY GROUP [SCOMBAG] JOIN;

EXECUTE AS LOGIN = 'sa'
ALTER AVAILABILITY GROUP [SCOMBAG] GRANT CREATE ANY DATABASE;

Now you can create your database(s) and add them to the Availability Group. There are two prerequisites for adding a database to an Availability Group (both are scripted in the sketch after this list):

  • The database must be in the FULL recovery model
  • At least one full backup of the database must exist
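
A minimal sketch that satisfies both prerequisites (DBNAME and the backup path are placeholders):

-- Run on the primary replica
ALTER DATABASE [DBNAME] SET RECOVERY FULL;
BACKUP DATABASE [DBNAME] TO DISK = N'DBNAME_full.bak';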

Once a database meets these requirements, you can prepare it on the secondary with the backup & restore method by executing the script below in SQLCMD mode:

--Prepare the database on the secondary with the backup & restore method

:Connect SCOMDB1
EXECUTE AS LOGIN = 'sa';
BACKUP DATABASE [DBNAME] TO DISK = N'DBNAME_AG.bak'
:Connect SCOMDB2
EXECUTE AS LOGIN = 'sa';
RESTORE DATABASE [DBNAME] FROM DISK = N'DBNAME_AG.bak' WITH NORECOVERY
:Connect SCOMDB1
EXECUTE AS LOGIN = 'sa';
BACKUP LOG [DBNAME] TO DISK = N'DBNAME_AG_log.bak'
:Connect SCOMDB2
EXECUTE AS LOGIN = 'sa';
RESTORE LOG [DBNAME] FROM DISK = N'DBNAME_AG_log.bak' WITH NORECOVERY

Now the database is ready and you can add it to the Availability Group:

--CONNECT TO PRIMARY
:Connect SCOMDB1
EXECUTE AS LOGIN = 'sa';

ALTER AVAILABILITY GROUP [SCOMBAG] ADD DATABASE OperationsManager;

--CONNECT TO SECONDARY
:Connect SCOMDB2

EXECUTE AS LOGIN = 'sa';
ALTER DATABASE [OperationsManager] SET HADR AVAILABILITY GROUP = [SCOMBAG];

--Check the dashboard.

The thing about multiple DNS Records

When we create the cluster, the CNO creates DNS records under its name on the DNS server, one for each subnet in the failover cluster, such as:

MS-SQL-FC  10.x.x.x

MS-SQL-FC  10.y.y.y

When we create the Always On listener from SQL Server, the RegisterAllProvidersIP setting is set to 1 automatically. As a result, the listener cluster resource creates a DNS record for each subnet IP it has been assigned, such as:

SCOMBAG-L 10.x.x.z

SCOMBAG-L 10.y.y.z

This is intended behaviour by design. If you add "MultiSubnetFailover=True" to your connection string and your connector supports this feature (you can check here), the database connector gets both DNS records from the DNS server, tries to connect to both IP addresses in parallel (theoretically), and connects to the right one. Thus downtime is minimized in case of a failover.
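
For example, a client connection string would look something like this (a sketch assuming our listener name and port):

Server=SCOMBAG-L,1433;Database=OperationsManager;Integrated Security=SSPI;MultiSubnetFailover=True;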

However, there are cases where an older database client does not support the MultiSubnetFailover option, or your application does not support this feature, like our monitoring tool. Then clients will inevitably get timeout errors for 50 percent of their requests, since half of the connections would use the wrong DNS record.

To solve this, you have to set RegisterAllProvidersIP to 0; then only the online subnet's IP will be registered in DNS. You also need to lower the HostRecordTTL parameter to reduce downtime. By default it is 1200 seconds, which means the local DNS cache is refreshed every 20 minutes, and only then does the application server learn the online IP. A common practice is setting it to 600, so the local DNS cache is refreshed every 10 minutes and a failover costs at most around 10 minutes of downtime. We set it to 120 seconds in our case. You can set these parameters for the Cluster Name and the listener network name through PowerShell:

Get-ClusterResource "Cluster Name" | Set-ClusterParameter -Name HostRecordTTL -Value 120

Get-ClusterResource "Cluster Name" | Set-ClusterParameter -Name RegisterAllProvidersIP -Value 0

Get-ClusterResource "SCOMBAG-L" | Set-ClusterParameter -Name HostRecordTTL -Value 120

Get-ClusterResource "SCOMBAG-L" | Set-ClusterParameter -Name RegisterAllProvidersIP -Value 0

The listener name resource has to be taken offline and back online for the above changes to take effect. If you apply these parameter settings during the setup phase, you are a wise man. If you find out only after the environment is installed and clients have started complaining about timeout errors, be aware that taking the listener name resource offline will also take the availability group offline, since the AG role depends on the listener name resource. To prevent this, first remove the dependency, either with PowerShell or in the Windows Failover Cluster Manager utility.

Windows Failover Cluster Manager:

  • Open the properties of the Availability Group role
  • On the Dependencies tab, delete the listener name resource

PowerShell:

Remove-ClusterResourceDependency -Resource SCOMBAG -Provider SCOMBAG-L

Now take the listener name resource offline and back online:

Stop-ClusterResource SCOMBAG-L

Start-ClusterResource SCOMBAG-L

Force a DNS update:

Get-ClusterResource SCOMBAG-L | Update-ClusterNetworkNameResource

Re-add dependency

Windows Failover Cluster Manager:

  • Open the properties of the Availability Group role
  • On the Dependencies tab, click the Resource drop-down
  • Select the listener name resource and apply

PowerShell:

Add-ClusterResourceDependency -Resource SCOMBAG -Provider SCOMBAG-L

Reconfigure the listener port from SSMS:

ALTER AVAILABILITY GROUP [SCOMBAG] MODIFY LISTENER 'SCOMBAG-L' (PORT = 1433);

Server principal ‘LoginName’ has granted one or more permission(s). Revoke the permission(s) before dropping the server principal

A good DBA likes to keep his/her SQL Server environment clean and neat, which is why dropping old, unused, or unnecessary logins and users is always fun. However, dropping a user or login can cause many unexpected troubles if you don't carefully check the permissions, roles, and objects that could be related to that account. Unfortunately, there is no easy way to track down all server- and database-level permissions, users, and roles related to one specific login.
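
At least at the server level, a query like the one below (a sketch) lists every permission where a given login appears as either grantee or grantor:

-- Server-level permissions where the login is on either side of a grant
SELECT pe.class_desc
     , pe.permission_name
     , pe.state_desc
     , grantee.name AS grantee_name
     , grantor.name AS grantor_name
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS grantee
    ON pe.grantee_principal_id = grantee.principal_id
JOIN sys.server_principals AS grantor
    ON pe.grantor_principal_id = grantor.principal_id
WHERE grantee.name = N'LoginName'
   OR grantor.name = N'LoginName';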

I have scripts that run every hour and store all roles and permissions in inventory tables. Before I clean up my logins, I always have a look at these tables to prevent any possible problem.

Today I was doing my routine task, but I couldn't drop one domain account; SQL Server returned this generic error:

[Figure: the error message returned when dropping the login]

I was surprised, because I had double-checked my tables and there shouldn't have been any object, role, or permission bound to this account. The problem was that this account was the grantor of a permission, not the grantee. I looked at my tables and found that this account had granted the "CONNECT" permission on the Always On endpoint to another account.

This can also be checked with the query below:

SELECT * FROM sys.server_permissions
WHERE grantor_principal_id = (SELECT principal_id FROM sys.server_principals WHERE name = N'LoginName')
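
The fix is to re-issue that grant under a different grantor before dropping the login. A sketch (GranteeLogin is a hypothetical name for the account that received the CONNECT permission):

-- Revoke the permission as the original grantor...
EXECUTE AS LOGIN = 'LoginName';
REVOKE CONNECT ON ENDPOINT::[Hadr_endpoint] FROM [GranteeLogin];
REVERT;

-- ...grant it again as sa, so that sa becomes the grantor...
EXECUTE AS LOGIN = 'sa';
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [GranteeLogin];
REVERT;

-- ...and now the login can be dropped
DROP LOGIN [LoginName];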