Kerberos Constrained Delegation with Group Managed Service Accounts

Background

One of the most painful troubleshooting experiences for me has been trying to figure out how to set up SQL Server Reporting Services (SSRS) to use Kerberos Constrained Delegation. This is the scenario where you try to execute a report that uses data from a SQL Server instance on a different computer, and you receive the unhelpful and annoying ‘NT Authority\Anonymous Logon’ error whenever you try to run the report. The issue stems from the fact that the server running the reports cannot pass your authentication on to the data source.

The problem I always ran into with this issue is that I could never find the right blog or guide to get through the entire setup without it deviating from my configuration or becoming unclear. To add to the complexity, you need to edit Service Principal Names (SPNs) on Active Directory (AD) computer objects, which means you need a Domain Admin if you are not one already. After a lot of struggling, understanding how this works, and finally getting it to work for SSRS, I felt I would never run into this problem again. Until today.

I have a SQL 2016 Always On Availability Group cluster that needs a linked server to a SQL 2017 server (a different but similar problem to the SSRS example above). The one variance from the normal Kerberos setup is that the 2016 cluster uses a Group Managed Service Account (gMSA) to run the SQL Service. This is a domain-managed type of service account introduced back in Windows Server 2012. Instead of being a traditional AD user object (for which you may need to manually change the password from time to time), the gMSA is managed by AD and has its password changed automatically based on domain policy. So, unlike a standard user, it does not get the Delegation tab in Active Directory Users and Computers (ADUC) once an SPN has been set for it. (If you are interested in the process of setting up a gMSA with SQL, I suggest this blog from Norm Eberly.)

(Note: If you are having issues with setting up Kerberos for SSRS, I highly suggest Guy in a Cube’s video and write-up on this.)

Methodology

The steps here follow the same general process you would need for any other Kerberos Constrained Delegation setup:

  • Ensure SPNs are configured on service account and Data Sources
  • Delegate Access on the service account to the Data Source
  • Test linked server (or other delegated service)

Checking SPNs

The Service Principal Name on an AD object advertises a service for a specific object on the network. When you install SQL on a computer and use the default service account, the HOST/ SPN is already set up on it and usually handles Kerberos authentication from one AD object to another. You should check each computer and service account used in your setup. To check the SPNs of a computer account or the gMSA, use the following:

PS C:\Windows\System32> setspn -l gMSAsqlservice

For your servers this should return services like HOST/ServerName, HTTP/ServerName, WSMAN/ServerName, MSSql… If the gMSA returns nothing, you will need to add the MSSQLSvc SPN for each of the nodes in the AG cluster. Use the following (Note: you will need Domain Admin access, or be delegated permissions, to execute this against an object):

PS C:\> setspn -a MSSQLSvc/Node1 gMSAsqlservice
PS C:\> setspn -a MSSQLSvc/Node1.domain.local gMSAsqlservice

If you are using a named instance then you can add :Name to the end. In some cases you may need to give the port number as well if there are multiple instances on one box.

PS C:\> setspn -a MSSQLSvc/Node1:Name gMSAsqlservice
PS C:\> setspn -a MSSQLSvc/Node1.domain.local:Name gMSAsqlservice
PS C:\> setspn -a MSSQLSvc/Node1.domain.local:1433 gMSAsqlservice

Configure Delegation

In a typical setup with a standard AD user object you could open ADUC and click the Delegation tab, but in the case of a gMSA no Delegation tab exists even after the SPN has been set. The point of the Delegation tab is that you are marking which service (on which computer) the current service account is allowed to pass a user's credentials to. While the specific tab does not exist, if you enable Advanced Features in ADUC you can edit the underlying attributes directly. (Note: You may also set all of these via PowerShell when you create the account with New-ADServiceAccount or by updating it afterwards with Set-ADServiceAccount; a sketch of that approach follows the steps below.)

First: locate the msDS-AllowedToDelegateTo

  • Make sure that you launch ADUC with “Advanced Features” enabled.
    • This can be done by clicking “View” > “Advanced Features.”
  • Navigate to the gMSA, Right Click, and select “Properties”
  • Select the “Attribute Editor” tab
  • Navigate to “msDS-AllowedToDelegateTo” Attribute
  • Click “Edit”
  • Type the Service Name of the Data Source you want your Linked Server to pull data from and click “Add” and “OK”
    • Remember that the gMSA is running the SQL Service you are setting the Linked Server up on.

Second: Set the userAccountControl

  • Still looking at the “Attribute Editor” tab on the gMSA account
  • Navigate down to the “userAccountControl” attribute
  • Click “Edit”
  • Replace the value with 16781312
    • This is not in the official list, but the value was discovered by configuring a standard AD user object. (It works out to 4096, WORKSTATION_TRUST_ACCOUNT, plus 16777216, TRUSTED_TO_AUTH_FOR_DELEGATION, the flag that enables protocol transition.)
  • Click “OK” and close all dialogs
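
If you would rather script these changes, both attributes can be set from PowerShell. Below is a minimal sketch, assuming the gMSA is named gMSAsqlservice and the data source is the default instance on a server called DataSource1; the specific cmdlets and SPN strings here are my own assumptions rather than an official recipe, so adjust them to your environment.

# Requires the ActiveDirectory module and rights to modify the gMSA object
Import-Module ActiveDirectory

# Add the SPN(s) of the data source the gMSA is allowed to delegate to (msDS-AllowedToDelegateTo)
Set-ADServiceAccount -Identity gMSAsqlservice -Add @{
    'msDS-AllowedToDelegateTo' = 'MSSQLSvc/DataSource1.domain.local:1433'
}

# Turn on the TRUSTED_TO_AUTH_FOR_DELEGATION bit (the 16781312 userAccountControl value above)
Set-ADAccountControl -Identity gMSAsqlservice -TrustedToAuthForDelegation $true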

Testing the Linked Server

You may need to restart the SQL Service before creating your linked server, but after this you should be able to create it and execute a query against it. If you are still receiving errors, double-check that you have the correct SPNs on the correct service account and that you have permissions on the destination. It may also help to register both the NetBIOS name and the FQDN of the destination server as SPNs.
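
For a quick test from PowerShell (assuming the SqlServer module's Invoke-Sqlcmd is available), something like the following works; the linked server, database, and table names are placeholders, and remember that the double hop is only exercised when you connect to the instance with Windows authentication from another machine, such as your workstation.

# Hypothetical object names; Node1 is the instance the gMSA is running
Invoke-Sqlcmd -ServerInstance 'Node1' -Query "SELECT TOP (5) * FROM [LinkedServerName].[TargetDB].[dbo].[SomeTable];"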

Conclusion

This go-around of figuring out Kerberos Constrained Delegation was not as difficult as my first (which lasted about six months of on-and-off troubleshooting), but it is nothing I would wish on anyone. The benefit, if you get it right, is the key: it allows you to set up a linked server (or even SSRS) within the constraints of AD credentials. This promotes least-privilege access on your SQL instance when you have a shared environment and not everyone should have access to the remote system.

Creating Word Documents with PowerShell

Background

For the last few months I have been working on a course called “Introduction to Windows PowerShell” for Pragmatic Works. This has been a challenging and rewarding task as I have found that you learn something best when you teach it to someone else.

Part of the deliverables for this course was providing labs for students to walk through on their own. For each lab there is also a solution that I provide in case they get stuck. On the first pass I had placed each of these labs and solutions in a separate .ps1 file, but after further review it was going to be better if they were all in a single Word document. That meant combining roughly 20 files into 2 documents.

Being somewhat crunched for time, I decided to see if I could automate feeding these .ps1 files into a Word Document using PowerShell.

Methodology

My gut response for the last few years has been to automate as many things as possible using PowerShell. If this could not be done entirely with PowerShell, at least I would try to do as much as possible.

In order to do this I had to learn a little bit:

  • Are there any CLI options for Word?
  • If not, are there any .NET Classes exposed for Word?
  • And if either of the first two were true, could I produce a new script?

.NET Classes for Microsoft Word

Upon doing some digging I found a couple of forum posts on manipulating files using PowerShell (StackOverflow, MSDN). What caught my eye from these posts is that they both used ‘Word.Application’ as a COM object. Looking into this led me to information about Word’s VBA interface for scripting (MSDN – VBA). The VBA scripting wasn’t exactly what I was looking for, but I figured I would keep digging. Finally I found the gold at the end of the rainbow: the entire .NET namespace documentation for Office 2013+ (MSDN – Namespace). There is a lot that this documentation covers, but I will just walk through the basics of creating a file, formatting the text, adding text from other files, and finally saving the document.

Creating a File

In order to create a file you must have an instance of Word in your PowerShell session. To do this you can use the New-Object cmdlet with the -ComObject parameter as shown below:

PS C:\WINDOWS\system32> $WordApplication = New-Object -ComObject "word.application"

This starts an instance of Word, which you can see in your running processes, but it does not open a visible window. If you want to see the window you can do the following:

PS C:\WINDOWS\system32> $WordApplication.Visible = $true

If you look at Word, it is now visible, but no document is open. We must now create the document we want to use. We could also open up a document if we had an existing one.
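
For example, opening an existing file (the path below is just a placeholder) would look something like this:

PS C:\WINDOWS\system32> $Document = $WordApplication.Documents.Open("C:\Temp\Existing.docx")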


From the $WordApplication variable we can create a new document. It is best to save this new document into another variable for future use.

 PS C:\WINDOWS\system32> $Document = $WordApplication.Documents.Add()

At this point you now have a document to work with.

Writing the Document

By this point I had spent a lot of time trying and failing. Since there was a variable for both the application ($WordApplication) and the document ($Document), it was difficult to determine which one I needed in order to write content from my sources. I first assumed it was the document variable because that is what I was editing, but all of the writing happens through the ‘Selection’ interface on the application variable.

PS C:\WINDOWS\system32> $Writer = $WordApplication.Selection

There are several things I wanted to do inside of this document. First I wanted to bring content in from various files, but I also wanted to create a heading for each file that was written. The first part seemed easy, because there is a method called .TypeText(). I found that you can insert text by passing a string or calling another cmdlet like Get-Content inside of this method. I also found that you could set the style of the document before you ‘typed’ and then change it back to the normal format. The most confusing part was the apparent inconsistency: when calling $Writer.TypeText((Get-Content -Path $blah)) you wrap the value in parentheses, but when changing the style you assign a value like $Writer.Style = 'Heading 1'. (In hindsight this makes sense: TypeText() is a method while Style is a property.)

In the end, I wrote a foreach loop in the following way to import all of the text while setting the style on import.

PS C:\WINDOWS\system32> 1..10 | foreach -Process {
>> # Path to the solution file for each module
>> $labpath = "C:\PragmaticWorks\Introduction to Windows PowerShell\Module$_\Lab-Solution.ps1"
>> # Write a heading for the module, then drop back to the Normal style
>> $Writer.Style = 'Heading 1'
>> $Writer.TypeText("Module $_ Lab Solution")
>> $Writer.TypeParagraph()
>> $Writer.Style = 'Normal'
>> # -Raw returns the file as a single string so the line breaks survive
>> $Writer.TypeText((Get-Content $labpath -Raw))
>> $Writer.TypeParagraph()
>> }

Note: .TypeParagraph() inserts a new line so that there is a break between blocks of text.

When I finished inserting all of my files I saved the file and closed the document.

PS C:\WINDOWS\system32> $Document.SaveAs($filepath)
PS C:\WINDOWS\system32> $Document.Close()
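
Once the document is closed, it is also worth quitting the hidden Word process and releasing the COM object so it does not linger in the background; something along these lines should do it:

PS C:\WINDOWS\system32> $WordApplication.Quit()
PS C:\WINDOWS\system32> [System.Runtime.InteropServices.Marshal]::ReleaseComObject($WordApplication) | Out-Null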

Conclusion

This was a fun conceptual project to see if it was possible, and it did end up saving me the pain of copying and pasting all 20 files into 2 separate documents. I did have some issues with formatting, which I would have loved to explore further, but I am not sure I could have automated that formatting since I had code examples in my source documents; I ended up manually aligning these code examples once in Word. I also noticed a .Find() interface on the application variable, and it would be neat to explore that further for gathering information from Word documents on distributed systems.

An MVP I follow suggested that I make this into a module. What are your thoughts? Would you like to see a PowerShell module for interfacing with documents? Feel free to tweet me @joshCorr for more info or if you benefited from this blog.

Backing Up Facebook Photos using PowerShell

Background

I have been using social media since 2006, when I first signed up for Myspace. That was the day and age when my parents had dial-up, and going online to check social media (or chat on AIM) was a 15-minute treat once a week (somewhat joking). Myspace was cool. You could edit your page. It was the ultimate self-expression for an IT nerd wannabe (I had some pretty cool HTML and CSS on that page). After a few years Myspace was eh, but Facebook was the new hotness. No ads, clean, uniform, mature, verified users. I signed up in the spring of 2008 and used it up until recently. Within the last 2 years I started withdrawing from it for various reasons (too many to get into here). The only thing that kept me from deleting it long ago was the budding relationship with my wife, which was thoroughly documented and photographed on Facebook.

So I decided to see if there was a way to backup the photos and albums in my Facebook account programmatically.

Methodology

I could not find any apps that looked legit for downloading Facebook albums. I ran across a Stack Overflow article that described downloading photos from a Facebook page using their Graph API (Get All Photos of a Page Using Facebook Api), which got me thinking. I am not a programmer, so writing a JavaScript or C# app didn’t seem like a viable way to accomplish this task. However, I have a fair amount of experience with PowerShell, which has Invoke-WebRequest and Invoke-RestMethod, and I figured I could use these cmdlets to gather the info and download the pictures.

In order to do this I had to learn a little bit, starting with authentication.

Getting an Access Token

In order to connect to graph.facebook.com I first needed an Access Token to authorize my connection. You can generate this using code, but the easiest way I found to do this for a oAuth Noob, was in the following way.

  1. Log in to Facebook and navigate to https://developers.facebook.com
  2. Click on “Tools & Support” in the navigation menu
  3. Click on “Graph API Explorer” under popular tools
  4. Click “Get Token”
  5. Click “Get User Access Token” on drop down
  6. Select the “user_photos” permission and click “Get Access Token” (I also selected “user_posts” and “publish_actions” because I wanted to check posts as well as potentially update the photos)
  7. A Facebook app authorization page will pop up. Accept this.
  8. Now you should have a valid Access Token above the API navigation bar
  9. Copy and Paste this Access Token into a PowerShell variable named $AccessToken
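
That last step is nothing more than pasting the string into a variable; the value below is obviously a placeholder.

PS C:\> $AccessToken = '<paste the token from the Graph API Explorer here>'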

Note: you can check the info on the token by clicking the blue “i” next to the Access Token to see when it expires. Mine usually lasted an hour and a half to two hours.

Finding my Photos

Now that I have an AccessToken I can use PowerShell to grab a list of my photos or my albums using some of the following code.

To gather all* photos I am tagged in:

Invoke-RestMethod -Method Get -Uri https://graph.facebook.com/v2.12/me/photos -Headers @{Authorization = "Bearer $AccessToken"}

To gather albums I have created and uploaded:

Invoke-RestMethod -Method Get -Uri https://graph.facebook.com/v2.12/me/albums -Headers @{Authorization = "Bearer $AccessToken"}

Note: Be sure to include 'Bearer' in your Authorization header, as your token will not work without it.

This execution returns a PSCustomObject called Data, which is a group of hashtables inside of a hashtable. The parent hashtable can be expanded to show the ID, name, and other fields of the child hashtables. If you want to include fields not in the default view, you can add them to the end of your URI with the following syntax.

Invoke-RestMethod -Method Get -Uri "https://graph.facebook.com/v2.12/me/photos?fields=id,name,images,link" -Headers @{Authorization = "Bearer $AccessToken"}

And if you want to return more than the default 25 results, add something like &limit=100:

Invoke-RestMethod -Method Get -Uri "https://graph.facebook.com/v2.12/me/photos?fields=id,name,images,link&limit=100" -Headers @{Authorization = "Bearer $AccessToken"}

For full reference on the methods, fields, and edges on this api please reference Graph Api Photo Reference and Graph Api Album Reference

Downloading my Photos

At this point I can put the output of Invoke-RestMethod into a variable called $Pictures, from which I will extract the ‘Data’ field into another variable called $Images. (This is a somewhat redundant step, but I did it for ease of use later on in a loop.)

$Images = $($Pictures.Data)

Initially I thought the Link field was where the photos were stored, but in fact it was only the link to the post of the picture. Also important to know: the ‘images’ field does not include a single reference to the image, but usually 7-9 different sizes of the same image (thankfully sorted from largest to smallest).

From here we can use Invoke-WebRequest in a loop to download the images. Each image has its own source location, which is the URL to the original file. The photos can be saved by redirecting this source URL to an output file named after the image’s ID.

  foreach ($p in $Images){Invoke-WebRequest -Uri $($p.images[0].source) -OutFile "D:\TaggedPhotos\$($p.id).jpg" -Headers @{Authorization = "Bearer $AccessToken"} }

Note: I am not sure whether it makes a difference, but I am using PowerShell v5.1 because PowerShell Core recently rewrote Invoke-WebRequest to use HttpClient instead of WebRequest.

Conclusion

The result was that my tagged photos were downloaded to the requested location at the largest resolution available on Facebook. I did not do the same for my albums, since you can download those entirely from Facebook, but it could be done with the same general method. It would require a few more loops, as you need to get the album ID first and then use it to discover all of the photos inside the album before finding each corresponding image source.
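
For anyone who wants to try it, here is a rough, untested sketch of that album loop; the field names and output folder are my own assumptions following the same pattern as the photo calls above.

# Sketch only: list my albums, then pull every photo in each album
$Albums = Invoke-RestMethod -Method Get -Uri "https://graph.facebook.com/v2.12/me/albums" -Headers @{Authorization = "Bearer $AccessToken"}
foreach ($a in $Albums.data) {
    $AlbumPhotos = Invoke-RestMethod -Method Get -Uri "https://graph.facebook.com/v2.12/$($a.id)/photos?fields=id,images&limit=100" -Headers @{Authorization = "Bearer $AccessToken"}
    foreach ($p in $AlbumPhotos.data) {
        # The largest rendition is first; save as AlbumId_PhotoId.jpg
        Invoke-WebRequest -Uri $p.images[0].source -OutFile "D:\Albums\$($a.id)_$($p.id).jpg"
    }
}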

In the end this got most, but not all, of my tagged photos. Depending on the permissions set by my friends, I could not get certain pictures (though I could still see them on Facebook). This was more of a learning experience for me, as I had no prior experience using an API and had never used OAuth access tokens in this way, but overall it was a great one.

Checking Your Backup Strategy with DBAChecks

Within the last 18 months I have become an accidental DBA and found myself grasping for information and tools to help manage the handful of database servers for which I am responsible. In that journey I discovered dbatools (dbatools.io), a great community-built PowerShell module for configuring SQL instances. If you're not familiar with it, it has cmdlets that include the likes of Copy-DbaLogin, Find-DbaBackup, Get-DbaMemoryUsage, and Test-DbaPowerPlan. The last of these really encapsulates the heart of the dbatools team, as they are seeking to provide tools that reflect best practices for data professionals. (If you didn't know that an improperly set power plan can cause performance issues, you should probably download dbatools and give it a shot.)

Continuing this desire to help the community practice good things, the dbatools team has now released dbachecks (dbachecks.io). This is a module which leverages Pester (a PowerShell module for unit testing) to run tests against your SQL instances. You can find full details on what is required and how to install this module on their GitHub page here.

There are a large number of tests included with the dbachecks module, but you can create your own if you are familiar with writing tests in Pester. A few of the built-in tests that I found most helpful (and that actually made me re-evaluate my production systems) were the tests for backups.

After loading the module you can check out all of the predefined tests by running Get-DbcCheck. To get a list of all of the tests that check backups, use the -Pattern parameter to search for tests with matching keywords.

Get-DbcCheck -Pattern backup

In order to run the test you execute Invoke-DbcCheck with the following syntax to return successful passes of tests and failures.

Invoke-DbcCheck -Check BackupPathAccess,DefaultBackupCompression -SqlInstance localhost -ComputerName localhost

As you can see, the backup location is accessible, but I have not configured compression for my backup sets. It also took less than half a second to run these checks. Now, one of the most commonly unchecked aspects of a database backup is whether it will actually restore, and conveniently enough dbachecks has a test for that as well. Under the hood, this test uses Test-DbaLastBackup to try restoring the database with the -VerifyOnly flag:

Invoke-DbcCheck -Check TestLastBackupVerifyOnly -SqlInstance localhost -ComputerName localhost

The user databases for my SSRS instances were successfully tested, but there were no backups for my system databases. As a result I decided to double-check; everything was in order, but you can never be too sure.

So my suggestion is to set this up as a scheduled task or agent job and make sure that your backups are up to date. This module makes it easy to make sure that you don't lose your job because of data loss. This is just a highlight of a few of these tests; imagine the rest of the checks that are possible with this module, whether built in or of your own creation. Download the module from the PowerShell Gallery and give it a try.
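
As a rough sketch of the scheduled-task route (the check name, schedule, and output path are assumptions you would tailor to your environment):

# Run the backup restore test every morning at 6:00 and log the results
$Action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -Command "Invoke-DbcCheck -Check TestLastBackupVerifyOnly -SqlInstance localhost | Out-File C:\DbcResults\LastBackup.txt"'
$Trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'dbachecks-Backups' -Action $Action -Trigger $Trigger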


Deploying SQL 2017 in 15 minutes

This last Wednesday (10/11/17) I had the opportunity to share about PowerShell DSC at JAXSUG and I wanted to follow up and share a little more about what I was doing in my demo.

PowerShell Desired State Configuration (referred to as DSC from here on) is Microsoft's attempt to make server configuration easier and more stable for the admin. DSC is available in any version of Windows Server (or even the client OS) running PowerShell version 4 or greater. One of the draws of DSC is that configurations are human-readable, declarative documents, which makes them easy to understand for anyone reading the script. Additionally, these configurations provide a way to configure nodes en masse as well as a way to keep nodes from drifting out of their configuration.

Each node has a Local Configuration Manager (LCM), which is responsible for processing the configurations it receives. You can find out how your LCM is configured by running Get-DscLocalConfigurationManager in an administrative PowerShell console. PowerShell is used to compile a configuration into a .mof file (or a meta.mof if it is for the Local Configuration Manager). These files comply with the CIM (Common Information Model), an open standard used by many different platforms. (Just a cool side note: there are people who use MOF files, authored in PowerShell, to control Linux systems and networking devices.)

Each configuration that is written may also need additional resources in order to extend the types of settings that can be managed. By default PowerShell knows about 13 different resources for manipulating files and server settings. In order to install SQL Server I needed to download and use the xSQLServer resource, which can be found on GitHub.

Below is a sample of the LCM configuration from my demo, with some comments:

#These settings produce a $computer.meta.mof
# and are used to Setup the LocalConfiguration Manager
[DSCLocalConfigurationManager()]
Configuration LCM_Push
{
    #Parameter block allows this configuration to be used many times
    Param(
     [string[]]$ComputerName
    )
    #The Node keyword is used to determine 
    #what the name of the meta.mof will be called
    Node $ComputerName
    {
         Settings
         {
         #These settings are only a few of all that are possible
         AllowModuleOverwrite = $True 
         ConfigurationMode = 'ApplyOnly'
         RefreshMode = 'Push'
         RebootNodeIfNeeded = $True
         #The CertificateID allows you to use a Document Signing 
         #certificate to Encrypt credentials within the .Mof file
         CertificateID = "56201A6926C2134876123409874C437E7E66BD"
         }
     }
}
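
To give a sense of how this gets used, here is roughly how the meta.mof is produced and pushed once the configuration above has been loaded into the session. The node name and paths are made up for the example, and 'C:\DSC\SQL2017' stands in for wherever the actual SQL configuration MOF was compiled.

# Compile the meta.mof for the target node
LCM_Push -ComputerName 'SQL2017-01' -OutputPath 'C:\DSC\LCM'

# Apply the LCM settings to the node, then push the SQL configuration itself
Set-DscLocalConfigurationManager -Path 'C:\DSC\LCM' -Verbose
Start-DscConfiguration -Path 'C:\DSC\SQL2017' -Wait -Verbose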

If you would like to check out the script I used you can download it here: Push-SQL2017.

While my demo only took about 15 minutes to perform, the initial setup of the environment (domain controller, authoring host, and deployment VM), the certificate server and certificates for encrypting the MOF (optional in test, but necessary in prod), and the configuration itself took many hours of testing and troubleshooting. It has turned out to be a necessary tool: many requests come in to me for new or test instances, and I now know I can reliably roll out a new version with a few clicks.

Copy-SqlLinkedServer

Note: This is the last of my series on dbatools.io, and upon publishing it I discovered that dbatools is changing its cmdlet naming to the Verb-DbaNoun pattern. Read about it here: https://dbatools.io/bagofbobbish-beta/

In this series on the dbatools module (https://dbatools.io) we are going to look next at Copy-SqlLinkedServer.

Linked servers are something that may not be widely used in some organizations, but they were widely used on one of my servers. They are usually a workaround for reaching some other Oracle DB, or even a means of connecting to a reporting data source. These connections can be complicated, as they sometimes have local credentials stored in the connection with no means of retrieving the password. By using this command in the following way, you can copy the linked server to another SQL instance with ease.

Copy-SQLLinkedServer -Source SQL2012 -Destination SQL2016 -LinkedServer OracleDB2

Any settings that you need for the linked server (RPC Out, Data Access, etc.) should be configured after the move, as this method only copies the name and login (if present) and inherits all other settings from the destination instance.


Copy-SqlDatabaseMail

In this series on the dbatools module (https://dbatools.io) we are going to look next at Copy-SqlDatabaseMail.

If you are not in a particularly large shop (or a well-funded one) you may not have the most robust monitoring solution, but you probably at least have an open email relay available for your use. Setting up email alerts on a SQL instance is very important so that you can receive notifications of job failures or completions of Integration Services jobs. Additionally, you may have triggered jobs which notify you when a specific database is filling up. Once you configure these settings it can be a pain to recreate them on another instance, especially if you are not the email admin.

By using this syntax you can easily migrate the settings:

Copy-SQLDatabaseMail -Source SQL2012 -Destination SQL2016

After migrating the settings you may still need to enable the mail profile on the instance you are migrating to (right-click the SQL Server Agent > Properties > Alert System > Enable mail profile), but once you do, you will have all the settings in place for your alerts. You may also want to use Copy-SqlOperator if you have specific email addresses set up for your jobs; this is absolutely a prerequisite before you migrate those jobs from another instance.
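
Copy-SqlOperator follows the same pattern as the other copy cmdlets; a minimal example (copying every operator from one instance to the other) would look something like this:

Copy-SqlOperator -Source SQL2012 -Destination SQL2016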


Copy-SqlJob

In the last post on the dbatools module (https://dbatools.io) we talked about Copy-SqlCredential. Using that cmdlet may be a prerequisite before using Copy-SqlJob.

Copy-SqlJob will move SQL Agent jobs, including their schedules, from one instance to another. This can be highly efficient when moving an instance from one server to another. The only downside of this cmdlet is that it copies the job with the exact same object ID, which means that if you try to use it to duplicate jobs as a template on a single instance, the copies will be linked: change a setting on one and it will affect the other jobs copied from the same instance.

The syntax is as follows:

Copy-SqlJob -Source SQL2012 -Destination SQL2016 -jobs NightlyMaintenance,HourlyTransactionBackups

Copy-SqlCredential

In this series on the dbatools module (https://dbatools.io) we are going to look next at Copy-SqlCredential.

As you may or may not know, if DBAs are going to keep their databases in shape and continue to process ETL (Extract, Transform, and Load) jobs, they need to leverage automation. One mechanism SQL Server provides is the Agent, whose jobs can leverage all sorts of things (T-SQL, SSIS packages, even batch and PowerShell scripts). Most of these jobs can run as the service account running the Agent, but in some cases (batch and SSIS among others) a credential and proxy are needed to run them.

When migrating from one server to another, the DBA may not have access to the passwords for the credentials that exist to run certain packages. I found Copy-SqlCredential to be helpful for this. In addition, it can be paired with Copy-SqlProxyAccount in the following way.

Copy-SQLCredential -Source SQL2012 -Destination SQL2016 -Credential 'Domain\CredentialedAccount'
Copy-SQLProxyAccount -Source SQL2012 -Destination SQL2016 -Credential 'Proxy_CredentialedAccount'


Copy-SqlLogin

As I mentioned in my last post, I am going to be spending some time going over a couple of my favorite tools from the dbatools (https://dbatools.io) PowerShell module.

The one cmdlet that actually led me to the module in the first place, and which has been a tried and true tool for me, is Copy-SqlLogin. I am relatively new to the work of a DBA, but one of the more difficult lessons I learned early on is that if you have a SQL Always On Availability Group (AG) set up, you have to maintain a copy of the logins on every replica from which you may need to read the data. From what I have read, to keep these in sync you would otherwise need to either script out each login (“Script Login as > CREATE To”) on the other replica or use partial containment to get around the problem.

This cmdlet, however, offers a method similar to the first one, but with the option of syncing all of the logins or only selected logins.

My preferred way of using this command is with the following syntax:

  Copy-SQLLogin -Source SQL2012 -Destination SQL2016 -Logins 'Domain\ServiceAccount','LocalSQLLogin'

This gives you the flexibility to copy the logins for the databases you may be moving. After doing this from the original source, I would then switch the source and destination for the primary and any secondary replicas in the cluster. One caveat I found is that if you have another AG on the secondary that is not a readable secondary, or is read-intent only, this cmdlet may fail or warn that not all the databases were available.

Another robust way to use this cmdlet is with the -SyncOnly parameter:

  Copy-SQLLogin -Source SQL2012 -Destination SQL2016 -SyncOnly

This does not add or drop the login, but merely syncs up its securables. For instances that are 2012 and newer, this cmdlet can also be used to move the SIDs, passwords, and other items. It makes a powerful addition to any DBA's tool belt.