This last Wednesday (10/11/17) I had the opportunity to present on PowerShell DSC at JAXSUG, and I wanted to follow up and share a little more about what I was doing in my demo.
PowerShell Desired State Configuration (referred to as DSC from here on) is Microsoft's attempt to make server configuration easier and more stable for the admin. DSC is a feature available in any version of Windows Server (or even the client OS) running PowerShell version 4 or greater. One of the draws of DSC is that its configurations are human-readable and declarative, which makes them easy to understand for anyone reading the script. Additionally, these configurations provide a way to configure nodes en masse, as well as a way to keep nodes from drifting out of their configuration.
Each node has a Local Configuration Manager (LCM), which is responsible for processing the configurations it receives. You can find out how your LCM is configured by running Get-DscLocalConfigurationManager in an administrative PowerShell console. PowerShell is used to compile the configurations into a .mof file (or a .meta.mof if it is for the Local Configuration Manager). These files comply with the CIM (Common Information Model), an open standard used by many different platforms. (As a cool side note, some people use MOF files generated with PowerShell to manage Linux systems and networking devices.)
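For example, you can inspect a few of the current LCM settings like this (the properties shown are just a sample; exact values depend on your node):

```powershell
# Run from an elevated (Administrator) PowerShell console
Get-DscLocalConfigurationManager |
    Select-Object RefreshMode, ConfigurationMode, RebootNodeIfNeeded
```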
Each configuration that is written may also need additional resources in order to extend the types of commands that can be given. By default PowerShell knows about 13 different resources for manipulating files and server settings. In order to install SQL Server, I needed to download and use the xSQLServer resource, which may be found on GitHub.
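If you are on WMF 5 with the PowerShellGet module, the same resource module can also be pulled from the PowerShell Gallery instead of GitHub (module name xSQLServer, as used in the demo):

```powershell
# Find and install the xSQLServer DSC resource module from the PowerShell Gallery
Find-DscResource -ModuleName xSQLServer
Install-Module -Name xSQLServer -Force
```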
Below is a sample of the code for my LCM from my demo, with some comments.
#These settings produce a $computer.meta.mof
# and are used to set up the Local Configuration Manager
[DSCLocalConfigurationManager()]
Configuration LCMConfig
{
    #Parameter block allows this configuration to be used many times
    param(
        [string]$ComputerName
    )
    #The Node keyword is used to determine
    #what the name of the meta.mof will be called
    Node $ComputerName
    {
        Settings
        {
            #These settings are only a few of all that are possible
            AllowModuleOverwrite = $True
            ConfigurationMode    = 'ApplyOnly'
            RefreshMode          = 'Push'
            RebootNodeIfNeeded   = $True
            #The CertificateID allows you to use a Document Signing
            #certificate to encrypt credentials within the .mof file
            CertificateID        = "56201A6926C2134876123409874C437E7E66BD"
        }
    }
}
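Assuming the settings above are wrapped in a [DSCLocalConfigurationManager()] meta-configuration named LCMConfig (that name and the paths here are mine, not from the demo), compiling and pushing it looks like this:

```powershell
# Compile the .meta.mof into C:\DSC, then apply it to the target node
LCMConfig -ComputerName 'SQL2017' -OutputPath 'C:\DSC'
Set-DscLocalConfigurationManager -Path 'C:\DSC' -Verbose
```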
If you would like to check out the script I used you can download it here: Push-SQL2017.
While my demo only took about 15 minutes to perform, the initial setup took many hours of testing and troubleshooting: the environment (Domain Controller, authoring host, and deployment VM), the certificate server and certificates for encrypting the MOF (optional in test, but necessary in prod), and the configuration itself. It has turned out to be a necessary tool, as many requests come in to me for new instances or test instances, and I now know I can reliably roll out a new version with a few clicks.
Note: This is the last post in my series on dbatools.io, and upon publishing it I discovered that dbatools is changing its cmdlet prefix to Verb-DbaNoun. Read about it here: https://dbatools.io/bagofbobbish-beta/
In the series on the dbatools module (https://dbatools.io) we are going to look next at Copy-SqlLinkedServer.
Linked servers may not be widely used in some organizations, but they were widely used on one of my servers. They are usually a workaround for reaching some other database, such as Oracle, or a means of connecting to a reporting data source. These connections can be complicated, as they sometimes have local credentials stored in the connection with no means of retrieving the password. By using this command in the following way, you may copy a linked server to another SQL instance with ease.
Copy-SQLLinkedServer -Source SQL2012 -Destination SQL2016 -LinkedServer OracleDB2
Any settings that you need for the linked server (RPC Out, Data Access, etc.) should be turned on before moving it with this method, as it only copies the name and login (if present); all other settings are inherited from the destination instance.
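If you do need to flip those options on afterward, one way is through the standard sp_serveroption procedure (a sketch; the instance and linked server names are examples from above):

```powershell
# Enable RPC Out and Data Access on the copied linked server
Invoke-Sqlcmd -ServerInstance 'SQL2016' -Query @"
EXEC sp_serveroption @server = N'OracleDB2', @optname = N'rpc out', @optvalue = N'true';
EXEC sp_serveroption @server = N'OracleDB2', @optname = N'data access', @optvalue = N'true';
"@
```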
In the series on the dbatools module (https://dbatools.io) we are going to look next at Copy-SqlDatabaseMail.
If you are not in a particularly large shop (or a well-funded one) you may not have the most robust monitoring solution, but you may at least have an open email relay for your use. Setting up email alerts on a SQL instance is very important so that you may receive notifications of job failures or completions of Integration Services jobs. Additionally, you may have triggered jobs which notify you when a specific database is filling up. Once you configure these settings, it can be a pain to recreate them on another instance, especially if you are not the email admin.
By using this syntax you can easily migrate the settings
Copy-SQLDatabaseMail -Source SQL2012 -Destination SQL2016
After migrating the settings, you may still need to enable the mail profile on the instance you are migrating to (right-click SQL Server Agent > Properties > Alert System > Enable mail profile), but once you do, you will have all the settings in place for your alerts. You may also want to use Copy-SqlOperator if you have specific email addresses set up on your jobs; this is absolutely a prerequisite before you migrate those jobs from another instance.
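That same Agent setting can also be flipped without SSMS via SMO (a sketch; the profile name here is an example and must match one migrated by Copy-SqlDatabaseMail):

```powershell
# Point SQL Agent at a Database Mail profile using SMO
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'SQL2016'
$server.JobServer.AgentMailType       = 'DatabaseMail'
$server.JobServer.DatabaseMailProfile = 'DBA Alerts'
$server.JobServer.Alter()
```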
In the last post on the dbatools module (https://dbatools.io) we talked about Copy-SqlCredential. Using that cmdlet may be a prerequisite before using Copy-SqlJob.
Copy-SqlJob will move SQL Agent jobs, including their schedules, from one instance to another. This can be highly efficient when moving an instance from one server to another. The only downside of this cmdlet is that it copies the job with the exact same object ID, which means that trying to use it to duplicate jobs as templates will cause them to be linked on a single instance; if you change a setting on one, it will affect the other jobs copied from the same source.
The syntax is as follows
Copy-SqlJob -Source SQL2012 -Destination SQL2016 -jobs NightlyMaintenance,HourlyTransactionBackups
In the series on the dbatools module (https://dbatools.io) we are going to look next at Copy-SqlCredential.
As you may or may not know, if DBAs are going to keep their databases in shape and continue to process ETL (Extract, Transform, and Load) jobs, they need to leverage automation. One method provided by SQL Server is the Agent, which may have jobs that leverage all sorts of items (T-SQL, SSIS packages, even batch and PowerShell scripts). Most of these jobs can run as the service account running the Agent, but in some cases (batch and SSIS among others) a Credential and Proxy are needed to run them.
In migrating from one server to another, the DBA may not have access to the passwords for the Credentials that exist to run certain packages. I found Copy-SqlCredential to be helpful for this. In addition, it can be leveraged in the following way with Copy-SqlProxyAccount.
Copy-SQLCredential -Source SQL2012 -Destination SQL2016 -Credential 'Domain\CredentialedAccount'
Copy-SQLProxyAccount -Source SQL2012 -Destination SQL2016 -Credential 'Proxy_CredentialedAccount'
As I mentioned in my last post, I am going to be spending some time going over a couple of my favorite tools from the dbatools (https://dbatools.io) PowerShell module.
The one cmdlet that actually led me to the module in the first place, and which has been a tried and true tool for me, is Copy-SqlLogin. I am relatively new to the work of a DBA, but one of the most difficult lessons I learned early on is that if you have a SQL Always On Availability Group (AG) set up, you have to maintain a copy of the logins on every replica from which you may need to read the data. From what I have read, to keep these groups in sync you would need to either script out each login ("Script Login as > CREATE To") and run it on the other replica, or use partial containment to get around the problem.
This cmdlet, however, offers a method similar to the first one, but with the option of syncing all of the logins or only select logins.
My preferred method of using this command is the following syntax:
Copy-SQLLogin -Source SQL2012 -Destination SQL2016 -Logins 'Domain\ServiceAccount','LocalSQLLogin'
This provides you the flexibility to copy the logins for the databases that you may be moving. After doing this from the original source, I would then switch the source and destination for the primary and any secondary replica in the cluster. One caveat I found is that if you have another AG on the secondary that is not a readable secondary, or is read-intent only, this cmdlet may fail or warn that not all the databases were available.
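That repetition across replicas can be scripted; a minimal sketch (the secondary replica names here are hypothetical) looks like:

```powershell
# Copy the same logins from the primary to each secondary replica
$logins      = 'Domain\ServiceAccount', 'LocalSQLLogin'
$secondaries = 'SQL2016-R1', 'SQL2016-R2'
foreach ($replica in $secondaries) {
    Copy-SqlLogin -Source SQL2012 -Destination $replica -Logins $logins
}
```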
Another robust way to use this cmdlet is with the -SyncOnly switch:
Copy-SQLLogin -Source SQL2012 -Destination SQL2016 -SyncOnly
This does not add or drop the user, but merely syncs up the login's securables. For instances that are 2012 and newer, this cmdlet can be used to move the SIDs, passwords, and other items, making it a powerful addition to any DBA's tool belt.
Recently I found myself responsible for a large database migration and knew I would need to make sure that the new environment had the exact same setup as the previous systems and was set up and migrated as quickly and accurately as possible. As a result, I discovered dbatools (https://dbatools.io), a PowerShell module that provides some neat cmdlets for SQL administration. This module can be installed on any recent version of PowerShell (v4 and v5) from the PowerShell Gallery via Find-Module and Install-Module.
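The install itself is a one-liner from the Gallery (this assumes the PowerShellGet module, which ships with PowerShell v5 and can be added to v4):

```powershell
# Confirm the module exists in the Gallery, then install and load it
Find-Module -Name dbatools
Install-Module -Name dbatools -Scope CurrentUser
Import-Module dbatools
```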
The most helpful of the over 150 cmdlets are the following:
In some of the following posts I will share how some of these are used.
So back in January I decided to start blogging on tech. I was trying to keep it low impact and focused on the content; however, I still did not get anywhere.
My idea at the time was that:
As I don’t have a lot of time to curate and create this kind of material, I have adopted a few tactics. First, I am trying to do this with as little technical complexity as possible. I know that sounds weird coming from a person writing on technology, but I did not want to bother with WordPress or another blog solution (because of hosting limitations) and didn’t want to get sucked into finding the right theme or tweaking the PHP for my layout. Second, I didn’t want to be under the pressure to create “super blog posts” (you know, those blogs that have a million advertisers and never seem to provide answers). I wanted to be able to start writing (taking as little time as possible to write) in a clean and minimalist-looking site.
So as a result I found myself using a flat-file CMS called Stacey. Setup took me a little over 2 hours to get pages written, change some of the defaults, and write this post. Hope you enjoy (and I hope this isn’t the only post).
Unfortunately I still did not get anywhere with this, and found myself needing more robust tools, so I decided to give it a full effort, “money on the table” attempt. I set this blog up using a LEMP stack with WordPress as the CMS.
Overall, my drive to blog has been around for a while; it was just a matter of finding the right topics to talk about. As I started working on a new team at work (moved from desktop support to systems engineer), with a lot of new information and technology being thrown at me, I began taking lots of notes and found topics to write on. Eventually I figured I might as well just make my notes public in some sense and save someone else the time that I lost, provided they have the right Google-fu to find it.