Tuesday, December 19, 2017

PowerShell cmdlet to query .NET Framework version


Whenever I'm doing new installs of Exchange, I'm annoyed that I need to figure out which .NET Framework version is installed based on registry keys. You can see the documentation here:
 Just like the bad infomercial says: "There's got to be a better way!"

This has bothered me for some time and I considered writing a small script to query and display the version. Then I thought...it would be good to put it in the PowerShell Gallery so that I could easily access it from anywhere. Then I thought...I'm probably not the first person to have this idea.

In the PowerShell Gallery, Joakim Borger Svendsen has been kind enough to create this for us as a module named DotNetVersionLister. He's also kind enough to keep it updated. There have been seven releases in 2017.

On a computer that has Internet access, you can install directly from the PowerShell Gallery:
Install-Module DotNetVersionLister
You can also save it to file for distribution to computers that don't have Internet access by using:
Save-Module DotNetVersionLister -Path C:\Temp
After you install the module, you have a single cmdlet named Get-DotNetVersion. This cmdlet is quite flexible and allows you to query the .NET Framework version from the local computer and remote systems. For remote systems, it can query the information using remote registry access or PowerShell remoting. So, it's flexible depending on your configuration.


 
Query the .NET Framework version from the local computer:
Get-DotNetVersion -LocalHost
Query the .NET Framework version from a remote computer via remote registry:
Get-DotNetVersion -ComputerName Exch01
Query the .NET Framework version from a remote computer via PowerShell remoting:
Get-DotNetVersion -ComputerName Exch01 -PSRemoting
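If you want to check several servers at once (for example, all Exchange servers before a cumulative update), a quick sketch like this works; the server names are placeholders:
$servers = "Exch01","Exch02","Exch03"
foreach ($server in $servers) {
    #query each server via remote registry access
    Get-DotNetVersion -ComputerName $server
}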


Related links:
 

Tuesday, December 12, 2017

Docker Fails to Start on Reboot

For the last couple of weeks I've been working with Windows containers using Docker. I ran into a severe problem with the networking. I created a transparent network on the host and rebooted. After reboot, the docker service wouldn't start and had the following error in the event logs:
Log Name:      Application
Source:        docker
Event ID:      4
Task Category: None
Level:         Error
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to get endpoints from store: failed to decode endpoint IPv4 address () after json unmarshal: invalid CIDR address:
My first fix for this issue was to delete the following file:
  • C:\ProgramData\docker\network\files\local-kv.db
After this file was deleted, I was able to start the Docker service and it stayed running. That file was recreated when the docker service started and I was able to run docker commands.
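If you want to script the workaround, a minimal sketch (assuming the default ProgramData location shown above) looks like this:
#stop the service, remove the stale network database, then restart
Stop-Service docker
Remove-Item C:\ProgramData\docker\network\files\local-kv.db
Start-Service docker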
 
Running docker network ls showed me that a transparent network I had created just before the restart was broken. That network had been renamed to a long random string. At this point, I could delete the randomly named transparent network, but a new one came back after each restart of either the Docker service or the host.

The final fix to stop that recurrence was running:

.\WindowsContainerNetworking-LoggingAndCleanupAide.ps1 -Cleanup -ForceDeleteAllSwitches

That script is provided by Microsoft on GitHub here:
It's also worth noting that others are also having this issue:

Thursday, September 28, 2017

Customizing File Types for Common Attachment Types Filter

One of the simplest things you can do to prevent malware from spreading through email in Office 365 is blocking attachment types that are commonly used to send malware. This includes executables (.exe), scripts (.vbs), and macro-enabled Office documents (.docm).

The anti-malware policies in Office 365 have a setting, Common Attachment Types Filter, that is off by default. I definitely recommend that you turn it on.


When you turn it on, the following file types are blocked by default:
  • ace
  • ani
  • app
  • docm
  • exe
  • jar
  • reg
  • scr
  • vbe
  • vbs
Office 365 has an existing list of many other file types that you can add, but in the Exchange admin center, there is no method to add your own customized file types. For example, xlsm (Excel macros) is not in the list. You can add your own customized file types by using Windows PowerShell in Exchange Online.

To add your own customized file types to the malware policy, you can use the Set-MalwareFilterPolicy cmdlet. The general process is as follows:
  1. Retrieve the existing list of file types in an array.
  2. Add the new file types to the array.
  3. Set the file types for the malware policy by using the array.

$FileTypesAdd = Get-MalwareFilterPolicy -Identity Default | Select-Object -Expand FileTypes  
$FileTypesAdd += "xlsm","pptm"  
Set-MalwareFilterPolicy -Identity Default -EnableFileFilter $true -FileTypes $FileTypesAdd  

Note that when you run Set-MalwareFilterPolicy, you will probably get an error indicating that you need to run Enable-OrganizationCustomization. This creates additional objects in your Exchange Online tenant that allow additional customizations like this one.
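If you do get that error, the fix is a one-time cmdlet; a minimal sketch of the sequence, reusing the variables from above:
Enable-OrganizationCustomization
Set-MalwareFilterPolicy -Identity Default -EnableFileFilter $true -FileTypes $FileTypesAdd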


After you have added the file types to the policy, they are visible in the Exchange admin center. You can modify the list of file types in the Exchange admin center after this point, and doing so does not accidentally remove the customized file types you added.



Another way to accomplish the same goal is by using transport rules. Create a rule that applies if Any attachment's file extension matches, and then Redirect the message to hosted quarantine. However, this does not give the same options for notifications as using the malware policy. You could probably build the same functionality into the rule if you add enough actions, but I think it's easier to have one central location that controls all of the malware filtering rather than adding rules.
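For reference, a rough sketch of that transport rule created from PowerShell instead of the EAC. The rule name is a placeholder, and I'm assuming the -Quarantine parameter is the PowerShell equivalent of the hosted quarantine action in the EAC:
New-TransportRule -Name "Quarantine macro attachments" -AttachmentExtensionMatchesWords "xlsm","pptm" -Quarantine $true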



Additional resources:



Wednesday, September 27, 2017

Automating Let's Encrypt DNS Verification with GoDaddy DNS for Exchange

The script that I reference in this post can be downloaded here: GoDaddyDNSUpdatePublic.ps1.txt

I love the concept of using Let's Encrypt for free SSL/TLS certificates. However, the short 90-day lifetime of the certificates is designed for automated renewal. In this blog post I'm going to show the steps required to script the use of GoDaddy for DNS verification.

For the basic steps on how to get a SAN certificate by using Let's Encrypt and DNS verification by using Windows PowerShell, please see my previous blog post: Using Let's Encrypt Certificates for Exchange Server

Let's Encrypt requires you to create an identifier for each DNS name that you want to include on a certificate. You need to validate each identifier to prove ownership of the domain. When you are using DNS validation, you need to create a TXT record in DNS for each identifier.

Unfortunately (from an ease-of-use perspective), the validation for an identifier is only valid for 30 days. This means that when you go to renew a certificate, you also need to validate your identifiers again. Practically, this means you create new identifiers with the same DNS name, but a different alias, and validate them before generating the certificate.

Since DNS validation requires you to create a TXT record in DNS, you need a way to automate this. Fortunately, many DNS providers have a web API that you can use to programmatically access and create DNS records. However, be aware that there is no widespread standard for these APIs. GoDaddy has created DomainConnect and submitted it as a standard, but I've not seen wide acceptance of it.

For this blog, I'll be showing the use of GoDaddy's API mostly because it's the DNS provider that I use most often.

Authentication to create and manage DNS is done by using an API key and a secret. Both of these are included in the authorization header when you perform tasks. You get the API key and secret from the GoDaddy Developer Portal (developer.godaddy.com).

On the GoDaddy Developer portal:
  1. Sign in by using your GoDaddy account.
  2. Go to the Keys tab and click Create Key.
  3. Give your key a name.
  4. Copy the key and the secret to a file and click OK.
This creates a test key, which you cannot use for updating your DNS. However, you're now at a screen where you can create a production key. Save the details of that production key to a file for use in the script.
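Before wiring the production key into the script, it's worth confirming it works. A quick sketch (the key, secret, and domain are placeholders) that lists the existing TXT records in the zone:
 $key = 'YourBigLongKeyHere'
 $secret = 'YourBigLongPasswordHere'
 $headers = @{"Authorization" = 'sso-key ' + $key + ':' + $secret}
 #list current TXT records to confirm the key and secret are accepted
 Invoke-RestMethod -Uri "https://api.godaddy.com/v1/domains/yourdomain.com/records/TXT" -Headers $headers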

At this point, it is assumed that you've already created a vault and registered by using the ACMESharp cmdlets. The remaining steps are purely to automate the process.

 #define domain that records are being created in  
 #script only supports a single domain  
 $domain = 'yourdomain.com'  
   
 #For Pfx file  
 $pfxPass = "PasswordOfYourChoiceToSecurePfxFile"  
 $pfxPath = "C:\Scripts\cert.pfx"  
   
 #header used for accessing GoDaddy API  
 $key = 'YourBigLongKeyHere'  
 $secret = 'YourBigLongPasswordHere'  
 $headers = @{}  
 $headers["Authorization"] = 'sso-key ' + $key + ':' + $secret  
   
 #First identity will be the subject, all others in SAN   
 $identities = "mail.yourdomain.com","autodiscover.yourdomain.com" 

 $allAlias = @()

I started my script by defining some variables used later on:
  • $domain is your domain in GoDaddy where the DNS records are being created.
  • $pfxPass and $pfxPath are used when the certificate is exported to a PFX file before being imported into the Exchange Server.
  • $key and $secret are provided by GoDaddy when you obtain your production key for the API.
  • $headers is included as the authentication information later on when the call is made to the GoDaddy web API to create the TXT record.
  • $identities contains the DNS names that will be included in the certificate. My example has two names, but more names can be added.
  • $allAlias is defined as an empty array so that the aliases created later in the loop can be collected.

 Foreach ($ident in $identities) {  
   [string]$unique = (Get-Date).Ticks  
   $alias = ($ident.Replace(".",""))+$unique  
   $allAlias = $allAlias + $alias  
   $id=New-ACMEIdentifier -Dns $ident -alias $alias   
   If ($id.Status -eq "valid") {Continue}  
   $chResults = Complete-ACMEChallenge $alias -ChallengeType dns-01 -Handler manual  
   $chData = $chResults.Challenges | Where-Object {$_.Type -eq "dns-01"}  
   $value=$chData.Challenge.RecordValue  
   $name=$chData.Challenge.RecordName  
     
   #remove domain name from $name  
   $recordname = $name.Replace(".$domain","")  

I use a Foreach loop to create each identifier and verify it using DNS. I'm going to go through this Foreach loop in chunks.

Since I'm going to need to create multiple identifiers over time, I wanted a unique value ($unique) to ensure there wouldn't be conflicts in naming. I chose to use the ticks value from the current time. This has the added advantage that you could sort the identifiers based on when they were created.

Each identifier is referred to by its alias. I defined the alias for each identifier as the DNS name of the identifier with the dots removed and $unique added. After each alias is generated, it's added to the $allAlias array.

When the identifier is created for Let's Encrypt, it's placed in the $id variable. The $id variable is then used to check the status of the identifier. If an identifier with the same DNS name has already been created and verified, then the status is valid. If it's valid, we don't need to do any of the other work in the loop, and Continue tells the script to carry on with the next identifier. If the status is not valid (which is expected for new identifiers), then we process the rest of the loop.

The results from Complete-ACMEChallenge are placed in $chResults. The dns-01 entry from the Challenges property of those results is placed in $chData, where we can get the RecordValue and RecordName properties:
  • $name contains the name of the TXT record required for validation
  • $value contains the text string that needs to be included in the TXT record for validation
When the TXT record is created by using the GoDaddy API, the record name in the request must not contain the domain name. The $name variable is processed to remove the domain name (the domain name is replaced with nothing), and the result is placed in $recordname, which is the record name that will be submitted to the GoDaddy API.

 #create DNS record  
 $json = ConvertTo-Json @{data=$value}
 Invoke-WebRequest https://api.godaddy.com/v1/domains/$domain/records/TXT/$recordname -method put -headers $headers -Body $json -ContentType "application/json"  

UPDATE: July 2, 2018
A reader named Jason has reported that the json format used by GoDaddy has changed and that the above code snippet needs to be updated. I have not verified, but Jason says updating the middle line to the following fixes the problem:
  • $json = ConvertTo-Json @(@{data=$value})


After all the processing of data is done, creating the TXT record is fairly straightforward. The data for the request is put into a hash table that is converted into JSON. This hash table only requires the data, but you can include other information like the TTL.

Invoke-WebRequest accesses the GoDaddy web API with a PUT method to send the data, similar to how web forms send data back to a web site. The URL being accessed needs to contain your domain name and the type of record being created. In this case, I hard coded TXT as the record type in the URL, but $domain is used to insert the domain name. The $recordname variable is included in the URL because we only want to create that specific record. If the URL ends at TXT, then the API assumes that we're providing an array of all the TXT records and any other existing TXT records are wiped out. The $headers variable (defined earlier) provides the authentication information for the request.
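For example, if you also wanted to set a TTL, the hash table could look like this (a sketch only, using the array-wrapped format from the update above; I believe 600 seconds is GoDaddy's minimum TTL):
 #include a ttl value along with the record data
 $json = ConvertTo-Json @(@{data=$value; ttl=600})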

 #Submit the challenge to verify the DNS record  
 #30 second wait is to ensure that DNS record is available via query  
 Do {    
   Write-Host "Waiting 30 seconds before testing DNS record"  
   Start-Sleep -Seconds 30
   $dnslookup = Resolve-DnsName -Type TXT -Name $name  
   $dnsSuccess = $false  
   If ($dnslookup.strings -eq $value) {  
     Write-Host "DNS query was successful"  
     Submit-ACMEChallenge $alias -ChallengeType dns-01  
     $dnsSuccess = $true  
   } Else {  
     Write-Host "DNS query for $name failed"  
     Write-Host "The value is not $value"  
     Write-Host "We will try again"  
   }  
 } Until ($dnsSuccess -eq $true)   
   

I ran into an issue when verifying the TXT records. After creating the TXT record there can be a delay (a few seconds to a few minutes) until the record is accessible via the DNS servers. If Let's Encrypt tries to verify before it is accessible, then the validation fails and isn't recoverable; you need to create a new identifier. So, I added this code to verify the DNS record is accessible before telling Let's Encrypt to verify.

The Resolve-DnsName cmdlet looks for the TXT record we just created. If the lookup contains the expected value, then Submit-ACMEChallenge is used to tell Let's Encrypt to verify it, the variable $dnsSuccess is set to $true, and the loop ends. If it's not successful, we try again at 30-second intervals until it resolves successfully. Since adding this code to the script I haven't had any failures from Let's Encrypt. I think there may be some caching of failed lookups on the client running the script, which results in a two-minute delay, but better that than the Let's Encrypt lookup failing.
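If that client-side caching becomes a problem, one option (a sketch I haven't built into the script) is to query one of the domain's authoritative name servers directly rather than your local resolver:
 #find an authoritative name server for the domain, then query it directly
 $ns = (Resolve-DnsName -Type NS -Name $domain | Where-Object {$_.Type -eq "NS"} | Select-Object -First 1).NameHost
 $dnslookup = Resolve-DnsName -Type TXT -Name $name -Server $ns -DnsOnly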

 #Verify that the dns record was checked and valid  
 $chStatus = $null  
 Do {  
   Write-Host "Waiting 10 seconds before testing DNS validation"  
   Start-Sleep -Seconds 10    
   $chStatus = (Update-ACMEIdentifier $alias).Status   
   #$chStatus=((Update-ACMEIdentifier $alias -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}).Status  
   If ($chStatus -eq "valid") {Write-Host "$ident is verified"}  
   Else {Write-Host "DNS record not valid yet: $chStatus"}  
 } Until ($chStatus -eq "valid")  
   

After the DNS lookup for the TXT record is successful, the script uses Update-ACMEIdentifier to retrieve the status of the verification. There is a 10-second pause to allow Let's Encrypt to perform the verification. If the verification is still pending, then the loop repeats at 10-second intervals.

I have two methods for checking the status ($chStatus). The more complex version is one I saw in an example someone else provided, but I found that the simpler version seems to work fine. That said, I did see one person indicating that when the challenge type is not specified, a pending request is not properly retained in the local vault and fails. With my delay of 10 seconds, I'm not sure that's ever happened. Both versions do seem to work.

This is the end of the Foreach loop that processes the identifiers. Each identifier has now been verified and the $allAlias variable contains the alias used for each identifier. Next up is creating the certificate.

 #Create Certificate  
 New-ACMECertificate $allAlias[0] -generate -AlternativeIdentifierRefs $allAlias -Alias $allAlias[0]  
 Submit-ACMECertificate $allAlias[0]  
 Write-Host "Waiting 10 seconds for certificate to be generated"  
 Start-Sleep -Seconds 10  
 Update-ACMECertificate $allAlias[0]  
 Get-ACMECertificate $allAlias[0] -ExportPkcs12 $pfxPath -CertificatePassword $pfxPass -Overwrite  
   

The New-ACMECertificate cmdlet creates a certificate request. The first identifier ($allAlias[0]) becomes the subject of the certificate. Then the entire array $allAlias is submitted as alternative identifier references, which become the Subject Alternative Names (SAN) attribute in the certificate. When you list the alternative identifiers manually, you can skip the identifier used for the subject because it's automatically added to the SAN attribute, but it works fine when that identifier is specified too. The alias for the certificate is set to be the same as the alias for the identifier used as the subject.

The certificate request is submitted and the script waits for 10 seconds to ensure that Let's Encrypt has time to generate the certificate. Update-ACMECertificate retrieves the certificate information from Let's Encrypt and puts it in the local vault.

Get-ACMECertificate with the -ExportPkcs12 parameter is used to export the certificate to a PFX file that can be imported into Exchange Server. While a password is not required when exporting, the import of a PFX file without a password won't work properly. Everything will appear to be fine, but the certificate will behave as if it has no private key. The -Overwrite parameter is specified because it's assumed that this script will be automated and this allows the file generated to be overwritten each time.

 #Assign certificate using EMS commands  
 #If run as a scheduled task must be run at EMS prompt  
 $pfxSecurePass = ConvertTo-SecureString -String $pfxPass -AsPlainText -Force  
 $cert = Import-ExchangeCertificate -FileName $pfxPath -Password $pfxSecurePass -FriendlyName $allAlias[0] -PrivateKeyExportable $true  
 Enable-ExchangeCertificate $cert.Thumbprint -Services IIS,SMTP -Force  
   

Finally, we import the certificate into Exchange Server. The Import-ExchangeCertificate cmdlet won't accept a plain text password. So, the password is converted to a secure string that can be used and placed into $pfxSecurePass.

To make it easier to identify the certificate on the Exchange server, the alias is used as the friendly name. The private key is also marked as exportable because by default it is not.

After the certificate is imported, then Enable-ExchangeCertificate is used to specify that it should be applied to the IIS and SMTP services. The -Force parameter enables the cmdlet to complete without requiring user input. However, it will replace the default SMTP certificate with this option.

Because the Exchange cmdlets are used, you need to schedule this script so that it runs in the Exchange Management Shell and not a regular PowerShell prompt.
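As a sketch of what that scheduled task action might look like (the Exchange install path is the 2016 default, and C:\Scripts\RenewCert.ps1 is a placeholder for wherever you save this script):
 #load the Exchange Management Shell environment, then run the renewal script
 powershell.exe -NoProfile -Command ". 'C:\Program Files\Microsoft\Exchange Server\V15\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto; & 'C:\Scripts\RenewCert.ps1'"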

If you are going to use this certificate on multiple Exchange servers, then you should update this script to import and enable the certificate on all of the Exchange servers in the site. The current configuration assumes one Exchange server and it applies only on the local Exchange server where the script is run.
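A rough sketch of what that multi-server version could look like (not part of my script; it reuses the PFX variables from earlier and assumes all servers hold the Mailbox role):
 #read the PFX once, then import and enable it on every mailbox server
 $fileData = [System.IO.File]::ReadAllBytes($pfxPath)
 foreach ($server in (Get-ExchangeServer | Where-Object {$_.ServerRole -like "*Mailbox*"})) {
   $cert = Import-ExchangeCertificate -Server $server.Name -FileData $fileData -Password $pfxSecurePass -FriendlyName $allAlias[0] -PrivateKeyExportable $true
   Enable-ExchangeCertificate -Thumbprint $cert.Thumbprint -Server $server.Name -Services IIS,SMTP -Force
 }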

Is it worth it?

In reality, the complexity of keeping this up and running is probably not worth it for Exchange Server. Exchange Server is a mission-critical part of your organization. Instead, I'd go for a low-cost certificate provider and get a cert that lasts for 3 years. A SAN certificate from NameCheap.com costs about $35 US per year. That's far less than the value of your time getting this up and going.

That said, this was a fun exercise for me and I'll probably use this for test environments. Now that I have the script it's pretty easy for me to use it. 

Alternative for IIS

As part of developing this, I worked out a method to supply the certificate only to IIS. I don't think that it's very useful for Exchange Server since we also want it to apply to SMTP to allow secured SMTP communication. However, I'm including it in case anyone is interested.

 #Import certificate from pfx to server  
 #script must be running as administrator to perform this action  
 $cert=Import-PfxCertificate -FilePath $pfxPath -CertStoreLocation Cert:\LocalMachine\My -Password $pfxSecurePass -Exportable  
   
 #Assign certificate to default web site  
 Import-Module WebAdministration #required for IIS: PSDrive  
 Set-Location IIS:\SSLBindings  
 Get-ChildItem | Where-Object { $_.sites.Value -eq "Default Web Site" -and $_.Port -eq 443 -and $_.IPAddress.IPAddressToString -eq "0.0.0.0" } | Remove-Item  
 $cert | New-Item .\0.0.0.0:443  
   

Importing the certificate is done with Import-PfxCertificate. Again, we specify a path and password, and mark it as exportable.

We manually import the WebAdministration module to get access to the IIS: PSDrive. In Windows Server 2012 and later, modules autoload when you use their cmdlets, but not when you use their PSDrives.

SSL bindings are located in IIS:\SSLBindings, so the location is set there. In that location, Get-ChildItem gets a list of the SSL bindings, and the binding for all IP addresses on port 443 of the Default Web Site is deleted.

The certificate information is piped to New-Item with .\0.0.0.0:443. This creates a new binding in the current directory for that certificate to 0.0.0.0 on port 443.

References:

Wednesday, September 20, 2017

ACMESharp and Visual Studio Code Error

I lost a fair bit of time troubleshooting an error that turned out to be an odd compatibility issue between the ACMESharp module and Visual Studio Code. Hopefully this saves someone else the time.

In Visual Studio Code, when running Submit-ACMECertificate, I got this error:
Submit-ACMECertificate : Error resolving type specified in JSON 'ACMESharp.PKI.CsrDetails, ACMESharp'. Path '$type', line 2, position 48.
At line:1 char:1
+ Submit-ACMECertificate nosub
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Submit-ACMECertificate], JsonSerializationException
    + FullyQualifiedErrorId : Newtonsoft.Json.JsonSerializationException,ACMESharp.POSH.SubmitCertificate



I read a bunch of stuff about Newtonsoft.Json being installed in the Global Assembly Cache, but it wasn't on my computer.

I tested the same script on my desktop instead of the laptop. Nope, same error.

It turned out that the command worked just fine at a normal PowerShell prompt. So, my best guess is that Visual Studio Code is doing something different with that Json conversion that PowerShell by itself doesn't.



I've run into oddities in the past when using PowerShell ISE for some things. However, that was minor stuff like text colors not updating. This is the first time I've seen something this significant depend on the dev tool.

Thursday, September 14, 2017

Getting Detailed Error Messages for Mailbox Moves

In Office 365 or Exchange Server 2013/2016, you can use the administration console to view information about migration batches. To find out information about failing moves, you can view the details of the migration batch and then view the report for individual mailboxes. When you view the report for a mailbox, a text file is downloaded for viewing.

The report provides detailed information about how much data has been downloaded. Also, if there are errors, they are contained in the report. Unfortunately sometimes the errors are pretty generic. For example, one error I got recently was:
Transient error TimeoutErrorTransientException has occurred. The system will retry (200/1300).
Instructions on how to review the report:
Since the error was happening often, we needed to get more information. Fortunately that detail is available, but not in that report. Instead, you need to use Windows PowerShell to view the move request statistics. If you are moving the mailbox to Office 365, you need to use a PowerShell prompt connected to Exchange Online.
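If you need the connection steps, this was the standard remoting method at the time of writing (a sketch; run from a regular PowerShell prompt):
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session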

$stats = Get-MoveRequest user@domain.com | Get-MoveRequestStatistics -IncludeReport
$moveErr = $stats.Report.Entries | Where-Object {$_.Type -eq "Error"}

This leaves you with an array named $moveErr that contains all of the errors. You can view each error by specifying an individual array item.

$moveErr[1]

In my case, I got this more detailed information in the Failure property:

TimeoutErrorTransientException: The call to 'https://hybridserver/EWS/mrsproxy.svc
hybridserver (15.1.1034.26 caps:07FD6FFFBF5FFFFFCB07FFFF)' timed out. Error
details: The request channel timed out while waiting for a reply after 00:00:30.9527011.
Increase the timeout value passed to the call to Request or increase the SendTimeout
value on the Binding. The time allotted to this operation may have been a portion of a
longer timeout. --> The request operation did not complete within the allotted timeout of
00:00:50. The time allotted to this operation may have been a portion of a longer
timeout. --> The request channel timed out while waiting for a reply after
00:00:30.9527011. Increase the timeout value passed to the call to Request or increase
the SendTimeout value on the Binding. The time allotted to this operation may have been a
portion of a longer timeout. --> The request operation did not complete within the
allotted timeout of 00:00:50. The time allotted to this operation may have been a portion
of a longer timeout.

That error information gave me enough information to start looking at timeout settings for the MRS proxy service.

Tuesday, September 12, 2017

Using Let's Encrypt Certificates for Exchange Server

Have you ever fantasized about using free SSL/TLS certificates for Exchange Server? If so, then this blog post is for you.

I’ve always hated the cost associated with SSL/TLS certificates. For what seemed like a pretty basic service some of the certificate authorities (CAs) were charging hundreds or thousands of dollars. You could always set up your own CA, but that didn’t work well with random clients on the Internet because they won’t trust certificates generated by your CA.

At the end of 2015, there was a game changing development. Let’s Encrypt started giving away SSL/TLS certificates for free. At the time, the certificates were only for a single name. So, without SAN support, not good for Exchange Server. However, now there is support for SAN/UCC certificates. And, in 2018 they are planning to support wildcard certificates.

What’s the Catch?

The certificates are free. There is no catch there. But, they do have a short lifetime of 90 days. The short lifetime is to ensure that compromised certificates are not available for an extended period of time. Because of the short lifetime, it is strongly recommended that you automate certificate renewal.

Note: This blog post only shows the manual steps for obtaining a certificate. I'll put up another one showing automation.

The process for generating and renewing a certificate is a bit complex. But, once the initial process is defined, it’s pretty easy to work with.

Unlike a typical CA, Let’s Encrypt does not provide a web site to manage your certificate requests. Instead you need client software that communicates with the Let’s Encrypt servers. Since I already work with Windows PowerShell on a regular basis, I like the ACMESharp module that provides PowerShell cmdlets for working with Let’s Encrypt.

Installing the ACMESharp Module

The ACMESharp module is available in the PowerShell Gallery. To download and install modules from the PowerShell Gallery, you use the Install-Module cmdlet that is part of the PowerShellGet module. The PowerShellGet module is included as part of Windows Management Framework 5 (part of Windows 10 and Windows Server 2016).

If you are not using Windows 10 or Windows Server 2016, you can download and install WMF 5 or a standalone MSI installer for PowerShellGet here:

After you have PowerShellGet installed, run the command:
Install-Module ACMESharp

When you run this command, you might be prompted to install NuGet. If you are prompted, say yes to install it. NuGet provides the functionality to obtain packages from the PowerShell Gallery. The PowerShellGet cmdlets use NuGet.

You might also be prompted that the repository PSGallery is untrusted. This is the PowerShell Gallery that you want to download files from. So, say yes to trust PSGallery.
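If you'd rather not be prompted about PSGallery each time, you can optionally mark it as trusted first:
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted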



Connecting to Let’s Encrypt


The first step after installing the ACMESharp module is creating a local data store for the ACMESharp client. The data store is used by the client for secure storage of requests and keys.
To create the local data store run the following command:
Initialize-ACMEVault

Then, to create an account with Let’s Encrypt, run the following command:
New-ACMERegistration -Contacts mailto:youremail@yourdomain.com -AcceptTos

Validating DNS Names

Let's Encrypt requires you to verify ownership of each DNS name that you want to include in a certificate. Each DNS name is referred to as an identifier. For a SAN certificate, you will generate two or more identifiers and then specify them when you create the certificate.

You can validate an identifier in three ways:
  • Dns - You need to create a TXT record in DNS that is verified by Let’s Encrypt.
  • HTTP - You need to place a file on your web server that is verified by Let’s Encrypt.
  • TLS-SNI - You need to place an SSL/TLS certificate on your web server that is verified by Let’s Encrypt.
In the projects I work on, I typically do not have access to the main company web server/site, but do have access to create DNS records. So, I use DNS validation.

To create a new identifier:
New-ACMEIdentifier -dns server.domain.com -alias idAlias
You should include an alias each time you create an identifier. If you don't create an alias, there is no easy way to refer to the identifier in later steps. If you forget to create an alias, just create another identifier with the same DNS name and include an alias.

In my example, the DNS name has four parts because I was testing by using a subdomain. In most cases, the DNS name will have only three parts.


Next you need to specify how the identifier will be verified. When you do this the cmdlet reports back the proof you need to provide. For HTTP verification, it identifies the file name. For DNS verification, it identifies the TXT record that needs to be created.

The command to start verifying the identifier is:
Complete-ACMEChallenge idAlias -ChallengeType dns-01 -Handler manual
Use the alias of the identifier to specify which identifier is being verified.

The manual handler indicates that you will fulfill the challenge yourself. There are other automated handlers that automatically create the response for the HTTP-based challenges; those handlers are specific to different web servers.

The challenge type dns-01 identifies that you will create a TXT record in DNS. Note that dns-01 must be in lowercase. 


After you have created the DNS record that corresponds to the challenge, then you submit the challenge. When you submit the challenge, Let’s Encrypt verifies it.

To submit a challenge:
Submit-ACMEChallenge idAlias -ChallengeType dns-01

When you submit the challenge, you need to specify the alias of the identifier and the challenge type.
The validation may or may not complete immediately. You can verify the status of the validation with the following command:
(Update-ACMEIdentifier idAlias -ChallengeType dns-01).challenges
The Update-ACMEIdentifier cmdlet queries the status of the identifier from the Let’s Encrypt servers. The challenges property contains the challenges generated when the identifier was created. The status for the dns-01 challenge will change to valid when validation is complete.



Creating a Certificate

After you have validated all of the identifiers that will be included in your certificate, you can generate the certificate request. When you generate the certificate request, you need to specify the identifiers to include and a new alias for the certificate.

To generate the certificate request:
New-ACMECertificate idAlias -generate -AlternativeIdentifierRefs idAlias1,idAlias2 -Alias certAlias
The initial identifier alias that you provide becomes the subject of the certificate. This name is also added to the subject alternative names automatically, so you don't need to repeat it as an alternative identifier.
The -AlternativeIdentifierRefs parameter is used to identify additional identifiers that are included in the certificate. All identifiers used here need to be validated.



To submit the certificate request:
Submit-ACMECertificate certAlias
The certificate alias used here is the alias that was set when running the New-ACMECertificate cmdlet.


After submitting, notice that the IssuerSerialNumber is blank. This is how you can identify that the results from the request you submitted have not been retrieved. You need to update the data in the local vault before you can export it:
Update-ACMECertificate certAlias

To export the completed certificate as a pfx file that includes the certificate and private key:
Get-ACMECertificate certAlias -ExportPkcs12 filename.pfx -CertificatePassword "password"
The -ExportPkcs12 parameter can be given a filename or a full path. If there are spaces in either one, you need to put quotes around it.


When I first ran the Get-ACMECertificate cmdlet to export the pfx file, I got an error:
Issuer certificate hasn’t been resolved.

While I found some references to this being an issue with an intermediate certificate not being installed, my issue was simpler. I had forgotten to update the certificate information in the local vault with Update-ACMECertificate before trying to export to a PFX file.

However, if you have this error and you have already used Update-ACMECertificate then it may be caused by the missing intermediate certificate. You want the X3 intermediate certificate for the Let's Encrypt CA.


If necessary, you can get the X3 intermediate certificate here:
After you have the pfx file, you can import it and assign it to Exchange Server by using the normal Exchange management cmdlets.
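As a quick sketch of that final step (the file name and password are placeholders; this mirrors the automated version I mention above):
$pass = ConvertTo-SecureString -String "password" -AsPlainText -Force
$cert = Import-ExchangeCertificate -FileName C:\Scripts\filename.pfx -Password $pass -PrivateKeyExportable $true
Enable-ExchangeCertificate $cert.Thumbprint -Services IIS,SMTP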

Friday, September 1, 2017

Remove Proxy Address from Office 365 User

I ran into an issue today where I needed to remove a proxy address from a cloud-based administrative user in Office 365 that was unlicensed. This user had a proxy address that was conflicting with a proxy address that was being synced with Azure AD Connect for another user account.

The cloud user was originally created as byron@OnPremDomain.com and renamed to be byron@CloudDom.onmicrosoft.com. When this was done, the original address (byron@OnPremDomain.com) was kept as a proxy address. You could view both addresses when using Get-MsolUser. This address caused a synchronization error for an on-premises user named byron@OnPremDomain.com.

To resolve this error, I needed to remove byron@OnPremDomain.com from the list of proxy addresses. However, you can't do this with Set-MsolUser. The mechanism for managing proxy addresses in Office 365 is Set-Mailbox, but without a license, there is no mailbox for the user account.

The solution is to add a license temporarily:
  1. Add a license for byron@CloudDom.onmicrosoft.com which creates a mailbox.
  2. Use Set-Mailbox -EmailAddresses to remove the incorrect proxy address (see the sketch below).
  3. Verify Get-MsolUser shows only the correct proxy addresses.
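A sketch of step 2, using the example addresses from this post:
Set-Mailbox -Identity byron@CloudDom.onmicrosoft.com -EmailAddresses @{remove="smtp:byron@OnPremDomain.com"}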


Monday, August 28, 2017

AD Synchronization Error When Adding Exchange 2016

When I implement hybrid mode for an organization, we typically implement Exchange Server 2016 to be the long term hybrid server. This provides the most recent Exchange Server version for management.

Today when I was installing Exchange Server 2016 into an Exchange 2010 organization we started getting directory synchronization errors for some system mailboxes (four of them). This occurred after I ran /PrepareAD, but before the remainder of Exchange Server 2016 was installed.

SystemMailbox{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}@XXXXX.com

Unable to update this object in Azure Active Directory, because the attribute [Username], is not valid. Update the value in your local directory services.
We only noticed these errors because Office 365 sent an Identity Synchronization Error Report that listed them.

When I looked in Synchronization Service Manager to see the details in Azure AD Connect, the following errors were there in the export to the Office 365 tenant:
I did some searching around and found a few articles that talked about manually updating attributes for these objects. However, when I looked at these objects the data didn't match what those articles were talking about.

Eventually I found a posting in an MS discussion forum that said to just wait until the install was finished. Apparently /PrepareAD creates the objects, but their configuration is not complete until the rest of Exchange Server 2016 is installed.

Sure enough, after the Exchange Server 2016 install was finished the synchronization errors went away.

Posting in MS discussion forum for reference (see answer from Ytsejamer1):
I did also refresh the connector schema as suggested in this post. However, I did that after running /PrepareAD and before Exchange Server 2016 was installed. I can't say for sure whether this step is required.




 

Monday, August 21, 2017

Updating SIP Addresses in Skype for Business

When you migrate to Office 365, the preferred configuration is to have user email addresses and UPNs the same. Having a single identity makes it easier for users to understand.

If you are implementing Skype for Business in Office 365, it will take the UPN of the user as the Skype address. Again, keeping a single identity is good.

However, if you have an on-premises implementation of Skype for Business, then the Skype identity is configured in the attribute msRTCSIP-PrimaryUserAddress.  This attribute contains a SIP (session initiation protocol) address that looks like an email address but with “sip:” at the start. For example: “sip:user@contoso.com”.

The SIP addresses defined in your on-premises Skype for Business may or may not match the email addresses of the users. You need to verify whether the addresses match. If the SIP address does not match the email address, it is easy to change.

On the Skype Server run the following PowerShell command:
Set-CsUser -Identity userUPN -SipAddress "sip:emailaddress"
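If you have many mismatched users, a rough sketch of doing this in bulk (not from this post; it assumes the WindowsEmailAddress attribute is populated for your Skype users):
#find users whose SIP address doesn't match their email address and update them
Get-CsUser | Where-Object { $_.SipAddress -ne ("sip:" + $_.WindowsEmailAddress) } |
    ForEach-Object { Set-CsUser -Identity $_.Identity -SipAddress ("sip:" + $_.WindowsEmailAddress) }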

When you update the SIP address for UserA there are two considerations:
  • UserA will be signed out of Skype within about 1 minute. After UserA is signed out, they need to sign in again by using their new SIP address.
  • Any users with UserA as a contact are not updated immediately. These users need to sign out and sign in again for the contact to be updated and to properly view presence information. However, this only works if the address book has been updated.
Address book updates are the big gotcha in SIP address updates. By default, Skype for Business clients use a cached address book that only updates once per day. Even if you force address book updates on the server, the new address information is not retrieved until the next day unless you manually delete the cached address book files from the client to force a reload.
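For reference, forcing that server-side address book update is a single cmdlet, but it doesn't help clients that have already downloaded the day's cached copy:
Update-CsAddressBook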

To address this problem, you can switch clients to use an online address book. Clients using the online address book query the Skype for Business server each time they search the address book. When using the online address book, changes show up within a few minutes.

Address book lookups are controlled by client policies. By default, there is one policy named Global. You can update this policy to use online address book resolution with the following command:
Set-CsClientPolicy -Identity Global -AddressBookAvailability WebSearchOnly

Alternatively, you can create a new policy and assign that to users incrementally:
New-CsClientPolicy -Identity OnlyWebAB -AddressBookAvailability WebSearchOnly
Grant-CsClientPolicy -Identity UserName -PolicyName OnlyWebAB

The new client policy takes effect the next time the user signs in. To increase the likelihood that users have the new policy, you should configure it and then wait until the next day before making large-scale changes.

Some useful links: