Saturday, January 27, 2018

Outlook on the web offline access failure

There is a bug in Exchange Server 2016 CU6, CU7, and CU8 that prevents clients from using offline access for Outlook on the web. It was working properly in CU5.

In CU5, when you select Turn on offline access in Options, another pane opens and asks whether you have exclusive use of the computer. In CU8 (and also CU6 and CU7), that pane appears very briefly and then disappears; the text flickers and is removed right away.



I tested this with Internet Explorer, Edge, Firefox, and Chrome with consistent results. I also tested this using both Windows 10 and Windows Server 2016 as the client operating systems.

After trying everything I could think of, I posted this issue to the Microsoft support forums, where it was confirmed as a bug that will be fixed in a future update.

Wednesday, January 24, 2018

Install-AntispamAgents.ps1 Fails for Exchange 2016 CU7 and CU8

Not many organizations enable the antispam agents on their Exchange servers, since most pay for an additional service to do spam filtering. However, if you attempt to enable them on Exchange Server 2016 CU7 or CU8, there is an error in the script that you need to run. As of right now, CU8 is the latest update available, so this issue may persist in later updates.

To enable the antispam agents in Exchange Server 2016, you run the Install-AntispamAgents.ps1 script located in C:\Program Files\Microsoft\Exchange Server\V15\Scripts. However, in CU7 and CU8, the script fails with an error. There are several screens of error information, but the key part is:
A parameter cannot be found that matches parameter name 'EscalationTeam'.
If you open the script and go to line 50, you'll see the following at the end of the line:
-EscalationTeam "AntiSpam";
If you delete that text from the line, the script works properly.
Please note that this script is digitally signed, and editing it invalidates the signature. So, to run the edited script, ensure that your execution policy is not set to AllSigned. It runs fine with the execution policy set to either Unrestricted or RemoteSigned.
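If you want to verify the execution policy before running the edited script, here's a minimal sketch (it assumes the default install path, and should be run from the Exchange Management Shell):

 Get-ExecutionPolicy

 #RemoteSigned allows locally stored scripts to run even after editing
 Set-ExecutionPolicy RemoteSigned

 & "C:\Program Files\Microsoft\Exchange Server\V15\Scripts\Install-AntispamAgents.ps1"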

Monday, January 22, 2018

All Certificate Names MUST be in Subject Alternative Names

This has been popping up for a while, but it's worth pointing out again. When you get a SAN/UCC certificate, the DNS name that you use for the subject (common name) also needs to be in the subject alternative names attribute.

For example:
  • Subject:
    • webmail.contoso.com
  • Subject alternative names:
    • webmail.contoso.com
    • exch1.contoso.com
    • exch2.contoso.com
    • autodiscover.contoso.com
Most public certificate authorities (CAs) ensure that the subject is also added as a subject alternative name. However, some might not, so you should watch for it.

It's important to note that the Microsoft CA does not automatically add the subject to the list of subject alternative names. So, make sure that you do it as part of your certificate request for your internal CA.
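For example, here's a sketch of a request generated from the Exchange Management Shell that explicitly includes the subject name in the DomainName list (the names and file path are just examples):

 #webmail.contoso.com appears in both -SubjectName and -DomainName
 $req = New-ExchangeCertificate -GenerateRequest -SubjectName "CN=webmail.contoso.com" -DomainName webmail.contoso.com,exch1.contoso.com,exch2.contoso.com,autodiscover.contoso.com -PrivateKeyExportable $true
 Set-Content -Path C:\Temp\webmail.req -Value $req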

The reason you need to do this is how web browsers process subject alternative names. If a list of subject alternative names is present, browsers ignore the subject entirely, and all major browsers now enforce this. A few years ago, browsers would process both the subject and the subject alternative names.

Here's the quote from RFC 2818 from May 2000 (yes, that long ago):
If a subjectAltName extension of type dNSName is present, that MUST be used as the identity. Otherwise, the (most specific) Common Name field in the Subject field of the certificate MUST be used. Although the use of the Common Name is existing practice, it is deprecated and Certification Authorities are encouraged to use the dNSName instead.

As a slightly interesting side note: the subject field is optional when SANs are defined. However, some older software might still require the subject to be defined.

If you are configuring certificates in Exchange Server, be aware that the Exchange admin center (EAC) does not properly differentiate between subject and subject alternative names. For example, with a certificate where webmail.adatum.com is in the subject and not in the subject alternative names, EAC (Exchange 2016 CU8) includes the subject in the Subject Alternative Names box, which can be misleading.
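If you want to see what's actually in the SAN attribute, here's a quick sketch that reads the extension directly from the local certificate store (adjust the subject filter to match your certificate):

 $cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.Subject -like "*webmail*"}

 #Format($true) prints the extension in a readable multi-line form
 $cert.Extensions | Where-Object {$_.Oid.FriendlyName -eq "Subject Alternative Name"} | ForEach-Object {$_.Format($true)}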



Tuesday, December 19, 2017

PowerShell cmdlet to query .NET Framework version


Whenever I'm doing new installs of Exchange, I'm annoyed that I need to figure out the installed .NET Framework version based on registry keys (the registry values are documented on Microsoft Docs). Just like the bad infomercial says: "There's got to be a better way!"
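For reference, the manual approach looks something like this sketch: read the Release value from the registry and compare it against Microsoft's documented table of version numbers.

 #returns a number such as 460798; you still have to map it to a version yourself
 (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release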

This has bothered me for some time, and I considered writing a small script to query and display the version. Then I thought...that would be good if I put it in the PowerShell Gallery so that I could easily access it from anywhere. Then I thought...I'm probably not the first person to have this idea.

In the PowerShell Gallery, Joakim Borger Svendsen has been kind enough to create this for us as a module named DotNetVersionLister. He's also kind enough to keep it updated; there have been seven releases in 2017.

On a computer that has Internet access, you can install directly from the PowerShell Gallery:
Install-Module DotNetVersionLister
You can also save it to file for distribution to computers that don't have Internet access by using:
Save-Module DotNetVersionLister -Path C:\Temp
After you install the module, you have a single cmdlet named Get-DotNetVersion. This cmdlet is quite flexible and allows you to query the .NET Framework version from the local computer and remote systems. For remote systems, it can query the information using remote registry access or PowerShell remoting. So, it's flexible depending on your configuration.


Query the .NET Framework version from the local computer:
Get-DotNetVersion -LocalHost
Query the .NET Framework version from a remote computer via remote registry:
Get-DotNetVersion -ComputerName Exch01
Query the .NET Framework version from a remote computer via PowerShell remoting:
Get-DotNetVersion -ComputerName Exch01 -PSRemoting



Tuesday, December 12, 2017

Docker Fails to Start on Reboot

For the last couple of weeks I've been working with Windows containers using Docker. I ran into a severe problem with the networking. I created a transparent network on the host and rebooted. After reboot, the docker service wouldn't start and had the following error in the event logs:
Log Name:      Application
Source:        docker
Event ID:      4
Task Category: None
Level:         Error
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to get endpoints from store: failed to decode endpoint IPv4 address () after json unmarshal: invalid CIDR address:
My first fix for this issue was to delete the following file:
  • C:\ProgramData\docker\network\files\local-kv.db
After this file was deleted, I was able to start the Docker service and it stayed running. That file was recreated when the docker service started and I was able to run docker commands.
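If you want to script that first fix, this minimal sketch stops the service, removes the database file, and starts the service again:

 Stop-Service docker

 #the file is recreated when the service starts
 Remove-Item 'C:\ProgramData\docker\network\files\local-kv.db'
 Start-Service docker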
Running docker network ls showed me that a transparent network I had created just before the restart was broken. That network had been renamed to a long random string. At this point, I could delete the randomly named transparent network, but a new one came back after each restart of either the Docker service or the host.

The final fix to stop that recurrence was running:

.\WindowsContainerNetworking-LoggingAndCleanupAide.ps1 -Cleanup -ForceDeleteAllSwitches

That script is provided by Microsoft on GitHub. It's also worth noting that others have reported this same issue.

Thursday, September 28, 2017

Customizing File Types for Common Attachment Types Filter

One of the simplest things you can do to prevent malware from spreading through email in Office 365 is blocking attachment types that are commonly used to send malware. This includes executables (.exe), scripts (.vbs), and macro-enabled Office documents (.docm).

The anti-malware policies in Office 365 have a setting, Common Attachment Types Filter, that is off by default. I definitely recommend that you turn it on.


When you turn it on, the following file types are blocked by default:
  • ace
  • ani
  • app
  • docm
  • exe
  • jar
  • reg
  • scr
  • vbe
  • vbs
Office 365 has an existing list of many other file types that you can add, but in Exchange admin center, there is no method to add your own customized file types. For example, xlsm (Excel macros) is not in the list. You can add your own customized file types by using Windows PowerShell in Exchange Online.

To add your own customized file types to the malware policy, you can use the Set-MalwareFilterPolicy cmdlet. The general process is as follows:
  1. Retrieve the existing list of file types in an array.
  2. Add the new file types to the array.
  3. Set the file types for the malware policy by using the array.

 $FileTypesAdd = Get-MalwareFilterPolicy -Identity Default | Select-Object -Expand FileTypes  
   
 $FileTypesAdd += "xlsm","pptm"  
   
 Set-MalwareFilterPolicy -Identity Default -EnableFileFilter $true -FileTypes $FileTypesAdd  

Note that when you run Set-MalwareFilterPolicy, you will probably get an error indicating that you need to run Enable-OrganizationCustomization. This creates additional objects in your Exchange Online tenant that allow additional customizations like this one.
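If you hit that error, the fix is a single cmdlet run once in your Exchange Online PowerShell session:

 Enable-OrganizationCustomization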


After you have added the file types to the policy, they are visible in Exchange admin center. You can modify the list of file types in Exchange admin center after this point without accidentally removing the customized file types you added.



Another way to accomplish the same goal is by using transport rules. Create a rule that applies if Any attachment's file extension matches, and then Redirect the message to hosted quarantine, as in the sketch below. However, this does not give the same notification options as the malware policy. You could probably build the same functionality into the rule by adding enough actions, but I think it's easier to have one central location that controls all of the malware handling rather than adding rules.
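Here's a sketch of that transport rule created with PowerShell (the rule name and extensions are just examples):

 #quarantine messages with macro-enabled attachments
 New-TransportRule -Name "Quarantine macro attachments" -AttachmentExtensionMatchesWords "xlsm","pptm" -Quarantine $true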






Wednesday, September 27, 2017

Automating Let's Encrypt DNS Verification with GoDaddy DNS for Exchange

The script that I reference in this post can be downloaded here: GoDaddyDNSUpdatePublic.ps1.txt

I love the concept of using Let's Encrypt for free SSL/TLS certificates. However, the short 90-day lifetime of the certificates is designed for automated renewal. In this blog post I'm going to show the steps required to script the use of GoDaddy for DNS verification.

For the basic steps on how to get a SAN certificate by using Let's Encrypt and DNS verification by using Windows PowerShell, please see my previous blog post: Using Let's Encrypt Certificates for Exchange Server

Let's Encrypt requires you to create an identifier for each DNS name that you want to include on a certificate. You need to validate each identifier to prove ownership of the domain. When you are using DNS validation, you need to create a TXT record in DNS for each identifier.

Unfortunately (from an ease of use perspective), the validation for an identifier is only valid for 30 days. This means that when you go to renew a certificate, you also need to validate your identifiers again. Practically, you create new identifiers with the same DNS name but a different alias, and validate them before generating the certificate.

Since DNS validation requires you to create a TXT record in DNS, you need a way to automate this. Fortunately, many DNS providers have a web API that you can use to programmatically access and create DNS records. However, be aware that there is no widespread standard for this API. GoDaddy has created DomainConnect and submitted it as a standard, but I've not seen wide acceptance of it.

For this blog, I'll be showing the use of GoDaddy's API mostly because it's the DNS provider that I use most often.

Authentication to create and manage DNS records is done by using an API key and a secret. Both of these are included in the Authorization header when you perform tasks. You get the API key and secret from the GoDaddy Developer Portal (developer.godaddy.com).

On the GoDaddy Developer portal:
  1. Sign in by using your GoDaddy account.
  2. Go to the Keys tab and click Create Key.
  3. Give your key a name.
  4. Copy the key and the secret to a file and click OK.
This creates a test key, which you cannot use for updating your DNS. However, you're now at a screen where you can create a production key. Save the details of that production key to a file for use in the script.
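To confirm that the production key works before going further, here's a sketch of a simple test call that lists the domains in your account:

 #same header format the main script uses
 $headers = @{}
 $headers["Authorization"] = 'sso-key ' + $key + ':' + $secret
 Invoke-RestMethod -Uri 'https://api.godaddy.com/v1/domains' -Headers $headers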

At this point, it is assumed that you've already created a vault and registered by using the ACMESharp cmdlets. The remaining steps are purely to automate the process.

 #define domain that records are being created in  
 #script only supports a single domain  
 $domain = 'yourdomain.com'  
   
 #For Pfx file  
 $pfxPass = "PasswordOfYourChoiceToSecurePfxFile"  
 $pfxPath = "C:\Scripts\cert.pfx"  
   
 #header used for accessing GoDaddy API  
 $key = 'YourBigLongKeyHere'  
 $secret = 'YourBigLongPasswordHere'  
 $headers = @{}  
 $headers["Authorization"] = 'sso-key ' + $key + ':' + $secret  
   
 #First identity will be the subject, all others in SAN   
 $identities = "mail.yourdomain.com","autodiscover.yourdomain.com" 

 $allAlias = @()

I started my script by defining some variables used later on:
  • $domain is your domain in GoDaddy where the DNS records are being created.
  • $pfxPass and $pfxPath are used when the certificate is exported to a PFX file before being imported into the Exchange server.
  • $key and $secret are provided by GoDaddy when you obtain your production key for the API.
  • $headers is included as the authentication information later on, when the call is made to the GoDaddy web API to create the TXT record.
  • $identities contains the DNS names that will be included in the certificate. My example has two names, but more names can be added.
  • $allAlias is defined as an array so that later functionality adding aliases can be processed.

 Foreach ($ident in $identities) {  
   [string]$unique = (Get-Date).Ticks  
   $alias = ($ident.Replace(".",""))+$unique  
   $allAlias = $allAlias + $alias  
   $id=New-ACMEIdentifier -Dns $ident -alias $alias   
   If ($id.Status -eq "valid") {Continue}  
   $chResults = Complete-ACMEChallenge $alias -ChallengeType dns-01 -Handler manual  
   $chData = $chResults.Challenges | Where-Object {$_.Type -eq "dns-01"}  
   $value=$chData.Challenge.RecordValue  
   $name=$chData.Challenge.RecordName  
     
   #remove domain name from $name  
   $recordname = $name.Replace(".$domain","")  

I use a Foreach loop to create each identity and verify it using DNS. I'm going to go through this Foreach loop in chunks.

Since I'm going to need to create multiple identities over time, I wanted a unique identifier ($unique) to ensure there wouldn't be naming conflicts. I chose to use the ticks value from the current time. This has the added advantage that you can sort the aliases based on when they were created.

Each identity is referred to by its alias. I defined the alias for each identity as the DNS name of the identity with the dots removed and $unique added. After each alias is generated, it's added to the $allAlias array.

When the identifier is created for Let's Encrypt, it's placed in the $id variable, which is then used to check the status of the identifier. If an identifier with the same DNS name has already been created and verified, then the status is valid. If it's valid, we don't need to do any of the other work in the loop, and Continue tells the script to carry on with the next identifier. If the status is not valid (which is expected for new identifiers), then we process the rest of the loop.

The results from Complete-ACMEChallenge are placed in $chResults. The Challenges property of those results, filtered to the dns-01 challenge type, is placed in $chData, where we can get the RecordValue and RecordName properties:
  • $name contains the name of the TXT record required for validation
  • $value contains the text string that needs to be included in the TXT record for validation
When the TXT record is created by using the GoDaddy API, the record name in the request must not contain the domain name. The $name variable is processed to remove the domain name (it is replaced with nothing), and the result is placed in $recordname, which is used in the request to the GoDaddy API.

 #create DNS record  
 $json = ConvertTo-Json @{data=$value}  
 Invoke-WebRequest https://api.godaddy.com/v1/domains/$domain/records/TXT/$recordname -method put -headers $headers -Body $json -ContentType "application/json"  
   

After all the processing of data is done, creating the TXT record is fairly straightforward. The data for the request is put into a hash table that is converted into JSON. The hash table only requires the data field, but you can include other information like the TTL, as in the sketch below.
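For example, here's the same hash table with an explicit TTL (in seconds) added:

 #ttl is optional; the GoDaddy API enforces a minimum value
 $json = ConvertTo-Json @{data=$value; ttl=600}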

Invoke-WebRequest accesses the GoDaddy web API with the PUT method to send the data. The URL being accessed needs to contain your domain name and the type of record being created. In this case, I hard coded TXT as the record type in the URL, but $domain is used to insert the domain name. The $recordname variable is included in the URL because we only want to update that specific record. If the URL ends at TXT, then the API assumes that we're providing an array of all the TXT records, and any other existing TXT records are wiped out. The $headers variable (defined earlier) provides the authentication information for the request.

 #Submit the challenge to verify the DNS record  
 #30 second wait is to ensure that DNS record is available via query  
 Do {    
   Write-Host "Waiting 30 seconds before testing DNS record"  
   Start-Sleep -Seconds 30
   $dnslookup = Resolve-DnsName -Type TXT -Name $name  
   $dnsSuccess = $false  
   If ($dnslookup.strings -eq $value) {  
     Write-Host "DNS query was successful"  
     Submit-ACMEChallenge $alias -ChallengeType dns-01  
     $dnsSuccess = $true  
   } Else {  
     Write-Host "DNS query for $name failed"  
     Write-Host "The value is not $value"  
     Write-Host "We will try again"  
   }  
 } Until ($dnsSuccess -eq $true)   
   

I ran into an issue when verifying the TXT records. After creating the TXT record, there can be a delay (a few seconds to a few minutes) until the record is accessible via the DNS servers. If Let's Encrypt tries to verify before it is accessible, then it fails and isn't recoverable; you need to create a new identifier. So, I added this code to verify the DNS record is accessible before telling Let's Encrypt to verify.

The Resolve-DnsName cmdlet looks for the TXT record we just created. If the lookup contains the expected value, then Submit-ACMEChallenge is used to tell Let's Encrypt to verify it. The variable $dnsSuccess is also set to $true, and the loop ends. If it's not successful, we try again at 30-second intervals until it resolves successfully. Since adding this code to the script, I haven't had any failures from Let's Encrypt. I think there may be some caching of failed lookups on the client running the script, which results in a two-minute delay, but better that than the Let's Encrypt lookup failing.

 #Verify that the dns record was checked and valid  
 $chStatus = $null  
 Do {  
   Write-Host "Waiting 10 seconds before testing DNS validation"  
   Start-Sleep -Seconds 10    
   $chStatus = (Update-ACMEIdentifier $alias).Status   
   #$chStatus=((Update-ACMEIdentifier $alias -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}).Status  
   If ($chStatus -eq "valid") {Write-Host "$ident is verified"}  
   Else {Write-Host "DNS record not valid yet: $chStatus"}  
 } Until ($chStatus -eq "valid")  
   

After the DNS lookup for the TXT record is successful, the script uses Update-ACMEIdentifier to retrieve the status of the verification. There is a 10-second pause to allow Let's Encrypt to perform the verification. If the verification is still pending, the loop repeats at 10-second intervals.

I have two methods for checking the status ($chStatus). The more complex version (commented out) is one I saw in an example someone else provided, but I found that the simpler version seems to work fine. I did see one person indicate that when the challenge type is not specified, a pending request is not properly retained in the local vault and fails; with my 10-second delay, I'm not sure that's ever happened. Both versions do seem to work.

This is the end of the Foreach loop that processes the identifiers. Each identifier has now been verified, and the $allAlias variable contains the alias used for each identifier. Next up is creating the certificate.

 #Create Certificate  
 New-ACMECertificate $allAlias[0] -generate -AlternativeIdentifierRefs $allAlias -Alias $allAlias[0]  
 Submit-ACMECertificate $allAlias[0]  
 Write-Host "Waiting 10 seconds for certificate to be generated"  
 Start-Sleep -Seconds 10  
 Update-ACMECertificate $allAlias[0]  
 Get-ACMECertificate $allAlias[0] -ExportPkcs12 $pfxPath -CertificatePassword $pfxPass -Overwrite  
   

The New-ACMECertificate cmdlet creates a certificate request. The first identifier ($allAlias[0]) becomes the subject of the certificate. Then the entire $allAlias array is submitted as alternative identifier references, which become the Subject Alternative Names attribute in the certificate. When you list the alternative identifiers manually, you can skip the identifier used for the subject because it's automatically added to the SAN attribute; however, it also works fine when that identifier is included. The alias for the certificate is set to be the same as the alias of the identifier used as the subject.

The certificate request is submitted, and the script waits for 10 seconds to ensure that Let's Encrypt has time to generate the certificate. Update-ACMECertificate retrieves the certificate information from Let's Encrypt and puts it in the local vault.

Get-ACMECertificate with the -ExportPkcs12 parameter is used to export the certificate to a PFX file that can be imported into Exchange Server. While a password is not required for the export, importing a PFX file without a password won't work properly: everything will appear fine, but the certificate will behave as if it has no private key. The -Overwrite parameter is specified because it's assumed that this script will be automated, and this allows the generated file to be overwritten each time.

 #Assign certificate using EMS commands  
 #If run as a scheduled task must be run at EMS prompt  
 $pfxSecurePass = ConvertTo-SecureString -String $pfxPass -AsPlainText -Force  
 $cert = Import-ExchangeCertificate -FileName $pfxPath -Password $pfxSecurePass -FriendlyName $allAlias[0] -PrivateKeyExportable $true  
 Enable-ExchangeCertificate $cert.Thumbprint -Services IIS,SMTP -Force  
   

Finally, we import the certificate into Exchange Server. The Import-ExchangeCertificate cmdlet won't accept a plain text password, so the password is converted to a secure string and placed in $pfxSecurePass.

To make it easier to identify the certificate on the Exchange server, the alias is used as the friendly name. The private key is also marked as exportable because by default it is not.

After the certificate is imported, Enable-ExchangeCertificate is used to specify that it should be applied to the IIS and SMTP services. The -Force parameter enables the cmdlet to complete without requiring user input. However, it also means the default SMTP certificate is replaced without prompting.

Because the Exchange cmdlets are used, you need to schedule this script so that it runs in the Exchange Management Shell and not a regular PowerShell prompt.
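As a sketch, a scheduled task action along these lines loads the EMS environment before calling the script (the install path and script location are assumptions based on a default setup):

 #dot-source RemoteExchange.ps1, connect, then run the renewal script
 powershell.exe -NonInteractive -Command ". 'C:\Program Files\Microsoft\Exchange Server\V15\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto; C:\Scripts\GoDaddyDNSUpdatePublic.ps1"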

If you are going to use this certificate on multiple Exchange servers, then you should update this script to import and enable the certificate on all of the Exchange servers in the site. The current configuration assumes one Exchange server and it applies only on the local Exchange server where the script is run.
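A rough sketch of that change, assuming hypothetical server names Exch01 and Exch02:

 #read the PFX once, then import and enable it on each server
 $bytes = [System.IO.File]::ReadAllBytes($pfxPath)
 foreach ($server in "Exch01","Exch02") {
   Import-ExchangeCertificate -Server $server -FileData $bytes -Password $pfxSecurePass -PrivateKeyExportable $true |
     Enable-ExchangeCertificate -Server $server -Services IIS,SMTP -Force
 }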

Is it worth it?

In reality, the complexity of keeping this up and running is probably not worth it for Exchange Server. Exchange Server is a mission-critical part of your organization. Instead, I'd go for a low-cost certificate provider and get a cert that lasts for 3 years. A SAN certificate from NameCheap.com costs about $35 US per year. That's far less than the value of the time you'd spend getting this up and going.

That said, this was a fun exercise for me and I'll probably use this for test environments. Now that I have the script it's pretty easy for me to use it. 

Alternative for IIS

As part of developing this, I worked out a method to supply the certificate only to IIS. I don't think that it's very useful for Exchange Server since we also want it to apply to SMTP to allow secured SMTP communication. However, I'm including it in case anyone is interested.

 #Import certificate from pfx to server  
 #script must be running as administrator to perform this action  
 $cert=Import-PfxCertificate -FilePath $pfxPath -CertStoreLocation Cert:\LocalMachine\My -Password $pfxSecurePass -Exportable  
   
 #Assign certificate to default web site  
 Import-Module WebAdministration #required for IIS: PSDrive  
 Set-Location IIS:\SSLBindings  
 Get-ChildItem | Where-Object { $_.sites.Value -eq "Default Web Site" -and $_.Port -eq 443 -and $_.IPAddress.IPAddressToString -eq "0.0.0.0" } | Remove-Item  
 $cert | New-Item .\0.0.0.0:443  
   

Importing the certificate is done with Import-PfxCertificate. Again, we specify a path and password, and mark the private key as exportable.

We manually import the WebAdministration module to get access to the IIS: PSDrive. In Windows Server 2012 and later, modules autoload when you use their cmdlets, but not when you use their PSDrives.

SSL bindings are located in IIS:\SSLBindings, so the script sets the location there. In that location, Get-ChildItem gets a list of the SSL bindings. The binding for all IP addresses on port 443 of Default Web Site is deleted.

The certificate information is piped to New-Item with .\0.0.0.0:443. This creates a new binding in the current directory for that certificate to 0.0.0.0 on port 443.
