[updated: July 20, 2015] Script: putting Exchange Server 2013 into Maintenance Mode

Latest Update:

v1.8 (07/20/2015): fixed a copy/paste error in the script and cleaned up the code to be a little more efficient (removed a redundant IF-statement). Published the script to the TechNet Script Gallery for easier download access.

Introduction

In Exchange 2010, one had the option to put a Mailbox server that was part of a DAG into “maintenance mode” by running the “StartDagServerMaintenance.ps1” script included with the product. Likewise, StopDagServerMaintenance.ps1 was used to pull a server out of this so-called maintenance state. In fact, this script would move any active mailbox databases to another node in the DAG and mark the server as temporarily unavailable to the other servers. That way, if a failover occurred while the server was in ‘maintenance mode’, you wouldn’t risk it ending up as a valid failover target.

Exchange 2013 goes beyond what was possible before and extends this functionality. You now have the ability to put an entire server into maintenance mode, meaning that components such as the Transport service or the Unified Messaging Call Router are also temporarily put on hold while you do some work on your server.

There might be various reasons to put a server into maintenance mode. For instance, you may need to install software, or you may want to do some troubleshooting without affecting users that have a mailbox in an active mailbox database on that server. To facilitate the process, I created two scripts which will automatically put an Exchange 2013 server into Maintenance Mode or take it back out.

The manual process

The process for putting an Exchange 2013 server into maintenance mode is relatively straightforward. To enable the Maintenance Mode, you must run the commands below.

If the server is a Mailbox server, all active queues need to be drained before you can disable the transport service. To help clear out the queues, existing messages on the server are redirected to another server. Please note that the TargetServer value has to be an FQDN:

Set-ServerComponentState <server> -Component HubTransport -State Draining -Requester Maintenance
Redirect-Message -Server <server> -Target <server_fqdn>

If the server is part of a DAG, you must also run these commands:

Suspend-ClusterNode <node>
Set-MailboxServer <server> -DatabaseCopyActivationDisabledAndMoveNow $true
Set-MailboxServer <server> -DatabaseCopyAutoActivationPolicy Blocked

Once all queues are empty, you can disable all components:

Set-ServerComponentState <server> -Component ServerWideOffline -State Inactive -Requester Maintenance
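To verify that the queues have actually drained, something like the following can help (a sketch; the Poison and Shadow Redundancy queues are excluded because they are not expected to empty):

Get-Queue -Server <server> | Where-Object { $_.DeliveryType -ne "ShadowRedundancy" -and $_.Identity -notlike "*\Poison" -and $_.MessageCount -gt 0 }

If this returns nothing, the server is ready to be taken offline.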

Taking the server out of Maintenance Mode is a matter of simply reversing the actions we took to put it into Maintenance Mode.

First, we reactivate all components:

Set-ServerComponentState <server> -Component ServerWideOffline -State Active -Requester Maintenance

If the server is part of a DAG, you need to reactivate it in the cluster (by resuming the cluster node):

Resume-ClusterNode <node>
Set-MailboxServer <server> -DatabaseCopyActivationDisabledAndMoveNow $false
Set-MailboxServer <server> -DatabaseCopyAutoActivationPolicy Unrestricted

If the server is a Mailbox Server, the transport queues need to be resumed as well:

Set-ServerComponentState -Identity <server> -Component HubTransport -State Active -Requester Maintenance

Although not explicitly required, it’s best to restart the transport services after changing their component states. This ensures they ‘pick up’ the changed component states immediately rather than having to wait for Managed Availability (Health Service) to take action.
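On Exchange 2013, that boils down to restarting the transport services, for example:

Restart-Service MSExchangeTransport
Restart-Service MSExchangeFrontEndTransport

(The Front End Transport service only exists on servers running the Client Access role.)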

Using the scripts

Sometimes it can take a while before the active queues are drained. Because I do not always want to sit in front of the screen and periodically check the queues myself, I created two little scripts that fully automate the process explained above. Besides the required steps, the scripts also perform additional safety checks and inform you about other server component states which might prevent the server from working correctly.

The first script, Start-ExchangeServerMaintenanceMode.ps1 will put a server into Maintenance Mode, whereas Stop-ExchangeServerMaintenanceMode.ps1 can be used to take a server out of the maintenance state.

Please note that the scripts rely on built-in Exchange functions and therefore need to be run from the Exchange Management Shell.
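For illustration only, an invocation could look like this. Note that the parameter names below are assumptions, not taken from the scripts themselves; check the scripts' built-in help for the actual syntax:

.\Start-ExchangeServerMaintenanceMode.ps1 -Server EX01 -TargetServer EX02.domain.local
.\Stop-ExchangeServerMaintenanceMode.ps1 -Server EX01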

Version history

v1.8 (07/20/2015): fixed copy/paste bug; removed duplicate code and made some overall improvements to script efficiency.

v1.7 (07/08/2015): removed the requirement to dot-source the script. Published the script to the TechNet Script Gallery for easier download access.

v1.6 (29/11/2013): some minor bug fixes in the Start-ExchangeMaintenanceMode script.

v1.5 (28/11/2013): Based on feedback from several readers, I’ve improved the scripts by rewriting parts of the code and, as such, making it more lenient and more usable in scenarios where you want to run the script from a remote Exchange server. The script now also restarts the Transport service(s) after changing their component states. This ensures that the new component states are picked up immediately, rather than after Managed Availability kicks in. Without the change it could take anywhere from a few minutes to a few hours before the transport services were really inactive/active again. The download links at the bottom of the page are updated to point to the new versions of the scripts. Last, but not least, when ending a maintenance mode, the script will query the server for any components that might still be inactive and display a warning if any are found. A special thanks to Dave Stork for some of the ideas!

v1.4: updated the script to include some additional error checks. First, it will check whether the person executing the script has local admin rights. If not, the script will throw a warning and exit. Secondly, it will check whether the TargetServer name can be resolved. If it’s not an FQDN, it will resolve it to an FQDN. If it cannot be resolved, an error will be thrown.

v1.3: after some feedback from Brian Reid (thanks Brian!), I’ve finally updated the script to include the “Redirect-Message” cmdlet. This will ensure that the queues will drain more quickly on the server by moving messages from one server to another. Have a look at Brian’s blog if you need more info: http://blog.c7solutions.com/2012/10/placing-exchange-2013-into-maintenance.html

v1.2: Maarten Piederiet emailed me pointing out that he had encountered some issues while using the script. Apparently, while draining the message queues, the script ran forever because it waits for every queue to become empty, including the Poison and Shadow Redundancy queues. To avoid this, he made a minor change to the script to exclude both queues. Thanks for the tip!

The scripts

Below you’ll find links to my SkyDrive from where you can download the scripts. Enjoy!

Start-ExchangeServerMaintenanceMode (v1.8)

Stop-ExchangeServerMaintenanceMode (v1.5)

Disclaimer: these scripts are provided “as-is” and are to be used at your own risk. I do not and cannot take any responsibility for the use of these scripts in your environment. Please use with caution and always test them before use.

If you have suggestions, comments or think things can be better: please let me know! Your feedback is greatly appreciated!


Hybrid Free/Busy lookups might fail if Outlook Anywhere is disabled

Recently, I bumped into Jaap Wesselius’ article about an issue he encountered where Hybrid Free/Busy lookups were failing. As this relates to Hybrid Exchange, I was – of course – intrigued, and I remembered that I once encountered a similar scenario but could not remember how I resolved the problem back then.

After some digging, I came across the following KB article, which describes the behavior of Free/Busy requests and why they might fail if Outlook Anywhere is disabled (blocked) at the user level: http://support2.microsoft.com/kb/2734791/en-us?sd=rss&spid=13159

To make a long story short, if Outlook Anywhere is disabled at the user level, Autodiscover does not return the External EWS URL which is required to make the Free/Busy call.

The solution is as simple as the problem itself: re-enable Outlook Anywhere for the user and you should be fine. Of course, depending on your environment, this might be a little challenging. This being said, I do suggest that you configure and (if possible) use Outlook Anywhere, as it will make your life easier down the road (e.g. for migrations to Exchange 2013).
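For reference, a sketch of how checking and re-enabling Outlook Anywhere at the user level could look (assuming the per-user block is controlled by the MAPIBlockOutlookRpcHttp property, as in Exchange 2010):

Get-CASMailbox -ResultSize Unlimited | Where-Object { $_.MAPIBlockOutlookRpcHttp }
Set-CASMailbox -Identity <user> -MAPIBlockOutlookRpcHttp $false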


Microsoft manages to ship yet another broken Cumulative Update…

A few days ago, Microsoft released Cumulative Update 6 for Exchange 2013 to the world. There used to be a time when Exchange Server updates were fairly safe. However, like pretty much every other Cumulative Update for Exchange 2013, this one also includes bugs which break functionality in one way or another. While one could say it is starting to become painful for Microsoft, I’m starting to believe it’s more of a joke.

Exchange Server MVP Jeff Guillet was the first to report the issue. As it turns out, the Hybrid Configuration Wizard in CU6 runs just fine, but some features (like initiating a mailbox move from the on-premises EAC or the ability to switch between the on-premises/cloud EAC) no longer work. Although the scope of the break is somewhat limited (it only applies to customers in a hybrid deployment), one could argue it’s an important focus area for Microsoft – especially given that it’s cloud-related. Microsoft has been trying really hard (with success, may I add) to promote Office 365 and get customers to onboard to “the service”. As such, I find it really surprising that this is the n-th issue related to hybrid deployments in such a short time. In Cumulative Update 5, the Hybrid Configuration Wizard was broken, and now there’s this.

Needless to say, you have been warned about deploying Cumulative Updates into production. Pretty much every MVP who announced the Cumulative Update remarked that you should test the update before deploying it. I would say this is a general best practice, but given the history of recent Exchange Server updates, I wouldn’t dare to deploy one without thoroughly testing it.

This brings me to another point: what happened to testing, Microsoft? I understand that it’s impossible to test every customer scenario out there, but how do pretty obvious functionalities like these manage to slip through the cracks? If it were a one-time event, I could understand. But there’s a clear trend developing here.

Running a service like Office 365 is not easy. More so, the cadence at which the service evolves can be really punishing. On-premises customers have been struggling to keep up with the updates that are being released in the cloud, but it seems that Microsoft itself is having a hard time keeping up too.

On a final note, I’m wondering what customers with a hybrid deployment should do. According to Microsoft support guidelines, hybrid customers are requested to stay current with Exchange Server updates. But given that these are now two consecutive updates causing problems, one might start to wonder whether it isn’t better to stay at CU4, as it was the last CU which did not have any hybrid issues…

I imagine that Microsoft is working hard on a fix for this issue, even during a holiday weekend… Let’s wait and see what happens early next week!

Until then, I would hold off on deploying CU6 and revert to using CU5 with the interim update which fixes the HCW bug or – if you don’t like IUs – stick to CU4/SP1.


Microsoft releases updates for Exchange 2007, 2010, 2013

Today, Microsoft released its latest updates for Exchange 2007, 2010 and 2013.

The updates for Exchange 2007 and 2010 mostly revolve around the Daylight Saving Time changes, plus a bunch of fixes for the latter version.

Cumulative Update 6 for Exchange 2013 doesn’t introduce any new features or feature changes, but I’m happy to see that the fix for the Hybrid Configuration Wizard bug – which caused the HCW to fail – is now included by default. An Interim Update was already available, but it’s nice to see it included in the full build.

Along with a bunch of other fixes, Cumulative Update 6 now also closes the gap with Office 365 when it comes to Public Folder performance and scalability: you can now also deploy up to 100,000 public folders on-premises. Along with this change, there are some other (minor) behavioral changes which Microsoft outlined beautifully here.

For more information on these updates, have a look at the following announcements from Microsoft:

Exchange 2013

Some thoughts on using the Exchange Integration Pack for Orchestrator…

Recently, a customer decided to start using System Center Orchestrator to automate some of the recurring tasks around the management and support of their Exchange organization. More specifically, they wanted to create a custom interface to which the support organization would get access in order to request specific actions like mailbox moves, message tracking logs, etc. This would allow them to execute some (basic) tasks themselves without needing any Exchange permissions or being forced to use PowerShell or the Exchange Management Console.

In fact, Orchestrator is very powerful when it comes down to automating things like this. The Integration Packs for products like Exchange, Active Directory and SharePoint allow you to easily develop such solutions.

What is described below are merely some thoughts I had while building the solution along with a colleague of mine. I’m in no way an expert in the use of Orchestrator, which is exactly why I thought this information might be interesting to you.

Before diving into the quirks which may (or may not) be “by design”, let’s have a look at what we were trying to build.

The solution

The customer wanted a custom interface (it was not specified what it needed to be) which would allow members of the Service Desk to request message tracking information without the need for an Exchange administrator to intervene. After some debate on whether or not PowerShell should be leveraged to develop such a GUI, we decided to move forward using SharePoint lists. SharePoint lists are easy to set up and use, and because SharePoint was already heavily used within this organization, it seemed like the logical choice.

In order not to complicate things, we decided that we would ask the support engineer for the following information:

  • Sender Email Address
  • Recipient Email Address
  • Date + Time (hour)

To enter the information into the SharePoint list, a custom form was created which also took care of data validation. As such, the engineer would be able to enter only a sender address, a recipient address, or both. The date + time are always required and need to be in a specific format (yyyy-mm-dd). The reason we chose to do data validation in the form is two-fold: first, it’s extremely easy to do, and secondly, it would save us quite a bit of code later on when building the solution in Orchestrator. The date + hour indicate for what time the message tracking logs are requested. We found that 95% of all requests were within a 24-hour timespan. That’s why the code we built (more about that later) looks in the message tracking logs 12 hours before and 12 hours after the indicated time.

Whenever an engineer would enter a new item in the list, it should automatically get processed by Orchestrator which would then fetch the information from the Message Tracking logs and write it to a CSV file which would be attached to the list item in SharePoint when finished processing.

All in all, the requirements were pretty straightforward, which should make the final solution relatively easy to build. In order to meet them, we needed the following components in Orchestrator:

  • SharePoint Integration Pack
  • Exchange Integration Pack
  • Active Directory Integration Pack

Below is a screenshot of the actual runbook which we built:

[screenshot: the runbook]

As you can see, it’s not the most advanced runbook (and maybe not the most efficient either). However, it does the trick. As you will notice, we did build in some logging (in order to have a history of what happened) and we also do some data validation using the built-in capabilities of Orchestrator.

The quirks

Monitor List Items activity

The first problem we came across was with the “Monitor List Items” activity. When the activity is first added to the runbook, it automatically detects all the fields in the list and allows you to use them throughout the runbook as published data from that activity:

[screenshot: published data fields of the activity]

The problem, though, is that – for some reason – the activity doesn’t pick up changes made to the list after it has been added to the runbook. When you have designed your SharePoint list to have all the necessary fields from the get-go, this probably isn’t much of an issue. In our case, unfortunately, the customer tends to change his mind once in a while…
No matter what we tried, newly added fields would not show up in the published data for that activity. The only workaround is to remove the activity and add it again. While this might work for very simple workflows, in larger workflows – like the one above – dealing with this is particularly painful, especially because the published data from the activity is used in multiple places. This means that when you remove/re-add the activity, you have to go into every other activity where you referenced the data and ‘reconfigure’ it. Again, this is not a big problem if you know it up front, but it can cost you some time if you decide to make changes to the SharePoint list. So one word of advice: plan carefully!

Exchange Management Shell code

The Exchange Integration Pack allows you to directly run a bunch of Exchange PowerShell commands through the Run Exchange Management Shell activity. Pretty easy, right?! Not really. As it turns out, not everything you can do in the Exchange Management Shell works in this activity.

For instance, in the Exchange Management Shell, the following code would work beautifully:

Get-TransportServer | Get-MessageTrackingLog

Bizarrely enough, this won’t work here. Instead, you have to break the pipeline as follows:

$servers = Get-TransportServer
$servers | Get-MessageTrackingLog

My best guess is that the runbook activity is already executing a pipeline which limits what you can do with it. Again, this isn’t a big problem. But it’s just something to keep in mind when developing code to be used in Orchestrator.

Here’s the code I (we) used:

#Defining Variables

$Subject = "{MT-Subject from "Monitor List Items"}"
$Sender = "{MT-Sender from "Monitor List Items"}"
$Recipient = "{MT-Recipient from "Monitor List Items"}"
$date = "{MT-Date from "Monitor List Items"}"
$Hour = "{MT-Hour from "Monitor List Items"}"

#Check Date Input
try{
[datetime]$inputdate = $date+" "+$Hour+":00:00"
[datetime]$startDate = $inputdate.AddHours(-12)
[datetime]$endDate = $inputdate.AddHours(+12)
}
catch{
$returnMsg = "Input date format invalid"
$returnMsg
exit
}

#Convert DateTimes back into usable format
$strStartDate = $startDate.ToString("yyyy/MM/dd HH:mm:ss")
$strEndDate = $endDate.ToString("yyyy/MM/dd HH:mm:ss")

#Construct cmdlet
$cmd = "Get-MessageTrackingLog -Server `$_.Name"

if($Subject -ne $null -and $Subject -ne ""){
$cmd += " -MessageSubject `"$Subject`""
}
if($Sender -ne $null -and $Sender -ne ""){
$cmd += " -Sender `"$Sender`""
}
if($Recipient -ne $null -and $Recipient -ne ""){
$cmd += " -Recipient `"$Recipient`""
}

$cmd += " -Start `"$strStartDate`""
$cmd += " -End `"$strEndDate`""

#workaround for Orchestrator limitation (limitation while executing pipeline)

$servers = Get-TransportServer
$result = $servers | %{
Invoke-Expression $cmd
}

$result | Select EventID,Source,Sender,Recipients,MessageSubject,MessageID

Exchange Management Shell code output

This is probably the biggest quirk we had to deal with. When you execute code within Orchestrator, the output from the code is then pushed to the next activity (provided that you subscribed to the data, of course). In this case, if multiple results came back from the Exchange Management Shell activity, it would fire the following activity for each result. To avoid this and to bundle the results and send them off to the next activity as a whole, you need to “flatten” the output first:

[screenshot: the “Flatten” option with the separator symbol]

That was easy, wasn’t it…? Well, that wasn’t really the problem either. When you take a closer look at the symbol used for separating the results, you’ll notice that we reverted to using a rather rarely used symbol. There’s a specific reason for that.

Here’s why: for some (stupid) reason, the results which are returned by the EMS runbook activity look like the following:

“[EventId: RECEIVE]
[Source: SMTP]
[Sender: member@linkedin.com]
[Recipients:email@domain.com]
[MessageSubject: email, please add me to your LinkedIn network]
[MessageId: somethinghere@somethingelse]
,[EventId: SEND]
[Source: SMTP]
[Sender: member@linkedin.com]
[Recipients: email@domain.com]
[MessageSubject: email, please add me to your LinkedIn network]
[MessageId: somethinghere@somethingelse]”

No, this is not a weird way of displaying a hash table; it’s literally one large string that is passed on. This makes it incredibly difficult to deal with. In fact, there is no way you can just push this into a CSV file and make something useful out of it without processing the output first.

A quick look on the internet, brought me to the following article: http://blog.coretech.dk/jgs/scorchestrator-2012-ps-function-for-parsing-the-result-of-the-run-exchange-management-shell-cmdlet-activity/

The author of that article suggested parsing the output and converting it into something usable. This seemed fair enough to me and I decided to re-use the code (although I needed to modify it to make it work in our case). It was added to a second activity called “Convert Output”, which is a “Run .NET Script” runbook activity. Here’s the code I used:

$inputText = "{Command Output for "Get MessageTracking Results"}"

if($inputText -ne ""){

$collection = $inputText.Split("¶")
$returnCollection = @()
foreach ($item in $collection)
{
$keypairCollection = $item.Split("`n")

# Note: -replace is used here instead of TrimStart, because TrimStart treats its
# string argument as a set of characters and can eat into the start of the value
$EventID = $keypairCollection[0].Trim().TrimStart("[").TrimEnd("]") -replace '^EventId:\s*'
$Source = $keypairCollection[1].Trim().TrimStart("[").TrimEnd("]") -replace '^Source:\s*'
$Sender = $keypairCollection[2].Trim().TrimStart("[").TrimEnd("]") -replace '^Sender:\s*'
$Recipients = $keypairCollection[3].Trim().TrimStart("[").TrimEnd("]") -replace '^Recipients:\s*'
$MessageSubject = $keypairCollection[4].Trim().TrimStart("[").TrimEnd("]") -replace '^MessageSubject:\s*'
$MessageId = $keypairCollection[5].Trim().TrimStart("[").TrimEnd("]") -replace '^MessageId:\s*'

$obj = New-Object -TypeName PSObject
Add-Member -InputObject $obj -MemberType NoteProperty -Name "EventID" -Value $EventID
Add-Member -InputObject $obj -MemberType NoteProperty -Name "Source" -Value $Source
Add-Member -InputObject $obj -MemberType NoteProperty -Name "Sender" -Value $Sender
Add-Member -InputObject $obj -MemberType NoteProperty -Name "Recipients" -Value $Recipients
Add-Member -InputObject $obj -MemberType NoteProperty -Name "MessageSubject" -Value $MessageSubject
Add-Member -InputObject $obj -MemberType NoteProperty -Name "MessageId" -Value $MessageId

$returnCollection += $obj
}
$output = $returnCollection | ConvertTo-Csv -Delimiter ";" -NoTypeInformation
}
else{
[string]$output = "No messages could be found with the specified parameters."
}

You might still wonder why I had to use the ¶ symbol as a delimiter for the output from the “Get MessageTracking Results” activity… The problem with parsing the output is that you have to split the individual results returned by the activity. Using a regular comma or semicolon is out of the question, as it’s quite likely that the subject of a message contains that same character, which would cause the output conversion to fail (I found out the hard way). So we needed a symbol which allowed us to safely parse the output; hence the ¶ symbol. For now, this does mean that if someone uses the ¶ symbol in a subject and it is returned by the message tracking logs, the above code will fail and there won’t be any output for the message tracking request.
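One possible alternative (an untested sketch) would be to sidestep the bracketed text format altogether by serializing to CSV inside the EMS activity itself, so that embedded commas and semicolons are protected by CSV quoting:

$result | Select EventID,Source,Sender,Recipients,MessageSubject,MessageID | ConvertTo-Csv -Delimiter ";" -NoTypeInformation | Out-String

Whether the runbook activity passes that string on intact is something you would have to verify, of course.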

If you have a better alternative: feel free to comment and let me know!

Some final thoughts

All in all, Orchestrator proves to be quite a flexible platform for tasks like the one described in this article. Especially the combination with SharePoint makes it very easy to build ‘custom’ tools which can be used by anyone, without needing to grant them permissions in Exchange (even though RBAC provides a nice framework to do just that). I only wish that some of the quirks get solved, as that would remove the need for the additional steps (and code) which only add complexity…


Spooky! The curious case of the ‘ghost’ File Share Witness…

Recently, I was doing some research for a book that I’m working on together with Exchange MVPs Paul Cunningham and Steve Goodman. It involved recovering a “failed” server using the /m:recoverserver switch. The process itself is straightforward, but depending on what server role(s) you are recovering, you might have to perform some additional post-recovery steps.

In this particular case, I was recovering a Client Access Server (single role) which also happened to be the File Share Witness for one of my Database Availability Groups.
As such, you need to ‘reconfirm’ the recovered server as a File Share Witness. One way of doing so is to run the following command:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup

However, upon executing the command, I was presented with the following error message:

[screenshot: the error message]

Given that I didn’t use a System State backup, I was surprised to read that a File Share Witness already existed.
The first thing I did was check the restored server itself to see if the share existed. As expected, there was nothing to see.

By default, a DAG uses the Node and File Share Majority cluster quorum model. This prevents you from removing the File Share Witness from the cluster, because it is a critical resource. So my next thought was to temporarily ‘move’ the File Share Witness to another server and then move it back. First, I executed the following command to move the FSW to another server:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <server name>

The command completed successfully, after which I decided to move the FSW back to the recovered server using the following command:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <recovered server name>

I was surprised to see that the command failed with the same error message as before:

[screenshot: the same error message]

I then took a peek at the cluster resources and found the following:

[screenshot: the cluster resources]

It seemed there were now TWO File Share Witnesses for the same DAG, the failed one being the one that used to live on the recovered server.

At this point, I decided to clean house and remove both resources. Before being able to do so, I had to switch the quorum model to “Node Majority”:

Set-ClusterQuorum -NodeMajority
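With the quorum model switched to Node Majority, the stale witness resources can be removed using the failover cluster cmdlets (the resource name below is just an example; use the names returned by Get-ClusterResource):

Get-ClusterResource | Where-Object { $_.ResourceType -eq "File Share Witness" }
Remove-ClusterResource -Name "File Share Witness (\\<old server>\<share>)" -Force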

I then re-ran the command to configure the recovered server as the File Share Witness:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <recovered server name>

Note: when configuring the File Share Witness, the cluster’s quorum model is automatically changed back to NodeAndFileShareMajority.

After this series of steps, everything was back to the way it was and working as it should. I decided to double-check with Microsoft whether they had seen this before. That’s also where I got the [unofficial] name for this “issue”: ghost file share witness (thanks to Tim McMichael). If you ever land in this situation, I suggest you contact Microsoft Support to figure out how you got there. From personal testing, however, I can tell that this behaviour seems consistent when recovering a File Share Witness using the /m:recoverserver switch.


Hybrid Configuration Wizard fails with error ‘Mail Flow Default Receive Connector cannot be found on server…’

Recently, I got asked to assist with a Hybrid Configuration Wizard which was failing with the following error message:

Updating hybrid configuration failed with error ‎’Subtask NeedsConfiguration execution failed: Configure Mail Flow Default Receive Connector cannot be found on server <server name>. at Microsoft.Exchange.Management.Hybrid.MailFlowTask.DoOnPremisesReceiveConnectorNeedConfiguration‎()‎ at…

Although the message might not reveal much at first sight, it does contain everything we need to start troubleshooting. Typically, I would suggest you have a look at the Hybrid Configuration Wizard log files (located in the logging\Update-HybridConfiguration folder), but the only thing you would find there is the exact same error message.

First, we know that the HCW is trying to configure the hybrid mail flow and that it failed trying to modify the default connector that’s in place. More specifically, it was trying to modify the receive connector on the server that’s specified in the error message.

In this particular case, it wasn’t even able to find the Default Receive Connector. However, when you run Get-ReceiveConnector -Server <servername>, the receive connector does show up. How is this possible?

The Hybrid Configuration Wizard looks at more than just the existence of the connector. In fact, it checks that the connector’s configuration is valid as well. As such, it checks the bindings on the connector and expects bindings for both IPv4 and IPv6 to be present. So, to check whether your existing connector is valid, run the following command:

Get-ReceiveConnector -Server <servername> | fl Identity,Bindings

[screenshot: the connector identities and bindings]

In this particular case, the IPv6 bindings were missing. This was because IPv6 was disabled on the server (which it shouldn’t be!). Re-enabling IPv6 and then either manually adding the binding to the connector or re-creating the connector solved the issue.
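For reference, a sketch of manually re-adding the bindings (the connector name here is an assumption; the default frontend connector typically follows the pattern “Default Frontend <server name>”):

Set-ReceiveConnector -Identity "<server>\Default Frontend <server>" -Bindings "0.0.0.0:25","[::]:25"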

The moral here is that you shouldn’t disable IPv6 on an Exchange 2013 box. Even more so, it’s not supported if you do. I’ve seen companies that still disable IPv6 by default; maybe a remnant of earlier times when disabling IPv6 would actually solve issues instead of creating them. However, times have changed and the IPv6 implementation in Windows is much better now…
