Create a function to connect to and disconnect from Exchange Online

[Update 26/03/2013] I wanted to share a quick update to the script with you. Over the past few weeks, I have been hitting an issue at some customers that used an outbound authenticating proxy, which prevented me from connecting to Exchange Online using the functions from my profile. As such, I have tweaked the script a bit to include a switch (-ProxyEnabled) that, when used, will make PowerShell authenticate the session against the proxy.

The new script now looks like this:

function Connect-ExchangeOnline{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,Mandatory=$false)]
        [Switch]
        $ProxyEnabled
    )

    if($ProxyEnabled){
        $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Credential (Get-Credential) -Authentication Basic -AllowRedirection -SessionOption (New-PSSessionOption -ProxyAccessType IEConfig -ProxyAuthentication Basic)
    }
    else{
        $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Authentication Basic -AllowRedirection -Credential (Get-Credential)
    }

    Import-PSSession $Session
}

function Disconnect-ExchangeOnline {
    Get-PSSession | ?{$_.ComputerName -like "*.outlook.com"} | Remove-PSSession
}
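With the new switch in place, connecting from behind an authenticating proxy is just a matter of adding -ProxyEnabled when calling the function:

Connect-ExchangeOnline -ProxyEnabled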

[Original Post]

Office 365 allows you to connect remotely to Exchange Online using PowerShell. However, if you had to type in the commands to connect every time, you would lose quite some time:

$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/PowerShell -Authentication Basic -Credential (Get-Credential) -AllowRedirection

Import-PSSession $session

Equally, to disconnect, you’d have to type in the following command each time:

Get-PSSession  | ?{$_.ComputerName -like "*.outlook.com"} | Remove-PSSession

However, it is relatively easy to wrap both commands into functions, which you can afterwards add to your profile:

Function Connect-ExchangeOnline{
   $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Authentication Basic -AllowRedirection -Credential (Get-Credential)
   Import-PSSession $session
}

Function Disconnect-ExchangeOnline{
   Get-PSSession | ?{$_.ComputerName -like "*.outlook.com"} | Remove-PSSession
}

To add these functions to your PowerShell profile, simply copy-paste them into your profile. To find out where your profile file is located, type the following:

PS C:\> $profile
C:\Users\Michael\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
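If the profile file doesn’t exist yet, you can create it first. A minimal sketch:

if (!(Test-Path $PROFILE)) {
    # Create the profile file (and any missing parent folders)
    New-Item -ItemType File -Path $PROFILE -Force
}
# Open it for editing and paste in the functions
notepad $PROFILE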

To start using the functions, all you have to do is to call them from the PowerShell prompt:

PS C:\> Connect-ExchangeOnline

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

WARNING: Your connection has been redirected to the following URI:
https://pod51014psh.outlook.com/PowerShell-LiveID?PSVersion=3.0

PS C:\> Disconnect-ExchangeOnline

Note: there is no output for the Disconnect-ExchangeOnline function.


How to check the version of your tenant in the cloud (Office 365)

I sometimes get the question how one can verify which version of Exchange they’re running in the cloud. Although it should be pretty obvious based on the GUI (Exchange 2010 vs. Exchange 2013) and the fact that the latter isn’t generally available yet, it could come in handy once it is. According to some sources, that release might come sooner rather than later.

Additionally, when you’re planning on going hybrid with the new version of Exchange in Office 365, you’ll have to make sure your tenant isn’t in the process of being upgraded and is running version 15.

To check the version, open up a PowerShell session to your Office 365 tenant and run the following command:

Get-OrganizationConfig | ft AdminDisplayVersion,IsUpgradingOrganization

With the command for connecting to Office 365 via PowerShell, that would look something like this:

$session = New-PSSession -ConnectionUri https://ps.outlook.com/powershell -AllowRedirection -Authentication Basic -Credential (Get-Credential) -ConfigurationName Microsoft.Exchange

Import-PSSession $session

Get-OrganizationConfig | ft AdminDisplayVersion,IsUpgradingOrganization

Running these commands would then lead up to a result similar to the following:

Get-OrganizationConfig | ft AdminDisplayVersion,IsUpgradingOrganization -Autosize

AdminDisplayVersion IsUpgradingOrganization
------------------- -----------------------
0.10 (14.16.190.8)                    False
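In this example, the major version number (14) indicates the tenant is still running the Exchange 2010-based version; an upgraded tenant would report 15.x, and a tenant in the middle of the upgrade would show IsUpgradingOrganization as True.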

You get an error: “The <name> connector cycle has stopped. Object with DN <GUID> failed…”

As part of setting up a hybrid configuration between Exchange on-premises and Exchange Online (or when configuring Exchange Online Archiving), you also need to set up DirSync.

In these scenarios DirSync fulfills an important role, as it will also configure the write-back of some attributes in your local Active Directory. This write-back is required for Hybrid/EOA to work. For a list of attributes that are synced to/from Office 365, have a look at the following article: http://support.microsoft.com/kb/2256198

As part of the best practices when installing DirSync, you should always run the Office 365 Deployment Readiness Tool, which will scan your local Active Directory for incompatible objects. The tool creates a report in which incompatible objects are mentioned, allowing you to modify these objects before configuring DirSync.

However, sometimes objects can still contain incompatible attributes, which might cause issues for DirSync. In such cases, you’ll likely be presented with the following error in the application event log. Please note that this example mentions an issue with the “TargetWebService” Management Agent; it could very well be that you’ll encounter the issue in the SourceAD Management Agent instead.

The TargetWebService Connector export cycle has stopped.  Object with DN CN=<guid> failed validation for the following attributes: proxyAddresses. Please refer to documentation for information on object attribute validation.

This error contains 2 important items:

  1. The Distinguished name (CN=<guid>)
  2. The attribute that is causing issues

However, matching the GUID to a user account isn’t very easy. The best way to go about it is to open the MIIS management interface and work from there. Usually, the client can be found in the following directory:

C:\Program Files\Microsoft Online Directory Sync\SYNCBUS\Synchronization Service\UIShell


After opening the client, navigate to Management Agents, right-click the management agent mentioned in the error message and select Search Connector Space:


In the “Search Connector Space” window, select DN or Anchor from the drop-down list under Scope and specify the Distinguished Name from the error message. Afterwards, click Search:


The search should return a single object. Double-click it to view additional information. Search for the attribute that was mentioned in the event log entry to review its value(s):


In this particular case, one of the proxy addresses contained an illegal character, which caused the Management Agent to fail. Once you’ve determined what the issue is, correct the value in AD and restart synchronization; it should now complete successfully.
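If you’d rather hunt down such objects proactively, the sketch below scans AD for proxy addresses with unexpected characters. It assumes the ActiveDirectory module is available and uses a deliberately simplified character whitelist (X500 and other non-SMTP addresses will show up as false positives), so treat it as a starting point rather than a full implementation of the Office 365 validation rules:

Import-Module ActiveDirectory

# Flag every proxy address whose address part contains characters outside a simple whitelist
Get-ADUser -Filter * -Properties proxyAddresses | ForEach-Object {
    foreach($address in $_.proxyAddresses){
        $mail = $address -replace '^\w+:',''   # strip the "SMTP:"/"smtp:" prefix
        if($mail -notmatch '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+$'){
            [pscustomobject]@{ Name = $_.Name; ProxyAddress = $address }
        }
    }
}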


Load Balancing Exchange 2013 – part 2

Introduction

In the first part of this article, we talked about Load Balancing in general and took a closer look at what the advantages and disadvantages of simple layer 4 load balancing for Exchange 2013 were. Today we’ll dive a bit deeper into the remaining two ways of load balancing Exchange 2013: layer 4 with multiple namespaces and ‘traditional’ layer 7.

Layer 7 Load Balancing

Layer 7 load balancing offers some additional functionality over layer 4. Because traffic is decrypted, the load balancer can ‘read’ (understand) the traffic coming through and take appropriate action based on the type (or destination) of that traffic.

By decrypting traffic, the load balancer can read the destination of a packet, which allows you to make a distinction between traffic for the different Exchange workloads while still using a single virtual service. Based on the type of workload, traffic could e.g. be sent to a different set of servers. However, this was not the most important reason to do layer 7 load balancing. In Exchange 2010, traffic coming from a certain client had to be persisted to the same endpoint (= the same Client Access Server). This meant that the initial connection could be sent to just about any CAS, but once the session was established, subsequent packets for that session had to stay with the same CAS.

A load balancer typically has multiple ways to maintain this client-server relationship. Depending on the make and model of your load balancer, you might see the vendor refer to this relationship as “persistence”, “stickiness”, etc. The most commonly used methods are:

  • Source IP
  • Session ID
  • Session Cookie

For a load balancer to be able to identify these things, it needs to be able to read the traffic, which forces the traffic to be decrypted; the exception is Source IP, which doesn’t require decryption. However, in some scenarios source IP affinity can cause an uneven distribution of load, especially when traffic is coming from behind a NAT device. Consider the following scenario:

Multiple internet-connected devices connect to your on-premises environment and, before hitting the load balancers, they go through a router/firewall or other NAT device which ‘changes’ (masks) the source IP addresses (80.70.60.50 and 80.70.60.40) with a single internal address (10.20.30.40). If your load balancers are configured to persist connections based on the source IP address, all connections will potentially end up at the same CAS. This is, of course, something you’d want to avoid. Else, what purpose would your load balancer have? Right?


For this “problem” there are a few solutions:

  • You could disable NAT, which would reveal the client’s original IP address to the load balancer. Unfortunately, this isn’t always possible and depends on your network topology/routing infrastructure.
  • You could change the configuration of your load balancer to use something other than the source IP to determine whether a connection should be persisted or not.

In the latter case, persistence based on the SSL Session ID is a good alternative. The load balancer examines every packet that flows through it, reads the session ID and, if it finds a match for a previously created session, sends that packet to the same destination as before. While this works brilliantly, it induces a higher load on your load balancer because:

  1. the load balancer needs to inspect each packet flowing through, consuming more CPU cycles
  2. the load balancer needs to maintain a bigger “routing table”, which consumes more memory. By that I mean a table where the Session ID is mapped to a destination server.

As mentioned earlier, because you are decrypting the traffic, you can e.g. determine from the packet what the destination URL is. In essence, this allows you to define multiple virtual services (one for each workload) and make the load balancer choose what virtual service to forward a packet to. In this specific example, the virtual services are “hidden” for the end user.

Let’s pour that into an image; things might become clearer that way.


For external clients, there is still a single external URL (VIP) they connect to, but ‘internally’ there is a separate virtual service for each workload. Whenever a packet reaches the load balancer, it is read and, based on the destination URL, the appropriate virtual service is picked. The biggest advantage is that each virtual service can have its own set of health criteria. This also means that, because workloads are split, if e.g. OWA fails on one server, it won’t affect the other workloads on that server (as they belong to a different virtual service). While OWA might be down, other protocols remain healthy and the load balancer will continue forwarding packets to that server for those workloads.

With this in mind, we can safely conclude that layer 7 load balancing clearly offers some benefits over simple layer 4. However, it will cost you more in terms of hardware capacity for your load balancer. Given that a decently sized load balancer can cost a small fortune, it’s always nice to explore what alternatives you have. On top of that, this kind of configuration isn’t really “easy” and requires quite some work from the load balancer’s perspective. I’ll keep the configuration steps for a future article.

Layer 4 load balancing with multiple namespaces

As I showed in the first part of this article, Exchange 2013 greatly simplifies load balancing compared to Exchange 2010. Unfortunately, this simplification comes at a cost: you lose the ability to do per-protocol health checks when using layer 4. And let’s face it: losing functionality isn’t something you like, right?

Luckily, there is a way to have the best of both worlds though…

Combining the simplicity of layer 4 with a way to mimic the layer 7 functionality is what all the fuss is about. Because when using layer 4 your load balancer has no clue what the endpoint for a given connection is, we need to find a way to make the load balancer aware of the endpoint without actually needing to decrypt traffic.

The answer is in fact as simple as the idea itself: use a different virtual service for each workload but this time with a different IP address for each URL. The result would be something like this:


Each workload now has its own virtual service and therefore you also get a per-workload (per-protocol) availability. This means that, just as with Layer 7, the failure of a single workload on a server has no immediate impact on other workloads while at the same time, you maintain the same level of simplicity as with simple Layer 4. Sounds cool, right?!

Obviously, there is a “huge” downside to this story: you potentially end up with a bunch of different external IP addresses. Although there are also solutions for that, it’s beyond the scope of this article to cover those.

Most people, when I tell them about this approach, don’t like the idea of exposing multiple IPs/URLs to an end user. “For my users it’s already hard enough to remember a single URL”, they say. Who am I to argue? Personally though, I don’t see this as an issue: there’s only one URL an end user will actually be exposed to, and that’s the one for OWA. All other URLs are configured automatically using Autodiscover. So even with these multiple URLs/IPs, your users really only need to remember one.

Of course, there’s also the element of the certificate, but depending on the load balancer you buy, that could still be cheaper than the L7 load balancer from the previous example.

Configuring layer 4 load balancing with different namespaces is done the same way as configuring a single namespace; only now you have to do it multiple times. The difference lies in the health checks you perform for each protocol, and those are free for you to choose. For now, the health-check options are relatively limited (although that also depends on the load balancer you use). But the future might hold some interesting changes: at MEC, Greg Taylor explained that Microsoft is working with some load balancer vendors to make their load balancers capable of “reading” the health status of an Exchange server as produced by Managed Availability. This would mean that your load balancer no longer has to perform any specific health checks itself and can rely on the built-in mechanisms in Exchange. Unfortunately, there isn’t much more information available right now, but rest assured that I’m following this closely.
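On a related note, Exchange 2013 already publishes a per-protocol health page (healthcheck.htm) on each virtual directory, fed by Managed Availability, which most load balancers can poll with a plain HTTP GET. A quick sketch to test it by hand; the server name is an example, and certificate validation may get in the way in a lab:

# A "200 OK" response means Managed Availability considers OWA healthy on this server
Invoke-WebRequest -Uri "https://cas01.exblog.local/owa/healthcheck.htm" -UseBasicParsing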

Differentiator: health checks

As said before, the configuration is identical to layer 4 without multiple namespaces. For more information on how to configure a virtual service with a KEMP load balancer, please take a look at the first part of this article.

The key difference lies in the health checks you perform against each workload. Even then, there is a (huge) difference in what load balancers can do.

Even though I like KEMP load balancers a lot, they are, compared to e.g. F5, more limited in their health checks. From a KEMP load balancer’s perspective, your health-check rule for e.g. Outlook Anywhere would look something like this:


From an F5 perspective, a BIG-IP LTM allows you to “dive deeper” into the health checks: you can define a user account that is used to authenticate against rpcproxy.dll. Only if that fails will the service be marked down, rather than relying on a “simple” HTTP GET.

On a side note: for about 90% of the deployments out there, the simple health check method proves more than enough…

Conclusion

Below you’ll find a table in which I summarize the most important pros and cons per option.

Layer 4 (Simple)

  Pros:
    • Easy to set up
    • Fewer resources required on the load balancer
  Cons:
    • No per-protocol availability

Layer 7

  Pros:
    • Per-protocol availability
    • Single external IP/URL
  Cons:
    • More difficult to set up
    • More resources required on the load balancer

Layer 4 (Multiple Namespaces)

  Pros:
    • Easy to set up
    • Fewer resources required on the load balancer
    • Per-protocol availability
  Cons:
    • Multiple external IPs/URLs

If it were up to me, I’d go for layer 4 with multiple namespaces. It’s clean, it’s simple to set up (both from a load balancer and an Exchange point of view) and it will most probably save me some money. Unless you only have a limited number of (external) IP addresses available, this seems like the way forward to me.

Nonetheless, this doesn’t mean you should kick out your fancy load balancer if you already have one. My opinion: use what you’ve got and, above all, keep it simple!


Retrieving Exchange Autodiscover SCP information from AD via PowerShell

As part of the Autodiscover process, Outlook will query Active Directory in search of the Autodiscover SCP, which it uses to discover the Autodiscover URL it should send its requests to.

The configuration information for Autodiscover can easily be retrieved with the Get-ClientAccessServer cmdlet, which will show you important information like:

  • AutoDiscoverSiteScope
  • AutoDiscoverServiceInternalUri
  • WhenCreated
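For example, to display these values for all Client Access Servers from the Exchange Management Shell:

Get-ClientAccessServer | Select Name,AutoDiscoverSiteScope,AutoDiscoverServiceInternalUri,WhenCreated | ft -AutoSize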

The reason I’m referring to these three items is the way Outlook handles the retrieved Autodiscover information. In a nutshell, it queries AD and retrieves a list of SCPs. This list is first ordered based on the site scope (the site in which the client resides); SCPs that don’t match the client’s site are ordered by creation date (older entries get queried first).

If you wish to find out more about this process, have a look at the following article: http://technet.microsoft.com/en-us/library/bb332063(v=exchg.80).aspx

What I wanted to do is “mimic” this process without running the Get-ClientAccessServer cmdlet. One of the reasons is that the Get-ClientAccessServer cmdlet depends on the Exchange Management Shell (or, more accurately, the Exchange snap-in).

Below you will find a code example which uses PowerShell’s ability to query AD directly (ADSI). It uses exactly the same query as Outlook does to retrieve the list of SCPs and then reads the information mentioned above from each of them. In the end, the information is displayed on screen, so you can order the SCP records by site scope and creation date (see the sketch after the code).

This should give you a pretty good idea of what URL/server your Outlook client will connect to (first) during the AutoDiscover process. Enjoy!

$obj = @()

$ADDomain = Get-ADDomain | Select DistinguishedName
$DSSearch = New-Object System.DirectoryServices.DirectorySearcher
# Same LDAP filter Outlook uses to find the Autodiscover SCPs
$DSSearch.Filter = '(&(objectClass=serviceConnectionPoint)(|(keywords=67661d7F-8FC4-4fa7-BFAC-E1D7794C1F68)(keywords=77378F46-2C66-4aa9-A6A6-3E7A48B19596)))'
$DSSearch.SearchRoot = 'LDAP://CN=Configuration,'+$ADDomain.DistinguishedName
$DSSearch.FindAll() | %{
    $ADSI = [ADSI]$_.Path
    $autodiscover = New-Object psobject -Property @{
        Server                  = [string]$ADSI.cn
        Site                    = $ADSI.keywords[0]
        DateCreated             = $ADSI.WhenCreated.ToShortDateString()
        AutoDiscoverInternalURI = [string]$ADSI.serviceBindingInformation
    }
    $obj += $autodiscover
}

Write-Output $obj | Select Server,Site,DateCreated,AutoDiscoverInternalURI | ft -AutoSize
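To mimic Outlook’s ordering (in-site SCPs first, then oldest first), you can sort the result afterwards. A small sketch that determines the local machine’s AD site via .NET; note that the Site value comes straight from the keywords attribute (e.g. “Site=Default-First-Site-Name”), hence the wildcard match:

# Determine the AD site of the local machine without needing the AD module
$mySite = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Name

# In-site SCPs first, then by creation date (oldest first)
$obj | Sort-Object @{Expression={$_.Site -notlike "*$mySite*"}}, @{Expression={[datetime]$_.DateCreated}} | ft -AutoSize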

Using New-Migrationbatch to perform local mailbox moves in Exchange Server 2013

Along with a whole bunch of other improvements and new features, New-Migrationbatch is one of my favorite new additions to the Management Shell.

Previously, if you were moving mailboxes between mailbox servers in e.g. Exchange 2010, you had to use New-MoveRequest to request a mailbox move from one database/server to another. If you had multiple mailboxes you wanted to move at once, you had several options:

    • Use Get-Mailbox and pipe the results along to New-MoveRequest
    • Import a CSV file and pipe the results along to New-MoveRequest
    • Create an individual move request for each user

While these options are still valid in Exchange Server 2013, you now also have the ability to create a migration batch using the New-MigrationBatch cmdlet.

This cmdlet will allow you to submit new move requests for a batch of users between two Exchange servers, local or remote (other forest) and on-prem or in the cloud (on-boarding/off-boarding). If you have been performing migrations to Office 365, this cmdlet shouldn’t be new to you as it was already available there.

The nice thing about migration batches is that you can easily create them without having to start or complete them immediately. Although with a little effort you could’ve done this using New-MoveRequest as well, it’s not only greatly simplified; the use of migration batches also gives you additional benefits like:

  • Automatic Reporting/Notifications
  • Endpoint validation
  • Incremental Syncs
  • Pre-staging of data

Just as in Exchange Server 2010, the switchover from one database to the other is performed during the “completion phase” of the move. During this phase, the remaining items that haven’t yet been copied from the source mailbox to the target mailbox are copied over, after which the user is redirected to his “new” mailbox on the target database. (For the purpose of staying on track with this article, I’ve oversimplified what happens during the completion phase.)

Creating a migration batch

To create a migration batch, you’ll need a CSV file that contains the email addresses of the mailboxes you are going to move. These can be any of the email addresses assigned to the mailbox; there is, to my knowledge, no requirement to use the user’s primary email address.

Also, the file should have a header called “EmailAddress”:

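For illustration, the contents of such a file could look like this (the addresses are made up):

EmailAddress
megan@exblog.be
kim@exblog.be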

Next, open the Exchange Management Shell and run the following cmdlet:

New-MigrationBatch -Name <name> -CSVData ([System.IO.File]::ReadAllBytes("<full path to file>")) -Local -TargetDatabase <dbname>

Running this cmdlet will create a local mailbox move between two Exchange servers in the same forest. However, it will not automatically start moving the mailboxes, as we haven’t used the -AutoStart parameter. Furthermore, the moves won’t be completed automatically either, because the -AutoComplete parameter wasn’t used.

Note: it’s important that you specify the full path to where the CSV file is stored (e.g. C:\Files\Batch1.csv). Otherwise the cmdlet will fail, because by default it will search for the file in the system32 folder.

Once the batch is created (and you didn’t use the -AutoStart parameter), you can launch the moves by running the following cmdlet:

Get-MigrationBatch | Start-MigrationBatch

Please note that if you have multiple migration batches, this cmdlet will start all of them.
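If you only want to start a specific batch, pass its name instead:

Start-MigrationBatch -Identity <name>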

Polling for the status of a migration

You can query the current status of a migration on a per-mailbox basis using the Get-MigrationUserStatistics cmdlet. The cmdlet returns the current status of the mailbox being moved and the number of items that have been synced/skipped so far.

Get-MigrationUser | Get-MigrationUserStatistics


Note: alternatively, you can also use the -NotificationEmails parameter during the creation of the migration batch. This parameter allows you to specify an admin’s email address to which a status report is automatically sent. If you don’t use this parameter, no report is created or sent.
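A sketch of what that could look like (name, path, database and address are examples):

New-MigrationBatch -Name Batch1 -CSVData ([System.IO.File]::ReadAllBytes("C:\Files\Batch1.csv")) -Local -TargetDatabase DB02 -NotificationEmails "admin@exblog.be"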

Completing the migration

If you didn’t specify the -AutoComplete parameter when creating the migration batch, you will have to start the “completion phase” manually. This can easily be done using the Complete-MigrationBatch cmdlet.

Get-MigrationBatch | Complete-MigrationBatch

When you take a look at the migration statistics, you’ll see that the status will be “Completing”:


Once the mailbox moves have completed successfully, the status will change to “Completed”.

Summary

As you can see, New-MigrationBatch will certainly prove useful, e.g. if you want to pre-stage data without performing the actual switchover. Of course, there are other use cases as well: it’s the perfect companion for cross-forest moves and moves to/from Office 365, as it comes with numerous parameters that can make your life easier. For instance, the Test-MigrationEndpoint cmdlet can be used to verify that the remote host (to/from which you are migrating) is available and working correctly, which is especially useful for remote mailbox moves (cross-forest) or between on-prem and the cloud.

If you want to find out more about the cmdlet, run Get-Help New-MigrationBatch -Online from the Exchange Management Shell, which will take you straight to the online documentation!

Until later!

Michael


Configuring High Availability for the Client Access Server role in Exchange Server 2013 Preview

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it’s not very likely that big changes will occur between now and RTM, they might.

Following one of my previous articles, in which I described how to configure a Database Availability Group to achieve high availability for the Mailbox server role, we will now take a look at how to configure high availability for the Client Access Server role.

CAS Array

To achieve high availability, you create a load-balanced array of Client Access Servers, just like in Exchange Server 2010. Unlike before, layer 4 load balancing now becomes a viable option, though doing without a load balancer entirely would only fit the smallest deployments where there’s no budget for one.

Layer 4 load balancing only takes the IP address (and TCP port) into account. And yes, you are no longer required to configure “affinity”, the process whereby a connection, once established, had to be persisted to the same Client Access Server. This is because CAS in Exchange Server 2013 doesn’t do any data rendering anymore: everything happens on the backend (the Mailbox servers).

I hear you thinking: does this mean we could use DNS load balancing (a.k.a. round robin)? The answer is yes and no. Yes, because it will load-balance between multiple Client Access Servers; no, because if a server fails, you’d have to remove it (manually) from DNS and wait for the record to time out on all the clients. While this might be a cost-effective way to get load balancing and a very, very basic form of high availability, it is not a viable solution for most deployments…

Ever since the CAS array was first introduced, it has been subject to quite some misconceptions. A lot of them were addressed by Brian Day in a very interesting article he wrote. What I find is that people tend to mix up the RPC Client Access array and the load-balanced array used for HTTP-based traffic. Yes, the use of the term “CAS array” can be a little confusing. No, they’re not the same!

Now, since Exchange Server 2013 dropped RPC-over-TCP, I no longer see the purpose of creating the RPC Client Access array object (New-ClientAccessArray). Instead, it suffices to configure multiple Client Access Servers with the same internal hostname for Outlook Anywhere.

To understand what happens, let’s take a look at the following examples:

In the case where you’re using two Client Access Servers in the same AD site, by default Exchange will “load balance” traffic between the two endpoints. This means that the first request will go to CAS1, the second to CAS2, the third to CAS1, and so on. While this does provide some sort of load balancing, it doesn’t really provide high availability: once Outlook is connected to a CAS, it will keep trying to connect to that same server, even after that server goes down. Eventually it will try connecting to the other CAS, but in the meantime your Outlook client will be disconnected.


If we add a load balancer, we need to configure the internal hostname for Outlook Anywhere to a value shared between the Client Access Servers, for example outlook.exblog.be. This FQDN would then point to the VIP of the load balancer which, in turn, takes care of the rest. Because we’re using a load balancer, it will automatically detect a server failure and redirect incoming connections to the surviving node. Since there is no affinity required, this failover happens transparently to the end user.


As explained before, this load balancer could be anything from simple DNS load balancing, to WNLB or a full-blown hardware load balancer that’s got all the kinky stuff! However, in contrast to before (Exchange 2010), most of the advanced options are not necessary anymore…

Configuring Outlook Anywhere

To configure the internal hostname for Outlook Anywhere, run the following command for each Client Access Server involved:

Get-OutlookAnywhere -Server <server> | Set-OutlookAnywhere -InternalHostname <fqdn>
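To apply the same internal hostname to every Client Access Server in one go, something like the following should do. Here outlook.exblog.be is the example FQDN used earlier, and depending on your build you may also be required to pass -InternalClientsRequireSsl along with the hostname:

# Point the internal Outlook Anywhere hostname of all CAS to the load-balanced FQDN
Get-OutlookAnywhere | Set-OutlookAnywhere -InternalHostname outlook.exblog.be -InternalClientsRequireSsl $true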

Configuring the Load Balancer

As I explained earlier, layer 4 is now a viable option. And although this could mean you’d just be using DNS load balancing, you would typically want to use an actual load balancing device (physical or virtual).

The benefit of using a load balancer over e.g. WNLB is that these devices usually give you more options for health-checking the servers/services that you’re load balancing, which gives you better control over the load balancing process. For example, you could check for a particular HTTP response code to determine whether a server is running or not. It definitely beats simple ICMP pings!

The example below is based on the load balancer in my lab: a KEMP Virtual LoadMaster 1000. As you will see, it’s set up in the most basic way:

I’ve configured no persistence and, because it’s a lab, I’m only checking the availability of the OWA virtual directory on the Exchange servers. Alternatively, you could do more complex health checks. If you’re looking for more information on how to configure a KEMP load balancer, I’d suggest you take a look at Jaap Wesselius’ blogs here and here. Although those articles describe the configuration of a LoadMaster in combination with Exchange 2010, the process itself (except for the persistence settings, etc.) is largely the same for Exchange Server 2013. Definitely worth the read!

