Microsoft is retiring the MCSM/MCA Program…

After having written about my experience of going through the MCSM program, I couldn't resist writing down my thoughts on Microsoft's latest decision to kill the MCSM/MCA program.

For those who aren't fully up to date on what happened, let me enlighten you first. Early Saturday morning (Friday evening if you're in the US), Microsoft sent out the following statement to all existing (and aspiring) Certified Masters and Architects:

“We are contacting you to let you know we are making a change to the Microsoft Certified Master, Microsoft Certified Solutions Master, and Microsoft Certified Architect certifications. As technology changes so do Microsoft certifications and as such, we are continuing to evolve the Microsoft certification program. Microsoft will no longer offer Masters and Architect level training rotations and will be retiring the Masters level certification exams as of October 1, 2013. The IT industry is changing rapidly and we will continue to evaluate the certification and training needs of the industry to determine if there’s a different certification needed for the pinnacle of our program.”

Have a look at the blog of Neil Johnson, who also teaches during the MCSM rotation, for the full email.

As you can see, for someone who recently went through the program, that's not the kind of email I'd hoped to read any time soon. Unsurprisingly, the reactions from the community are mainly drenched in disbelief and anger, which perfectly reflects how I feel about the decision.

Many people have already expressed their displeasure. Although Twitter is where most of the discussion is happening, some others – like Paul Robichaux and Marcel van den Berg – have also blogged about their thoughts:

At this moment, all that we have is the email. Nothing more. The fact that Microsoft sent out the email right before a long weekend (Monday is apparently Labor Day in the US) doesn't really help. Besides: who does that anyway? Sending out an email like that and then absenting themselves from any discussion whatsoever? Maybe they hoped that the storm would settle down by Tuesday?

There are many reasons why Microsoft might have chosen to kill the program. Most likely it's cost-related. There's no doubt that running the MCSM/MCA program costs a lot of money; maybe too much for what they get in return from a direct revenue point of view? If so, raising the price for the certification was not an option either; it was already expensive as it was. Though I stand by my earlier statement that it's more than worth it. Nonetheless, perhaps MCSM/MCA never became as big as Microsoft hoped for? Anyway. It doesn't matter, does it?

Microsoft has lately been making a lot of rather strange decisions. A lot of IT Pros (including myself) are wondering what they (Microsoft) are trying to achieve. First, they decided to kill TechNet subscriptions; now the MCSM/MCA program. The question is what will be next…? They are, for sure, making it very difficult to keep advocating for them…

I have little hope that Microsoft will reverse its decision, but I would like a more decent explanation of why they made this decision and what they are up to next. That is the very least they can do for all those who have invested a lot of money and time to go through the program…

Until later,

– a very disappointed – Michael

Exchange 2013 News

Exchange Server 2013 reaches General Availability!

Only a few moments ago, Microsoft’s Exchange Product team announced that Exchange Server 2013 reached General Availability.

In short, this means that the product is now available for everyone to download and evaluate. Unfortunately, you can still only deploy Exchange 2013 in a greenfield installation (no prior versions of Exchange).
However, the ETA for Exchange 2010 SP3 and the Exchange 2007 coexistence rollup was adjusted to Q1 2013 instead of H1 2013. Good news!

Read more about it in the official announcement here:

Blog Exchange 2013

Load Balancing Exchange 2013 – part 1


In one of my earlier posts, I briefly explained what load balancing in Exchange 2013 looks like and how you can set up basic load balancing using a KEMP LoadMaster, based on a blog post that Jaap Wesselius wrote. Today I will elaborate a bit more on the different load balancing options you have, along with how to configure a KEMP LoadMaster accordingly. In a future blog post (which I'm working on right now), I will explain how you can configure an F5 BIG-IP LTM to achieve the same results.

Load Balancing Primer

One of the biggest changes to load balancing in Exchange 2013 is that you can now use simple, plain Layer 4 load balancing. The more complex (and more expensive) Layer 7 load balancing that you had to use in Exchange 2010 is no longer a hard requirement. That doesn't mean you cannot use Layer 7 load balancing, but more about that later (part 2).

In a nutshell, Layer 7 load balancing means that you can do application-aware load balancing. You achieve this by decrypting SSL traffic, which allows the load balancing device to 'read' the traffic that's passing through. The load balancer will analyze the headers and, based on what it finds (and how it's configured), take an appropriate action (e.g. use another virtual service or use another load balancing option).

Traffic decryption uses additional resources on your load balancer: the more users connect through the device, the (much) higher the impact usually is. Also, not all load balancers offer this functionality by default. Some require you to purchase an additional license (or hardware) before being able to decrypt SSL traffic.

For security reasons (e.g. to avoid sniffing), companies might choose to re-encrypt the traffic that is sent on to Exchange; this is called SSL Bridging. In Exchange 2010 you could also choose not to re-encrypt traffic when sending it to Exchange, a process called SSL Offloading, which required some additional configuration in Exchange. Not re-encrypting traffic saves some resources on your load balancing device, but know that – for now – SSL Offloading is not supported for Exchange 2013; you are required to re-encrypt traffic when using Layer 7 load balancing! So make sure that you size your hardware load balancers accordingly. How to size your load balancer depends on the make and model; contact the manufacturer for more information and guidance.

Load Balancing Options in Exchange 2013

Looking purely at Exchange 2013, you have several options with regard to load balancing. Each of the following options has its advantages and disadvantages. Let's take a closer look at each of them:

  • Layer 4 (single namespace)
  • Layer 7
  • Layer 4 (separate namespaces)

Layer 4 (single namespace)

This is the most basic (and restricted) but also the easiest way of setting up load balancing for Exchange 2013. A single namespace will be used to load balance all traffic for the different Exchange workloads (OWA, EAC, ActiveSync, EWS, …).


Because you are using a single namespace with Layer 4, you cannot do workload-specific load balancing and/or health checking. As a result, the failure of a single workload can impact the end user experience. To illustrate, let's take the following example:

User A opens up OWA. At the same time, his mobile device syncs (using ActiveSync) over the same URL, while his Outlook is connected (Outlook Anywhere) over – you've guessed it – the same external URL. In this example, the virtual service is configured to check the OWA virtual directory (HTTP 200 OK) to determine whether a server is available or not.

Let's see what happens if OWA fails on a server. The health check that the load balancer executes will fail; it will therefore assume that the server isn't functioning properly anymore and take the entire server out of service, by preventing traffic from being forwarded to it. Even though other workloads like Outlook Anywhere and ActiveSync might still work perfectly, the server will still be considered 'down'. From an efficiency point of view, that is not the best thing that could happen:


Now, imagine that OWA works fine, but the RPC/HTTP component is experiencing some difficulties. At this point, OWA would still return a positive health check status and the load balancer would assume the server is still healthy. As a result, traffic is still being sent to the server, including traffic for the failed RPC/HTTP component. This will definitely impact a user's experience.


As you can see, it's perhaps not the best possible experience for the end user (in all cases), but it definitely is the easiest and most cost-effective way to set up load balancing for Exchange 2013. Because you don't need advanced functionality from the load balancer, you will probably be able to fit more traffic through it before hitting its limits. Alternatively, if you don't have a load balancer yet, this could allow you to buy a smaller model and still achieve the "same" results as with L7 load balancing.

Let’s take a look at how to setup a KEMP LoadMaster for this scenario:

Note   To keep things brief and to the point, I'm not going to discuss how to set up a KEMP LoadMaster in your environment and what options you have there (e.g. single-arm setup, …). We'll only discuss how to set up the virtual services and other related configuration settings.

First, navigate to Virtual Services and click Add New:


Enter the Virtual IP Address (VIP), port, and optionally a descriptive name for the service. Then click Add this Virtual Service to create it:


Configure the Basic Properties as follows:


Because we’re not going to use Layer 7, make sure to disable Force L7 in the Standard Options. Because Exchange doesn’t need affinity (persistence), you do not have to configure anything specific for that either. You’re free to choose the load balancing mechanism (a.k.a. scheduling), but Round Robin should be OK:


I can imagine that you're now thinking: "Wait a minute, does that mean that every new connection could end up on another server in the array?" The answer is: yes. But that actually doesn't matter. Every connection that is made is automatically re-authenticated. As a result, the fact that every new connection is forwarded to another server (even mid-session) doesn't impact the end user's experience. If you'd like to know a bit more about this, I suggest you listen to episode 12 of "The UC Architects", in which Greg Taylor (Microsoft) explains how that actually works.

Anyway, we got sidetracked. Back to the LoadMaster! Because we're doing the simplest of the simplest (L4) here, we also don't need any form of SSL decryption, meaning that you can leave these options at their defaults:


Optionally, you can specify a "Sorry Server" in the Advanced Properties. This is a server to which traffic is redirected if all servers within the array become unavailable:


Note   If you click Add HTTP Redirector, the KEMP LoadMaster will automatically create a virtual service for the same VIP on TCP port 80 that redirects traffic to the virtual service on TCP port 443. This way you can enforce SSL without necessarily having to do that elsewhere.

All that’s left now is to configure the servers to direct traffic to. Click Add New under Real Servers to add an Exchange 2013 server. Once you’ve filled in the details, click Add this Real Server. Repeat this step for all Exchange 2013 Client Access Servers you want to add.


We still have to configure how the LoadMaster will check the health of the servers in the array. There are different ways to do this. In the example below, the LoadMaster will query the OWA virtual directory, and if it gets a response, the server will be deemed healthy. Alternatively, you could also opt not to specify a URL. Don't forget to click Set Check Port after entering the port number.
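If you want to see for yourself what such a health check observes, you can mimic it from any machine with PowerShell 3.0 or later. This is just a sketch, not LoadMaster configuration; "ex01.contoso.local" is a placeholder for one of your own Client Access Servers:

```powershell
# Mimic a simple HTTP(S) health check against the OWA virtual directory.
# Any response from the server (even 401/403) proves the service answers;
# no response at all is what would take the server out of the pool.
try {
    $r = Invoke-WebRequest -Uri "https://ex01.contoso.local/owa/" -UseBasicParsing
    "Healthy (HTTP {0})" -f $r.StatusCode
} catch [System.Net.WebException] {
    if ($_.Exception.Response) {
        "Responded with HTTP {0}" -f [int]$_.Exception.Response.StatusCode
    } else {
        "Unreachable: " + $_.Exception.Message
    }
}
```

Remember that this check only proves OWA answers; as explained above, other workloads on the same server could still be broken.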


Believe it or not, that's it. Your LoadMaster is now set up to handle incoming traffic and distribute the load evenly over the configured servers.



Simple Layer 4 load balancing clearly has its benefits in terms of simplicity and cost (you usually need a less powerful hardware load balancer to achieve the same result as with L7). On the other hand, the limited health-checking options could take a server offline because of problems with a single workload, while other workloads might still work fine on that server.

In the second part of this article, we’ll dive deeper into the option of using Layer 4 load balancing with multiple namespaces and how that compares to using Layer 7 load balancing, so stay tuned!



Blog Exchange 2013

Are we going to see location-based services in Exchange 2013 soon…?

I was reading through some Exchange Server 2013 documentation, when I came across the following information on the schema changes that are introduced:

It seems that new attributes now allow you to add exact location information (coordinates) for mail recipients:

As the documentation points out, these attributes could be used to determine the location of an office or meeting room, but I can think of ways they could be used to locate e.g. other resources or people. In a connected world where people do nothing but share their whereabouts using Facebook, Twitter, and Foursquare, I can see why introducing such functionality into Exchange could make sense. The new "Bing Maps" app in Outlook, which automatically maps an address from an email onto Bing Maps, can already be seen as a step in that direction.

Interestingly enough, these attributes aren't required to store a location or offer location-based services; an application could easily store this information elsewhere. However, while this information might – at first – seem irrelevant, to me it proves that Exchange is a product that is constantly evolving and trying to stay ahead of the curve. I'm looking forward to seeing whether these attributes get used in future updates of Exchange or Outlook…

Please note that these thoughts are purely my own and are by no means related to the Exchange product team. To my knowledge, they haven’t made any announcement that location-based services will be used in future updates or releases of Exchange nor have they confirmed these attributes could be used to do so.

For more information on other schema changes, have a look at the following website:

Blog Exchange 2013

Using New-MigrationBatch to perform local mailbox moves in Exchange Server 2013

Along with a whole bunch of other improvements and new features, New-MigrationBatch is one of my favorite new additions to the Management Shell.

Previously, if you were moving mailboxes between mailbox servers in e.g. Exchange 2010, you had to use New-MoveRequest to start/request a mailbox move from one database/server to the other. If you had multiple mailboxes to move at once, you had several options:

    • Use Get-Mailbox and pipe the results along to New-MoveRequest
    • Import a CSV file and pipe the results along to New-MoveRequest
    • Create an individual move request for each user

While these options are still valid in Exchange Server 2013, you now also have the ability to create a migration batch using the New-MigrationBatch cmdlet.

This cmdlet allows you to submit new move requests for a batch of users between two Exchange servers, local or remote (other forest), on-premises or in the cloud (on-boarding/off-boarding). If you have been performing migrations to Office 365, this cmdlet shouldn't be new to you, as it was already available there.

The nice thing about migration batches is that you can easily create them without having to start or complete them immediately. Although with a little effort you could have done this using New-MoveRequest as well, it's not only greatly simplified; the use of migration batches also gives you additional benefits like:

  • Automatic Reporting/Notifications
  • Endpoint validation
  • Incremental Syncs
  • Pre-staging of data

Just as in Exchange Server 2010, the switchover from one database to the other is performed during the "completion phase" of the move. During this process, the remaining items that haven't yet been copied from the source mailbox to the target mailbox are copied over, after which the user is redirected to his "new" mailbox on the target database. (For the purpose of staying on track with this article, I've oversimplified the explanation of what happens during the completion phase.)

Creating a migration batch

To create a migration batch, you’ll need to have a CSV-file that contains the email addresses of the mailboxes you are going to move. These can be any of the email addresses that are assigned to the mailbox. There is – to my knowledge – no requirement to use the user’s primary email address.

Also, the file should have a header called "EmailAddress":
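For illustration, such a file could look like this (the addresses are, of course, made up):

```
EmailAddress
john.doe@contoso.com
jane.doe@contoso.com
```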


Next, open the Exchange Management Shell and run the following cmdlet:

New-MigrationBatch -Name <name> -CSVData ([System.IO.File]::ReadAllBytes("<full path to file>")) -Local -TargetDatabase <dbname>

Running this cmdlet will create a local mailbox move between two Exchange servers in the same forest. However, it will not automatically start moving the mailboxes, because we haven't used the -AutoStart parameter. Furthermore, the moves won't be completed automatically either, because the -AutoComplete parameter wasn't used.

Note   It's important that you specify the full path to where the CSV file is stored (e.g. C:\Files\Batch1.csv). Otherwise the cmdlet will fail, because it will search for the file in the system32 folder by default.
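Putting it all together, a worked-out example might look like this (the batch name, file path, and database name are placeholders for your own values):

```powershell
# Create (but don't start) a local migration batch from a CSV file,
# using the full path to the file as noted above.
New-MigrationBatch -Name "Batch1" `
    -CSVData ([System.IO.File]::ReadAllBytes("C:\Files\Batch1.csv")) `
    -Local -TargetDatabase "MDB01"
```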

Once the batch is created (and you didn’t use the –AutoStart parameter), you can launch the moves by running the following cmdlet:

Get-MigrationBatch | Start-MigrationBatch

Please note that if you have multiple migration batches, this cmdlet will start all of them.
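To start only one specific batch, reference it by name instead ("Batch1" here stands for whatever name you gave the batch when creating it):

```powershell
# Start a single migration batch by its identity
Start-MigrationBatch -Identity "Batch1"
```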

Polling for the status of a migration

You can query the current status of a migration on a per-mailbox basis using the Get-MigrationUserStatistics cmdlet. The cmdlet will return the current status of the mailbox being moved and the number of items that have been synced/skipped so far.

Get-MigrationUser | Get-MigrationUserStatistics


Note   Alternatively, you can also use the -NotificationEmails parameter when creating the migration batch. This parameter allows you to specify an admin's email address to which a status report is automatically sent. If you don't use this parameter, no report is created or sent.
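As a sketch, creating a batch with notifications enabled could look like this (again, the batch name, file path, database, and email address are placeholders):

```powershell
# Create a batch that mails a status report to the specified admin address
New-MigrationBatch -Name "Batch2" `
    -CSVData ([System.IO.File]::ReadAllBytes("C:\Files\Batch2.csv")) `
    -Local -TargetDatabase "MDB01" `
    -NotificationEmails "exchadmin@contoso.com"
```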

Completing the migration

If you didn't specify the -AutoComplete parameter when creating the migration batch, you will have to start the "completion phase" manually. This can easily be done using the Complete-MigrationBatch cmdlet.

Get-MigrationBatch | Complete-MigrationBatch

When you take a look at the migration statistics, you’ll see that the status will be “Completing”:


Once the mailbox moves have completed successfully, the status will change to "Completed":


As you can see, New-MigrationBatch will certainly prove useful (e.g. if you want to pre-stage data without performing the actual switchover). Of course, there are other use cases as well: it's the perfect companion for cross-forest moves and moves to/from Office 365, as it contains numerous parameters that can make your life easier. For instance, the Test-MigrationEndpoint cmdlet can be used to verify whether the remote host (to/from which you are migrating) is available and working correctly. This is especially useful in remote mailbox moves (cross-forest) or between on-premises and the cloud.

If you want to find out more about the cmdlet, go and have a look at the following page:

Alternatively, you could also run Get-Help New-MigrationBatch -Online from the Exchange Management Shell, which will take you to the same page!

Until later!


Exchange 2013 How-To's PowerShell

The “High-Availability” concept…-part 2

Welcome to this second article on my view of high availability. In the first part, we took a look at what high availability is and what the potential impact of requiring a higher availability rate might be.

Today, we’re going to focus on the question: “what should be measured?” and how you take the answer to that question and build your solution.


I ended my previous article by explaining that you'd want to measure functionality rather than the building blocks of your application architecture. Regular monitoring won't get you far there, though. Instead, so-called synthetic transactions (tests which mimic a user's actions, such as logging into OWA) allow you to test functionality end-to-end. This allows you to determine whether or not the functionality being tested is working as expected. Underneath, there is still a collection of building blocks which may or may not be aware of each other's presence. This is something you – as an architect/designer/whatever – should definitely be cautious about.

All too often I come across designs that do not take inter-component dependencies into account: from an application point of view everything might seem operational, but that does not necessarily reflect the situation as it is for an end user…

For example, your Exchange services might be up and running and everything might seem OK from the backend; if the connection between the server- and client network is broken or otherwise not functioning, your end-users won’t be able to use their email (or perhaps other services that depend on email) anyway.

This means that – when designing your solution – you should try to (mentally) reconstruct the interaction between the end user and the application (in this case: Exchange). While doing so, write down all the components that come into play. Soon you'll end up with a list that also involves elements that – at first – aren't really part of your application's core architecture:

  • Networking (switches, routers, WAN-links, …)
  • Storage (e.g. very important in scenarios where virtualization of Exchange is used)
  • Physical hosts (hypervisors, clustering, …)
  • Human interaction
You'll find out that the above elements play an important part in your application architecture, which automatically makes them sort of part of the core architecture after all…

Negotiate your SLAs

Whether or not these 'external' elements are your responsibility probably depends on how your IT department is organized. I can imagine that if you have no influence at all on e.g. the networking components, you don't want to be held responsible if something ever goes wrong at that layer. While it is still important to know which components might influence the health of your application, it wouldn't be a bad idea to leave these components out of the SLAs. In other words: if an outage of your application was due to the network layer, it wouldn't count towards your SLA.

In my opinion, that defeats the entire purpose of defining SLAs and trying to make IT work as a service for the business. After all, the business doesn't care what caused an outage; they only care about how long it takes you (or someone else) to get the service/functionality back up and running.

Now that I've brought that up, imagine the following example: one of the business requirements states that mail to recipients outside your company should always be delivered within a certain period of time (yes, I deliberately left the timeframe out, because it's irrelevant to the point I'm trying to make). When doing a component breakdown, you could come up with something similar to this (high-level):

  • Client network
  • Mailbox Server(s)
  • Hub Transport Server(s)
  • Server network
  • WAN-links (internet)

While the first four components might lie within your reach to manage and remediate in case of an outage, the fifth (the WAN link) usually isn't. So if it takes your ISP 8 hours to solve an issue (because their SLA allows it, for instance), you might think twice before accepting 99.9% uptime in your own SLA… If that isn't an option, you could try finding an ISP who can solve your issues quicker, or you could install a backup internet connection. Bottom line: you also need to take external factors into account when designing your solution.

    In some cases, I've seen that WAN links (or outages caused by them) were marked as an exception to the SLA, simply because the probability of an outage was very low (and the cost of an additional backup link was too high).
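To make those uptime percentages tangible, here's a quick back-of-the-envelope calculation of the yearly downtime budget each availability target leaves you – a sketch you can run in any PowerShell prompt:

```powershell
# Translate an availability target into an allowed downtime budget per year
$hoursPerYear = 365.25 * 24   # roughly 8,766 hours in a year
foreach ($target in 0.99, 0.999, 0.9999) {
    $budget = (1 - $target) * $hoursPerYear
    "{0:P2} uptime leaves about {1:N1} hours of downtime per year" -f $target, $budget
}
```

For example, 99.9% uptime leaves you less than nine hours of downtime per year – which is exactly why an ISP with an 8-hour fix window eats up nearly your entire budget.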

Probability vs. impact

When you are designing your solution, you don't always have to account for every little thing that could go wrong, simply because you cannot account for everything that can go wrong (Murphy?). While your design definitely should take into account whatever it can, it should also pay attention to the cost-effectiveness of the solution. Remember the graph in the first part, which showed that costs tend to grow exponentially when trying to achieve a higher availability rate?

This means that sometimes, because the cost to mitigate a single point of failure or risk cannot be justified, you'll have to settle for less. In such a case, you'd want to assess the probability of a potential error/fault and how it might affect your application. If both the probability that it occurs and the impact on your application are low, it's sometimes far more interesting to accept a risk than to try to mitigate/solve it. On the other hand, if there's an error which is very likely to occur and might knock down your environment for a few hours, you might reconsider and try solving that.

Solving such an issue can be done in various ways (depending on what the error/fault is): either increase the application's resiliency or solve the issue at the layer where it occurs. For instance: if you've got a dodgy (physical) network in one of your two sites, you might rethink your design to make more use of the site that has a proper network, or you could try solving the issues at the network layer to make it less dodgy (which I would prefer).


Although I'm convinced that nothing I wrote here surprised you, by now you should realize that creating a highly available (Exchange) solution takes proper planning. There are far more elements that come into play than one might think at first. Also keep in mind that I only touched on these different aspects superficially; when dealing with potential risks like human error, other things come into play, such as defining a training plan to lower the risk of human error.

I personally believe that the importance of these elements will only grow in the future. I'm sure you've already heard of the phenomenon "IT as a service"? When approaching the aspect of high availability, try imagining that you're the electricity supplier and the business is the customer (which they actually are). You don't care how electricity gets to your home or – if it doesn't – why it doesn't. All you care about is having electricity in the end…

Extra resources

Thanks to one of the comments on my previous article, I found out that co-UCArchitect Michel de Rooij also wrote an article on high availability a while back. You should check it out!


Configuring High Availability for the Client Access Server role in Exchange Server 2013 Preview


All the information in this blog post is subject to change, as Exchange Server 2013 is still under construction. Although it's not very likely that big changes will occur between now and RTM, they might.

Following one of my previous articles, in which I described how you could configure a Database Availability Group to achieve high availability for the Mailbox server role, we will now take a look at how to configure high availability for the Client Access Server role.

CAS Array

To achieve high availability, you create a load-balanced array of Client Access Servers, just like in Exchange Server 2010. Unlike before, however, Layer 4 load balancing now becomes a viable option – and going without a load balancing device entirely would only make sense in the smallest deployments, where there's no budget for one.

Layer 4 load balancing only takes the IP address (and TCP port) into account. And yes, you are no longer required to configure "affinity" – the process whereby a connection, once established, had to be persisted through the same Client Access Server. This is because the CAS role in Exchange Server 2013 doesn't do any data rendering anymore: everything happens on the backend (Mailbox servers).

I hear you thinking: does this mean we could use DNS load balancing (a.k.a. round robin)? The answer is yes and no. Yes, because it will load-balance between multiple Client Access Servers; no, because if a server fails, you'd have to remove it (manually) from DNS and wait for the record to time out on all the clients. While this might be a cost-effective way to get load balancing and a very, very basic form of high availability, it is not a really viable solution for most deployments…

Ever since the CAS Array was first introduced, it has been subject to quite some misconceptions. A lot of them were addressed by Brian Day in a very interesting article he wrote. What I find is that people tend to mix up the RPC Client Access Array and the load-balanced array used for HTTP-based traffic. Yes, the use of the term CAS Array can be a little confusing. No, they're not the same!

Now, since Exchange Server 2013 dropped RPC-over-TCP, I no longer see the purpose of creating the RPC Client Access Array object (New-ClientAccessArray). Instead, it suffices to configure multiple Client Access Servers with the same internal hostname for Outlook Anywhere.

To understand what happens, let’s take a look at the following examples:

In the case where you're using two Client Access Servers in the same AD site, Exchange will by default "load balance" traffic between the two endpoints. This means that the first request will go to CAS1, the second to CAS2, the third to CAS1, and so on. While this does provide some sort of load balancing, it doesn't really provide high availability: once Outlook is connected to a CAS, it will keep trying to connect to that same server, even after the server goes down. Eventually it will try connecting to the other CAS, but in the meantime your Outlook client will be disconnected.


If we add a load balancer, we need to configure the internal hostname for Outlook Anywhere to a value shared between the Client Access Servers. This FQDN would then point to the VIP of the load balancer which, in turn, takes care of the rest. Because we're using a load balancer, it will automatically detect a server failure and redirect incoming connections to the surviving node. Since there is no affinity required, this failover happens transparently to the end user:


As explained before, this load balancer could be anything from simple DNS load balancing, to WNLB, to a full-blown hardware load balancer with all the bells and whistles. However, in contrast to Exchange 2010, most of the advanced options are no longer necessary…

Configuring Outlook Anywhere

To configure the internal hostname for Outlook Anywhere, run the following command for each Client Access Server involved:

Get-OutlookAnywhere -Server <server> | Set-OutlookAnywhere -InternalHostname <fqdn>
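Afterwards, you can verify that each server picked up the change by querying the configuration again:

```powershell
# List the Outlook Anywhere hostnames configured on each Client Access Server
Get-OutlookAnywhere | Format-List ServerName,InternalHostname,ExternalHostname
```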

Configuring the Load Balancer

As I explained earlier, Layer 4 is now a viable option. Although this could mean you'd just be using DNS load balancing, you would typically want to use some load balancing device (physical or virtual).

The benefit of using a load balancer over e.g. WNLB is that these devices usually give you more options for health checking the servers/services you're load balancing, allowing better control over the load balancing process. For example: you could check for a particular HTTP response code to determine whether a server is running or not. It definitely beats using simple ICMP pings…!

The example below is based on the load balancer in my lab: a KEMP Virtual Load Master 1000. As you will see, it’s setup in the most basic way:

I've configured no persistence and – because it's a lab – I'm only checking the availability of the OWA virtual directory on the Exchange servers. Alternatively, you could do more complex health checks. If you're looking for more information on how to configure a KEMP load balancer, I'd suggest you take a look at Jaap Wesselius' blogs here and here. Although these articles describe the configuration of a LoadMaster in combination with Exchange 2010, the process itself (except for the persistence settings, etc.) is largely the same for Exchange Server 2013. Definitely worth the read!




Exchange 2013 How-To's