In the first part of this article, we talked about Load Balancing in general and took a closer look at what the advantages and disadvantages of simple layer 4 load balancing for Exchange 2013 were. Today we’ll dive a bit deeper into the remaining two ways of load balancing Exchange 2013: layer 4 with multiple namespaces and ‘traditional’ layer 7.
Layer 7 Load Balancing
Layer 7 load balancing offers some additional functionalities over Layer 4. Because traffic is being decrypted, the load balancer can now ‘read’ (‘understand’) the traffic that is coming through and take appropriate actions based on the type (or destination) of the traffic.
By decrypting traffic, the load balancer can read the destination for a packet, which allows you to make a distinction between traffic for the different Exchange workloads while still using a single virtual service. Based on the type of workload, traffic could e.g. be sent to a different set of servers. However, this was not the most important reason to do Layer 7 load balancing. In Exchange 2010, traffic coming from a certain client had to be persisted to the same endpoint (= the same Client Access Server). This meant that the initial connection could be sent to just about any CAS, but once the session was established, subsequent packets for that session had to be handled by the same CAS.
A load balancer typically has multiple ways to maintain this client <> server relationship. Depending on the make and model of your load balancer, you might see the vendor refer to this relationship as “persistence”, “stickiness”, etc. The most commonly used methods are:
- Source IP
- Session ID
- Session Cookie
For a load balancer to be able to identify these things, it needs to be able to read the traffic, which forces the traffic to be decrypted – except when using Source IP. Although Source IP affinity doesn’t require decryption, in some scenarios this type of affinity can cause an uneven distribution of load, especially when traffic is coming from behind a NAT device. Consider the following scenario:
Multiple internet-connected devices connect to your on-premises environment and, before hitting the load balancers, they go through a router/firewall or other NAT device which ‘changes’ (masks) the source IP addresses (126.96.36.199 and 188.8.131.52) with a single internal address (10.20.30.40). If your load balancers are configured to persist connections based on the source IP address, all connections will potentially end up at the same CAS. This is – of course – something you’d want to avoid. Otherwise, what purpose would your load balancer have? Right?
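To make the NAT problem concrete, here is a minimal sketch of hash-based source-IP affinity. The hashing scheme and the CAS names are assumptions made for illustration, not how any particular load balancer implements it:

```python
import hashlib

# Hypothetical backend pool; a real load balancer's hashing scheme may differ.
servers = ["cas-01", "cas-02", "cas-03"]

def pick_server(source_ip: str) -> str:
    """Source-IP affinity: map a client IP to a backend by hashing it."""
    digest = hashlib.md5(source_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

# Two distinct internet clients may land on different servers...
print(pick_server("126.96.36.199"), pick_server("188.8.131.52"))

# ...but once NAT masks both behind 10.20.30.40, every connection
# hashes to the same key and therefore the same CAS:
print(pick_server("10.20.30.40"), pick_server("10.20.30.40"))
```

Because the load balancer only ever sees the single NAT address, the hash always yields the same backend, which defeats the purpose of distributing load.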
For this “problem” there are a few solutions:
You could disable NAT, which would reveal the client’s original IP address to the load balancer. Unfortunately, this isn’t always possible and depends on your network topology/routing infrastructure.
You could change the configuration of your load balancer to use something other than the Source IP to determine whether a connection should be persisted or not.
In the latter case persistence based on the SSL Session ID is a good alternative. The load balancer will examine every “packet” that flows through it, read the session ID and if it finds a match for a previously created session, it will send that packet to the same destination as before. While this works brilliantly, it will induce a higher load on your load balancer because:
the load balancer needs to inspect each packet flowing through, consuming more CPU cycles
the load balancer needs to maintain a bigger “routing table”, which consumes more memory. By that I mean a table where the Session ID is mapped to a destination server.
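The “routing table” described above can be sketched as a simple mapping from Session ID to destination server. This is an illustrative model, not a vendor implementation; the backend names are made up:

```python
from itertools import cycle

# Round-robin over a hypothetical backend pool.
backends = cycle(["cas-01", "cas-02"])
session_table = {}  # SSL Session ID -> destination server

def route(session_id: str) -> str:
    if session_id not in session_table:
        # New session: pick the next backend and remember the choice.
        session_table[session_id] = next(backends)
    # Known session: keep sending it to the same backend as before.
    return session_table[session_id]

first = route("sess-aaa")          # new session
route("sess-bbb")                  # another new session
assert route("sess-aaa") == first  # persisted to the same server
```

Every entry in `session_table` costs memory, and every packet costs an inspection plus a lookup, which is exactly why this persistence method puts more strain on the load balancer than Source IP affinity.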
As mentioned earlier, because you are decrypting the traffic, you can e.g. determine from the packet what the destination URL is. In essence, this allows you to define multiple virtual services (one for each workload) and make the load balancer choose what virtual service to forward a packet to. In this specific example, the virtual services are “hidden” for the end user.
Let’s pour that into an image; things might become clearer that way:
For external clients, there is still a single external URL (VIP) they connect to, but ‘internally’ there is a separate virtual service for each workload. Whenever a packet reaches the load balancer, it will be read and, based on the destination URL, the appropriate virtual service is picked. The biggest advantage is that each virtual service can have its own set of health criteria. This also means that – because workloads are split – if e.g. OWA fails on one server, it won’t affect other workloads for that server (as they belong to a different virtual service). While OWA might be down, other protocols remain healthy and the load balancer will continue forwarding packets to that server for those workloads.
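The virtual-service selection boils down to matching the decrypted request path against the Exchange virtual directories. A minimal sketch, where the prefixes follow Exchange 2013’s default virtual directories but the virtual service names are made up:

```python
# Map Exchange 2013 virtual directory prefixes to hypothetical
# "hidden" virtual services on the load balancer.
VIRTUAL_SERVICES = {
    "/owa": "vs-owa",
    "/ecp": "vs-ecp",
    "/ews": "vs-ews",
    "/microsoft-server-activesync": "vs-eas",
    "/rpc": "vs-oa",  # Outlook Anywhere
    "/autodiscover": "vs-autodiscover",
}

def pick_virtual_service(path: str) -> str:
    """Pick a virtual service based on the (decrypted) request path."""
    for prefix, service in VIRTUAL_SERVICES.items():
        if path.lower().startswith(prefix):
            return service
    return "vs-default"

assert pick_virtual_service("/owa/auth/logon.aspx") == "vs-owa"
assert pick_virtual_service("/Microsoft-Server-ActiveSync?Cmd=Sync") == "vs-eas"
```

Note that this routing is only possible because the traffic is decrypted first – at Layer 4 the path would still be inside the encrypted payload.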
With this in mind, we can safely conclude that Layer 7 load balancing clearly offers some benefits over simple Layer 4. However, it will cost you more in terms of hardware capacity for your load balancer. Given that a decently sized load balancer can cost a small fortune, it’s always nice to explore what other alternatives you have. On top of that, this kind of configuration isn’t really “easy” and requires a lot of work from a load balancer’s perspective. I’ll keep the configuration steps for a future article.
Layer 4 load balancing with multiple namespaces
As I showed in the first part of this article, Exchange 2013 greatly simplifies load balancing compared to Exchange 2010. Unfortunately, this simplification comes at a cost: you lose the ability to do a per-protocol health check when using Layer 4. And let’s face it: losing functionality isn’t something you like, right?
Luckily, there is a way to have the best of both worlds…
Combining the simplicity of Layer 4 and finding a way to mimic the Layer 7 functionality is what the fuss is all about. Because, when using Layer 4, your load balancer has no clue what the endpoint for a given connection is, we need to find a way to let the load balancer know what the endpoint is without actually needing to decrypt traffic.
The answer is in fact as simple as the idea itself: use a different virtual service for each workload but this time with a different IP address for each URL. The result would be something like this:
Each workload now has its own virtual service and therefore you also get a per-workload (per-protocol) availability. This means that, just as with Layer 7, the failure of a single workload on a server has no immediate impact on other workloads while at the same time, you maintain the same level of simplicity as with simple Layer 4. Sounds cool, right?!
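Laid out as data, the namespace scheme looks something like the sketch below. Every host name and VIP is hypothetical; the point is simply one name and one address per workload:

```python
# Layer 4 with multiple namespaces: one DNS name and one VIP per workload.
# All host names and addresses below are made-up examples.
namespaces = {
    "owa.contoso.com":          "192.0.2.10",  # Outlook Web App
    "oa.contoso.com":           "192.0.2.11",  # Outlook Anywhere
    "ews.contoso.com":          "192.0.2.12",  # Exchange Web Services
    "eas.contoso.com":          "192.0.2.13",  # ActiveSync
    "autodiscover.contoso.com": "192.0.2.14",  # Autodiscover
}

# Each VIP fronts its own virtual service, so a failed health check on,
# say, OWA only takes a server out of the OWA pool.
assert len(set(namespaces.values())) == len(namespaces)
```

The load balancer never needs to decrypt anything: the destination IP address alone tells it which workload a connection belongs to.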
Obviously, there is a “huge” downside to this story: you potentially end up with a bunch of different external IP addresses. Although there are also solutions for that, it’s beyond the scope of this article to cover those.
Most people – when I tell them about this approach – don’t like the idea of exposing multiple IPs/URLs to an end user. Personally, I don’t see this as an issue. “For my users it’s already hard enough to remember a single URL”, they say. Who am I to argue? However, when you think about it, there’s only one URL an end user will actually be exposed to, and that’s the one for OWA. All other URLs are configured automatically using Autodiscover. So even with these multiple URLs/IPs, your users really only need to remember one.
Of course, there’s also the element of the certificate (it now needs to cover multiple names), but depending on the load balancer you buy, that could still be cheaper than the L7 load balancer from the previous example.
Configuring Layer 4 load balancing with different namespaces is done the same way as you configure a single namespace, only now you have to do it multiple times. The difference lies in the health checks you perform for each protocol, and those are free for you to choose. For now, the health-check options are relatively limited (although that also depends on the load balancer you use). But the future might hold some interesting changes: at MEC, Greg Taylor explained that Microsoft is working with some load balancer vendors to make their load balancers capable of “reading” the health status of an Exchange server as produced by Managed Availability. This would mean that your load balancer no longer has to perform any specific health checks itself and can rely on the built-in mechanisms in Exchange. Unfortunately there isn’t much more information available right now, but rest assured I’m following this closely.
Differentiator: health checks
As said before, the configuration is identical to Layer 4 without multiple namespaces. For more information on how to configure a virtual service with a KEMP load balancer, please take a look at the first part of this article.
The key difference lies within the difference in health checks you perform against each workload. Even then, there is a (huge) difference in what load balancers can do.
Even though I like KEMP load balancers a lot, they are – compared to e.g. F5 – limited in their health checks. From a KEMP load balancer perspective, your health check rule for e.g. Outlook Anywhere would look something like this:
From an F5 perspective, a BIG-IP LTM allows you to “dive deeper” into the health checks. You can define a user account that is used to authenticate against rpcproxy.dll. Only if that authentication fails will the service be marked down, rather than relying on a “simple” HTTP GET.
On a side note: for about 90% of the deployments out there, the simple health check method proves more than enough…
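A “simple” per-workload probe amounts to little more than an HTTP GET and a status-code check. A minimal sketch, assuming the per-protocol healthcheck.htm page that Exchange 2013 publishes via Managed Availability; the host name is hypothetical:

```python
import http.client

def workload_healthy(host: str, port: int, path: str) -> bool:
    """Mark a workload up only if the probe URL answers 200 OK."""
    conn = http.client.HTTPSConnection(host, port, timeout=5)
    try:
        conn.request("GET", path)
        return conn.getresponse().status == 200
    except OSError:
        # Connection refused, timeout, TLS failure, ... -> mark down.
        return False
    finally:
        conn.close()

# e.g. workload_healthy("oa.contoso.com", 443, "/rpc/healthcheck.htm")
```

A load balancer runs a probe like this per virtual service on an interval, pulling a server out of that workload’s pool when the probe fails.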
Below you’ll find a table in which I summarized the most important pros and cons per option.
| | Layer 4 (Simple) | Layer 4 (Multiple Namespaces) | Layer 7 |
|---|---|---|---|
| Pros | Single namespace and IP; very simple setup; low load on the load balancer | Per-workload health checks; still simple to set up; low load on the load balancer | Per-workload health checks with a single namespace and IP |
| Cons | No per-protocol health check: one failing workload marks the whole server down | Multiple (external) IP addresses/URLs; certificate must cover all names | Traffic must be decrypted; complex configuration; higher CPU/memory load; pricier hardware |
If it were up to me, I’d go for Layer 4 with multiple namespaces. It’s clean, it’s simple to set up (both from a load balancer and an Exchange point of view) and it will most probably save you some money. Unless you only have a limited number of (external) IP addresses available, this seems like the way forward to me.
Nonetheless, this doesn’t mean you should kick out your fancy load balancer if you already have one. My opinion: use what you’ve got and, above all: keep it simple!