Rewriting URLs with KEMP LoadMaster

Earlier this month, I wrote an article on how you can use a KEMP LoadMaster to publish multiple workloads onto the internet behind a single IP address, using a feature called content switching.

Building on the same content-switching principle, KEMP LoadMasters also allow you to modify traffic while it's flowing through the device. More specifically, this article will show you how to rewrite URLs using a LoadMaster.

Rewriting URLs is quite common. The goal is to 'send' people to a different destination from the one they are trying to reach. This could be the case when you are changing your domain name, or perhaps as part of a merger, when you want the other company's traffic to be redirected to your website automatically.

Let's start with a simple example we are all familiar with: you want to redirect traffic from the root of your domain to /owa. The goal is that when someone enters e.g. webmail.domain.com, that person is automatically redirected to webmail.domain.com/owa. Although Exchange 2013 already redirects traffic from the root to the /owa virtual directory out of the box, the idea here is to illustrate how you can do it with KEMP instead of IIS. As a result, you could just as easily send someone from one virtual directory (e.g. /test) to another (e.g. /owa).
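To make the mechanics concrete, here is roughly what the rewrite looks like at the HTTP level (an illustrative sketch; the host name is just an example):

    Request as it arrives at the LoadMaster:
        GET / HTTP/1.1
        Host: webmail.domain.com

    Request as forwarded to the Exchange server after the rewrite:
        GET /owa HTTP/1.1
        Host: webmail.domain.com

Note that this is an in-flight rewrite of the request URL: unlike an HTTP redirect, the client's address bar does not change.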

How it works – Mostly

Just as in my previous article, everything revolves around content switching. However, next to the content matching rules, KEMP's LoadMasters allow you to define so-called header modification rules, as shown in the following screenshot:

image

By default, it suffices to create such a header modification rule and assign it to a virtual service. By doing so, you will rewrite traffic to the root (or the /test virtual directory) and people will end up at the /owa virtual directory.

To create a header modification rule, perform the following steps:

  1. Log in to your LoadMaster
  2. Click on Rules & Checking and then Content Rules
  3. At the top of the Content Rules page, click Create New…
  4. Fill in the details as shown in the following screenshot (the values are also summarized after these steps):

    image

  5. Click Create Rule.
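In case the screenshot doesn't render for you, the rule from this example boils down to the following values (the rule name is free to choose; field labels may differ slightly between firmware versions):

    Rule Name:     RedirectRootToOWA
    Rule Type:     Modify URL
    Match String:  /^\/$/
    Modified URL:  /owa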

Now that you have created the header modification rule, you need to assign it to the virtual service on which you want to use it:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules.
  4. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.

  5. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    image

That’s it. All traffic that hits the LoadMaster on the root will now automatically be rewritten to /owa.

How it works with existing content rules (SUBVSs)

When you are already using content rules to ‘capture’ traffic and send it to a different virtual directory (as described in my previous article), the above approach won’t work – at least not entirely.

While creating the header modification rule and adding it to the virtual service remain exactly the same, there is an additional task you have to perform.

First, let me explain why. When you are already using content rules, the LoadMaster uses these rules to evaluate traffic and make the appropriate routing decision. As a result, these content rules are processed before the header modification rules. However, when the LoadMaster doesn't find a match in one of its content matching rules, it will not process the header modification rule – at least not when you are trying to modify a virtual directory. As I will describe later in this article, it will still process host-header modifications, though.

So, in order for the LoadMaster to perform the rewrite, the initial destination has to be matched by a content rule on the SubVS you want to redirect traffic to. Let's take the following example: you are using content rules to direct traffic from a single IP address to different virtual directories. At the same time, you want traffic for a non-existent virtual directory (e.g. /test) to be redirected to /owa.

First, you start off again by creating a header modification rule. The process is the same as outlined above. The only thing that changes is that the match string will now be /^\/test$/ instead of /^\/$/:

image

Next, create a new content rule, but this time make it a content matching rule, as follows:

image
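Again, in case the screenshot is missing: the content matching rule from this example looks approximately as follows (field labels as they appear in the LoadMaster UI; verify against your firmware version):

    Rule Name:     test
    Rule Type:     Content Matching
    Match Type:    Regular Expression
    Match String:  /^\/test$/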

Next, we’ll make the changes to the virtual service:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules.
  4. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.

  5. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    image

Now, we still need to add the content matching rule to the /owa SubVS:

  1. In the properties of the Virtual Service, go down to SubVSs and click the button in the Rules column for the OWA SubVS:

    image

  2. From the drop-down list, select the rule you created earlier (“test”) and click Add:

    image

  3. You should now have 2 rules in the SubVS:

    image

That’s it. If you now navigate to the /test virtual directory, the traffic will automatically be rewritten to /owa.

What if I want to redirect more than a single virtual directory to /owa?

In theory, you would need to create a header modification rule for each of the virtual directories, plus a content matching rule for each. However, if you are going to redirect multiple virtual directories to /owa, you can also use the "default" content rule, which acts as a catch-all. As a result, instead of creating and adding a separate content matching rule for each virtual directory, you just create a header modification rule for each of them and add the default content rule to the /owa SubVS as shown below:

image
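To make that concrete, here's a sketch of the combined configuration (the extra directory names are invented for the example):

    Header modification rules (Rule Type: Modify URL), one per directory:
        /^\/test$/     ->  /owa
        /^\/webmail$/  ->  /owa
        /^\/mail$/     ->  /owa

    Content rule assigned to the /owa SubVS:  default  (catch-all)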

What about rewriting host names?

Rewriting host names is handled a tad differently from e.g. virtual directories. Unlike the latter, host header modifications are processed before the content matching rules for the virtual services. As a result, it suffices to create a modification rule and apply it to the virtual service. To create a host header modification rule, do the following:

  1. Go to Rules & Checking and then Content Rules
  2. Click Create New… and create the rule as follows:

    image
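Spelled out, the host header rule from this example comes down to something like this (a sketch; the rule name is free to choose and field labels may vary by firmware version):

    Rule Name:     RewriteHost
    Rule Type:     Replace Header
    Header Field:  Host
    Match String:  /test.domain.com/
    Value:         webmail.domain.com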

Once you have created the rule, add it to the HTTP Header Modification rules on the virtual service and you're done. Traffic that hits test.domain.com will now automatically be rewritten to webmail.domain.com. It's as easy as that.

Conclusion

Rewriting URLs with KEMP's LoadMaster is relatively easy. You only have to watch out when you are already using content switching rules, as I described earlier.

Until later,

Michael


You get an error “the connection to the server <servername> could not be completed” when trying to start a hybrid mailbox move in Exchange 2013.

As part of running through the New Migration Batch wizard, the remote endpoint (the on-premises Exchange server) is tested for availability. After this step runs, the following error is displayed:

image

By itself, this error message does not reveal much about what might be causing the connection issue. In the background, the wizard actually leverages the Test-MigrationServerAvailability cmdlet. If you run this cmdlet yourself, you will get a lot more information:

image
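If you want to run the test yourself from Exchange Online PowerShell, it looks roughly like this (a sketch; the endpoint name is a placeholder and parameter sets can vary between service builds):

    # On-premises admin credentials (DOMAIN\user)
    $cred = Get-Credential
    # Test the remote move endpoint that the wizard uses in the background
    Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer 'hybrid.domain.com' -Credentials $cred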

In this particular case, you'll see that the issue is caused by a 501 response from the on-premises server. The question is, of course: why? We had recently moved a number of mailboxes without encountering this issue. The only thing that had changed between then and now was that we had reconfigured our load balancers in front of Exchange to use Layer 7 instead of Layer 4. So that is why I shifted my attention to the load balancers.

While reproducing the error, I took a look at the "System Message File" log on the KEMP load balancer. This log can be found under Logging Options, System Log Files. Although I didn't expect to see much here, the following message drew my attention:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.2.130:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

A quick lookup confirmed that the 157.56.251.92 address was indeed coming from Microsoft. So now I knew for sure that something was wrong here. A quick search on the internet brought me to the following article, which suggested changing the 100-Continue Handling setting in the Layer 7 configuration of the LoadMaster: http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html

After changing the value from its default (RFC Conformant), I could successfully complete the wizard and start a hybrid mailbox move. So the "workaround" was found. But I was wondering: why does the LoadMaster *think* that the request coming from Microsoft is non-RFC compliant?

The first thing I did was ask Microsoft if they could clarify what was happening. I soon got a reply that – from Microsoft's point of view – they were respecting the RFC documentation regarding the 100 (Continue) status. No surprise there.

After reading the RFC specification, I decided to take some network traces to find out what was happening and maybe understand how the 501 response was triggered. The first trace I took was one from the LoadMaster itself. In that trace, I could actually see the following:

image

Effectively, Office 365 was making a call to Exchange Web Services using the 100-continue status. As described in the RFC, the on-premises Exchange server should now respond appropriately to the 100-continue status. Instead, we can see that in the entire SSL conversation, exactly 5 seconds go by, after which Office 365 makes another call to the EWS virtual directory without having received a response to the 100-continue status. At that point, the KEMP LoadMaster generated the "501 Invalid Request".
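Put side by side, the expected and the observed conversations look roughly like this (heavily simplified; real requests carry many more headers):

    Expected, per RFC 2616 section 8.2.3:
        Office 365:  POST /EWS/mrsproxy.svc HTTP/1.1
                     Expect: 100-continue
        Exchange:    HTTP/1.1 100 Continue
        Office 365:  <request body>
        Exchange:    HTTP/1.1 200 OK

    Observed through the LoadMaster:
        Office 365:  POST /EWS/mrsproxy.svc HTTP/1.1
                     Expect: 100-continue
        (no 100 Continue is forwarded; ~5 seconds pass)
        Office 365:  <request body anyway>
        LoadMaster:  501 Invalid Request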

I turned back to the (by the way, excellent) support guys from KEMP and explained my findings to them. Furthermore, when I tested without Layer 7, or even without a LoadMaster in between, there wasn't a delay and everything was working as expected. So I knew for sure that the on-premises Exchange 2013 server was actually replying correctly to the 100-continue status. As a matter of fact, without the KEMP LM in between, the entire 'conversation' between Office 365 and Exchange 2013 on-premises followed the RFC rules perfectly.

So, changing the 100-continue setting from "RFC Conformant" to "Ignore Continue-100" made sense, as KEMP would now just ignore the 100-continue "rules". But I was still interested in finding out why the LM thought the conversation was not RFC conformant in the first place. And this is where it gets interesting. There is this particular statement in the RFC documentation:

"Because of the presence of older implementations, the protocol allows ambiguous situations in which a client may send "Expect: 100-continue" without receiving either a 417 (Expectation Failed) status or a 100 (Continue) status. Therefore, when a client sends this header field to an origin server (possibly via a proxy) from which it has never seen a 100 (Continue) status, the client SHOULD NOT wait for an indefinite period before sending the request body."

In fact, that is exactly what was happening here. Office 365 (the client) sent an initial request with the Expect: 100-continue header and waited for a response. It waits for exactly 5 seconds and then sends the payload, regardless of whether it has received a response. In my opinion, this falls within the boundaries of the scenario described above. However, talking to the KEMP guys, there seems to be a slightly different interpretation of the RFC, which caused this mismatch and therefore led the KEMP to issue the 501.

In the end, there is still something we haven't worked out entirely: why the LM doesn't send the 100 (Continue) status back to Office 365, even though it receives it almost instantaneously from the Exchange 2013 server.

All in all, the issue was resolved rather quickly, and we know that changing the L7 configuration settings in the LoadMaster solves the issue (this workaround was also confirmed as the final solution by KEMP support, by the way). Again, changing the 100-continue handling setting to "Ignore" doesn't render the configuration (or the communication between Office 365 and Exchange on-premises) non-RFC compliant. So there's no harm in changing it.

I hope you found this useful!

-Michael


You get an error “StalledDueToMailboxLock” when moving mailboxes to Office 365 in a hybrid configuration.

Recently, a customer of ours experienced an issue while moving mailboxes to Office 365 in a hybrid configuration. After a move request was created, it would end up with the status "StalledDueToMailboxLock" after a short while. The move request would then stay in that state for an indefinite amount of time.

Although I wasn't able to test everything myself first-hand, one mailbox move was reported to proceed after being in the "StalledDueToMailboxLock" state for about two days. Obviously, this isn't the pace at which you'd want your mailboxes to be moved to Office 365, so some further investigation was needed.

Before any "deep" troubleshooting was done, we checked whether there was something that could cause a mailbox lock, such as a process blocking access to the mailbox like a misconfigured anti-virus scanner. However, we soon found out that everything seemed normal…

Troubleshooting

Whenever a move request fails or seems to hang in a specific state for a while, it's always a good idea to take a look at the move request statistics using Get-MoveRequestStatistics:

Get-MoveRequest <Identity> | Get-MoveRequestStatistics -IncludeReport | fl

The command above will immediately output the results on-screen. However, for requests that have been running for a while, the amount of information is too much to handle in the console. As an alternative, you can either send the output to a text file using Out-File or – and in my opinion this is much easier to handle – export it to an XML file using the Export-Clixml cmdlet:

Get-MoveRequest <Identity> | Get-MoveRequestStatistics -IncludeReport | Export-Clixml c:\moverequeststats.xml

Now you can open that XML file in a more specialized XML reader (or just Internet Explorer, if you want) and go through its contents.
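If you'd rather mine the export with PowerShell than eyeball the XML, something along these lines works (a sketch; the Report/Entries property names follow the MoveRequestStatistics report structure and may differ slightly per Exchange version):

    # Quick and dirty: search the raw export for failure messages
    Select-String -Path 'C:\moverequeststats.xml' -Pattern 'has failed'

    # Or rehydrate the objects and filter the report entries
    $stats = Import-Clixml 'C:\moverequeststats.xml'
    $stats.Report.Entries | Where-Object { "$_" -match 'failed' }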

Cause

In this particular case, the problem was caused by Exchange Web Services. In the XML file you'd find the following reference (multiple times):

<S N="Message">Communication with remote service 'https://autodiscover.company.be/EWS/mrsproxy.svc CAS1.fqdn (14.3.123.2 caps:05FFFF)' has failed. --> The call to 'https://autodiscover.company.be/EWS/mrsproxy.svc CAS1.fqdn (14.3.123.2 caps:05FFFF)' failed. Error details: The remote endpoint no longer recognizes this sequence. This is most likely due to an abort on the remote endpoint. The value of wsrm:Identifier is not a known Sequence identifier. The reliable session was faulted..

Similarly, we also found the exact same reference, but for another CAS:

<S N="Message">Communication with remote service 'https://autodiscover.company.be/EWS/mrsproxy.svc CAS2.fqdn (14.3.123.2 caps:05FFFF)' has failed. --> The call to 'https://autodiscover.company.be/EWS/mrsproxy.svc CAS2.fqdn (14.3.123.2 caps:05FFFF)' failed. Error details: The remote endpoint no longer recognizes this sequence. This is most likely due to an abort on the remote endpoint. The value of wsrm:Identifier is not a known Sequence identifier. The reliable session was faulted..

This points out that the move request was trying to connect through different Client Access Servers. However, looking at the load balancing requirements for Exchange 2010, it's clearly stated that using EWS without affinity is not supported:

Exchange Web Services   Only a subset of Exchange Web Services requires affinity. Availability Service requests don’t require affinity, but subscriptions do. All aspects of Exchange Web Services experience performance enhancements from affinity. An affinity timeout value of 45 minutes is recommended for Exchange Web Services clients to ensure that periodic polling for events associated with a subscription are not directed to a new Client Access server resulting in inefficient new subscriptions for each request. We don’t support the use of Exchange Web Services without affinity.

Edit note: as you might've guessed, this problem happened because an Exchange 2010 hybrid server was used on-premises. Had it been an Exchange 2013 environment, this wouldn't have happened, as there's no need to maintain affinity for EWS there.

Solution

So, because of this, we took another look at the load balancer configuration. To confirm whether or not the load balancer was causing the issue, we temporarily bypassed it and connected directly to one of the CAS servers, after which we found that the move requests processed successfully.

Attention then turned back to the load balancer, and it turned out that the affinity for EWS, although configured from the start, wasn't applied correctly. After correcting the settings on the load balancers, everything worked as expected.

Acknowledgements

I would like to thank William Rall (MSFT) for his efforts and time. He quickly pointed out the root cause, which helped us determine what caused the problem. His feedback also allowed me to dive deeper into the XML and find what I was looking for. Admittedly, diving into the XML without knowing what to look for isn't always easy!


Session Slides and pictures of our event on Load Balancing Exchange & Lync 2013 now online!

Earlier this week we had the pleasure of welcoming you to our in-person event "Load Balancing and Reverse Proxying for Exchange 2013 and Lync 2013" at Microsoft Belgium. Despite the fact that some people cancelled at the very last minute, the turnout was really great!

Both sessions, Johan Delimon's and my own, drew a lot of questions from you guys, which kept things interactive at all times. Thank you for that!

For those who couldn't attend, below are some pictures to give you an idea of what you've been missing out on:

[event photos]

As promised, we’ve also made the slides available for download here. (Note: you will be redirected to the Pro-Exchange website!)

If you have questions about the sessions, please feel free to contact us and we'll get back to you as soon as possible! Also, don't forget that Ruben and Wim will be presenting at this year's Community Day: "Exchange and Lync 2013: Better together".

We’re looking forward to meeting you there!

Cheers,

Michael


Load Balancing Exchange 2013 – part 2

Introduction

In the first part of this article, we talked about load balancing in general and took a closer look at the advantages and disadvantages of simple Layer 4 load balancing for Exchange 2013. Today we'll dive a bit deeper into the remaining two ways of load balancing Exchange 2013: Layer 4 with multiple namespaces and 'traditional' Layer 7.

Layer 7 Load Balancing

Layer 7 load balancing offers some additional functionality over Layer 4. Because traffic is decrypted, the load balancer can 'read' ('understand') the traffic coming through and take appropriate actions based on the type (or destination) of the traffic.

By decrypting traffic, the load balancer can read the destination of a packet, which allows you to distinguish between traffic for the different Exchange workloads while still using a single virtual service. Based on the type of workload, traffic could e.g. be sent to a different set of servers. However, this was not the most important reason to do Layer 7 load balancing. In Exchange 2010, traffic coming from a certain client had to be persisted to the same endpoint (= the same Client Access Server). This meant that the initial connection could be sent to just about any CAS, but once the session was established, subsequent packets for that session had to stay with the same CAS.

A load balancer typically has multiple ways to maintain this client-to-server relationship. Depending on the make and model of your load balancer, you might see the vendor refer to this relationship as "persistence", "stickiness", etc. The most commonly used methods are:

  • Source IP
  • Session ID
  • Session Cookie

For a load balancer to identify a session ID or a cookie, it needs to be able to read the traffic, which forces the traffic to be decrypted; Source IP is the exception. Although Source IP affinity doesn't require decryption, in some scenarios this type of affinity can cause an uneven distribution of load, especially when traffic is coming from behind a NAT device. Consider the following scenario:

Multiple internet-connected devices connect to your on-premises environment, and before hitting the load balancers, they go through a router/firewall or other NAT device which 'changes' (masks) the source IP addresses (80.70.60.50 and 80.70.60.40) with a single internal address (10.20.30.40). If your load balancers are configured to persist connections based on the source IP address, all connections will potentially end up at the same CAS. This is – of course – something you'd want to avoid. Otherwise, what purpose would your load balancer serve? Right?

image

For this “problem” there are a few solutions:

  • You could disable NAT, which would reveal the client’s original IP address to the load balancer. Unfortunately, this isn’t always possible and depends on your network topology/routing infrastructure.
  • You could change the configuration of your load balancer to use something other than the Source IP to determine whether a connection should be persisted or not.

In the latter case, persistence based on the SSL Session ID is a good alternative. The load balancer will examine every "packet" that flows through it, read the session ID, and if it finds a match with a previously created session, send that packet to the same destination as before. While this works brilliantly, it induces a higher load on your load balancer because:

  1. the load balancer needs to inspect each packet flowing through it, consuming more CPU cycles
  2. the load balancer needs to maintain a bigger "routing table", i.e. a table mapping each session ID to a destination server, which consumes more memory (see the sketch after this list).
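Conceptually, the difference between the two persistence tables looks like this (made-up values):

    Source IP persistence (everything behind the NAT collapses into one entry):
        10.20.30.40  ->  CAS1

    SSL Session ID persistence (one entry per session):
        a3f1c2...    ->  CAS1
        77bd09...    ->  CAS2
        e01f5a...    ->  CAS1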

As mentioned earlier, because you are decrypting the traffic, you can e.g. determine the destination URL from the packet. In essence, this allows you to define multiple virtual services (one for each workload) and have the load balancer choose which virtual service to forward a packet to. In this specific example, the virtual services are "hidden" from the end user.

Let's pour that into an image; things might become clearer that way:

image

For external clients, there is still a single external URL (VIP) they connect to, but 'internally' there is a separate virtual service for each workload. Whenever a packet reaches the load balancer, it is read, and based on the destination URL, the appropriate virtual service is picked. The biggest advantage is that each virtual service can have its own set of health criteria. This also means that – because workloads are split – if e.g. OWA fails on one server, it won't affect other workloads on that server (as they belong to a different virtual service). While OWA might be down, other protocols remain healthy and the LB will continue forwarding packets to that server for those workloads.

With this in mind, we can safely conclude that Layer 7 load balancing clearly offers some benefits over simple Layer 4. However, it will cost you more in terms of hardware capacity for your load balancer. Given that a decently sized load balancer can cost a small fortune, it's always nice to explore what other alternatives you have. On top of that, this kind of configuration isn't exactly "easy" and requires quite some work from a load balancer's perspective. I'll keep the configuration steps for a future article.

Layer 4 load balancing with multiple namespaces

As I showed in the first part of this article, Exchange 2013 greatly simplifies load balancing compared to Exchange 2010. Unfortunately, this simplification comes at a cost: you lose the ability to do per-protocol health checks when using Layer 4. And let's face it: losing functionality isn't something you like, right?

Luckily, there is a way to have the best of both worlds though…

Combining the simplicity of Layer 4 while mimicking the Layer 7 functionality is what all the fuss is about. Because with Layer 4 your load balancer has no clue what the endpoint for a given connection is, we need to find a way to let the load balancer know the endpoint without actually decrypting traffic.

The answer is in fact as simple as the idea itself: use a different virtual service for each workload, but this time with a different IP address for each URL. The result would look something like this:

image
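In DNS terms, the idea boils down to something like this (names and addresses are examples):

    owa.domain.com           A  198.51.100.10   (VIP 1: OWA)
    eas.domain.com           A  198.51.100.11   (VIP 2: ActiveSync)
    ews.domain.com           A  198.51.100.12   (VIP 3: EWS)
    oa.domain.com            A  198.51.100.13   (VIP 4: Outlook Anywhere)
    autodiscover.domain.com  A  198.51.100.14   (VIP 5: Autodiscover)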

Each workload now has its own virtual service and therefore you also get per-workload (per-protocol) availability. This means that, just as with Layer 7, the failure of a single workload on a server has no immediate impact on other workloads, while at the same time you maintain the same level of simplicity as with simple Layer 4. Sounds cool, right?!

Obviously, there is a "huge" downside to this story: you potentially end up with a bunch of different external IP addresses. Although there are solutions for that too, covering them is beyond the scope of this article.

Most people – when I tell them about this approach – don't like the idea of exposing multiple IPs/URLs to an end user. Personally, I don't see this as an issue. "For my users it's already hard enough to remember a single URL", they say. Who am I to argue? However, when you think about it, there's only one URL an end user will actually be exposed to, and that's the one for OWA. All other URLs are configured automatically through Autodiscover. So even with these multiple URLs/IPs, your users really only need to remember one.
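On the Exchange side, a per-workload namespace simply means giving each virtual directory its own external URL, so Autodiscover hands out the right names. A sketch (server and host names are examples; Outlook Anywhere may require the additional authentication parameters shown):

    Set-OwaVirtualDirectory -Identity 'CAS1\owa (Default Web Site)' -ExternalUrl 'https://owa.domain.com/owa'
    Set-WebServicesVirtualDirectory -Identity 'CAS1\EWS (Default Web Site)' -ExternalUrl 'https://ews.domain.com/EWS/Exchange.asmx'
    Set-ActiveSyncVirtualDirectory -Identity 'CAS1\Microsoft-Server-ActiveSync (Default Web Site)' -ExternalUrl 'https://eas.domain.com/Microsoft-Server-ActiveSync'
    Set-OutlookAnywhere -Identity 'CAS1\Rpc (Default Web Site)' -ExternalHostname 'oa.domain.com' -ExternalClientsRequireSsl $true -ExternalClientAuthenticationMethod Basic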

Of course, there's also the element of the certificate, but depending on the load balancer you buy, this approach could still be cheaper than the L7 load balancer from the previous example.

Configuring Layer 4 load balancing with different namespaces is done the same way as configuring a single namespace; you just do it multiple times. The difference lies in the health checks you perform for each protocol, and those are free for you to choose. For now, the health check options are relatively limited (although that also depends on the load balancer you use). But the future might hold some interesting changes: at MEC, Greg Taylor explained that Microsoft is working with some load balancer vendors to make their load balancers capable of "reading" the health status of an Exchange server as produced by Managed Availability. This would mean that your load balancer no longer has to perform any specific health checks itself and can rely on the built-in mechanisms in Exchange. Unfortunately, there isn't much more information available right now, but rest assured I'm following this closely.
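As a side note: Exchange 2013's Managed Availability already exposes a simple per-protocol health page (healthcheck.htm) under each virtual directory, and that page makes a convenient check target today. A quick manual probe from PowerShell (the server name is an example; certificate validation must succeed for this to work):

    # Returns 200 plus the responding server's FQDN when the protocol is healthy
    Invoke-WebRequest -Uri 'https://cas1.domain.local/owa/healthcheck.htm' -UseBasicParsing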

Differentiator: health checks

As said before, the configuration is identical to Layer 4 with a single namespace; you simply repeat it per namespace. For more information on how to configure a virtual service on a KEMP load balancer, please take a look at the first part of this article.

The key difference lies in the health checks you perform against each workload. Even then, there is a (huge) difference in what various load balancers can do.

Even though I like KEMP load balancers a lot, they are – compared to e.g. F5 – limited in their health checks. From a KEMP load balancer perspective, your health check rule for e.g. Outlook Anywhere would look something like this:

image

From an F5 perspective, a BIG-IP LTM allows you to "dive deeper" into the health checks. You can define a user account that is used to authenticate against rpcproxy.dll. Only if that fails will the service be marked down, rather than relying on a "simple" HTTP GET.

On a side note: for about 90% of the deployments out there, the simple health check method proves more than enough…

Conclusion

Below you'll find a table summarizing the most important pros and cons per option.

Layer 4 (Simple)

  Pros:
    • Easy to set up
    • Fewer resources required on the LB
  Cons:
    • No per-protocol availability

Layer 7

  Pros:
    • Per-protocol availability
    • Single external IP/URL
  Cons:
    • More difficult to set up
    • More resources required on the LB

Layer 4 (Multiple Namespaces)

  Pros:
    • Easy to set up
    • Fewer resources required on the LB
    • Per-protocol availability
  Cons:
    • Multiple external IPs/URLs

If it were up to me, I'd go for Layer 4 with multiple namespaces. It's clean, it's simple to set up (both from a load balancer and an Exchange point of view) and it will most probably save me some money. Unless you only have a limited number of (external) IP addresses available, this seems like the way forward to me.

Nonetheless, this doesn't mean you should kick out your fancy load balancer if you already have one. My opinion: work with what you've got and, above all, keep it simple!


Load Balancing Exchange 2013 – part 1

Introduction

In one of my earlier posts, I briefly explained what load balancing in Exchange 2013 looks like and how you can set up basic load balancing with a KEMP LoadMaster, based on a blog post that Jaap Wesselius wrote. Today I will elaborate a bit more on the different load balancing options you have, along with how to configure a KEMP LoadMaster accordingly. In a future blog post (which I'm working on right now), I will explain how you can configure an F5 BIG-IP LTM to achieve the same results.

Load Balancing Primer

One of the 'biggest' changes to load balancing in Exchange 2013 is that you can now use simple, plain Layer 4 load balancing. The more 'complex' and more expensive Layer 7 load balancing that you had to use in Exchange 2010 is no longer a hard requirement. This doesn't mean that you cannot use Layer 7 load balancing, however. But more about that later (part 2).

In a nutshell, Layer 7 load balancing means that you can do application-aware load balancing. You achieve this by decrypting SSL traffic, which allows the load balancing device to 'read' the traffic that's passing through. The load balancer analyzes the headers and, based on what it 'finds' (and how it's configured), takes an appropriate action (e.g. use another virtual service or another load balancing option).

Traffic decryption consumes additional resources on your load balancer; the impact of a given number of users connecting through the device is usually (much) higher in that case. Also, not all load balancers offer this functionality by default. Some require you to purchase an additional license (or hardware) before you can decrypt SSL traffic. Companies might choose to re-encrypt traffic that is sent to Exchange (SSL Bridging) for security reasons (e.g. to avoid sniffing). In Exchange 2010 you could also choose not to re-encrypt traffic when sending it to Exchange; this process is called SSL Offloading and requires some additional configuration in Exchange. Not re-encrypting traffic (SSL Offloading) saves some resources on your load balancing device, but know that – for now – it is not supported for Exchange 2013; you are required to re-encrypt traffic when using Layer 7 load balancing! So make sure that you size your hardware load balancers accordingly. How to size your load balancer depends on the make and model; contact the manufacturer for more information and guidance.
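Schematically, the three traffic-handling modes discussed here look like this:

    Layer 7, SSL Bridging (supported):
        Client --HTTPS--> LB decrypts & inspects --HTTPS--> Exchange 2013
    Layer 7, SSL Offloading (not supported for Exchange 2013 at the time of writing):
        Client --HTTPS--> LB decrypts & inspects --HTTP---> Exchange 2013
    Layer 4 (no decryption):
        Client --HTTPS--> LB forwards as-is ------HTTPS--> Exchange 2013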

Load Balancing Options in Exchange 2013

When looking purely at Exchange 2013, you have several options with regard to load balancing. Each of the following options has its advantages and disadvantages. Let's take a closer look at each of them:

  • Layer 4 (single namespace)
  • Layer 7
  • Layer 4 (separate namespaces)

Layer 4 (single namespace)

This is the most basic (and restricted) but also the easiest way of setting up load balancing for Exchange 2013. A single namespace will be used to load balance all traffic for the different Exchange workloads (OWA, EAC, ActiveSync, EWS, …).

image

Because you are using a single namespace with Layer 4, you cannot do workload-specific load balancing and/or health checking. As a result, a failure of a single workload can possibly impact the end user experience. To illustrate, let’s take the following example:

User A connects to externalurl.domain.com and opens up OWA. At the same time, his mobile device syncs (using ActiveSync) over the same URL, while his Outlook is connected (Outlook Anywhere) over, you've guessed it, the same external URL. In this example, the virtual service is configured to check the OWA virtual directory (HTTP 200 OK) to determine whether a server is available or not.

Let's see what happens if OWA fails on a server. The health check that the load balancer executes will fail, so it will assume the server isn't functioning properly anymore and take the entire server out of service. It does this by preventing traffic from being forwarded to that server. Even though other workloads like Outlook Anywhere and ActiveSync might still work perfectly, the server will still be considered 'down'. From an efficiency point of view, that is not the best thing that could happen:

image

Now, imagine that OWA works fine, but the RPC/HTTP component is experiencing some difficulties… At this point, OWA would still return a positive health check status and the load balancer would assume the server is healthy. As a result, traffic is still sent to the server, including traffic for the failed RPC/HTTP component. This will definitely impact the user's experience.

image

As you can see, it's perhaps not the best possible experience for the end user (in all cases), but it definitely is the easiest and most cost-effective way to set up load balancing for Exchange 2013. Because you don't need advanced functionality from the load balancer, you will probably be able to push more traffic through the load balancer before hitting its limits. Alternatively, if you don't have a load balancer yet, it could allow you to buy a smaller-scale model to achieve the "same" results as with L7 load balancing.

Let’s take a look at how to setup a KEMP LoadMaster for this scenario:

Note   to keep things brief and to the point, I'm not going to discuss how to set up a KEMP LoadMaster in your environment and what options you have there (e.g. single-arm setup, …). We'll only discuss how to set up the virtual services and other related configuration settings.

First, navigate to Virtual Services and click Add New:

image

Enter the Virtual IP Address (VIP), port, and optionally a descriptive name for the service. Then click Add this Virtual Service to create it:

image

Configure the Basic Properties as follows:

image

Because we're not going to use Layer 7, make sure to disable Force L7 in the Standard Options. And because Exchange 2013 doesn't need affinity (persistence), you don't have to configure anything specific for that either. You're free to choose the load balancing mechanism (a.k.a. scheduling), but Round Robin should be fine:

image

I can imagine that you're now thinking: "Wait a minute, does that mean that every new connection could possibly end up on another server in the array?" The answer is: yes. But that actually doesn't matter. Every connection that is made is automatically re-authenticated. As a result, the fact that every new connection is forwarded to another server (even mid-session) doesn't impact the end user's experience. If you'd like to know a bit more about this, I suggest you listen to episode 12 of "The UC Architects", in which Greg Taylor (Microsoft) explains how this actually works.

Anyway, we got sidetracked. Back to the LoadMaster! Again, because we're doing the simplest of the simplest (L4) here, we don't need any form of SSL decryption, meaning that you can leave these options at their defaults:

image

If you wish, you can specify a "Sorry Server" in the Advanced Properties. This is a server that traffic is redirected to if all servers within the array become unavailable:

image

Note   if you click Add HTTP Redirector, the KEMP LoadMaster will automatically create a Virtual Service for the same VIP on TCP port 80 that redirects traffic to the Virtual Service on TCP port 443. This way you can enforce SSL without necessarily having to do that elsewhere.

All that's left now is to configure the servers to direct traffic to. Click Add New under Real Servers to add an Exchange 2013 server. Once you've filled in the details, click Add this Real Server. Repeat this step for every Exchange 2013 Client Access Server you want to add.

image

We still have to configure how the LoadMaster will check the health of the servers in the array. There are different ways to do this. In the example below, the LoadMaster will query the OWA virtual directory, and if it gets a response, the server will be deemed healthy. Alternatively, you could also opt not to specify a URL. Do not forget to click Set Check Port after entering the port number. (The check values are summarized after the screenshot.)

image
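Spelled out, the health check used in this example comes down to the following values (field labels as they appear in the LoadMaster UI; they may vary by firmware version):

    Real Server Check Method:  HTTPS Protocol
    Checked Port:              443   (click Set Check Port)
    URL:                       /owa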

Believe it or not, that's it. Your LoadMaster is now set up to handle incoming traffic and distribute the load evenly over the configured servers.

image

Conclusion

Simple Layer 4 load balancing clearly has its benefits in terms of simplicity and cost (you usually need a less powerful hardware load balancer to achieve the same result as with L7). On the other hand, the limited health checking options could take a server offline because of problems with a single workload, while other workloads might still work fine on that server.

In the second part of this article, we’ll dive deeper into the option of using Layer 4 load balancing with multiple namespaces and how that compares to using Layer 7 load balancing, so stay tuned!

Cheers!

Michael


KEMP to announce “Edge Security Gateway” security features soon!

Following the recent talk about KEMP delivering security-related features on its LoadMasters, I got in touch with one of KEMP's employees, who firmly confirmed that KEMP indeed intends to deliver the complete set of new security features that were listed in the internal memo.

Additionally, he also stated that these features will come under the name "Edge Security Gateway" (ESG) and will be released early in 2013 as a "zero-cost enhancement to the core LoadMaster product family". It also looks like these features will be added to all LoadMaster models, physical and virtual, because internet-facing deployments pose the same security challenges to the small guys and the big guys alike, according to KEMP.

It isn't the first time KEMP has pulled off such a 'stunt'; a few years back they also added features such as HTTP caching, compression and IPS at no additional cost.

The reason I'm actively blogging about this is twofold. For one, as a KEMP user I like the value their devices offer. Sure, big players like F5 and Citrix have their added value, but from a cost/benefit perspective it's hard to surpass KEMP, in my humble opinion. Secondly, I like the way KEMP runs its business. Of course profit will be one of its biggest drivers, but I have also noticed that KEMP tends to listen to its customers and, equally important, to the Exchange community.

I remember that, back at MEC, fellow UC Architect Serkan Varoglu and I were exchanging thoughts with two of the KEMP guys on this subject. We came to the conclusion that it would be nice to see KEMP "filling the gap" that TMG would leave behind. Knowing that our input, as well as the input of many others within the community, is taken seriously and acted upon can only be applauded.

On Monday, KEMP will make an official announcement, so make sure to keep an eye on their Twitter feed and website! I have signed up to test these new features during the beta, so expect some additional information as it becomes available!

Until later!

Michael

Note   I wasn't asked to write this article; I'm doing this by my own choice. I haven't been given nor promised anything in return by KEMP or anyone else. These thoughts and opinions are purely my own.
