Protecting your AD FS environment from password (spray) attacks (in an Office 365 context)

Recently, I attended the Thrive Conference in Ljubljana, Slovenia. During the IT Pro “Ask the Experts” panel, an interesting question was raised: “How do I protect an on-premises AD FS environment from password spray attacks?”. Why is this such an interesting (and relevant) question, you may wonder? Well, over the past couple of months a lot of my customers have been targets of (organized) attacks, and I have been helping them overcome the effects of such attacks. Unfortunately, my customers are not the only ones! There is a disturbing trend showing that the number of attacks is steadily increasing, worldwide. These attacks aren’t targeted only at large enterprises; organizations of all sizes are hit, seemingly at random even.

The short answer to this question is: there isn’t ONE thing you can do to solve the issue. Instead, there’s a bunch of things you could (and probably should) do. Spoiler: enabling MFA is one of them!

Before we dive into the technicalities, let’s first look at the dynamics of such attack(s). In a password spray attack, attackers use common passwords across various services (applications) to try and gain (unauthorized) access to password-protected services/content. The passwords they use are typically the so-called ‘usual suspects’ and include passwords such as “Summer2018!”, “Winter2018”, “Passw0rd” or the more ‘complex’ variant thereof “P@$$sw0rd”. I know you just chuckled, and you know why!

Unfortunately, attackers do not stop there. As more and more databases with user credentials are breached (more recently, Marriott ‘lost’ data from approx. 300 million accounts –including login data), attackers use the username/password combinations from those breaches in a similar way to try and gain unauthorized access to resources.

Because, nowadays, a lot of services are published onto the Internet, it’s almost child’s play to target public services and be quite successful with these types of attacks. Even with a success ratio of less than 1%, targeting several thousands of public applications yields enough success to make it worth their while. They are so successful because humans are creatures of habit and tend to use the same password for more than one service…

Once the attackers are in, they will use the breached account to (try and) exploit other vulnerabilities and escalate privileges within the environment. Ultimately, their goal might be to steal data, encrypt contents and ask for ransom, etc.

So, now that we have that out of the way, let’s take a closer look at what you can do to harden your environment. Mind you that these recommendations are aimed primarily at Office 365, but the same principles (can) apply to other services behind AD FS as well! Note the recommendations below appear in no particular order. I did, however, add some thoughts here and there on what I believe is super useful and easy to implement. What you can (or cannot) do is up to you to determine, depending on what applications you use, what your end-user experience must look like and the buy-in from your management. The latter sometimes is the biggest challenge!

Use cloud authentication

Before diving into what you can do to enhance your AD FS infrastructure, you can also choose to replace AD FS with something else. For example, you could move to cloud-based authentication and use Azure AD accounts to authenticate to Office 365, federate with other applications, or use the Azure AD App Proxy to access on-premises applications.

Moving away from AD FS to cloud authentication shifts the responsibility for dealing with these types of attacks from your own environment to Microsoft; they have the collective brain power and resources (like compute power) to intelligently detect and stop these types of attacks. Although it solves the problem for you, it does so by shifting the responsibility for dealing with it yourself to someone else.

When you move your authentication to the cloud, you can leverage additional components such as Azure AD Conditional Access. This, in turn, allows you to more intelligently perform authentication based on a set of (adaptive) rules. For example, Conditional Access can be used to not require MFA when you are using a known device from a known (trusted) location like your company network. When someone then tries to authenticate from outside the corporate network, or they aren’t using a registered device, they must perform MFA which then makes it a LOT harder for e.g. an attacker to gain access to an account –even when they have the right username/password combination…

Along the same lines, Microsoft offers Identity Protection in Azure AD. Instead of, or in addition to, defining a set of rules that will determine how authentication should be performed (Conditional Access), Identity Protection will leverage extensive AI models to calculate a risk score for an authentication attempt. Based on that score (low-medium-high), you can take specific actions, such as denying access or requiring MFA (to prevent locking out valid users).

Telling people to replace AD FS with cloud authentication is easy. Doing so isn’t always possible, certainly not in the short term. So, let’s explore some other options.

Enable Multi-Factor Authentication

If there is one thing you should consider, it’s enabling MFA. Microsoft shared some startling numbers at their annual Ignite Conference: enabling MFA reduces the success rate of such attacks by an astonishing 99%. At the same time, they also revealed that too few organizations take advantage of MFA…

While it is an efficient way to stop password (spray) attacks from being successful, it doesn’t solve the underlying problem of passwords, or rather the inherently insecure way in which they are chosen and used by users. Nonetheless, MFA will render a valid username/password combination virtually useless without the additional factor –something which is typically much harder to breach.

At the very least, you should use MFA for all your privileged accounts. These accounts can do the most harm to your environment and should be well protected.

Off-topic: I know some will refer to the recent Azure MFA outage and point out that when MFA is not working, it really creates an operational problem. While this is completely true, there are few alternatives. Not turning on MFA could be far worse and would require you to keep a very close eye on the use of those accounts…!

Alternatively, consider keeping one or more break-glass accounts. These can be admin accounts that are exempted from MFA using e.g. Conditional Access (e.g. require that the account can only be used from the internal network, on known devices). Having such an account will allow you to perform emergency changes in your tenant, should such an MFA issue occur again in the future.

Turn on MFA as primary authentication

In addition to just using MFA, you can explore configuring MFA as the primary authentication method in AD FS 2016 and 2019. This doesn’t mean you can’t use passwords anymore: the password can be used as the second factor after the initial MFA was successful. It means that attackers would first have to successfully perform MFA before they could even attempt the username/password combination; very effective!
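
If Azure MFA has already been set up as an authentication provider in your farm, enabling it as a primary method boils down to a single cmdlet. A minimal sketch (the provider names are the built-in ones; adjust the list to what your farm actually uses):

#Check which providers are currently allowed for primary authentication
(Get-AdfsGlobalAuthenticationPolicy).PrimaryExtranetAuthenticationProvider

#Allow Azure MFA as a primary authentication method for extranet and intranet requests
Set-AdfsGlobalAuthenticationPolicy -PrimaryExtranetAuthenticationProvider @("AzureMfaAuthentication", "FormsAuthentication")
Set-AdfsGlobalAuthenticationPolicy -PrimaryIntranetAuthenticationProvider @("AzureMfaAuthentication", "WindowsAuthentication", "FormsAuthentication")

In AD FS 2019 you can then allow the password to be used as an additional authentication method, which gives you the MFA-before-password flow described above.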

Use “advanced”, but built-in, AD FS capabilities

AD FS in Windows Server 2016/2019 has some features that are extremely useful. The first one is “(Extranet) Smart Lockout”. This capability will look at (un)successful authentication attempts and use the information gathered to proactively block authentication attempts from specific locations (IP addresses). The feature works in a very similar way to Azure AD’s smart lockout and is a good starting point to protect your environment.
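
As a rough sketch, this is what turning it on looks like (AD FS 2016 needs the mid-2018 updates for the smart lockout modes; the threshold and observation window below are examples you should tune to your own environment):

#Start in log-only mode to see what would be blocked
Set-AdfsProperties -EnableExtranetLockout $true `
                   -ExtranetLockoutThreshold 15 `
                   -ExtranetObservationWindow (New-TimeSpan -Minutes 30) `
                   -ExtranetLockoutMode AdfsSmartLockoutLogOnly
Restart-Service adfssrv #restart the AD FS service on every node for the change to take effect

#Once you're comfortable with the results, switch to enforce mode
Set-AdfsProperties -ExtranetLockoutMode AdfsSmartLockoutEnforce
Restart-Service adfssrv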

Another feature is the “Banned IP”-list. This is a list of IPs that you can configure on your AD FS servers for which authentication attempts will be ignored. This is similar to the Smart Lockout capability, but you’ll have to configure it manually. If you leverage Azure AD Connect Health for AD FS, you can use the Risky IP report to block the IP addresses that consistently seem to be the source of unsuccessful authentication attempts and haven’t (yet) been caught by e.g. Smart Lockout.
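
Maintaining the banned list is done through Set-AdfsProperties; a small sketch (the addresses below are just documentation placeholders):

#Add one or more offending addresses or ranges to the banned list
Set-AdfsProperties -AddBannedIps "203.0.113.5", "198.51.100.0/24"

#Review the current list
(Get-AdfsProperties).BannedIpList

#Remove an address again
Set-AdfsProperties -RemoveBannedIps "203.0.113.5"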

Prevent bad passwords from being created

At the source of the problem (of password spray attacks) is the use of simple and predictable passwords. Educating users on what a good password is, is one thing, but do good passwords even exist? Even more so, users are notorious for not always playing by the rules…

So, better than just (blindly) trusting them, you can leverage Azure AD’s password protection in your on-premises environment. Without going into too many technical details, this feature will ensure that whenever a password change occurs in the on-premises environment, an agent will verify that the new password does not match a known bad (or banned) password and – if it does – prevent the password change, forcing the user to choose another password. Although this doesn’t prevent already-known (breached) passwords from being used, it will – over time – ensure fewer simple and all-too-obvious passwords exist in your environment. In turn, this decreases your attack surface a little bit.
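
Deployment-wise, this means installing the DC agent on your domain controllers and the proxy agent on a couple of member servers, and then registering both the proxy and the forest. A rough sketch using the AzureADPasswordProtection module that ships with the proxy agent (the account UPN is a placeholder for a global admin account):

#Run on a server where the Azure AD Password Protection proxy service is installed
Import-Module AzureADPasswordProtection
Register-AzureADPasswordProtectionProxy -AccountUpn 'admin@yourtenant.onmicrosoft.com'

#Register the on-premises forest with the service (only needed once)
Register-AzureADPasswordProtectionForest -AccountUpn 'admin@yourtenant.onmicrosoft.com'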

Go for password-less, or just less passwords

Discussing the topic of whether getting rid of passwords is a good thing or not is worth an article by itself. TL;DR: replacing passwords with something else is a good idea 😊

Today, you can already start by leveraging certificate-based authentication. This, too, is harder to compromise than passwords, but also notoriously difficult to implement (because of the management overhead for the deployment of client-side certificates etc). Because of the complexity, it’s not always a desired solution.

There are a few ways to go passwordless. One way I found to be super easy is to use the Microsoft Authenticator app. Combining the requirement for MFA with password-less and Conditional Access is very powerful and super easy to deploy. Unfortunately, this will only work if 1) users have a smartphone with the Microsoft Authenticator app, and 2) they access resources with their Azure AD account… Soon, you’ll also be able to use hardware (OATH) tokens instead of the authenticator app, adding (a bit) more flexibility in your options!

If you are up to date with your workstations (that is Windows 10, folks!) and you are using modern applications like Office 365 ProPlus, you are in a good position to start adopting password-less authentication by implementing Windows Hello for Business. Password-less is still very much something for the future, and passwords won’t disappear overnight. However, now is a great time to start planning and testing! If not, at the very least use it for a small subset of your (IT) users so you can get acquainted with it, both from a deployment and manageability perspective…

Monitoring

In Dutch, we have a saying: “meten is weten”. This roughly translates into “measuring is knowing”. Sounds horrible, doesn’t it? What it means though – in the context of this article – is that if you don’t measure/monitor your AD FS environment, you won’t know what’s happening or that you are under attack!

Azure AD Connect Health for AD FS is a good way to gain visibility into what’s happening in AD FS. However: what if you don’t have the licenses for it? Although there’s plenty of other (monitoring) solutions that can help you get the right information from AD FS, below I’ve included a (manual) way of using PowerShell to detect failed authentication attempts. It’s by no means a replacement for a decent monitoring solution –but it could be a starting point!

The script uses the AD FS event log information to look up the IP information from the (failed) authentication attempts and compiles a list of countries that are generating these unsuccessful events. To retrieve the country to which an IP address belongs, the script uses the ipapi.com REST service. You will have to get your own API keys to perform the lookups! In the end, you can use the information from this report to proactively block specific IP addresses from the Internet. You can do this on your firewall or using the Banned IPs feature in AD FS.

First, we’ll get the relevant events from the event logs:

#Define time frame for which to retrieve events. Can be omitted.
$StartDate = "26/10/2018 19:00:00" | Get-Date
$EndDate = "27/10/2018 14:59:59" | Get-Date

#get events
$eventsFailed = Get-WinEvent -FilterHashtable @{logname='Security'; ID=1203; StartTime=$StartDate; EndTime=$EndDate} -ComputerName adfs1.domain.com
$eventsSuccess = Get-WinEvent -FilterHashtable @{logname='Security'; ID=1202; StartTime=$StartDate; EndTime=$EndDate} -ComputerName adfs1.domain.com

Next, we’re taking the events and extracting the required information from them. I am not a PowerShell guru, so I am sure there are better ways to do this. However, below is what I could come up with in the short period I had to troubleshoot whilst an attack was going on. The customer I created this for then added a bunch of additional information (like coordinates and time stamps) afterwards.

#reset variables
$result = @()

#Loop through results and compile the list

foreach($event in $eventsFailed){
    #workaround to get the information: the audit details are embedded as an XML string
    #in the event data, so dump them to a temporary file and read them back in as XML
    [xml]$eventXML = $Event.ToXml()
    $eventXML.Event.EventData.Data[1] | Out-File xml2.xml
    [xml]$xmlData = Get-Content .\xml2.xml
    $data = New-Object PSObject
    $ipAddress = $xmlData.AuditBase.ContextComponents.Component[3].IPAddress.Split(",")[0]

    #the clause below differentiates between internal and external IPs
    if ($ipAddress -eq "<internal IP addresses>")
    {
        $data | Add-Member -MemberType NoteProperty -Name UserID -Value ($xmlData.AuditBase.ContextComponents.Component[0] | Select UserID).UserID
        $data | Add-Member -MemberType NoteProperty -Name IPAddress -Value $ipAddress
        $data | Add-Member -MemberType NoteProperty -Name Continent -Value "Europe"
        $data | Add-Member -MemberType NoteProperty -Name Country -Value "Belgium"
        $data | Add-Member -MemberType NoteProperty -Name City -Value "City"
        $data | Add-Member -MemberType NoteProperty -Name Region -Value "Region/Province"
        $data | Add-Member -MemberType NoteProperty -Name Lat -Value "50.9160" #latitude
        $data | Add-Member -MemberType NoteProperty -Name Long -Value "4.0402" #longitude
        $data | Add-Member -MemberType NoteProperty -Name Status -Value "Failure"
        $data | Add-Member -MemberType NoteProperty -Name TimeCreated -Value $event.TimeCreated
        $data | Add-Member -MemberType NoteProperty -Name Source -Value "Internal"
        $data | Add-Member -MemberType NoteProperty -Name Date -Value $event.TimeCreated.ToShortDateString()
        $data | Add-Member -MemberType NoteProperty -Name DateNearestHour -Value (($event.TimeCreated).AddMinutes(-(($event.TimeCreated).Minute % 60))).ToString("yyyy-MM-dd HH:mm")
        $data | Add-Member -MemberType NoteProperty -Name TimeNearestHour -Value (($event.TimeCreated).AddMinutes(-(($event.TimeCreated).Minute % 60))).ToString("HH:mm")
    }
    else
    { 
        #perform REST lookup to get IP information
        $ipLookup = Invoke-RestMethod -Method Get -Uri "http://api.ipapi.com/$($ipAddress)?access_key=<accessKey>"

        $data | Add-Member -MemberType NoteProperty -Name UserID -Value ($xmlData.AuditBase.ContextComponents.Component[0] | Select UserID).UserID
        $data | Add-Member -MemberType NoteProperty -Name IPAddress -Value $ipAddress
        $data | Add-Member -MemberType NoteProperty -Name Continent -Value $ipLookup.continent_name
        $data | Add-Member -MemberType NoteProperty -Name Country -Value $ipLookup.country_name
        $data | Add-Member -MemberType NoteProperty -Name City -Value $ipLookup.city
        $data | Add-Member -MemberType NoteProperty -Name Region -Value $ipLookup.region_name
        $data | Add-Member -MemberType NoteProperty -Name Lat -Value $ipLookup.latitude
        $data | Add-Member -MemberType NoteProperty -Name Long -Value $ipLookup.longitude
        $data | Add-Member -MemberType NoteProperty -Name Status -Value "Failure"
        $data | Add-Member -MemberType NoteProperty -Name TimeCreated -Value $event.TimeCreated
        $data | Add-Member -MemberType NoteProperty -Name Source -Value "External"
        $data | Add-Member -MemberType NoteProperty -Name Date -Value $event.TimeCreated.ToShortDateString()
        $data | Add-Member -MemberType NoteProperty -Name DateNearestHour -Value (($event.TimeCreated).AddMinutes(-(($event.TimeCreated).Minute % 60))).ToString("yyyy-MM-dd HH:mm")
        $data | Add-Member -MemberType NoteProperty -Name TimeNearestHour -Value (($event.TimeCreated).AddMinutes(-(($event.TimeCreated).Minute % 60))).ToString("HH:mm")

    }

    $result += $data
}

#get the results. This can be changed to output to CSV, XML or anything else, really.
Write-Output $result
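
For example, to dump the results to a CSV file, or to get a quick per-country summary, you could append something like this:

#Export to CSV for further analysis (e.g. in Excel or Power BI)
$result | Export-Csv -Path .\FailedSignIns.csv -NoTypeInformation

#Quick overview: which countries generate the most failed attempts?
$result | Group-Object Country | Sort-Object Count -Descending | Select-Object Count, Name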

Note: you’ll have to turn on AD FS Audit Logs for these events to start popping up in your Security Logs!
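
For reference, on AD FS 2016/2019 enabling those audit events roughly comes down to the following (the AD FS service account also needs the “Generate security audits” user right on each server):

#Allow the AD FS service to write audit events to the Security log
auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable

#Control how much audit detail AD FS generates
Set-AdfsProperties -AuditLevel Verbose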

Help! I’m under attack…!

If your monitoring shows an unusually high number of failed authentication attempts, you might be under attack. But what to do next? First, keep calm and breathe! Now is not a good time to panic. Without you at the helm, the attackers have free rein! Pulling the plug might work, but it is a bit drastic. So, let’s hold off on doing that for now.

Once you’ve overcome the initial shock, try to identify what the attack vector/angle of attack is. Perhaps the suspicious activity is coming from just a few IP addresses? Good. Block those. If things take a turn for the worse, you might have to (temporarily) block external access to resources. This doesn’t necessarily mean you have to lock out your users: if they have other means to access the (internal) network such as through a VPN-connection, they would still be able to authenticate directly to the internal AD FS servers.

The challenge with the VPN approach is that it doesn’t work for basic authentication (e.g. ActiveSync with Exchange Online). This is because, with basic authentication, the authentication requests are initiated by Exchange Online itself; blocking those IP addresses would cripple access for everyone who uses basic authentication.

As such, it’s important that you try to move away from Basic Authentication as a rule of thumb anyway. In a modern world, and with modern applications, there should be no need for basic authentication. For example, Outlook and Outlook Mobile can perfectly live without it. POP3 and IMAP on the other hand, not so much…

Basic Authentication is the root of all evil. A simple username/password combination no longer belongs in this era. It might have been a fitting solution in the early days of computing, but that’s long past. Even complex and long passwords provide little to no value if used incorrectly, which is how most users use them. It’s not just about which passwords people choose; because of how they are used, passwords themselves are also subject to man-in-the-middle attacks, etc… But that’s beside the point of this article. The point is that you should disable basic auth when you get the chance, and there are plenty of ways to do it in Office 365/Azure AD:

  • Use Conditional Access to block legacy auth
  • Use Authentication Policies in Exchange Online to block basic auth (see the sketch after this list)
  • Block specific protocols (such as e.g. POP3/IMAP) for specific user accounts
  • Use AD FS Claim Rules to (selectively) block basic auth from specific locations (or users, or groups, etc.)
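
As an example of the second and third bullet, here’s a minimal sketch using Exchange Online PowerShell (the policy name and mailbox are placeholders):

#Create a policy that blocks basic authentication (for all protocols, by default) and assign it to a user
New-AuthenticationPolicy -Name "Block Basic Auth"
Set-User -Identity megan.bowen@contoso.com -AuthenticationPolicy "Block Basic Auth"

#Optionally, make the policy the tenant-wide default
Set-OrganizationConfig -DefaultAuthenticationPolicy "Block Basic Auth"

#Or simply disable protocols such as POP3/IMAP on a per-mailbox basis
Set-CASMailbox -Identity megan.bowen@contoso.com -PopEnabled $false -ImapEnabled $false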

If you can’t block basic auth completely, limiting which accounts can use it is already a good first step! Remember, there is no single security strategy that you can implement in a single go. As with many things, it’s something that takes time to set up correctly and get users accustomed to. It’s like a good wine: it can get better with age. But if you wait too long, it can turn bad as well!

Wrapping up

There are many countermeasures, each addressing parts of the challenge. The more of the above features and capabilities you implement or configure, the more secure your environment becomes. However, it doesn’t mean that doing all of this will make you 100% secure, as there is no such thing…

Instead, make sure that you take a holistic approach. After all, your defense is only as good as the weakest link in the chain. Having a multi-layered approach is key. Secondly, make sure to monitor your authentication traffic. If not, you’ll be flying blind and it might take you a while to figure out something bad is happening.

Over the next couple of weeks, I’ll be diving more deeply into some of the topics mentioned in this article. Until then, all there is left for me is to wish you a great end of the year!

-Michael


Speeding up retrieval of Send-As permissions

Many Exchange administrators are familiar with the Get-ADPermission cmdlet. Contrary to the Get-MailboxPermission cmdlet, the Get-ADPermission cmdlet retrieves Active Directory permissions for an object, instead of permissions in Exchange itself. For instance, Get-ADPermission will reveal Send-As permissions, whereas Get-MailboxPermission will tell you who has, for example, Full Access permissions on the mailbox.

If you need to do a quick search for Send-As permissions, and for a limited set of mailboxes, you will find that using the Get-ADPermission cmdlet is pretty simple and straightforward:

Get-ADPermission <mailbox> | ?{$_.ExtendedRights -like "*Send-As*"}

If you are dealing with a large number of mailboxes (e.g. several thousands of mailboxes), using the Get-ADPermission cmdlet can be quite limiting. During recent testing, I noticed the command took anywhere from 2-8 seconds per mailbox to complete. In this particular scenario, I was helping a customer move user accounts from their (old) account forest into the new resource forest.

As part of the process, we would enumerate all mailbox permissions (including Send-As), and check if any of them were assigned to a user account in the account forest. However, because the source environment has tens of thousands of mailboxes, the Get-ADPermission approach was not feasible.

Normally, querying AD is not a problem. If you’ve ever written an LDAP query, you probably noticed that most of them complete within several seconds –depending on the result set size, of course. But either way, talking directly to AD should be a lot faster. As such, and given that Send-As permissions are assigned to the user account in AD, I figured that using the Get-ACL cmdlet would be best suited.

The first particularity to keep in mind is that, for easy processing, you should change your current location in PowerShell to Active Directory:

Import-Module ActiveDirectory
Set-Location AD:

Next, you can e.g. run the following command to get the ACL for an object. Notice how I’m using the distinguishedName of the object. Although there are other ways to quickly get the DN for an object, I resorted to using the Get-Mailbox cmdlet, because I had to run it earlier in the script anyway:

$mbx = Get-Mailbox <mailbox>
(Get-ACL $mbx.distinguishedName).Access

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a53-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\Everyone
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a54-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\UserA
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

ActiveDirectoryRights : WriteProperty
InheritanceType       : All
ObjectType            : 934de926-b09e-11d2-aa06-00c04f8eedd8
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\Exchange Servers
IsInherited           : False
InheritanceFlags      : ContainerInherit
PropagationFlags      : None

The result of the cmdlet will look similar to what you see above. For brevity purposes, I’ve omitted some of the results. Nonetheless, it should give you a good picture of what to expect.

As you can see from the results, there’s a LOT of entries on each object. Because I was solely interested in the Send-As permission, I decided to filter the results based on the ActiveDirectoryRights attribute. Given that Send-As is an ExtendedRight, I used the following:

$mbx = Get-Mailbox <mailbox>
(Get-ACL $mbx.distinguishedName).Access | ?{$_.ActiveDirectoryRights -eq "ExtendedRight"}

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a53-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\Everyone
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a54-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\UserA
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

So far, so good. However, none of the entries mentioned “Send-As” anywhere. As it turns out, the objectType attribute contains a GUID which refers to the actual permission. AD stores information about Extended Rights in the configuration partition, in a container unsurprisingly called “Extended-Rights”. Using ADSIEdit, you can navigate to the Send-As Extended Right, and look for the rightsGuid attribute. I’ve checked in various labs and environments, and the Guid always turns out to be ab721a54-1e2f-11d0-9819-00aa0040529b.

[screenshot: the Send-As extended right and its rightsGuid attribute in ADSIEdit]

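If you prefer not to fire up ADSIEdit, you can also pull the GUID straight from the configuration partition with the ActiveDirectory module; a quick sketch:

Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -SearchBase "CN=Extended-Rights,$configNC" -LDAPFilter "(cn=Send-As)" -Properties rightsGuid | Select-Object Name, rightsGuid
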
Now that we have this information, it is very easy to filter the results from the Get-ACL cmdlet:

$mbx = Get-Mailbox <mailbox>
(Get-ACL $mbx.distinguishedName).Access | ?{($_.ActiveDirectoryRights -eq "ExtendedRight") -and ($_.objectType -eq "ab721a54-1e2f-11d0-9819-00aa0040529b")}

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a54-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\UserA
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

While benchmarking this approach, we were able to return results for approximately 1-5 mailboxes per second. Quite an improvement over before!

The one caveat is that Get-ACL does not return the same result set (in terms of what attributes are shown) as the Get-ADPermission cmdlet. If all you care about is the permission itself, or if you already have all the other information for the mailbox (e.g. because you previously ran Get-Mailbox), then the speedy approach using Get-ACL might just offer all you need.
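
To wrap up, here’s a minimal end-to-end sketch that combines the pieces above and lists the explicit Send-As grants for all mailboxes (the GUID is the Send-As rightsGuid we found earlier):

Import-Module ActiveDirectory
Set-Location AD:

$sendAsGuid = "ab721a54-1e2f-11d0-9819-00aa0040529b"

foreach ($mbx in (Get-Mailbox -ResultSize Unlimited)) {
    (Get-ACL $mbx.DistinguishedName).Access |
        Where-Object { $_.ActiveDirectoryRights -eq "ExtendedRight" -and $_.ObjectType -eq $sendAsGuid -and -not $_.IsInherited } |
        Select-Object @{n="Mailbox";e={$mbx.DisplayName}}, IdentityReference
}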


Rewriting URLs with KEMP LoadMaster

Earlier this month, I wrote an article on how you could use a KEMP LoadMaster to publish multiple workloads onto the internet using only a single IP address, by means of a feature called content switching.

Based on the principle of content switching, KEMP LoadMasters also allow you to modify traffic while it’s flowing through the device. More specifically, this article will show you how you can rewrite URLs using a Load Master.

The rewriting of URLs is quite common. The goal is to ‘send’ people to another destination than the one they are trying to reach. This could be the case when you are changing your domain name or maybe even as part of a merger and you want the other company’s traffic to automatically be redirected to your website.

Let’s start with a simple example we are all familiar with: you want to redirect traffic from the root of your domain to /owa. The goal is that when someone enters e.g. webmail.domain.com, that person is automatically redirected to webmail.domain.com/owa. Although Exchange 2013 already redirects traffic from the root to the /owa virtual directory out-of-the-box, the idea here is to illustrate how you can do it with KEMP instead of IIS. As a result you could just as easily send someone from one virtual directory (e.g. /test) to another (e.g. /owa).

How it works – Mostly

Just as in my previous article, everything revolves around content switching. However, next to the content matching rules, KEMP’s LoadMasters allow you to define so-called header modification rules as shown in the following screenshot:

[screenshot]

By default, it suffices to create such a header modification rule and assign it to a virtual service. By doing so, you will rewrite traffic to the root (or the /test virtual directory) and people will end up at the /owa virtual directory.

To create a header modification rule, perform the following steps:

  1. Login to your LoadMaster
  2. Click on Rules & Checking and then Content Rules
  3. On top of the Content Rules page, click Create New…
  4. Fill in the details as shown in the following screenshot:

    [screenshot]

  5. Click Create Rule.

Now that you have created the header modification rule, you need to assign it to the virtual service on which you want to use it:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules.
  4. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.

  5. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    [screenshot]

That’s it. All traffic that hits the LoadMaster on the root will now automatically be rewritten to /owa.

How it works with existing content rules (SUBVSs)

When you are already using content rules to ‘capture’ traffic and send it to a different virtual directory (as described in my previous article), the above approach won’t work – at least not entirely.

While the creation of the header modification rule and the addition of that rule to the virtual service remain exactly the same, there is an additional task you have to perform.

First, let me explain why. When you are already using Content Rules the LoadMaster will use these rules to evaluate traffic in order to make the appropriate routing decision. As a result, these content rules are processed before the header modification rules. However, when the LoadMaster doesn’t find a match in one of its content matching rules, it will not process the header modification rule – at least not when you are trying to modify a virtual directory. As I will describe later in this article, it will still process host-header modifications though.

So, in order for the LoadMaster to perform the rewrite, the initial destination has to be defined on the virtual directory where you want to redirect traffic to. Let’s take the following example: you are using content rules to direct traffic from a single IP address to different virtual directories. At the same time, you want traffic from a non-existing virtual directory (e.g. /test) to be redirected to /owa.

First, you start off again by creating a header modification rule. The process is the same as outlined above. The only thing that changes is that the match string will now be “/^\/test$/” instead of /^\/$/:

[screenshot]

Next, create a new content rule, but this time create a content matching rule as follows:

[screenshot]

Next, we’ll make the changes to the virtual service:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules.
  4. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.

  5. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    [screenshot]

Now, we still need to add the content matching rule to the /owa SubVS:

  1. In the properties of the Virtual Service, go down to SubVSs and click the button in the Rules column, for the OWA SubVS:

    [screenshot]

  2. From the drop-down list, select the rule you created earlier (“test”) and click Add:

    [screenshot]

  3. You should now have 2 rules in the SubVS:

    [screenshot]

That’s it. If you now navigate to the /test virtual directory, the traffic will automatically be rewritten to /owa.

How about if I want to redirect more than a single virtual directory to /owa?

In theory you would need to create a header modification rule for each of the virtual directories and a content matching rule as well. However, if you are going to redirect multiple other virtual directories to /owa, you can also use the “default” content rule which acts as a catch-all. As a result, instead of creating and adding a separate content matching rule for each virtual directory, you just create a header modification rule for each of them and add the default content rule to the /owa virtual directory as shown below:

[screenshot]

What about rewriting host names?

Rewriting host names is handled a tad differently than e.g. virtual directories. Unlike the latter, host header modifications are processed before the content matching rules for the Virtual Services. As a result, it suffices to create a modification rule and apply it to the virtual service. To create a Host Header modification rule, do the following:

  1. Go to Rules & Checking and then Content Rules
  2. Click Add New… and create the rule as follows:

    [screenshot]

Once you have created the rule, add it to the HTTP Header Modification rules on the virtual service and you’re done. Traffic that hits test.domain.com will now automatically be rewritten to webmail.domain.com. It’s as easy as that.

Conclusion

Rewriting URLs with KEMP’s LoadMaster is relatively easy. You only have to watch out when you are already using content switching rules as I described earlier.

Until later,

Michael


Publishing multiple services to the internet on a single IP address using a KEMP Load Balancer and content switching rules

A few days ago, someone suggested I write this article as it seems many people are struggling with this ‘problem’. In fact, the solution which I’m going to explain below is the answer to a problem typically found in “home labs” where the internet connection doesn’t always have multiple IP addresses. This doesn’t mean that it’s only valid for home-use or testing scenarios. Given that IPv4 addresses are almost depleted, it’s a good thing not to waste these valuable resources if it’s not necessary.

Basically, what I’m going to explain is how you can use a KEMP Load Master to publish multiple services/workloads to the internet using only a single (external) IP address. In the example below, I will be publishing Exchange, Office Web Apps and Lync onto the internet.

The following image depicts what the network in my example looks like. It also displays the different domain names and IP addresses that I’m using. Note that – although I perfectly could – I’m not connecting the Load Master directly onto the internet. Instead, I mapped an external IP address from my router/firewall to the Load Master:

[network diagram]

How it works

The principle behind all this is simple: whenever a request ‘hits’ the Load Master, it will read the host header which is used to connect to the server and use that to determine where to send the request to. Given that most of the applications we are publishing use SSL, we have to decrypt content at the Load Master. This means we will be configuring the Load Master in Layer 7. Because we need to decrypt traffic, there’s also a ‘problem’ which we need to work around. The workloads we are publishing to the internet all use different host names. Because we only use a single Virtual Service, we can assign only a single certificate to it. Therefore, you have to make sure that the certificate you will configure in the Load Master either includes all published host names as a Subject (Alternative) Name or use a wildcard certificate which automatically covers all the hosts for a given domain. The latter option is not valid if you have multiple different domain names involved.

How the Load Master handles this ‘problem’ is not new – far from it. The same principle is used in every reverse proxy and was also the way how our beloved – but sadly discontinued – TMG used to handle such scenarios. You do not necessarily need to enable the Load Master’s ESP capabilities.

Step 1: Creating Content Rules

First, we will start by creating the content rules which the Load Master will use to determine where to send the requests to. In this example we will be creating rules for the following host names:

  • outlook.exchangelab.be (Exchange)
  • meet.exchangelab.be (Lync)
  • dialin.exchangelab.be (Lync)
  • owa.exchangelab.be (Office Web Apps)
  1. Login to the Load Master and navigate to Rules & Checking and click Content Rules:
  2. Click Create New…
  3. On the Create Rule page, enter the details as follows:

Repeat steps 2-3 for each domain name. Change the value for the field Match String so that it matches the domain names you are using. The final result should look like the following:

[screenshot: content rules]
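
Since the screenshot doesn’t survive here, purely as an illustration the end result might look something like this (rule names are arbitrary, and the match strings follow the same pattern as the Exchange example near the end of this article):

Name        Match String
----        ------------
Exchange    ^outlook.exchangelab.be*
LyncMeet    ^meet.exchangelab.be*
LyncDialin  ^dialin.exchangelab.be*
OWAS        ^owa.exchangelab.be*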

Step 2: creating a new Virtual Service

This step is fairly easy. We will be creating a new virtual service which uses the internal IP address that is mapped to the external IP address. If you have already created a virtual service previously, you can skip this step.

  1. In the Load Master, click Virtual Services and then click Add New:
  2. Specify the internal IP address which you have previously mapped to an external IP address
  3. Specify port TCP 443
  4. Click Add this Virtual Service

Step 3: Configuring the Virtual Service

So how does the Load Master differentiate between the different host headers? Content Rules. Content rules allow you to use Regular Expressions which the Load Master will use to examine incoming requests. If a match is found through one of the expressions, the Load Master will forward the traffic to the real server which has been configured with that content rule.

First, we need to enable proper SSL handling by the Load Master:

  1. Under SSL Properties, click the checkbox next to Enabled.
  2. When presented with a warning about a temporary self-signed certificate, click OK.
  3. Select the box next to Reencrypt. This will ensure that traffic leaving the Load Master is encrypted again before being sent to the real servers. Although some services might support SSL offloading (thus not reencrypting traffic), it’s beyond the scope of this article and will not be discussed.
  4. Select HTTPS under Rewrite Rules.

Before moving to the next step, we will also need to configure the (wildcard) certificate to be used with this Virtual Service:

  1. Next to Certificates, click Add New
  2. Click Import Certificate and follow the steps to import the wildcard certificate into the Load Master. These steps include selecting a certificate file, specifying a password for the certificate file (if applicable) and setting an identifying name for the certificate (e.g. wildcard).
  3. Click Save
  4. Click “OK” in the confirmation prompt.
  5. Under Operations, click the dropdown menu VS to Add and select the virtual service.
  6. Now click Add VS

You’ve now successfully configured the certificate for the main Virtual Service. This will ensure the Load Master can decrypt and analyze traffic sent to it. Let’s move on to the next step, in which we will define the “Sub Virtual Services”.

Step 4: Adding Sub Virtual Services

While still on the properties page for the (main) Virtual Service, we will now be adding new ‘Sub Virtual Services’. Having a Sub Virtual Service per workload allows us to define different real servers per SubVS as well as a different health check. This is the key functionality which allows us to have multiple different workloads living under a single ‘main’ Virtual Service.

  1. Under Real Servers click Add SubVS…
  2. Click OK in the confirmation window.
  3. A new SubVS will now have appeared. Click Modify and configure the following parameters:
  • Nickname (makes it easier to differentiate from other SubVSs)
  • Persistence options (if necessary)
  • Real Server(s)

Repeat the steps above for each of the workloads you want to publish.

Note: a word of warning is needed here. Typically, you would add your ‘real servers’ using the same TCP port as the main Virtual Service, being TCP 443, in this case. However, if you are also using the Load Master as a reverse proxy for Lync, you will need to make sure your Lync servers are added using port 4443 instead.

Once you have configured the Sub Virtual Services, you still need to assign one of the content rules to each of them. Before you’re able to do so, you first have to enable Content Switching.

Step 5: enabling and configuring content rules

In the properties of the main Virtual Service, under Advanced Properties, click Enable next to Content Switching. You will notice that this option has become available after adding your first SubVS.

[screenshot]

Once Content Switching is enabled, we need to assign the appropriate rules to each SubVS.

  1. Under SubVSs, click None in the Rules column for the SubVS you are configuring. For example, if you want to configure the content rule for the Exchange SubVS:
  2. On the Rule Management page, select the appropriate Content Matching rule (created earlier) from the selection box and then click Add:
  3. Repeat these steps for each Sub Virtual Service you created earlier

Testing

You can now test the configuration by navigating your browser to one of your published services or by using one of the client applications. If all is well, you should now be able to reach Exchange, Lync and Office Web Apps – all using the same external IP address.

As you can see, there’s a fair amount of work involved, but it’s all in all relatively straightforward to configure. In this example we published Exchange, Lync and Office Web Apps, but you could just as easily add other services too. Especially with the many load balancing options you have with Exchange 2013, you could for instance use multiple additional Sub Virtual Services for Exchange alone. To get you started, here’s what the content rules for that would look like:

[screenshot: example content rules for multiple Exchange workloads]

Note: if you are defining multiple Sub Virtual Services for e.g. Exchange, you don’t need to use/configure a Sub Virtual Service which uses the content rule for the Exchange domain name “^outlook.domain.com*”. If you still do, you’d find that – depending on the order of the rules – your workload-specific virtual services would remain unused.

I hope you enjoyed this article!

Until later,

Michael


My journey to a new home lab…

Almost two years ago now, I bought myself what I thought at the time was a “real” home lab. The components I bought back then were similar to what MVP Jeff Guillet had described on his blog: http://www.expta.com/2013/04/updated-blistering-fast-hyper-v-2012.html

That server comprised the following components:

  • Intel Core i7-2600K
  • Asus motherboard (P8Z77-VLX)
  • 32GB DDR RAM
  • 3x 120GB SSD
  • 1x 1TB Western Digital 7200rpm SATA
  • Antec NSK4400 (400W PSU)

Additionally, I also use my “main desktop” machine to run a few VMs from:

  • Intel Core i5-3550
  • Gigabyte Motherboard (Z68AP-D3)
  • 16GB DDR3 RAM
  • 2x 120GB SSD
  • 1x 2TB Western Digital 7200rpm SATA
  • Antec NSK4400 (400W PSU)

From a networking point of view, I bought a relatively cheap HP ProCurve 1810-24G which gives me more than enough ‘power’ to tie everything together. What I also liked about this switch is that it’s relatively easy to configure, low-noise and supports e.g. VLAN tagging.

Challenges

Over time, and especially on the occasion of preparing for the recently-discontinued MCSM training, I started to experience some ‘problems’. Under load, my server would randomly freeze up. While this usually isn’t much of a problem, I sometimes dare to do my demos from these machines. Last time I did that, it froze up in the middle of a demo! Those who were present at TechDays in Belgium will actually remember what that was like…

Given that very unpleasant experience, I made the decision to build a new “home lab”. From a requirements point-of-view, this ‘new’ lab has to be able to accommodate all the VMs I want to run (+/- 60) and preferably be more stable. In order to do so, I definitely need more memory (CPU wasn’t an issue) and I need more storage. I found that I was able to run quite a lot of VMs out of the limited amount of storage I have right now (360GB SSD) using deduplication in Windows Server 2012. That is also the reason why I decided to keep on using SSDs, which ultimately cost me the most.

From a network point-of-view, I’ll also be looking to replace (or complement) my current switch with a high-performance router. In my persistent lab environment, I have a few subnets which are now routed through a virtual machine (WS 2012). In the future I would like this to be done by a Layer 3 switch. I’ve been doing some research and have found that HP’s 1910 series actually offers up to 32 static routes while remaining relatively cheap. Another option, though, would be to use one of MikroTik’s RouterBoard devices… Still not sure about what to do…

The process

When I first started looking for solutions, I ended up with a few possibilities. One of them was to move my lab outside of my house and either host it in a datacenter or use a platform like Amazon Web Services or – preferably – Windows Azure to build my lab.

The problem with either option is that – given the amount of VMs I’m running at times – this can become quite a costly business. Even though that solution would be the most elegant of all, it’s just not something I can afford.

Next, I looked to moving my hardware to a datacenter. Through a colleague of mine I was able to rent 1u rack space at +/- 35 EURO a month (which is relatively cheap). While from a connectivity point-of-view this was an awesome idea, I had to find hardware that was able to fit in 1 or 2 units. For this, I came up with 2 solutions:

    • Modified MAC Mini’s
    • Custom-build 1U Servers

Unfortunately, both solutions turned out to be inefficient or too expensive. I could easily fit 4 Mac Minis in 1U rack space, but they can only contain 16GB of RAM and – even by adding a second disk myself – each one would cost up to 850 EUR. The alternative of building a server myself based on some SuperMicro hardware (which is decent quality for a fair price) seemed doable, except when trying to fit as much as you can into 1U. Basically, I ended up with the following hardware list – which I nearly ended up buying:

  • Supermicro 1027R-73DAF
  • 1x Intel Xeon e5-2620
  • 6x Samsung 250GB EVO SSD
  • 8x Kingston 16GB DDR3 ECC

The problem here is: cost. Next to buying the hardware (+/- 2700 EUR), I would also need to take into account the monthly recurring cost for the datacenter. All-in-all, a little overkill for what I want to get out of it.

The (final) solution

So I started looking, AGAIN, for something that combined the best of both worlds and ended up building my server with a mix of server and desktop hardware. In the end, I decided to host the hardware at home (despite the advantages of putting it in a DC) and request a new VDSL line which offers me a bunch of external IP addresses that I can use:

    • Antec Three Hundred Two
    • Corsair 430Watt Modular Power Supply
    • Asus P9X79 Pro
    • Intel Xeon E5-2620
    • 64GB DDR3 1333Mhz
    • 4x Samsung 840 EVO 250GB SSD
    • 1x Western Digital “Black” 1TB 7200rpm

The total price for this machine was approximately 1700 EUR, which isn’t too bad given what I get in return.

The reason I chose the Intel Xeon CPU and not a regular i7 is simple: even though some motherboards claim to support 64GB RAM with most i7 CPUs, there’s only a single one that actually addresses more than 32GB (i7 3930K). The price for that one is actually – compared to the Xeon CPU – insanely high, which is why I went for the latter one.

Because I’m using a “regular” motherboard [P9X79] instead of e.g. one from SuperMicro, I was able to drive down cost on memory as well. Even though I’m now limited to ‘only’ 64GB per host, the additional cost of ECC RAM and the SuperMicro motherboard weren’t worth it in my humble opinion.

The future?

My ultimate goal is to end up with 2 (maybe 3) of these machines and leverage Windows Server 2012 R2’s new capabilities with regards to storage and networking (enhanced deduplication, SMB multichannel, storage spaces, …). This would also allow me to configure the Hyper-V hosts in a cluster which ‘unlocks’ some testing scenarios.

As such, I would like to get to the following setup (which will take months to acquire, for sure!):

[diagram: the target lab setup]

I’m still in doubt how I will do networking though. Given that 10GbE is becoming cheaper by the day [Netgear has a really affordable 8-port 10GbE switch], I might end up throwing that into the mix. It’s not that I need it, but at least it gives me something to play with and get familiar with.

I’m most likely to transform my current Hyper-V host into the iSCSI target over time. But let’s first start at the beginning.

Once I have received the hardware, I’ll definitely follow up with a post of how I put the components together and my (hopefully positive) first impressions of it. So make sure to look out for it!

Cheers,

Michael


Exploring Exchange Server Component States

As described in a previous article, an Exchange 2013 server can be placed into a sort of maintenance mode, not only by temporarily pausing a DAG node from the cluster, but also by putting Exchange components on that server into an inactive state using the Set-ServerComponentState cmdlet.

The most obvious reason why a component is in an inactive state is because someone put it into that state as part of a maintenance task. However, there can be several other reasons why a component is inactive. The most common reason is when the Health Management service (part of Managed Availability) has taken a component offline because it was deemed unhealthy.

The tricky part comes when one or more “requesters” have put a component into an Inactive state which might lead to confusing situations. There are 5 requesters that can switch the state of a component:

  • HealthAPI
  • Maintenance
  • Sidelined
  • Functional
  • Deployment

Consider the following scenario. As part of the upgrade to the latest CU, you decide to put a server into maintenance mode using the Set-ServerComponentState cmdlet. After the upgrade, you take the server back “online” by changing the state of the component back to “Active”. However, when running the Get-ServerComponentState cmdlet, you notice that one or more components are still inactive… You investigate the issue and it turns out that the component was already in an inactive state before YOU put it in an inactive state. So why didn’t the state change AFTER you put it back into an active state?

The answer is pretty simple. As you know, only the requester that put a component into a certain state can put it back into an active state. So, when you put a component into maintenance using “Maintenance” as the requester as part of your maintenance task, you are actually flagging the component inactive for a second time; setting it back to active afterwards only clears the “Maintenance” entry, not the one created by the original requester.
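
To make this concrete, here’s a small illustration (the server name EX01 is a placeholder):

#Suppose Managed Availability already marked the component inactive:
Set-ServerComponentState EX01 -Component FrontendTransport -State Inactive -Requester HealthAPI

#Your maintenance routine marks it inactive (a second entry) and later active again:
Set-ServerComponentState EX01 -Component FrontendTransport -State Inactive -Requester Maintenance
Set-ServerComponentState EX01 -Component FrontendTransport -State Active -Requester Maintenance

#The component still shows as Inactive, because the HealthAPI entry was never cleared:
Get-ServerComponentState EX01 -Component FrontendTransport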

In fact, every time someone (or something) makes a component inactive, an entry gets added to the local server’s registry in the following location:

HKLM\SOFTWARE\Microsoft\ExchangeServer\v15\ServerComponentStates\<componentname>

[screenshot: the different entries for the FrontEndTransport component in the local Exchange server’s registry]

Each entry includes the following information, separated by a colon: [Unknown Value]:[State]:[TimeStamp]
By looking at the picture, you can see that the requester “Maintenance” has put this component into an active state at the given timestamp. FYI, the timestamp is saved in a binary format.

Now consider the following image:

[screenshot]

As you can see, the component has multiple entries. Luckily, all of them show that the component is Active. However, if one of the entries showed the component as inactive, it would effectively be inactive. Even if a more recent entry from another requester placed that component into an active state, it would remain inactive until the same requester switches it back to active.

Why is that, you might wonder. Simply because there are cases where you don’t want someone to override the component state set by someone or something else. This could be the case when someone placed a server into maintenance mode (requester = Maintenance) and, while the server is in maintenance, someone updates the Exchange server to the latest version. Exchange setup will actually place all components into an inactive state prior to starting the upgrade (requester = Deployment) and switch them back after the upgrade completes. If this action were to override the component state set by “Maintenance”, the server would effectively become operational again. Something you might not want in this case.

Script to query Component States

The output of the Get-ServerComponentState cmdlet does not show who placed a component into an inactive state, nor shows it if there’s more than one entry for that component. Of course, you could each time have a look in the local server’s registry. For convenience reasons, I put together a little script that will query the local registry and output the information on-screen:

[screenshot: sample script output]

Below you’ll find the code for the script. All you need to do is save it as a .ps1 file and run it from the Exchange Server that you want to query. Alternatively, you can download the script from here.

The current version of the script is a bit rough, but it works 🙂
In a future version, I’ll try to add remoting and clean up the code a bit by adding comments…

#Enumerate all component state keys in the local registry
$components = Get-ChildItem HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\ServerComponentStates\ -Recurse | Select-Object PSPath

foreach ($component in $components) {

    #Each value under the component key represents one requester's entry
    $componentstates = (Get-ItemProperty $component.PSPath | Select-Object * -ExcludeProperty PS*) | Get-Member -MemberType NoteProperty

    $i = 0

    do {

        #Each entry has the format <unknown>:<state>:<timestamp>
        $componentstate = ($componentstates[$i].Definition).Split("=")
        $statebreakdown = $componentstate[1].Split(":")

        switch ($statebreakdown[1]) {
            1 { $componentActualState = "Active" }
            0 { $componentActualState = "Inactive" }
        }

        #The timestamp is stored in binary format; convert it to local time
        $componentActualTime = [timezone]::CurrentTimeZone.ToLocalTime([datetime]::FromBinary($statebreakdown[2]))

        $obj = New-Object PSObject
        $obj | Add-Member -MemberType NoteProperty -Name Component -Value $($component.PSPath.Split("\"))[7]
        $obj | Add-Member -MemberType NoteProperty -Name Requester -Value $componentstates[$i].Name
        $obj | Add-Member -MemberType NoteProperty -Name State -Value $componentActualState
        $obj | Add-Member -MemberType NoteProperty -Name TimeStamp -Value $componentActualTime
        $obj

        $i++
    }
    while ($i -lt $componentstates.Count)

}
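
If you save the code above as, say, Get-ComponentStateEntries.ps1 (the file name is just a suggestion), you can run it on an Exchange server and filter the output like any other PowerShell object:

.\Get-ComponentStateEntries.ps1 | Where-Object {$_.State -eq "Inactive"} | Format-Table Component, Requester, TimeStamp -AutoSize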

What’s next?

In a follow-up article, I’ll discuss the Server Component States and why the entries also exist in Active Directory.


Finally! First Exchange 2013 Sizing Information released.

Yesterday, Microsoft’s CXP team (Customer Experience Team) released a lengthy blog post containing practical information on sizing Exchange 2013 – a moment we have all been waiting for since the product was released in October of last year.

With this new information, it will finally become possible to create a decent design and migration approach.

Unfortunately, there’s still no trace of the Mailbox Server Role Requirements Calculator which is – let’s face it – the reference tool when properly sizing an Exchange Server environment. The only mention in the article states that it’s coming somewhere later this quarter. Looks like we’re going to have to be a little more patient, doesn’t it?

Nonetheless, with the information from the article, you should be set on your way as it contains all the information you need to properly size a new Exchange 2013 Server environment. Please do keep in mind that this is Microsoft’s first guidance on this topic and is likely to change over time as customers and also Microsoft gain more experience with the product in more real-life deployments.

Over the course of the next weeks, I hope to provide you with some comparative figures between Exchange 2010 and Exchange 2013, but for now it looks like Exchange 2013 is quite hungry for memory and CPU. However, from an architecture point-of-view, that’s not all that surprising. After all, memory and CPU are relatively cheap these days.

Stay tuned for more info!

To read the original article containing the different calculations, have a look at the following article:

http://blogs.technet.com/b/exchange/archive/2013/05/06/ask-the-perf-guy-sizing-exchange-2013-deployments.aspx
