Load Balancing Exchange 2013 – part 2

Introduction

In the first part of this article, we talked about Load Balancing in general and took a closer look at what the advantages and disadvantages of simple layer 4 load balancing for Exchange 2013 were. Today we’ll dive a bit deeper into the remaining two ways of load balancing Exchange 2013: layer 4 with multiple namespaces and ‘traditional’ layer 7.

Layer 7 Load Balancing

Layer 7 load balancing offers some additional functionalities over Layer 4. Because traffic is being decrypted, the load balancer can now ‘read’ (‘understand’) the traffic that is coming through and take appropriate actions based on the type (or destination) of the traffic.

By decrypting traffic, the load balancer can read the destination of a packet, which allows you to distinguish between traffic for the different Exchange workloads while still using a single virtual service. Based on the type of workload, traffic could e.g. be sent to a different set of servers. However, this was not the most important reason to do Layer 7 load balancing. In Exchange 2010, traffic coming from a given client had to be persisted to the same endpoint (= the same Client Access Server). This meant that the initial connection could be sent to just about any CAS, but once the session was established, subsequent packets for that session had to be handled by the same CAS.

A load balancer typically has multiple ways to maintain this client <> server relationship. Depending on the make and model of your load balancer, you might see the vendor refer to this relationship as “persistence”, “stickiness”, etc. The most commonly used methods are:

  • Source IP
  • Session ID
  • Session Cookie

For a load balancer to be able to identify these things, it needs to be able to read the traffic, which forces traffic to be decrypted – except when using Source IP. Although Source IP affinity doesn’t require decryption, in some scenarios this type of affinity can cause an uneven distribution of load, especially when traffic is coming from behind a NAT device. Consider the following scenario:

Multiple internet-connected devices connect to your on-premises environment and, before hitting the load balancers, they go through a router/firewall or other NAT device which ‘changes’ (masks) the source IP addresses (80.70.60.50 and 80.70.60.40) with a single internal address (10.20.30.40). If your load balancers are configured to persist connections based on the source IP address, all connections will potentially end up at the same CAS. This is – of course – something you’d want to avoid. Otherwise, what purpose would your load balancer serve?

image

For this “problem” there are a few solutions:

  • You could disable NAT, which would reveal the client’s original IP address to the load balancer. Unfortunately, this isn’t always possible and depends on your network topology/routing infrastructure.
  • You could change the configuration of your load balancer to use something other than the Source IP to determine whether a connection should be persisted.

In the latter case, persistence based on the SSL Session ID is a good alternative. The load balancer will examine every “packet” that flows through it, read the session ID and, if it finds a match for a previously created session, send that packet to the same destination as before. While this works brilliantly, it induces a higher load on your load balancer because:

  1. the load balancer needs to inspect each packet flowing through, consuming more CPU cycles
  2. the load balancer needs to maintain a bigger “routing table” – a table in which each Session ID is mapped to a destination server – which consumes more memory.

As mentioned earlier, because you are decrypting the traffic, you can e.g. determine from the packet what the destination URL is. In essence, this allows you to define multiple virtual services (one for each workload) and have the load balancer choose which virtual service to forward a packet to. In this specific example, the virtual services are “hidden” from the end user.

Let’s pour that into an image; things might become clearer that way:

image

For external clients, there is still a single external URL (VIP) they connect to, but ‘internally’ there is a separate virtual service for each workload. Whenever a packet reaches the load balancer, it is read and, based on the destination URL, the appropriate virtual service is picked. The biggest advantage is that each virtual service can have its own set of health criteria. Because workloads are split, this also means that if e.g. OWA fails on one server, it won’t affect the other workloads for that server (as they belong to a different virtual service). While OWA might be down, other protocols remain healthy and the load balancer will continue forwarding packets to that server for those workloads.

With this in mind, we can safely conclude that Layer 7 load balancing clearly offers some benefits over simple Layer 4. However, it will cost you more in terms of hardware capacity for your load balancer. Given that a decently sized load balancer can cost a small fortune, it’s always nice to explore what alternatives you have. On top of that, this kind of configuration isn’t exactly “easy” and requires quite some work from a load balancer’s perspective. I’ll keep the configuration steps for a future article.

Layer 4 load balancing with multiple namespaces

As I showed in the first part of this article, Exchange 2013 greatly simplifies load balancing compared to Exchange 2010. Unfortunately, this simplification comes at a cost: you lose the ability to do a per-protocol health check when using Layer 4. And let’s face it: losing functionality isn’t something you like, right?

Luckily, there is a way to have the best of both worlds…

Combining the simplicity of Layer 4 with a way to mimic the Layer 7 functionality is what the fuss is all about. Because when using Layer 4 your load balancer has no clue what the endpoint for a given connection is, we need to find a way to make the load balancer aware of the endpoint without actually having to decrypt traffic.

The answer is in fact as simple as the idea itself: use a different virtual service for each workload but this time with a different IP address for each URL. The result would be something like this:

image

Each workload now has its own virtual service and therefore you also get a per-workload (per-protocol) availability. This means that, just as with Layer 7, the failure of a single workload on a server has no immediate impact on other workloads while at the same time, you maintain the same level of simplicity as with simple Layer 4. Sounds cool, right?!

Obviously, there is a “huge” downside to this story: you potentially end up with a bunch of different external IP addresses. Although there are also solutions for that, it’s beyond the scope of this article to cover those.

Most people – when I tell them about this approach – don’t like the idea of exposing multiple IPs/URLs to an end user. “For my users it’s already hard enough to remember a single URL”, they say. Who am I to argue? Personally, though, I don’t see this as an issue: there’s only one URL an end user will actually be exposed to, and that’s the one for OWA. All other URLs are configured automatically through Autodiscover. So even with these multiple URLs/IPs, your users really only need to remember one.

Of course, there’s also the element of the certificate, but depending on the load balancer you buy, that could still be cheaper than the L7 load balancer from the previous example.

Configuring Layer 4 load balancing with different namespaces is done the same way as configuring a single namespace; you just have to do it multiple times. The difference lies in the health checks you perform for each protocol, and those are free for you to choose. For now, the health-check options are relatively limited (although that also depends on the load balancer you use). But the future might hold some interesting changes: at MEC, Greg Taylor explained that Microsoft is working with some load balancer vendors to make their load balancers capable of “reading” the health status of an Exchange server as produced by Managed Availability. This would mean that your load balancer no longer has to perform any specific health checks itself and can rely on the built-in mechanisms in Exchange. Unfortunately there isn’t much more information available right now, but rest assured I’m following this closely.
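To give an idea of what the Exchange side could look like, the sketch below assigns a separate internal URL per workload, each pointing to its own VIP on the load balancer. The host names and server name are examples; adapt them to your own namespace design:

[sourcecode language="powershell"]
# Hypothetical per-workload namespaces – each FQDN resolves to its own virtual service (VIP)
Get-OwaVirtualDirectory -Server EX01 | Set-OwaVirtualDirectory -InternalUrl "https://owa.exblog.be/owa"
Get-EcpVirtualDirectory -Server EX01 | Set-EcpVirtualDirectory -InternalUrl "https://ecp.exblog.be/ecp"
Get-WebServicesVirtualDirectory -Server EX01 | Set-WebServicesVirtualDirectory -InternalUrl "https://ews.exblog.be/EWS/Exchange.asmx"
Get-ActiveSyncVirtualDirectory -Server EX01 | Set-ActiveSyncVirtualDirectory -InternalUrl "https://eas.exblog.be/Microsoft-Server-ActiveSync"
Get-OabVirtualDirectory -Server EX01 | Set-OabVirtualDirectory -InternalUrl "https://oab.exblog.be/OAB"
Get-OutlookAnywhere -Server EX01 | Set-OutlookAnywhere -InternalHostname "outlook.exblog.be" -InternalClientsRequireSsl $true
[/sourcecode]

Repeat this for every Client Access Server, and don’t forget that all of these names must be covered by your certificate.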

Differentiator: health checks

As said before, the configuration is identical to Layer 4 without multiple namespaces. For more information on how to configure a virtual service with a KEMP load balancer, please take a look at the first part of this article.

The key difference lies within the difference in health checks you perform against each workload. Even then, there is a (huge) difference in what load balancers can do.

Even though I like KEMP load balancers a lot, they are – compared to e.g. F5 – limited in their health checks. From a KEMP load balancer perspective, your health check rule for e.g. Outlook Anywhere would look something like this:

image

From an F5 perspective, a BIG-IP LTM allows you to “dive deeper” into the health checks. You can define a user account that is used to authenticate against rpcproxy.dll; only if that fails will the service be marked down, rather than relying on a “simple” HTTP GET.

On a side note: for about 90% of the deployments out there, the simple health check method proves more than enough…

Conclusion

Below you’ll find a table in which I’ve summarized the most important pros and cons per option.

Layer 4 (Simple)

  Pros:
  • Easy to set up
  • Fewer resources required on the load balancer

  Cons:
  • No per-protocol availability

Layer 7

  Pros:
  • Per-protocol availability
  • Single external IP/URL

  Cons:
  • More difficult to set up
  • More resources required on the load balancer

Layer 4 (Multiple Namespaces)

  Pros:
  • Easy to set up
  • Fewer resources required on the load balancer
  • Per-protocol availability

  Cons:
  • Multiple external IPs/URLs

If it were up to me, I’d go for Layer 4 with multiple namespaces. It’s clean, it’s simple to set up (both from a load balancer and an Exchange point of view) and it will most probably save me some money. Unless you only have a limited number of (external) IP addresses available, this seems like the way forward to me.

Nonetheless, this doesn’t mean you should kick out your fancy load balancer if you already have one. My opinion: work with what you’ve got and, above all, keep it simple!


Retrieving Exchange Autodiscover SCP information from AD via PowerShell

As part of the Autodiscover process, Outlook will query Active Directory in search of the Autodiscover SCP, which it uses to discover the Autodiscover URL to which it should send its request.

The configuration information for Autodiscover can easily be retrieved with the Get-ClientAccessServer cmdlet, which will show you important information like:

  • AutoDiscoverSiteScope
  • AutoDiscoverServiceInternalUri
  • WhenCreated

The reason I’m referring to these three items is the way Outlook handles the retrieved Autodiscover information. In a nutshell, it queries AD and retrieves a list of SCPs. This list is first ordered based on the Site Scope (entries matching the site in which the client resides come first). Entries without a site match are then ordered by creation date (older entries get queried first).

If you wish to find out more about this process, have a look at the following article: http://technet.microsoft.com/en-us/library/bb332063(v=exchg.80).aspx

What I wanted to do is “mimic” this process without running the Get-ClientAccessServer cmdlet. One of the reasons is that Get-ClientAccessServer depends on the Exchange Management Shell (or, more accurately, the Exchange snap-in).

Below you will find a code example that uses PowerShell’s ability to query AD directly (ADSI). It uses exactly the same query as Outlook does to retrieve a list of SCPs and then queries these SCPs for the information mentioned above. In the end, the information is displayed on screen.

This should give you a pretty good idea of what URL/server your Outlook client will connect to (first) during the AutoDiscover process. Enjoy!

[sourcecode language="powershell"]
$obj = @()

# Determine the configuration naming context via RootDSE (no extra modules required)
$configNC = ([ADSI]"LDAP://RootDSE").configurationNamingContext

$DSSearch = New-Object System.DirectoryServices.DirectorySearcher
# The same LDAP filter Outlook uses: SCPs carrying either of the two Autodiscover keyword GUIDs
$DSSearch.Filter = '(&(objectClass=serviceConnectionPoint)(|(keywords=67661d7F-8FC4-4fa7-BFAC-E1D7794C1F68)(keywords=77378F46-2C66-4aa9-A6A6-3E7A48B19596)))'
$DSSearch.SearchRoot = "LDAP://$configNC"
$DSSearch.FindAll() | %{

    $ADSI = [ADSI]$_.Path
    $autodiscover = New-Object psobject -Property @{
        Server = [string]$ADSI.cn
        Site = $ADSI.keywords[0]
        DateCreated = $ADSI.WhenCreated.ToShortDateString()
        AutoDiscoverInternalURI = [string]$ADSI.serviceBindingInformation
    }
    $obj += $autodiscover
}

Write-Output $obj | Select Server,Site,DateCreated,AutoDiscoverInternalURI | ft -AutoSize
[/sourcecode]


Using New-Migrationbatch to perform local mailbox moves in Exchange Server 2013

Along with a whole bunch of other improvements and new features, New-Migrationbatch is one of my favorite new additions to the Management Shell.

Previously, if you were moving mailboxes between mailbox servers in e.g. Exchange 2010, you had to use New-MoveRequest to start a mailbox move from one database/server to the other. If you had multiple mailboxes you wanted to move at once, you had several options:

    • Use Get-Mailbox and pipe the results along to New-Moverequest
    • Import a CSV-file and pipe the results along to New-Moverequest
    • Create an individual move request for each user

While these options are still valid in Exchange Server 2013, you now also have the ability to create a migration batch using the New-MigrationBatch cmdlet.

This cmdlet will allow you to submit new move requests for a batch of users between two Exchange servers, local or remote (other forest) and on-prem or in the cloud (on-boarding/off-boarding). If you have been performing migrations to Office 365, this cmdlet shouldn’t be new to you as it was already available there.

The nice thing about migration batches is that you can easily create them without having to start or complete them immediately. Although with a little effort you could have done this using New-MoveRequest as well, it’s not only greatly simplified; the use of migration batches also gives you additional benefits like:

  • Automatic Reporting/Notifications
  • Endpoint validation
  • Incremental Syncs
  • Pre-staging of data

Just as in Exchange Server 2010, the switchover from one database to the other is performed during the “completion phase” of the move. During this phase, the remainder of the items that hadn’t yet been copied from the source mailbox to the target mailbox are copied over, after which the user is redirected to his “new” mailbox on the target database. (For the purpose of staying on track with this article, I’ve oversimplified what happens during the “completion” phase.)

Creating a migration batch

To create a migration batch, you’ll need to have a CSV-file that contains the email addresses of the mailboxes you are going to move. These can be any of the email addresses that are assigned to the mailbox. There is – to my knowledge – no requirement to use the user’s primary email address.

Also, the file should have a ‘heading’ called “EmailAddress”:

image
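For reference, such a file could be created straight from the shell; the addresses and path below are examples:

[sourcecode language="powershell"]
# Hypothetical example – the file only needs an "EmailAddress" header
# followed by one address per line
@"
EmailAddress
user1@exblog.be
user2@exblog.be
"@ | Out-File C:\Files\Batch1.csv -Encoding ASCII
[/sourcecode]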

Next, open the Exchange Management Shell and run the following cmdlet:
[sourcecode language="powershell"]New-MigrationBatch -Name <name> -CSVData ([System.IO.File]::ReadAllBytes("<full path to file>")) -Local -TargetDatabase <dbname>[/sourcecode]
Running this cmdlet will create a local mailbox move between two Exchange servers in the same forest. However, it will not automatically start moving the mailboxes, as we haven’t used the –AutoStart parameter. Furthermore, the moves won’t be completed automatically either, because the –AutoComplete parameter wasn’t used.

Note   It’s important that you specify the full path to where the CSV file is stored (e.g. C:\Files\Batch1.csv). Otherwise the cmdlet will fail, because it will search for the file in the system32 folder by default.

Once the batch is created (and you didn’t use the –AutoStart parameter), you can launch the moves by running the following cmdlet:
[sourcecode language="powershell"]Get-MigrationBatch | Start-MigrationBatch[/sourcecode]
Please note that if you have multiple migration batches, this cmdlet will start all of them.

Polling for the status of a migration

You can query the current status of a migration on a per-mailbox basis using the Get-MigrationUserStatistics cmdlet. The cmdlet will return the current status of the mailbox being moved and the number of items that have been synced/skipped so far.
[sourcecode language="powershell"]Get-MigrationUser | Get-MigrationUserStatistics[/sourcecode]
image

Note   Alternatively, you can also use the –NotificationEmails parameter when creating the migration batch. This parameter allows you to specify an admin’s email address to which a status report is automatically sent. If you don’t use this parameter, no report is created/sent.
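Putting these parameters together, a fully automated batch could look like this (the batch name, paths and addresses are examples):

[sourcecode language="powershell"]
# Hypothetical example: create a batch that starts, syncs and completes automatically
# and mails a status report to the administrator
New-MigrationBatch -Name Batch2 -CSVData ([System.IO.File]::ReadAllBytes("C:\Files\Batch2.csv")) `
    -Local -TargetDatabase DB2 -AutoStart -AutoComplete -NotificationEmails admin@exblog.be
[/sourcecode]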

Completing the migration

If you didn’t specify the –AutoComplete parameter while creating the migration batch, you will have to manually start the “completion phase”. This can easily be done using the Complete-MigrationBatch cmdlet.
[sourcecode language="powershell"]Get-MigrationBatch | Complete-MigrationBatch[/sourcecode]
When you take a look at the migration statistics, you’ll see that the status will be “Completing”:

image

Once the mailbox moves have completed successfully, the status will change to “Completed”.

Summary

As you can see, New-MigrationBatch will certainly prove useful (e.g. if you want to pre-stage data without performing the actual switchover). Of course, there are other use cases as well: it’s the perfect companion for cross-forest moves and moves to/from Office 365, as it comes with numerous parameters that can make your life easier. For instance, the Test-MigrationServerAvailability cmdlet can be used to verify whether the remote host (to/from which you are migrating) is available and working correctly. This is especially useful for remote (cross-forest) mailbox moves or moves between on-prem and the cloud.

If you want to find out more about the cmdlet, go and have a look at the following page:

Alternatively, you could also run Get-Help New-MigrationBatch –Online from the Exchange Management Shell, which will take you to the same page!

Until later!

Michael


Configuring High Availability for the Client Access Server role in Exchange Server 2013 Preview

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under construction. Although it’s not very likely that big changes will occur between now and RTM, they might.

Following one of my previous articles in which I described how you could configure a Database Availability Group to achieve high availability for the Mailbox Server Role, we will now take a look at the process of how to configure high availability for the Client Access Server.

CAS Array

To achieve high availability, you create a load-balanced array of Client Access Servers, just like in Exchange Server 2010. Unlike before, though, layer-4 load balancing now becomes a viable option – even plain DNS round robin could do, although only in the smallest deployments where there’s no budget for a load balancer.

Layer-4 load balancing only takes the IP address (and TCP port) into account, and you are no longer required to configure “affinity” – the process whereby a connection, once established, had to be persisted through the same Client Access Server. This is because the CAS in Exchange Server 2013 doesn’t do any data rendering anymore: everything happens on the backend (the Mailbox servers).

I hear you thinking: does this mean we could use DNS load balancing (a.k.a. round robin)? The answer is yes and no. Yes, because it will load-balance between multiple Client Access Servers; no, because if a server fails, you’d have to remove it (manually) from DNS and wait for the record to time out on all the clients. While this might be a cost-effective way to get load balancing and a very, very basic form of high availability, it is not a real viable solution for most deployments…

Ever since the CAS Array was first introduced, it has been subject to quite a few misconceptions. A lot of them were addressed by Brian Day in a very interesting article he wrote. What I find is that people tend to mix up the RPC Client Access Array and the load-balanced array used for HTTP-based traffic. Yes, the use of the term CAS Array can be a little confusing. No, they’re not the same!

Now, since Exchange Server 2013 dropped RPC-over-TCP, I no longer see the purpose of creating the RPC Client Access Array object (New-ClientAccessArray). Instead, it suffices to configure multiple Client Access Servers with the same internal hostname for Outlook Anywhere.

To understand what happens, let’s take a look at the following examples:

In the case where you’re using two Client Access Servers in the same AD site, by default Exchange will “load balance” traffic between the two endpoints. This means that the 1st request will go to CAS1, the second to CAS2, the third to CAS1, etc. While this does provide some sort of load balancing, it doesn’t really provide high availability. Once Outlook is connected to a CAS, it will keep trying to connect to that same server, even after the server goes down. Eventually, it will try connecting to the other CAS, but in the meantime your Outlook client will be disconnected.

image

If we add a load balancer, we need to configure the internal hostname for Outlook Anywhere to a value shared between the Client Access Servers, for example outlook.exblog.be. This FQDN would then point to the VIP of the load balancer which, in turn, takes care of the rest. Because we’re using a load balancer, it will automatically detect a server failure and redirect incoming connections to the surviving node. Since no affinity is required, this failover happens transparently to the end user:

image

As explained before, this load balancer could be anything from simple DNS load balancing, to WNLB, to a full-blown hardware load balancer with all the bells and whistles. However, in contrast to Exchange 2010, most of the advanced options are no longer necessary…

Configuring Outlook Anywhere

To configure the internal hostname for Outlook Anywhere, run the following command for each Client Access Server involved:

[sourcecode language="powershell"]Get-OutlookAnywhere -Server <server> | Set-OutlookAnywhere -InternalHostname <fqdn>[/sourcecode]
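Afterwards, a quick sanity check could look like this:

[sourcecode language="powershell"]
# Verify that every Client Access Server now shares the same internal hostname
Get-OutlookAnywhere | ft Server,InternalHostname,ExternalHostname -AutoSize
[/sourcecode]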

Configuring the Load Balancer

As I explained earlier, layer 4 is now a viable option. Although this could mean just using DNS load balancing, you would typically want to use some load-balancing device (physical or virtual).

The benefit of using a load balancer over e.g. WNLB is that these devices usually give you more options for health-checking the servers/services you’re load balancing, allowing you better control over the load-balancing process. For example, you could check for a particular HTTP response code to determine whether a server is running or not. It definitely beats simple ICMP pings…!

The example below is based on the load balancer in my lab: a KEMP Virtual Load Master 1000. As you will see, it’s setup in the most basic way:

I’ve configured no persistence and – because it’s a lab – I’m only checking the availability of the OWA virtual directory on the Exchange servers. Alternatively, you could do more complex health checks. If you’re looking for more information on how to configure a KEMP load balancer, I’d suggest you take a look at Jaap Wesselius’ blogs here and here. Although these articles describe the configuration of a Load Master in combination with Exchange 2010, the process itself (except for the persistence settings, etc.) is largely the same for Exchange Server 2013. Definitely worth the read!

image

image

image


Configuring Database Availability Groups in Exchange Server 2013 Preview

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under construction. Although it’s not very likely that big changes will occur between now and RTM, they might.

The process of creating a Database Availability Group (DAG) in Exchange Server 2013 (Preview) is largely the same as in Exchange Server 2010. You can choose to create a DAG via either the Exchange Administrative Center (GUI) or through the Exchange Management Shell (PowerShell).

I prefer using the Exchange Management Shell over the EAC, as it provides you with more information about the process.

Exchange Management Shell

To configure a Database Availability Group using the EMS, type in the following commands. Replace the example values with the ones that suit your environment:

[sourcecode language="powershell"]New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer SRV01 -WitnessDirectory "C:\FSW" -DatabaseAvailabilityGroupIPAddresses 192.168.20.110[/sourcecode]

This command will create the DAG. As part of this process, a computer object, also known as the Cluster Name Object, will automatically be created:

image

Note   In order for this process to complete successfully, you need to have the appropriate permissions on the container in which the object is created. By default this will be the “Computers”-container. However, it is possible that your Active Directory has been reconfigured to use another container/OU as default location for new computer accounts. Have a look at http://support.microsoft.com/kb/324949 for more information.

Another way to get around the possible issue of permissions, is to create the Cluster Name Object (CNO) upfront. This process is also called “pre-staging”. Doing so will allow you to create the object up-front with another account (that has the appropriate rights) so that you don’t run into any issues when configuring your DAG.

To pre-stage the CNO, complete the following tasks:

  1. Open Active Directory Users & Computers, navigate to the OU in which you want to create the object, right-click and select New > Computer:

    image

  2. Enter a Computer Name and click OK to create the account:

    image

  3. Right-click the new account and select Properties. Open the Security tab and add the following permissions:
     – Exchange Trusted Subsystem – Full Control
     – First DAG Node (Computer Account) – Full Control

image     image

More information on how to pre-stage the CNO can be found here: http://technet.microsoft.com/en-us/library/ff367878.aspx
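If you prefer doing this from the shell, the CNO could also be pre-staged along these lines. The object name, OU and domain are examples, and this sketch assumes the ActiveDirectory module and the dsacls tool are available:

[sourcecode language="powershell"]
# Pre-create the CNO in a disabled state (the DAG creation process will enable it)
New-ADComputer -Name "DAG01" -Path "OU=Exchange,DC=exblog,DC=be" -Enabled $false

# Grant Full Control ("GA" = generic all) on the object to the Exchange Trusted Subsystem group
dsacls "CN=DAG01,OU=Exchange,DC=exblog,DC=be" /G "EXBLOG\Exchange Trusted Subsystem:GA"
[/sourcecode]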

Note   If your DAG has multiple nodes across different subnets, you will need to assign the DAG an IP address in each subnet. To do so, separate the IP addresses with commas:

[sourcecode language="powershell"]New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer SRV01 -WitnessDirectory "C:\FSW" -DatabaseAvailabilityGroupIPAddresses 192.168.20.110,192.168.30.110,192.168.40.110[/sourcecode]

Once the command has executed successfully, you can add mailbox servers to the DAG. You will need to run this command for each server you want to add:

[sourcecode language="powershell"]Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX02[/sourcecode]

Alternatively, you can also add all/multiple mailbox servers at once to the DAG:

[sourcecode language="powershell"]Get-MailboxServer | %{Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer $_.Name}[/sourcecode]
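To verify the result, you could query the DAG afterwards (DAG01 is the example name used above):

[sourcecode language="powershell"]
# Show the members, witness server and current primary active manager of the DAG
Get-DatabaseAvailabilityGroup -Identity DAG01 -Status | fl Name,Servers,WitnessServer,PrimaryActiveManager
[/sourcecode]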

Adding Database Copies

Now that your DAG has been created, you can add copies of mailbox databases to other mailbox servers. For example, to add a copy of DB1 to server EX02, you would run the following command:

[sourcecode language="powershell"]Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX02[/sourcecode]
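Once the copy has been seeded, its health can be checked as follows (DB1 and EX02 follow the example above):

[sourcecode language="powershell"]
# A healthy copy shows Status "Healthy" and low copy/replay queue lengths
Get-MailboxDatabaseCopyStatus -Identity DB1 | ft Name,Status,CopyQueueLength,ReplayQueueLength -AutoSize
[/sourcecode]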

Stay tuned! In an upcoming article, I will be talking about configuring high availability for the Client Access infrastructure in Exchange Server 2013 as well as a topic on high availability (more general).


Configuring Certificates in Exchange Server 2013 Preview

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under construction. Although it’s not very likely that big changes will occur between now and RTM, they might.

One of the tasks you will have to complete after installing Exchange Server 2013 is configuring certificates. Most Exchange-related traffic (including client traffic) is handled via HTTPS and thus requires some additional configuration to work properly.

Out of the box, Exchange Server 2013 will be using self-signed certificates. While these certificates (and the associated error messages) might be acceptable in a test-lab, they won’t be in production.

Although there is already a lot of guidance on this topic for Exchange Server 2010, I still regularly come across issues because of wrongly configured certificates. Hence, I decided to create this article to somewhat clarify the subject.

What certificates can I use?

As in Exchange Server 2010, Exchange Server 2013 can use either SAN certificates (recommended) or wildcard certificates.

What namespaces should I use?

The change in the client access model in Exchange 2013 (from RPC-over-TCP to RPC-over-HTTP) has a positive impact on the design of your namespaces: ultimately, you need fewer namespaces than before.

Amongst the namespaces that you (potentially) need are:

  • Client Access Array (Array of Load Balanced CAS servers)
  • Autodiscover
  • OWA Failback URL*
    * Whether or not you need the Failback URL depends entirely on the setup of your environment. I will discuss the design and setup of a highly available Exchange Server 2013 environment in another article soon.

Configuring Certificates using EAC

If you aren’t re-using a certificate that you exported from another server earlier, you will have to create a new certificate request first. This request will provide you with a CSR (certificate signing request) which you can use to generate the actual certificate.

  1. Open the EAC, navigate to Servers > Certificates and click the “+” sign (New Certificate Wizard):image
  2. Click create a request for a certificate from a certification authority and click Next:

    image
  3. Type in a name for the certificate and click Next. Although you could enter just about anything, it’s always a good idea to make this name as descriptive as possible:image
  4. If you want to use a wildcard certificate, click the checkbox and enter your root domain. Then click Next.
    imageIf you are not using a wildcard certificate, you will be presented with the following screens first. On the first page, you can define which namespaces you are going to use for the different Exchange services:image

    After having clicked Next, you can manually add or remove namespaces to be included on the certificate. Once ready, click Next.
    image

  5. Select on which server to store the certificate and click Next:image
  6. Enter your organization details and click Next.image
  7. Enter the location where you want to save the certificate. The location should be entered as a UNC path. Then click Finish:SNAGHTML283ffc

Before continuing, verify that the certificate request file has been created correctly. The file contains a CSR which you can use to request your certificate from your CA. This CA can be a private or a public one.

Your CSR will look something like this:

-----BEGIN NEW CERTIFICATE REQUEST-----
MIID5TCC5s0C5Q5w5j5SMB5G51U55wwJZXhibG9nLmJlMQ8wDQYDVQQLD5ZF5GJs
b2cxCz5JBgNVB5oM5klUMQ8wDQYDVQQHD5ZW5WNodGUxGD5WBgNVB5gMD1dlc3Qt
VmxhYW5kZXJlbj5LM5kG51U5BhMCQkUwgg5iM50GCSqGSIb3DQ5B5QU554IBDw5w
gg5K5oIB5QDcTxGgG8NTjxR5boNJii76SOXwz1Zs75ZwFKj3bSn8mhd55+u4uwn5
R1zrMvV55+35ccdn8OPxbmPJISK9q58O750nOU+tM5kRxmHR571d5lRvL6MtqYsS
337hshIFMOFNOo9Ln/U05WrGJcxCnC52tFlzhBbRLzBWwFQCHnCjS13o4j2PdF0d
gxgQ0s/N5wmWW53L1Vh85+Ri58zlC2+tskocRymVorldM3yDYlm9ZgCxX750/5QT
…..

-----END NEW CERTIFICATE REQUEST-----

Once you have received the certificate from your CA, you can continue to configure the certificate.

  1. From the certificate overview window, click Complete in the actions pane.
  2. Enter the location where you stored the certificate. The location should be entered in UNC format. Then click OK.
  3. The certificate request should now complete successfully. Verify that the certificate shows up in the certificate list and that its status is valid.

Although the certificate has been installed on the server, it has not been activated yet. Before a certificate is used, you need to assign services to it.

  1. From the certificate overview, select the newly added certificate and click Edit.
  2. On the properties page, select Services and select the appropriate services (e.g. IIS). Afterwards, click Save.
  3. Verify that the certificate is installed and configured correctly by browsing to Outlook Web App. This should not throw a certificate warning. If there is a certificate warning, either something went wrong or your certificate does not contain all the namespaces you’re using.

If you have multiple servers, you can either repeat the process above for each server or, if you will be using a single certificate for all servers (e.g. when using a wildcard certificate), import the newly added certificate on the other servers.

  1. From the certificate overview, select the certificate and click the three dots (…). From there, select Export Exchange Certificate.
  2. Enter a path where the certificate should be saved and type in a password to protect the certificate. The path should be entered in UNC format. Afterwards, click OK.
  3. Verify that the certificate was exported successfully.

Next, you will have to import the certificate on each server that will be using the certificate.

  1. From the certificate overview, select the certificate and click the three dots (…). From there, select Import Exchange Certificate.
  2. Enter the UNC path where you previously exported the certificate to and provide the password you chose earlier. Click Next.
  3. Select the server(s) to which you want to import the certificate. Then click Finish.
  4. Verify that the certificate has successfully been imported to the different servers by selecting each server from the drop-down list on the certificate overview page. The imported certificate should be in the list and its status should be valid.

All that’s left to do now is to assign services to the certificate on each server. The process is identical to the one described before.

Configuring certificates using PowerShell

The more servers you have, the longer it takes to perform these actions using the EAC. Alternatively, you can use PowerShell to request, import and export Exchange certificates:

  1. Open the Exchange Management Shell and type in the following commands. This will create the certificate request (CSR) which you can use to request a certificate from your CA:[sourcecode language="powershell"]$newcert = New-ExchangeCertificate -GenerateRequest -SubjectName "c=BE,o=EXBLOG,cn=webmail.exblog.be" -DomainName "autodiscover.exblog.be" -PrivateKeyExportable $true
    $newcert | Out-File c:\certreq.txt[/sourcecode]


    c= is used to denote the country by its international code, e.g. “BE” = Belgium
    o= is used to denote the organization e.g. “Exblog”
    cn= represents the common name of the certificate.
    DomainName represents the (Subject Alternative) Names
    to be included on the certificate.

  2. Once you’ve received the certificate from your CA, it’s time to import/install it onto the server (= completing the request):[sourcecode language="powershell"]Import-ExchangeCertificate -FileData ([byte[]]$(Get-Content -Path "c:\CertificateFromCA.cer" -Encoding Byte -ReadCount 0))[/sourcecode]


  3. Next, we need to assign services to this certificate:[sourcecode language="powershell"]Get-ExchangeCertificate -Thumbprint <thumbprint> | Enable-ExchangeCertificate -Services IIS,SMTP[/sourcecode]


    Note   While enabling the certificate you will get a warning that this will actually overwrite the existing certificate. You can safely overwrite the default self-signed certificate.

  4. Now that the certificate has been installed on the first server, we need to export it and import it on our other servers:[sourcecode language="powershell"]$certexport = Get-ExchangeCertificate -DomainName "webmail.exblog.be" | Export-ExchangeCertificate -BinaryEncoded:$true -Password (Get-Credential).Password
    Set-Content -Path c:\cert_export.pfx -Value $certexport.FileData -Encoding Byte[/sourcecode]


    Note   When running the first command, you will be prompted for a username and password. You can type whatever you want for the username, as this value is ignored. However, remember the password, as you will use it later to import the certificate on another server.

    To import the certificate into another server, use the following command:

    [sourcecode language="powershell"]Import-ExchangeCertificate -FileData ([byte[]](Get-Content -Path <path_to_exported_certificate> -Encoding Byte -ReadCount 0)) -Password (Get-Credential).Password -Server <servername>[/sourcecode]


    Note   You will be prompted for a username and password. You can type any value for the username, but the password should match the one you selected earlier while exporting the certificate.

    If you have multiple servers to which you want to import the certificate, you can script the execution of the command above like this:

    [sourcecode language="powershell"]$password = (Get-Credential).Password
    $servers = Get-ClientAccessServer
    foreach($server in $servers){
        Import-ExchangeCertificate -FileData ([byte[]](Get-Content -Path <path_to_exported_certificate> -Encoding Byte -ReadCount 0)) -Password $password -Server $server.Name
    }[/sourcecode]

  5. Enable the newly imported certificate by assigning services to it. Do this for each server to which you imported the certificate:[sourcecode language="powershell"]Get-ExchangeCertificate -DomainName "webmail.exblog.be" | Enable-ExchangeCertificate -Services IIS,SMTP[/sourcecode]

Clearly, PowerShell is the better choice once you have multiple servers in your organization: in just a few steps you created, exported and imported a certificate on multiple servers.

Exchange 2013 How-To's

How To: Deploy Exchange Server 2013

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it’s not very likely that big changes will occur between now and RTM, they might.

In this article, we’ll have a closer look at how to install Exchange Server 2013.

Before continuing, please make sure that you have fulfilled all the required prerequisites and that you have prepared your Active Directory. For more information, take a look at my other article.

For those who have already been working with Exchange Server 2010, you’ll notice that the process itself is pretty similar and as straightforward as it was before.

To start the setup of Exchange Server 2013, open PowerShell on the server you want to install and type in the following commands:

For the Mailbox Server role only:

./setup.exe /mode:install /role:m /IAcceptExchangeServerLicenseTerms

For the Client Access Server role only:

./setup.exe /mode:install /role:c /IAcceptExchangeServerLicenseTerms

For both Mailbox Server & Client Access Server role:

./setup.exe /mode:install /role:m,c /IAcceptExchangeServerLicenseTerms

Note   Note the use of setup.exe and the /IAcceptExchangeServerLicenseTerms switch. Setup.com (which was used before) is now deprecated and has been replaced by setup.exe. The licensing switch is also new in Exchange Server 2013 and is required to launch the setup. Many admins will welcome the latter, as it saves you some time with each deployment: you no longer have to wait for the licensing message to display and time out!

After you launch one of the commands from above, setup will kick off. Depending on what you are installing, the output might look like this:


Once setup completes successfully, restart the computer and you’re good to go!

Installation Logs

Setup creates a log file, “ExchangeSetup.txt”, under the root of the system drive (as was the case in Exchange Server 2010):

C:\ExchangeSetupLogs


I recommend that you take a look at the log file after the installation to check whether everything completed successfully. In case you encounter any issues, this is probably also the first place to look for more information.
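If you prefer not to read through the whole log, a quick way to surface problems is to search it for error lines. A small sketch, using the default log location mentioned above (the wildcard covers the log file regardless of its extension):

```powershell
# Scan the Exchange setup log(s) for lines mentioning errors.
# Adjust the path if your system drive is not C:.
Select-String -Path "C:\ExchangeSetupLogs\ExchangeSetup*" -Pattern "error" |
    Select-Object -First 20 |
    ForEach-Object { $_.Line }
```

If the command returns nothing, no error lines were found and the installation most likely completed cleanly.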


Installing Exchange Server 2013 Preview Prerequisites

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it’s not very likely that big changes will occur between now and RTM, they might.

In this article, we will have a first look at the different system prerequisites for installing Exchange Server 2013.

Supported Platforms

You can install Exchange Server 2013 either on Windows Server 2008 R2 or Windows Server 2012. Just as with Exchange Server 2010, you will need the enterprise edition of Windows Server 2008 R2 if you are going to deploy a Database Availability Group (DAG).

As recently announced, Windows Server 2012 does not include an Enterprise Edition anymore. The Standard Edition now also covers all product features. The number of licenses you need depends on the number of physical processors your system will have.

For a nice and clear overview of the licensing changes, please have a look at Aidan Finn’s blog post.

Preparing Active Directory

On the computer you are going to use to prepare Active Directory, you need at least the Remote Server Administration Tools for Active Directory installed. To install the tools, open up PowerShell and type in the following commands:

Import-Module ServerManager

Add-WindowsFeature RSAT-ADDS (if you’re running WS2008R2)

Install-WindowsFeature RSAT-ADDS (if you’re running WS2012)

You will also need the Microsoft .NET Framework 4.5 and Windows Management Framework 3.0. They are both already included on a system running Windows Server 2012.
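A quick way to verify that Windows Management Framework 3.0 is in place is to check the PowerShell version, since WMF 3.0 ships with PowerShell 3.0:

```powershell
# WMF 3.0 includes PowerShell 3.0, so a major version of 3 (or higher)
# indicates the framework is installed.
$PSVersionTable.PSVersion
```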

Note   Make sure that you have the appropriate permissions to execute the tasks listed below. Permissions include membership of the Schema Admins and Enterprise Admins groups.

Just as with Exchange Server 2010, the following tasks need to be executed to prepare Active Directory:

  • Update the schema
  • Prepare AD
  • Prepare the domain(s)

To update the schema, launch a PowerShell console and type in the following:

./setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

Note the use of “setup.exe”. Setup.com, which was used before, has been deprecated. There is also a new switch, “/IAcceptExchangeServerLicenseTerms”. It prevents the mandatory time-out which displays the warning that, by waiting, you agree with the license terms.

Next, you’re up for preparing AD:

./setup.exe /PrepareAD /ON:<exchange organization name>

This command will prepare Active Directory and, amongst other things, configure the appropriate permissions. This includes the creation of the Microsoft Exchange Security Groups (if they do not exist yet).

Lastly, you need to prepare the domain(s). As part of this task, setup will create/update the Microsoft Exchange System Objects container, update the objectVersion attribute and create a domain local group in the targeted domain called “Exchange Install Domain Servers”.
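For reference, the corresponding setup switches look like this; a sketch, where `<domain FQDN>` is a placeholder and /PrepareAllDomains covers every domain in the forest at once:

```powershell
# Prepare a single domain:
./setup.exe /PrepareDomain:<domain FQDN> /IAcceptExchangeServerLicenseTerms

# Or prepare all domains in the forest in one go:
./setup.exe /PrepareAllDomains /IAcceptExchangeServerLicenseTerms
```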

System Prerequisites for Mailbox Server or Mailbox/Client Access Server (combined):

First, on the computer where you are going to install Exchange Server 2013, run the following commands (PowerShell) to install the required roles and features:

For Windows Server 2012:

Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation

For Windows Server 2008 R2:

Add-WindowsFeature Desktop-Experience, NET-Framework, NET-HTTP-Activation, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Web-Server, WAS-Process-Model, Web-Asp-Net, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI

After you have installed the operating system roles and features, install the following items.

For Windows Server 2008 R2:

  • Microsoft .NET Framework 4.5
  • Windows Management Framework 3.0
  • Microsoft Unified Communications Managed API 4.0 Core Runtime (64-bit)
  • Microsoft Office 2010 Filter Pack 64 bit (Mailbox Server role)
  • Microsoft Office 2010 Filter Pack SP1 64 bit (Mailbox Server role)
  • Microsoft KB974405 (Windows Identity Foundation)
  • Microsoft KB2619234
  • Microsoft KB2533623

For Windows Server 2012:

  • Microsoft Office 2010 Filter Pack 64 bit (Mailbox Server role)
  • Microsoft Office 2010 Filter Pack SP1 64 bit (Mailbox Server role)
  • Microsoft Unified Communications Managed API 4.0 Core Runtime (64-bit)

Once you have installed the required prerequisites, execute the following steps:

  • Uninstall Microsoft Visual C++ 11 Beta Redistributable (x64)
    • Open Control Panel > Programs and Features.
    • Select Visual C++ 11 Beta Redistributable (x64) – 11.0.50531 and then click Uninstall.
    • In Microsoft Visual C++ 11 Beta setup, click Uninstall.
    • When Microsoft Visual C++ 11 Beta is uninstalled, click Close.

If you are running Windows Server 2008 R2, you should also execute the following step. Please note that you should execute this step after having uninstalled Microsoft Visual C++ 11 Beta Redistributable (x64) and before you install Exchange Server 2013:

  • Register ASP.Net with .NET Framework 4.5 in IIS.

Open Command Prompt and type in the following commands:

%SystemDrive%\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -ir -enable

IISReset

System Prerequisites Client Access Server only:

First, on the computer where you are going to install Exchange Server 2013, run the following commands (PowerShell) to install the required roles and features:

For Windows Server 2012:

Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation

For Windows Server 2008 R2:

Import-Module ServerManager

Add-WindowsFeature Desktop-Experience, NET-Framework, NET-HTTP-Activation, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Web-Server, WAS-Process-Model, Web-Asp-Net, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI

After you have installed the operating system roles and features, install the following items.

For Windows Server 2008 R2:

  • Microsoft .NET Framework 4.5
  • Windows Management Framework 3.0
  • Microsoft Unified Communications Managed API 4.0 Core Runtime (64-bit)
  • Microsoft KB974405 (Windows Identity Foundation)
  • Microsoft KB2619234
  • Microsoft KB2533623

For Windows Server 2012:

  • Microsoft Unified Communications Managed API 4.0 Core Runtime (64-bit)

Once you have installed the required prerequisites, execute the following steps:

  • Uninstall Microsoft Visual C++ 11 Beta Redistributable (x64)
    • Open Control Panel > Programs and Features.
    • Select Visual C++ 11 Beta Redistributable (x64) – 11.0.50531 and then click Uninstall.
    • In Microsoft Visual C++ 11 Beta setup, click Uninstall.
    • When Microsoft Visual C++ 11 Beta is uninstalled, click Close.

If you are running Windows Server 2012, you should manually create a firewall rule. This rule will allow the Mailbox Server(s) to access the Client Access Servers’ registry:

    • Open Control Panel > Windows Firewall.
    • Click Advanced Settings.
    • In Windows Firewall with Advanced Security, click Inbound Rules and then click New Rule.
    • Select Port and then click Next.
    • Select TCP, and in Specify local ports, type 139. Click Next.
    • Select Allow the connection and then click Next.
    • Make sure Domain, Private, and Public are selected and then click Next.
    • Enter a name and description for the new rule and then click Finish.
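On Windows Server 2012, the same rule can also be created from PowerShell using the NetSecurity module; a sketch, where the display name is just an example:

```powershell
# Allow inbound TCP 139 so Mailbox Servers can reach this
# Client Access Server's registry. The display name is an example.
New-NetFirewallRule -DisplayName "Exchange - Remote Registry (TCP 139)" `
    -Direction Inbound -Protocol TCP -LocalPort 139 `
    -Action Allow -Profile Domain,Private,Public
```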

If you are running Windows Server 2008 R2, you should also execute the following step. Please note that you should execute this step after having uninstalled Microsoft Visual C++ 11 Beta Redistributable (x64) and before you install Exchange Server 2013:

  • Register ASP.Net with .NET Framework 4.5 in IIS.

Open Command Prompt and type in the following commands:

%SystemDrive%\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -ir -enable

IISReset

Once you’ve completed the steps above, you are ready for deploying Exchange Server 2013 Preview!


Checking for administrative permissions in PowerShell

Some PowerShell cmdlets require you to have administrative permissions to run them. If you’re creating a script and you’re using such a cmdlet (e.g. writing a file to the root), it would be nice to check up front whether the user who is running the script has the required permissions. After all, what good is it to run the script anyway, only to have it throw an error?

Fortunately, there’s an easy way to do this in PowerShell. Add the following code to your script and that’s it: a simple if-statement that stops the script if you don’t have the required permissions. If you do have administrative permissions, nothing happens and the script continues processing.

If (-NOT ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator"))
{
    Write-Warning "You do not have sufficient permissions to run this script!`nPlease re-run this script as an Administrator!"
    Break
}

What happens is that we query the current identity (the user who is running the script) and then check whether that identity is part of the built-in role “Administrator”.
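As a side note: if your scripts only need to target PowerShell 4.0 or later, the same requirement can be declared with a single #Requires statement at the top of the script:

```powershell
#Requires -RunAsAdministrator
# In PowerShell 4.0 and later, this line makes PowerShell refuse to run
# the script in a non-elevated session before executing anything else.
Write-Output "Running elevated."
```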

Have fun!

Cheers,

Michael

P.S.: Thanks to Microsoft’s Scripting Guy “Ed Wilson”. Check out his blog: http://blogs.technet.com/b/heyscriptingguy/

How-To's PowerShell

Get a list of installed applications using PowerShell

I was recently at a customer who needed to get a report of the software that was running on each computer. Since they did not have (or at least had not fully deployed) a solution that could do that for them (e.g. System Center Configuration Manager), I proposed to write a PowerShell script which would remotely check each computer using WMI.

Usage

The script accepts a single parameter to indicate the computer you want to get a list of installed applications from:
[sourcecode language="powershell"]Get-InstalledApplications -Computer <computername>[/sourcecode]
The output can be formatted in different ways and even be exported to a file or printed on screen:
[sourcecode language="powershell"]Get-InstalledApplications -Computer <computername> | Out-File <file>
# or
Get-InstalledApplications -Computer <computername> | Out-GridView[/sourcecode]
The script

Just copy-paste the code below and save it as a .PS1 file. You can also add the script to your profile so that the function is loaded whenever you open PowerShell.
[sourcecode language="powershell"]
<#
.Synopsis
Get a list of the installed applications on a (remote) computer.
.DESCRIPTION
Using WMI (Win32_Product), this script will query a (remote) computer for all installed applications and output the results.
If required, these results can be exported or printed on screen.
Please keep in mind that you need to have access to the (remote) computer’s WMI classes.
.EXAMPLE
To simply list the installed applications, use the script as follows:

Get-InstalledApplications -computer <computername>

.EXAMPLE
If required, the output of the script can be modified. For instance, viewing the results on screen:

Get-InstalledApplications -computer <computername> | Out-GridView
#>
function Get-InstalledApplications
{
    [CmdletBinding()]
    [OutputType([PSObject])]
    Param
    (
        # Defines which computer you want to see the inventory for
        [Parameter(Mandatory=$true,
                   ValueFromPipelineByPropertyName=$true,
                   Position=0)]
        $computer
    )

    Process
    {
        # Query the (remote) computer's Win32_Product WMI class
        $win32_product = @(Get-WmiObject -Class 'Win32_Product' -Computer $computer)

        foreach ($app in $win32_product){
            $application = New-Object PSObject -Property @{
                Name    = $app.Name
                Version = $app.Version
            }

            Write-Output $application | Select-Object Name,Version
        }
    }
}
[/sourcecode]
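A note on the design: querying Win32_Product is convenient but slow, and it triggers a consistency check of every installed MSI package on the target machine. If that is a concern, a common alternative is to read the Uninstall keys from the remote registry instead. A sketch, assuming the Remote Registry service is reachable and "PC01" is a placeholder computer name:

```powershell
# Sketch: list installed applications from the Uninstall registry keys
# of a remote computer, covering both 64-bit and 32-bit entries.
$computer = "PC01"
$paths = 'SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall',
         'SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall'

# Open HKEY_LOCAL_MACHINE on the remote computer
$hklm = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $computer)

foreach ($path in $paths) {
    $key = $hklm.OpenSubKey($path)
    if ($key -eq $null) { continue }
    foreach ($subkeyName in $key.GetSubKeyNames()) {
        $subkey = $key.OpenSubKey($subkeyName)
        $name = $subkey.GetValue('DisplayName')
        # Entries without a DisplayName are updates/components; skip them
        if ($name) {
            New-Object PSObject -Property @{
                Name    = $name
                Version = $subkey.GetValue('DisplayVersion')
            }
        }
    }
}
```

This approach returns in seconds even on machines with hundreds of applications, at the cost of occasionally missing per-user installations.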
