Microsoft is retiring the MCSM/MCA Program…

After having written about my experience of going through the MCSM program, I couldn’t resist writing down how I feel about Microsoft’s latest decision to kill the MCSM/MCA program.

For those who aren’t fully up to date about what happened, let me enlighten you first. Early Saturday morning, (Friday evening if you’re in the US), Microsoft sent out the following statement to all existing (and aspiring) Certified Masters and Architects:

“We are contacting you to let you know we are making a change to the Microsoft Certified Master, Microsoft Certified Solutions Master, and Microsoft Certified Architect certifications. As technology changes so do Microsoft certifications and as such, we are continuing to evolve the Microsoft certification program. Microsoft will no longer offer Masters and Architect level training rotations and will be retiring the Masters level certification exams as of October 1, 2013. The IT industry is changing rapidly and we will continue to evaluate the certification and training needs of the industry to determine if there’s a different certification needed for the pinnacle of our program.”

Have a look at the blog of Neil Johnson, who also teaches during the MCSM rotation, for the full email.

As you can see, for someone who recently went through the program, that’s not the kind of email I’d hoped to read any time soon. The reactions from the community are mainly drenched in disbelief and anger, which perfectly reflects how I feel about the decision.

Many people have already expressed their displeasure. Although Twitter is where most of the talk is happening, some others – like Paul Robichaux and Marcel van den Berg – have also blogged about their thoughts:

At this moment, all that we have is the email. Nothing more. The fact that Microsoft sent out the email right before a long weekend (Monday is apparently Labor Day in the US) doesn’t really help. Besides: who does that anyway? Sending out an email like that and then staying absent from any discussion whatsoever? Maybe they hoped that the storm would settle down by Tuesday? There are many reasons why Microsoft might have chosen to kill the program. Most likely it’s cost-related. There’s no doubt that running the MCSM/MCA program costs a lot of money; maybe too much for what they get in return from a direct revenue point of view? If so, raising the price for the certification was no option either; it was already expensive as it was. Though I stand by my earlier statement where I said it’s more than worth it. Nonetheless, MCSM/MCA perhaps never became as big as Microsoft hoped for? Anyway. It doesn’t matter, does it?

Microsoft has lately been making a lot of rather strange decisions. A lot of IT Pros (including myself) are wondering what they (Microsoft) are trying to achieve. First, they decided to kill TechNet subscriptions, now the MCSM/MCA program. The question is what will be next…? They are, for sure, making it very difficult to keep on advocating for them…

I cherish no hope that Microsoft will reverse their decision, but I would like to have a more decent explanation as to why they made this decision and what they are up to next. This is the very least they can do for all those who have invested a lot of money and time to go through the program…

Until later,

– a very disappointed – Michael


Exchange Server 2013 reaches General Availability!

Only a few moments ago, Microsoft’s Exchange Product team announced that Exchange Server 2013 reached General Availability.

In short, this means that the product is now available for everyone to download and evaluate. Unfortunately, you can still only deploy Exchange 2013 in a greenfield install (no prior versions of Exchange).
However, the ETA for Exchange 2010 SP3 and the Exchange 2007 coexistence rollup was adjusted to Q1 2013 instead of H1 2013. Good news!

Read more about it in the official announcement here:


Load Balancing Exchange 2013 – part 1


In one of my earlier posts, I briefly explained what load balancing in Exchange 2013 looks like and how you could set up basic load balancing using a KEMP LoadMaster, based on a blog post that Jaap Wesselius wrote. Today I will elaborate a bit more on the different load balancing options you have, along with how to configure a KEMP LoadMaster accordingly. In a future blog post (which I’m working on right now), I will explain how you can configure an F5 BIG-IP LTM to achieve the same results.

Load Balancing Primer

One of the ‘biggest’ changes to load balancing in Exchange 2013 is that you can now use simple, plain Layer 4 load balancing. The more ‘complex’ and more expensive Layer 7 load balancing that you had to use in Exchange 2010 is no longer a hard requirement. However, this doesn’t mean that you cannot use Layer 7 load balancing. But more about that later (part 2).

In a nutshell, Layer 7 load balancing means that you can do application-aware load balancing. You achieve this by decrypting SSL traffic, which allows the load balancing device to ‘read’ the traffic that’s passing through. The load balancer will analyze the headers and, based on what it ‘finds’ (and how it’s configured), it will take an appropriate action (e.g. use another virtual service or another load balancing option).

Traffic decryption uses additional resources on your load balancer. The impact of the number of users connecting through your load balancing device is usually (much) higher in that case. Also, not all load balancers offer this functionality by default: some require you to purchase an additional license (or hardware) before being able to decrypt SSL traffic. Companies might choose to re-encrypt traffic that is sent to Exchange (SSL Bridging) for security reasons (e.g. to avoid sniffing). In Exchange 2010 you could also choose not to re-encrypt traffic when sending it to Exchange. This latter process is called SSL Offloading and required some additional configuration in Exchange. Not re-encrypting traffic (SSL Offloading) saves some resources on your load balancing device, but know that – for now – it is not supported for Exchange 2013; you are required to re-encrypt traffic when using Layer 7 load balancing! So make sure that you size your hardware load balancers accordingly. How to size your load balancer depends on the make and model; contact the manufacturer for more information and guidance.

Load Balancing Options in Exchange 2013

When looking purely at Exchange 2013, you have several options with regards to load balancing. Each of the following options has its advantages and disadvantages. Let’s take a closer look at each of them:

  • Layer 4 (single namespace)
  • Layer 7
  • Layer 4 (separate namespaces)

Layer 4 (single namespace)

This is the most basic (and restricted) but also the easiest way of setting up load balancing for Exchange 2013. A single namespace will be used to load balance all traffic for the different Exchange workloads (OWA, EAC, ActiveSync, EWS, …).


Because you are using a single namespace with Layer 4, you cannot do workload-specific load balancing and/or health checking. As a result, a failure of a single workload can possibly impact the end user experience. To illustrate, let’s take the following example:

User A connects to and opens up OWA. At the same time, his mobile device syncs (using ActiveSync) over the same URL, while his Outlook is connected (Outlook Anywhere) over – you’ve guessed it – the same external URL. In this example, the virtual service is configured to check the OWA Virtual Directory (HTTP 200 OK) to determine whether a server is available or not.

Let’s see what happens if OWA fails on a server. The health check that the load balancer executes will fail, so it will assume that the server isn’t functioning properly anymore and take the entire server out of service. It does this by preventing traffic from being forwarded to that server. Even though other workloads like Outlook Anywhere and ActiveSync might still work perfectly, the server will still be considered ‘down’. From an efficiency point of view, that is not the best thing that could happen:


Now, imagine that OWA works fine, but the RPC/HTTP component is experiencing some difficulties… At this point, OWA would still return a positive health check status and the load balancer would assume the server is still healthy. As a result, traffic is still being sent to the server, including traffic for the failed RPC/HTTP component. This will definitely impact a user’s experience.


As you can see, it’s perhaps not the best possible experience for the end user (in all cases), but it definitely is the easiest and most cost-effective way to set up load balancing for Exchange 2013. Because you don’t need advanced functionality from the load balancer, you will probably be able to fit more traffic through the load balancer before hitting its limits. Alternatively, if you don’t have a load balancer yet, it could allow you to buy a smaller-scaled model to achieve the “same” results as with L7 load balancing.

Let’s take a look at how to set up a KEMP LoadMaster for this scenario:

Note   To keep things brief and to the point, I’m not going to discuss how to set up a KEMP LoadMaster in your environment and what options you have there (e.g. single-arm setup, …). We’ll only discuss how to set up the virtual services and other related configuration settings.

First, navigate to Virtual Services and click Add New:


Enter the Virtual IP Address (VIP), port, and optionally a descriptive name for the service. Then click Add this Virtual Service to create it:


Configure the Basic Properties as follows:


Because we’re not going to use Layer 7, make sure to disable Force L7 in the Standard Options. Because Exchange doesn’t need affinity (persistence), you do not have to configure anything specific for that either. You’re free to choose the load balancing mechanism (a.k.a. scheduling), but Round Robin should be OK:


I can imagine that you’re now like: “Wait a minute, does that mean that every new connection could possibly end up on another server in the array?” The answer is: yes. But that actually doesn’t matter. Every connection that is made is automatically re-authenticated. As a result, the fact that every new connection is forwarded to another server (even mid-session) doesn’t impact the end user’s experience. If you’d like to know a bit more about this, I suggest you listen to episode 12 of “The UC Architects”. In this episode, Greg Taylor (Microsoft) explains how that actually works.

Anyway, we got sidetracked. Back to the LoadMaster! Again, because we’re doing the simplest of the simplest (L4) here, we also don’t need any form of SSL decryption, meaning that you can leave these options at their defaults:


If you wish, you can specify a “Sorry Server” in the advanced properties. This is a server to which traffic is redirected if all servers within the array become unavailable:


Note   If you click Add HTTP Redirector, the KEMP LoadMaster will automatically create a Virtual Service for the same VIP on TCP port 80 that redirects traffic to the Virtual Service on TCP port 443. This way you can enforce SSL without necessarily having to do that elsewhere.

All that’s left now is to configure the servers to direct traffic to. Click Add New under Real Servers to add an Exchange 2013 server. Once you’ve filled in the details, click Add this Real Server. Repeat this step for all Exchange 2013 Client Access Servers you want to add.


We still have to configure how the LoadMaster will check the health of the servers in the array. There are different ways to do this. In the example below, the LoadMaster will query the OWA Virtual Directory and, if it gets a response, the server will be deemed healthy. Alternatively, you could also opt not to specify a URL. Do not forget to click Set Check Port after entering the port number.


Believe it or not, but that’s it. Your LoadMaster is now set up to handle incoming traffic and distribute the load evenly over the configured servers.



Simple Layer 4 load balancing clearly has its benefits in terms of simplicity and cost (you usually need a less powerful hardware load balancer to achieve the same result as with L7). On the other hand, the limited health-checking options could potentially take a server offline because of problems with a single workload, while other workloads might still work fine on that server.

In the second part of this article, we’ll dive deeper into the option of using Layer 4 load balancing with multiple namespaces and how that compares to using Layer 7 load balancing, so stay tuned!




Are we going to see location-based services in Exchange 2013 soon…?

I was reading through some Exchange Server 2013 documentation, when I came across the following information on the schema changes that are introduced:

It seems that new attributes now allow you to add exact location information (coordinates) for mail recipients:

As the documentation points out, these attributes could be used to determine the location of an office or meeting room, but I can think of some ways it could be used to locate e.g. other resources or people. In a connected world where people do nothing but share their whereabouts using Facebook, Twitter and Foursquare, I can see why introducing such functionality into Exchange could make sense. The new “Bing Map” app in Outlook, which will automatically map an address from an email onto Bing Maps, can already be seen as a step in that direction.

Interestingly enough, these attributes aren’t required to store a location or offer location-based services. An application could easily store this information elsewhere. However, while this information – at first – might seem irrelevant, to me it proves that Exchange is a product that is constantly evolving and trying to stay ahead of the curve. I’m looking forward to seeing whether we’ll see some use of these attributes in future updates of Exchange or Outlook…

Please note that these thoughts are purely my own and are by no means related to the Exchange product team. To my knowledge, they haven’t made any announcement that location-based services will be used in future updates or releases of Exchange nor have they confirmed these attributes could be used to do so.

For more information on other schema changes, have a look at the following website:


Using New-MigrationBatch to perform local mailbox moves in Exchange Server 2013

Along with a whole bunch of other improvements and new features, New-MigrationBatch is one of my favorite new additions to the Management Shell.

Previously, if you were moving mailboxes between mailbox servers in e.g. Exchange 2010, you had to use New-MoveRequest to start/request a mailbox move from one database/server to another. If you had multiple mailboxes you wanted to move at once, you had several options:

    • Use Get-Mailbox and pipe the results along to New-MoveRequest
    • Import a CSV file and pipe the results along to New-MoveRequest
    • Create an individual move request for each user
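
To make the first two options a bit more concrete, here’s a minimal sketch of what they could look like; the database and file names are purely hypothetical:

[sourcecode language="powershell"]
# Option 1: move all mailboxes from one database to another
Get-Mailbox -Database "DB01" | New-MoveRequest -TargetDatabase "DB02"

# Option 2: import a CSV file with an EmailAddress column
# and create a move request per entry
Import-Csv "C:\Files\Batch1.csv" | ForEach-Object {
    New-MoveRequest -Identity $_.EmailAddress -TargetDatabase "DB02"
}
[/sourcecode]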

While these options are still valid in Exchange Server 2013, you now also have the ability to create a migration batch using the New-MigrationBatch cmdlet.

This cmdlet allows you to submit new move requests for a batch of users between two Exchange servers, local or remote (other forest), and on-premises or in the cloud (onboarding/offboarding). If you have been performing migrations to Office 365, this cmdlet shouldn’t be new to you, as it was already available there.

The nice thing about migration batches is that you can easily create them without having to start or complete them immediately. Although with a little effort you could’ve also done this using New-MoveRequest, it’s not only greatly simplified, the use of migration batches also gives you additional benefits like:

  • Automatic Reporting/Notifications
  • Endpoint validation
  • Incremental Syncs
  • Pre-staging of data

Just as in Exchange Server 2010, the switchover from one database to the other is performed during the “completion phase” of the move. During this process, the remaining items that haven’t yet been copied from the source mailbox to the target mailbox are copied over, after which the user is redirected to his “new” mailbox on the target database. (For the purpose of staying on track with this article, I’ve oversimplified the explanation of what happens during the “completion” phase.)

Creating a migration batch

To create a migration batch, you’ll need to have a CSV-file that contains the email addresses of the mailboxes you are going to move. These can be any of the email addresses that are assigned to the mailbox. There is – to my knowledge – no requirement to use the user’s primary email address.

Also, the file should have a ‘heading’ called “EmailAddress”:
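
For example, a batch file for two (hypothetical) users could look like this:

[sourcecode language="text"]
EmailAddress
user1@contoso.com
user2@contoso.com
[/sourcecode]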


Next, open the Exchange Management Shell and run the following cmdlet:
[sourcecode language="powershell"]New-MigrationBatch -Name <name> -CSVData ([System.IO.File]::ReadAllBytes("<full path to file>")) -Local -TargetDatabase <dbname>[/sourcecode]
Running this cmdlet will create a local mailbox move between two Exchange servers in the same forest. However, it will not automatically start moving the mailboxes, as we haven’t used the –AutoStart parameter. Furthermore, the moves won’t be completed automatically either, because the –AutoComplete parameter wasn’t used.

Note   It’s important that you specify the full path to where the CSV file is stored (e.g. C:\Files\Batch1.csv). Otherwise the cmdlet will fail, because it will search for the file in the system32 folder by default.

Once the batch is created (and you didn’t use the –AutoStart parameter), you can launch the moves by running the following cmdlet:
[sourcecode language="powershell"]Get-MigrationBatch | Start-MigrationBatch[/sourcecode]
Please note that if you have multiple migration batches, this cmdlet will start all of them.
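
If you’d rather start a single batch, you can target it by name instead; assuming a hypothetical batch called “Batch1”, that would look like:

[sourcecode language="powershell"]Start-MigrationBatch -Identity "Batch1"[/sourcecode]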

Polling for the status of a migration

You can query the current status of a migration on a per-mailbox basis using the Get-MigrationUserStatistics cmdlet. The cmdlet returns the current status of the mailbox being moved and the number of items that have been synced/skipped so far.
[sourcecode language=”powershell”]Get-MigrationUser | Get-MigrationUserStatistics[/sourcecode]

Note   Alternatively, you can also use the –NotificationEmails parameter when creating the migration batch. This parameter allows you to specify an admin’s email address to which a status report is automatically sent. If you don’t use this parameter, no report is created/sent.
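
For instance, creating the batch with notifications enabled could look something like this (all names and addresses are hypothetical):

[sourcecode language="powershell"]New-MigrationBatch -Name "Batch1" -CSVData ([System.IO.File]::ReadAllBytes("C:\Files\Batch1.csv")) -Local -TargetDatabase "DB02" -NotificationEmails "admin@contoso.com"[/sourcecode]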

Completing the migration

If you didn’t specify the –AutoComplete parameter while creating the migration batch, you will have to manually start the “completion phase”. This can easily be done using the Complete-MigrationBatch cmdlet.
[sourcecode language=”powershell”]Get-MigrationBatch | Complete-MigrationBatch[/sourcecode]
When you take a look at the migration statistics, you’ll see that the status will be “Completing”:


Once the mailbox moves have been completed successfully, the status will change to “Completed”.


As you can see, New-MigrationBatch will certainly prove useful (e.g. if you want to pre-stage data without performing the actual switchover). Of course, there are other use cases as well: it’s the perfect companion for cross-forest moves and moves to/from Office 365, as it contains numerous parameters that can be used to make your life easier. For instance, the Test-MigrationServerAvailability cmdlet can be used to verify whether the remote host (to/from which you are migrating) is available and working correctly. This is especially useful in remote mailbox moves (cross-forest) or between on-premises and the cloud.

If you want to find out more about the cmdlet, go and have a look at the following page:

Alternatively, you could also run Get-Help New-MigrationBatch –Online from the Exchange Management Shell, which will take you to the same page!

Until later!



The “High-Availability” concept…-part 2

Welcome to this second article on my view on high availability. In the first part, we took a look at what high availability is and what the potential impact of requiring a higher availability rate might be.

Today, we’re going to focus on the question: “what should be measured?” and how you take the answer to that question and build your solution.


I ended my previous article by explaining that you’d want to measure a functionality rather than the building blocks of your application architecture. Regular monitoring won’t get you far though. Instead, so-called synthetic transactions (tests which mimic a user’s actions, like logging into OWA) allow you to test a functionality end-to-end. This allows you to determine whether or not the functionality being tested is working as expected. Underneath, there is still a collection of building blocks which may or may not be aware of each other’s presence. This is something you – as an architect/designer/whatever – should definitely be cautious about.

All too often I come across designs that do not take inter-component dependencies into account: from an application point of view everything might seem operational, but that does not necessarily reflect the situation as it is for an end user…

For example, your Exchange services might be up and running and everything might seem OK from the backend; if the connection between the server- and client network is broken or otherwise not functioning, your end-users won’t be able to use their email (or perhaps other services that depend on email) anyway.

This means that – when designing your solution – you should try (mentally) reconstructing the interaction between the end user and the application (in this case: Exchange). While doing so, you write down all components that come into play. Soon you’ll end up with a list that also involves elements that – at first – aren’t really part of your application’s core-architecture:

  • Networking (switches, routers, WAN-links, …)
  • Storage (e.g. very important in scenarios where virtualization of Exchange is used)
  • Physical hosts (hypervisors, clustering, …)
  • Human interaction

You’ll find out that the above elements play an important part in your application architecture, which automatically makes them sort of part of the core architecture after all…

Negotiate your SLAs

Whether or not these ‘external’ elements are your responsibility probably depends on how your IT department is organized. I can imagine that if you have no influence at all on e.g. the networking components, you don’t want to be responsible if something ever goes wrong at that layer. While it is still important to know which components might influence the health of your application, it wouldn’t be a bad idea to leave these components out of the SLAs. In other words: if the outage of your application was due to the network layer, it wouldn’t count towards your SLA.

In my opinion, that defeats the entire purpose of defining SLAs and trying to make IT work as a service for the business. After all: they don’t care what caused an outage; they only care about how long it takes you (or someone else) to get the service/functionality back up and running.

Now that I brought that up, imagine the following example: one of the business requirements states that mail to recipients outside your company should always be delivered within x period of time (yes, I deliberately left the timeframe out because it’s irrelevant to the point I’m trying to make). When doing a component breakdown, you could come up with something similar to this (high-level):

  • Client network
  • Mailbox Server(s)
  • Hub Transport Server(s)
  • Server network
  • WAN-links (internet)

While the first four components might lie within your reach to manage and remediate in case of an outage, the fifth (WAN link) usually doesn’t. So if it takes your ISP 8 hours to solve an issue (because they can, according to their SLA, for instance), you might think twice before accepting a 99.9% uptime in your SLA… However, if that isn’t an option, you could try finding an ISP who can solve your issues quicker, or you could install a backup internet connection. Bottom line: you also need to take external factors into account when designing your solution.

    In some cases, I’ve seen that WAN links (or outages caused by them) were marked as an exception to the SLA, just because the probability of an outage was very low (and the cost of an additional backup link was too high).

Probability vs. impact

When you are designing your solution, you don’t always have to take into account every little bit that could go wrong, simply because you cannot account for everything that can go wrong (Murphy?). While your design definitely should take into account whatever it can, it should also pay attention to the cost-effectiveness of the solution. Remember the graph in the first part, which showed that costs tend to grow exponentially when trying to achieve a higher availability rate?

This means that sometimes, because the cost to mitigate a single point of failure or risk cannot be justified, you’ll have to settle for less. In such a case, you’d want to assess the probability of a potential error/fault and how it might affect your application. If both the probability that it occurs and the impact on your application are low, it’s sometimes far more interesting to accept a risk than to try to mitigate/solve it. On the other hand, if there’s an error which is very likely to occur and might knock down your environment for a few hours, you might reconsider and try solving that.

Solving such an issue can be done in various ways (depending on what the error/fault is): either increase the application’s resiliency or solve the issue at the layer where it occurs. For instance: if you’ve got a dodgy (physical) network in one of both sites, you might rethink your design to make more use of the site that has a proper network, OR you could try solving the issues at the network layer to make it less dodgy (which I would prefer).


Although I’m convinced that what I wrote didn’t surprise you, by now you should realize that creating a highly available (Exchange) solution takes proper planning. There are far more elements that come into play than one might think at first. Also keep in mind that I only touched on these different aspects superficially; when dealing with potential risks like human error, there are other things that come into play, e.g. defining a training plan to lower that risk.

I personally believe that the importance of these elements will only grow in the future. I’m sure you’ve already heard of the phenomenon “IT as a service”? When approaching the aspect of high availability, try thinking of yourself as the electricity supplier and the business as the customer (which they actually are). You don’t care how electricity gets to your home or – if it doesn’t – why not. All you care about is having electricity in the end…

Extra resources

Thanks to one of the comments on my previous article, I found out that co-UCArchitect Michel de Rooij also wrote an article on high availability a while back. You should check it out!


Configuring High Availability for the Client Access Server role in Exchange Server 2013 Preview


All the information in this blog post is subject to change as Exchange Server 2013 is still under construction. Although it’s not very likely that big changes will occur between now and RTM, they might.

Following one of my previous articles in which I described how you could configure a Database Availability Group to achieve high availability for the Mailbox Server Role, we will now take a look at the process of how to configure high availability for the Client Access Server.

CAS Array

To achieve high availability, you create a load-balanced array of Client Access Servers, just like in Exchange Server 2010. Unlike before, Layer 4 load balancing now becomes a viable option; going without a load balancing device at all would only fit the smallest deployments where there’s no budget for load balancers.

Layer 4 load balancing only takes the IP address (and TCP port) into account. And yes, you are no longer required to configure “affinity” – the process whereby a connection, once established, had to be persisted through the same Client Access Server. This is because the CAS in Exchange Server 2013 doesn’t do any data rendering anymore: everything happens on the backend (Mailbox servers).

I hear you thinking: does this mean we could use DNS load balancing (a.k.a. round robin)? The answer is yes and no. Yes, because it will load balance between multiple Client Access Servers; no, because if a server failed, you’d have to remove it (manually) from DNS and wait for the record to time out on all the clients. While this might be a cost-effective way to get load balancing and a very, very basic form of high availability, it is not a really viable solution for most deployments…

Ever since the CAS Array was first introduced, it has been subject to quite some misconceptions. A lot of them were addressed by Brian Day in a very interesting article he wrote. What I find is that people tend to mix up the RPC Client Access Array and the load-balanced array used for HTTP-based traffic. Yes, the use of the term CAS Array can be a little confusing. No, they’re not the same!

Now, since Exchange Server 2013 dropped RPC-over-TCP, I no longer see the purpose of creating the RPC Client Access Array object (New-ClientAccessArray). Instead, it suffices to configure multiple Client Access Servers with the same internal hostname for Outlook Anywhere.

To understand what happens, let’s take a look at the following examples:

In the case where you’re using two Client Access Servers in the same AD site, Exchange will by default “load balance” traffic between the two endpoints. This means that the first request will go to CAS1, the second to CAS2, the third to CAS1, etc. While this does provide some sort of load balancing, it doesn’t really provide high availability. Once Outlook is connected to a CAS, it will keep trying to connect to that same server, even after the server goes down. Eventually it will try connecting to the other CAS, but in the meantime your Outlook client will be disconnected.


If we add a load balancer, we need to configure the internal hostname for Outlook Anywhere to a value shared between the Client Access Servers. This FQDN would then point to the VIP of the load balancer which, in turn, takes care of the rest. Because we’re using a load balancer, it will automatically detect a server failure and redirect the incoming connection to the surviving node. Since no affinity is required, this “failover” happens transparently to the end user:


As explained before, this load balancer could be anything from simple DNS load balancing, to WNLB or a full-blown hardware load balancer that’s got all the kinky stuff! However, in contrast to before (Exchange 2010), most of the advanced options are not necessary anymore…

Configuring Outlook Anywhere

To configure the internal hostname for Outlook Anywhere, run the following command for each Client Access Server involved:

[sourcecode language="powershell"]Get-OutlookAnywhere -Server <server> | Set-OutlookAnywhere -InternalHostname <fqdn>[/sourcecode]
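
To make that concrete, a minimal sketch for two Client Access Servers sharing the name mail.contoso.com – server names and FQDN are hypothetical values, and whether you also need to set the SSL requirement depends on your environment:

[sourcecode language="powershell"]
# Point both CAS servers to the same shared internal hostname
"CAS1","CAS2" | ForEach-Object {
    Get-OutlookAnywhere -Server $_ |
        Set-OutlookAnywhere -InternalHostname "mail.contoso.com" -InternalClientsRequireSsl $true
}
[/sourcecode]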

Configuring the Load Balancer

As I explained earlier, Layer 4 is now a viable option. Although this could mean you’d just be using DNS load balancing, you would want to use some load balancing device (physical or virtual).

The benefit of using a load balancer over e.g. WNLB is that these devices usually give you more options for health checking the servers/services that you’re load balancing. This gives you better control over the load balancing process. For example: you could check for a particular HTTP response code to determine whether a server is running or not. It definitely beats using simple ICMP pings…!

The example below is based on the load balancer in my lab: a KEMP Virtual Load Master 1000. As you will see, it's set up in the most basic way:

I've configured no persistence and – because it's a lab – I'm only checking the availability of the OWA virtual directory on the Exchange servers. Alternatively, you could do more complex health checks. If you're looking for more information on how to configure a KEMP load balancer, I'd suggest you take a look at Jaap Wesselius' blogs here and here. Although these articles describe the configuration of a Load Master in combination with Exchange 2010, the process itself (except for the persistence settings and such) is largely the same for Exchange Server 2013. Definitely worth the read!
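To illustrate the health-checking idea described above, here's a rough sketch of what a load balancer effectively does when it probes a Client Access Server over HTTP (the server name in the URL is an example value):

[sourcecode language=”powershell”]# Sketch: what an HTTP health check boils down to (example URL)
try {
    $r = Invoke-WebRequest -Uri "https://cas01.contoso.com/owa/healthcheck.htm" -UseBasicParsing
    if ($r.StatusCode -eq 200) { "CAS is healthy" }
} catch {
    "CAS appears to be down: $($_.Exception.Message)"
}[/sourcecode]

A real load balancer does this for you, of course; the point is simply that a 200 response tells you far more than an ICMP ping ever could.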




Exchange 2013 How-To's

Configuring Database Availability Groups in Exchange Server 2013 Preview


All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it's not very likely that big changes will occur between now and RTM, they might.

The process of creating a Database Availability Group (DAG) in Exchange Server 2013 (Preview) is largely the same as in Exchange Server 2010. You can choose to create a DAG via either the Exchange Administrative Center (GUI) or through the Exchange Management Shell (PowerShell).

I prefer using the Exchange Management Shell over the EAC, as it gives you more insight into the process.

Exchange Management Shell

To configure a Database Availability Group using the EMS, type in the following commands. Replace the example values with the ones that suit your environment:

[sourcecode language=”powershell”]New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer SRV01 -WitnessDirectory "C:\FSW" -DatabaseAvailabilityGroupIPAddresses <ip>[/sourcecode]

This command will create the DAG. As part of this process, a computer object, also known as the Cluster Name Object, will automatically be created:


Note   In order for this process to complete successfully, you need the appropriate permissions on the container in which the object is created. By default this is the "Computers" container; however, it is possible that your Active Directory has been reconfigured to use another container/OU as the default location for new computer accounts.

Another way to get around possible permission issues is to create the Cluster Name Object (CNO) upfront, a process also called "pre-staging". This allows you to create the object with another account (that has the appropriate rights), so you don't run into any issues when configuring your DAG.

To pre-stage the CNO, complete the following tasks:

  1. Open Active Directory Users & Computers, navigate to the OU in which you want to create the object, right-click and select New > Computer:


  2. Enter a Computer Name and click OK to create the account:


  3. Right-click the new account and select Properties. Open the Security tab and add the following permissions:
    – Exchange Trusted Subsystem – Full Control
    – First DAG Node (Computer Account) – Full Control


More information on how to pre-stage the CNO can be found here:
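If you prefer PowerShell over the GUI, the same pre-staging could be sketched with the Active Directory module (an illustration only; "DAG01" and the OU path are example values, and the snippet assumes the RSAT ActiveDirectory module is available). A pre-staged CNO is typically created in a disabled state:

[sourcecode language=”powershell”]# Sketch: pre-stage the CNO using the ActiveDirectory module (example values)
Import-Module ActiveDirectory
New-ADComputer -Name "DAG01" -Path "OU=Exchange,DC=contoso,DC=com" -Enabled $false
# Afterwards, grant "Exchange Trusted Subsystem" Full Control on the object,
# e.g. via the Security tab as shown in the steps above.[/sourcecode]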

Note   If your DAG has multiple nodes in different subnets, you will need to assign the DAG an IP address in each subnet. To do so, separate the IP addresses with commas:

[sourcecode language=”powershell”]New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer SRV01 -WitnessDirectory "C:\FSW" -DatabaseAvailabilityGroupIPAddresses <ip1>,<ip2>[/sourcecode]

Once the command has executed successfully, you can add Mailbox servers to the DAG. You will need to run this command for each server you want to add:

[sourcecode language=”powershell”]Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX02[/sourcecode]

Alternatively, you can also add all/multiple mailbox servers at once to the DAG:

[sourcecode language=”powershell”]Get-MailboxServer | %{Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer $_.Name}[/sourcecode]
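Afterwards, a quick way to verify which servers actually made it into the DAG (a simple check, nothing more):

[sourcecode language=”powershell”]Get-DatabaseAvailabilityGroup -Identity DAG01 | Format-List Name,Servers,WitnessServer[/sourcecode]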

Adding Database Copies

Now that your DAG has been created, you can add copies of mailbox databases to other mailbox servers. For example, to add a copy of DB1 to server EX02, you would run the following command:

[sourcecode language=”powershell”]Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX02[/sourcecode]
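Once the copy has been added and seeding has completed, you can keep an eye on its health; you'd typically want to see a Healthy copy status and a low copy queue length:

[sourcecode language=”powershell”]# Check the health of the database copies (status, queue length, index state)
Get-MailboxDatabaseCopyStatus -Identity DB1 | Format-Table Name,Status,CopyQueueLength,ContentIndexState -AutoSize[/sourcecode]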

Stay tuned! In an upcoming article, I will be talking about configuring high availability for the Client Access infrastructure in Exchange Server 2013, as well as covering high availability more generally.

Exchange 2013 How-To's

A closer look at the “new” Public Folders in Exchange Server 2013


All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it's not very likely that big changes will occur between now and RTM, they might.

In addition to some of the existing blogs and information out there, I wanted to take the time and explain the “new” Public Folders in Exchange Server 2013 a bit more in detail.


Ever since Exchange Server 2007 was released, Microsoft has proclaimed that Public Folders would eventually disappear. At the time, it was said that Public Folders would cease to exist in "future versions", and consultants were advised to evangelize SharePoint as the replacement for Public Folders. These statements, however, provoked quite some push-back from the field: companies worldwide were still using Public Folders and were not planning on giving them up so easily. Knowing what a migration to SharePoint can potentially cost, it's quite understandable that some companies were reluctant to jump on the SharePoint train right away.

In fact, I cannot blame them. Even today, there's no real alternative that offers the same functionality and ease of use as Public Folders.

Over time, Microsoft seems to have recognized that killing Public Folders might not be such a great idea after all, and changed its attitude little by little. Nevertheless, they've always de-emphasized the use of Public Folders.

…At least until NOW.

To better understand what the changes in Exchange Server 2013 are, let's first have a look at how things were in Exchange Server 2010.

Public Folders in Exchange 2010 (and before)

In Exchange Server 2010 (and before), Public Folders were stored in their own database(s).

Basically speaking, Public Folders consist of two elements:

    • The hierarchy, which contains the properties of the Public Folders and the tree structure in which they are organized. This tree structure contains both Public and System Public Folders.
    • The content, which is the actual data (e.g. messages) in a Public Folder. Each Public Folder database contains a copy of the hierarchy, but not every database necessarily contains the same content:

image: a simplified graphical view of a public folder database

It was up to the administrator to define which content would be replicated across one (or more) servers. When a Public Folder was set up to have multiple copies, a process called "Public Folder replication" ensured that data was replicated across these "replicas" (i.e. the other Public Folder databases).

This replication is in no way comparable to how a DAG replicates data, though: SMTP messages are sent between the different Mailbox servers that host a Public Folder database:

image:a simplified overview of the PF replication model

Although having multiple writeable copies of Public Folders might seem highly available from a data point of view, it is not; at least not when it comes to the end-user experience during a failover. Each mailbox database is configured with a default ("preferred") Public Folder database, and mailboxes within that mailbox database automatically connect to it.

    • If – for some reason – the server hosting the Public Folder database became entirely unavailable (e.g. offline), a timeout (+/- 60 seconds) would occur and clients would be redirected to another Public Folder database.
    • If, however, the server was still online but the database was dismounted, there would be no redirection and clients would end up with no Public Folders at all.

As you can imagine, not really a nice user experience, is it?

From a design point of view, Public Folders allowed you to easily spread data across multiple locations and have people make changes to that data in different locations; these changes were then replicated amongst the different replicas. In fact, Public Folders (in Exchange 2010 and before) are implemented in a sort of multi-master model: every replica of a Public Folder is writeable.

Changes in Exchange Server 2013

Exchange Server 2013 entirely changes how Public Folders operate (from a technical point-of-view). From an end user’s perspective, everything remains as it used to be.

The main architectural changes that are introduced are:

    • Public folders are now stored in a mailbox database
    • Public folders now leverage a DAG for high-availability

Public Folder Mailboxes

Public Folders are now also mailboxes, but of the type "Public Folder" (just like a Room Mailbox is a mailbox of the type "Room"). In a way, Public Folders still consist of the two main elements mentioned above: the hierarchy and the contents.

  • The hierarchy is represented by what is called the Master Hierarchy Public Folder Mailbox. This PF Mailbox contains a writeable copy of your public folder hierarchy. There is only a single Master Hierarchy PF mailbox in the organization.
  • Contents (in Public Folders) are now stored in one or more Public Folder mailboxes. These Public Folder mailboxes usually contain one or more Public Folders. Next to the contents, each PF Mailbox also contains a read-only copy of the hierarchy.


Note   Because Public Folders are now stored in mailboxes, mailbox quotas apply to them. This means, for instance, that if a PF mailbox grows too large, you'd have to move Public Folders to another PF mailbox (or increase the quota).

Just like regular mailboxes, PF mailboxes can grow quite large without really suffering from performance issues. Although real-world experience will have to tell us how Public Folders perform at larger scale, I'm confident that most companies won't have a problem fitting their Public Folders into one (or more) PF mailboxes. Nonetheless, you should carefully plan your Public Folder deployment, especially if your company relies heavily on them.
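To make this a bit more tangible, here's a minimal sketch of how these objects could be created in the Preview (the names are example values). The first Public Folder mailbox you create holds the writeable master copy of the hierarchy:

[sourcecode language=”powershell”]# The first PF mailbox created becomes the Master Hierarchy PF mailbox
New-Mailbox -PublicFolder -Name "PF-Hierarchy"
# Additional PF mailboxes hold content (plus a read-only hierarchy copy)
New-Mailbox -PublicFolder -Name "PF-Content01"
# Create an actual Public Folder inside a specific PF mailbox
New-PublicFolder -Name "Sales" -Mailbox "PF-Content01"[/sourcecode]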

How do these “new” Public Folders work?

The process of connecting to and performing actions on a Public Folder could be summarized as follows:

    1. A user connects to a Public Folder (mailbox).
    2. Any operation on the Public Folder is recorded in that PF mailbox.
    3. If the data is located in another Public Folder mailbox, the operation is redirected to the appropriate PF mailbox (the one containing the Public Folder against which the action was performed).
    4. Hierarchy changes (e.g. adding or removing folders) are redirected to the Master Hierarchy PF mailbox, from where they are in turn replicated to all other PF mailboxes.

Note   Hierarchy changes are never recorded through a regular PF mailbox.


High Availability

Because Public Folders now reside in regular mailbox databases, they can benefit from the High Availability model that a Database Availability Group offers. Since you can store Public Folder mailboxes in any database, there’s no difference as to how they are treated in case of a failover.

The implications of changes

Because of these changes, the way we implement Public Folders will also change. Foremost, Public Folder placement will require a bit more planning from now on. There can only be a single active copy of a mailbox database at any given time, so we can no longer "benefit" from the multi-master model that existed previously.

This means that you should – preferably – place your Public Folder mailboxes as close to your users as possible. If, however, two different groups need to work on the same set of folders from opposite sides of the globe, you might be in for a challenge… Of course, one might start to wonder whether Public Folders would be the right choice in such a scenario anyway.

It’s not only good news

Unfortunately, every upside has a downside: due to all the changes that were introduced, it seems Microsoft wasn't able to get everything finished in time. At RTM, we won't have Public Folder access in OWA. Perhaps this is something we'll see added in SP1 (just like with Exchange 2010)…

Migration and coexistence

I will keep the migration and coexistence part for a future article; at the moment, Exchange Server 2013 Preview only supports greenfield installs. In the meantime, if you're interested in reading more about migration, you might want to take a look at fellow UC Architect Mahmoud Magdy's blog about the new Public Folders.

Blog Exchange 2013

Management options in Exchange Server 2013 (Preview)


All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it's not very likely that big changes will occur between now and RTM, they might.

Chances are that, by now, you’ve already got Exchange Server 2013 Preview installed in your lab. High time to take a closer look at some of the things that have changed. In this article we’ll zoom in on some of the changes with regards to management of your Exchange Server 2013 (Preview).

Exchange Administrative Center (EAC)

One of the first things that will catch your eye is undeniably that the Exchange Management Console is gone. Indeed, the EMC has been replaced by a web-based GUI called the Exchange Administrative Center (EAC). In fact, the EAC is more or less the successor of the Exchange Control Panel from Exchange 2010.

Right after you've installed your server, no URLs have been configured yet. The EAC will then typically be available at the following URL:


Note   Because you haven’t installed any certificates yet, a certificate warning will be thrown when navigating to the page. At this point, it’s still okay and you can safely ignore the warning.

Next, you’ll hit the login page. Did you notice how “clean” the metro-style interface looks?


After logging in, you will be taken to the EAC:


From there, you will be able to execute quite a lot of tasks. The EAC has been built to replace the EMC without restricting the possibilities you, as an admin, have. It allows you to perform most of the tasks you could do with the EMC and has been equipped with some new capabilities. Amongst these common tasks are, for example, the management of:

  • Recipients
  • Connectors
  • Certificates
  • High Availability (DAGs)
  • Role-Based Access Control (RBAC)
  • Compliance features (In-Place Hold, DLP, Retention Policies, …)

As you can see, most administrators who are used to working with the EMC won't have too much trouble getting along with the EAC – as long as you don't forget to turn off that pop-up blocker! Navigating the EAC is easy and pretty intuitive. Next to trusted items like "Recipients", "Organization" and "Servers", you now have a single point of management for e.g. ActiveSync policies, RBAC management, etc. In Exchange Server 2010, not all of these items were available from the EMC; some required you to log in to the Exchange Control Panel.

I encourage you to take a look around and play a bit with the EAC. In the upcoming weeks, I’ll be posting some how-to’s regarding some of the management tasks and I will be regularly referring to the EAC.


Exchange Management Shell

PowerShell still rocks the Exchange world. The Exchange Management Shell is your one-stop management interface from which you can do virtually anything with your Exchange environment. While junior admins might fall back on the comfort the EAC offers, more seasoned administrators will certainly love PowerShell: Exchange Server 2013 uses the Management Framework 3.0 and therefore relies on the power of PowerShell v3.

If you haven’t checked out what PowerShell v3 can do for you, I suggest that you have a look at the following page. It contains a list of articles that might prove useful!

To take a look at just one of the many improvements to be found in PowerShell v3: I'm pretty sure the one-liner fans amongst us will love the simplified syntax and the possibilities it offers.

For example, you could do something like the following:

[sourcecode language=”powershell”]Get-Mailbox | where Name -like "*partial*"[/sourcecode]

Prior to v3 (e.g. in Exchange Server 2010), the same query would have looked like this:

[sourcecode language=”powershell”]Get-Mailbox | Where {$_.Name -like "*partial*"}[/sourcecode]

Pretty cool, isn't it? Although I still find that the latter reads more clearly in bigger scripts, the first form comes in pretty handy for quick searches or rough filtering.

Note   These improvements are a general feature of PowerShell v3 and can be used anywhere PowerShell v3 is available.
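For example, the same simplified syntax works against the output of any cmdlet, Exchange-related or not:

[sourcecode language=”powershell”]# PowerShell v3 simplified syntax on a non-Exchange cmdlet
Get-Process | where WorkingSet -gt 100MB
# ...which is equivalent to the v2 style:
Get-Process | Where-Object {$_.WorkingSet -gt 100MB}[/sourcecode]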

New Exchange 2013 Cmdlets

Exchange 2013 (Preview) brings along a bunch of new PowerShell commands. Although most of them are related to new features such as Site Mailboxes, there are also some new test commands, and some existing cmdlet 'sets' have been extended. For example, one of the commands added to the Mail Flow cmdlets is Redirect-Message:


Note   The help files are already available online! A quick search revealed the true purpose of this command:

[sourcecode language=”powershell”]Get-Help Redirect-Message -Online[/sourcecode]
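If you'd like to explore the new cmdlets on your own server, one way to get an idea of what's available is to ask PowerShell itself (a quick sketch; in a remote Exchange session the cmdlets surface as functions, hence both command types):

[sourcecode language=”powershell”]# List the available *-Message commands (Redirect-Message among them)
Get-Command -CommandType Cmdlet,Function -Name "*-Message" | Sort-Object Name[/sourcecode]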

For your reference, I’ve added a list of new cmdlets that can be found in Exchange 2013 (Preview):



As you can see, Exchange 2013 (Preview) commits to greater flexibility for the administrator. I'm pretty excited to see the first scripts pop up in the next few weeks, and I'm sure there will be plenty to talk about in the future.

It's also interesting to see how Exchange Server 2013 leans more toward the cloud than ever before. Given that nowadays it's all about BYOD, being mobile and being able to work where and when you want, I believe these improvements are for the better. The fact that you will be able to manage both Exchange on-premises and in the cloud from within a single web interface, from virtually anywhere and on almost every device, is a HUGE leap forward.

Make sure to check back regularly as I will be posting more updates on Exchange Server 2013 in the upcoming days and weeks!

Blog Exchange 2013