Review: Exchange 2013 Inside-Out: “Mailbox & High Availability” and “Connectivity, Clients & UM”

Although there’s a saying that you shouldn’t judge a book by its cover, for any book that features Tony Redmond and Paul Robichaux as the authors it’s safe to assume it will be a great one! While both books have plenty of interesting technical content, to me that’s not the only thing that defines a good book. Tony and Paul are very eloquent writers and I found reading both books extremely pleasant purely from a language perspective. To be honest, as an amateur writer myself, I can only dream of ever being able to put the English language to work as they do.

Having read the Exchange 2010 Inside-Out book before, I expected these 2013 books to contain at least the same amount and type of content. The 2010 version is a HUGE book (+/- 1200 pages!), so I was pleasantly surprised to see that the content has now been split into two separate books. This definitely helps make the amount of information more manageable. Don’t get me wrong: there’s still a lot to digest, but having two (slightly) smaller books makes taking them on easier, at least mentally! This is also something I have to ‘warn’ you about: the amount of information and the technical breadth and depth of the content might sometimes feel a little overwhelming, especially if you aren’t very familiar with Exchange. For some, the technical depth might even be outright too deep. That’s also why I advise you not to try to read either of these books in one go. Instead, take your time to read through each of the chapters and allow some time to let the information sink in. Combine that with some fiddling around in your own lab and you’ll have a great learning experience.

What I like about these books is that you’re given all the information you need to understand how Exchange operates and are then expected to put that knowledge to work yourself. Although there are plenty of examples in the books, if you are looking for pre-canned scripts, how-tos or step-by-step guides, there might be better alternatives for you (e.g. the Exchange 2013 PowerShell Cookbook). But then again, I don’t think that’s what Paul and Tony were trying to achieve anyway.

Conclusion

Paul and Tony have managed to combine a fun-to-read style with great (technical) content, making the Exchange 2013 Inside-Out books a must-read. Whether you’re a consultant working day in and day out with Exchange or an admin in charge of an Exchange organization, I’m sure you’ll find lots of valuable information in each of the books that will help you in your day-to-day job.


Load Balancing Exchange 2013 – part 2

Introduction

In the first part of this article, we talked about Load Balancing in general and took a closer look at what the advantages and disadvantages of simple layer 4 load balancing for Exchange 2013 were. Today we’ll dive a bit deeper into the remaining two ways of load balancing Exchange 2013: layer 4 with multiple namespaces and ‘traditional’ layer 7.

Layer 7 Load Balancing

Layer 7 load balancing offers some additional functionality over Layer 4. Because traffic is decrypted, the load balancer can ‘read’ (understand) the traffic passing through it and take appropriate action based on the type (or destination) of that traffic.

By decrypting traffic, the load balancer can read the destination of a packet, which allows you to distinguish between traffic for the different Exchange workloads while still using a single virtual service. Based on the type of workload, traffic could e.g. be sent to a different set of servers. However, that was not the most important reason to do Layer 7 load balancing. In Exchange 2010, traffic coming from a given client had to be persisted to the same endpoint (= the same Client Access Server). This meant that the initial connection could be sent to just about any CAS, but once the session was established, subsequent packets for that session had to be handled by that same CAS.

A load balancer typically has multiple ways to maintain this client <> server relationship. Depending on the make and model of your load balancer, you might see the vendor refer to this relationship as “persistence”, “stickiness”, etc. The most commonly used methods are:

  • Source IP
  • Session ID
  • Session Cookie

For a load balancer to be able to identify these things, it needs to be able to read the traffic, which means the traffic has to be decrypted; only Source IP affinity works without decryption. In some scenarios, however, Source IP affinity can cause an uneven distribution of load, especially when traffic comes from behind a NAT device. Consider the following scenario:

Multiple internet-connected devices connect to your on-premises environment and, before hitting the load balancers, they pass through a router/firewall or other NAT device which ‘changes’ (masks) the source IP addresses (80.70.60.50 and 80.70.60.40) with a single internal address (10.20.30.40). If your load balancers are configured to persist connections based on the source IP address, all connections will potentially end up at the same CAS. This is – of course – something you’d want to avoid. Otherwise, what purpose would your load balancer serve?

image

For this “problem” there are a few solutions:

  • You could disable NAT, which would reveal the client’s original IP address to the load balancer. Unfortunately, this isn’t always possible and depends on your network topology/routing infrastructure.
  • You could change the configuration of your load balancer to use something other than the Source IP to determine whether a connection should be persisted or not.

In the latter case, persistence based on the SSL Session ID is a good alternative. The load balancer examines every “packet” that flows through it, reads the session ID and, if it finds a match for a previously created session, sends that packet to the same destination as before. While this works brilliantly, it induces a higher load on your load balancer because:

  1. the load balancer needs to inspect each packet flowing through, consuming more CPU cycles
  2. the load balancer needs to maintain a bigger “routing table” (a table in which each Session ID is mapped to a destination server), which consumes more memory.
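
To picture why that second point costs memory, here’s a purely illustrative PowerShell sketch of what such a persistence map boils down to (a real load balancer keeps something comparable, in optimized native code, for every active session):

# Purely illustrative: a persistence table mapping SSL session IDs to backend servers
$persistence = @{}
$persistence['a1b2c3'] = 'CAS1'    # first session is tied to CAS1
$persistence['d4e5f6'] = 'CAS2'    # next session is tied to CAS2
# Every later packet carrying session ID 'a1b2c3' is forwarded to the stored backend:
$persistence['a1b2c3']             # returns CAS1

Multiply that by tens of thousands of concurrent sessions and the extra memory (and the CPU spent on the lookups) adds up quickly.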

As mentioned earlier, because you are decrypting the traffic, you can e.g. determine from a packet what its destination URL is. In essence, this allows you to define multiple virtual services (one for each workload) and have the load balancer choose which virtual service to forward a packet to. In this specific example, the virtual services are “hidden” from the end user.

Let’s pour that into an image; things might become clearer that way:

image

For external clients there is still a single external URL (VIP) they connect to, but ‘internally’ there is a separate virtual service for each workload. Whenever a packet reaches the load balancer, it is read and, based on the destination URL, the appropriate virtual service is picked. The biggest advantage is that each virtual service can have its own set of health criteria. This also means that – because the workloads are split – if e.g. OWA fails on one server, it won’t affect the other workloads on that server (as they belong to a different virtual service). While OWA might be down, the other protocols remain healthy and the load balancer will continue forwarding packets to that server for those workloads.

With this in mind, we can safely conclude that Layer 7 load balancing clearly offers some benefits over simple Layer 4. However, it will cost you more in terms of hardware capacity for your load balancer. Given that a decently sized load balancer can cost a small fortune, it’s always nice to explore what other alternatives you have. On top of that, this kind of configuration isn’t exactly “easy” and requires quite a bit of work from the load balancer’s perspective. I’ll keep the configuration steps for a future article.

Layer 4 load balancing with multiple namespaces

As I showed in the first part of this article, Exchange 2013 greatly simplifies load balancing compared to Exchange 2010. Unfortunately, this simplification comes at a cost: you lose the ability to do per-protocol health checks when using Layer 4. And let’s face it: losing functionality isn’t something you like, right?

Luckily, there is a way to have the best of both worlds though…

Combining the simplicity of Layer 4 with a way to mimic the Layer 7 functionality is what the fuss is all about. Because with Layer 4 your load balancer has no clue what the endpoint for a given connection is, we need to find a way to let the load balancer know what that endpoint is without actually having to decrypt the traffic.

The answer is in fact as simple as the idea itself: use a different virtual service for each workload but this time with a different IP address for each URL. The result would be something like this:

image

Each workload now has its own virtual service, and therefore you also get per-workload (per-protocol) availability. This means that, just as with Layer 7, the failure of a single workload on a server has no immediate impact on the other workloads, while at the same time you maintain the same level of simplicity as with simple Layer 4. Sounds cool, right?!

Obviously, there is a “huge” downside to this story: you potentially end up with a bunch of different external IP addresses. Although there are solutions for that too, they’re beyond the scope of this article.

Most people – when I tell them about this approach – don’t like the idea of exposing multiple IPs/URLs to end users. “For my users it’s already hard enough to remember a single URL”, they say. Who am I to argue? Personally, though, I don’t see this as an issue: when you think about it, there’s only one URL an end user is actually exposed to, and that’s the one for OWA. All other URLs are configured automatically through Autodiscover. So even with multiple URLs/IPs, your users really only need to remember one.

Of course, there’s also the certificate to consider, but depending on the load balancer you buy, that could still be cheaper than the L7 load balancer from the previous example.

Configuring Layer 4 load balancing with different namespaces is done the same way as configuring a single namespace; you just have to do it multiple times. The difference lies in the health checks you perform for each protocol, and those are yours to choose. For now, the health-check options are relatively limited (although that also depends on the load balancer you use). But the future might hold some interesting changes: at MEC, Greg Taylor explained that Microsoft is working with some load balancer vendors to make their load balancers capable of “reading” the health status of an Exchange server as produced by Managed Availability. This would mean that your load balancer no longer has to perform any specific health checks itself and can rely on the built-in mechanisms in Exchange. Unfortunately there isn’t much more information available right now, but rest assured I’m following this closely.
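
To give you an idea of what the Exchange side of the multiple-namespace approach looks like: each workload simply gets its own FQDN on its virtual directory. A minimal sketch for a single server (the host names are made-up examples, and you’d repeat or pipe this for every Client Access Server):

# Hypothetical per-workload namespaces on CAS1 (example FQDNs)
Set-OwaVirtualDirectory -Identity "CAS1\owa (Default Web Site)" -ExternalUrl "https://owa.exblog.be/owa"
Set-EcpVirtualDirectory -Identity "CAS1\ecp (Default Web Site)" -ExternalUrl "https://owa.exblog.be/ecp"
Set-ActiveSyncVirtualDirectory -Identity "CAS1\Microsoft-Server-ActiveSync (Default Web Site)" -ExternalUrl "https://eas.exblog.be/Microsoft-Server-ActiveSync"
Set-WebServicesVirtualDirectory -Identity "CAS1\EWS (Default Web Site)" -ExternalUrl "https://ews.exblog.be/EWS/Exchange.asmx"
Set-OabVirtualDirectory -Identity "CAS1\OAB (Default Web Site)" -ExternalUrl "https://oab.exblog.be/OAB"
Set-OutlookAnywhere -Identity "CAS1\Rpc (Default Web Site)" -ExternalHostname "oa.exblog.be" -ExternalClientsRequireSsl $true -ExternalClientAuthenticationMethod Basic

On the load balancer you’d then create one virtual service per FQDN, each with its own (workload-specific) health check.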

Differentiator: health checks

As said before, the configuration is identical to Layer 4 without multiple namespaces. For more information on how to configure a virtual service with a KEMP load balancer, please take a look at the first part of this article.

The key difference lies in the health checks you perform against each workload. Even then, there is a (huge) difference in what load balancers can do.

Even though I like KEMP load balancers a lot, they are – compared to e.g. F5 – limited in their health checks. From a KEMP load balancer’s perspective, your health check rule for e.g. Outlook Anywhere would look something like this:

image

From an F5 perspective, a BIG-IP LTM allows you to “dive deeper” into the health checks. You can define a user account that is used to authenticate against rpcproxy.dll, and only if that fails will the service be marked down, rather than relying on a “simple” HTTP GET.

On a side note: for about 90% of the deployments out there, the simple health check method proves more than enough…
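
To put that “simple” method in perspective: it usually boils down to an HTTP GET and a check of the returned status code. A rough PowerShell equivalent (the URL is just an example; newer Exchange 2013 builds expose a per-protocol healthcheck.htm page intended for exactly this kind of probing):

# Rough equivalent of a simple per-workload health check (example URL, PowerShell 3.0+)
try {
    $response = Invoke-WebRequest -Uri "https://mail.exblog.be/owa/healthcheck.htm" -UseBasicParsing -TimeoutSec 5
    if ($response.StatusCode -eq 200) { "OWA workload looks healthy" }
}
catch {
    "OWA workload would be marked down: $($_.Exception.Message)"
}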

Conclusion

Below you’ll find an overview summarizing the most important pros and cons per option.

Layer 4 (Simple)

  • Pros: easy to set up; fewer resources required on the load balancer
  • Cons: no per-protocol availability

Layer 7

  • Pros: per-protocol availability; single external IP/URL
  • Cons: more difficult to set up; more resources required on the load balancer

Layer 4 (Multiple Namespaces)

  • Pros: easy to set up; fewer resources required on the load balancer; per-protocol availability
  • Cons: multiple external IPs/URLs

If it were up to me, I’d go for Layer 4 with multiple namespaces. It’s clean, it’s simple to set up (both from a load balancer and an Exchange point of view) and it will most probably save me some money. Unless you only have a limited number of (external) IP addresses available, this seems like the way forward to me.

Nonetheless, this doesn’t mean you should kick out your fancy load balancer if you already have one. My opinion: use what you’ve got and, above all, keep it simple!


Error: You cannot synchronize the ADFS configuration database after adding a secondary federation server

Introduction

There are multiple ways to set up a highly available ADFS server farm. One possibility is to install multiple federation servers using the default Windows Internal Database.
In that case, the first federation server is designated as the ‘primary’ federation server; every subsequent federation server added to the farm becomes a ‘secondary’ federation server.

These secondary federation servers periodically poll the primary federation server for configuration changes and replicate those changes locally. By default this happens every 5 minutes.

This scenario is especially useful if you do not have a SQL server available or if you cannot make your SQL server highly available but still want to increase resiliency for your federation server farm.

Note   when using the Windows Internal Database instead of SQL, you are limited to a maximum of 5 federation servers in a farm.

If you want more information, read my previous article on the implications of a database choice in ADFS:

The issue

When installing a secondary federation server, you might see the following error in the AD FS 2.0 Application Event Log when the server tries to contact the primary federation server to replicate the configuration database:

EventID: 344
Source: AD FS 2.0

There was an error doing synchronization. Synchronization of data from the primary federation server to a secondary federation server did not occur.

Additional data

Exception details:
System.IO.InvalidDataException: ADMIN0023: Incorrect value for property LastPublishedPolicyCheckTime: 12/31/1899 11:00:00 PM.
   at Microsoft.IdentityServer.PolicyModel.PropertyTypes.DateTimeProperty.Validate(Object context)
   at Microsoft.IdentityServer.PolicyModel.PropertyTypes.PropertySet.ValidateProperties(Object context)
   at Microsoft.IdentityServer.PolicyModel.Client.ClientObject.GetData()
   at Microsoft.IdentityServer.PolicyModel.Client.ClientObject.OnReadFromStore()
   at Microsoft.IdentityServer.PolicyModel.Client.SearchResult..ctor(SearchResultData data, PropertyFactoryBase factory)
   at Microsoft.IdentityServer.Service.Synchronization.SyncAdministrationManager.DoSyncForItems(List`1 itemsToSync)
   at Microsoft.IdentityServer.Service.Synchronization.SyncAdministrationManager.Sync(Boolean syncAll)
   at Microsoft.IdentityServer.Service.Synchronization.SyncAdministrationManager.Sync()
   at Microsoft.IdentityServer.Service.Policy.PolicyServer.Service.SqlPolicyStoreService.DoSyncDirect()
   at Microsoft.IdentityServer.Service.Synchronization.SyncBackgroundTask.Run(Object context)

User Action
Make sure the primary federation server is available or the service account identity of this machine matches the service account identity of the primary federation server.

image

The solution

In this specific case, the customer had decided to geographically spread the different AD FS servers to increase the (site) resiliency of their federation server farm. However, this particular secondary federation server was located in a different time zone than the primary federation server. It seems that AD FS cannot handle the time zone difference by itself (unlike e.g. Active Directory, which normalizes timestamps to UTC).

After changing the time zone on the secondary AD FS server to match the time zone of the primary AD FS server, replication started working.
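
If you want to double-check the synchronization state (and the poll interval mentioned earlier) from the secondary server afterwards, the AD FS 2.0 snap-in will tell you; a quick sanity check could look like this:

# Load the AD FS 2.0 snap-in and show the sync role, last sync time/status and poll interval
Add-PSSnapin Microsoft.Adfs.PowerShell -ErrorAction SilentlyContinue
Get-ADFSSyncProperties

# And compare the configured time zone on both federation servers
tzutil /g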


The “High-Availability” concept…-part 2

Welcome to this second article on my view on high availability. In the first part, we took a look at what high availability is and what the potential impact of requiring a higher availability rate might be.

Today, we’re going to focus on the question: “what should be measured?” and how you take the answer to that question and build your solution.

Dependencies…

I ended my previous article by explaining that you’d want to measure a functionality rather than the individual building blocks of your application architecture. Regular monitoring won’t get you far there, though. Instead, so-called synthetic transactions (tests which mimic a user’s actions, like logging into OWA) allow you to test a functionality end-to-end. This lets you determine whether or not the functionality being tested is working as expected. Underneath, there is still a collection of building blocks which may or may not be aware of each other’s presence. This is something you – as an architect/designer/whatever – should definitely be cautious about.
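
Speaking of synthetic transactions: Exchange itself ships a number of them as Test-* cmdlets, which makes them an easy starting point. A minimal sketch (Exchange 2010-style syntax; the server name, URL and credentials are just examples):

# Built-in synthetic transactions: log on to a (test) mailbox over MAPI and through OWA
Test-MapiConnectivity -Server EX01
Test-OwaConnectivity -URL "https://mail.exblog.be/owa" -MailboxCredential (Get-Credential)

Keep in mind that such probes only exercise the path from wherever you run them, which is exactly why you’d want to run them from the client network rather than from the server itself.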

All too often I come across designs that do not take inter-component dependencies into account: from an application point of view everything might seem operational, but that does not necessarily reflect the situation as it is for an end user…

For example, your Exchange services might be up and running and everything might seem OK from the backend; but if the connection between the server and client networks is broken or otherwise not functioning, your end users won’t be able to use their email (or other services that depend on email) anyway.

This means that – when designing your solution – you should try to (mentally) reconstruct the interaction between the end user and the application (in this case: Exchange). While doing so, write down all the components that come into play. Soon you’ll end up with a list that also involves elements that – at first sight – aren’t really part of your application’s core architecture:

  • Networking (switches, routers, WAN links, …)
  • Storage (e.g. very important in scenarios where Exchange is virtualized)
  • Physical hosts (hypervisors, clustering, …)
  • Human interaction

You’ll find that these elements play an important part in your application architecture, which automatically makes them sort of part of the core architecture after all…

Negotiate your SLAs

Whether or not these ‘external’ elements are your responsibility probably depends on how your IT department is organized. I can imagine that if you have no influence at all on e.g. the networking components, you don’t want to be held responsible if something ever goes wrong at that layer. So while it is still important to know which components might influence the health of your application, it might seem tempting to leave these components out of the SLAs. In other words: if an outage of your application was due to the network layer, it wouldn’t count against your SLA.

In my opinion, though, that defeats the entire purpose of defining SLAs and trying to make IT work as a service for the business. After all, the business doesn’t care what caused an outage; they only care about how long it takes you (or someone else) to get the service/functionality back up and running.

Now that I’ve brought that up, imagine the following example: one of the business requirements states that mail to recipients outside your company should always be delivered within a certain period of time (yes, I deliberately left the timeframe out because it’s immaterial to the point I’m trying to make). When doing a component break-down, you could come up with something like this (high-level):

  • Client network
  • Mailbox Server(s)
  • Hub Transport Server(s)
  • Server network
  • WAN links (internet)

While the first four components might lie within your reach to manage and remediate in case of an outage, the fifth (the WAN link) usually doesn’t. So if it takes your ISP 8 hours to solve an issue (because their SLA allows them to, for instance), you might think twice before accepting a 99,9% uptime figure in your own SLA… If that isn’t an option, you could try finding an ISP who can solve your issues quicker, or you could install a backup internet connection. Bottom line: you also need to take external factors into account when designing your solution.

    In some cases, I’ve seen WAN links (or outages caused by them) marked as an exception to the SLA, simply because the probability of an outage was very low (and the cost of an additional backup link was too high).

Probability vs. impact

When you are designing your solution, you don’t always have to account for every little thing that could go wrong, simply because you cannot account for everything (Murphy?). While your design should definitely take into account whatever it can, it should also pay attention to the cost-effectiveness of the solution. Remember the graph in the first part, which showed that costs tend to grow exponentially when trying to achieve a higher availability rate?

This means that sometimes, because the cost to mitigate a single point of failure or risk cannot be justified, you’ll have to settle for less. In such a case, you’d want to assess what the probability of a potential error/fault is and how it might affect your application. If both the probability that it occurs and the impact on your application are low, it’s sometimes far more interesting to accept a risk than to try to mitigate or solve it. On the other hand, if there’s an error which is very likely to occur and might knock down your environment for a few hours, you might reconsider and try solving that one.

Solving such an issue can be done in various ways (depending on what the error/fault is): either increase the application’s resiliency or solve the issue at the layer where it occurs. For instance: if you’ve got a dodgy (physical) network in one of your two sites, you might rethink your design to make more use of the site that has a proper network, OR you could try solving the issues at the network layer to make it less dodgy (which is what I would prefer).

Conclusion

Although I’m convinced that nothing I wrote here surprised you, by now you should realize that creating a highly available (Exchange) solution takes proper planning. There are far more elements that come into play than one might think at first. Also keep in mind that I only touched on these different aspects superficially; when dealing with potential risks like human error, there are other things that come into play, such as defining a training plan to lower that risk.

I personally believe that the importance of these elements will only grow in the future. I’m sure you’ve already heard of the phenomenon “IT as a service”? When approaching the aspect of high availability, try thinking of yourself as the electricity supplier and of the business as the customer (which they actually are). You don’t care how electricity gets to your home or – if it doesn’t – why it doesn’t. All you care about is having electricity in the end…

Extra resources

Thanks to one of the comments on my previous article, I found out that fellow UC Architect Michel de Rooij also wrote an article on high availability a while back. You should check it out!


Configuring High Availability for the Client Access Server role in Exchange Server 2013 Preview

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it’s not very likely that big changes will occur between now and RTM, they might.

Following one of my previous articles, in which I described how to configure a Database Availability Group to achieve high availability for the Mailbox server role, we will now take a look at how to configure high availability for the Client Access Server role.

CAS Array

To achieve high availability, you create a load-balanced array of Client Access Servers, just like in Exchange Server 2010. Unlike before, however, layer 4 load balancing now becomes a viable option; going without a load balancing device at all would only be an option in the smallest deployments, where there’s simply no budget for one.

Layer 4 load balancing only takes the IP address (and TCP port) into account, and you are no longer required to configure “affinity”: the process whereby a connection – once established – had to be persisted to the same Client Access Server. This is because the CAS role in Exchange Server 2013 no longer does any data rendering: everything happens on the backend (the Mailbox servers).

I hear you thinking: does this mean we could use DNS load balancing (a.k.a. round robin)? The answer is yes and no. Yes, because it will load balance between multiple Client Access Servers; no, because if a server fails, you’d have to remove it (manually) from DNS and wait for the record to time out on all the clients. While this might be a cost-effective way to get load balancing and a very, very basic form of high availability, it is not a really viable solution for most deployments…
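
For completeness: DNS round robin is nothing more than registering the same name with multiple A records. With the DNS PowerShell module in Windows Server 2012 that could look like this (zone and addresses are made-up):

# One name, multiple A records: the DNS server hands them out in round-robin fashion
Add-DnsServerResourceRecordA -ZoneName "exblog.be" -Name "outlook" -IPv4Address 10.20.30.41
Add-DnsServerResourceRecordA -ZoneName "exblog.be" -Name "outlook" -IPv4Address 10.20.30.42
# If the server behind 10.20.30.41 dies, you remove its record manually and wait for client caches to expire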

Ever since the CAS Array was first introduced, it has been subject to quite some misconceptions. A lot of them were addressed by Brian Day in a very interesting article he wrote. What I find is that people tend to mix up the RPC Client Access array and the load-balanced array used for HTTP-based traffic. Yes, the use of the term CAS Array can be a little confusing. No, they’re not the same!

Now, since Exchange Server 2013 dropped RPC-over-TCP, I no longer see the purpose of creating the RPC Client Access Array object (New-ClientAccessArray). Instead, it suffices to configure multiple Client Access Servers with the same internal hostname for Outlook Anywhere.

To understand what happens, let’s take a look at the following examples:

In the case where you’re using two Client Access Servers in the same AD site, by default Exchange will “load balance” traffic between the two endpoints. This means the first request goes to CAS1, the second to CAS2, the third to CAS1, and so on. While this does provide some sort of load balancing, it doesn’t really provide high availability. Once Outlook is connected to a CAS, it will keep trying to connect to that same server, even after the server goes down. Eventually it will try connecting to the other CAS, but in the meantime your Outlook client will be disconnected.

image

If we add a load balancer, we need to configure the internal hostname for Outlook Anywhere to a value shared between the Client Access Servers, for example outlook.exblog.be. This FQDN would then point to the VIP of the load balancer which, in turn, takes care of the rest. Because we’re using a load balancer, it will automatically detect a server failure and redirect incoming connections to the surviving node. Since no affinity is required, this “failover” happens transparently to the end user:

image

As explained before, this load balancer could be anything from simple DNS load balancing, to WNLB, to a full-blown hardware load balancer with all the bells and whistles! However, in contrast to Exchange 2010, most of the advanced options are no longer necessary…

Configuring Outlook Anywhere

To configure the internal hostname for Outlook Anywhere, run the following command for each Client Access Server involved:

Get-OutlookAnywhere -Server <server> | Set-OutlookAnywhere -InternalHostname <fqdn>
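
If you have more than a handful of Client Access Servers, you can also pipe them all through in one go. A sketch using the shared name from the example above (the SSL flag is included for completeness):

# Apply the shared internal hostname to every Outlook Anywhere virtual directory at once
Get-OutlookAnywhere | Set-OutlookAnywhere -InternalHostname "outlook.exblog.be" -InternalClientsRequireSsl $true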

Configuring the Load Balancer

As I explained earlier, layer 4 is now a viable option. Although this could mean you’d just use DNS load balancing, ideally you’d still want to use some load balancing device (physical or virtual).

The benefit of using a load balancer over e.g. WNLB is that these devices usually give you more options for health checking the servers/services you’re load balancing, which allows you better control over the load balancing process. For example, you could check for a particular HTTP response code to determine whether a server is up or not. It definitely beats simple ICMP pings…!

The example below is based on the load balancer in my lab: a KEMP Virtual LoadMaster 1000. As you will see, it’s set up in the most basic way:

I’ve configured no persistence and – because it’s a lab – I’m only checking the availability of the OWA virtual directory on the Exchange servers. Alternatively, you could do more complex health checks. If you’re looking for more information on how to configure a KEMP load balancer, I’d suggest you take a look at Jaap Wesselius’ blogs here and here. Although these articles describe the configuration of a LoadMaster in combination with Exchange 2010, the process itself (except for the persistence settings and such) is largely the same for Exchange Server 2013. Definitely worth the read!

image

image

image


Configuring Database Availability Groups in Exchange Server 2013 Preview

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it’s not very likely that big changes will occur between now and RTM, they might.

The process of creating a Database Availability Group (DAG) in Exchange Server 2013 (Preview) is largely the same as in Exchange Server 2010. You can choose to create a DAG either through the Exchange Administrative Center (GUI) or through the Exchange Management Shell (PowerShell).

I prefer using the Exchange Management Shell over the EAC as it gives you more information about the process.

Exchange Management Shell

To configure a Database Availability Group using the EMS, run the following command, replacing the example values with ones that suit your environment:

New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer SRV01 -WitnessDirectory "C:\FSW" -DatabaseAvailabilityGroupIPAddresses 192.168.20.110

This command will create the DAG. As part of this process, a computer object, also known as the Cluster Name Object, will automatically be created:

image

Note   In order for this process to complete successfully, you need the appropriate permissions on the container in which the object is created. By default this is the “Computers” container. However, it is possible that your Active Directory has been reconfigured to use another container/OU as the default location for new computer accounts. Have a look at http://support.microsoft.com/kb/324949 for more information.

Another way to get around possible permission issues is to create the Cluster Name Object (CNO) upfront. This process is also called “pre-staging”. Doing so allows you to create the object up front with another account (one that has the appropriate rights) so that you don’t run into any issues when configuring your DAG.

To pre-stage the CNO, complete the following tasks:

  1. Open Active Directory Users & Computers, navigate to the OU in which you want to create the object, right-click and select New > Computer:

    image

  2. Enter a Computer Name and click OK to create the account:

    image

  3. Right-click the new account and select Properties. Open the Security tab and add the following permissions:
    – Exchange Trusted Subsystem – Full Control
    – First DAG Node (Computer Account) – Full Control

image     image

More information on how to pre-stage the CNO can be found here: http://technet.microsoft.com/en-us/library/ff367878.aspx
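
If you’d rather script the pre-staging than click through Active Directory Users & Computers, something along these lines should get you there (domain, OU path and server name are placeholders; run it with an account that has rights on the target OU):

# Pre-stage the CNO as a disabled computer account, then grant Full Control
Import-Module ActiveDirectory
New-ADComputer -Name "DAG01" -Path "OU=Exchange,DC=exblog,DC=be" -Enabled $false

# Full Control (GA = Generic All) for Exchange Trusted Subsystem and the first DAG node
dsacls "CN=DAG01,OU=Exchange,DC=exblog,DC=be" /G "EXBLOG\Exchange Trusted Subsystem:GA"
dsacls "CN=DAG01,OU=Exchange,DC=exblog,DC=be" /G "EXBLOG\EX01$:GA"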

Note   If your DAG has multiple nodes across different subnets, you will need to assign the DAG an IP address in each subnet. To do so, separate the IP addresses with commas:

New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer SRV01 -WitnessDirectory "C:\FSW" -DatabaseAvailabilityGroupIPAddresses 192.168.20.110,192.168.30.110,192.168.40.110

Once the command has executed successfully, you can add mailbox servers to the DAG. You will need to run this command for each server you want to add:

Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX02

Alternatively, you can also add all/multiple mailbox servers at once to the DAG:

Get-MailboxServer | %{Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer $_.Name}
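
Once the servers have been added, a quick way to verify membership and witness configuration is the following (the -Status switch makes Exchange query the underlying cluster as well):

Get-DatabaseAvailabilityGroup -Identity DAG01 -Status | Format-List Name,Servers,OperationalServers,WitnessServer,WitnessDirectory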

Adding Database Copies

Now that your DAG has been created, you can add copies of mailbox databases to other mailbox servers. For example, to add a copy of DB1 to server EX02, you would run the following command:

Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX02
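
If you’re adding copies on more than one server, it can be useful to set the activation preference right away, and Get-MailboxDatabaseCopyStatus will tell you when seeding has finished and how the copy is keeping up. A short sketch, using the same example names:

# Add a copy with an explicit activation preference, then check the copy and replay queues
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX02 -ActivationPreference 2
Get-MailboxDatabaseCopyStatus -Identity DB1\* | Format-Table Name,Status,CopyQueueLength,ReplayQueueLength -AutoSize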

Stay tuned! In an upcoming article, I will be talking about configuring high availability for the Client Access infrastructure in Exchange Server 2013, as well as a more general topic on high availability.


A closer look at the “new” Public Folders in Exchange Server 2013

Disclaimer:

All the information in this blog post is subject to change as Exchange Server 2013 is still under development. Although it’s not very likely that big changes will occur between now and RTM, they might.

In addition to some of the existing blogs and information out there, I wanted to take the time to explain the “new” Public Folders in Exchange Server 2013 in a bit more detail.

History

Ever since Exchange Server 2007 was released, Microsoft has proclaimed that Public Folders would eventually disappear. At the time, it was said Public Folders would cease to exist in “future versions” and consultants were advised to evangelize SharePoint as being thé replacement for Public Folders. It seems, however, that these statements provoked quite some comments from the field: companies worldwide were still using Public Folders and were not planning on giving them up so easily. Knowing what a migration to SharePoint could potentially cost, it’s quite understandable that some companies were rather reluctant to jump on the SharePoint train immediately.

In fact, I cannot blame them. Even today, there’s no real alternative that offers the same functionality and ease of use as Public Folders.

Over time, Microsoft seems to have recognized that killing Public Folders might not be such a great idea after all and changed its attitude little by little. Nevertheless, they’ve always de-emphasized the use of Public Folders.

…At least until NOW.

To better understand what the changes in Exchange Server 2013 are, let’s first have a look at how things were in Exchange Server 2010.

Public Folders in Exchange 2010 (and before)

In Exchange Server 2010 (and before), Public Folders were stored in their own database(s).

Basically speaking, Public Folders consist of two elements:

    • Hierarchy, which contains the properties of the public folders and also includes the tree structure in which the Public Folders are organized. This tree structure contains Public and System Public Folders.
    • Content, which is the actual data (e.g. messages) in a Public Folder.

Each Public Folder database contains a copy of the hierarchy; however, not every database contains the same content:

image
image: a simplified graphical view of a public folder database

It was up to the administrator to define what content would be replicated across one (or more) servers. When a public folder was set up to have multiple copies, a process called “Public Folder replication” ensured that data was replicated across these “replicas” (= other public folder databases).

This replication is in no way comparable to how a DAG replicates data, though: SMTP messages are sent between the different Mailbox servers that host a Public Folder database:

image
image: a simplified overview of the PF replication model

Although from a data point of view having multiple writeable copies of Public Folders sounds highly available, it is not. At least not when it comes to the end-user experience during a failover. Each mailbox database is configured with a default (“preferred”) Public Folder database, and mailboxes within that mailbox database automatically connect to their preconfigured “preferred” Public Folder database.

    • If – for some reason – the server hosting the Public Folder database was entirely unavailable (e.g. offline), a timeout (+/- 60 seconds) would occur and clients would be redirected to another Public Folder database.
    • If, however, the server was still online but the database was e.g. dismounted, there would be no redirection and clients would end up with no Public Folders at all.

As you can imagine, not really a nice user experience, is it?

From a design point of view, Public Folders allowed you to easily spread data across multiple locations and have people make changes to the data in different locations. These changes were then replicated amongst the different replicas. In fact, Public Folders (in Exchange 2010 and before) are implemented in a sort of multi-master model: every replica of a public folder is writeable.

Changes in Exchange Server 2013

Exchange Server 2013 entirely changes how Public Folders operate (from a technical point-of-view). From an end user’s perspective, everything remains as it used to be.

The main architectural changes that are introduced are:

    • Public folders are now stored in a mailbox database
    • Public folders now leverage a DAG for high-availability


Public Folder Mailboxes

Public folders are now also mailboxes, but of the type “Public Folder” (just like a Room Mailbox is a mailbox of type “Room”). In a way, Public Folders still consist of the two main elements mentioned above: the hierarchy and the contents.

  • The hierarchy is represented by what is called the Master Hierarchy Public Folder Mailbox. This PF Mailbox contains a writeable copy of your public folder hierarchy. There is only a single Master Hierarchy PF mailbox in the organization.
  • Contents (in Public Folders) are now stored in one or more Public Folder mailboxes. These Public Folder mailboxes usually contain one or more Public Folders. Next to the contents, each PF Mailbox also contains a read-only copy of the hierarchy.

image
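
To make that structure a little more tangible: creating it from the shell might look like this (the names are made-up; the first public folder mailbox you create automatically holds the writeable master copy of the hierarchy):

# The first PF mailbox in the organization becomes the Master Hierarchy mailbox
New-Mailbox -PublicFolder -Name "PFMbx-Hierarchy"
# Additional PF mailboxes hold content plus a read-only copy of the hierarchy
New-Mailbox -PublicFolder -Name "PFMbx-Sales"
# Create a public folder and store its content in a specific PF mailbox
New-PublicFolder -Name "Sales Documents" -Mailbox "PFMbx-Sales"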

Note   Because Public Folders are now stored in mailboxes, mailbox quotas apply to them. This means, for instance, that if a PF Mailbox grows too large, you’d have to move Public Folders to another PF Mailbox (or increase the quota).

Just like regular mailboxes, PF Mailboxes can grow quite large without really suffering from performance issues. Although real-world experience will have to tell us how Public Folders perform with large amounts of data in them, I’m confident that most companies won’t have a problem fitting their Public Folders into one (or more) PF Mailboxes. Nonetheless, you should carefully plan your Public Folder deployment, especially if your company relies heavily on them.

How do these “new” Public Folders work?

The process of connecting to and performing actions on a Public Folder could be summarized as follows:

    1. A user connects to a Public Folder (Mailbox).
    2. Any operation on the Public Folder is recorded in that PF mailbox.
    3. If the data is located in another Public Folder Mailbox, the operation is redirected to the appropriate PF mailbox (the one which contains the Public Folder against which the action was performed).
    4. Hierarchy changes (e.g. adding or removing folders) are redirected to the Master Hierarchy PF Mailbox, from where they are in turn replicated to all other PF Mailboxes (5).

Note   Hierarchy changes are never recorded through a regular PF Mailbox

image

High Availability

Because Public Folders now reside in regular mailbox databases, they can benefit from the High Availability model that a Database Availability Group offers. Since you can store Public Folder mailboxes in any database, there’s no difference as to how they are treated in case of a failover.

The implications of changes

Because of these changes, the way we implement Public Folders will also change. Foremost, placement of Public Folders will require a bit more planning from now on. There can only be a single active copy of a mailbox database at any given time, so we can no longer “benefit” from the multi-master model that existed previously.

This means that you should – preferably – place your Public Folder Mailboxes as close to your users as possible. If, however, two different groups need to work on the same set of folders from opposite sides of the globe, you might be in for a challenge… Of course, one might start to wonder whether Public Folders would be the right choice in such a scenario after all.

It’s not only good news

Unfortunately, every upside also has a downside: due to all the changes that were introduced, it seems Microsoft wasn’t able to get everything finished in time. At RTM we won’t have Public Folder support in OWA. Perhaps this is something we’ll see added in SP1 (just like with Exchange 2010)…

Migration and coexistence

I will keep the migration and coexistence part for a future article. At the moment, Exchange Server 2013 Preview only supports greenfield installs… On the other hand, if you’re interested in reading more about migration, you might want to take a look at fellow UC Architect Mahmoud Magdy’s blog about the new Public Folders.
