As a consultant, I regularly come across situations in which I have to troubleshoot an existing Exchange server environment, or perhaps have to make an assessment, write a health report, etc.
Almost every time, I found myself looking up information for the different (commonly used) virtual directories: Autodiscover, ActiveSync, OWA, ECP, Web Services, OAB… That’s why I thought it was about time I automated this process so that I didn’t have to type the commands in manually anymore.
The result is a simple script which will query the Client Access Servers in your environment for their virtual directory information. Depending on the virtual directory, different objects are shown:
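A minimal sketch of what such a script could look like (the cmdlet selection and output formatting here are my own assumptions, not necessarily what the script does; run it from the Exchange Management Shell):

[sourcecode language="powershell"]
# Query each Client Access Server for the URLs of its virtual directories.
foreach ($cas in Get-ClientAccessServer) {
    Get-AutodiscoverVirtualDirectory -Server $cas.Name | Select-Object Server,InternalUrl,ExternalUrl
    Get-ActiveSyncVirtualDirectory -Server $cas.Name | Select-Object Server,InternalUrl,ExternalUrl
    Get-OwaVirtualDirectory -Server $cas.Name | Select-Object Server,InternalUrl,ExternalUrl
    Get-EcpVirtualDirectory -Server $cas.Name | Select-Object Server,InternalUrl,ExternalUrl
    Get-WebServicesVirtualDirectory -Server $cas.Name | Select-Object Server,InternalUrl,ExternalUrl
    Get-OabVirtualDirectory -Server $cas.Name | Select-Object Server,InternalUrl,ExternalUrl
}
[/sourcecode]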
In the first part of this article, we talked about Load Balancing in general and took a closer look at what the advantages and disadvantages of simple layer 4 load balancing for Exchange 2013 were. Today we’ll dive a bit deeper into the remaining two ways of load balancing Exchange 2013: layer 4 with multiple namespaces and ‘traditional’ layer 7.
Layer 7 Load Balancing
Layer 7 load balancing offers some additional functionality over Layer 4. Because traffic is decrypted, the load balancer can ‘read’ (‘understand’) the traffic that is coming through and take appropriate action based on the type (or destination) of that traffic.
By decrypting traffic, the load balancer can read the destination of a packet, which allows you to distinguish between traffic for the different Exchange workloads while still using a single virtual service. Based on the type of workload, traffic could e.g. be sent to a different set of servers. However, this was not the most important reason to do Layer 7 load balancing. In Exchange 2010, traffic coming from a certain client had to be persisted to the same endpoint (= the same Client Access Server). This meant that the initial connection could be sent to just about any CAS, but once the session was established, subsequent packets for that session had to be handled by the same CAS.
A load balancer typically has multiple ways to maintain this client <> server relationship. Depending on the make and model of your load balancer, you might see the vendor refer to this relationship as “persistence”, “stickiness”, etc. The most commonly used methods are:
For a load balancer to be able to identify these things, it needs to be able to read the traffic, which means the traffic has to be decrypted for every method except Source IP. Although Source IP affinity doesn’t require decryption, in some scenarios this type of affinity can cause an uneven distribution of load, especially when traffic comes from behind a NAT device. Consider the following scenario:
Multiple internet-connected devices connect to your on-premises environment and, before hitting the load balancers, they go through a router/firewall or other NAT device which ‘changes’ (masks) the source IP addresses (220.127.116.11 and 18.104.22.168) into a single internal address (10.20.30.40). If your load balancers are configured to persist connections based on the source IP address, all connections will potentially end up at the same CAS. This is – of course – something you’d want to avoid. Otherwise, what purpose would your load balancer serve?
For this “problem” there are a few solutions:
You could disable NAT, which would reveal the client’s original IP address to the load balancer. Unfortunately, this isn’t always possible and depends on your network topology/routing infrastructure.
You could change the configuration of your load balancer to use something else than the Source IP to determine whether a connection should be persisted or not.
In the latter case, persistence based on the SSL Session ID is a good alternative. The load balancer examines every “packet” that flows through it, reads the session ID and, if it finds a match for a previously created session, sends that packet to the same destination as before. While this works brilliantly, it induces a higher load on your load balancer because:
the load balancer needs to inspect each packet flowing through, consuming more CPU cycles
the load balancer needs to maintain a bigger “routing table”, which consumes more memory. By that I mean a table where the Session ID is mapped to a destination server.
As mentioned earlier, because you are decrypting the traffic, you can e.g. determine from the packet what the destination URL is. In essence, this allows you to define multiple virtual services (one for each workload) and have the load balancer choose which virtual service to forward a packet to. In this specific example, the virtual services are “hidden” from the end user.
Let’s pour that into an image; things might become clearer that way:
For external clients, there is still a single external URL (VIP) they connect to, but ‘internally’ there is a separate virtual service for each workload. Whenever a packet reaches the load balancer, it is read and, based on the destination URL, the appropriate virtual service is picked. The biggest advantage is that each virtual service can have its own set of health criteria. This also means that – because workloads are split – if e.g. OWA fails on one server, it won’t affect the other workloads on that server (as they belong to different virtual services). While OWA might be down, other protocols remain healthy and the load balancer will continue forwarding packets for those workloads to that server.
With this in mind, we can safely conclude that Layer 7 load balancing clearly offers some benefits over simple Layer 4. However, it will cost you more in terms of hardware capacity for your load balancer. Given that a decently sized load balancer can cost a small fortune, it’s always nice to explore what other alternatives you have. On top of that, this kind of configuration isn’t exactly “easy” and requires quite some work from a load balancer’s perspective. I’ll keep the configuration steps for a future article.
Layer 4 load balancing with multiple namespaces
As I showed in the first part of this article, Exchange 2013 greatly simplifies load balancing compared to Exchange 2010. Unfortunately, this simplification comes at a cost: you lose the ability to do a per-protocol health check when using Layer 4. And let’s face it: losing functionality isn’t something you like, right?
Luckily, there is a way to have the best of both worlds though…
Combining the simplicity of Layer 4 with a way to mimic the Layer 7 functionality is what the fuss is all about. Because with Layer 4 your load balancer has no clue what the endpoint for a given connection is, we need to find a way to let the load balancer know what the endpoint is, without actually needing to decrypt traffic.
The answer is in fact as simple as the idea itself: use a different virtual service for each workload but this time with a different IP address for each URL. The result would be something like this:
Each workload now has its own virtual service and therefore you also get a per-workload (per-protocol) availability. This means that, just as with Layer 7, the failure of a single workload on a server has no immediate impact on other workloads while at the same time, you maintain the same level of simplicity as with simple Layer 4. Sounds cool, right?!
Obviously, there is a “huge” downside to this story: you potentially end up with a bunch of different external IP addresses. Although there are also solutions for that, it’s beyond the scope of this article to cover those.
Most people – when I tell them about this approach – don’t like the idea of exposing multiple IPs/URLs to an end user. “For my users it’s already hard enough to remember a single URL”, they say. Who am I to argue? Personally, though, I don’t see this as an issue: when you think about it, there’s only one URL an end user will actually be exposed to, and that’s the one for OWA. All other URLs are configured automatically through Autodiscover. So even with these multiple URLs/IPs, your users really only need to remember one.
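For illustration, the per-workload external URLs could be set along these lines (the hostnames below are made-up examples and CAS1 is a placeholder server name; substitute your own):

[sourcecode language="powershell"]
# One namespace (and thus one VIP) per workload; only owa.exblog.be is user-facing.
Set-OwaVirtualDirectory -Identity "CAS1\owa (Default Web Site)" -ExternalUrl "https://owa.exblog.be/owa"
Set-EcpVirtualDirectory -Identity "CAS1\ecp (Default Web Site)" -ExternalUrl "https://ecp.exblog.be/ecp"
Set-WebServicesVirtualDirectory -Identity "CAS1\EWS (Default Web Site)" -ExternalUrl "https://ews.exblog.be/ews/exchange.asmx"
Set-ActiveSyncVirtualDirectory -Identity "CAS1\Microsoft-Server-ActiveSync (Default Web Site)" -ExternalUrl "https://eas.exblog.be/Microsoft-Server-ActiveSync"
Set-OabVirtualDirectory -Identity "CAS1\OAB (Default Web Site)" -ExternalUrl "https://oab.exblog.be/oab"
[/sourcecode]

Autodiscover then hands all of these out to the clients, which is exactly why the end user only ever types the OWA one.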
Of course, there’s also the element of the certificate, but depending on the load balancer you buy, that could still be cheaper than the L7 load balancer from the previous example.
Configuring Layer 4 load balancing with different namespaces is done the same way as configuring a single namespace; you just have to do it multiple times. The difference lies in the health checks you perform for each protocol, and those are free for you to choose. For now, the health check options are relatively limited (although that also depends on the load balancer you use). But the future might hold some interesting changes: at MEC, Greg Taylor explained that Microsoft is working with some load balancer vendors to make their load balancers capable of “reading” the health status of an Exchange server as produced by Managed Availability. This would mean that your load balancer no longer has to perform any specific health checks itself and can rely on the built-in mechanisms in Exchange. Unfortunately there isn’t much more information available right now, but rest assured that I’m following this closely.
Differentiator: health checks
As said before, the configuration is identical to Layer 4 with a single namespace. For more information on how to configure a virtual service with a KEMP load balancer, please take a look at the first part of this article.
The key difference lies in the health checks you perform against each workload. Even then, there is a (huge) difference in what load balancers can do.
Even though I like KEMP load balancers a lot, they are – compared to e.g. F5 – limited in their health checks. From a KEMP load balancer perspective, your health check rule for e.g. Outlook Anywhere would look something like this:
From an F5 perspective, a BIG-IP LTM allows you to “dive deeper” into the health checks: you can define a user account that is used to authenticate against rpcproxy.dll. Only if that fails will the service be marked down, rather than relying on a “simple” HTTP GET.
On a side note: for about 90% of the deployments out there, the simple health check method proves more than enough…
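To get a feel for what such a per-protocol check looks at, you can probe the healthcheck.htm pages that Exchange 2013 publishes for each protocol (a sketch; the server name is a placeholder):

[sourcecode language="powershell"]
# Probe the per-protocol health pages on a single CAS.
$server = "cas1.exblog.be"
foreach ($proto in "owa","ecp","ews","oab","Microsoft-Server-ActiveSync","rpc") {
    try {
        $r = Invoke-WebRequest -Uri "https://$server/$proto/healthcheck.htm" -UseBasicParsing
        Write-Host "$proto -> $($r.StatusCode)"
    } catch {
        Write-Host "$proto -> DOWN"
    }
}
[/sourcecode]

A load balancer doing per-protocol health checks is essentially performing the same request against each virtual service.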
Below you’ll find a summary of the most important pros and cons per option.
Layer 4 (Simple)
Pros: easy to set up; fewer resources required on the LB; single external IP/URL
Cons: no per-protocol availability
Layer 7
Pros: per-protocol availability; single external IP/URL
Cons: more difficult to set up; more resources required on the LB
Layer 4 (Multiple Namespaces)
Pros: easy to set up; fewer resources required on the LB; per-protocol availability
Cons: multiple external IPs/URLs
If it were up to me, I’d go for Layer 4 with multiple namespaces. It’s clean, it’s simple to set up (both from a load balancer and an Exchange point of view) and it will most probably save me some money. Unless you only have a limited number of (external) IP addresses available, this seems like the way forward to me.
Nonetheless, this doesn’t mean you should kick out your fancy load balancer if you already have one. My opinion: work with what you’ve got and, above all, keep it simple!
In one of my earlier posts, I already briefly explained what load balancing in Exchange 2013 looked like and how you could setup basic load balancing using a KEMP LoadMaster based on a blog post that Jaap Wesselius wrote. Today I will elaborate a bit more on the different load balancing options you have, along with how to configure a KEMP LoadMaster accordingly. In a future blog post (which I’m working on right now), I will be explaining how you can configure an F5 BIG-IP LTM to achieve the same results.
Load Balancing Primer
One of the ‘biggest’ changes to load balancing in Exchange 2013 is that you can now use simple, plain Layer 4 load balancing. The more ‘complex’ and more expensive Layer 7 load balancing that you had to use in Exchange 2010 is no longer a hard requirement. That doesn’t mean you cannot use Layer 7 load balancing, but more about that later (part 2).
In a nutshell, Layer 7 load balancing means that you can do application-aware load balancing. You achieve this by decrypting SSL traffic, which allows the load balancing device to ‘read’ the traffic that’s passing through. The load balancer analyzes the headers and, based on what it ‘finds’ (and how it’s configured), takes an appropriate action (e.g. use another virtual service or another load balancing option).
Traffic decryption consumes additional resources on your load balancer; the impact of the number of users connecting through your load balancing device is usually (much) higher in that case. Also, not all load balancers offer this functionality by default: some require you to purchase an additional license (or hardware) before they can decrypt SSL traffic. Companies might choose to re-encrypt traffic that is sent to Exchange (SSL Bridging) for security reasons (e.g. to avoid sniffing). In Exchange 2010 you could also choose not to re-encrypt traffic when sending it to Exchange; this latter process is called SSL Offloading and required some additional configuration in Exchange. Not re-encrypting traffic (SSL Offloading) saves some resources on your load balancing device, but know that – for now – it is not supported for Exchange 2013: you are required to re-encrypt traffic when using Layer 7 load balancing! So make sure you size your hardware load balancers accordingly. How to size your load balancer depends on the model and make; contact the manufacturer for more information and guidance.
Load Balancing Options in Exchange 2013
When looking purely at Exchange 2013, you have different options with regards to load balancing. Each of the following options has its advantages and disadvantages. Let’s take a closer look at each of them and explain why:
Layer 4 (single namespace)
Layer 4 (separate namespaces)
Layer 4 (single namespace)
This is the most basic (and restricted) but also the easiest way of setting up load balancing for Exchange 2013. A single namespace will be used to load balance all traffic for the different Exchange workloads (OWA, EAC, ActiveSync, EWS, …).
Because you are using a single namespace with Layer 4, you cannot do workload-specific load balancing and/or health checking. As a result, a failure of a single workload can possibly impact the end user experience. To illustrate, let’s take the following example:
User A connects to externalurl.domain.com and opens up OWA. At the same time, his mobile device syncs (using ActiveSync) over the same URL, while his Outlook is connected (Outlook Anywhere) – you’ve guessed it – over the same external URL. In this example, the virtual service is configured to check the OWA virtual directory (HTTP 200 OK) to determine whether a server is available or not.
Let’s see what happens if OWA fails on a server. The health check that the load balancer executes will fail, so it assumes the server isn’t functioning properly anymore and takes the entire server out of service by preventing traffic from being forwarded to it. Even though other workloads (Outlook Anywhere, ActiveSync, …) might still work perfectly, the server will still be considered ‘down’. From an efficiency point of view, that is not the best thing that could happen:
Now, imagine that OWA works fine, but the RPC/HTTP component is experiencing some difficulties… At this point, OWA would still return a positive health check status and the load balancer would assume the server is still healthy. As a result, traffic is still sent to the server, including traffic for the failed RPC/HTTP component. This will definitely impact a user’s experience.
As you can see, it’s perhaps not the best possible experience for the end user (in all cases), but it definitely is the easiest and most cost-effective way to set up load balancing for Exchange 2013. Because you don’t need advanced functionality from the load balancer, you will probably be able to fit more traffic through the load balancer before hitting its limits. Alternatively, if you don’t have a load balancer yet, it could allow you to buy a smaller-scaled model to achieve the “same” results as with L7 load balancing.
Let’s take a look at how to setup a KEMP LoadMaster for this scenario:
Note: to keep things brief and to the point, I’m not going to discuss how to set up a KEMP LoadMaster in your environment and what options you have there (e.g. single-arm setup, …). We’ll only discuss how to set up the virtual services and other related configuration settings.
First, navigate to Virtual Services and click Add New:
Enter the Virtual IP Address (VIP), port, and optionally a descriptive name for the service. Then click Add this Virtual Service to create it:
Configure the Basic Properties as follows:
Because we’re not going to use Layer 7, make sure to disable Force L7 in the Standard Options. Because Exchange doesn’t need affinity (persistence), you do not have to configure anything specific for that either. You’re free to choose the load balancing mechanism (a.k.a. scheduling), but Round Robin should be OK:
I can imagine that you’re now thinking: “Wait a minute, does that mean that every new connection can possibly end up at another server in the array?” The answer is: yes. But that actually doesn’t matter. Every connection that is made is automatically re-authenticated. As a result, the fact that every new connection is forwarded to another server (even mid-session) doesn’t impact the end user’s experience. If you’d like to know a bit more about this, I suggest you listen to episode 12 of “The UC Architects”, in which Greg Taylor (Microsoft) explains how that actually works.
Anyway, we got sidetracked. Back to the LoadMaster! Again, because we’re doing the simplest of the simplest (L4) here, we also don’t need any form of SSL decryption, meaning that you can leave these options at their defaults:
If you want, you can specify a “Sorry Server” in the advanced properties. This is a server that traffic is redirected to if all servers within the array become unavailable:
Note: if you click Add HTTP Redirector, the KEMP LoadMaster will automatically create a Virtual Service for the same VIP on TCP port 80 that redirects traffic to the Virtual Service on TCP port 443. This way you can enforce SSL without necessarily having to do that elsewhere.
All that’s left now is to configure the servers to direct traffic to. Click Add New under Real Servers to add an Exchange 2013 server. Once you’ve filled in the details, click Add this Real Server. Repeat this step for all Exchange 2013 Client Access Servers you want to add.
We still have to configure how the LoadMaster will check the health of the servers in the array. There are different ways to do this. In the example below, the LoadMaster will query the OWA virtual directory and, if it gets a response, the server will be deemed healthy. Alternatively, you could opt not to specify a URL. Do not forget to click Set Check Port after entering the port number.
Believe it or not, but that’s it. Your LoadMaster is now set up to handle incoming traffic and distribute the load evenly over the configured servers.
Simple Layer 4 load balancing clearly has its benefits in terms of simplicity and cost (you usually need a less powerful hardware load balancer to achieve the same result as with L7). On the other hand, the limited health check options could take a server offline because of problems with a single workload, while other workloads might still work fine on that server.
In the second part of this article, we’ll dive deeper into the option of using Layer 4 load balancing with multiple namespaces and how that compares to using Layer 7 load balancing, so stay tuned!
All the information in this blog post is subject to change as Exchange Server 2013 is still under construction. Although it’s not very likely that big changes will occur between now and RTM, they might.
Following one of my previous articles in which I described how you could configure a Database Availability Group to achieve high availability for the Mailbox Server Role, we will now take a look at the process of how to configure high availability for the Client Access Server.
To achieve high availability, you create a load-balanced array of Client Access Servers, just like in Exchange Server 2010. Unlike before, though, layer-4 load balancing now becomes a viable option – going without a load balancer at all would only make sense in the smallest deployments where there’s no budget for one.
Layer-4 load balancing only takes the IP address (and TCP port) into account. Yes: you are no longer required to configure “affinity”, the process whereby a connection – once established – had to be persisted through the same Client Access Server. This is because the CAS in Exchange Server 2013 doesn’t do any data rendering anymore: everything happens on the back end (the Mailbox servers).
I hear you thinking: does this mean we could use DNS load balancing (a.k.a. round robin)? The answer is yes and no. Yes, because it will load-balance between multiple Client Access Servers; no, because if a server fails, you’d have to remove it (manually) from DNS and wait for the record to time out on all the clients. While this might be a cost-effective way to get load balancing and a very, very basic form of high availability, it is not a viable solution for most deployments…
Ever since the CAS Array was first introduced, it has been subject to quite a few misconceptions. A lot of them were addressed by Brian Day in a very interesting article he wrote. What I find is that people tend to mix up the RPC Client Access Array and the load-balanced array used for HTTP-based traffic. Yes, the use of the term CAS Array can be a little confusing. No, they’re not the same!
Now that Exchange Server 2013 has dropped RPC over TCP, I no longer see the purpose of creating the RPC Client Access Array object (New-ClientAccessArray). Instead, it suffices to configure multiple Client Access Servers with the same internal hostname for Outlook Anywhere.
To understand what happens, let’s take a look at the following examples:
In the case where you’re using two Client Access Servers in the same AD site, Exchange will by default “load balance” traffic between the two endpoints. This means that the first request goes to CAS1, the second to CAS2, the third to CAS1, etc. While this does provide some sort of load balancing, it doesn’t really provide high availability. Once Outlook is connected to a CAS, it will keep trying to connect to that same server, even after the server has gone down. Eventually it will try connecting to the other CAS, but in the meantime your Outlook client will be disconnected.
If we add a load balancer, we need to configure the Internal Hostname for OA to a shared value between the Client Access Servers. For example: outlook.exblog.be. This fqdn would then point to the VIP of the load balancer which, in turn, would take care of the rest. Because we’re using a load balancer, it will automatically detect a server failure and redirect the incoming connection to the surviving node. Since there is no affinity required, this “fail over” happens transparently to the end user:
As explained before, this load balancer could be anything from simple DNS load balancing, to WNLB or a full-blown hardware load balancer that’s got all the kinky stuff! However, in contrast to before (Exchange 2010), most of the advanced options are not necessary anymore…
Configuring Outlook Anywhere
To configure the internal hostname for Outlook Anywhere, run the following command for each Client Access Server involved:
[sourcecode language="powershell"]Get-OutlookAnywhere -Server <server> | Set-OutlookAnywhere -InternalHostname <fqdn>[/sourcecode]
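Afterwards, you can quickly verify that all servers advertise the same value (a sketch):

[sourcecode language="powershell"]
# All Client Access Servers should now show the shared internal hostname.
Get-OutlookAnywhere | Format-Table Server,InternalHostname,ExternalHostname -AutoSize
[/sourcecode]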
Configuring the Load Balancer
As I explained earlier, Layer 4 is now a viable option. Although this could mean simply using DNS load balancing, you will typically want some kind of load balancing device (physical or virtual).
The benefit of using a load balancer over e.g. WNLB is that these devices usually give you more options for health-checking the servers/services you’re load balancing, which gives you better control over the load balancing process. For example: you could check for a particular HTTP response code to determine whether a server is running or not. It definitely beats simple ICMP pings!
The example below is based on the load balancer in my lab: a KEMP Virtual Load Master 1000. As you will see, it’s setup in the most basic way:
I’ve configured no persistence and – because it’s a lab – I’m only checking the availability of the OWA virtual directory on the Exchange servers. Alternatively, you could do more complex health checks. If you’re looking for more information on how to configure a KEMP load balancer, I’d suggest you take a look at Jaap Wesselius’ blogs here and here. Although these articles describe the configuration of a LoadMaster in combination with Exchange 2010, the process itself (except for the persistence settings, etc.) is largely the same for Exchange Server 2013. Definitely worth the read!
All the information in this blog post is subject to change as Exchange Server 2013 is still under construction. Although it’s not very likely that big changes will occur between now and RTM, they might.
One of the tasks you will have to complete after installing Exchange Server 2013 is configuring certificates. Most Exchange-related traffic (including client traffic) is handled via HTTPS and thus requires some additional configuration to work properly.
Out of the box, Exchange Server 2013 will be using self-signed certificates. While these certificates (and the associated error messages) might be acceptable in a test-lab, they won’t be in production.
Although there is already a lot of guidance on this topic for Exchange Server 2010, I still regularly come across issues because of wrongly configured certificates. Hence, I decided to create this article to somewhat clarify the subject.
What certificates can I use?
As in Exchange Server 2010, Exchange Server 2013 can use either SAN certificates (recommended) or wildcard certificates.
What namespaces should I use?
The change in the client access model in Exchange 2013 (from RPC over TCP to RPC over HTTP) has a positive impact on the design of your namespaces: ultimately, you need fewer namespaces than before.
Amongst the namespaces that you (potentially) need are:
Client Access Array (Array of Load Balanced CAS servers)
OWA Failback URL*
* Whether or not you need the failback URL depends entirely on the setup of your environment. I will discuss the design and setup of a highly available Exchange Server 2013 environment in another article soon.
Configuring Certificates using EAC
If you aren’t re-using a certificate that you exported from another server earlier, you will have to create a new certificate request first. This request will provide you with a CSR which you can use to generate the actual certificate.
Open the EAC and navigate to Servers > Certificates and click the “+”-sign (New Certificate Wizard):
Click create a request for a certificate from a certification authority and click Next:
Type in a name for the certificate and click Next. Although you could enter just anything, it’s always interesting to make this name as descriptive as possible:
If you want to use a wildcard certificate, click the checkbox and enter your root domain. Then click Next.
If you are not using a wildcard certificate, you will be presented with the following screens first. On the first page, you can define what namespaces you are going to use for the different Exchange services:
After having clicked Next, you can manually add or remove namespaces to be included on the certificate. Once ready, click Next.
Select on which server to store the certificate and click Next:
Enter your organization details and click Next.
Enter the location where you want to save the certificate. The location should be entered as a UNC path. Then click Finish:
Before continuing, verify that the certificate request file has been created correctly. The file contains a CSR which you can use to request your certificate from your CA. This CA can be a private or a public one.
Once you have received the certificate from your CA, you can continue to configure the certificate.
From the certificate overview window, click Complete in the actions pane:
Enter the location where you stored the certificate. The location should be entered in UNC format. Then, click OK:
The certificate request should now complete successfully. Verify that the certificate shows up in the certificate list and that its status is valid:
Although the certificate has been installed on the server, it has not been activated yet. Before a certificate is used, you need to assign services to it.
From the certificate overview, select the newly added certificate and click Edit:
On the properties page, select Services and select the appropriate service (e.g. IIS). Afterwards click Save:
Verify that the certificate is installed and configured correctly by browsing to Outlook Web App. This should not throw a certificate warning. If there is a certificate warning, either something went wrong or your certificate does not contain all the namespaces you’re using.
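You can also double-check this from the Exchange Management Shell (a sketch):

[sourcecode language="powershell"]
# List certificates with their status and the services they are bound to.
Get-ExchangeCertificate | Format-Table Thumbprint,Status,Services,Subject -AutoSize
[/sourcecode]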
If you have multiple servers, you can either repeat the process above for each server or if you will be using a single certificate for all servers (e.g. when using a wildcard certificate), you should import this newly added certificate on other servers.
From the certificate overview, select the certificate and click the three dots (…). From there, select Export Exchange Certificate:
Enter a path where the certificate should be saved and type in a password to protect the certificate. The path should be entered in UNC format. Afterwards click OK:
Verify that the certificate was exported successfully:
Next, you will have to import the certificate on each server that will be using the certificate.
From the certificate overview, select the certificate and click the three dots (…). From there, select Import Exchange Certificate:
Enter the UNC where you previously exported the certificate to and provide the password you chose earlier. Click Next:
Select the server(s) to which you want to import the certificate. Then click Finish:
Verify that the certificate has successfully been imported to the different servers by selecting each server from the drop-down list on the certificate overview page. The imported certificate should be in the list and its status should be valid:
All that’s left to do now is to assign services to the certificate on each server. The process is identical to the one described before.
Configuring certificates using PowerShell
The more servers you have, the longer it might take to perform these actions using the EAC. Alternatively, you can use PowerShell to request, import and export Exchange certificates:
Open the Exchange Management Shell and type in the following commands. This will create the certificate request (CSR) which you can use to request a certificate from your CA:[sourcecode language="powershell"]$newcert = New-ExchangeCertificate -GenerateRequest -SubjectName "c=BE,o=EXBLOG,cn=webmail.exblog.be" -DomainName "autodiscover.exblog.be" -PrivateKeyExportable $true
$newcert | Out-File c:\certreq.txt[/sourcecode]
c= denotes the country by its international code, e.g. “BE” = Belgium
o= denotes the organization, e.g. “EXBLOG”
cn= represents the common name of the certificate
DomainName represents the (Subject Alternative) Names to be included on the certificate
Once you’ve received the certificate from your CA, it’s time to import/install it onto the server (= completing the request):[sourcecode language="powershell"]Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content -Path "c:\CertificateFromCA.cer" -Encoding Byte -ReadCount 0))[/sourcecode]
Next, we need to assign services to this certificate:[sourcecode language="powershell"]Get-ExchangeCertificate -Thumbprint <thumbprint> | Enable-ExchangeCertificate -Services IIS,SMTP[/sourcecode]
Note: while enabling the certificate you will get a warning that this will overwrite the existing certificate. You can safely overwrite the default self-signed certificate.
Now that the certificate has been installed on the first server, we need to export it and import it on our other servers:[sourcecode language="powershell"]$certexport = Get-ExchangeCertificate -DomainName "webmail.exblog.be" | Export-ExchangeCertificate -BinaryEncoded:$true -Password (Get-Credential).password
Set-Content -Path c:\cert_export.pfx -Value $certexport.FileData -Encoding Byte[/sourcecode]
Note: when running the first command, you will be prompted for a username and password. You can type whatever you want for the username, as this value is ignored. However, remember the password, as you will use it later to import the certificate on another server.
To import the certificate into another server, use the following command:
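Based on the export in the previous step, that command could look something like this (the UNC path is a placeholder for wherever you stored the .pfx file):

[sourcecode language="powershell"]
# Import the exported .pfx; you will be prompted for the password
# that was chosen during the export.
Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content -Path "\\server1\share\cert_export.pfx" -Encoding Byte -ReadCount 0)) -Password (Get-Credential).password
[/sourcecode]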
Enable the newly imported certificate by assigning services to it. Do this for each server to which you imported the certificate:[sourcecode language="powershell"]Get-ExchangeCertificate -DomainName "webmail.exblog.be" | Enable-ExchangeCertificate -Services IIS,SMTP[/sourcecode]
Clearly, PowerShell is the way to go once you have multiple servers in your organization: in just a few steps, you created, exported and imported a certificate on multiple servers.