Speeding up retrieval of Send-As permissions

Many Exchange administrators are familiar with the Get-ADPermission cmdlet. Unlike the Get-MailboxPermission cmdlet, the Get-ADPermission cmdlet retrieves Active Directory permissions for an object rather than permissions in Exchange itself. For instance, the Get-ADPermission cmdlet will reveal Send-As permissions, whereas the Get-MailboxPermission cmdlet will tell you, for example, who has Full Access permissions on a mailbox.

If you need to do a quick search for Send-As permissions on a limited set of mailboxes, you will find that the Get-ADPermission cmdlet is pretty simple and straightforward:

Get-ADPermission <mailbox> | ?{$_.ExtendedRights -like "*Send-As*"}

If you are dealing with a large number of mailboxes (e.g. several thousand), using the Get-ADPermission cmdlet can be quite limiting. During recent testing, I noticed the command took anywhere from 2-8 seconds per mailbox to complete. In this particular scenario, I was helping a customer move user accounts from their (old) account forest into the new resource forest.

As part of the process, we would enumerate all mailbox permissions (including Send-As) and check whether any of them were assigned to a user account in the account forest. However, because the source environment had tens of thousands of mailboxes, the Get-ADPermission approach was not feasible.

Normally, querying AD is not a problem. If you've ever written an LDAP query, you probably noticed that most of them complete within several seconds, depending on the result set size, of course. Either way, talking directly to AD should be a lot faster. Given that Send-As permissions are assigned to the user account in AD, I figured the Get-ACL cmdlet would be best suited for the job.

The first thing to keep in mind is that, for easy processing, you should change your current location in PowerShell to the Active Directory drive:

Import-Module ActiveDirectory
Set-Location AD:

Next, you can run the following command to get the ACL for an object. Notice how I'm using the distinguishedName of the object. Although there are other ways to quickly get the DN for an object, I preferred using the Get-Mailbox cmdlet because I had to run it earlier in the script anyway:

$mbx = Get-Mailbox <mailbox>
(Get-ACL $mbx.distinguishedName).Access

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a53-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\Everyone
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a54-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\UserA
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

ActiveDirectoryRights : WriteProperty
InheritanceType       : All
ObjectType            : 934de926-b09e-11d2-aa06-00c04f8eedd8
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\Exchange Servers
IsInherited           : False
InheritanceFlags      : ContainerInherit
PropagationFlags      : None

The result of the cmdlet will look similar to what you see above. For brevity, I've omitted some of the results. Nonetheless, it should give you a good picture of what to expect.

As you can see from the results, there are a LOT of entries on each object. Because I was solely interested in the Send-As permission, I decided to filter the results based on the ActiveDirectoryRights attribute. Given that Send-As is an ExtendedRight, I used the following:

$mbx = Get-Mailbox <mailbox>
(Get-ACL $mbx.distinguishedName).Access | ?{$_.ActiveDirectoryRights -eq "ExtendedRight"}

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a53-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\Everyone
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a54-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\UserA
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

So far, so good. However, none of the entries mention "Send-As" anywhere. As it turns out, the ObjectType attribute contains a GUID which refers to the actual permission. AD stores information about extended rights in the configuration partition, in a container unsurprisingly called "Extended-Rights". Using ADSIEdit, you can navigate to the Send-As extended right and look at its rightsGuid attribute. I've checked in various labs and environments, and the GUID always turns out to be ab721a54-1e2f-11d0-9819-00aa0040529b.

ExtendedRight
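If you prefer not to browse with ADSIEdit, you can also pull the GUID from the configuration partition with PowerShell. This is a minimal sketch, assuming the ActiveDirectory module is available:

# Look up the rightsGuid of the Send-As extended right in the configuration partition.
$configNC = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -SearchBase "CN=Extended-Rights,$configNC" -LDAPFilter "(displayName=Send-As)" -Properties rightsGuid |
    Select-Object Name, rightsGuid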

Now that we have this information, it is very easy to filter the results from the Get-ACL cmdlet:

$mbx = Get-Mailbox <mailbox>
(Get-ACL $mbx.distinguishedName).Access | ?{($_.ActiveDirectoryRights -eq "ExtendedRight") -and ($_.objectType -eq "ab721a54-1e2f-11d0-9819-00aa0040529b")}

ActiveDirectoryRights : ExtendedRight
InheritanceType       : None
ObjectType            : ab721a54-1e2f-11d0-9819-00aa0040529b
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : EXCHANGELAB\UserA
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

While benchmarking this approach, we were able to return results for approximately 1-5 mailboxes per second. Quite an improvement over before!

The one caveat is that Get-ACL does not return the same result set (in terms of which attributes are shown) as the Get-ADPermission cmdlet. If all you care about is the permission itself, or if you already have all the other information for the mailbox (e.g. because you previously ran Get-Mailbox), then the speedier approach using Get-ACL might just offer all you need.
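If you need to run this against a large set of mailboxes, the same logic is easy to wrap in a loop. The following is a minimal sketch, assuming the ActiveDirectory module is imported, your current location is the AD: drive, and Get-Mailbox is available in the same session:

# Collect all explicit Send-As grants for every mailbox.
$sendAsGuid = "ab721a54-1e2f-11d0-9819-00aa0040529b"

Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    $mbx = $_
    (Get-ACL $mbx.DistinguishedName).Access |
        Where-Object { ($_.ActiveDirectoryRights -eq "ExtendedRight") -and ($_.ObjectType -eq $sendAsGuid) } |
        ForEach-Object {
            [PSCustomObject]@{
                Mailbox = $mbx.Name
                Trustee = $_.IdentityReference
            }
        }
}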

Blog How-To's PowerShell

Rewriting URLs with KEMP LoadMaster

Earlier this month, I wrote an article on how you can use a KEMP LoadMaster to publish multiple workloads onto the internet using only a single IP address, through a feature called content switching.

Based on the principle of content switching, KEMP LoadMasters also allow you to modify traffic while it’s flowing through the device. More specifically, this article will show you how you can rewrite URLs using a Load Master.

Rewriting URLs is quite common. The goal is to 'send' people to a different destination than the one they are trying to reach. This could be the case when you are changing your domain name, or perhaps as part of a merger, where you want the other company's traffic to be redirected automatically to your website.

Let's start with a simple example we are all familiar with: you want to redirect traffic from the root of your domain to /owa. The goal is that when someone enters e.g. webmail.domain.com, that person is automatically redirected to webmail.domain.com/owa. Although Exchange 2013 already redirects traffic from the root to the /owa virtual directory out of the box, the idea here is to illustrate how you can do it with the KEMP instead of IIS. As a result, you could just as easily send someone from one virtual directory (e.g. /test) to another (e.g. /owa).

How it works – Mostly

Just as in my previous article, everything revolves around content switching. However, in addition to content matching rules, KEMP LoadMasters allow you to define so-called header modification rules, as shown in the following screenshot:

image

By default, it suffices to create such a header modification rule and assign it to a virtual service. By doing so, you will rewrite traffic to the root (or the /test virtual directory) and people will end up at the /owa virtual directory.

To create a header modification rule, perform the following steps:

  1. Log in to your LoadMaster
  2. Click on Rules & Checking and then Content Rules
  3. On top of the Content Rules page, click Create New…
  4. Fill in the details as shown in the following screenshot:

    image

  5. Click Create Rule.

Now that you have created the header modification rule, you need to assign it to the virtual service on which you want to use it:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules.
  4. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.

  5. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    image

That’s it. All traffic that hits the LoadMaster on the root will now automatically be rewritten to /owa.

How it works with existing content rules (SUBVSs)

When you are already using content rules to ‘capture’ traffic and send it to a different virtual directory (as described in my previous article), the above approach won’t work – at least not entirely.

While the creation of the header modification rule and the addition of that rule to the virtual service remain exactly the same, there is an additional task you have to perform.

First, let me explain why. When you are already using content rules, the LoadMaster will use them to evaluate traffic in order to make the appropriate routing decision. As a result, these content rules are processed before the header modification rules. However, when the LoadMaster doesn't find a match in one of its content matching rules, it will not process the header modification rule – at least not when you are trying to modify a virtual directory. As I will describe later in this article, it will still process host-header modifications, though.

So, in order for the LoadMaster to perform the rewrite, the initial destination has to be defined on the virtual directory you want to redirect traffic to. Let's take the following example: you are using content rules to direct traffic from a single IP address to different virtual directories. At the same time, you want traffic for a non-existent virtual directory (e.g. /test) to be redirected to /owa.

First, you start off again by creating a header modification rule. The process is the same as outlined above. The only thing that changes is that the match string will now be "/^\/test$/" instead of "/^\/$/":

image
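If you want to sanity-check the two match strings before applying them, you can test the expressions (without the surrounding "/" delimiters) against a few sample paths using .NET regex in PowerShell. The LoadMaster uses its own regex engine, so treat this purely as an approximation:

# Quick approximation of how the two match strings behave against sample request paths.
'/', '/test', '/owa', '/test/sub' | ForEach-Object {
    [PSCustomObject]@{
        Path        = $_
        MatchesRoot = $_ -match '^/$'
        MatchesTest = $_ -match '^/test$'
    }
}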

Next, create a new content rule, but this time create a content matching rule as follows:

image

Next, we’ll make the changes to the virtual service:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules.
  4. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.

  5. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    image

Now, we still need to add the content matching rule to the /owa SubVs:

  1. In the properties of the Virtual Service, go down to SubVSs and click the button in the Rules column, for the OWA SubVS:

    image

  2. From the drop-down list, select the rule you created earlier (“test”) and click Add:

    image

  3. You should now have 2 rules in the SubVS:

    image

That’s it. If you now navigate to the /test virtual directory, the traffic will automatically be rewritten to /owa.

How about if I want to redirect more than a single virtual directory to /owa?

In theory, you would need to create a header modification rule for each of the virtual directories, as well as a content matching rule. However, if you are going to redirect multiple virtual directories to /owa, you can also use the "default" content rule, which acts as a catch-all. As a result, instead of creating and adding a separate content matching rule for each virtual directory, you just create a header modification rule for each of them and add the default content rule to the /owa virtual directory, as shown below:

image

What about rewriting host names?

Rewriting host names is handled a tad differently than e.g. virtual directories. Unlike the latter, host header modifications are processed before the content matching rules for the Virtual Services. As a result, it suffices to create a modification rule and apply it to the virtual service. To create a Host Header modification rule, do the following:

  1. Go to Rules & Checking and then Content Rules
  2. Click Add New… and create the rule as follows:

    image

Once you have created the rule, add it to the HTTP Header Modification rules on the virtual service and you're done. Traffic that hits test.domain.com will now automatically be rewritten to webmail.domain.com. It's as easy as that.

Conclusion

Rewriting URLs with KEMP’s LoadMaster is relatively easy. You only have to watch out when you are already using content switching rules as I described earlier.

Until later,

Michael

Blog Exchange How-To's

Publishing multiple services to the internet on a single IP address using a KEMP Load Balancer and content switching rules

A few days ago, someone suggested I write this article as it seems many people are struggling with this 'problem'. In fact, the solution I'm going to explain below is the answer to a problem typically found in "home labs", where the internet connection doesn't always have multiple IP addresses. That doesn't mean it's only valid for home use or testing scenarios. Given that IPv4 addresses are almost depleted, it's a good thing not to waste these valuable resources if it isn't necessary.

Basically, what I’m going to explain is how you can use a KEMP Load Master to publish multiple services/workloads to the internet using only a single (external) IP address. In the example below, I will be publishing Exchange, Office Web Apps and Lync onto the internet.

The following image depicts what the network in my example looks like. It also shows the different domain names and IP addresses that I'm using. Note that – although I perfectly could – I'm not connecting the Load Master directly to the internet. Instead, I mapped an external IP address from my router/firewall to the Load Master:

image

How it works

The principle behind all this is simple: whenever a request 'hits' the Load Master, it reads the host header that was used to connect and uses it to determine where to send the request. Given that most of the applications we are publishing use SSL, we have to decrypt content at the Load Master, which means we will be configuring the Load Master at Layer 7. Because we need to decrypt traffic, there's also a 'problem' we need to work around: the workloads we are publishing to the internet all use different host names, but because we only use a single Virtual Service, we can assign only a single certificate to it. Therefore, you have to make sure that the certificate you configure on the Load Master either includes all published host names as a Subject (Alternative) Name, or is a wildcard certificate, which automatically covers all hosts for a given domain. The latter option is not valid if multiple different domain names are involved.

How the Load Master handles this 'problem' is not new – far from it. The same principle is used in every reverse proxy, and it is also how our beloved – but sadly discontinued – TMG used to handle such scenarios. Note that you do not need to enable the Load Master's ESP capabilities for this.
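Before you start, it can be useful to double-check which host names your certificate actually covers. This is just a convenience sketch using .NET on any Windows machine; the file path and password below are placeholders:

# List the Subject Alternative Names contained in a certificate file.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 "C:\certs\wildcard.pfx", "P@ssw0rd"
$cert.Extensions |
    Where-Object { $_.Oid.FriendlyName -eq "Subject Alternative Name" } |
    ForEach-Object { $_.Format($true) }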

Step 1: Creating Content Rules

First, we will start by creating the content rules which the Load Master will use to determine where to send the requests to. In this example we will be creating rules for the following host names:

  • outlook.exchangelab.be (Exchange)
  • meet.exchangelab.be (Lync)
  • dialin.exchangelab.be (Lync)
  • owa.exchangelab.be (Office Web Apps)
  1. Log in to the Load Master, navigate to Rules & Checking and click Content Rules
  2. Click Create New…
  3. On the Create Rule page, enter the details as follows:

Repeat steps 2-3 for each domain name. Change the value for the field Match String so that it matches the domain names you are using. The final result should look like the following:

content rules
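For reference, and purely as an illustration (the exact values depend on your own domain names; the pattern style follows the Exchange content rule shown at the end of this article), the match strings for the four rules could look like this:

outlook    ^outlook.exchangelab.be*
meet       ^meet.exchangelab.be*
dialin     ^dialin.exchangelab.be*
owa        ^owa.exchangelab.be*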

Step 2: creating a new Virtual Service

This step is fairly easy. We will be creating a new virtual service which uses the internal IP address that is mapped to the external IP address. If you have already created a virtual service previously, you can skip this step.

  1. In the Load Master, click Virtual Services and then click Add New
  2. Specify the internal IP address which you have previously mapped to an external IP address
  3. Specify port TCP 443
  4. Click Add this Virtual Service

Step 3: Configuring the Virtual Service

So how does the Load Master differentiate between the different host headers? Content Rules. Content rules allow you to use Regular Expressions which the Load Master will use to examine incoming requests. If a match is found through one of the expressions, the Load Master will forward the traffic to the real server which has been configured with that content rule.

First, we need to enable proper SSL handling by the Load Master:

  1. Under SSL Properties, click the checkbox next to Enabled.
  2. When presented with a warning about a temporary self-signed certificate, click OK.
  3. Select the box next to Reencrypt. This will ensure that traffic leaving the Load Master is encrypted again before being sent to the real servers. Although some services might support SSL offloading (and thus not re-encrypting traffic), that is beyond the scope of this article and will not be discussed.
  4. Select HTTPS under Rewrite Rules.

Before moving to the next step, we will also need to configure the (wildcard) certificate to be used with this Virtual Service:

  1. Next to Certificates, click Add New
  2. Click Import Certificate and follow the steps to import the wildcard certificate into the Load Master. These steps include selecting a certificate file, specifying a password for the certificate file (if applicable) and setting an identifying name for the certificate (e.g. wildcard).
  3. Click Save
  4. Click “OK” in the confirmation prompt.
  5. Under Operations, click the dropdown menu VS to Add and select the virtual service.
  6. Now click Add VS

You've now successfully configured the certificate for the main Virtual Service. This will ensure the Load Master can decrypt and analyze traffic sent to it. Let's move on to the next step, in which we will define the "Sub Virtual Services".

Step 4: Adding Sub Virtual Services

While still on the properties page for the (main) Virtual Service, we will now add new 'Sub Virtual Services'. Having a Sub Virtual Service per workload allows us to define different real servers per SubVS, as well as a different health check. This is the key functionality that allows multiple different workloads to live under a single 'main' Virtual Service.

  1. Under Real Servers click Add SubVS…
  2. Click OK in the confirmation window.
  3. A new SubVS will now have appeared. Click Modify and configure the following parameters:
  • Nickname (makes it easier to differentiate from other SubVSs)
  • Persistence options (if necessary)
  • Real Server(s)

Repeat the steps above for each of the workloads you want to publish.

Note: a word of warning is needed here. Typically, you would add your ‘real servers’ using the same TCP port as the main Virtual Service, being TCP 443, in this case. However, if you are also using the Load Master as a reverse proxy for Lync, you will need to make sure your Lync servers are added using port 4443 instead.

Once you have configured the Sub Virtual Services, you still need to assign one of the content rules to each of them. Before you're able to do so, you first have to enable Content Switching.

Step 5: enabling and configuring content rules

In the properties of the main Virtual Service, under Advanced Properties, click Enable next to Content Switching. You will notice that this option only becomes available after adding your first SubVS.

image

Once Content Switching is enabled, we need to assign the appropriate rules to each SubVS.

  1. Under SubVSs, click None in the Rules column for the SubVS you are configuring. For example, if you want to configure the content rule for the Exchange SubVS:
  2. On the Rule Management page, select the appropriate Content Matching rule (created earlier) from the selection box and then click Add:
  3. Repeat these steps for each Sub Virtual Service you created earlier

Testing

You can now test the configuration by pointing your browser to one of your published services. If all is well, you should now be able to reach Exchange, Lync and Office Web Apps – all using the same external IP address.

As you can see, there's a fair amount of work involved, but all in all it's relatively straightforward to configure. In this example we published Exchange, Lync and Office Web Apps, but you could just as easily add other services. Especially with the many load balancing options you have with Exchange 2013, you could for instance use multiple additional Sub Virtual Services for Exchange alone. To get you started, here's what the content rules for that would look like:

content rules exchange

Note: if you are defining multiple Sub Virtual Services for e.g. Exchange, you don’t need to use/configure a Sub Virtual Service which uses the content rule for the Exchange domain name “^outlook.domain.com*”. If you still do, you’d find that – depending on the order of the rules – your workload-specific virtual services would remain unused.

I hope you enjoyed this article!

Until later,

Michael

Blog Exchange 2013 How-To's Office 365 Uncategorized

My journey to a new home lab…

Almost two years ago, I bought myself what I thought at the time was a "real" home lab. The components I bought back then were similar to what MVP Jeff Guillet had described on his blog: http://www.expta.com/2013/04/updated-blistering-fast-hyper-v-2012.html

That server consisted of the following components:

  • Intel Core i7-2600K
  • Asus motherboard (P8Z77-VLX)
  • 32GB DDR RAM
  • 3x 120GB SSD
  • 1x 1TB Western Digital 7200rpm SATA
  • Antec NSK4400 (400W PSU)

Additionally, I also use my “main desktop” machine to run a few VMs from:

  • Intel Core i5-3550
  • Gigabyte Motherboard (Z68AP-D3)
  • 16GB DDR3 RAM
  • 2x 120GB SSD
  • 1x 2TB Western Digital 7200rpm SATA
  • Antec NSK4400 (400W PSU)

From a networking point of view, I bought a relatively cheap HP ProCurve 1810-24G which gives me more than enough ‘power’ to tie everything together. What I also liked about this switch is that it’s relatively easy to configure, low-noise and supports e.g. VLAN tagging.

Challenges

Over time, and especially while preparing for the recently discontinued MCSM training, I started to experience some 'problems'. Under load, my server would randomly freeze. While this usually isn't much of a problem, I sometimes dare to run my demos from these machines. The last time I did that, it froze in the middle of a demo! Those who were present at TechDays in Belgium will remember what that was like…

Given that very unpleasant experience, I decided to build a new "home lab". From a requirements point of view, this 'new' lab has to be able to accommodate all the VMs I want to run (+/- 60) and preferably be more stable. To do so, I definitely need more memory (CPU wasn't an issue) and more storage. I found that I was able to run quite a lot of VMs from the limited amount of storage I currently have (360GB of SSD) by using deduplication in Windows Server 2012. That is also the reason I decided to keep using SSDs, which ultimately cost me the most.
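For reference, enabling deduplication on a volume only takes a few commands. This is a minimal sketch for Windows Server 2012 (the drive letter is a placeholder); keep in mind that in 2012, deduplication only processes closed files, so running VMs only benefit once they are shut down or copied:

# Install the feature and enable deduplication on the volume holding the lab VMs.
Install-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume "D:"
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus -Volume "D:"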

From a network point of view, I'll also be looking to replace (or complement) my current switch with a high-performance router. In my persistent lab environment, I have a few subnets which are currently routed through a virtual machine (WS 2012). In the future, I would like this to be done by a Layer 3 switch. I've done some research and found that HP's 1910 series offers up to 32 static routes while remaining relatively cheap. Another option would be to use one of MikroTik's RouterBoard devices… Still not sure what to do…

The process

When I first started looking for solutions, I ended up with a few possibilities. One of them was to move my lab out of my house and either host it in a datacenter or use a platform like Amazon Web Services or – preferably – Windows Azure to build my lab.

The problem with either of those is that – given the number of VMs I'm running at times – this can become quite a costly business. Even though it would be the most elegant solution of all, it's just not something I can afford.

Next, I looked at moving my hardware to a datacenter. Through a colleague of mine, I was able to rent 1U of rack space at +/- 35 euro a month (which is relatively cheap). While from a connectivity point of view this was an awesome idea, I had to find hardware that would fit into 1 or 2 units. For this, I came up with 2 solutions:

    • Modified Mac Minis
    • Custom-built 1U servers

Unfortunately, both solutions turned out to be inefficient or too expensive. I could easily fit 4 Mac Minis into 1U of rack space, but they can only hold 16GB of RAM and – even when adding a second disk myself – each one would cost up to 850 EUR. The alternative of building a server myself based on SuperMicro hardware (which is decent quality for a fair price) seemed doable, except for trying to fit as much as possible into 1U. Basically, I ended up with the following hardware list – which I very nearly bought:

  • Supermicro 1027R-73DAF
  • 1x Intel Xeon e5-2620
  • 6x Samsung 250GB EVO SSD
  • 8x Kingston 16GB DDR3 ECC

The problem here is cost. Next to buying the hardware (+/- 2700 EUR), I would also need to take into account the monthly recurring cost for the datacenter. All in all, a little overkill for what I want to get out of it.

The (final) solution

So I started looking, AGAIN, for something that combined the best of both worlds, and ended up building my server with a mix of server and desktop hardware. In the end, I decided to host the hardware at home (despite the advantages of putting it in a DC) and request a new VDSL line which gives me a bunch of external IP addresses to use:

    • Antec Three Hundred Two
    • Corsair 430Watt Modular Power Supply
    • Asus P9X79 Pro
    • Intel Xeon E5-2620
    • 64GB DDR3 1333Mhz
    • 4x Samsung 840 EVO 250GB SSD
    • 1x Western Digital “Black” 1TB 7200rpm

The total price for this machine was approximately 1700 EUR, which isn’t too bad given what I get in return.

The reason I chose the Intel Xeon CPU and not a regular i7 is simple: even though some motherboards claim to support 64GB of RAM with most i7 CPUs, there's only a single one that actually addresses more than 32GB (the i7-3930K). Its price is – compared to the Xeon – insanely high, which is why I went for the latter.

Because I'm using a "regular" motherboard (the P9X79) instead of e.g. one from SuperMicro, I was able to drive down the cost of memory as well. Even though I'm now limited to 'only' 64GB per host, the additional cost of ECC RAM and the SuperMicro motherboard wasn't worth it, in my humble opinion.

The future?

My ultimate goal is to end up with 2 (maybe 3) of these machines and leverage Windows Server 2012 R2’s new capabilities with regards to storage and networking (enhanced deduplication, SMB multichannel, storage spaces, …). This would also allow me to configure the Hyper-V hosts in a cluster which ‘unlocks’ some testing scenarios.

As such, I would like to get to the following setup (which will take months to acquire, for sure!):

image

I'm still in doubt about how I will do networking, though. Given that 10GbE is becoming cheaper by the day (Netgear has a really affordable 8-port 10GbE switch), I might end up throwing that into the mix. It's not that I need it, but at least it gives me something to play with and get familiar with.

I'm most likely to transform my current Hyper-V host into the iSCSI target over time. But let's start at the beginning.

Once I have received the hardware, I’ll definitely follow up with a post of how I put the components together and my (hopefully positive) first impressions of it. So make sure to look out for it!

Cheers,

Michael

Blog How-To's

Exploring Exchange Server Component States

As described in a previous article, an Exchange 2013 server can be placed into a sort of maintenance mode, not only by temporarily pausing a DAG node from the cluster, but also by putting Exchange components on that server into an inactive state using the Set-ServerComponentState cmdlet.

The most obvious reason why a component is in an inactive state is because someone put it into that state as part of a maintenance task. However, there can be several other reasons why a component is inactive. The most common reason is when the Health Management service (part of Managed Availability) has taken a component offline because it was deemed unhealthy.

The tricky part comes when one or more "requesters" have put a component into an Inactive state, which can lead to confusing situations. There are 5 requesters that can switch the state of a component:

  • HealthAPI
  • Maintenance
  • Sidelined
  • Functional
  • Deployment

Consider the following scenario. As part of an upgrade to the latest CU, you decide to put a server into maintenance mode using the Set-ServerComponentState cmdlet. After the upgrade, you take the server back "online" by changing the state of the components back to "Active". However, when running the Get-ServerComponentState cmdlet, you notice that one or more components are still inactive… You investigate the issue, and it turns out that the component was already in an inactive state before YOU put it into an inactive state. So why didn't the state change AFTER you set it back to active?

The answer is pretty simple: only the requester that put a component into a certain state can put it back into an active state. So when, as part of your maintenance task, you put a component into maintenance using "Maintenance" as the requester, you were actually flagging the component inactive for a second time – and setting it back to active with that requester only clears your own entry, not the one left by the original requester.
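To illustrate (the server name and component below are placeholders):

# Put a component into maintenance and bring it back, always using the same requester.
Set-ServerComponentState EX01 -Component HubTransport -State Inactive -Requester Maintenance

# ... perform the maintenance ...

Set-ServerComponentState EX01 -Component HubTransport -State Active -Requester Maintenance

# If the component still shows as Inactive here, another requester (e.g. HealthAPI or
# Deployment) has its own inactive entry, and only that requester can clear it.
Get-ServerComponentState EX01 -Component HubTransport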

In fact, every time someone (or something) makes a component inactive, an entry gets added to the local server’s registry in the following location:

HKLM\SOFTWARE\Microsoft\ExchangeServer\v15\ServerComponentStates\<componentname>

image
this image displays the different entries for the FrontEndTransport component in the local Exchange server’s registry

Each entry includes the following information, separated by a colon: [Unknown Value]:[State]:[TimeStamp]
Looking at the picture, you can see that the requester "Maintenance" put this component into an active state at the given timestamp. FYI, the timestamp is saved in a binary format.
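If you ever need to decode such a timestamp manually, the conversion is a one-liner (the numeric value below is just a made-up example; paste the timestamp portion of a real registry entry instead):

[timezone]::CurrentTimeZone.ToLocalTime([datetime]::FromBinary(635000000000000000))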

Now consider the following image:

image

As you can see, the component has multiple entries. Luckily, all of them show that the component is Active. However, if one of the entries showed the component as inactive, it would effectively be inactive. Even if a more recent entry from another requester placed the component into an active state, it would remain inactive until the requester that made it inactive switches it back to active.

Why is that, you might wonder? Simply because there are cases where you don't want someone to override a component state set by someone or something else. This could be the case when someone has placed a server into maintenance mode (requester = Maintenance) and, while the server is in maintenance, someone updates the Exchange server to the latest version. Exchange setup will actually place all components into an inactive state prior to starting the upgrade (requester = Deployment) and switch them back after the upgrade completes. If this action were to override the component state set by "Maintenance", the server would effectively become operational again – something you might not want in this case.

Script to query Component States

The output of the Get-ServerComponentState cmdlet does not show who placed a component into an inactive state, nor does it show whether there's more than one entry for that component. Of course, you could look in the local server's registry each time. For convenience, I put together a little script that queries the local registry and outputs the information on-screen:

image

Below you’ll find the code for the script. All you need to do is save it as a .ps1 file and run it from the Exchange Server that you want to query. Alternatively, you can download the script from here.

The current version of the script is a bit rough, but it works 🙂
In a future version, I’ll try to add remoting and clean up the code a bit by adding comments…

# Enumerate all component state keys in the local Exchange server's registry.
$components = Get-ChildItem HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\ServerComponentStates\ -Recurse | Select-Object PSPath

foreach ($component in $components) {

    # Each value under the component key represents one requester's entry.
    $componentstates = (Get-ItemProperty $component.PSPath | Select-Object * -ExcludeProperty PS*) | Get-Member -MemberType NoteProperty

    $i = 0
    do {
        # Each entry has the format [Unknown Value]:[State]:[TimeStamp]
        $componentstate = ($componentstates[$i].Definition).Split("=")
        $statebreakdown = $componentstate[1].Split(":")

        switch ($statebreakdown[1]) {
            1 { $componentActualState = "Active" }
            0 { $componentActualState = "Inactive" }
        }

        # The timestamp is stored in binary format; convert it to local time.
        $componentActualTime = [timezone]::CurrentTimeZone.ToLocalTime([datetime]::FromBinary($statebreakdown[2]))

        $obj = New-Object PSObject
        $obj | Add-Member -MemberType NoteProperty -Name Component -Value $($component.PSPath.Split("\"))[7]
        $obj | Add-Member -MemberType NoteProperty -Name Requester -Value $componentstates[$i].Name
        $obj | Add-Member -MemberType NoteProperty -Name State -Value $componentActualState
        $obj | Add-Member -MemberType NoteProperty -Name TimeStamp -Value $componentActualTime
        $obj

        $i++
    }
    while ($i -lt $componentstates.Count)

}

What’s next?

In a follow-up article, I’ll discuss the Server Component States and why the entries also exist in Active Directory.

Exchange 2013 How-To's

Finally! First Exchange 2013 Sizing Information released.

Yesterday, Microsoft's CXP team (Customer Experience Team) released a lengthy blog post containing practical information on sizing Exchange 2013 – a moment we have all been waiting for since the product was released in October of last year.

With this new information, it finally becomes possible to create a decent design and migration approach.

Unfortunately, there's still no trace of the Mailbox Server Role Requirements Calculator, which is – let's face it – the reference tool for properly sizing an Exchange Server environment. The only mention in the article states that it's coming later this quarter. Looks like we'll have to be a little more patient, won't we?

Nonetheless, with the information from the article, you should be well on your way, as it contains everything you need to properly size a new Exchange 2013 environment. Please keep in mind that this is Microsoft's first guidance on this topic, and it is likely to change over time as customers – and Microsoft – gain more experience with the product in real-life deployments.

Over the course of the next weeks, I hope to provide you with some comparative figures between Exchange 2010 and Exchange 2013, but for now it looks like Exchange 2013 is quite hungry for memory and CPU. From an architectural point of view, however, that's not all that surprising. After all, memory and CPU are relatively cheap these days.

Stay tuned for more info!

To read the original article containing the different calculations, have a look at the following article:

http://blogs.technet.com/b/exchange/archive/2013/05/06/ask-the-perf-guy-sizing-exchange-2013-deployments.aspx

Blog Exchange 2013 How-To's

Create a function to connect to and disconnect from Exchange Online

[Update 26/03/2013] I wanted to share a quick update to the script with you. Over the past few weeks, I have been hitting an issue at some customers that use an outbound authenticating proxy, which prevented me from connecting to Exchange Online using the functions from my profile. As such, I have tweaked the script a bit to include a switch (-ProxyEnabled) that, when used, will make PowerShell authenticate the session against the proxy.

The new script now looks like this:

function Connect-ExchangeOnline{
    [CmdLetBinding()]
    param(
        [Parameter(Position=0,Mandatory=$false)]
        [Switch]
        $ProxyEnabled
    )

    if($ProxyEnabled){
        $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Credential (Get-Credential) -Authentication Basic -AllowRedirection -SessionOption (New-PSSessionOption -ProxyAccessType IEConfig -ProxyAuthentication Basic)
    }
    else{
        $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Authentication Basic -AllowRedirection -Credential (Get-Credential)
    }

    Import-PSSession $Session
}

function Disconnect-ExchangeOnline {
    Get-PSSession | ?{$_.ComputerName -like "*outlook.com"} | Remove-PSSession
}

[Original Post]

Office 365 allows you to connect remotely to Exchange Online using PowerShell. However, if you had to type in the commands to connect every time, you would lose quite some time:

$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/PowerShell -Authentication Basic -Credential (Get-Credential) -AllowRedirection

Import-PSSession $session

Equally, to disconnect, you'd have to type in the following command each time:

Get-PSSession  | ?{$_.ComputerName -like "*.outlook.com"} | Remove-PSSession

However, it is relatively easy to wrap both commands in functions which you can then add to your profile:

Function Connect-ExchangeOnline{
   $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Authentication Basic -AllowRedirection -Credential (Get-Credential)
   Import-PSSession $session
}

Function Disconnect-ExchangeOnline{
   Get-PSSession  | ?{$_.ComputerName -like "*.outlook.com"} | Remove-PSSession
}

To add these functions to your PowerShell profile, simply copy-paste them into your profile. To find out where your profile file is located, type the following:

PS C:\> $profile
C:\Users\Michael\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
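If the profile file doesn't exist yet, you can create it first with a quick snippet like this:

# Create the profile file (and any missing folders) if it doesn't exist, then open it.
if (!(Test-Path $profile)) { New-Item -ItemType File -Path $profile -Force }
notepad $profile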

To start using the functions, all you have to do is to call them from the PowerShell prompt:

PS C:\> Connect-ExchangeOnline

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

WARNING: Your connection has been redirected to the following URI:
https://pod51014psh.outlook.com/PowerShell-LiveID?PSVersion=3.0

PS C:\> Disconnect-ExchangeOnline

Note: there is no output for the Disconnect-ExchangeOnline function.

How-To's Office 365 PowerShell