Azure AD Synchronization HTML Report

In 2013, Exchange Server MVP Mike Crowley wrote a script which would interactively report on the Office 365 Directory Synchronization tool. In 2014, Mike and I worked to update the script so that an HTML report would be generated. This would allow you to schedule the script and have the output emailed to you without the need to run the script interactively.

Before you can run the script, you will have to install the SQL Server PowerShell module on the AADSync machine. DirSync had this installed by default, but it seems that AADSync does not. To install the SQLPS module, you must install the following components separately:

  1. Microsoft® System CLR Types for Microsoft® SQL Server® 2012
  2. Microsoft® SQL Server® 2012 Shared Management Objects
  3. Microsoft® Windows PowerShell Extensions for Microsoft® SQL Server® 2012

The binaries, along with installation instructions, can be found on the following page: http://www.microsoft.com/en-us/download/details.aspx?id=29065

Once you have installed the components, run the following command from the AADSync server and verify that the SQLPS module is listed:

Get-Module -ListAvailable

Once you have verified the SQLPS module is installed and available, you can run the script.
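If you want to script that check (for instance as part of a scheduled run), a minimal sketch could look like this:

if (Get-Module -ListAvailable -Name SQLPS) {
    # SQLPS is present; load it (it uses unapproved verbs, hence -DisableNameChecking)
    Import-Module SQLPS -DisableNameChecking
}
else {
    Write-Warning "SQLPS module not found. Install the SQL Server 2012 components listed above."
}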

This time around I have decided to publish the script through GitHub. You can download it from HERE. Alternatively, the script is also available from the TechNet Script Gallery, HERE.
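As mentioned above, the report lends itself well to being scheduled. Below is a minimal sketch using the Task Scheduler cmdlets (available on Windows Server 2012 and later); the path C:\Scripts\AADSyncReport.ps1 is hypothetical, so adjust it to wherever you saved the script:

# Run the report every morning at 07:00 under the current account
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\AADSyncReport.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At "07:00"
Register-ScheduledTask -TaskName 'AADSync HTML Report' -Action $action -Trigger $trigger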

Please use the script for what it’s worth, and always test in a lab first. Comments/feedback and feature requests are always welcome!

Blog Office 365 PowerShell

Microsoft manages to ship yet another broken Cumulative Update…

A few days ago, Microsoft released Cumulative Update 6 for Exchange 2013 to the world. There used to be a time when Exchange Server updates were fairly safe. However, like pretty much every other Cumulative Update for Exchange 2013, this one too includes bugs which break functionality in one way or another. One could say it’s becoming painful for Microsoft; I’m starting to believe it’s more of a joke.

Exchange Server MVP Jeff Guillet was the first to report the issue. As it turns out, the Hybrid Configuration Wizard in CU6 runs just fine, but some features (like initiating a mailbox move from the on-premises EAC, or the ability to switch between the on-premises and cloud EAC) no longer work. Although the scope of the breakage is somewhat limited (it only applies to customers in a hybrid deployment), one could argue it’s an important focus area for Microsoft – especially given that it’s cloud-related. Microsoft has been trying really hard (with success, may I add) to promote Office 365 and get customers to onboard to “the service”. As such, I find it really surprising that this is the umpteenth issue related to hybrid deployments in such a short time: in Cumulative Update 5 the Hybrid Configuration Wizard was broken, and now there’s this.

Needless to say, you have been warned about deploying Cumulative Updates straight into production. Pretty much every MVP who announced the Cumulative Update remarked that you should test the update before deploying it. I would say this is a general best practice anyway, but given the history of recent Exchange Server updates, I wouldn’t dare deploy one without thoroughly testing it first.

This brings me to another point: what happened to testing, Microsoft? I understand that it’s impossible to test every customer scenario out there, but how do fairly obvious pieces of functionality like these manage to slip through the cracks? If it were a one-time event, I could understand. But there’s a clear trend developing here.

Running a service like Office 365 is not easy. Moreover, the cadence at which the service evolves can be really punishing. On-premises customers have been struggling to keep up with the updates being released in the cloud, but it seems that Microsoft itself is having a hard time keeping up too.

On a final note, I’m wondering what customers with a hybrid deployment should do. According to Microsoft support guidelines, hybrid customers are requested to stay current with Exchange Server updates. But given that this is now the second consecutive update causing problems, one might start to wonder whether it isn’t better to stay on CU4, the last CU which did not have any hybrid issues…

I imagine that Microsoft is working hard on a fix for this issue, even during a holiday weekend… Let’s wait and see what happens early next week!

Until then, I would hold off on deploying CU6 and revert to using CU5 with the interim update which fixes the HCW bug or – if you don’t like IUs – stick to CU4/SP1.

Blog Exchange 2013 Hybrid Exchange News Office 365

Some thoughts on using the Exchange Integration Pack for Orchestrator…

Recently, a customer decided to start using System Center Orchestrator to automate some of the recurring tasks around the management and support of their Exchange organization. More specifically, they wanted to create a custom interface which the support organization would get access to in order to request specific actions like mailbox moves, message tracking log searches, etc. This would allow them to execute some (basic) tasks themselves without needing any Exchange permissions or being forced to use PowerShell or the Exchange Management Console.

In fact, Orchestrator is very powerful when it comes to automating things like this. The Integration Packs for products like Exchange, Active Directory and SharePoint allow you to easily develop such solutions.

What follows below are merely some thoughts I had while building the solution together with a colleague of mine. I’m in no way an expert in the use of Orchestrator, but perhaps that’s exactly why this information might be interesting to you.

Before diving into the quirks which may (or may not) be “by design”, let’s have a look at what we were trying to build.

The solution

The customer wanted a custom interface (what it needed to be was not specified) which would allow members of the Service Desk to request message tracking information without the need for an Exchange administrator to intervene. After some debate on whether or not PowerShell should be leveraged to develop such a GUI, we decided to move forward using SharePoint lists. SharePoint lists are easy to set up and use, and because SharePoint was already heavily used within this organization, it seemed like the logical step.

In order not to complicate things, we decided that we would ask the support engineer for the following information:

  • Sender Email Address
  • Recipient Email Address
  • Date + Time (hour)

To enter the information into the SharePoint list, a custom form was created which also took care of data validation. As such, the engineer would be able to enter a sender address only, a recipient address only, or both. The date + time are always required and need to be in a specific format (yyyy-mm-dd). The reason we chose to do data validation in the form is two-fold: first, it’s extremely easy to do; secondly, it would save us quite a bit of code later on when building the solution in Orchestrator.

The date + hour indicate for what time the message tracking logs are requested. We found that 95% of all requests fell within a 24-hour timespan. That’s why the code we built (more about that later) looks in the message tracking logs from 12 hours before until 12 hours after the indicated time.

Whenever an engineer enters a new item in the list, it automatically gets processed by Orchestrator, which fetches the information from the message tracking logs and writes it to a CSV file that is attached to the list item in SharePoint when processing finishes.

All in all, the requirements are pretty straightforward, which should make the final solution relatively easy to build. To meet these requirements, we needed the following components in Orchestrator:

  • SharePoint Integration Pack
  • Exchange Integration Pack
  • Active Directory Integration Pack

Below is a screenshot of the actual runbook which we built:

[Image: overview of the runbook we built in the Runbook Designer]

As you can see, it’s not the most advanced runbook (and maybe not the most efficient either), but it does the trick. You will notice that we built in some logging (in order to keep a history of what happened) and that we also do some data validation using the built-in capabilities of Orchestrator.

The quirks

Monitor List Items activity

The first problem we came across was using the “Monitor List Items” activity. When first adding this activity to the runbook, it automatically detects all the fields in the list and allows you to use them throughout the runbook as published data from that activity:

[Image: the published data from the “Monitor List Items” activity, listing the detected SharePoint fields]

The problem, though, is that – for some reason – it doesn’t pick up changes made to the list after the activity has been added to the runbook. If you designed your SharePoint list to have all the necessary fields from the get-go, this probably isn’t much of an issue. In our case, unfortunately, the customer tends to change his mind once in a while…
No matter what we tried, newly added fields would not show up in the published data for that activity. The only workaround is to remove the activity and add it again. While this might be workable for very simple runbooks, dealing with it in larger ones – like the one above – is particularly painful, especially because the published data from the activity is used in multiple places. This means that when you remove and re-add the activity, you have to go into every other activity where you referenced the data and ‘reconfigure’ it. Again, this is not a big problem if you know about it up front, but it can cost you some time if you decide to make changes to the SharePoint list. So, one word of advice: plan carefully!

Exchange Management Shell code

The Exchange Integration Pack allows you to directly run a bunch of Exchange PowerShell commands by adding the “Run Exchange Management Shell Cmdlet” activity. Pretty easy, right?! Not really. As it turns out, not everything you can do in the Exchange Management Shell works in this activity.

For instance, in the Exchange Management Shell, the following code would work beautifully:

Get-TransportServer | Get-MessageTrackingLog

Bizarrely enough, this won’t work here. Instead, you have to break the pipeline as follows:

$servers = Get-TransportServer
$servers | Get-MessageTrackingLog

My best guess is that the runbook activity itself is already executing a pipeline, which limits what you can do within it. Again, this isn’t a big problem, but it is something to keep in mind when developing code to be used in Orchestrator.

Here’s the code I (we) used:

#Define variables (published data subscribed from the "Monitor List Items" activity)

$Subject = "{MT-Subject from "Monitor List Items"}"
$Sender = "{MT-Sender from "Monitor List Items"}"
$Recipient = "{MT-Recipient from "Monitor List Items"}"
$Date = "{MT-Date from "Monitor List Items"}"
$Hour = "{MT-Hour from "Monitor List Items"}"

#Check the date input; abort with a message the next activity can display
try{
    [datetime]$inputDate = $Date + " " + $Hour + ":00:00"
    [datetime]$startDate = $inputDate.AddHours(-12)
    [datetime]$endDate = $inputDate.AddHours(+12)
}
catch{
    $returnMsg = "Input date format invalid"
    $returnMsg
    exit
}

#Convert the DateTimes back into a usable (string) format
$strStartDate = $startDate.ToString("yyyy/MM/dd HH:mm:ss")
$strEndDate = $endDate.ToString("yyyy/MM/dd HH:mm:ss")

#Construct the cmdlet dynamically, adding only the parameters that were filled in
$cmd = "Get-MessageTrackingLog -Server `$_.Name"

if($Subject -ne $null -and $Subject -ne ""){
    $cmd += " -MessageSubject `"$Subject`""
}
if($Sender -ne $null -and $Sender -ne ""){
    $cmd += " -Sender `"$Sender`""
}
if($Recipient -ne $null -and $Recipient -ne ""){
    $cmd += " -Recipient `"$Recipient`""
}

$cmd += " -Start `"$strStartDate`""
$cmd += " -End `"$strEndDate`""

#Workaround for the Orchestrator limitation described above: break the pipeline
#by collecting the servers first and piping them into the constructed command
$servers = Get-TransportServer
$result = $servers | %{
    Invoke-Expression $cmd
}

$result | Select EventID,Source,Sender,Recipients,MessageSubject,MessageID

Exchange Management Shell code output

This is probably the biggest quirk we had to deal with. When you execute code within Orchestrator, the output from that code is pushed to the next activity (provided you subscribed to the data, of course). In this case, if multiple results came back from the Exchange Management Shell activity, it would fire the following activity once for each result. To avoid this, and to bundle the results and send them to the next activity as a whole, you need to “flatten” the output first:

[Image: the activity’s “Flatten” option enabled, with results separated by the ¶ symbol]

That was easy, wasn’t it…? Well, that wasn’t really the problem either. When you take a closer look at the symbol used for separating the results, you’ll notice that we reverted to a rather rarely used symbol. There’s a specific reason for that.

Here’s why: for some (stupid) reason, the results which are returned by the EMS runbook activity look like the following:

“[EventId: RECEIVE]
[Source: SMTP]
[Sender: member@linkedin.com]
[Recipients:email@domain.com]
[MessageSubject: email, please add me to your LinkedIn network]
[MessageId: somethinghere@somethingelse]
,[EventId: SEND]
[Source: SMTP]
[Sender: member@linkedin.com]
[Recipients: email@domain.com]
[MessageSubject: email, please add me to your LinkedIn network]
[MessageId: somethinghere@somethingelse]”

No, this is not a weird way of displaying a hash table; it is literally one large string which is passed on. This makes it incredibly difficult to deal with. In fact, there is no way you can just push this into a CSV file and make something useful out of it without processing the output first.

A quick look on the internet brought me to the following article: http://blog.coretech.dk/jgs/scorchestrator-2012-ps-function-for-parsing-the-result-of-the-run-exchange-management-shell-cmdlet-activity/

The author of that article suggested parsing the output and converting it into something usable. That seemed fair enough to me, and I decided to re-use the code (although I needed to modify it to make it work in our case). It was added to a second activity called “Convert Output”, which is a “Run .NET Script” runbook activity. Here’s the code I used:

$inputText = "{Command Output for "Get MessageTracking Results"}"

if($inputText -ne ""){

    #Each message tracking result is separated by the ¶ symbol (see below for why)
    $collection = $inputText.Split("¶")
    $returnCollection = @()
    foreach ($item in $collection)
    {
        #Each result consists of one "[Label: value]" pair per line
        $keypairCollection = $item.Split("`n")

        #Strip the surrounding brackets, then remove the label prefix with -replace
        #(TrimStart() with a string argument trims a *set* of characters rather than
        #a prefix, and can eat into the value itself)
        $EventID = $keypairCollection[0].Trim().TrimStart("[").TrimEnd("]") -replace '^EventId:\s*',''
        $Source = $keypairCollection[1].Trim().TrimStart("[").TrimEnd("]") -replace '^Source:\s*',''
        $Sender = $keypairCollection[2].Trim().TrimStart("[").TrimEnd("]") -replace '^Sender:\s*',''
        $Recipients = $keypairCollection[3].Trim().TrimStart("[").TrimEnd("]") -replace '^Recipients:\s*',''
        $MessageSubject = $keypairCollection[4].Trim().TrimStart("[").TrimEnd("]") -replace '^MessageSubject:\s*',''
        $MessageId = $keypairCollection[5].Trim().TrimStart("[").TrimEnd("]") -replace '^MessageId:\s*',''

        #Rebuild each result as a proper object so it can be exported to CSV
        $obj = New-Object -TypeName PSObject
        Add-Member -InputObject $obj -MemberType NoteProperty -Name "EventID" -Value $EventID
        Add-Member -InputObject $obj -MemberType NoteProperty -Name "Source" -Value $Source
        Add-Member -InputObject $obj -MemberType NoteProperty -Name "Sender" -Value $Sender
        Add-Member -InputObject $obj -MemberType NoteProperty -Name "Recipients" -Value $Recipients
        Add-Member -InputObject $obj -MemberType NoteProperty -Name "MessageSubject" -Value $MessageSubject
        Add-Member -InputObject $obj -MemberType NoteProperty -Name "MessageId" -Value $MessageId

        $returnCollection += $obj
    }
    $output = $returnCollection | ConvertTo-Csv -Delimiter ";"
}
else{
    [string]$output = "No messages could be found with the specified parameters."
}

You might still wonder why I had to use the ¶ symbol as a delimiter for the output from the “Get MessageTracking Results” activity… The problem with parsing the output is that you have to split the individual results returned by the activity. Using a regular comma or semicolon is out of the question, as it’s quite likely that the subject of a message contains that same character, which would cause the output conversion to fail (I found out the hard way). So we needed a symbol which allowed us to safely parse the output; hence the ¶ symbol. For now, this does mean that if someone uses the ¶ symbol in a subject and it is returned by the message tracking logs, the above code will fail and there won’t be any output for that message tracking request.
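One possible alternative (a sketch, not something we tested in our runbook): instead of relying on a sentinel character at all, you could split the flattened string wherever a new record starts. Every result begins with “[EventId:”, so a regex lookahead can cut the string at those boundaries regardless of which separator was configured:

#Split wherever a new "[EventId:" record begins, then strip any leftover
#separator characters from the end of each fragment
$records = [regex]::Split($inputText, '(?=\[EventId:)') |
    ForEach-Object { $_.Trim().TrimEnd(",¶".ToCharArray()) } |
    Where-Object { $_ -ne "" }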

If you have a better alternative: feel free to comment and let me know!

Some final thoughts

All in all, Orchestrator proves to be quite a flexible platform for tasks like the one described in this article. Especially the combination with SharePoint makes it very easy to build ‘custom’ tools which can be used by anyone, without needing to grant them permissions in Exchange (even though RBAC provides a nice framework to do just that). I only wish some of these quirks would get fixed, as that would remove the need for additional steps (and code) which only add complexity…

Exchange 2013 PowerShell

Spooky! The curious case of the ‘ghost’ File Share Witness…

Recently, I was doing some research for a book that I’m working on together with Exchange MVPs Paul Cunningham and Steve Goodman. It involved recovering a “failed” server using the /m:recoverserver switch. The process itself is straightforward, but depending on which server role(s) you are recovering, you might have to perform some additional post-recovery steps.

In this particular case, I was recovering a Client Access Server (single role) which also happened to be the File Share Witness for one of my Database Availability Groups.
As part of the recovery, you need to ‘reconfirm’ the recovered server as the File Share Witness. One way of doing so is to run the following command:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup

However, upon executing the command, I was presented with the following error message:

[Image: error message stating that a file share witness already exists]

Given that I didn’t use a System State backup, I was surprised to read that a File Share Witness already existed.
The first thing I did was check the restored server itself to see whether the share existed. As expected, there was nothing there.

By default, a DAG uses the Node and File Share Majority quorum model, which prevents you from removing the File Share Witness from the cluster because it is a critical resource. So my next thought was to temporarily ‘move’ the File Share Witness to another server and then move it back. First, I executed the following command to move the FSW to another server:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <server name>

The command completed successfully, after which I decided to move the FSW back to the recovered server using the following command:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <recovered server name>

I was surprised to see that the command failed with the same error message as before:

[Image: the same “file share witness already exists” error message]

I then took a peek at the cluster resources and found the following:

[Image: the cluster resources, showing two File Share Witness resources for the same DAG, one of them failed]

It seemed there were now TWO File Share Witnesses for the same DAG; the failed one was the witness that used to live on the recovered server.

At this point, I decided to clean house and remove both resources. Before being able to do so, I had to switch the quorum model to “Node Majority”:

Set-ClusterQuorum -NodeMajority

I then re-ran the command to configure the recovered server as the File Share Witness:

Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <recovered server name>

Note: when configuring the File Share Witness, the cluster’s quorum model is automatically changed back to Node and File Share Majority.
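If you want to verify that everything ended up the way it should, a quick check (a sketch; run the commands from one of the DAG members) could look like this:

#Confirm the cluster is back on the Node and File Share Majority quorum model
Get-ClusterQuorum

#Confirm the DAG sees and uses the witness share on the recovered server
Get-DatabaseAvailabilityGroup <DAG name> -Status | Format-List Name,WitnessServer,WitnessShareInUse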

After this series of steps, everything was back to the way it was and working as it should. I decided to double-check with Microsoft whether they had seen this before, which is also where I got the [unofficial] name for this “issue”: the ghost File Share Witness (thanks to Tim McMichael). If you ever land in this situation, I suggest you contact Microsoft Support to figure out how you got there. From personal testing, however, I can tell that this behaviour seems consistent when recovering a File Share Witness server using the /m:recoverserver switch.

Blog Exchange Exchange 2013

The road ahead… New challenges!

Over the past twelve years I’ve gone through a series of jobs, from being a support agent at a helpdesk to being an Exchange consultant at Xylos today. All of the opportunities I have had and the customers I’ve worked with allowed me to grow as a consultant and as a person. I’m very grateful for the opportunities Xylos has offered me, but after having spent a little over six years working for them, I thought it was time for me to look forward and accept a new challenge.

As such, I’m happy and excited to announce that I will be joining ENow Software on September 1st of this year as Product Director for their Exchange monitoring and reporting tools: Mailscape and Mailscape for Exchange Online. I look forward to working with all of the customers and getting feedback from the community to make the software even better than it already is today. Part of the reason I chose ENow is their unique approach and philosophy: ENow started out as a consulting company and continues to provide professional services, which allows me to stay true to my “roots”.

I’m very excited about this new adventure, as it allows me to combine the things I love to do with the ability to contribute to a great piece of software. Over the past few months I have been working closely with ENow to build Mailscape for Exchange Online. So far, customers are thrilled with what they see, and we have plenty more to come! (Did you know that Mailscape for Exchange Online was also nominated for the Best of TechEd awards?)

In order to keep up with the ever-changing landscape in IT, I will continue to consult for ENow’s customers, albeit that I will spend less time doing so than I currently do. I strongly believe that staying hands-on with Exchange, Office 365 and everything related will allow me to contribute better on a technical level, and also to better understand how our software should work with everything Microsoft does.

Of course, I will continue to blog here, speak at various tech conferences and be part of The UC Architects podcast. As a matter of fact, I’ll be speaking at Exchange Connections in Las Vegas in September, where I will present two sessions and a one-day workshop on deploying a hybrid Exchange environment. If you haven’t signed up yet, I strongly suggest you do: after all, IT/DEV Connections is one of the rare conferences where you get the chance to meet real-world experts presenting no-nonsense, hands-on topics on various Microsoft technologies. This year’s edition has everything needed to make it even better than last year’s, including a new location: the beautiful Aria hotel/convention center.

Although I won’t officially start until September, feel free to reach out to me regarding Mailscape or ENow’s software in general; I’ll be more than happy to assist you with any questions you have. As always, you can follow me on Twitter (@mvanhorenbeeck) or send me an email at michael.van.horenbeeck@enowsoftware.com!

-Michael

Blog

Upcoming speaking engagements

2014 promises to be a busy year, just like 2013. So far, I’ve had the opportunity to speak at the Microsoft Exchange Conference in Austin and soon I’ll be speaking at TechEd in Houston as well. Below are my recent and upcoming speaking engagements. If you’re attending any of these conferences, feel free to hit me up and have a chat!

-Michael

Pro-Exchange, Brussels, BE

Just last week, Pro-Exchange held another in-person event at Xylos in Brussels. The topic of the night was “Best of MEC”, where I presented for a full 2.5 hours on anything and everything that was interesting at the Microsoft Exchange Conference in Austin.

TechEd North America – Houston, TX

This year, I’ve got the opportunity to speak at TechEd in Houston, where I’ll be presenting my “Configure a hybrid Exchange deployment in (less than) 75 minutes” session.
The session takes place on Monday, May 12, from 4:45 to 6:00 PM.

More information: OFC-B312 Building a Hybrid Microsoft Exchange Server 2013 Deployment in Less than 75 Minutes

ITPROceed – Antwerp, BE

Given that Microsoft isn’t organizing any TechDays in Belgium this year, the Belgian IT Pro community took matters into their own hands and created this free one-day event. It will take place on June 12th in Antwerp at ALM.
The conference consists of multiple tracks, amongst which is “Office Server & Services”, for which I will be presenting a session on “Exchange 2013 in the real world, from deployment to management”.

For more information, have a look at the official website here.

Blog Events TechEd NA 2014

The UC Architects episode 37 now available!

Even though it’s only been a couple of days since we released episode 36 – “Live @ MEC” – we proudly announce the release of our latest episode: “Episode 37 – Don’t MEC my Heartbleed”.

In this latest installment, Steve, John, Michel, Stale and I talk about a myriad of things, including some random thoughts on the Microsoft Exchange Conference, the recently disclosed Heartbleed vulnerability, and the latest improvements in the world of hybrid Exchange deployments. Stale also started a new feature called “Using Lync like a Lync Pro”, in which he reveals a very handy tip on how to better use Lync. Make sure you don’t miss it!

Don’t let the length of the show (almost 2 hours!) scare you, it’s filled with tons of great info!

Talk to you soon!

Michael

Blog Events Exchange News Podcasts