Spooky! The curious case of the ‘ghost’ File Share Witness…

Recently, I was doing some research for a book that I'm working on together with Exchange MVPs Paul Cunningham and Steve Goodman. It involved recovering a "failed" server using the /m:recoverserver switch. The process itself is straightforward, but depending on what server role(s) you are recovering, you might have to perform some additional post-recovery steps.

In this particular case, I was recovering a Client Access Server (single role) which also happened to be the File Share Witness for one of my Database Availability Groups.
As such, you need to 'reconfirm' the recovered server as a File Share Witness. One way of doing so is to run the following command:

[sourcecode language="powershell"]Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup[/sourcecode]

However, upon executing the command, I was presented with the following error message:

[screenshot: error message stating that a file share witness already exists]

Given that I didn’t use a System State Backup, I was surprised to read that a File Share Witness already existed.
The first thing I did was to check the restored server itself to see if the share existed. As expected, there was nothing to see.

By default, a DAG uses the Node and File Share Majority quorum model. This prevents you from removing the File Share Witness from the cluster, because it is a critical resource. So, my next thought was to temporarily 'move' the File Share Witness to another server and then move it back. First, I executed the following command to move the FSW to another server:

[sourcecode language="powershell"]
Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <server name>
[/sourcecode]

The command completed successfully, after which I decided to move the FSW back to the recovered server using the following command:

[sourcecode language="powershell"]Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <recovered server name>[/sourcecode]

I was surprised to see that the command failed with the same error message as before:

[screenshot: the same error message as before]

I then took a peek at the cluster resources and found the following:

[screenshot: cluster resources listing two File Share Witness resources]

It seemed there were now TWO File Share Witness resources for the same DAG; the failed one was the one that used to live on the recovered server.

At this point, I decided to clean house and remove both resources. Before being able to do so, I had to switch the quorum model to "Node Majority":

[sourcecode language="powershell"]Set-ClusterQuorum -NodeMajority[/sourcecode]
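
The stale witness resources themselves can then be removed through Failover Cluster Manager or PowerShell. A minimal sketch using the FailoverClusters module (the resource name below is purely illustrative):

[sourcecode language="powershell"]
# List the File Share Witness resources in the DAG's underlying cluster
Get-ClusterResource -Cluster <DAG name> | Where-Object { $_.ResourceType.Name -eq "File Share Witness" }

# Remove a stale witness resource by name (illustrative name)
Remove-ClusterResource -Name "File Share Witness (\\OLDSERVER\DAG1.domain.local)" -Force
[/sourcecode]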

I then re-ran the command to configure the recovered server as the File Share Witness:

[sourcecode language="powershell"]Get-DatabaseAvailabilityGroup <DAG name> | Set-DatabaseAvailabilityGroup -WitnessServer <recovered server name>[/sourcecode]

Note: when configuring the File Share Witness, the cluster's quorum model is automatically changed back to Node and File Share Majority.

After this series of steps, everything was back to the way it was and working as it should. I decided to double-check with Microsoft whether they had seen this before. That's also where I got the [unofficial] name for this "issue": the ghost file share witness (thanks to Tim McMichael). If you ever land in this situation, I suggest you contact Microsoft Support to figure out how you got there. From personal testing, however, I can tell that this behaviour seems to occur consistently when recovering a File Share Witness using the /m:recoverserver switch.


The road ahead… New challenges!

Over the past twelve years I’ve gone through a series of jobs, from being a support agent at a helpdesk to being an Exchange consultant at Xylos today. All of the opportunities I have had and the customers I’ve worked with allowed me to grow as a consultant and as a person. I’m very grateful for the opportunities Xylos has offered me, but after having spent a little over six years working for them, I thought it was time for me to look forward and accept a new challenge.

As such, I'm happy and excited to announce that I will be joining ENow Software on September 1st of this year as Product Director for their Exchange monitoring and reporting tools: Mailscape and Mailscape for Exchange Online. I look forward to working with all of the customers and getting feedback from the community to make the software even better than it already is today. Part of the reason why I chose ENow is their unique approach and philosophy: ENow started out as a consulting company and continues to provide professional services, which allows me to stay true to my "roots".

I'm very excited about this new adventure as it allows me to combine the things I love to do with the ability to contribute to a great piece of software. Over the past few months I have been working closely with ENow to build Mailscape for Exchange Online. So far, customers are thrilled with what they see and we have plenty more to come! (Did you know that Mailscape for Exchange Online was also nominated for the Best of TechEd awards?)

In order to keep up with the ever-changing landscape in IT, I will continue to consult for ENow's customers, albeit spending less time doing so than I currently do. I strongly believe that staying hands-on with Exchange, Office 365 and everything related will allow me to contribute better on a technical level, but also to better understand how our software should work with everything Microsoft does.

Of course, I will continue to blog here, speak at various tech conferences and be part of The UC Architects podcast. As a matter of fact, I'll be speaking at Exchange Connections in September in Vegas, where I will be presenting two sessions and a one-day workshop on deploying a hybrid Exchange environment. If you haven't signed up, I strongly suggest you do: after all, IT/DEV Connections is one of the rare conferences where you get the chance to meet real-world experts presenting no-nonsense, hands-on topics on various Microsoft technologies. This year's edition has everything to make it even better than last year's, including a new location: the beautiful Aria hotel/convention center.

Although I won't officially be starting until September, feel free to reach out to me regarding Mailscape or ENow's software in general. I'll be more than happy to assist you with any questions you have. As always, you can follow me on Twitter (@mvanhorenbeeck) or send me an email at michael.van.horenbeeck@enowsoftware.com!

-Michael


Upcoming speaking engagements

2014 promises to be a busy year, just like 2013. So far, I've had the opportunity to speak at the Microsoft Exchange Conference in Austin, and soon I'll be speaking at TechEd in Houston as well. Below are my recent and upcoming speaking engagements. If you're attending any of these conferences, feel free to hit me up for a chat!

-Michael

Pro-Exchange, Brussels, BE

Just last week, Pro-Exchange held another in-person event at Xylos in Brussels. The topic of the night was "best of MEC", where I presented for the full 2.5 hours on anything and everything that was interesting at the Microsoft Exchange Conference in Austin.

TechEd North America – Houston, TX

This year, I got the opportunity to speak at TechEd in Houston. I'll be presenting my session "Configure a hybrid Exchange deployment in (less than) 75 minutes".
The session takes place on Monday, May 12 from 4:45 – 6:00 PM.

More information: OFC-B312 Building a Hybrid Microsoft Exchange Server 2013 Deployment in Less than 75 Minutes

ITPROceed – Antwerp, BE

Given that Microsoft isn't organizing any TechDays in Belgium this year, the Belgian IT pro community took matters into their own hands and created this free one-day event. It will take place on June 12th in Antwerp at ALM.
The conference consists of multiple tracks, among which is "Office Server & Services", for which I will be presenting a session on "Exchange 2013 in the real world, from deployment to management".

For more information, have a look at the official website here.


The UC Architects episode 37 now available!

Even though it's only been a couple of days since we released episode 36 – "Live @ MEC", we proudly announce the release of our latest episode: "episode 37 – don't MEC my Heartbleed".

In this latest installment, Steve, John, Michel, Ståle and I talk about a myriad of things, including some random thoughts on the Microsoft Exchange Conference, the recently disclosed Heartbleed vulnerability and the latest improvements in the world of hybrid Exchange deployments. Ståle also started a new feature called "using Lync like a Lync PRO", in which he reveals a very handy tip on how to better use Lync. Make sure you don't miss it!

Don't let the length of the show (almost 2 hours!) scare you; it's filled with tons of great info!

Talk to you soon!

Michael


This was MEC 2014 (in a nutshell)

As things wind down after a week full of excitement and – yes, in some cases – emotion, MEC 2014 is coming to an end. Lots of attendees have already left Austin, and those who stayed behind are sharing a few last drinks before making their way back home as well. As good as MEC 2012 in Orlando was, MEC 2014 was E-P-I-C. Although some might argue that the conference got off to a slow start – despite the great Dell Venue 8 Pro tablet giveaway – you cannot ignore the success of the rest of the week.

With over 100 unique sessions, MEC was packed with tons and tons of quality information. To see that amount of content being delivered by the industry's top speakers is a truly unique experience. After all, at how many conferences is the PM or lead developer the one presenting the content on a specific topic? Microsoft also did a fairly good job of keeping a balance between the different types of sessions by having a mix of Microsoft employees presenting sessions that reflected their view on things ("how things should work / how it's designed to be") and MVPs and Masters presenting a more practical approach ("how it really works").

I also liked the format of the "unplugged" sessions, where you could interact with members of the product team to discuss a variety of topics. I believe these sessions are not only very interesting (tons of great information), but also an excellent way for Microsoft to connect with the audience and receive immediate feedback on what is going on "out there". For example, I'm sure that the need for better guidance or maybe a GUI for Managed Availability is a message that was well conveyed, and Microsoft should use this feedback to prioritize some of the efforts going into development. Whether that will happen, only time will tell…

This edition wasn't only a success because of the content, but also because of the interactions. It was good to see some old friends and make many new ones. To me, conferences like this aren't only about learning, but also about connecting with other people and networking. There were tons of great talks – some of which have given me food for thought and blog posts.

Although none of them might seem earth-shattering, MEC had a few announcements and key messages, some of which I'm very happy to see:

  • Multi-Factor Authentication and SSO are coming to Outlook before the end of the year. On-premises deployments can expect support for it next calendar year.
  • Exchange sizing guidance has been updated to reflect some of the new features in Exchange 2013 SP1:
    • The recommended page file size is now 32,778 MB (32,768 MB + 10 MB) if your Exchange server has more than 32 GB of memory. It should still be a fixed size and not managed by the OS.
    • CAS CPU requirements have increased by 50% to accommodate MAPI/HTTP. They are still lower than Exchange 2010's.
  • If you didn't know it before, you will now: NFS is not supported for hosting Exchange data.
  • The recommended Exchange deployment uses 4 database copies: 3 regular and 1 lagged, with the FSW preferably in a 3rd datacenter.
  • Increased emphasis on using a lagged copy.
  • An OWA app for Android is coming.
  • OWA in Office 365 will get a few new features, including Clutter, People View and Groups. No word on if and when these will be made available for on-premises customers.

By now, it's clear that Microsoft's development cycle is based on a cloud-first model, which – depending on your take on things – makes a lot of sense. This topic was also discussed during the live recording of The UC Architects; I recommend you have a listen (as soon as it's available) to hear how The UC Architects, Microsoft and the audience feel about this. Great stuff!

It's also interesting to see some trends developing. "Enterprise Social" is probably one of the biggest trends at the moment. With Office Graph being recently announced, I am curious to see how Exchange will evolve to embrace the so-called "Social Enterprise". Features like Clutter, People View and Groups are already good examples of this.

Of course, MEC wasn't all about work. There was also time for fun. Lots of it. The format of the attendee party was a little atypical for a conference. Usually all attendees gather at one fairly large location; this time, however, the crowd was scattered across several bars on Rainey Street, which Microsoft had rented out. Although I was a little skeptical at first, it actually worked really well and I had tons of fun.

Then there was the UC Architects party, which ENow graciously offered to host for us. The Speakeasy rooftop was really amazing and the turnout even more so. The party was a real success and I'm pretty confident there will be more in the future!

I’m sure that in the course of the next few weeks, more information will become available through the various blogs and websites as MVPs, Masters and other enthusiasts have digested the vast amount of information distributed at MEC.

I look forward to returning home, getting some rest and starting all over again!

Au revoir, Microsoft Exchange Conference. I hope to see you soon!


Why MEC is the place to be for Exchange admins/consultants/enthusiasts!

In less than a month, the 2014 edition of the Microsoft Exchange Conference will kick off in Austin, Texas. For those who haven't decided whether they'll be going yet, here are some reasons why you should.

The Value of Conferences

Being someone who frequently attends conferences, I *think* I'm in a position to say that conferences provide great value. Typically, you get up to date with the latest (and greatest) technology in IT.

Often, the cost of attending a conference is estimated to be higher than that of a traditional 5-day course. However, I find this not to be true – at least not all the time. It is true that – depending on where you fly in from – travel and expenses might add to the cost. However, I think it is a good thing to be 'away' from your daily work environment. That typically leaves you less tempted to be preoccupied with work and free to soak in the knowledge shared throughout the conference. The experience is quite different from a training course. Conferences might not provide you with the exact same information as a training, but you'll definitely be able to learn more (different) things. Especially if your skills in a particular product are already well developed, conferences are the place to widen your knowledge.

On top of that, classroom trainings don't offer you the same networking opportunities. In the case of MEC, for instance, there will be a bunch of Exchange MVPs and Masters you can talk to. All of them are very knowledgeable, and I'm sure they won't mind a good discussion on Exchange! This could be your opportunity to ask some really difficult questions or just hear their opinion on a specific issue. Sometimes the insights of a third person can make a difference!

It is also the place where all the industry experts will meet. Like I mentioned earlier, there will be Masters and MVPs, but a lot of people from within Microsoft's Exchange product group will be there as well. What better people to ask your questions?

Great Content

Without any doubt, the Exchange Conference will be the place in 2014 to learn about what's happening with Exchange. Service Pack 1 – or Cumulative Update 4, if you will – has just been released, and as you might've read, there are many new things to discover.

At the same time, it's been almost 1.5 years since Exchange 2013 was released, and there are quite a few sessions that focus on deployment and migration. If you're looking to migrate soon, or if you're a consultant migrating other companies, I'm sure you'll get a lot of value from these sessions, as they will provide you with first-hand information. When MEC 2012 was held – shortly before the launch of Exchange 2013 – this wasn't really possible, as there weren't many deployments out there.

Sure, one might argue that the install base for Exchange 2013 is still low. However, if you look back, deployments of Exchange 2010 only really kicked off once it was past the SP1 era, and I expect nothing different for Exchange 2013.

As a reference: here’s a list of sessions I definitely look forward to:

And of course the “Experts unplugged” sessions:

I realize that's way too many sessions already, and I will probably have to choose which ones I'll be able to attend…
But the fact that there are so many only proves how much valuable information there is at MEC…

Great speakers

I've had a look through who is speaking at MEC, and I can only conclude that there is a TON of great speakers – all of whom, I am sure, will make it worth your while. While Microsoft speakers will most likely give you an overview of how things are supposed to work, many of the MVPs have sessions scheduled which might give you a slightly less biased view of things. The combination of both makes for a good mix to get you started on the new stuff and broaden your knowledge of what was already there.

Location

Austin, Texas. I haven't been there myself, but based on what Exchange Master Andrew Higginbotham blogged a few days ago, it looks promising!

Microsoft has big shoes to fill. MEC 2012 was a huge success and people are expecting the same – if not better – from MEC 2014. Additionally, for those who were lucky enough to attend the Lync Conference in Vegas earlier this month, that is quite something for MEC to compete with. Knowing the community and the people behind MEC, I'm pretty confident this edition will be EPIC.

See you there!

Michael


Rewriting URLs with KEMP LoadMaster

Earlier this month, I wrote an article on how you can use a KEMP LoadMaster to publish multiple workloads onto the internet using only a single IP address, through a feature called content switching.

Building on the principle of content switching, KEMP LoadMasters also allow you to modify traffic while it's flowing through the device. More specifically, this article will show you how to rewrite URLs using a LoadMaster.

Rewriting URLs is quite common. The goal is to 'send' people to another destination than the one they are trying to reach. This could be the case when you are changing your domain name, or maybe as part of a merger where you want the other company's traffic to automatically be redirected to your website.

Let's start with a simple example we are all familiar with: you want to redirect traffic from the root of your domain to /owa. The goal is that when someone enters e.g. webmail.domain.com, that person is automatically redirected to webmail.domain.com/owa. Although Exchange 2013 already redirects traffic from the root to the /owa virtual directory out of the box, the idea here is to illustrate how you can do it with KEMP instead of IIS. As a result, you could just as easily send someone from one virtual directory (e.g. /test) to another (e.g. /owa).

How it works – Mostly

Just as in my previous article, everything revolves around content switching. However, next to the content matching rules, KEMP's LoadMasters allow you to define so-called header modification rules, as shown in the following screenshot:

[screenshot: header modification rules in the LoadMaster UI]

By default, it suffices to create such a header modification rule and assign it to a virtual service. By doing so, you will rewrite traffic to the root (or the /test virtual directory) and people will end up at the /owa virtual directory.

To create a header modification rule, perform the following steps:

  1. Log in to your LoadMaster
  2. Click on Rules & Checking and then Content Rules
  3. On top of the Content Rules page, click Create New…
  4. Fill in the details as shown in the following screenshot:

    [screenshot: header modification rule details]

  5. Click Create Rule.
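
For reference, the rule boils down to a "Modify URL" content rule that matches the root and rewrites it to /owa. Roughly, the fields look like this (the rule name is purely illustrative, and field labels may differ slightly between LoadMaster firmware versions):

[sourcecode]
Rule Name:     RootToOwa
Rule Type:     Modify URL
Match String:  /^\/$/
Modified URL:  /owa
[/sourcecode]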

Now that you have created the header modification rule, you need to assign it to the virtual service on which you want to use it:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules. If you don't see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to 'read' (and thus modify) the traffic.
  4. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    [screenshot: request rules drop-down]

That’s it. All traffic that hits the LoadMaster on the root will now automatically be rewritten to /owa.

How it works with existing content rules (SubVSs)

When you are already using content rules to ‘capture’ traffic and send it to a different virtual directory (as described in my previous article), the above approach won’t work – at least not entirely.

While the creation of the header modification rule and the addition of that rule to the virtual service remain exactly the same, there is an additional task you have to perform.

First, let me explain why. When you are already using content rules, the LoadMaster will use these rules to evaluate traffic in order to make the appropriate routing decision. As a result, these content rules are processed before the header modification rules. However, when the LoadMaster doesn't find a match in one of its content matching rules, it will not process the header modification rule – at least not when you are trying to modify a virtual directory. As I will describe later in this article, it will still process host header modifications, though.

So, in order for the LoadMaster to perform the rewrite, the initial destination has to be matched by a content rule on the SubVS you want to redirect traffic to. Let's take the following example: you are using content rules to direct traffic from a single IP address to different virtual directories. At the same time, you want traffic to a non-existent virtual directory (e.g. /test) to be redirected to /owa.

First, you start off again by creating a header modification rule. The process is the same as outlined above. The only thing that changes is that the match string will now be "/^\/test$/" instead of /^\/$/:

[screenshot: header modification rule with match string /^\/test$/]

Next, create a new content rule, but this time create a content matching rule as follows:

[screenshot: content matching rule]
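
Roughly, the content matching rule looks like this (the rule name "test" is the one used further below; the match string is my assumption based on the scenario):

[sourcecode]
Rule Name:     test
Rule Type:     Content Matching
Match Type:    Regular Expression
Match String:  /^\/test.*/
[/sourcecode]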

Next, we’ll make the changes to the virtual service:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules. If you don't see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic. This is a requirement for the LoadMaster to be able to 'read' (and thus modify) the traffic.
  4. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    [screenshot: request rules drop-down]

Now, we still need to add the content matching rule to the /owa SubVS:

  1. In the properties of the Virtual Service, go down to SubVSs and click the button in the Rules column for the OWA SubVS:

    [screenshot: Rules button for the OWA SubVS]

  2. From the drop-down list, select the rule you created earlier (“test”) and click Add:

    [screenshot: adding the "test" rule to the SubVS]

  3. You should now have 2 rules in the SubVS:

    [screenshot: the SubVS with 2 rules]

That’s it. If you now navigate to the /test virtual directory, the traffic will automatically be rewritten to /owa.

What if I want to redirect more than a single virtual directory to /owa?

In theory, you would need to create a header modification rule for each of the virtual directories, plus a content matching rule for each. However, if you are going to redirect multiple virtual directories to /owa, you can also use the "default" content rule, which acts as a catch-all. As a result, instead of creating and adding a separate content matching rule for each virtual directory, you just create a header modification rule for each of them and add the default content rule to the /owa SubVS as shown below:

[screenshot: default content rule assigned to the /owa SubVS]

What about rewriting host names?

Rewriting host names is handled a tad differently than e.g. virtual directories. Unlike the latter, host header modifications are processed before the content matching rules for the virtual services. As a result, it suffices to create a modification rule and apply it to the virtual service. To create a host header modification rule, do the following:

  1. Go to Rules & Checking and then Content Rules
  2. Click Add New… and create the rule as follows:

    [screenshot: host header modification rule]

Once you have created the rule, add it to the HTTP Header Modification rules on the virtual service, and you're done. Traffic that hits test.domain.com will now automatically be rewritten to webmail.domain.com. It's as easy as that.
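
For completeness, such a rule could look roughly like this – a "Replace Header" rule acting on the Host header. The rule name and the exact field labels are assumptions on my part and may differ per firmware version:

[sourcecode]
Rule Name:     TestToWebmail
Rule Type:     Replace Header
Header Field:  Host
Match String:  /test.domain.com/
Value:         webmail.domain.com
[/sourcecode]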

Conclusion

Rewriting URLs with KEMP's LoadMaster is relatively easy. You only have to watch out when you are already using content switching rules, as I described earlier.

Until later,

Michael


Free/Busy in a hybrid environment fails and Test-FederationTrust returns error "Failed to validate delegation token"

Earlier this week, following an issue with Free/Busy in Exchange Online, I was troubleshooting the exchange of Free/Busy information in some of my hybrid deployments, where it was still not working.
After having checked some (obvious) things, like the Organization Relationships and whether or not Autodiscover was working properly, I discovered an issue when running the Test-FederationTrust cmdlet.
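
For reference, this is a minimal way to run it from the on-premises Exchange Management Shell (the mailbox address is a placeholder):

[sourcecode language="powershell"]
# Run the federation trust checks on behalf of an on-premises mailbox
Test-FederationTrust -UserIdentity michael@yourdomain.com
[/sourcecode]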

In fact, the cmdlet completed almost entirely successfully, except for the very last step in the process:

Id         : TokenValidation
Type       : Error
Message    : Failed to validate delegation token.

This also explained why I was seeing 401 Unauthorized messages when running the Test-OrganizationRelationship command.

I then checked the same in some of my other deployments and found that they all had the same issue. At least there was some common ground to start working from.
I turned to co-MVP Steve Goodman and asked him to run the same command in one of his labs in order to have a point of reference. In turn, he suggested I run a command which might help:

[sourcecode language="powershell"]Get-FederationTrust | Set-FederationTrust -RefreshMetadata[/sourcecode]

After running the command, I re-ran Test-FederationTrust, which now completed successfully.

Conclusion

Although the Free/Busy issues in Office 365 should be solved by now, some customers might still experience problems exchanging Free/Busy information. In that case, the problem manifests itself by e.g. online users not being able to request on-premises users' availability information. Refreshing the federation trust metadata, as described above, resolved it in each of the deployments I checked.


Exchange Online Archive (EOA): a view from the trenches – part 2

A bit later than expected, here is finally the successor to the first article about Exchange Online Archiving, which I wrote a while ago.

Exchange Online Archives and Outlook

How does Outlook connect to the online archive? Essentially, it's the same process as with an on-premises archive. The client receives the archive information during the initial Autodiscover process. If you take a look at the response, you will see something similar to this in the output:

[screenshot: Autodiscover response]
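
What you're looking for is the AlternativeMailbox element in the (POX) Autodiscover response. Trimmed down and with placeholder values, it looks roughly like this:

[sourcecode language="xml"]
<AlternativeMailbox>
  <Type>Archive</Type>
  <DisplayName>Online Archive - John Doe</DisplayName>
  <SmtpAddress>archive-guid@tenant.onmicrosoft.com</SmtpAddress>
</AlternativeMailbox>
[/sourcecode]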

Based on that SMTP address, the Outlook client will make a second Autodiscover call to retrieve the connection settings for the archive, after which it will try connecting to it. What happens then is exactly the same as how Outlook would connect to a regular mailbox in Office 365. Because Exchange Online is configured to use basic authentication for Outlook, the user will be prompted to enter their credentials. It's particularly important to point this out to your users, as the credential window will have no reference to what it's used for. If you have deployed SSO, users will have to use their UPN (and not domain\username!) in the user field.

Experiences

So far we have covered what Exchange Online Archiving is all about, what the prerequisites are to make it work, and how things come together in e.g. Outlook. Now it's time to stir things up a little and talk about how things are actually perceived in real life.

First, let me start by pointing out that this feature actually works great, IF you are willing to accept some of the particularities inherent to the solution. What do I mean by particularities?

Latency

Unlike on-premises archives, your archives are now stored 'in the cloud', which means that the only way to access them is over the internet. Depending on where you are connecting from, this could be an advantage or a disadvantage. I've noticed that connectivity to the archive, and therefore the user experience, is highly dependent on the internet access you have. Rule of thumb: the more bandwidth and the lower the latency, the better it gets. This shouldn't be a surprise, but it is easily forgotten. I have found on-premises archives to be more responsive in terms of initial connectivity and retrieval of content. This brings me to the second point: speed.

Speed

As you are connecting over the internet, the speed of fetching content is highly dependent on the speed of your internet connection (see a similarity here?). The bigger the message/attachment you want to download, the longer it will take. Truth be told, you'll have the same experience while accessing an on-premises archive from a remote location, so it's not something exclusive to Office 365.

Outlook

To be honest, Outlook does a relatively good job of working with the archive – at least when you deal with it the way it was designed. If you let Exchange sync expired items to your archive using the Managed Folder Assistant, your life will be great! However, if you dare to manually drag and drop messages from your primary mailbox into the archive, you'll be in for a surprise. Outlook treats such an operation as a 'foreground' action, which means that you will have to wait for the action to complete before you can do anything else in Outlook. The problem here is that if you choose to manually move a 4 MB message to the archive, it could take as long as 20 – 30 seconds (depending on your internet connection) before the action completes. To make things worse: during this operation Outlook freezes, and if you try clicking something it'll (temporarily) go into a "Not Responding…" state until the operation completes. According to Microsoft's support, this is by design. So, as a measure of precaution: advise your users NOT to drag and drop messages; just let Exchange take care of it – something it does marvelously, by the way.

I have found that proper end-user education is also key here. If users are well informed about how the archive works and have had some training on how to use retention tags, they'll be on their way in no time!

Provisioning

As part of the problem I described above, the initial provisioning process can be painful. When you first enable an archive, chances are that a lot of items will be moved to the archive. Although this process is handled by the MFA, if your mailbox is open whilst the MFA processes it, Outlook might become unresponsive, or extremely slow at the least – this because changes are happening in the mailbox and Outlook needs to sync those changes to the client's OST file (when running in cached mode, at least). Instead, it's better to provision the archive on-premises, let the MFA do its work and then move the archive to Office 365. The latter approach works like a charm and doesn't burden the user with an unresponsive Outlook client. If you are going to provision archives on-premises first, you might find it useful to estimate the size of an archive before diving in, head first.
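
In a hybrid deployment, moving just the archive to Office 365 is a regular remote move request with the -ArchiveOnly switch. A minimal sketch, run from Exchange Online PowerShell (all names below are placeholders):

[sourcecode language="powershell"]
# Credentials of an on-premises admin account
$OnPremCred = Get-Credential

# Move only the archive mailbox to Exchange Online; the primary mailbox stays on-premises
New-MoveRequest -Identity john@domain.com -ArchiveOnly -Remote `
    -RemoteHostName hybrid.domain.com -RemoteCredential $OnPremCred `
    -TargetDeliveryDomain tenant.mail.onmicrosoft.com
[/sourcecode]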

Search

This is a short one. Search is great. Because Outlook and Exchange can do cross-premises searches, you will be able to search both your primary mailbox and your archive mailbox at once. I didn't have many issues here. So: thumbs up!

Other Tips & Tricks

General (best) practices

Other than dealing with the particularities above, you shouldn't do anything different compared to 'regular' on-premises archives. Try not to overwhelm your users with a ginormous amount of retention tags. Instead, offer them a few tags they can use and – if necessary – adapt based on user feedback.

Autodiscover

Given that both Outlook and Exchange depend on Autodiscover to make the online archive work, you should make sure that Autodiscover is working for your Exchange deployment AND that your Exchange servers are able to query Office 365's Autodiscover service successfully as well.

This is especially important if you are using Outlook Web App (OWA) to access your online archive. In this case, it's not Outlook but Exchange that performs the Autodiscover lookup and connects to the archive. If your internet connection isn't working properly, or if you have some sort of forward authenticating proxy server in between, things might not work (or work only intermittently).
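
One quick way to verify the latter is to check whether the Office 365 Autodiscover endpoint is reachable from the Exchange server itself. A rough sketch (an HTTP 401/403 response means the endpoint is reachable; a timeout points at a proxy or firewall problem):

[sourcecode language="powershell"]
try {
    # An anonymous request is expected to be rejected; we only care about reachability
    Invoke-WebRequest -Uri "https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml" -UseBasicParsing
} catch [System.Net.WebException] {
    # Inspect the HTTP status code of the rejection
    $_.Exception.Response.StatusCode
}
[/sourcecode]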

Implement it gradually

As described above, it's a bad idea to give everyone a new cloud-based archive at once. It will not only put a heavy load on your internet connection, but it will also affect your users. Instead, try to implement the solution gradually and request feedback from your users. Start with on-premises archives and move them to the cloud in batches, for instance.

DirSync is utterly important!

As described in the prerequisites section, DirSync is very important to online archives, so make sure that you closely monitor how it's doing. If you have issues with DirSync, you will inevitably also have issues with creating archives. Issues with DirSync won't interfere with archives that have already been provisioned, though.

Conclusion

Is Exchange Online Archiving going to solve all your issues? Probably not. Is it a good solution? Yes, absolutely! I have been using Exchange Online Archiving for quite a while and I'm quite happy with it. I rarely encounter any issues, but I have also learnt to live with some of the particularities I mentioned earlier. Also, I treat my archive as a real archive: the stuff that's in there is usually things I don't need all that often, so the little latency overhead that I experience whilst browsing/searching my archive is something I'm not bothered by. However, if I had to work with items from my archive day in, day out, I'd probably have a lot more issues adjusting to the fact that it's less snappy than an on-premises archive.

So remember: set your (or your customer's) expectations straight and you'll enjoy the ride. If not, there might be some bumpy roads ahead!


Estimating the size of an Exchange (online) Archive

As part of some of the (archiving) projects I have worked on, I frequently get asked if there is an easy way to determine what the size of the archive will be once it's been activated. Although it sounds a bit odd at first, there are actually many good reasons why you'd want to know how big an archive will be.

First of all, determining the archive size allows you to better size (or schedule) the storage required for the archives. While there are other ways to do this, knowing how big an archive will be when enabled is very helpful.

Secondly, if you're using Exchange Online Archiving (EOA), it allows you to determine the amount of data that will pass through your internet connection for a specific mailbox. If the amount of data is large (compared to the available bandwidth), I personally prefer to provision the archive on-premises, after which I move it to Office 365 using MRS. But that's another discussion. Especially in this scenario, it can be useful to know how much archive data you can (temporarily) host on-premises before sending it off to Office 365 and freeing up disk space again.

In order to calculate how big an archive would be, I've created a script which will go through all the items in one (or more) mailbox(es) and calculate the total size of all the items that will expire. When an item expires (and thus becomes eligible to be moved to the archive) depends on the retention policy you assign to a mailbox and what retention policy tags are included in that policy.

As the name of the script depicts, it's important to understand that it provides an estimate of the archive size. There are situations in which the results of the script will differ from the real world. This could be the case when you enable the archive and a user assigns personal tags to items before the Managed Folder Assistant has processed the mailbox. In such a scenario, items with a retention tag that differs from the AgeLimit defined in the script will be calculated wrongly. Then again, the script is meant to be run before an archive is created.

Secondly, the script will go through all the folders in a mailbox. If you disabled archiving of calendar items, these items will be wrongly included in the calculation as well. I will try to build this into the script in future releases, but it has a lower priority, as the script was built to provide a pretty good estimate, not a 100% correct number.

The script, which you can download here, accepts multiple parameters:

  • UserPrimarySMTPAddresses – the Primary SMTP Address of the mailbox for which you want to estimate the archive size
  • Report – full file path to a txt file which will contain the archive sizes
  • AgeLimit – the retention time (in days) against which items should be probed. If you have a 60-day retention before items get moved to the archive, enter 60.
  • Server – used for connecting with EWS. Optional; can be used if Autodiscover is unable to determine the connection URI.
  • Credentials – the credentials of an account that has the ApplicationImpersonation management role assigned to it

The output of the script is an object that contains the user's Primary SMTP Address and the size of the archive in MB (TotalPRMessageSize).
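
As a usage sketch (the script file name, mailbox addresses and report path below are purely illustrative):

[sourcecode language="powershell"]
# Estimate archive sizes for two mailboxes, assuming a 60-day move-to-archive tag
$Credentials = Get-Credential
.\Estimate-ArchiveSize.ps1 -UserPrimarySMTPAddresses john@domain.com,jane@domain.com `
    -AgeLimit 60 -Report C:\Temp\ArchiveSizes.txt -Credentials $Credentials
[/sourcecode]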

Credit where credit is due! I would like to thank Michel de Rooij for his insanely good PowerShell scripting skills and for helping me clean up this script to its current form. Before I sent it off to Michel, the code was pretty inefficient [but hey! it was working]; what you'll download has been cleaned up and greatly enhanced. Now you have clean code, additional error handling and some more parameters than in my original script [see the parameters above].

I hope you'll enjoy the script and find it useful. I've used it in multiple projects so far and it really helped me with planning the provisioning of archives.

Note: to run the script, you'll need to have the Exchange Web Services Managed API installed and run it with an account that has the ApplicationImpersonation management role assigned to it.

Cheers,

Michael
