Upcoming speaking engagements

2014 promises to be a busy year, just like 2013. So far, I’ve had the opportunity to speak at the Microsoft Exchange Conference in Austin and soon I’ll be speaking at TechEd in Houston as well. Below are my recent and upcoming speaking engagements. If you’re attending any of these conferences, feel free to hit me up and have a chat!

-Michael

Pro-Exchange, Brussels, BE

Just last week, Pro-Exchange held another in-person event at Xylos in Brussels. The topic of the night was “best of MEC”, where I presented for the full 2.5 hours on everything that was interesting at the Microsoft Exchange Conference in Austin.

TechEd North America – Houston, TX

This year, I’ve got the opportunity to speak at TechEd in Houston. I’ll be presenting my session “Configure a hybrid Exchange deployment in (less than) 75 minutes”.
The session takes place on Monday, May 12 from 4:45 to 6:00 PM.

More information: OFC-B312 Building a Hybrid Microsoft Exchange Server 2013 Deployment in Less than 75 Minutes

ITPROceed – Antwerp, BE

Given that Microsoft isn’t organizing any TechDays in Belgium this year, the Belgian IT Pro community took matters into its own hands and created this free one-day event. It will take place on June 12th in Antwerp at ALM.
The conference consists of multiple tracks, among which is “Office Server & Services”, for which I will be presenting a session on “Exchange 2013 in the real world, from deployment to management”.

For more information, have a look at the official website here.


This was MEC 2014 (in a nutshell)

As things wind down after a week full of excitement and – yes, in some cases – emotion, MEC 2014 is coming to an end. Lots of attendees have already left Austin and those who stayed behind are sharing a few last drinks before making their way back home as well. As good as MEC 2012 in Orlando was, MEC 2014 was E-P-I-C. Although some might argue that the conference got off to a slow start – despite the great Dell Venue Pro 8 tablet giveaway – you cannot ignore the success of the rest of the week.

With over 100 unique sessions, MEC was packed with tons and tons of quality information. To see that amount of content being delivered by the industry’s top speakers is truly a unique experience. After all, at how many conferences is the PM or lead developer presenting the content on a specific topic? Also, Microsoft did a fairly good job of keeping a balance between the different types of sessions by having a mix of Microsoft employees presenting sessions that reflected their view on things (“How things should work / How it’s designed to be”) and MVPs and Masters presenting a more practical approach (“How it really works”).

I also liked the format of the “unplugged” sessions, where you could interact with members of the Product Team to discuss a variety of topics. I believe that these sessions are not only very interesting (tons of great information), but they are also an excellent way for Microsoft to connect with the audience and receive immediate feedback on what is going on “out there”. For example, I’m sure that the need for better guidance or maybe a GUI for Managed Availability is a message that was well conveyed, and Microsoft should use this feedback to prioritize some of the efforts going into development. Whether that will happen, only time will tell…

This edition wasn’t only a success because of the content, but also because of the interactions. It was good to see some old friends and make many new ones. To me, conferences like this aren’t only about learning but also about connecting with other people and networking. There were tons of great talks – some of which have given me food for thought and blog posts.

Although none of them might seem earth-shattering, MEC had a few announcements and key messages, some of which I’m very happy to see:

  • Multi-Factor Authentication and SSO are coming to Outlook before the end of the year. On-premises deployments can expect support for it next calendar year.
  • Exchange Sizing Guidance has been updated to reflect some of the new features in Exchange 2013 SP1:
    • The recommended page file size is now 32778 MB (32 GB + 10 MB) if your Exchange server has more than 32 GB of memory. It should still be a fixed size and not managed by the OS (see the sketch after this list).
    • CAS CPU requirements have increased by 50% to accommodate MAPI/HTTP. They are still lower than for Exchange 2010.
  • If you didn’t know it before, you will now: NFS is not supported for hosting Exchange data.
  • The recommended Exchange deployment uses 4 database copies: 3 regular and 1 lagged, with the FSW preferably in a 3rd datacenter.
  • Increased emphasis on using a lagged copy.
  • An OWA app for Android is coming.
  • OWA in Office 365 will get a few new features, including Clutter, People View and Groups. No word on if and when these will be made available to on-premises customers.
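On the page file guidance above: below is a minimal sketch, assuming the page file lives on C: and using WMI from an elevated PowerShell prompt, of how a fixed 32778 MB page file could be configured. Treat it as an illustration rather than official guidance.

    # Turn off automatic page file management (a reboot may be required before all changes are visible)
    $cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
    $cs.AutomaticManagedPagefile = $false
    $cs.Put() | Out-Null

    # Set a fixed 32778 MB page file on C:, creating the setting if it doesn't exist yet
    $pf = Get-WmiObject Win32_PageFileSetting | Where-Object { $_.Name -eq 'C:\pagefile.sys' }
    if (-not $pf) {
        $pf = ([wmiclass]'Win32_PageFileSetting').CreateInstance()
        $pf.Name = 'C:\pagefile.sys'
    }
    $pf.InitialSize = 32778
    $pf.MaximumSize = 32778
    $pf.Put() | Out-Null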

By now, it’s clear that Microsoft’s development cycle is based on a cloud-first model, which – depending on your take on things – makes a lot of sense. This topic was also discussed during the live recording of The UC Architects; I recommend you have a listen to it (as soon as it’s available) to hear how The UC Architects, Microsoft and the audience feel about this. Great stuff!

It’s also interesting to see some trends develop. “Enterprise Social” is probably one of the biggest trends at the moment. With Office Graph having recently been announced, I am curious to see how Exchange will evolve to embrace the so-called “Social Enterprise”. Features like Clutter, People View and Groups are already good examples of this.

Of course, MEC wasn’t all about work. There’s also time for fun. Lots of it. The format of the attendee party was a little atypical for a conference. Usually, all attendees gather at one fairly large location. This time, however, the crowd was scattered across several bars on Rainey Street, which Microsoft had rented out. Although I was a little skeptical at first, it actually worked really well and I had tons of fun.

Then there was the UC Architects party which ENow graciously offered to host for us. The Speakeasy rooftop was really amazing and the turnout even more so. The party was a real success and I’m pretty confident there will be more in the future!

I’m sure that in the course of the next few weeks, more information will become available through the various blogs and websites as MVPs, Masters and other enthusiasts have digested the vast amount of information distributed at MEC.

I look forward to returning home, getting some rest and starting over again!

Au revoir, Microsoft Exchange Conference. I hope to see you soon!


What’s new in Exchange Server 2013 SP1 (CU4)?

Along with Exchange Server 2010 SP3 Update Rollup 5 and Exchange Server 2007 SP3 Update Rollup 13, Microsoft released Cumulative Update 4 for Exchange Server 2013 – also known as Service Pack 1 – just moments ago. Although much more detail will follow in the days to come, below is a short summary of what’s new and what’s changed in this release. In the upcoming weeks we’ll definitely be taking a closer look at these new features, so make sure to check back regularly!

Goodbye RPC/HTTP and welcome MAPI/HTTP

With Service Pack 1, the Exchange team introduced a new connectivity model for Exchange 2013. Instead of using RPC/HTTP (which has been around for quite a while), they have now introduced MAPI/HTTP. The big difference between the two is that RPC is now cut away, which allows for a more resilient way to connect to Exchange. HTTP is still used for transport, but instead of ‘encapsulating’ MAPI in RPC packets, it’s now transported directly within the HTTP stream.

To enable MAPI/HTTP, run the following command:

Set-OrganizationConfig -MapiHttpEnabled $true
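To double-check the setting afterwards, a quick look at the organization configuration should do:

    Get-OrganizationConfig | Format-List MapiHttpEnabled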

As you can see from the cmdlet, deploying MAPI/HTTP is an “all-or-nothing” approach, which means that you have to plan the deployment carefully. Switching from ‘traditional’ RPC/HTTP to MAPI/HTTP involves users restarting Outlook (yes, the dreadful “Your administrator has made a change…” dialog box is back). Luckily, the feature will – for now? – only work with Office 2013 Service Pack 1. Anyone who isn’t using this version will continue to use RPC/HTTP and will not be required to restart. Just keep it in mind when you upgrade your clients so that you don’t create a storm of calls to your helpdesk…

Anyway, because the feature is disabled by default – and because it traditionally takes a while before new software gets deployed – I don’t expect this feature to be widely used any time soon.

Exchange Admin Center Command Logging

This is one of the most-wanted features ever since Exchange 2013 was released. Previously, the Exchange 2010 Management Console logged all the cmdlets it executed when you performed a task. However, because of the move from the EMC to the new web-based Exchange Admin Center (EAC), this feature disappeared, which caused a lot of protest.

Now, in SP1, the feature – somewhat – returns and gives you the ability to capture the cmdlets the EAC executes whenever you’re using it. The feature itself can be found in the top-right corner of the EAC, when clicking the question mark button:

[screenshot]

Support for Windows Server 2012 R2

Another long-awaited and much-asked-for feature is support for Windows Server 2012 R2. This means that you will be able to deploy Exchange 2013 SP1/CU4 on a server running Microsoft’s latest OS. At the same time, support for Domain Controllers running Windows Server 2012 R2 was also announced, which effectively means you no longer have to wait to upgrade your Domain Controllers!

S/MIME support for OWA

Another feature that existed in Exchange 2010 but didn’t make it into the RTM release of Exchange 2013 is S/MIME support for OWA. Now, however, it’s available again.

The return of the Edge Transport Server Role

It looks like the long-lost son has made his way back into the product – the Edge Transport server role, that is. Although – honestly – the Edge Transport server isn’t a widely deployed server role, at least not in the deployments I come across, it is a feature that is used quite a bit in hybrid deployments. This is mainly because it’s the only supported filtering solution in a hybrid deployment. Any other type of filtering device/service/appliance [in a hybrid deployment] will cause you to do more work and inevitably cause more headaches as well.

This is definitely good news. However, there are some things to keep in mind. First of all, the Edge Transport server doesn’t have a GUI. While this is not much of an issue for seasoned admins, people who are new to Exchange might find the learning curve (PowerShell-only) a little steep.
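For those who haven’t touched the role in a while: managing it is indeed entirely shell-driven. As a rough sketch (the file path and Active Directory site name are just examples), subscribing an Edge server still follows the familiar pattern:

    # On the Edge Transport server: export the subscription file
    New-EdgeSubscription -FileName "C:\EdgeSubscription.xml"

    # On a Mailbox server, after copying the file over: import it and kick off synchronization
    New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\EdgeSubscription.xml" -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"
    Start-EdgeSynchronization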

General Fixes and Improvements

As with every Cumulative Update, this one also contains a bunch of improvements and fixes. More information on the download and the updates can be found here.

Support for SSL Offloading

There’s now also support again for SSL Offloading. This means that you are no longer required to re-encrypt traffic coming from e.g. a load balancer after it has decrypted it first. Although many customers like to decrypt/re-encrypt, there are deployments where SSL Offloading makes sense. Additionally, by offloading SSL traffic you spare some resources on the Exchange server, as it no longer has to decrypt traffic. The downside, however, is that traffic flows unencrypted between the load balancer and the Exchange servers.

DLP Policy Tips in OWA

Data Loss Prevention was one of the new features in Exchange 2013 RTM and was very well received in the market. It allows you to detect whenever sensitive data is being sent and take appropriate action if so. Although DLP policies worked just fine in OWA, you wouldn’t get the Policy Tips (warnings) as they were displayed in Outlook 2013. These tips are – in my opinion – one of the more useful parts of the DLP feature, which is why I find it great that they’ve finally been added to OWA. You’re no longer required to stick to Outlook to get the same experience!

DLP Fingerprinting

As mentioned above, DLP allows you to detect whenever sensitive information is sent via email. However, detecting sensitive information isn’t always easy. Until now, you had to build (complex) regular expressions which would then be evaluated against the content being sent through Exchange. With the DLP Fingerprinting feature, you can now upload a document to Exchange, which will then use that document as a template to evaluate content against. It is a great and easy way to make Exchange recognize certain files or types of files without having to code everything yourself in RegEx!

The DLP Fingerprinting feature can be found under Compliance Management > Data loss prevention > Manage Document Fingerprints.

[screenshot]

A more detailed overview of what DLP Fingerprinting is has already been published on the EHLO blog from the Exchange team: http://blogs.technet.com/b/exchange/archive/2014/02/25/data-loss-prevention-in-exchange-just-got-better.aspx
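If you prefer the shell over the EAC, the document fingerprinting cmdlets can do the same. A minimal sketch, where the file path and names are just examples:

    # Read the template document and create a fingerprint from it
    $template = [System.IO.File]::ReadAllBytes("C:\Templates\PatentTemplate.docx")
    $fingerprint = New-DlpFingerprint -FileData $template -Description "Patent template"

    # Wrap the fingerprint in a custom sensitive information type that DLP rules can reference
    New-DataClassification -Name "Patent Documents" -Fingerprints $fingerprint -Description "Messages that match the patent template"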

Rich text editing in OWA

Outlook Web App is already one of the best web-based email clients available. In its quest to bring more features to OWA and make it even better, the Exchange team has now also added some – maybe less visible – but very welcome improvements. The rich text editing feature is one of them.

For example, you now have more editing capabilities and can easily add items like tables or embedded images:

[screenshot]

Database Availability Group without IP (Administrative Access Point)

Leveraging the new Failover Clustering capabilities in Windows Server 2012 R2, you can now deploy a DAG without an Administrative Access Point (or IP address). This should somewhat simplify the deployment of a Database Availability Group.
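For reference, creating such a DAG from the shell looks roughly like this (server, witness and directory names are placeholders):

    # An IP-less DAG: note the IP address parameter being set to 'None'
    New-DatabaseAvailabilityGroup -Name "DAG01" -WitnessServer "FS01" -WitnessDirectory "C:\DAG01" -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)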

Deploying Service Pack 1

The process for deploying Service Pack 1 isn’t different from any other Cumulative Update. In fact, Service Pack 1 is just another name for Cumulative Update 4. Basically, upgrading a server does a back-to-back upgrade of the build, which means that any customizations you have made to configuration files will most likely be lost. Make sure to back up those changes and don’t forget to re-apply them. This is especially important if you have integrated Lync with Exchange 2013, as this (still) requires you to make changes to one of the web.config files!
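As a precaution, you could copy the customized files aside before running setup. A rough sketch, assuming a default installation path:

    # Back up all web.config files under the Exchange installation directory
    $backup = "C:\Backup\ExchangeWebConfig_$(Get-Date -Format yyyyMMdd)"
    New-Item -ItemType Directory -Path $backup -Force | Out-Null
    Get-ChildItem "C:\Program Files\Microsoft\Exchange Server\V15" -Recurse -Filter web.config | ForEach-Object {
        # Flatten the full path into the file name so copies from different virtual directories don't collide
        Copy-Item $_.FullName (Join-Path $backup ($_.FullName -replace '[:\\]', '_'))
    }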

After you have upgraded the servers, I would suggest rebooting them. Because of the way Managed Availability works, you might sometimes find the Frontend Transport service not working as expected for a while. Typically, a reboot solves the ‘issue’ right away.

Other views

By the time I published this overview, some of the other MVPs had already put some thoughts out there. Make sure to check them out:

Tony Redmond: http://windowsitpro.com/blog/exchange-2013-sp1-mixture-new-and-completed-fixtures

Have fun with it and make sure to check back in the following days as I’ll be zooming in on some of the features I discussed in this article!

-Michael


The limitations of calendar federation in a hybrid deployment

Recently, Loryan Strant (Office 365 MVP) and I joined forces to create an article for the Microsoft MVP blog about some of the limitations of calendar federation in a hybrid Exchange deployment. In the article we discuss how running a hybrid deployment might affect calendar sharing with other organizations and what your options are to work around this limitation.

To read the full article, please click here.

Enjoy!

Michael


You get an error “the connection to the server <servername> could not be completed” when trying to start a hybrid mailbox move in Exchange 2013.

As part of running through the “New Migration Batch” wizard, the remote endpoint (the on-premises Exchange server) is tested for availability. After this step, the following error is displayed:

[screenshot]

By itself, this error message does not reveal much about what might be causing the connection issues. In the background, the wizard actually leverages the Test-MigrationServerAvailability cmdlet. If you run this cmdlet yourself, you will get a lot more information:

[screenshot]
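For a hybrid (remote move) scenario, the call looks roughly like this; the endpoint FQDN and the account below are placeholders for your own environment:

    # Run from Exchange Online PowerShell: test the on-premises migration endpoint
    $cred = Get-Credential "CONTOSO\admin"
    Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer "mail.contoso.com" -Credentials $cred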

In this particular case, you’ll see that the issue is caused by a 501 response from the on-premises server. The question, of course, is: why? We had recently moved a number of mailboxes and did not encounter the issue then. The only thing that had changed between then and now is that we reconfigured the load balancers in front of Exchange to use Layer 7 instead of Layer 4. That is why I shifted my attention to the load balancers.

While reproducing the error, I took a look at the “System Message File” log on the KEMP load balancer. This log can be found under Logging Options > System Log Files. Although I didn’t expect to see much here, the following message drew my attention:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.2.130:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

A quick lookup confirmed that the 157.56.251.92 address was indeed coming from Microsoft, so now I knew for sure that something was wrong here. A quick search on the internet brought me to the following article, which suggested changing the 100-Continue Handling setting in the Layer 7 configuration of the LoadMaster: http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html

After changing the value from its default (RFC Conformant), I could successfully complete the wizard and start a hybrid mailbox move. So the “workaround” was found. But I was left wondering: why does the LoadMaster *think* that the request coming from Microsoft is not RFC compliant?

The first thing I did was ask Microsoft if they could clarify what was happening. I soon got a reply that – from Microsoft’s point of view – they were respecting the RFC documentation regarding the 100 (Continue) status. No surprise there.

After reading the RFC specifications, I decided to take some network traces to find out what was happening and maybe understand how the 501 response was triggered. The first trace I took was one from the LoadMaster itself. In that trace, I could actually see the following:

[screenshot]

Effectively, Office 365 was making a call to Exchange Web Services using the 100-continue status. As described in the RFC, the on-premises Exchange server should now respond appropriately to the 100-continue. Instead, we can see that in the entire SSL conversation, exactly 5 seconds go by, after which Office 365 makes another call to the EWS virtual directory without having received a response to the 100-continue. At that point, the KEMP LoadMaster generated the “501 Invalid Request”.

I turned back to the (by the way, excellent) support guys from KEMP and explained my findings to them. Furthermore, when I tested without Layer 7, or even without a LoadMaster in between, there wasn’t a delay and everything worked as expected. So I knew for sure that the on-premises Exchange 2013 server was actually replying correctly to the 100-continue status. As a matter of fact, without the KEMP LM in between, the entire ‘conversation’ between Office 365 and Exchange 2013 on-premises followed the RFC rules perfectly.

So, changing the 100-continue setting from “RFC Conformant” to “Ignore Continue-100” made sense, as KEMP would now simply ignore the 100-continue “rules”. But I was still interested in finding out why the LM thought the conversation was not RFC conformant in the first place. And this is where it gets interesting. There is this particular statement in the RFC documentation:

“Because of the presence of older implementations, the protocol allows ambiguous situations in which a client may send “Expect: 100-continue” without receiving either a 417 (Expectation Failed) status or a 100 (Continue) status. Therefore, when a client sends this header field to an origin server (possibly via a proxy) from which it has never seen a 100 (Continue) status, the client SHOULD NOT wait for an indefinite period before sending the request body.”

That is exactly what was happening here. Office 365 (the client) sent an initial 100-continue status and waited for a response to that request. In fact, it waits for exactly 5 seconds and then sends the payload, regardless of whether it has received a response. In my opinion, this falls within the boundaries of the scenario described above. However, talking to the KEMP guys, there seems to be a slightly different interpretation of the RFC, which caused this mismatch and therefore the KEMP issuing the 501.

In the end, there is still something we haven’t worked out entirely: why the LM doesn’t send the 100 (Continue) status back to Office 365, even though it receives it almost instantaneously from the Exchange 2013 server.

All in all, the issue was resolved rather quickly and we know that changing the L7 configuration settings in the LoadMaster solves the issue (this workaround was also confirmed as being the final solution by KEMP support, by the way). Again, changing the 100-continue handling setting to “Ignore” doesn’t render the configuration (or the communication between Office 365 and Exchange on-premises) non-RFC compliant. So there’s no harm in changing it.

I hope you found this useful!

-Michael


Exchange Online Archive (EOA): a view from the trenches – part 2

A bit later than expected, here’s finally the successor to the first article about Exchange Online Archiving which I wrote a while ago.

Exchange Online Archives and Outlook

How does Outlook connect to the online archive? Essentially, it’s the same process as with an on-premises archive. The client receives the archive information during the initial Autodiscover process. If you take a look at the response, you will see something similar to this in the output:

[screenshot]

Based on the SMTP address, the Outlook client will now make a second Autodiscover call to retrieve the connection settings for the archive, after which it will try connecting to it. What happens then is exactly the same as how Outlook connects to a regular mailbox in Office 365. Because Exchange Online is configured to use basic authentication for Outlook, the user will be prompted to enter their credentials. It’s particularly important to point this out to your users, as the credential window has no reference to what it’s used for. If you have deployed SSO, users will have to use their UPN (and not domain\username!) in the user field.
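If you want to check what archive-related information a mailbox exposes (essentially what Autodiscover hands back), a quick query helps; the alias below is just an example:

    Get-Mailbox -Identity "jdoe" | Format-List ArchiveState, ArchiveStatus, ArchiveGuid, ArchiveDomain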

Experiences

So far we have covered what Exchange Online Archiving is all about, what the prerequisites are to make it work and how things come together in e.g. Outlook. Now, it’s time to stir things up a little and talk about how things are actually perceived in real life.

First, let me start by pointing out that this feature actually works great, IF you are willing to accept some of the particularities inherent to the solution. What do I mean by particularities?

Latency

Unlike on-premises archives, your archives are now stored ‘in the cloud’, which means that the only way to access them is over the internet. Depending on where you are connecting from, this could be an advantage or a disadvantage. I’ve noticed that connectivity to the archive – and therefore the user experience – is highly dependent on the internet access you have. Rule of thumb: the more bandwidth and the lower the latency, the better it gets. This shouldn’t be a surprise, but it can easily be forgotten. I have found on-premises archives to be more responsive in terms of initial connectivity and retrieval of content. This brings me to the second point: speed.

Speed

As you are connecting over the internet, the speed of fetching content is highly dependent on the speed of your internet connection (see a similarity here?). The bigger the message or attachment you want to download, the longer it will take. Truth be told, you’ll have the same experience while accessing an on-premises archive from a remote location, so it’s not something exclusive to Office 365.

Outlook

To be honest, Outlook does a relatively good job of working with the archive – at least when you deal with it the way it was designed. If you let Exchange sync expired items to your archive using the Managed Folder Assistant, your life will be great! However, if you dare to manually drag & drop messages from your primary mailbox into the archive, you’ll be in for a surprise. Outlook treats such an operation as a “foreground” action, which means that you will have to wait for it to complete before you can do anything else in Outlook. The problem is that if you choose to manually move a 4 MB message to the archive, it could take as long as 20 to 30 seconds (depending on your internet connection) before the action completes. To make things worse, during this operation Outlook freezes, and if you try clicking something it will (temporarily) go into a “Not Responding…” state until the operation completes. According to Microsoft support, this is by design. So, as a precaution: advise your users NOT to drag & drop messages; just let Exchange take care of it – something it does marvelously, by the way.

I have found that proper end-user education is also key here. If users are well informed about how the archive works and have had some training on how to use retention tags, they’ll be on their way in no time!

Provisioning

Related to the problem I described above, the initial provisioning process can be an issue. When you first enable an archive, chances are that a lot of items will be moved to it. Although this process is handled by the MFA, if your mailbox is open while the MFA processes it, Outlook might become unresponsive or, at the very least, extremely slow – this is because the changes need to be synced to the client’s OST file (when running in cached mode, at least). Instead, it’s better to provision the archive on-premises, let the MFA do its work and then move the archive to Office 365 (see the sketch below). The latter approach works like a charm and doesn’t burden the user with an unresponsive Outlook client. If you are going to provision archives on-premises first, you might find it useful to estimate the size of an archive before diving in head first.
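A rough sketch of that approach, with placeholder names for the mailbox, migration endpoint and routing domain: enable the archive on-premises first and, once the Managed Folder Assistant has done its work, move only the archive to Exchange Online.

    # On-premises: enable a local archive for the user
    Enable-Mailbox -Identity "jdoe" -Archive

    # Later, from Exchange Online PowerShell: onboard just the archive
    $cred = Get-Credential "CONTOSO\admin"
    New-MoveRequest -Identity "jdoe@contoso.com" -ArchiveOnly -Remote -RemoteHostName "mail.contoso.com" -RemoteCredential $cred -TargetDeliveryDomain "contoso.mail.onmicrosoft.com"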

Search

This is a short one. Search is great. Because Outlook and Exchange can do cross-premises searches, you will be able to search both your primary mailbox and archive mailbox at once. I haven’t had many issues here. So: thumbs up!

Other Tips & Tricks

General (best) practices

Other than the particularities above, you shouldn’t do anything differently compared to ‘regular’ on-premises archives. Try not to overwhelm your users with a ginormous amount of retention tags. Instead, offer them a few tags they can use and – if necessary – adapt based on user feedback.
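For instance, a small set of tags wrapped in a policy could look like this (names and age limits are just examples):

    # A default tag that moves items to the archive after one year
    New-RetentionPolicyTag -Name "Move to archive after 1 year" -Type All -RetentionAction MoveToArchive -AgeLimitForRetention 365

    # An optional personal tag users can apply themselves
    New-RetentionPolicyTag -Name "Move to archive after 6 months" -Type Personal -RetentionAction MoveToArchive -AgeLimitForRetention 180

    # Tie the tags together in a retention policy and assign it to a mailbox
    New-RetentionPolicy -Name "Archive Policy" -RetentionPolicyTagLinks "Move to archive after 1 year","Move to archive after 6 months"
    Set-Mailbox -Identity "jdoe" -RetentionPolicy "Archive Policy"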

Autodiscover

Given that both Outlook and Exchange depend on Autodiscover to make the online archive work, you should make sure that Autodiscover is working for your Exchange deployment AND that your Exchange servers are able to query Office 365’s Autodiscover service successfully as well.

This is especially important if you are using Outlook Web App (OWA) to access your online archive. In that case, it’s not Outlook but Exchange that performs the Autodiscover lookup and connects to the archive. If your internet connection isn’t working properly, or you have some sort of forward authenticating proxy server in between, things might not work (or only work intermittently).
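A crude way to check the latter from an Exchange server is to simply probe an Office 365 Autodiscover endpoint over HTTPS. The hostname below is only an example, and an HTTP error such as 401 still proves the endpoint can be reached:

    try {
        Invoke-WebRequest -Uri "https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml" -UseBasicParsing | Out-Null
        Write-Host "Office 365 Autodiscover endpoint is reachable"
    } catch {
        if ($_.Exception.Response) {
            # An HTTP status (e.g. 401 Unauthorized) means the endpoint answered, so connectivity is fine
            Write-Host ("Endpoint reachable (HTTP {0})" -f [int]$_.Exception.Response.StatusCode)
        } else {
            Write-Host ("Could not reach the endpoint: {0}" -f $_.Exception.Message)
        }
    }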

Implement it gradually

As described above, it’s a bad idea to grant everyone a new cloud-based archive at once. It will not only put a heavy load on your internet connection, but it will also affect your users. Instead, try to implement the solution gradually and request feedback from your users. Start with on-premises archives and move them to the cloud in batches, for instance.

DirSync is utterly important!

As described in the prerequisites section, DirSync is very important for online archives, so make sure you monitor it closely. If you have issues with DirSync, you will inevitably also have issues with creating archives. Issues with DirSync won’t interfere with archives that have already been provisioned, though.

Conclusion

Is Exchange Online Archiving going to solve all your issues? Probably not. Is it a good solution? Yes, absolutely! I have been using Exchange Online Archiving for quite a while and I’m quite happy with it. I rarely encounter any issues, but I have also learned to live with some of the particularities I mentioned earlier. Also, I treat my archive as a real archive: the stuff that’s in there is usually things I don’t need all that often. So the little latency overhead I experience while browsing or searching my archive is something I’m not bothered by. However, if I had to work with items from my archive day in, day out, I’d probably have a lot more trouble adjusting to the fact that it’s less snappy than an on-premises archive.

So remember: set your (or your customer’s) expectations straight and you’ll enjoy the ride. If not, there might be some bumpy roads ahead!


Estimating the size of an Exchange (online) Archive

As part of some of the (archiving) projects I have worked on, I frequently get asked whether there is an easy way to determine what the size of an archive will be once it’s been activated. Although it seems a bit odd at first, there are actually many good reasons why you’d want to know how big an archive will be.

First of all, determining the archive size allows you to better size (or plan for) the storage required for the archives. While there are other ways to do this, knowing how big an archive will be when enabled is very helpful.

Secondly, if you’re using Exchange Online Archiving (EOA), it allows you to determine the amount of data that will pass through your internet connection for a specific mailbox. If the amount of data is large (compared to the available bandwidth), I personally prefer to provision an archive on-premises, after which I move it to Office 365 using MRS. But that’s another discussion. Especially for this scenario, it can be useful to know how much archive data you can (temporarily) host on-premises before sending it off to Office 365 and freeing up disk space again.

In order to calculate how big an archive would be, I’ve created a script that goes through all the items in one (or more) mailbox(es) and calculates the total size of all the items that will expire. When an item expires (and thus is eligible to be moved to the archive) depends on the retention policy you assign to a mailbox and which retention policy tags are included in that policy.

As the name of the script indicates, it’s important to understand that it’s an estimation of the archive size. There are situations in which the results of the script will differ from the real world. This could be the case when you enable the archive and a user assigns personal tags to items before the Managed Folder Assistant has processed the mailbox. In such a scenario, items with a retention tag that differs from the AgeLimit defined in the script will be calculated wrongly. Then again, the script is meant to be run before an archive is created.

Secondly, the script will go through all the folders in a mailbox. If you have disabled archiving of calendar items, these items will wrongly be included in the calculation as well. I will try to build this into the script in future releases, but it has a lower priority, as the script was built to provide a pretty good estimation, not a 100% correct number.

The script, which you can download here, accepts multiple parameters:

  • UserPrimarySMTPAddresses: the Primary SMTP Address of the mailbox for which you want to estimate the archive size
  • Report: full file path to a txt file which will contain the archive sizes
  • AgeLimit: the retention time (in days) against which items should be probed. If you have a 60-day retention before items get moved to the archive, enter 60.
  • Server: used for connecting with EWS. Optional; can be used if Autodiscover is unable to determine the connection URI.
  • Credentials: the credentials of an account that has the ApplicationImpersonation management role assigned to it.

The output of the script will be an object that contains the user’s Primary SMTP Address and the size of the archive in MB (TotalPRMessageSize).
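To give an idea of how a run might look, here is a hypothetical invocation based on the parameters above; the script file name and addresses are placeholders, so substitute your own:

    # The script name below is a placeholder for whatever the downloaded script is called
    $cred = Get-Credential   # an account holding the ApplicationImpersonation role
    .\Estimate-ArchiveSize.ps1 -UserPrimarySMTPAddresses "jdoe@contoso.com" -AgeLimit 60 -Report "C:\Temp\ArchiveSizes.txt" -Credentials $cred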

Credit where credit is due! I would like to thank Michel de Rooij for his insanely good PowerShell scripting skills and for helping me clean up this script into its current form. Before I sent it off to Michel, the code was pretty inefficient [but hey! it was working]; what you’ll download has been cleaned up and greatly enhanced. Now you have clean code, additional error handling and some more parameters than in my original script [see parameters above].

I hope you’ll enjoy the script and find it useful. I’ve used it in multiple projects so far and it has really helped me with planning the provisioning of archives.

Note:  To run the script, you’ll need to have Exchange Web Services installed and run it with an account that has the Application Impersonation Management Role assigned to it.

Cheers,

Michael


Help Exchange become a better product

In the Exchange community’s never-ending efforts to help Microsoft increase the quality and usability of Exchange, a new website was recently created where YOU can send in and rate ideas for features and changes you would like to see in the next version(s) or updates of Exchange.

It’s important to understand that this is a community effort to convince Microsoft to take a look at some heavily requested features. Therefore, the more feedback we get, the better! If there’s enough feedback, I’m confident we’ll be able to reach at least a few people in Redmond!

If you’ve been around long enough, you will see that some of the ideas have been lingering around for quite some time. With your help, we might just make enough noise for Microsoft to notice!

Right now, having the ability to centrally manage signatures and bringing back the Set-MailboxSentItemsConfiguration cmdlet (why was it removed in the first place?) are at the top of the list. And they should be! If you think they should be in Exchange, or you have other feature requests, feel free to vote for them so that Microsoft can see how important some features are to us.

Now, before doing anything else, have a look at exchange.ideascale.com and make your contribution to the list!

Cheers,

Michael


Exchange 2013 Cumulative Update 3 and Exchange 2010 SP3 RU3 released

Microsoft just released its quarterly software updates for Exchange Server 2010 and Exchange Server 2013. You can download the latest updates through the following links:

Exchange 2013 Cumulative Update 3

After some issues with Cumulative Update 2, which had to be pulled and re-released, Microsoft put more effort into testing and validating CU3 before releasing it to the public. That is one of the reasons why it took a little longer than expected for CU3 to become available. A good thing, which hopefully pays off in a stable update without any (major) issues! CU3 introduces a bunch of new features to Exchange 2013, among which are:

  • Improved experience for Group Management in EAC
  • Integration with Online RMS for on-premises-only deployments
  • Improved Admin Audit Logging

As you can see, there’s quite some new – and interesting – stuff in CU3, which makes it definitely worth taking a closer look at. I’m particularly interested in finding out more about the RMS Online integration (which is a good thing!). Next to a bunch of new features, there are also some important bug fixes in CU3:

  • KB2888315 Event 2112 or 2180 is logged when you try to back up a database in an Exchange Server 2013 environment
  • KB2874216 Security issue that is described in Security Bulletin MS13-061 is resolved by an Exchange Server update
  • KB2902929 You cannot forward an external meeting request in an Exchange Server 2013 environment
  • KB2890814 No redirection to the Outlook Web App URL for Exchange Online users in an Exchange hybrid deployment
  • KB2883203 Exchange Server 2013 restarts frequently after Cumulative Update 2 is installed

A complete list of the most important bug fixes can be found here.

Deploying CU3

Deploying CU3 is similar to deploying previous CUs. Just like these previous CUs, CU3 also includes Active Directory schema updates. For more information on how to deploy a Cumulative Update, have a look at Paul Cunningham’s blog here.

How about Exchange 2013 Service Pack 1?

As a side note to this release, Microsoft previously announced that Exchange Server 2013 Cumulative Update 4 would be released as Service Pack 1. Given the three-month cadence in which Cumulative Updates are expected to be released, that puts Service Pack 1 around the February/March 2014 timeframe – assuming the CU release cadence is respected. This is a little earlier than I anticipated, to be honest. I expected SP1 not to be released until the Microsoft Exchange Conference in April (which – now that I come to think of it – is merely a month later). I, for one, am looking forward to “SP1”; this is usually a milestone that many companies wait for before deploying a new server product like Exchange. Traditionally, Service Packs were used to introduce a bucket of new features to the product along with other improvements. Given that each Cumulative Update so far has added functionality, I wonder if SP1 (Cumulative Update 4) will have the same impact as Service Packs have had with previous releases…

Exchange 2010 SP3 Update Rollup 3

This latest Update Rollup for Exchange 2010 Service Pack 3 contains a rather long list of bug fixes. Among these fixes, I found the following ones to stand out, mainly because I have faced them a few times myself:

  • KB2839533 RPC Client Access service freezes in an Exchange Server 2010 environment
  • KB2887609 Hybrid Configuration wizard does not display the Domain Proof of Ownership list in an Exchange Server 2010 SP3 environment

A complete list of the most important fixes can be found here. (Note: the content of this link may not yet be available.) Have fun!


Microsoft rereleases MS13-061 Security Update for Exchange 2013

After last week’s debacle, where Security Update MS13-061 went (really) bad and had to be pulled, Microsoft rereleased the update today. This new version – let’s call it v2 for a change (notice the sarcasm) – contains a minor change, albeit one that makes a huge difference…

The initial version caused some registry settings to be overwritten incorrectly, whereas this version corrects that and keeps the registry settings (as it should). The details of these registry settings can be found here: KB 2879739

The update can be found below:

For more information, please consult the original announcement by the Exchange Product Team.
