This was MEC 2014 (in a nutshell)

As things wind down after a week full of excitement and – yes, in some cases – emotion, MEC 2014 is coming to an end. Lots of attendees have already left Austin, and those who stayed behind are sharing a few last drinks before making their way back home as well. As good as MEC 2012 in Orlando was, MEC 2014 was E-P-I-C. Although some might say the conference got off to a slow start – despite the great Dell Venue Pro 8 tablet giveaway – you cannot ignore the success of the rest of the week.

With over 100 unique sessions, MEC was packed with tons and tons of quality information. Seeing that amount of content delivered by the industry’s top speakers is a truly unique experience. After all, at how many conferences is the PM or lead developer presenting the content on a specific topic? Microsoft also did a fairly good job of keeping a balance between the different types of sessions by having a mix of Microsoft employees presenting sessions that reflected their view on things (“How things should work / How it’s designed to be”) and MVPs and Masters presenting a more practical approach (“How it really works”).

I also liked the format of the “unplugged” sessions, where you could interact with members of the Product Team to discuss a variety of topics. I believe these sessions are not only very interesting (tons of great information), but they are also an excellent way for Microsoft to connect with the audience and receive immediate feedback on what is going on “out there”. For example, I’m sure that the need for better guidance or maybe a GUI for Managed Availability is a message that was well conveyed, and Microsoft should use this feedback to prioritize some of the efforts going into development. Whether that will happen, only time will tell…

This edition wasn’t only a success because of the content, but also because of the interactions. It was good to see some old friends and make many new ones. To me, conferences like this aren’t only about learning, but also about connecting with other people and networking. There were tons of great talks – some of which have given me food for thought and blog posts.

Although none of them might seem earth-shattering, MEC had a few announcements and key messages, some of which I’m very happy about:

  • Multi-Factor Authentication and SSO are coming to Outlook before the end of the year. On-premises deployments can expect support for it next calendar year.
  • Exchange Sizing Guidance has been updated to reflect some of the new features in Exchange 2013 SP1:
    • The recommended page file size is now 32,778 MB if your Exchange server has more than 32 GB of memory. It should still be a fixed size and not managed by the OS (see the sketch after this list).
    • CAS CPU requirements have increased by 50% to accommodate MAPI/HTTP. They are still lower than for Exchange 2010.
  • If you didn’t know it before, you will now: NFS is not supported for hosting Exchange data.
  • The recommended Exchange deployment uses 4 database copies: 3 regular and 1 lagged, with the FSW preferably in a 3rd datacenter.
  • Increased emphasis on using a lagged copy.
  • An OWA app for Android is coming.
  • OWA in Office 365 will get a few new features, including Clutter, People View and Groups. No word on if and when these will be made available to on-premises customers.
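For those wondering how to set a fixed page file through PowerShell, here’s a minimal sketch (my own example, not official guidance; it assumes an elevated shell and an existing page file on the system drive, and a reboot is needed afterwards):

# Stop the OS from managing the page file
$cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $false
$cs.Put()

# Set a fixed size of 32,778 MB
$pf = Get-WmiObject Win32_PageFileSetting
$pf.InitialSize = 32778
$pf.MaximumSize = 32778
$pf.Put()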

By now, it’s clear that Microsoft’s development cycle is based on a cloud-first model, which – depending on your take on things – makes a lot of sense. This topic was also discussed during the live recording of The UC Architects; I recommend you have a listen (as soon as it’s available) to hear how The UC Architects, Microsoft and the audience feel about this. Great stuff!

It’s also interesting to see some trends developing. “Enterprise Social” is probably one of the biggest trends at the moment. With Office Graph having recently been announced, I am curious to see how Exchange will evolve to embrace the so-called “Social Enterprise”. Features like Clutter, People View and Groups are already good examples of this.

Of course, MEC wasn’t all about work. There was also time for fun. Lots of it. The format of the attendee party was a little atypical for a conference. Usually all attendees gather at a fairly large location. This time, however, the crowd was scattered across several bars on Rainey Street, which Microsoft had rented out. Although I was a little skeptical at first, it actually worked really well and I had tons of fun.

Then there was the UC Architects party which ENow graciously offered to host for us. The Speakeasy rooftop was really amazing and the turnout even more so. The party was a real success and I’m pretty confident there will be more in the future!

I’m sure that in the course of the next few weeks, more information will become available through the various blogs and websites as MVPs, Masters and other enthusiasts have digested the vast amount of information distributed at MEC.

I look forward to returning home, getting some rest and starting over again!

Au revoir, Microsoft Exchange Conference. I hope to see you soon!

Blog Events Exchange Exchange 2013 Microsoft Exchange Conference 2014 Office 365

Why MEC is the place to be for Exchange admins/consultants/enthusiasts!

In less than a month, the 2014 edition of the Microsoft Exchange Conference will kick off in Austin, Texas. For those who haven’t decided whether they will be going yet, here are some reasons why you should.

The Value of Conferences

Being someone who frequently attends conferences, I *think* I’m in a position to say that conferences provide great value. Typically, you can get up to date with the latest (and greatest) technology in IT.

Often, the cost of attending a conference is estimated to be higher than that of a traditional 5-day course. However, I find this not to be true – at least not all the time. It is true that – depending on where you fly in from – travel and expenses might add to the cost. However, I think it is a good thing to be ‘away’ from your daily work environment. That typically leaves one less tempted to be preoccupied with work rather than soaking in the knowledge shared throughout the conference. The experience is quite different from a training course. Conferences might not provide the exact same information as a training, but you’ll definitely be able to learn more (different) things. Especially if your skills in a particular product are already well developed, conferences are the place to widen your knowledge.

On top of that, classroom trainings don’t offer you the same networking opportunities. In the case of MEC, for instance, there will be a bunch of Exchange MVPs and Masters who you can talk to – all of them very knowledgeable, and I’m sure they won’t mind a good discussion on Exchange! This could be your opportunity to ask some really difficult questions or just hear their opinion on a specific issue. Sometimes the insights of a third person can make a difference!

It is also the place where all the industry experts meet. Like I mentioned earlier, there will be Masters and MVPs, but a lot of people from within Microsoft’s Exchange Product Group will be there as well. Who better to ask your questions?

Great Content

Without any doubt, the Exchange Conference will be the place in 2014 to learn about what’s happening with Exchange. Service Pack 1 – or Cumulative Update 4, if you will – has just been released, and as you might’ve read, there are many new things to discover.

At the same time, it’s been almost 1.5 years since Exchange 2013 was released, and quite a few sessions focus on deployment and migration. If you’re looking to migrate soon, or if you’re a consultant migrating other companies, I’m sure you’ll get a lot of value from these sessions, as they will provide you with first-hand information. When MEC 2012 was held – shortly before the launch of Exchange 2013 – this wasn’t really possible, as there weren’t many deployments out there.

Sure, one might argue that the install base for Exchange 2013 is still low. However, if you look back, deployments of Exchange 2010 only really kicked off once it was past the SP1 era. And I expect nothing different for Exchange 2013.

As a reference: here’s a list of sessions I definitely look forward to:

And of course the “Experts unplugged” sessions:

I realize that’s way too many sessions already and I will probably have to choose which ones I will be able to attend…
But the fact that I have so many only proves that there’s so much valuable information at MEC…

Great speakers

I’ve had a look through who is speaking at MEC, and I can only conclude that there is a TON of great speakers – all of whom, I am sure, will make it worth the while. While Microsoft speakers will most likely give you an overview of how things are supposed to work, many of the MVPs have sessions scheduled which might give you a slightly less biased view of things. The combination of both makes for a good mix to get you started on the new stuff and broaden your knowledge of what was already there.

Location

Austin, Texas. I haven’t been there myself, but based on what Exchange Master Andrew Higginbotham blogged a few days ago, it looks promising!

Microsoft has big shoes to fill. MEC 2012 was a huge success, and people are expecting the same – if not better – from MEC 2014. Additionally, for those who were lucky enough to attend the Lync Conference in Vegas earlier this month, that is quite something for MEC to compete with. Knowing the community and the people behind MEC, I’m pretty confident this edition will be EPIC.

See you there!

Michael

Blog Exchange 2013 Microsoft Exchange Conference 2014 News Office 365 Uncategorized

What’s new in Exchange Server 2013 SP1 (CU4)?

Along with Exchange Server 2010 SP3 Update Rollup 5 and Exchange Server 2007 SP3 Update Rollup 13, Microsoft released Cumulative Update 4 for Exchange Server 2013 – also known as Service Pack 1 – just moments ago. Although much more detail will follow in the days to come, below is a short summary of what’s new and what’s changed in this release. In the upcoming weeks we’ll definitely be taking a closer look at these new features, so make sure to check back regularly!

Goodbye RPC/HTTP and welcome MAPI/HTTP

With Service Pack 1, the Exchange team introduced a new connectivity model for Exchange 2013. Instead of using RPC/HTTP (which has been around for quite a while), they have now introduced MAPI/HTTP. The big difference between the two is that RPC is now cut away, which allows for a more resilient way to connect to Exchange. HTTP is still used for transport, but instead of ‘encapsulating’ MAPI in RPC packets, it’s now transported directly within the HTTP stream.

To enable MAPI/HTTP, run the following command:

Set-OrganizationConfig -MapiHttpEnabled $true
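To verify the change afterwards, something along these lines should do the trick (and you can always revert by setting the value back to $false):

Get-OrganizationConfig | Format-List MapiHttpEnabled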

As you can see from the cmdlet, deploying MAPI/HTTP is an “all-or-nothing” approach. This means that you have to plan the deployment carefully. Switching from ‘traditional’ RPC/HTTP to MAPI/HTTP involves users restarting their Outlook (yes, the dreadful “Your administrator has made a change…” dialog box is back). Luckily, the feature will – for now? – only work with Office 2013 Service Pack 1. Anyone who isn’t using this version will continue to use RPC/HTTP and will not be required to restart. Just keep it in mind when you upgrade your clients so that you don’t create a storm of calls to your helpdesk…

Anyway, because the feature is disabled by default – and because it traditionally takes a while before new software gets deployed – I don’t expect this feature to be widely used any time soon.

Exchange Admin Center Command Logging

This is one of the most-wanted features ever since Exchange 2013 was released. Previously, the Exchange 2010 Management Console logged all the cmdlets it executed when you performed a task. However, with the move from the EMC to the new web-based Exchange Admin Center (EAC), this feature disappeared, which caused a lot of protest.

Now, in SP1, the feature – somewhat – returns and gives you the ability to capture the cmdlets the EAC executes whenever you’re using it. The feature itself can be found in the top-right corner of the EAC, when clicking the question mark button:

[screenshot]

Support for Windows Server 2012 R2

Another long-awaited and much-asked-for feature is the support for Windows Server 2012 R2. This means that you will be able to deploy Exchange 2013 SP1/CU4 on a server running Microsoft’s latest OS. At the same time, the support for Domain Controllers running Windows Server 2012 R2 was also announced. This effectively means that you no longer have to wait to upgrade your Domain Controllers!

S/MIME support for OWA

Another feature that existed in Exchange 2010 but didn’t make the bar for the RTM release of Exchange 2013 is S/MIME support for OWA. Now, however, it’s available again.

The return of the Edge Transport Server Role

It looks like the long-lost son has made his way back into the product: the Edge Transport Server role, that is. Although – honestly – the Edge Transport Server isn’t a widely deployed server role – at least not in the deployments I come across – it is a feature that is used quite a bit in hybrid deployments. This is mainly because it’s the only supported filtering solution in a hybrid deployment. Any other type of filtering device/service/appliance [in a hybrid deployment] will cause you to do more work and inevitably cause more headaches as well.

This is definitely good news. However, there are some things to keep in mind. First of all, the Edge Transport server doesn’t have a GUI. While this is not much of an issue for seasoned admins, people who are new to Exchange might find the learning curve (PowerShell-only) a little steep.

General Fixes and Improvements

As with every Cumulative Update, this one also contains a bunch of improvements and fixes. More information about the download and the updates can be found here.

Support for SSL Offloading

There’s now also support again for SSL Offloading. This means that you are no longer required to re-encrypt traffic coming from e.g. a load balancer after it has decrypted it. Although many customers like to decrypt/re-encrypt, there are deployments where SSL Offloading makes sense. Additionally, by offloading SSL traffic you spare some resources on the Exchange server, as it no longer has to decrypt traffic. The downside, however, is that traffic flows unencrypted between the load balancer and the Exchange servers.

DLP Policy Tips in OWA

Data Loss Prevention was one of the new features in Exchange 2013 RTM and was very well received in the market. It allows you to detect whenever sensitive data is being sent and to take appropriate action. Although DLP policies worked just fine in OWA, you wouldn’t get the Policy Tips (warnings) as they were displayed in Outlook 2013. These tips are – in my opinion – one of the more useful parts of the DLP feature, which is why I find it great they’ve finally been added to OWA. You’re no longer required to stick to Outlook to get the same experience!

DLP Fingerprinting

As mentioned above, DLP allows you to detect whenever sensitive information is sent via email. However, detecting sensitive information isn’t always easy. Until now, you had to build (complex) Regular Expressions which would then be evaluated against the content being sent through Exchange. With the DLP Fingerprinting feature, you can now upload a document to Exchange, which will then use that document as a template to evaluate content against. It is a great and easy way to make Exchange recognize certain files / types of files without having to code everything yourself in RegEx!

The DLP Fingerprinting feature can be found under Compliance Management > Data loss prevention > Manage Document Fingerprints.

[screenshot]

A more detailed overview of what DLP Fingerprinting is has already been published on the EHLO blog from the MS Exchange team: http://blogs.technet.com/b/exchange/archive/2014/02/25/data-loss-prevention-in-exchange-just-got-better.aspx

Rich text editing in OWA

Outlook Web App is already one of the best web-based email clients available. In an effort to bring more features to OWA and make it even better, the Exchange team has now also added some – maybe less visible – but very welcome improvements. The rich text editing feature is one of them.

For example, you now have more editing capabilities and you can easily add items like tables or embedded images:

[screenshot]

Database Availability Group without IP (Administrative Access Point)

Leveraging the new Failover Clustering capabilities in Windows Server 2012 R2, you can now deploy a DAG without an Administrative Access Point (or IP address). This should somewhat simplify the deployment of a Database Availability Group.
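For reference, creating such an IP-less DAG should look something like this (a sketch – the DAG and witness server names are placeholders):

New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS01 -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)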

Deploying Service Pack 1

The process for deploying Service Pack 1 isn’t different from any other Cumulative Update. In fact, Service Pack 1 is just another name for Cumulative Update 4. Basically, upgrading a server does a back-to-back upgrade of the build, which means that any customizations you have made to configuration files will most likely be lost. Make sure to back up those changes and don’t forget to re-apply them. This is especially important if you have integrated Lync with Exchange 2013, as this (still) requires you to make changes to one of the web.config files!
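As a precaution, you could copy the relevant configuration files aside before running setup – a minimal sketch, assuming a default installation path and an existing backup folder (adjust both to your environment):

# Hypothetical example – paths may differ in your deployment
$v15 = "C:\Program Files\Microsoft\Exchange Server\V15"
Copy-Item "$v15\ClientAccess\Owa\web.config" "C:\ConfigBackup\Owa-web.config"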

After you have upgraded the servers, I would suggest you reboot them. Because of the way Managed Availability works, you might sometimes find the Frontend Transport service not working as expected for a while. Typically, a reboot solves the ‘issue’ right away.

Other views

By the time I published this overview, some of the other MVPs had already put some thoughts out there. Make sure to check them out:

Tony Redmond: http://windowsitpro.com/blog/exchange-2013-sp1-mixture-new-and-completed-fixtures

Have fun with it, and make sure to check back in the following days as I’ll be zooming in on some of the features I discussed in this article!

-Michael

Blog Exchange 2013 News

The limitations of calendar federation in a hybrid deployment

Recently, Loryan Strant (Office 365 MVP) and I joined forces to create an article for the Microsoft MVP blog regarding some of the limitations of calendar federation in a hybrid Exchange deployment. In the article we discuss how running a hybrid deployment might affect calendar sharing with other organizations and what your options are to work around this limitation.

To read the full article, please click here.

Enjoy!

Michael

Blog Exchange 2013 Hybrid Exchange Office 365

You get an error “the connection to the server <servername> could not be completed” when trying to start a hybrid mailbox move in Exchange 2013.

As part of running through the “New Migration Batch” wizard, the remote endpoint (the on-premises Exchange server) is tested for availability. After this step, the following error is displayed:

[screenshot]

By itself, this error message does not reveal much about what might be causing the connection issue. In the background, the wizard actually leverages the Test-MigrationServerAvailability cmdlet. If you run this cmdlet yourself, you will get a lot more information:
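For those who want to try this themselves, the call looks along these lines (the endpoint and credentials are placeholders; run it from an Exchange Online PowerShell session):

$cred = Get-Credential contoso\admin
Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer mail.contoso.com -Credentials $cred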

[screenshot]

In this particular case, you’ll see that the issue is caused by a 501 response from the on-premises server. The question is, of course: why? We had recently moved a number of mailboxes without encountering the issue. The only thing that had changed between then and now was that we had reconfigured the load balancers in front of Exchange to use Layer 7 instead of Layer 4. So that is why I shifted my attention to the load balancers.

While reproducing the error, I took a look at the “System Message File” log on the KEMP load balancer. This log can be found under Logging Options > System Log Files. Although I didn’t expect to see much there, the following message drew my attention:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.2.130:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

A quick lookup confirmed that the 157.56.251.92 address was indeed coming from Microsoft, so now I knew for sure that something was wrong here. A quick search on the internet brought me to the following article, which suggested changing the 100-Continue Handling in the Layer 7 configuration of the Load Master: http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html

After changing the value from its default (RFC Conformant), I could successfully complete the wizard and start a hybrid mailbox move. So the “workaround” was found. But I was wondering: why does the Load Master *think* that the request coming from Microsoft is not RFC compliant?

The first thing I did was ask Microsoft if they could clarify what was happening. I soon got a reply that – from Microsoft’s point of view – they were respecting the RFC documentation regarding the 100 (Continue) status. No surprise there.

After reading the RFC specifications, I decided to take some network traces to find out what was happening and maybe understand how the 501 response was triggered. The first trace I took was one from the Load Master itself. In that trace, I could actually see the following:

[screenshot]

Effectively, Office 365 was making a call to Exchange Web Services using the 100-continue status. As described in the RFC, the Exchange on-premises server should respond appropriately to the 100-continue status. Instead, we can see that in the entire SSL conversation exactly 5 seconds go by, after which Office 365 makes another call to the EWS virtual directory without having received a response to the 100-continue status. At that point, the KEMP Load Master generated the “501 Invalid Request”.

I turned back to the (by the way, excellent) support guys from KEMP and explained my findings to them. Furthermore, when I tested without Layer 7, or even without a Load Master in between, there wasn’t a delay and everything worked as expected. So I knew for sure that the on-premises Exchange 2013 server was actually replying correctly to the 100-continue status. As a matter of fact, without the KEMP LM in between, the entire ‘conversation’ between Office 365 and Exchange 2013 on-premises was perfectly following the RFC rules.

So, changing the 100-continue setting from “RFC Conformant” to “Ignore Continue-100” made sense, as KEMP would now just ignore the 100-continue “rules”. But I was still interested in finding out why the LM thought the conversation was not RFC conformant in the first place. And this is where it gets interesting. There is this particular statement in the RFC documentation:

“Because of the presence of older implementations, the protocol allows ambiguous situations in which a client may send “Expect: 100- continue” without receiving either a 417 (Expectation Failed) status or a 100 (Continue) status. Therefore, when a client sends this header field to an origin server (possibly via a proxy) from which it has never seen a 100 (Continue) status, the client SHOULD NOT wait for an indefinite period before sending the request body.”

In fact, that is exactly what was happening here. Office 365 (the client) sent an initial 100-continue status and waited for a response to that request. In fact, it waits for exactly 5 seconds and then sends the payload, regardless of whether it has received a response. In my opinion, this falls within the boundaries of the scenario described above. However, talking to the KEMP guys, there seems to be a slightly different interpretation of the RFC, which caused this mismatch and therefore the KEMP issuing the 501.

In the end, there is still something we haven’t worked out entirely: why the LM doesn’t send the Continue-100 status back to Office 365, even though it receives it almost instantaneously from the Exchange 2013 server.

All in all, the issue was resolved rather quickly, and we know that changing the L7 configuration settings in the Load Master solves the issue (this workaround was also confirmed as the final solution by KEMP support, by the way). Again, changing the 100-continue handling setting to “Ignore” doesn’t render the configuration (or the communication between Office 365 and Exchange on-premises) non-RFC compliant. So there’s no harm in changing it.

I hope you found this useful!

-Michael

Blog Exchange 2013 Hybrid Exchange Office 365

Exchange Online Archive (EOA): a view from the trenches – part 2

A bit later than expected, here’s finally the successor to the first article about Exchange Online Archiving which I wrote a while ago.

Exchange Online Archives and Outlook

How does Outlook connect to the online archive? Essentially, it’s the same process as with an on-premises archive. The client receives the archive information during the initial Autodiscover process. If you take a look at the response, you will see something similar in the output:

[screenshot]

Based on the SMTP address, the Outlook client will now make a second Autodiscover call to retrieve the connection settings for the archive, after which it will try connecting to it. What happens then is exactly the same as how Outlook would connect to a regular mailbox in Office 365. Because Exchange Online is configured to use basic authentication for Outlook, the user will be prompted to enter their credentials. It’s particularly important to point this out to your users, as the credential window will have no reference to what it’s used for. If you have deployed SSO, users will have to use their UPN (and not domain\username!) in the user field.
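By the way, if you want to verify from the shell whether a user’s archive was provisioned correctly, something like this should do (the identity is a placeholder):

Get-Mailbox -Identity michael | Format-List ArchiveState, ArchiveStatus, ArchiveGuid, ArchiveDomain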

Experiences

So far we have covered what Exchange Online Archiving is all about, what the prerequisites are to make it work and how things come together in e.g. Outlook. Now, it’s time to stir things up a little and talk about how things are actually perceived in real life.

First, let me start by pointing out that this feature actually works great, IF you are willing to accept some of the particularities inherent to the solution. What do I mean by particularities?

Latency

Unlike on-premises archives, your archives are now stored ‘in the cloud’, which means that the only way to access them is over the internet. Depending on where you are connecting from, this could be an advantage or a disadvantage. I’ve noticed that connectivity to the archive – and therefore the user experience – is highly dependent on the internet access you have. Rule of thumb: the more bandwidth and the lower the latency, the better it gets. This shouldn’t be a surprise, but it can easily be forgotten. I have found on-premises archives to be more responsive in terms of initial connectivity and retrieval of content. Which brings me to the second point: speed.

Speed

As you are connecting over the internet, the speed of fetching content is highly dependent on the speed of your internet connection (you see a similarity here?). The bigger the message/attachment you want to download, the longer it will take. Truth be told, you’ll have the same experience while accessing your on-premises archive from a remote location, so it’s not something exclusive to Office 365.

Outlook

To be honest, Outlook does a relatively good job of working with the archive – at least when you deal with it the way it was designed. If you let Exchange sync expired items to your archive using the Managed Folder Assistant, your life will be great! However, if you dare to manually drag & drop messages from your primary mailbox into the archive, you’ll be in for a surprise. Outlook treats such an operation as a “foreground” action, which means that you will have to wait for it to complete before you can do anything else in Outlook. The problem here is that if you choose to manually move a 4 MB message to the archive, it could take as long as 20–30 seconds (depending on your internet connection) before the action completes. To make things worse: during this operation Outlook freezes, and if you try clicking something it will (temporarily) go into a “Not Responding…” state until the operation completes. According to Microsoft’s support, this is by design. So, as a measure of precaution: advise your users NOT to drag & drop messages; just let Exchange take care of it – something it does marvelously, by the way.

I have found that proper end-user education is also key here. If users are well informed about how the archive works and have had some training on how to use retention tags, they’ll be on their way in no time!

Provisioning

Related to the problem I described above, the initial provisioning process can be a problem. When you first enable an archive, chances are that a lot of items will be moved into it. Although this process is handled by the MFA, if your mailbox is open while the MFA processes it, Outlook might become unresponsive or extremely slow at the least – this is because changes are happening and Outlook needs to sync those changes to the client’s OST file (when running in cached mode, at least). Instead, it’s better to provision the archive on-premises, let the MFA do its work and then move the archive to Office 365. The latter approach works like a charm and doesn’t burden the user with an unresponsive Outlook client. If you are going to provision archives on-premises first, you might find it useful to estimate the size of an archive before diving in head first.
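A rough sketch of that approach could look like this (all names below are placeholders; the move itself is initiated from an Exchange Online PowerShell session):

# On-premises: enable a local archive and trigger the Managed Folder Assistant
Enable-Mailbox -Identity michael -Archive
Start-ManagedFolderAssistant -Identity michael

# Later, from Exchange Online: move only the archive to Office 365
$cred = Get-Credential contoso\admin
New-MoveRequest -Identity michael -ArchiveOnly -Remote -RemoteHostName mail.contoso.com -RemoteCredential $cred -TargetDeliveryDomain contoso.mail.onmicrosoft.com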

Search

This is a short one. Search is great. Because Outlook and Exchange can do cross-premises searches, you will be able to search both your primary mailbox and archive mailbox at once. I didn’t have many issues here. So: thumbs up!

Other Tips & Tricks

General (best) practices

Other than the particularities above, you shouldn’t do anything differently compared to ‘regular’ on-premises archives. Try not to overwhelm your users with a ginormous amount of retention tags. Instead, offer them a few tags they can use and – if necessary – adapt based on user feedback.
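For instance, a single personal tag like the one below (a sketch – tune the name and age limit to your own policy) already goes a long way:

New-RetentionPolicyTag "Move to archive after 2 years" -Type Personal -RetentionAction MoveToArchive -AgeLimitForRetention 730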

Autodiscover

Given that both Outlook and Exchange depend on it to make the online archive work, you should make sure that Autodiscover is working for your Exchange deployment AND that your Exchange servers are able to query Office 365’s Autodiscover service successfully as well.

This is especially important if you are using Outlook Web App (OWA) to access your online archive. In this case, it’s not Outlook but Exchange that performs the Autodiscover lookup and connects to the archive. If your internet connection isn’t working properly, or you have some sort of forward authenticating proxy server in between, things might not work (or only intermittently).
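One quick way to sanity-check web services (including Autodiscover) from the shell is something along these lines – hedged, as the identity is a placeholder and the exact output differs between versions:

Test-OutlookWebServices -Identity michael@exchangelab.be | Format-Table Scenario, Result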

Implement it gradually

As described above, it’s a bad idea to give everyone a new cloud-based archive at once. Not only will it put a heavy load on your internet connection, it will also affect your users. Instead, try to implement the solution gradually and ask your users for feedback. Start with on-premises archives and move them to the cloud in batches, for instance.

DirSync is utterly important!

As described in the prerequisites section, DirSync is very important to online archives, so make sure that you closely monitor how it’s doing. If you have issues with DirSync, you will inevitably also have issues with creating archives. Issues with DirSync won’t interfere with archives that have already been provisioned, though.

Conclusion

Is Exchange Online Archiving going to solve all your issues? Probably not. Is it a good solution? Yes, absolutely! I have been using Exchange Online Archiving for quite a while and I’m quite happy with it. I rarely encounter any issues, but I have also learnt to live with some of the particularities I mentioned earlier. Also, I treat my archive as a real archive: the stuff that’s in there is usually things I don’t need all that often. So the little latency overhead I experience while browsing/searching my archives is something I’m not bothered by. However, if I had to work with items from my archive day in, day out, I’d probably have a lot more issues adjusting to the fact that it’s less snappy than an on-premises archive.

So remember: set your (or your customer’s) expectations straight and you’ll enjoy the ride. If not, there might be some bumpy roads ahead!

Blog Exchange 2013 Office 365

Publishing multiple services to the internet on a single IP address using a KEMP Load Balancer and content switching rules

A few days ago, someone suggested I write this article, as it seems many people are struggling with this ‘problem’. In fact, the solution which I’m going to explain below is the answer to a problem typically found in “home labs” where the internet connection doesn’t always have multiple IP addresses. This doesn’t mean that it’s only valid for home use or testing scenarios. Given that IPv4 addresses are almost depleted, it’s a good thing not to waste these valuable resources if it’s not necessary.

Basically, what I’m going to explain is how you can use a KEMP Load Master to publish multiple services/workloads to the internet using only a single (external) IP address. In the example below, I will be publishing Exchange, Office Web Apps and Lync onto the internet.

The following image depicts what the network in my example looks like. It also displays the different domain names and IP addresses that I’m using. Note that – although I perfectly could – I’m not connecting the Load Master directly to the internet. Instead, I mapped an external IP address from my router/firewall to the Load Master:

[diagram]

How it works

The principle behind all this is simple: whenever a request ‘hits’ the Load Master, it reads the host header that was used to connect to the server and uses that to determine where to send the request. Given that most of the applications we are publishing use SSL, we have to decrypt content at the Load Master. This means we will be configuring the Load Master in Layer 7. Because we need to decrypt traffic, there’s also a ‘problem’ we need to work around. The workloads we are publishing to the internet all use different host names, and because we only use a single Virtual Service, we can assign only a single certificate to it. Therefore, you have to make sure that the certificate you configure in the Load Master either includes all published host names as a Subject (Alternative) Name or is a wildcard certificate which automatically covers all the hosts for a given domain. The latter option is not valid if you have multiple different domain names involved.

How the Load Master handles this ‘problem’ is not new – far from it. The same principle is used in every reverse proxy and was also how our beloved – but sadly discontinued – TMG used to handle such scenarios. You do not necessarily need to enable the Load Master’s ESP capabilities.

Step 1: Creating Content Rules

First, we will start by creating the content rules which the Load Master will use to determine where to send the requests. In this example we will be creating rules for the following host names:

  • outlook.exchangelab.be (Exchange)
  • meet.exchangelab.be (Lync)
  • dialin.exchangelab.be (Lync)
  • owa.exchangelab.be (Office Web Apps)
  1. Log in to the Load Master, navigate to Rules & Checking and click Content Rules. [screenshot]
  2. Click Create New…
  3. On the Create Rule page, enter the details as follows: [screenshot]

Repeat steps 2-3 for each domain name. Change the value of the Match String field so that it matches the domain names you are using. The final result should look like the following:

[screenshot: content rules]
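The match strings themselves follow the same pattern as the Exchange example near the end of this article: one regular expression per published host name. For my lab, they would look something like this (adjust to your own domain):

^outlook.exchangelab.be*
^meet.exchangelab.be*
^dialin.exchangelab.be*
^owa.exchangelab.be*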

Step 2: creating a new Virtual Service

This step is fairly easy. We will be creating a new virtual service which uses the internal IP address that is mapped to the external IP address. If you have already created a virtual service previously, you can skip this step.

  1. In the Load Master, click Virtual Services and then click Add New. [screenshot]
  2. Specify the internal IP address which you previously mapped to the external IP address.
  3. Specify port TCP 443.
  4. Click Add this Virtual Service. [screenshot]

Step 3: Configuring the Virtual Service

So how does the Load Master differentiate between the different host headers? Content Rules. Content rules allow you to use Regular Expressions which the Load Master uses to examine incoming requests. If a match is found through one of the expressions, the Load Master forwards the traffic to the real server that has been configured with that content rule.

First, we need to enable proper SSL handling by the Load Master:

  1. Under SSL Properties, click the checkbox next to Enabled.
  2. When presented with a warning about a temporary self-signed certificate, click OK.
  3. Select the box next to Reencrypt. This will ensure that traffic leaving the Load Master is encrypted again before being sent to the real servers. Although some services might support SSL offloading (thus not re-encrypting traffic), that’s beyond the scope of this article and will not be discussed.
  4. Select HTTPS under Rewrite Rules. [screenshot]

Before moving to the next step, we will also need to configure the (wildcard) certificate to be used with this Virtual Service:

  1. Next to Certificates, click Add New.
  2. Click Import Certificate and follow the steps to import the wildcard certificate into the Load Master. These steps include selecting a certificate file, specifying a password for the certificate file (if applicable) and setting an identifying name for the certificate (e.g. wildcard). [screenshot]
  3. Click Save.
  4. Click OK in the confirmation prompt.
  5. Under Operations, click the dropdown menu VS to Add and select the virtual service.
  6. Now click Add VS. [screenshot]

You’ve now successfully configured the certificate for the main Virtual Service. This will ensure the Load Master can decrypt and analyze traffic sent to it. Let’s move on to the next step, in which we will define the “Sub Virtual Services”.

Step 4: Adding Sub Virtual Services

While still on the properties page of the (main) Virtual Service, we will now add new ‘Sub Virtual Services’. Having a Sub Virtual Service per workload allows us to define different real servers per SubVS, as well as a different health check. This is the key functionality which allows multiple different workloads to live under a single ‘main’ Virtual Service.

  1. Under Real Servers click Add SubVS…
  2. Click OK in the confirmation window.
  3. A new SubVS will now have appeared. Click Modify and configure the following parameters:
  • Nickname (makes it easier to differentiate from other SubVSs)
  • Persistence options (if necessary)
  • Real Server(s)

Repeat the steps above for each of the workloads you want to publish.

Note: a word of warning is needed here. Typically, you would add your ‘real servers’ using the same TCP port as the main Virtual Service, being TCP 443 in this case. However, if you are also using the Load Master as a reverse proxy for Lync, you will need to make sure your Lync servers are added using port 4443 instead.

Once you have configured the Sub Virtual Services, you still need to assign one of the content rules to each of them. Before you’re able to do so, you first have to enable Content Switching.

Step 5: enabling and configuring content rules

In the properties of the main Virtual Service, under Advanced Properties, click Enable next to Content Switching. You will notice that this option becomes available only after adding your first SubVS.

[screenshot]

Once Content Switching is enabled, we need to assign the appropriate rules to each SubVS.

  1. Under SubVSs, click None in the Rules column for the SubVS you are configuring – for example, the Exchange SubVS. [screenshot]
  2. On the Rule Management page, select the appropriate Content Matching rule (created earlier) from the selection box and then click Add. [screenshot]
  3. Repeat these steps for each Sub Virtual Service you created earlier.

Testing

You can now test the configuration by navigating your browser to one of your published services or by using one of the services. If all is well, you should now be able to reach Exchange, Lync and Office Web Apps – all using the same external IP address.
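If you prefer the shell over a browser, a quick and dirty check could look like this (a sketch – substitute your own host names, and expect authentication errors rather than clean responses on some endpoints; the point is simply that each name resolves and SSL negotiates against the Load Master):

# Hypothetical check
"outlook.exchangelab.be","meet.exchangelab.be","owa.exchangelab.be" | ForEach-Object {
    $h = $_
    try { Invoke-WebRequest -Uri "https://$h" -UseBasicParsing | Out-Null; "$h : OK" }
    catch { "$h : $($_.Exception.Message)" }
}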

As you can see, there’s a fair amount of work involved, but all in all it’s relatively straightforward to configure. In this example we published Exchange, Lync and Office Web Apps, but you could just as easily add other services too. Especially with the many load balancing options you have with Exchange 2013, you could for instance use multiple additional Sub Virtual Services for Exchange alone. To get you started, here’s what the content rules for that would look like:

[screenshot: content rules for Exchange workloads]
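In essence, these are path-based match strings rather than host-based ones – one per Exchange workload. Based on the default Exchange 2013 virtual directory names, they would look something along these lines (a sketch, not an exhaustive list):

/owa* (Outlook Web App)
/ecp* (Exchange Admin Center)
/Microsoft-Server-ActiveSync* (ActiveSync)
/ews* (Exchange Web Services)
/oab* (Offline Address Book)
/rpc* (Outlook Anywhere)
/Autodiscover* (Autodiscover)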

Note: if you are defining multiple Sub Virtual Services for e.g. Exchange, you don’t need to use/configure a Sub Virtual Service which uses the content rule for the Exchange domain name (“^outlook.domain.com*”). If you still do, you’d find that – depending on the order of the rules – your workload-specific virtual services would remain unused.

I hope you enjoyed this article!

Until later,

Michael

Blog Exchange 2013 How-To's Office 365 Uncategorized

Review: Exchange 2013 Inside-Out: “Mailbox & High Availability” and “Connectivity, Clients & UM”

Although there’s a saying that you shouldn’t judge a book by its cover, for any book that features Tony Redmond and Paul Robichaux as the authors, it’s safe to assume it will be a great book! While both books have plenty of interesting technical content, to me that’s not the only thing that defines a good book. Tony and Paul are very eloquent writers, and I found reading both books extremely pleasant from a language perspective. To be honest, as an amateur writer myself, I can only dream of ever being able to put the English language to work as they do.

Having read the Exchange 2010 Inside-Out book before, I expected these 2013-version books to contain at least the same amount and type of content. The 2010 version is a HUGE book (+/- 1200 pages!), and therefore I was pleasantly surprised to see that the content has now been split into two separate books. This definitely helps make the amount of information more manageable. Don’t get me wrong: there’s still a lot to digest, but having two (slightly) smaller books makes taking them on easier – if not at least mentally! This is also something I have to ‘warn’ you about: the amount of information and the technical breadth and depth of the content might sometimes feel a little overwhelming. This could be especially true if you aren’t very familiar with Exchange. For some, the technical depth might even be outright too deep. That’s also why I advise you not to try to read either of these books in one go. Instead, take your time to read through each of the chapters and allow some time to let the information sink in. Combine that with some fiddling around in your own lab and you’ll have a great learning experience.

What I like about these books is that you’re provided with all the information to successfully understand how Exchange operates and are then expected to put that knowledge to work yourself. Although there are plenty of examples in the books, if you are looking for pre-canned scripts, how-to’s or step-by-step guides, there might be better alternatives for you (e.g. Exchange 2013 PowerShell Cookbook). But then again, I don’t think that’s what Paul and Tony were trying to achieve anyway.

Conclusion

Paul and Tony have managed to combine a fun-to-read style with great (technical) content, making the Exchange 2013 Inside-Out books a must-read. Whether you’re a consultant working day in and day out with Exchange or an admin in charge of an Exchange organization, I’m sure you’ll find lots of valuable information in each of the books which will help you in your day-to-day job.

Blog Exchange Exchange 2013 Reviews

Help Exchange become a better product

In the Exchange community’s never-ending efforts to help Microsoft increase the quality and usability of Exchange, a new website was recently created where YOU can send in and rate ideas for features and changes you would like to see in the next version(s)/updates of Exchange.

It’s important to understand that this is a community effort to convince Microsoft to take a look at some heavily requested features. Therefore, the more feedback we get, the better! If there’s enough feedback, I’m confident we can reach at least a few people in Redmond!

If you’ve been around long enough, you will see that some of the ideas have been lingering around for quite some time. With your help, we might just make enough noise for Microsoft to notice!

Right now, having the ability to centrally manage signatures and bringing back the Set-MailboxSentItemsConfiguration cmdlet (why was it removed in the first place?) are at the top of the list. And they should be! If you think they should be in Exchange, or you have other feature requests, feel free to vote for them so that Microsoft can see how important some features are to us.

Now, before doing anything else, have a look at exchange.ideascale.com and make your contribution to the list!

Cheers,

Michael

Blog Exchange Exchange 2013

Exchange 2013 Cumulative Update 3 and Exchange 2010 SP3 RU3 released

Microsoft just released its quarterly software updates for Exchange Server 2010 and Exchange Server 2013. You can download the latest updates through the following links:

Exchange 2013 Cumulative Update 3

After some issues with Cumulative Update 2, which had to be pulled and re-released, Microsoft put more effort into testing and validating CU3 before releasing it to the public. That is one of the reasons why it took a little longer than expected for CU3 to become available. A good thing, which hopefully pays off in a stable update without any (major) issues! CU3 introduces a bunch of new features to Exchange 2013, amongst which are:

  • Improved experience for Group Management in EAC
  • Integration with Online RMS for on-premises-only deployments
  • Improved Admin Audit Logging

As you can see, there’s quite some new – and interesting – stuff in CU3, which makes it definitely worth taking a closer look at. I’m particularly interested in finding out more about the Online RMS integration (which is a good thing!). Next to a bunch of new features, there are also some important bug fixes in CU3:

  • KB2888315 Event 2112 or 2180 is logged when you try to back up a database in an Exchange Server 2013 environment
  • KB2874216 Security issue that is described in Security Bulletin MS13-061 is resolved by an Exchange Server update
  • KB2902929 You cannot forward an external meeting request in an Exchange Server 2013 environment
  • KB2890814 No redirection to the Outlook Web App URL for Exchange Online users in an Exchange hybrid deployment
  • KB2883203 Exchange Server 2013 restarts frequently after Cumulative Update 2 is installed

A complete list of the most important bug fixes can be found here.

Deploying CU3

Deploying CU3 is similar to deploying previous CUs. Just like those previous CUs, CU3 includes Active Directory schema updates. For more information on how to deploy a Cumulative Update, have a look at Paul Cunningham’s blog here.
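In a nutshell – and hedged as a rough sketch, to be run from an elevated prompt with the appropriate Active Directory permissions:

# Prepare the schema and AD first, then upgrade the server itself
.\setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
.\setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms
.\setup.exe /Mode:Upgrade /IAcceptExchangeServerLicenseTerms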

How about Exchange 2013 Service Pack 1?

As a side note to this release: Microsoft previously announced that Exchange Server 2013 Cumulative Update 4 would be released as Service Pack 1. Taking the three-month cadence in which Cumulative Updates are expected to be released, that puts Service Pack 1 around the February/March 2014 timeframe – assuming the CU release cadence is respected. This is a little earlier than I anticipated, to be honest. I expected SP1 not to be released until the Microsoft Exchange Conference in April (which – now that I come to think of it – is merely a month later). I, for one, am looking forward to “SP1”; usually this is a milestone that many companies wait for before deploying a new server product like Exchange. Traditionally, Service Packs were used to introduce a bucket of new features to the product along with some other improvements. Given that each Cumulative Update so far has added functionality, I wonder if SP1 (Cumulative Update 4) will have the same impact as Service Packs have had with previous releases…

Exchange 2010 SP3 Update Rollup 3

This latest Update Rollup for Exchange 2010 Service Pack 3 contains a rather long list of bug fixes. Amongst them, I found the following ones to stand out, mainly because I have faced them a few times myself:

  • KB2839533 RPC Client Access service freezes in an Exchange Server 2010 environment
  • KB2887609 Hybrid Configuration wizard does not display the Domain Proof of Ownership list in an Exchange Server 2010 SP3 environment

A complete list of the most important fixes can be found here. (Note: the content of this link may not yet be available.) Have fun!

Blog Exchange 2013 News