The UC Architects episode 37 now available!

Even though it’s only been a couple of days since we released episode 36 – “Live @ MEC”, we proudly announce the release of our latest episode: “episode 37 – don’t MEC my Heartbleed“.

In this latest installment, Steve, John, Michel, Ståle and I talk about a myriad of things, including some random thoughts on the Microsoft Exchange Conference, the recently disclosed Heartbleed vulnerability and the latest improvements in the world of hybrid Exchange deployments. Ståle also started a new feature called “using Lync like a Lync PRO”, in which he reveals a very handy tip on how to use Lync better. Make sure you don’t miss it!

Don’t let the length of the show (almost 2 hours!) scare you – it’s filled with tons of great info!

Talk to you soon!

Michael


Windows Server 2012 R2 AD FS ‘Alternate Login ID’ removes the need to have an internet-routable UPN

Recently, Microsoft released an update to Windows Server 2012 R2 which – alongside a bunch of bug fixes – also adds new features to some of the operating system’s components. Among these new features, there’s one I found particularly interesting: the update to the AD FS 3.0 component, which enables customers to use a different attribute to identify federated users in Windows Azure AD. The feature is better known as “Alternate Login ID”.

As the TechNet documentation on this topic describes, it is now possible to use an attribute other than the User Principal Name to identify federated users in Office 365. This helps customers who aren’t able to change their UPNs from the current value (e.g. domain.local or domain.corp) to an internet-routable domain (like domain.com). Even though in many situations changing the UPN isn’t that big of a deal, some customers leverage the existing UPN in third-party applications and therefore might not be able to make this change easily.

If you want to deploy this feature right away, you’ll have to figure some things out by yourself, as the currently available documentation doesn’t explain all the steps. I expect more complete documentation to become available shortly. Also note that I haven’t seen any official statement that the use of “Alternate Login ID” is already supported by Office 365 today, but the documentation certainly hints at it and, if I recall correctly, it was also announced at the Microsoft Exchange Conference last week.

The configuration itself requires you to jump through a few hoops, including modifying DirSync to refer to the new attribute you’ve selected as the Alternate Login ID instead of the UPN. Personally, I would still recommend changing the UPN – if possible. But there’s an alternative now, and having an alternative is always a good thing, isn’t it?
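For those who want to give it a try anyway, the AD FS side of the configuration essentially comes down to pointing the claims provider trust at the alternate attribute. Based on the TechNet documentation, that looks something like this (I’m using the mail attribute and a single lookup forest as an example; adjust both to your environment):

# Configure AD FS 3.0 to look up federated users by the 'mail' attribute
# instead of the UPN, searching the domain.local forest (example values)
Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID mail -LookupForests domain.local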

I’ll definitely have a go at this later this week and will post my findings here.

-Michael

[Update 04/14/2014] Here’s the KB article describing the update I reference in this article: http://support.microsoft.com/kb/2927690

 


This was MEC 2014 (in a nutshell)

As things wind down after a week full of excitement and – yes, in some cases – emotion, MEC 2014 is coming to an end. Lots of attendees have already left Austin, and those who stayed behind are sharing a few last drinks before making their way back home as well. As good as MEC 2012 in Orlando was, MEC 2014 was E-P-I-C. Although some might argue the conference got off to a slow start – despite the great Dell Venue 8 Pro tablet giveaway – you cannot ignore the success of the rest of the week.

With over 100 unique sessions, MEC was packed with tons and tons of quality information. Seeing that amount of content delivered by the industry’s top speakers is truly a unique experience. After all, at how many conferences does the PM or lead developer present the content on a specific topic? Microsoft also did a fairly good job of keeping a balance between the different types of sessions, mixing Microsoft employees presenting sessions that reflected their view on things (“how things should work / how it’s designed to be”) with MVPs and Masters presenting a more practical approach (“how it really works”).

I also liked the format of the “unplugged” sessions, where you could interact with members of the product team to discuss a variety of topics. I believe these sessions are not only very interesting (tons of great information), but also an excellent way for Microsoft to connect with the audience and receive immediate feedback on what is going on “out there”. For example, I’m sure that the need for better guidance or maybe a GUI for Managed Availability is a message that was well conveyed, and Microsoft should use this feedback to prioritize some of the efforts going into development. Whether that will happen, only time will tell…

This edition wasn’t only a success because of the content, but also because of the interactions. It was good to see some old friends and make many new ones. To me, conferences like this aren’t only about learning; they’re also about connecting with other people and networking. There were tons of great talks – some of which have given me food for thought and blog posts.

Although none of them might seem earth-shattering, MEC did bring a few announcements and key messages, some of which I’m very happy to see:

  • Multi-Factor Authentication and SSO are coming to Outlook before the end of the year. On-premises deployments can expect support for it next calendar year.
  • Exchange sizing guidance has been updated to reflect some of the new features in Exchange 2013 SP1:
    • The recommended page file size is now 32,778 MB (i.e. 32 × 1,024 MB + 10 MB) if your Exchange server has more than 32 GB of memory. It should still be a fixed size and not managed by the OS.
    • CAS CPU requirements have increased by 50% to accommodate MAPI/HTTP. They are still lower than Exchange 2010’s.
  • If you didn’t know it before, you will now: NFS is not supported for hosting Exchange data.
  • The recommended Exchange deployment uses four database copies: three regular and one lagged, with the FSW preferably in a third datacenter.
  • Increased emphasis on using a lagged copy.
  • The OWA app for Android is coming.
  • OWA in Office 365 will get a few new features including Clutter, People View and Groups. No word on if and when these will be made available for on-premises customers.

By now, it’s clear that Microsoft’s development cycle is based on a cloud-first model, which – depending on your take on things – makes a lot of sense. This topic was also discussed during the live recording of The UC Architects; I recommend you have a listen (as soon as it’s available) to hear how The UC Architects, Microsoft and the audience feel about it. Great stuff!

It’s also interesting to see some trends developing. “Enterprise Social” is probably one of the biggest at the moment. With Office Graph recently announced, I am curious to see how Exchange will evolve to embrace the so-called “Social Enterprise”. Features like Clutter, People View and Groups are already good examples of this.

Of course, MEC wasn’t all about work. There was also time for fun. Lots of it. The format of the attendee party was a little atypical for a conference: usually all attendees gather at a single, fairly large location. This time, however, the crowd was scattered across several bars on Rainey Street, which Microsoft had rented out. Although I was a little skeptical at first, it actually worked really well and I had tons of fun.

Then there was the UC Architects party which ENow graciously offered to host for us. The Speakeasy rooftop was really amazing and the turnout even more so. The party was a real success and I’m pretty confident there will be more in the future!

I’m sure that in the course of the next few weeks, more information will become available through the various blogs and websites as MVPs, Masters and other enthusiasts have digested the vast amount of information distributed at MEC.

I look forward to returning home, getting some rest and starting over again!

Au revoir, Microsoft Exchange Conference. I hope to see you soon!


Rewriting URLs with KEMP LoadMaster

Earlier this month, I wrote an article on how you can use a KEMP LoadMaster to publish multiple workloads on the internet using only a single IP address, through a feature called content switching.

Building on the principle of content switching, KEMP LoadMasters also allow you to modify traffic while it’s flowing through the device. More specifically, this article will show you how to rewrite URLs using a LoadMaster.

Rewriting URLs is quite common. The goal is to ‘send’ people to a different destination than the one they are trying to reach. This could be the case when you are changing your domain name, or perhaps as part of a merger where you want the other company’s traffic to be redirected automatically to your website.

Let’s start with a simple example we are all familiar with: you want to redirect traffic from the root of your domain to /owa. The goal is that when someone enters e.g. webmail.domain.com, that person is automatically redirected to webmail.domain.com/owa. Although Exchange 2013 already redirects traffic from the root to the /owa virtual directory out of the box, the idea here is to illustrate how you can do it with KEMP instead of IIS. Following the same approach, you could just as easily send someone from one virtual directory (e.g. /test) to another (e.g. /owa).

How it works – Mostly

Just as in my previous article, everything revolves around content switching. However, in addition to content matching rules, KEMP’s LoadMasters allow you to define so-called header modification rules, as shown in the following screenshot:

[screenshot]

By default, it suffices to create such a header modification rule and assign it to a virtual service. By doing so, traffic to the root (or the /test virtual directory) will be rewritten and people will end up at the /owa virtual directory.

To create a header modification rule, perform the following steps:

  1. Log in to your LoadMaster
  2. Click Rules & Checking and then Content Rules
  3. At the top of the Content Rules page, click Create New…
  4. Fill in the details as shown in the following screenshot (also summarized below):

    [screenshot]

  5. Click Create Rule.
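For reference, the values that go into that dialog boil down to the following. The rule name is an arbitrary example of mine; “Modify URL” is the rule type in the LoadMaster UI, and the match string for the root is the same one used later in this article:

Rule Name:    RedirectRootToOWA
Rule Type:    Modify URL
Match String: /^\/$/
Modified URL: /owa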

Now that you have created the header modification rule, you need to assign it to the virtual service on which you want to use it:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic; this is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.
  4. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    [screenshot]

That’s it. All traffic that hits the LoadMaster on the root will now automatically be rewritten to /owa.

How it works with existing content rules (SubVSs)

When you are already using content rules to ‘capture’ traffic and send it to a different virtual directory (as described in my previous article), the above approach won’t work – at least not entirely.

While creating the header modification rule and adding it to the virtual service remain exactly the same, there is an additional task you have to perform.

First, let me explain why. When you are already using content rules, the LoadMaster uses these rules to evaluate traffic and make the appropriate routing decision. As a result, the content rules are processed before the header modification rules. However, when the LoadMaster doesn’t find a match in one of its content matching rules, it will not process the header modification rule – at least not when you are trying to modify a virtual directory. As I will describe later in this article, it will still process host-header modifications.

So, in order for the LoadMaster to perform the rewrite, the initial destination has to be defined on the virtual directory you want to redirect traffic to. Let’s take the following example: you are using content rules to direct traffic from a single IP address to different virtual directories. At the same time, you want traffic to a non-existent virtual directory (e.g. /test) to be redirected to /owa.

First, you start off again by creating a header modification rule. The process is the same as outlined above. The only thing that changes is that the match string will now be /^\/test$/ instead of /^\/$/:

[screenshot]

Next, create a new content rule, but this time make it a content matching rule, as follows:

[screenshot]
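In plain text, that rule comes down to something like this. The rule name “test” is the one referred to further on; the exact match string is my assumption, based on the /test example:

Rule Name:    test
Rule Type:    Content Matching
Match Type:   Regular Expression
Match String: /^\/test.*/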

Next, we’ll make the changes to the virtual service:

  1. Go to Virtual Services, View/Modify Services
  2. Select the Virtual Service you want to edit and click Modify
  3. In the virtual service, go to HTTP Header Modifications and click Show Header Rules. If you don’t see this option, make sure that you have Layer 7 enabled and that you are decrypting SSL traffic; this is a requirement for the LoadMaster to be able to ‘read’ (and thus modify) the traffic.
  4. Next, under Request Rules, select the header modification rule you created earlier from the drop-down list and click Add:

    [screenshot]

Now we still need to add the content matching rule to the /owa SubVS:

  1. In the properties of the Virtual Service, go down to SubVSs and click the button in the Rules column for the OWA SubVS:

    [screenshot]

  2. From the drop-down list, select the rule you created earlier (“test”) and click Add:

    [screenshot]

  3. You should now have two rules in the SubVS:

    [screenshot]

That’s it. If you now navigate to the /test virtual directory, the traffic will automatically be rewritten to /owa.

How about if I want to redirect more than a single virtual directory to /owa?

In theory, you would need to create a header modification rule for each of the virtual directories, and a content matching rule as well. However, if you are going to redirect multiple virtual directories to /owa, you can also use the “default” content rule, which acts as a catch-all. As a result, instead of creating and adding a separate content matching rule for each virtual directory, you just create a header modification rule for each of them and add the default content rule to the /owa virtual directory, as shown below:

[screenshot]

What about rewriting host names?

Rewriting host names is handled a tad differently than virtual directories: host header modifications are processed before the content matching rules for the virtual services. As a result, it suffices to create a modification rule and apply it to the virtual service. To create a host header modification rule, do the following:

  1. Go to Rules & Checking and then Content Rules
  2. Click Add New… and create the rule as follows:

    [screenshot]

Once you have created the rule, add it to the HTTP Header Modification rules on the virtual service and you’re done. Traffic that hits test.domain.com will now automatically be rewritten to webmail.domain.com. It’s as easy as that.
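Summarized, such a host header modification rule looks roughly as follows. The field names approximate the LoadMaster UI and the rule name is an example of mine:

Rule Name:    RewriteHostName
Rule Type:    Replace Header
Header Field: Host
Match String: /test.domain.com/
Value:        webmail.domain.com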

Conclusion

Rewriting URLs with KEMP’s LoadMaster is relatively easy. You only have to watch out when you are already using content switching rules, as described earlier.

Until later,

Michael


Free/Busy in a hybrid environment fails and Test-FederationTrust returns error “Failed to validate delegation token”

Following an issue with Free/Busy in Exchange Online earlier this week, I was troubleshooting the exchange of Free/Busy information in some of my hybrid deployments, as Free/Busy lookups were still not working. After checking some obvious things, like the Organization Relationships and whether Autodiscover was working properly, I discovered an issue when running the Test-FederationTrust cmdlet.

In fact, the cmdlet completed almost entirely successfully, except for the very last step in the process:

Id         : TokenValidation
Type       : Error
Message    : Failed to validate delegation token.

This also explained why I was seeing 401 Unauthorized messages when running the Test-OrganizationRelationship command.

I then checked the same in some of my other deployments and found that all of them had the same issue. At least there was some common ground to start working from. I turned to fellow MVP Steve Goodman and asked him to run the same command in one of his labs to have a point of reference. At the same time, he suggested I run a command which might help:

Get-FederationTrust | Set-FederationTrust -RefreshMetadata

After running the command, I re-ran Test-FederationTrust, which now completed successfully.

Conclusion

Although the Free/Busy issues in Office 365 should be solved by now, some customers might still experience problems exchanging Free/Busy information. In this case, the problem manifests itself through e.g. online users not being able to request on-premises users’ availability information.
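If you run into the same symptoms, the short version of the fix is to run the following from the on-premises Exchange Management Shell (the SMTP address is a placeholder for any on-premises mailbox):

# Refresh the federation trust metadata, then re-test the trust
Get-FederationTrust | Set-FederationTrust -RefreshMetadata
Test-FederationTrust -UserIdentity user@yourdomain.com

If the TokenValidation step now reports Success, Free/Busy lookups should start working again shortly after.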


Estimating the size of an Exchange (online) Archive

As part of some of the (archiving) projects I have worked on, I frequently get asked if there is an easy way to determine what the size of an archive will be once it’s activated. Although it sounds a bit odd at first, there are actually many good reasons why you’d want to know how big an archive will be.

First of all, determining the archive size allows you to better size (or plan for) the storage required for the archives. While there are other ways to do this, knowing how big an archive will be when enabled is very helpful.

Secondly, if you’re using Exchange Online Archiving (EOA), it allows you to determine the amount of data that will pass through your internet connection for a specific mailbox. If the amount of data is large compared to the available bandwidth – moving a 50 GB archive over a 10 Mbps link, for example, keeps that link saturated for more than 11 hours – I personally prefer to provision the archive on-premises, after which I can move it to Office 365 using MRS. But that’s another discussion. Especially for this scenario, it can be useful to know how many archives you can (temporarily) host on-premises before sending them off to Office 365 and freeing up disk space again.

To calculate how big an archive would be, I’ve created a script which goes through all the items in one (or more) mailbox(es) and calculates the total size of all the items that will expire. When an item expires (and thus becomes eligible to be moved to the archive) depends on the retention policy you assign to a mailbox and the retention policy tags included in that policy.

As the name of the script indicates, it’s important to understand that this is an estimation of the archive size. There are situations in which the results of the script will differ from the real world. This could be the case when you enable the archive and a user has assigned personal tags to items before the Managed Folder Assistant has processed the mailbox. In such a scenario, items with a retention tag that differs from the AgeLimit defined in the script will be calculated wrongfully. Then again, the script is meant to be run before an archive is created.

Secondly, the script goes through all the folders in a mailbox. If you disabled archiving of calendar items, these items will wrongfully be included in the calculation as well. I will try to build this into the script in future releases, but it has a lower priority, as the script was built to provide a pretty good estimation, not a 100% correct number.

The script, which you can download here, accepts multiple parameters:

  • UserPrimarySMTPAddresses – the primary SMTP address of the mailbox for which you want to estimate the archive size
  • Report – the full file path to a txt file which will contain the archive sizes
  • AgeLimit – the retention time (in days) against which items should be probed. If you have a 60-day retention before items get moved to the archive, enter 60.
  • Server – used for connecting to EWS. Optional; can be used if Autodiscover is unable to determine the connection URI.
  • Credentials – the credentials of an account that has the ApplicationImpersonation management role assigned to it.

 

The output of the script will be an object that contains the user’s Primary SMTP Address and the size of the archive in MB (TotalPRMessageSize).
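To give you an idea, a typical run looks something like this (the script file name is hypothetical; the parameters are the ones listed above):

# Estimate the archive size for a single mailbox, using a 60-day retention
.\Estimate-ArchiveSize.ps1 -UserPrimarySMTPAddresses user@domain.com -AgeLimit 60 -Report C:\Temp\ArchiveSizes.txt -Credentials (Get-Credential)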

Credit where credit is due! I would like to thank Michel de Rooij for his insanely good PowerShell scripting skills and for helping me clean up this script into its current form. Before I sent it off to Michel, the code was pretty inefficient [but hey! it was working]; what you’ll download has been cleaned up and greatly enhanced. Now you have clean code, additional error handling and some more parameters than in my original script [see parameters above].

I hope you’ll enjoy the script and find it useful. I’ve used it in multiple projects so far, and it really helped me with planning the provisioning of archives.

Note: to run the script, you’ll need to have the Exchange Web Services (EWS) Managed API installed and run it with an account that has the ApplicationImpersonation management role assigned to it.

Cheers,

Michael


Review: Exchange 2013 Inside-Out: “Mailbox & High Availability” and “Connectivity, Clients & UM”

Although there’s a saying that you shouldn’t judge a book by its cover, for any book that features Tony Redmond and Paul Robichaux as the authors, it’s safe to assume it will be a great one! While both books have plenty of interesting technical content, to me that’s not the only thing that defines a good book. Tony and Paul are very eloquent writers, and I found reading both books extremely pleasant from a purely linguistic perspective. To be honest, as an amateur writer myself, I can only dream of ever being able to put the English language to work as they do.

Having read the Exchange 2010 Inside-Out book before, I expected these 2013-version books to contain at least the same amount and type of content. The 2010 version is a HUGE book (+/- 1200 pages!), so I was pleasantly surprised to see the content has now been split into two separate books. This definitely helps make the amount of information more manageable. Don’t get me wrong: there’s still a lot to digest, but having two (slightly) smaller books makes taking them on easier, if only mentally! This is also something I have to ‘warn’ you about: the amount of information and the technical breadth and depth of the content might sometimes feel a little overwhelming. This could be especially true if you aren’t very familiar with Exchange. For some, the technical depth might even be outright too deep. That’s also why I advise you not to try to read either of these books in one go. Instead, take your time to read through each of the chapters and allow some time to let the information sink in. Combine that with some fiddling around in your own lab and you’ll have a great learning experience.

What I like about these books is that you’re provided with all the information needed to understand how Exchange operates, and are then expected to put that knowledge to work yourself. Although there are plenty of examples in the books, if you are looking for pre-canned scripts, how-tos or step-by-step guides, there might be better alternatives for you (e.g. the Exchange 2013 PowerShell Cookbook). But then again, I don’t think that’s what Paul and Tony were trying to achieve anyway.

Conclusion

Paul and Tony have managed to combine a fun-to-read style with great (technical) content, making the Exchange 2013 Inside-Out books a must-read. Whether you’re a consultant working day in and day out with Exchange or an admin in charge of an Exchange organization, I’m sure you’ll find lots of valuable information in each of the books to help you in your day-to-day job.


Help Exchange become a better product

In the Exchange community’s never-ending efforts to help Microsoft increase the quality and usability of Exchange, a new website was recently created where YOU can submit and rate ideas for features and changes you would like to see in the next version(s)/updates of Exchange.

It’s important to understand that this is a community effort to convince Microsoft to take a look at some heavily requested features. Therefore, the more feedback we get, the better! If there’s enough feedback, I’m confident we’ll be able to reach at least a few people in Redmond!

If you’ve been around long enough, you will see that some of the ideas have been lingering around for quite some time. With your help, we might just make enough noise for Microsoft to notice!

Right now, having the ability to centrally manage signatures and bringing back the Set-MailboxSentItemsConfiguration cmdlet (why was it removed in the first place?) are at the top of the list. And they should be! If you agree they should be in Exchange, or you have other feature requests, feel free to vote so that Microsoft can see how important some features are to us.

Now, before doing anything else, have a look at exchange.ideascale.com and make your contribution to the list!

Cheers,

Michael


Building an enterprise security strategy for Exchange

These days, the news is all about Big Brother watching us. You can’t open a newspaper or watch the news on TV without being slammed with reports that one intelligence agency or another is actively gathering information from other nations or important enterprises.

Especially the latter makes one wonder: what can you do about it? I don’t believe there is a 100% safe system. However, there’s nothing wrong with trying to make an attacker’s life as hard as possible. There are many ways to do this, and a multi-layered defense is what you should be looking at. It’s not only about protecting your network or access to your network; it’s also about protecting your applications.

Recently, I wrote an article about some simple things you can do to secure your Exchange messaging environment. If you want to know more, have a look at my latest article for SearchExchange: http://searchexchange.techtarget.com/tip/Build-an-enterprise-security-strategy-for-Exchange

Enjoy!


Exchange Online Archiving (EOA): a view from the trenches – part 1

What is Exchange Online Archiving?

I’ve been meaning to write this article for quite a while now, so I’m glad it’s finally “ready”. First, let me start by introducing what Exchange Online Archiving (EOA for short) actually is. This feature, first made available through Exchange hybrid deployments, allows you to provision a cloud-based archive for an on-premises mailbox. While having an Exchange archive isn’t new – at least not since Exchange 2010 – the fact that the archive doesn’t have to be hosted within your own organization is pretty interesting.

Archives can be useful in many ways. One of the primary reasons they are used is to keep historical data for a longer period of time without cluttering a user’s primary mailbox. This could, for instance, be the case when you have to meet compliance requirements which state that corporate data should be kept for 5 years. Although Exchange doesn’t have a problem handling very large mailboxes with a high item count per folder, it’s usually the human component that cannot handle the overload of information that comes with large amounts of data – at least, that’s my experience. Keeping email inherently means that you’ll have to increase disk space to support the sometimes huge amounts of data involved. Although disk space has become quite cheap and Exchange 2013 is a great candidate to be used in combination with those cheap disks, there’s still a significant overhead involved in keeping that additional piece of infrastructure up and running.

This is where Exchange Online Archives come in handy. First of all, there is no feature difference between an on-premises archive and a cloud-based (Office 365) archive. From a user’s point of view, they act and look the same. In fact, you are only offloading the task of storing archives to Office 365. The Exchange Online Plan 2 subscription automatically includes the right to provision unlimited-size archives for your users. Although I don’t expect many people to fill up the initial 100 GB you get provisioned to start with any time soon, it’s very hard to match that offer for only $8 per user per month… If you are only interested in EOA, there are specific EOA licenses as well, which cost only a fraction of the full Exchange Online license. Of course, such a license will only allow you to use EOA and nothing more.

How does it work?

As briefly touched upon earlier, being able to use Exchange Online Archives is a by-product of having a hybrid Exchange deployment. A hybrid deployment, as the name stipulates, is the process of ‘pairing’ your on-premises Exchange organization with Office 365, essentially creating one large “virtual Exchange organization”. As a result, having a (fully functional) hybrid deployment is the first requirement to meet… Technically speaking, it would be possible to set up a sort of minimalistic hybrid deployment in which you leave out functionality you don’t necessarily need to make Online Archives work (like e.g. cross-premises mail flow). Nonetheless, I strongly encourage you to still set up the full monty. It might save you some time afterwards if you decide to deploy cloud-based mailboxes anyway.

A very important part of the setup is set aside for DirSync. As you might remember, if you tick the “Hybrid Deployment” checkbox during DirSync setup, you allow it to write back some attributes into your on-premises organization. One of these attributes is msExchArchiveStatus. This attribute is a flag telling the on-premises organization whether an online archive has been provisioned or not. As we will see later in this section, this attribute is particularly important during the creation of an archive.
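For reference, kicking off the provisioning from the on-premises side is typically done along these lines; note that the ArchiveDomain value is tenant-specific, so treat the one below as an example:

# Provision a cloud-based archive for an on-premises mailbox
Enable-Mailbox -Identity "jdoe" -RemoteArchive -ArchiveDomain "tenant.mail.onmicrosoft.com"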

One of the questions I regularly get asked is whether you are required to deploy ADFS when setting up a hybrid deployment. The short answer is no. On the other hand, there are many good reasons why you would want to deploy ADFS, or rather: there are many good reasons why you would want some form of single/same sign-on. One reason I can think of is to simplify using online archives from an end user’s perspective: that way they won’t need to manage another set of credentials. Of course, this isn’t only valid for online archives; it’s the same for every cloud-based workload in Office 365. ADFS is one way of providing SSO, Password Sync is another. Both are valid options; neither is required, and they won’t be discussed here.

From a functional point of view, Online Archives have the exact same requirements as on-premises archives. You need at least Office 2007 SP3 Professional edition or later. Since the archives run from Office 365, you also need to make sure you’re up to speed with the latest required updates. For more information on which updates are needed, have a look at the following web page: http://office.microsoft.com/en-us/office365-suite-help/software-requirements-for-office-365-for-business-HA102817357.aspx

Now that we have the prerequisites covered, let’s have a look at how the provisioning process works from a high-level perspective:

[diagram]

As you can derive from the image above, two DirSync operations are needed. The first one is used to “tell” Office 365 to create an archive for user “X”. The second DirSync operation syncs back the msExchArchiveStatus attribute, which will now have a value of 1 instead of 0, telling the on-premises organization the archive has been created. A good way to verify whether this process has completed is to run the Get-Mailbox | fl *arch* command:

[screenshot]

Here you can see that the archive was created successfully (ArchiveStatus = Active). However, we are missing part of the information. This is because the on-premises organization cannot provide the information from Office 365 (which is essentially another Exchange organization). To fetch the missing information, you’ll have to open up a remote PowerShell session to Exchange Online and run the Get-MailUser | fl *arch* command:

[screenshot]
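For those who haven’t set this up before, connecting to Exchange Online through remote PowerShell goes roughly like this (the credentials and SMTP address are placeholders):

# Connect to Exchange Online and check the archive properties of the mail user
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential (Get-Credential) -Authentication Basic -AllowRedirection
Import-PSSession $Session
Get-MailUser user@domain.com | fl *arch*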

Conclusion

This is it for part one of this article.
In the following part, I will talk about some of the gotchas, do’s and don’ts. Stay tuned!
