BitTitan SDK: Color code your MigrationWiz project users

The BitTitan SDK is a key feature for enterprise migration projects. In large migrations, some tasks are better automated, which can save you hundreds of hours of repetitive work.

Just recently a partner asked me for an easy way to execute actions on batches of users within the same MigrationWiz project. Sometimes the best option is to divide those users into separate projects and execute the actions against all users in each project, but that is not always the case.

Now imagine this scenario: you have a project with 10,000 users and you need to start a migration for just 800 of them. What should you do? Color code those users, filter by that color and execute the action.

(More information about color coding can be found here, on the BitTitan Help Center.)

So how can you categorize 800 users in a project with 10,000? With the BitTitan SDK, of course.

The script below, which you can also find here, automatically color codes your MigrationWiz users based on a CSV file.

Below is a sample of the CSV file. It needs two columns:

  • Source Email – the MigrationWiz source email address
  • Flags – a number between 1 and 6, each mapping to its own color

(Sample CSV file)
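For reference, a minimal version of that file might look like the sketch below (the addresses are hypothetical):

Source Email,Flags
user1@contoso.com,1
user2@contoso.com,3
user3@contoso.com,6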

The execution is as follows:

  1. Prompts you to authenticate with your BitTitan credentials
  2. Prompts you to select the BitTitan workgroup where the MigrationWiz project is
  3. Prompts you to select the BitTitan Customer where the MigrationWiz project is
  4. Prompts you to select the MigrationWiz project
  5. Prompts you to enter the full path of the CSV file (e.g. C:\scripts\MyUsers.csv)
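As a quick sketch of how you might launch it (the module path and the script file name are my assumptions; adjust both to your environment):

# Load the BitTitan SDK module, then run the script
Import-Module 'C:\Program Files (x86)\BitTitan\BitTitan PowerShell\BitTitanPowerShell.dll'
.\Categorize-MW_Users.ps1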
<#
.DESCRIPTION
    This script color codes (categorizes) the users of a MigrationWiz project, based on a CSV file.

.NOTES
    Author:         Antonio Vargas
    Date:           Jan/2019
    Disclaimer:     This script is provided 'AS IS'. No warranty is provided either expressed or implied.
    Version:        1.1
#>

### Function to create the working and log directories
Function Create-Working-Directory {
    [CmdletBinding()]
    param
    (
        [parameter(Mandatory=$true)] [string]$workingDir,
        [parameter(Mandatory=$true)] [string]$logDir
    )
    if (!(Test-Path -Path $workingDir)) {
        try {
            $suppressOutput = New-Item -ItemType Directory -Path $workingDir -Force -ErrorAction Stop
            $msg = "SUCCESS: Folder '$($workingDir)' for CSV files has been created."
            Write-Host -ForegroundColor Green $msg
        }
        catch {
            $msg = "ERROR: Failed to create '$workingDir'. Script will abort."
            Write-Host -ForegroundColor Red $msg
            Exit
        }
    }
    if (!(Test-Path -Path $logDir)) {
        try {
            $suppressOutput = New-Item -ItemType Directory -Path $logDir -Force -ErrorAction Stop
            $msg = "SUCCESS: Folder '$($logDir)' for log files has been created."
            Write-Host -ForegroundColor Green $msg
        }
        catch {
            $msg = "ERROR: Failed to create log directory '$($logDir)'. Script will abort."
            Write-Host -ForegroundColor Red $msg
            Exit
        }
    }
}

### Function to write information to the log file
Function Log-Write
{
    param
    (
        [Parameter(Mandatory=$true)] [string]$Message
    )
    $lineItem = "[$(Get-Date -Format "dd-MMM-yyyy HH:mm:ss") | PID:$($pid) | $($env:username) ] " + $Message
    Add-Content -Path $logFile -Value $lineItem
}

### Function to display the workgroups created by the user
Function Select-MSPC_Workgroup {

    #######################################
    # Display all mailbox workgroups
    #######################################
    $workgroupPageSize = 100
    $workgroupOffSet = 0
    $workgroups = $null

    Write-Host
    Write-Host -Object "INFO: Retrieving MSPC workgroups ..."

    do
    {
        $workgroupsPage = @(Get-BT_Workgroup -PageOffset $workgroupOffSet -PageSize $workgroupPageSize)
        if($workgroupsPage) {
            $workgroups += @($workgroupsPage)
            foreach($workgroup in $workgroupsPage) {
                Write-Progress -Activity ("Retrieving workgroups (" + $workgroups.Length + ")") -Status $workgroup.Id
            }
            $workgroupOffSet += $workgroupPageSize
        }
    } while($workgroupsPage)

    if($workgroups -ne $null -and $workgroups.Length -ge 1) {
        Write-Host -ForegroundColor Green -Object ("SUCCESS: " + $workgroups.Length.ToString() + " workgroup(s) found.")
    }
    else {
        Write-Host -ForegroundColor Red -Object "INFO: No workgroups found."
        Exit
    }

    #######################################
    # Prompt for the mailbox workgroup
    #######################################
    if($workgroups -ne $null)
    {
        Write-Host -ForegroundColor Yellow -Object "ACTION: Select a workgroup:"
        Write-Host -ForegroundColor Gray -Object "INFO: Your default workgroup has no name, only an Id."
        for ($i = 0; $i -lt $workgroups.Length; $i++)
        {
            $workgroup = $workgroups[$i]
            if($workgroup.Name -eq $null) {
                Write-Host -Object $i,"-",$workgroup.Id
            }
            else {
                Write-Host -Object $i,"-",$workgroup.Name
            }
        }
        Write-Host -Object "x - Exit"
        Write-Host

        do
        {
            if($workgroups.count -eq 1) {
                $result = Read-Host -Prompt ("Select 0 or x")
            }
            else {
                $result = Read-Host -Prompt ("Select 0-" + ($workgroups.Length - 1) + ", or x")
            }

            if($result -eq "x")
            {
                Exit
            }
            if(($result -match "^\d+$") -and ([int]$result -ge 0) -and ([int]$result -lt $workgroups.Length))
            {
                $workgroup = $workgroups[$result]
                Return $workgroup.Id
            }
        }
        while($true)
    }
}

### Function to display all customers
Function Select-MSPC_Customer {
    param
    (
        [parameter(Mandatory=$true)] [String]$WorkgroupId
    )

    #######################################
    # Display all mailbox customers
    #######################################
    $customerPageSize = 100
    $customerOffSet = 0
    $customers = $null

    Write-Host
    Write-Host -Object "INFO: Retrieving MSPC customers ..."

    do
    {
        $customersPage = @(Get-BT_Customer -WorkgroupId $WorkgroupId -IsDeleted $false -IsArchived $false -PageOffset $customerOffSet -PageSize $customerPageSize)
        if($customersPage) {
            $customers += @($customersPage)
            foreach($customer in $customersPage) {
                Write-Progress -Activity ("Retrieving customers (" + $customers.Length + ")") -Status $customer.CompanyName
            }
            $customerOffSet += $customerPageSize
        }
    } while($customersPage)

    if($customers -ne $null -and $customers.Length -ge 1) {
        Write-Host -ForegroundColor Green -Object ("SUCCESS: " + $customers.Length.ToString() + " customer(s) found.")
    }
    else {
        Write-Host -ForegroundColor Red -Object "INFO: No customers found."
        Exit
    }

    #######################################
    # Prompt for the mailbox customer
    #######################################
    if($customers -ne $null)
    {
        Write-Host -ForegroundColor Yellow -Object "ACTION: Select a customer:"
        for ($i = 0; $i -lt $customers.Length; $i++)
        {
            $customer = $customers[$i]
            Write-Host -Object $i,"-",$customer.CompanyName
        }
        Write-Host -Object "x - Exit"
        Write-Host

        do
        {
            if($customers.count -eq 1) {
                $result = Read-Host -Prompt ("Select 0 or x")
            }
            else {
                $result = Read-Host -Prompt ("Select 0-" + ($customers.Length - 1) + ", or x")
            }
            if($result -eq "x")
            {
                Exit
            }
            if(($result -match "^\d+$") -and ([int]$result -ge 0) -and ([int]$result -lt $customers.Length))
            {
                $customer = $customers[$result]
                Return $customer.OrganizationId
            }
        }
        while($true)
    }
}

### Function to display all mailbox connectors
Function Select-MW_Connector {
    param
    (
        [parameter(Mandatory=$true)] [guid]$customerId
    )

    #######################################
    # Display all mailbox connectors
    #######################################
    $connectorPageSize = 100
    $connectorOffSet = 0
    $connectors = $null

    Write-Host
    Write-Host -Object "INFO: Retrieving mailbox connectors ..."

    do
    {
        $connectorsPage = @(Get-MW_MailboxConnector -Ticket $global:mwTicket -OrganizationId $customerId -PageOffset $connectorOffSet -PageSize $connectorPageSize)
        if($connectorsPage) {
            $connectors += @($connectorsPage)
            foreach($connector in $connectorsPage) {
                Write-Progress -Activity ("Retrieving connectors (" + $connectors.Length + ")") -Status $connector.Name
            }
            $connectorOffSet += $connectorPageSize
        }
    } while($connectorsPage)

    if($connectors -ne $null -and $connectors.Length -ge 1) {
        Write-Host -ForegroundColor Green -Object ("SUCCESS: " + $connectors.Length.ToString() + " mailbox connector(s) found.")
    }
    else {
        Write-Host -ForegroundColor Red -Object "INFO: No mailbox connectors found."
        Exit
    }

    #######################################
    # Prompt for the mailbox connector
    #######################################
    if($connectors -ne $null)
    {
        for ($i = 0; $i -lt $connectors.Length; $i++)
        {
            $connector = $connectors[$i]
            Write-Host -Object $i,"-",$connector.Name
        }
        Write-Host -Object "x - Exit"
        Write-Host
        Write-Host -ForegroundColor Yellow -Object "ACTION: Select the source mailbox connector:"

        do
        {
            $result = Read-Host -Prompt ("Select 0-" + ($connectors.Length - 1) + ", or x")
            if($result -eq "x")
            {
                Exit
            }
            if(($result -match "^\d+$") -and ([int]$result -ge 0) -and ([int]$result -lt $connectors.Length))
            {
                $global:connector = $connectors[$result]
                Break
            }
        }
        while($true)
    }
}

### Function to apply categories to the items of a MigrationWiz project, based on a CSV file
Function Add-MW_Category {
    param
    (
        [parameter(Mandatory=$true)] [Object]$Connector
    )

    $count = 0

    Write-Host
    Write-Host -Object ("Applying categories to migration item(s) in the MigrationWiz project " + $connector.Name)

    $importFilename = (Read-Host -Prompt "Enter the full path to the CSV import file")

    # Read the CSV file
    $users = Import-Csv -Path $importFilename

    foreach($user in $users)
    {
        $sourceEmail = $user.'Source Email'
        $flags = $user.'Flags'
        if($sourceEmail -ne $null -and $sourceEmail -ne "" -and $flags -in 1..6)
        {
            $count++
            Write-Progress -Activity ("Applying category to migration item (" + $count + ")") -Status $sourceEmail
            $mbx = Get-MW_Mailbox -Ticket $mwTicket -ExportEmailAddress $sourceEmail
            if ($mbx)
            {
                # Categories are stored as ";tag-N;", where N is the flag number (1-6)
                $Category = ";tag-" + $flags + ";"
                $result = Set-MW_Mailbox -Ticket $mwTicket -ConnectorId $connector.Id -mailbox $mbx -Categories $Category
            }
            else
            {
                Write-Host "Cannot find MigrationWiz line item with source address: '$($sourceEmail)'" -ForegroundColor Yellow
            }
        }
        else {
            Write-Host "The line item with the address '$($sourceEmail)' and the flag '$($flags)' is not valid." -ForegroundColor Yellow
        }
    }

    if($count -eq 1)
    {
        Write-Host -Object "1 mailbox has been categorized in",$connector.Name -ForegroundColor Green
    }
    if($count -ge 2)
    {
        Write-Host -Object $count,"mailboxes have been categorized in",$connector.Name -ForegroundColor Green
    }
}

#######################################################################################################################
# MAIN PROGRAM
#######################################################################################################################

# Working directory
$workingDir = "C:\scripts"

# Logs directory
$logDirName = "LOGS"
$logDir = "$workingDir\$logDirName"

# Log file
$logFileName = "$(Get-Date -Format yyyyMMdd)_Move-MW_Mailboxes.log"
$logFile = "$logDir\$logFileName"

Create-Working-Directory -workingDir $workingDir -logDir $logDir

$msg = "++++++++++++++++++++++++++++++++++++++++ SCRIPT STARTED ++++++++++++++++++++++++++++++++++++++++"
Log-Write -Message $msg

# Authenticate
$creds = Get-Credential -Message "Enter BitTitan credentials"
try {
    # Get a BitTitan ticket and set it as default
    $ticket = Get-BT_Ticket -Credentials $creds -ServiceType BitTitan -SetDefault
    # Get a MigrationWiz ticket
    $global:mwTicket = Get-MW_Ticket -Credentials $creds
} catch {
    $msg = "ERROR: Failed to create ticket."
    Write-Host -ForegroundColor Red $msg
    Log-Write -Message $msg
    Write-Host -ForegroundColor Red $_.Exception.Message
    Log-Write -Message $_.Exception.Message
    Exit
}

# Select workgroup
$WorkgroupId = Select-MSPC_WorkGroup

# Select customer
$customerId = Select-MSPC_Customer -WorkgroupId $WorkgroupId

# Select connector
Select-MW_Connector -customerId $customerId

$result = Add-MW_Category -Connector $connector

$msg = "++++++++++++++++++++++++++++++++++++++++ SCRIPT FINISHED ++++++++++++++++++++++++++++++++++++++++`n"
Log-Write -Message $msg

##END SCRIPT
This is the link to my GitHub Gist, where all comments regarding the code are welcome. Use the comment section as well if you want something changed.
You can find all my BitTitan SDK scripts in my GitHub repository.

BitTitan SDK: Retry individual errors for all users in your MigrationWiz document migration project

The BitTitan SDK is a key feature for enterprise migration projects. In large migrations, some tasks are better automated, which can save you hundreds of hours of repetitive work.

The script below, which you can also find here, automatically retries errors for all users of your MigrationWiz project.

For a user's errors to be retried, the user needs to:

  • Be in a “Completed” state
  • Have at least one item error

The execution is as follows:

  1. Prompts you to authenticate with your BitTitan credentials
  2. Prompts you to select your MigrationWiz document project
  3. Identifies the number of users eligible for a retry errors pass
  4. Exports a list of all successfully initiated retry errors passes to a CSV file, created in the same folder from where the script was executed
<#
.DESCRIPTION
    This script retries individual item errors for all users of a MigrationWiz document project.
    It needs to be run in the BitTitan Command Shell.

.NOTES
    Author:         Antonio Vargas
    Date:           Feb/13/2019
    Version:        1.0
    Disclaimer:     This script is provided 'AS IS'. No warranty is provided either expressed or implied.
#>

######################################################################################################################################################
# MAIN PROGRAM
######################################################################################################################################################

$connectors = $null

# Working directory
$global:workingDir = [environment]::getfolderpath("desktop")

#######################################
# Authenticate to MigrationWiz
#######################################
$creds = $host.ui.PromptForCredential("BitTitan Credentials", "Enter your BitTitan user name and password", "", "")
try {
    $mwTicket = Get-MW_Ticket -Credentials $creds
} catch {
    Write-Host "Error: Cannot create MigrationWiz ticket. Error details: $($Error[0].Exception.Message)" -ForegroundColor Red
}

#######################################
# Display all document connectors
#######################################
Write-Host
Write-Host -Object "Retrieving document connectors ..."

Try {
    $connectors = Get-MW_MailboxConnector -Ticket $mwTicket -RetrieveAll -ProjectType Storage -ErrorAction Stop
}
Catch {
    Write-Host -ForegroundColor Red -Object "ERROR: Cannot retrieve document projects."
    Exit
}

if($connectors -ne $null -and $connectors.Length -ge 1) {
    Write-Host -ForegroundColor Green -Object ("SUCCESS: " + $connectors.Length.ToString() + " document project(s) found.")
}
else {
    Write-Host -ForegroundColor Red -Object "ERROR: No document projects found."
    Exit
}

#######################################
# Prompt for the document connector
#######################################
if($connectors -ne $null)
{
    Write-Host -ForegroundColor Yellow -Object "Select a document project:"
    for ($i = 0; $i -lt $connectors.Length; $i++)
    {
        $connector = $connectors[$i]
        Write-Host -Object $i,"-",$connector.Name,"-",$connector.ProjectType
    }
    Write-Host -Object "x - Exit"
    Write-Host

    do
    {
        $result = Read-Host -Prompt ("Select 0-" + ($connectors.Length - 1) + " or x")
        if($result -eq "x")
        {
            Exit
        }
        if(($result -match "^\d+$") -and ([int]$result -ge 0) -and ([int]$result -lt $connectors.Length))
        {
            $connector = $connectors[$result]
            Break
        }
    }
    while($true)

    #######################################
    # Get mailboxes
    #######################################
    $mailboxes = $null
    $MailboxesWithErrors = @()
    $MailboxErrorCount = 0
    $ExportMailboxList = @()

    Write-Host
    Write-Host -Object ("Retrieving mailboxes for '$($connector.Name)':")

    Try {
        $mailboxes = @(Get-MW_Mailbox -Ticket $mwTicket -ConnectorId $connector.Id -RetrieveAll -ErrorAction Stop)
    }
    Catch {
        Write-Host -ForegroundColor Red "ERROR: Failed to query users in project '$($connector.Name)'"
        Exit
    }

    Foreach ($mailbox in $mailboxes) {
        # Find the last non-verification migration pass for this mailbox
        $LastMigration = Get-MW_MailboxMigration -Ticket $mwTicket -MailboxID $mailbox.id | ? {$_.Type -ne "Verification"} | Sort-Object -Property StartDate -Descending | Select-Object -First 1
        if ($LastMigration.Status -eq "Completed") {
            try {
                $MailboxErrors = Get-MW_MailboxError -Ticket $mwTicket -MailboxId $mailbox.id -Severity Error -ErrorAction Stop
            }
            Catch {
                Write-Host -ForegroundColor Yellow "WARNING: Cannot find errors for mailbox '$($mailbox.ExportEmailAddress)'"
            }
            if (-not ([string]::IsNullOrEmpty($MailboxErrors))) {
                $MailboxesWithErrors += $mailbox
                $MailboxErrorCount = $MailboxErrorCount + $MailboxErrors.count
            }
        }
    }

    if($MailboxesWithErrors -ne $null -and $MailboxesWithErrors.Length -ge 1)
    {
        Write-Host -ForegroundColor Green -Object ("SUCCESS: " + $MailboxesWithErrors.Length.ToString() + " mailbox(es) eligible to retry errors found")
        Write-Host -ForegroundColor Green -Object ("SUCCESS: '$($MailboxErrorCount)' individual errors found that will be retried")
        $RetryMigrationsSuccess = 0

        Foreach ($mailboxwitherrors in $MailboxesWithErrors) {
            try {
                $RecountErrors = Get-MW_MailboxError -Ticket $mwTicket -MailboxId $mailboxwitherrors.id -Severity Error -ErrorAction Stop
                # Submit a retry errors (Repair) pass for this mailbox
                $result = Add-MW_MailboxMigration -Ticket $mwTicket -MailboxId $mailboxwitherrors.id -Type Repair -ConnectorId $connector.id -UserId $mwTicket.UserId -ErrorAction Stop
                Write-Host -ForegroundColor Green "INFO: Processing $($mailboxwitherrors.ExportEmailAddress) with $($RecountErrors.count) errors"
                $ErrorLine = New-Object PSCustomObject
                $ErrorLine | Add-Member -Type NoteProperty -Name MailboxID -Value $mailboxwitherrors.id
                $ErrorLine | Add-Member -Type NoteProperty -Name "Source Address" -Value $mailboxwitherrors.ExportEmailAddress
                $ErrorLine | Add-Member -Type NoteProperty -Name "Destination Address" -Value $mailboxwitherrors.ImportEmailAddress
                $ErrorLine | Add-Member -Type NoteProperty -Name "Error Count" -Value $RecountErrors.count
                $ExportMailboxList += $ErrorLine
                $RetryMigrationsSuccess = $RetryMigrationsSuccess + 1
            }
            Catch {
                Write-Host -ForegroundColor Red "ERROR: Failed to process $($mailboxwitherrors.ExportEmailAddress). Error details: $($Error[0].Exception.Message)"
            }
        }

        if ($RetryMigrationsSuccess -ge 1) {
            Write-Host -ForegroundColor Yellow "INFO: $($RetryMigrationsSuccess) retry migrations executed. Exporting list to CSV."
            $ExportMailboxList | Export-Csv .\List-UsersWithErrors.csv -NoTypeInformation
        }
        Else {
            Write-Host -ForegroundColor Yellow "INFO: No retry migration passes were executed with success."
        }
    }
    else
    {
        Write-Host -ForegroundColor Yellow "INFO: No users in project '$($connector.Name)' qualify for a retry errors pass. Make sure the users are in a completed state and have individual item errors logged."
        Exit
    }
}
This is the link to my GitHub Gist, where all comments regarding the code are welcome. Use the comment section as well if you want something changed.
You can find all my BitTitan SDK scripts in my GitHub repository.

How do you plan and execute a successful Public Folder migration?

From all my years as a consultant, and now working directly in the migration business helping partners successfully plan and execute migrations, Public Folders have always been one of the most challenging workloads I have had to deal with.

There are always a lot of questions when you have to execute such migrations, so I decided to write a blog post where I am going to try and address as many of them as possible.

To try and keep it as organized as possible, and because there are so many different scenarios, I will divide this post into three main sections: General migration considerations, Migrating Hybrid Public Folders and Migrating Public Folders cross organization.

We will then also discuss some more generic questions, such as why use a third party tool vs the Microsoft native tool.

General migration considerations

This blog post is focused both on Hybrid and cross organization Public Folder migrations. Some steps, however, are exactly the same regardless of the migration scenario. Those steps are described below. After reading this section you can then focus on your specific scenario in the sections that follow.

Prepare your On Premises Environment

One of the first things you need to look at is the on premises Public Folder structure, to check whether there are any inconsistencies or invalid folders. The best way to do that is of course via scripting, and you should use this excellent script from Aaron @Microsoft, called IDFix for Public Folders. Download it, run it and fix everything the script highlights as needing to be fixed.

You should also make sure you create a report of all mail enabled Public Folders and their addresses, and to do so you can leverage the Get-MailPublicFolder cmdlet, as sketched below.
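A minimal sketch of such a report (the property selection and output file name are my own choices; adjust them to your needs):

# Export all mail enabled Public Folders and their SMTP addresses to a CSV report
Get-MailPublicFolder -ResultSize Unlimited |
    Select Alias,PrimarySmtpAddress,@{N="EmailAddresses";E={$_.EmailAddresses -join "|"}} |
    Export-Csv .\MailPublicFolders.csv -NoTypeInformation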

How to migrate Public Folder access permissions, as well as Send-As and Send-on-behalf rights

Public Folder permissions should be migrated by the migration tool, provided, of course, that identities match between on premises and Exchange Online (which should be true for Hybrid scenarios), or between the source and destination organizations in cross organization migrations.

As for the Send-As and Send-on-behalf rights, the best option is to export them from the source system and import them into the destination system once the migration is completed. Since this is not PowerShell code I have focused on recently, I did some quick research and found this article online with the code to export and import those access rights.

Note: I am not the author of the code below; I am only including it directly in my blog post so it is easier for you to locate and copy. The code was taken from the article mentioned above, written by Aaron Guilmette.

Export Send-As

Get-MailPublicFolder -ResultSize Unlimited | Get-ADPermission | ? {($_.ExtendedRights -Like "Send-As") -and ($_.IsInherited -eq $False) -and -not ($_.User -like "*S-1-5-21-*")} | Select Identity,User | Export-Csv Send_As.csv -NoTypeInformation

Export Send-on-behalf

Get-MailPublicFolder | Select Alias,PrimarySmtpAddress,@{N="GrantSendOnBehalfTo";E={$_.GrantSendOnBehalfTo -join "|"}} | Export-Csv GrantSendOnBehalfTo.csv -NoTypeInformation

$File = Import-Csv .\GrantSendOnBehalfTo.csv
$Data = @()
Foreach ($line in $File)
    {
    If ($line.GrantSendOnBehalfTo)
        {
        Write-Host -ForegroundColor Green "Processing Public Folder $($line.Alias)"
        [array]$LineRecipients = $line.GrantSendOnBehalfTo.Split("|")
        Foreach ($Recipient in $LineRecipients)
            {
            Write-Host -ForegroundColor DarkGreen "     $($Recipient)"
            $GrantSendOnBehalfTo = (Get-Recipient $Recipient).PrimarySmtpAddress
            $LineData = New-Object PSCustomObject
            $LineData | Add-Member -Type NoteProperty -Name Alias -Value $line.Alias
            $LineData | Add-Member -Type NoteProperty -Name PrimarySmtpAddress -Value $line.PrimarySmtpAddress
            $LineData | Add-Member -Type NoteProperty -Name GrantSendOnBehalfTo -Value $GrantSendOnBehalfTo
            $Data += $LineData
            }
         }
    }
$Data | Export-Csv .\GrantSendOnBehalfTo-Resolved.csv -NoTypeInformation

Import Send-As

$SendAs = Import-Csv .\Send_As.csv
$i=1
foreach ($obj in $SendAs) 
    { 
    write-host "$($i)/$($SendAs.Count) adding $($obj.User) to $($obj.Identity)"
    Add-RecipientPermission -Identity $obj.Identity.Split("/")[2] -Trustee $obj.User.Split("\")[1] -AccessRights SendAs -confirm:$false; $i++
    }

Import Send-on-behalf

$GrantSendOnBehalfTo = Import-Csv .\GrantSendOnBehalfTo-Resolved.csv
$i=1
Foreach ($obj in $GrantSendOnBehalfTo)
    {
    Write-host "$($i)/$($grantsendonbehalfto.count) Granting $($obj.GrantSendOnBehalfTo) Send-On-Behalf to folder $($obj.PrimarySmtpAddress)"
    Set-MailPublicFolder -Identity $obj.PrimarySmtpAddress -GrantSendOnBehalfTo $obj.GrantSendOnBehalfTo
    $i++ 
    }

Migrating Hybrid Public Folders

This scenario, when compared to the cross organization migration, is far more complex, because besides moving the data you also have to worry about things like mail flow, user Public Folder access, etc. But let's address one thing at a time.

Microsoft Official guidance to configure Hybrid Public Folders

If you’re reading this article because you’re planning to migrate your Hybrid Public folders, chances are you already read and executed the Microsoft guidance to make your on premises Public Folders available to Exchange Online users, under a Hybrid deployment. Configure legacy on-premises Public Folders for a Hybrid Deployment is the article for legacy public folders and Configure Exchange Server Public Folders for a Hybrid Deployment is the one for modern Public Folders.

Both articles are focused on the hybrid coexistence and not on the migration planning of the Public Folders, but they are important to mention as they impact the migration planning, depending on what type of coexistence you configured and which steps you followed.

Public Folder end user access in the context of a hybrid migration

When planning a Public Folder migration under a hybrid scenario, one of the most important things you need to consider is end user access. With that in mind, note the following:

  • On premises users cannot access Exchange Online Public Folders
  • Exchange Online users can access on premises Public Folders or Exchange Online Public Folders, but you cannot configure a single user to access both. You can, however, configure some users to access the on premises folders and others to see them locally, in Exchange Online.

Keep these two principles in mind during your planning. Public Folder access for Exchange Online users is complex and by itself worthy of a dedicated blog post.

The Microsoft official guidance, mentioned in the previous section, explains how you configure Exchange Online users to access on premises Public Folders.

The bottom line of this section: make sure you move all users to Exchange Online before you consider moving the Public Folders, and if you don't, make sure the users left on premises do not require any Public Folder access.

Public Folder mail flow coexistence before, during and after the migration: how to handle mail enabled Public Folders

Another very important component of your Public Folder migration is the mail flow coexistence, or to be more precise, the way you deal with the mail enabled Public Folders.

Mail Enabled Public Folders before the migration

When you follow the guidance provided by Microsoft, you will be asked to execute the Sync-MailPublicFolders script.

This script enables Exchange Online users to send emails to on premises mail enabled Public Folders, by creating mail objects in Exchange Online with the primary and all other SMTP addresses that those folders have on premises. These objects are not actual Exchange Online Public Folders, nor are they visible in the Exchange Online Public Folder tree. They also make those on premises Public Folders present in the Exchange Online GAL (Global Address List), and once a user in Exchange Online emails such a folder, the email gets forwarded to Exchange on premises.
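For reference, the invocation documented by Microsoft looks roughly like this, run from the on premises Exchange Management Shell (the summary file name is just an example):

# Create Exchange Online mail objects for the on premises mail enabled Public Folders
$Credential = Get-Credential
.\Sync-MailPublicFolders.ps1 -Credential $Credential -CsvSummaryFile:sync_summary.csv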

Mail Enabled Public Folders during the migration

During the Public Folder migration, whether it's a single pass or a multiple pass (pre-stage + full migration) strategy, you should not change the Public Folder mail flow. That means you should not mail enable the Public Folders in Exchange Online (choose a tool that gives you that option). Actually, as you will see below, there are things you need to do in Exchange Online before mail enabling the Public Folders.

Mail Enabled Public Folders after the migration

Once your migration (or the pre-stage) is completed, you should transition the Public Folder mail flow to Exchange Online. To do so, you should follow these steps:

  1. Start the pre-stage or full migration and wait for it to be completed
  2. Once the migration pass is done, go to Exchange Online and delete all mail objects created by the Sync-MailPublicFolders script (NOTE: this will temporarily break mail flow between Exchange Online users and mail enabled Public Folders, online or on premises)
  3. Mail enable the Exchange Online Public Folders, either via a script or using the migration tool; make sure you add all addresses from the on premises Public Folders to the online ones (see the sketch after the next paragraph)
  4. Run a full migration pass if in step 1 the pass that you ran was a pre-stage

To elaborate a little more on step 2: the reason you need to delete those objects is to avoid conflicting addresses when mail enabling the Public Folders in Exchange Online, since those objects are not associated with the new EXO Public Folders.
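As a minimal sketch of step 3, mail enabling a single migrated folder and re-adding one of its source addresses could look like this (the folder path and address are hypothetical, and a real migration would loop over the report you exported from the source):

# Mail enable one Exchange Online Public Folder and add a source SMTP address to it
Enable-MailPublicFolder -Identity "\Sales\Orders"
Set-MailPublicFolder -Identity "\Sales\Orders" -EmailAddresses @{add="orders@contoso.com"}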

Migrating Public Folders cross Organization

Migrating Public Folders cross organization is not as complex, and you’ll see why in the sections below. This scenario can include migrations such as:

  • Exchange Online to Exchange Online
  • Hosted Exchange to Exchange on premises or Exchange Online
  • Exchange on premises to Exchange on premises

When to migrate users and Public Folders

Usually these cross organization Public Folder migrations come as an additional step of a migration that also includes mailboxes.

Although there is no 100% correct answer to the question of what should be migrated first, mailboxes or Public Folders, in these cases the best option is normally to migrate mailboxes first and Public Folders last. The main reason is that you should migrate the Public Folders when they are no longer being used, allowing you to do a clean single pass migration of all the data.

Public Folder end user access and mail flow coexistence

This is where things get simple for this type of scenario. There is no Public Folder access cross organization (unless the user uses credentials for both systems), and although you can technically configure mail flow between any two email systems, it is not something you should consider in the majority of cases.

Mail enabled Public Folders can and should be created at the destination during the folder hierarchy creation.

Why use a third party tool to migrate Public Folders

That is probably the question I get the most, working for BitTitan, a third party migration tool company with an amazing Public Folder migration tool. Here is a list of reasons:

  • Migrate large volumes of data: Migrating 2, 5 or 10GB is easy with any tool, but not all tools can deal with terabytes of Public Folder data.
  • Migrate parts of the structure or prioritize data: Either by targeting just specific parts of the Public Folder hierarchy or by using folder filtering. This is a very commonly used feature in tools like BitTitan MigrationWiz.
  • Flexibility in handling mail enabled Public Folders: As explained in the Hybrid mail flow section of this post, you might need some flexibility in how to handle mail enabled Public Folders during the migration. MigrationWiz will mail enable in the destination all Public Folders that are mail enabled at the source, but you can also suppress that option, and in some scenarios you should.
  • Data transformation: While planning a Public Folder migration, some customers want to take the opportunity to also move that data into a different structure, which can be shared or resource mailboxes, Office 365 Groups, etc. That is something that can be successfully done with tools that are flexible enough to perform that transformation (which in many cases requires recipient mapping, folder mapping, folder filtering, etc.), like MigrationWiz.
  • Supported sources and destinations: Exchange 2007+ to Exchange 2007+, including of course Exchange Online and hosted Exchange as source and/or destination – this is the answer most customers want to hear from the supportability standpoint of a third party tool to migrate their Public Folders, and it is something they won't get with the native tool.

The bottom line

While reading this post before publishing it, I kept getting the feeling that there are so many other things I could mention and talk about, but I do think it addresses the core concerns of most Public Folder migrations, and hopefully it addresses yours.

Nevertheless, if you do have any questions don’t hesitate to reach out.


Exchange 2013: When your load balancer marks your Client Access Server as offline... and all services are up and running

Just recently, I was asked to help with an issue in an Exchange 2013 Organization. The problem was that the KEMP load balancer, which balances requests for multiple Exchange protocols between several Exchange 2013 servers, was marking one of the CAS servers as offline.

The server was not being marked as offline for all services; it was just for one, Outlook Web Access (OWA).

Before I go into further detail, let me give you a visual representation of what the above means. In my case we had a KEMP load balancer, but this can apply to any load balancer.

(Screenshot: KEMP admin portal, Virtual Services view)

What you can see above is the KEMP admin portal. On the left hand side, if you go to Virtual Services and click “View/Modify Services”, you will see a table (which I conveniently don't show, so I don't have to redact all the information in it). In that table you will have, per server, one or more “Real Servers”, and when a real server's health cannot be verified, it will show in red.

What that means from an operational perspective is that no requests for that protocol (HTTPS/SMTP/SIP, etc.) will be sent to that server, meaning you can end up with a single point of failure without knowing it.

Now let's have a look at how the health of those real servers is checked.

(Screenshot: KEMP virtual service configuration, Real Servers check parameters)

If you click to modify the virtual service, you will see the check parameters in the “Real Servers” section. Basically, KEMP sends an HTTPS request to https://server/owa/healthcheck.htm.

If that request fails, you will see a “page not found” and the server is marked as offline. Below is the result you should expect from a healthy server.

(Screenshot: healthy OWA health check page returning 200 OK)

In the URL above, you should see the server name. Below the 200 OK you should see the server FQDN.
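You can also reproduce the probe yourself from PowerShell; a quick sketch (EX01 is a hypothetical server name):

# Simulate the load balancer health check against one CAS server
Invoke-WebRequest -Uri "https://EX01/owa/healthcheck.htm" -UseBasicParsing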

Now that I've explained, at a very high level, how the process works, let's drill down to the problem I had.

Before detailing the problem I had, I just want to point out that there are many reasons for a server to be marked as offline by a load balancer, and that you should check the obvious ones first, such as whether all services are running and whether Exchange reports as healthy when using tools such as the Get-ServerHealth Exchange cmdlet or System Center Operations Manager (SCOM), if you have it.

In my case there were no obvious reasons, plus I found it strange that only the https protocol was having issues, so this is what I did:

1 – I browsed to the HTTPS health check URL that KEMP uses and noticed I was getting a “page not found”, which explains KEMP marking the server as unavailable

2 – I then used another very handy Exchange PowerShell cmdlet, Get-ServerComponentState, to check which components were in an Active or Inactive state

See below an example of a good output for the server components, with all of the relevant ones being active.

(Screenshot: Get-ServerComponentState output with all relevant components Active)

But what I was getting was not what you see above. I was seeing the “OwaProxy” component as inactive, and that was a problem.
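If you want to run the same check, a quick sketch (EX01 is again a hypothetical server name):

# List all component states on the affected CAS server
Get-ServerComponentState -Identity EX01 | Format-Table Component,State -AutoSize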

3 – So what I did next was to look at the Exchange 2013 health mailboxes, searching for inconsistencies

And I found them…

(Screenshot: Get-Mailbox -Monitoring output showing corrupted health mailboxes)

When I ran Get-Mailbox -Monitoring to list all health mailboxes, I found that some were corrupt, due to being associated with missing databases or missing mailbox servers.

This Exchange Organization had been downsized over the past year, and during that cleanup those mailboxes were forgotten. If you have the same issue, don't worry: although it is important to keep those mailboxes in a good state, you can easily recreate them, which leads me to my next step.

4 – Recreate the health mailboxes

So instead of me doing a lot of copy-paste into my blog post, check this Exchange 2013/2016 monitoring mailboxes article from the product group blog, which not only has a lot of information about those mailboxes, but also a perfect step-by-step at the bottom on how exactly to recreate them.
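In very condensed form, and assuming you follow the linked article for the actual cleanup of the orphaned objects in Active Directory, the final steps look roughly like this on each mailbox server:

# After removing the orphaned health mailbox objects in AD, restart the
# Health Manager service so that the health mailboxes are recreated
Restart-Service MSExchangeHM

# Then verify the recreated health mailboxes
Get-Mailbox -Monitoring | Format-Table Name,Database -AutoSize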

And that is what I did: I recreated them and re-tested the URL https://server/owa/healthcheck.htm, which now worked perfectly, and my load balancer no longer marked my CAS server as unavailable in the HTTPS virtual service.

Hopefully this article can help you resolve your issue!


What does it really mean to be in the last year of support for Microsoft Exchange 2010?

The Microsoft Exchange product team sent out a reminder on January 14th, 2019, regarding the end of support for Exchange 2010. It will happen in one year! Don't assume one year is a long time; read below for my perspective on what this means to Microsoft, but more importantly what it means to you, and start planning and executing the best roadmap for you. Check here for the Microsoft recommended roadmap.

Let me start by outlining the simple reasons (from my perspective, of course) why Exchange 2010 is going out of support:

  • Age and versions – Believe it or not, Exchange 2010 RTM was launched in November 2009, almost 10 years ago. To add to that, three new versions have launched since: 2013, 2016 and 2019.
  • Office 365 and Hybrid compatibility – Exchange Online servers get upgraded, and the same compatibility standards apply. It makes perfect sense that, as Exchange Online versions evolve, the supported on premises versions are raised as well. Today, if you look at the official Hybrid Deployment Prerequisites information, you will see that an Exchange 2010 based hybrid deployment is still supported, but I am sure this won't last long, and being in a Hybrid deployment will require at least one Exchange 2013 server very soon. Logic says that once you start the on premises upgrade path by installing higher versions, you should also be ready to move services to those higher versions and think about decommissioning the lower ones.

Moving on, what exactly does this announcement mean, from the Microsoft perspective?

  • No more support, bug or security fixes, or time zone updates – Basically, Microsoft is telling you that you are on your own should you choose to keep Exchange 2010 in your Exchange Organization.
  • Move to Office 365, move to Office 365 (that's actually not a typo or duplicate, it's to show how determined Microsoft is to have you move to Exchange Online) or upgrade your on premises Exchange servers to a newer version (Exchange 2013+) and decommission all Exchange 2010 servers from your Organization – It's no secret that for Microsoft, the best option for most organizations is to move to Exchange Online. Ending support for a product that is 10 years old and has 3 newer versions is not a consequence of that, but they do take the opportunity to call out those companies that still have Exchange 2010 or older versions and tell them to move to the cloud. I do agree that this will be the best option for most organizations, just not for all, and the reasons are so vast that I could write a dedicated and extremely long blog post about it. In summary, probably 95%+ of organizations are a perfect fit for Exchange Online.
  • If you need help, use Microsoft FastTrack (did I mention that they highly recommend you go to Exchange Online? 🙂 ), or search for a partner here – This part of the article, the “What if I need help?” section, is actually interesting because they don't mention anything directly about getting help to upgrade your on premises servers. They basically say that, if you need help, FastTrack can help you or you can hire a partner, again very Office 365 focused (and I can't stress this enough: I agree that moving to Office 365 is the best move for most organizations). You can, however, use the partner search link to hire one of the many great partners out there that will upgrade your Exchange on premises if they feel you are not the best fit for online, or, if they identify that as the case, give you a migration path to Exchange Online and Office 365, better than FastTrack can provide, by using a third party tool.

And finally, what is probably the most important part, what exactly does this mean to you and your Exchange Organization?

  • You don't have to upgrade, or take any action for that matter, but you should – Your current Exchange 2010 servers will not stop working, and believe me, I have recently seen Exchange 2003 and older versions still sending and receiving emails, but you should start planning now to either upgrade and/or decommission your Exchange 2010 servers. Do not go into an unsupported scenario with a service as critical as email is for your organization.
  • It doesn't matter if your Exchange 2010 servers aren't hosting mailboxes and/or processing Client Access Services – This can affect you because you have servers with mailboxes and active CAS services, but that is not the only scenario. If you have some old Exchange 2010 servers as part of an Exchange 2013 or Exchange 2016 organization, you recently got one reason to remove them – Exchange 2019 – and now you can consider this one more. Don't keep unsupported servers in your Organization. If it's because of that printer or app using the server to relay email, or because of an old backup or email signature software that keeps you stuck on 2010, upgrade that as well or get rid of it.
  • It's not only about whether you're planning to go to Office 365; it also matters if you're already there and under a Hybrid – If you're planning to go to Office 365, getting rid of the Exchange 2010 servers in the process is a good idea, but what if you're already there and under a Hybrid deployment? If that's your case, plan and execute. The ideal scenario is always to move towards Exchange 2019 (the latest available version) Hybrid servers, and that won't happen before you remove those 2010 servers. If your Hybrid is 2010 based, the urgency is even greater: don't wait!

Hopefully all of the above makes sense. If not, please reach out to me. It's time to move on from Exchange 2010. Time to upgrade or migrate, whatever best suits your needs.

Azure Data Box Disks just went from preview to general availability and became available in more regions

Yesterday, Microsoft announced the general availability for Azure Data Box Disks.

For those who don't know what this is, Azure Data Box Disks are basically a fast (SSD based), reliable and secure solution for offline data transfer to Azure.

It has been a while since Microsoft announced the preview program, which was available only in the EU and US regions. General availability covers the EU, US, Australia and Canada. As Microsoft promised, the service is expanding to more Azure data centers worldwide.

When compared to the Azure Import/Export service, using the Azure Data Box Disks is, in theory, a simpler process, since Microsoft will provide the disks and handle all the logistics.

We'll have to wait and see where Microsoft drives this service, since some customers expect to see it handle other things besides simple data transfer, such as initial seeding for Azure Backup.

Office 365: Not sure if your vanity domain is being used by another Office 365 tenant? Don’t worry.

I remember, not long ago, the pain of trying to find out in which Office 365 tenant your vanity domain was validated, after bumping into the error stating that the domain was in use while adding it to your current tenant.

This was maybe because I work with multiple tenants and recycle them quite often for testing purposes, but I have also seen it happen to others while assisting them in their migration projects.

Fortunately, Microsoft is now very clear in the error message you'll see when trying to add a domain that is in use by another tenant. This is what you will see now:

(Screenshot: error message identifying the tenant where the domain is already in use)

As you can see above, it will tell you exactly in which tenant the domain is, so you can log in to it and remove it.

Now, is there a catch? Of course!! Microsoft won't give you such privileged information until you enter a valid DNS record for the domain validation. You'll see something similar to this if the domain validation is not done properly:

(Screenshot: domain verification error before the DNS record is in place)

So remember to add the DNS record first and then click “Verify”. Microsoft will either add the domain or explain exactly why it can't, which I am sure was for a long time one of the main asks of Office 365 admins and consultants.

Azure Tip: Use PowerShell to check all blob spaced used in a Storage Account

Just recently, I needed to know the exact volume of all blob container data within a specific Azure Storage Account.

This was part of a migration project, which in this case meant I needed to report that data amount multiple times per day. Data was constantly being copied to and deleted from that Storage Account, and the same applied to blob containers being created, filled with data and deleted afterwards. So my only constant was the Storage Account, and I needed to know, every 2 hours, the volume of blob container data in that account.

After a bit of research, I found this outstanding Microsoft article on how to leverage the Azure PowerShell module (yes, PowerShell to save the day again!!) to calculate the size of a Blob Storage container.

The only limitation of the script in that article is that it calculates the size of a single blob container, and I needed the combined size of all blob containers in my Storage Account.

So I had to adapt that script to my scenario, and I turned it into the following script:

# Connect to Azure
Connect-AzureRmAccount

# Static values for the Resource Group and Storage Account names
$resourceGroup = "ChangeToYourResourceGroupName"
$storageAccountName = "changetoyourstorageaccountname"

# Get a reference to the storage account and the context
$storageAccount = Get-AzureRmStorageAccount `
    -ResourceGroupName $resourceGroup `
    -Name $storageAccountName
$ctx = $storageAccount.Context

# Get all blob containers
$AllContainers = Get-AzureStorageContainer -Context $ctx
$AllContainersCount = $AllContainers.Count
Write-Host "We found '$($AllContainersCount)' containers. Processing size for each one"

# Zero counters
$TotalLength = 0
$TotalContainers = 0

# Loop over each container and calculate its size
Foreach ($Container in $AllContainers) {
    $TotalContainers = $TotalContainers + 1
    Write-Host "Processing Container '$($TotalContainers)'/'$($AllContainersCount)'"
    $listOfBlobs = Get-AzureStorageBlob -Container $Container.Name -Context $ctx

    # Zero out our total for this container
    $length = 0

    # Loop through the list of blobs, retrieve the length of each one and add it to the total
    $listOfBlobs | ForEach-Object {$length = $length + $_.Length}
    $TotalLength = $TotalLength + $length
}
# End container loop

# Convert length to GB
$TotalLengthGB = $TotalLength / 1024 / 1024 / 1024

# Result output
Write-Host "Total Length = " $TotalLengthGB "GB"


The script above outputs to the console the total volume, in GB, that you have in a specific storage account.

To execute the script, follow the steps below:

  • Copy the entire code above into Notepad
  • Change the values of the $resourceGroup and $storageAccountName variables to the correct names of your Azure resource group and storage account
  • Save the file as .ps1
  • Open a PowerShell window and execute the “script.ps1” file you just saved (see screenshot below)
  • Authenticate with your Azure username and password, when prompted

(Screenshot: executing the script from a PowerShell window)

Execute the script as shown above.

(Screenshot: Azure authentication prompt)

When prompted, authenticate.

(Screenshot: console output with the total volume in GB)

And this is what the end result should look like.

Before I end this blog post, I'd just like to point out that this script was written in a very simplistic way, to address an urgent need that I had. With a couple more hours of work, you could make it even easier to use and add all sorts of features, such as:

  • Error handling
  • Remove the hard-coded values and list all available storage accounts and resource groups for selection
  • Change the output format (e.g. to CSV) and list sizes per blob container
  • Allow selecting between multiple Azure subscriptions under the same account

The above are just some ideas on how to improve the script. I haven’t done it because I had no need for it, but by all means please let me know if you want/need an improved version. This one works just fine, if all you want is the total volume of blob data in a specific storage account.

Happy New Year!!!

Azure Identity training, anyone?

With the New Year starting, I am looking at a training plan for 2019.

I don't think training is the only thing that improves your skills; in my opinion, you should add to it as much real life consulting experience as you can, and make your training as “hands on” as possible. Don't just read or watch videos; build a lab and execute everything you're learning.

This blog post is to share with you what seems to be an excellent training resource in Azure Identity: Microsoft Azure Identity training in the edx.org platform.

With this training, as they state on their website, you'll learn the following:

  • How to create and manage Azure Active Directory (AD) directories.
  • How to implement applications in Azure AD.
  • How to extend on-premises AD to Azure.
  • How to configure multi-factor authentication.

The prerequisites for this training are:

  • General understanding of cloud computing models.
  • General understanding of virtualization, networking and Active Directory.
  • Basic proficiency in PowerShell and command line interface scripting.

The above basically means that you should have some experience with Azure and virtualization, and of course, since this training is focused on Microsoft identity management, you need to clearly understand how Active Directory works. Finally, as with everything Azure related, PowerShell knowledge is a must! 🙂

All the training I have done with edx.org has been great. The training is free, but if you choose to get a certificate at the end, you can pay 99 USD, knowing you'd be helping the only nonprofit, open source learning platform.

Most of my posts are about real life scenarios, tips and tricks, etc, so I am sure I will be blogging a lot about Azure Identity in the near future.

Azure Resource Manager PowerShell: How to change between subscriptions

Today's post is a very simple one. Those of you who, like me, have multiple subscriptions on your Azure account and automate a lot of your Azure work via PowerShell might need to change between subscriptions in the same PowerShell session to execute multiple tasks.

This can be done with one of the two following cmdlets:

Select-AzureRMSubscription

Set-AzureRMContext

And here is where the confusion comes in. What is the difference between the two cmdlets, and which one should you use?

Well, the answer is that the cmdlets do the exact same thing, and you should use the “Set-AzureRMContext” cmdlet, especially in scripts, since it seems to be the replacement for the “Select-AzureRMSubscription” cmdlet.

In fact, this is what you get when you run “Get-Help Select-AzureRMSubscription”:

(Screenshot: Get-Help output pointing to Set-AzureRMContext)

As you can see above, all references point to the new cmdlet.

Now a quick note on how the cmdlet works.

To list all of your subscriptions:

Get-AzureRMSubscription

To change the context to a different subscription:

Set-AzureRMContext -subscription <SubscriptionID or SubscriptionName>
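For example, a minimal sketch of switching context by subscription name (the name is hypothetical):

# List all subscriptions, then switch the session context to one of them
Get-AzureRMSubscription
Set-AzureRMContext -Subscription "My Dev Subscription"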

I hope the above is helpful. Happy scripting!