Learn how to connect to your BitTitan account using our PowerShell module

In this blog post, we will teach you all you need to know about connecting the BitTitan PowerShell module to your BitTitan account. That is the first step you need to take before running your BitTitan scripts and automation.

BitTitan Environments

When you install and open our BitTitan PowerShell module, its default action is to send requests to our BitTitan.com platform. What many of our customers don't know is that we also have dedicated BitTitan SaaS platforms in Germany and China.

To be able to change the environment you’re connecting to, you’ll have to leverage two cmdlets:

  • Set-MW_Environment – this changes the MigrationWiz environment and will cover any *-MW* cmdlets, corresponding to the migrationwiz.bittitan.* website.
  • Set-BT_Environment – this changes the BitTitan environment and will cover any *-BT* cmdlets, corresponding to the manage.bittitan.* website.

You might need to use one or both of the cmdlets above, depending on the tasks you want to accomplish.

Common “*-MW*” tasks will include starting MigrationWiz migrations with the Add-MW_MailboxMigration cmdlet, or creating a MigrationWiz project with the Add-MW_MailboxConnector cmdlet, among many others.

Common “*-BT*” tasks will include listing your customers with the Get-BT_Customer cmdlet, or scheduling a DeploymentPro user with the Start-BT_DpUser cmdlet, among many others.

Although on our BitTitan cmdlet reference page you will see many environments that can be set, the ones relevant and externally available are listed below, and they apply to both BT and MW cmdlets:

Value     Description
BT        The default BitTitan.com environment
China     The BitTitan China environment
Germany   The BitTitan Germany environment

Based on the above, to change your PowerShell environment to Germany, after you open our PowerShell module, you would run the following:

Set-BT_Environment -Environment Germany
Set-MW_Environment -Environment Germany

Note that you can and should include those lines in your scripts if you intend to run them consistently in those environments. Also, you can't run commands against two different environments for MW or BT in the same session; you would have to switch environments. To switch back to our .com platform, use the value BT.
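
For example, to return to the default .com platform later in the same session:

Set-BT_Environment -Environment BT
Set-MW_Environment -Environment BT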

The concept of a ticket and how to create one

Once you ramp up your skills in our SDK, you’ll quickly learn that you can’t do anything with it without a ticket.

So what is a ticket?

A ticket in the BitTitan PowerShell module is an authentication token with several parameters that you use each time you execute an action. The main parameters of the ticket are:

  • WorkgroupId – You can create a ticket specific to a BitTitan workgroup
  • IncludeSharedProjects – When set to true, this parameter allows you to see all projects, not just the ones you created. This parameter is exclusive to MW tickets
  • ExpirationDate – When the ticket expires. Tickets are valid for one week by default

How do I create a ticket?

You need to create one ticket for MW actions and one for BT actions, using the following cmdlets:

$MWTicket = Get-MW_Ticket -Credentials (Get-Credential)
$BTTicket = Get-BT_Ticket -Credentials (Get-Credential)

You can then leverage those tickets each time you run a cmdlet, for example:

Get-MW_Mailbox -Ticket $MWTicket

or

Get-BT_Workgroup -Ticket $BTTicket

Create a MigrationWiz ticket with project sharing

MigrationWiz enables collaboration. This means that, via either the UI or PowerShell, you can access and manage all objects within a workgroup, regardless of whether you created them.

Project sharing can be enabled or disabled. In the UI you have a checkbox for that, but with PowerShell, what determines whether you're using project sharing is the way you create your ticket. We highly recommend that you use project sharing at all times, so now that you understand what a ticket is and how to create it, let's look at how to do it with sharing enabled.

To create a ticket with project sharing you need to add two parameters to the cmdlet:

  • IncludeSharedProjects
  • WorkgroupId

And here's the complete command:

$MWTicket = Get-MW_Ticket -Credentials (Get-Credential) -WorkgroupId [yourworkgroupid] -IncludeSharedProjects

You can obtain your workgroup ID either by running the Get-BT_Workgroup cmdlet or by copying it from the URL in your browser when you're inside the workgroup.
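
For example, a quick way to list your workgroups and their IDs (the Id and Name property names are my assumption; check the cmdlet reference for the exact output properties):

Get-BT_Workgroup -Ticket $BTTicket | Select-Object Id, Name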

BT tickets do not need project sharing; it only applies to MW tickets.

The bottom line

Hopefully the information above helps you understand in more detail how to connect and authenticate to the BitTitan systems via the SDK, and all the options you have to incorporate into your scripts.

Ramp up your skills and start using the BitTitan PowerShell

The purpose of this article is to provide you with the information you need to code with the BitTitan PowerShell module. We’re not going to teach you how to write a PowerShell script, code error handling, or build a loop. There are good resources online to help you develop those skills. If you’re familiar with PowerShell, you know there’s a learning curve for each new module. For example, you’ll want to learn how to connect and how to execute tasks such as creating or modifying objects. We want to help you get ahead of the curve so you can successfully build BitTitan SDK automation.

Here are resources that will help you ramp up your BitTitan PowerShell skills.

Cmdlet reference page

Once you’ve installed and started using the PowerShell module, you’ll want to check out our cmdlet reference page. This is where you’ll find all available cmdlets for the module, as well as some valuable examples of each parameter.

To give you an idea, if you need help defining the items to be migrated with Add-MW_MailboxMigration, click on the cmdlet in the left menu. Then scroll down to ItemTypes, where you'll see a table of all available types.
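
As a hedged illustration (the -MailboxId and -Type parameters and their values below are my assumptions from common SDK usage; confirm everything on the reference page), a migration pass limited to specific item types could look like this:

#Grab a mailbox from the project and start a pass that only migrates mail and calendar items (parameter names assumed)
$mailbox = Get-MW_Mailbox -Ticket $MWTicket | Select-Object -First 1
Add-MW_MailboxMigration -Ticket $MWTicket -MailboxId $mailbox.Id -Type Full -ItemTypes "Mail,Calendar"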

GitHub script repository

In our GitHub repository you’ll find a variety of scripts, from simple scripts for basic tasks to elaborate scripts that execute complex migration workflows. Check it out here.

PowerShell blogs

We blog about our PowerShell module and scripting on GitHub. Check in often at www.bittitan.com/blog/tips-and-tricks/ to see use cases for scripts or learn some coding with our SDK.

(this blog is a repost of the original blog that I wrote and published here)

A mix of PowerShell and Graph API to create a Microsoft Teams test environment and test the BitTitan MigrationWiz Teams migration tool

For those of us who work a lot on cloud data migration projects, one challenge that I, at least, end up having is creating random data to migrate, whether to test a new migration workload or endpoint, run a proof of concept, or troubleshoot a specific issue.

This blog post focuses specifically on adding data to Microsoft Teams, so if for any reason, stated above or not, you need to populate your Microsoft Teams environment, keep reading.

And of course, if you're considering migrating Microsoft Teams, you should go to the BitTitan website and read more about it. We have an awesome tool that you should definitely use to migrate Teams, and if you reach out to me I can get you some help testing it, after you create your test environment with the help of this blog post!

What we provide in this blog post is a script, authored by Ash Karczag and co-authored by me, that leverages both PowerShell and the Graph API (yep, that's how awesome the script is) to create and populate a bunch of stuff in your Teams environment, in your Office 365 test tenant.

Note: This script wasn't designed to be executed in production tenants, since everything it creates is based on random names (e.g., Team names, Channel names) and it doesn't have error handling or logging.

What will the script create?

The script will execute the following actions to create objects in Office 365:

  • Create 2 users
  • Create 10 Teams
  • Create 5 public channels per Team
  • Publish 5 conversations in each channel of each Team
  • Upload 5 files to the SharePoint document library of each Team

Which SDK modules or APIs do you need to configure?

The script leverages multiple SDKs for several reasons, including reading and creating objects, and the Microsoft Teams Graph API is used to create the conversations and upload the files. In summary, you need:

  • Microsoft Azure MSOL Module to connect to your Office 365 tenant (if you don’t have it installed, run “Install-Module MSOnline”)
  • Microsoft Teams PowerShell (if you don’t have it installed, run “Install-Module -name MicrosoftTeams”)
  • Microsoft Teams Graph API (instructions below on how to set it up in your tenant)

How to configure the Microsoft Teams Graph API authentication

The script requires Microsoft Teams Graph API access, which is done via OAuth2 authentication. The Graph API will be used to create conversations and to upload the files.

To configure the authentication, follow the steps below:

  1. Go to portal.azure.com, sign in with global admin
  2. Select Azure Active Directory
  3. Select App Registrations
  4. Select + New Registration
  5. Enter a name for the application, for example “Microsoft Graph Native App”
  6. Select “accounts in this organizational directory only”
  7. Under Redirect URI, select the drop down and choose “Public client/native” and enter “https://redirecturi.com/”
  8. Select “Register”
  9. Make a note of your Application (client) ID, and your Directory (tenant) ID
  10. Under Manage, select “API Permissions”
  11. Click + Add Permission
  12. In the Request API Permissions blade, select “Microsoft Graph”
  13. Select “Delegated Permissions”
  14. Type “Group” in the Search
  15. Under the “Group” drop down, select “Group.ReadWrite.All”
  16. Select “Add Permissions”
  17. You will get a warning message that says “Permissions have changed, please wait a few minutes and then grant admin consent. Users and/or admins will have to consent even if they have already done so previously.”
  18. Click “Grant admin consent for <tenant>”
  19. Wait for permissions to finish propagating, you’ll see a green check-mark if it was successful
  20. Under Manage, select Certificates & Secrets
  21. Select “+ New client secret”
  22. Give the secret a name that indicates its purpose (ex. PowerShell automation secret)
  23. Under Expires, select Never
  24. Copy the secret value. YOU WILL NOT SEE THIS SECRET AGAIN AFTER THIS
  25. Now you have the Client ID, Tenant ID, and Secret to authenticate to Graph using PowerShell

Once the authentication is configured and you have your secret, you can proceed to execute the script.
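
If you want to sanity-check those three values before running the script, you can request a token yourself. Here's a minimal sketch assuming the OAuth2 resource owner password credentials flow against the v2.0 token endpoint (the script's own internals may differ):

#Request a Graph token with the tenant ID, client ID and secret (assumes no MFA on the admin account)
$body = @{
    grant_type    = "password"
    client_id     = "<AppID>"
    client_secret = "<ClientSecret>"
    scope         = "https://graph.microsoft.com/.default"
    username      = "<AdminUsername>"
    password      = "<AdminPass>"
}
$token = (Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/<DirectoryID>/oauth2/v2.0/token" -Body $body).access_token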

How do I get the script

The script is published on Ash's GitHub and is called Populate_Teams_Data.ps1. Copy the content into Notepad or any script editor on your machine and save it with the .ps1 extension.

How to execute the script

So now let's go over the steps to execute the script. I'll break them down so they're easier to follow:

  • Open PowerShell – It is recommended that you open it as an Administrator, since the script will try to set the execution policy to RemoteSigned

  • Browse to the .ps1 file location and execute the following:
.\Populate_Teams_Data.ps1 -AdminUser "<AdminUsername>" -AdminPass "<AdminPass>" -License "<LicenseSkuID>" -tenantId "<DirectoryID>" -clientId "<AppID>" -ClientSecret "<ClientSecret>"

The placeholders above map to the following values:

    • AdminUsername – your Office 365 Global admin
    • AdminPass – the password for the GA
    • LicenseSkuID – the license AccountSkuId that you want to apply to the newly created users (Note: connect to the MSOnline module and run the Get-MsolAccountSku cmdlet if you don't know the value)
    • DirectoryID – the Directory (tenant) ID you obtained in step 9 of the section above
    • AppID – the Application (client) ID you obtained in step 9 of the section above
    • ClientSecret – the secret value you obtained in step 24 of the section above

Script Output

The script will describe the steps it is taking during its execution, such as:

  • Creating the users and the teams

  • Adding users to Teams

  • Creating channels per team

  • Creating usable data in your teams

Additional notes about the script

The following should be considered when executing the script:

  • This script was designed and created to be run against empty tenants. It's OK if you have users or are using other workloads, but do not run this in a production tenant, since the script was not designed for that.
  • The script can be executed multiple times, although it was created for a single execution. It will check whether Teams and Channels need to be created, but it will always try to create the users, unless they already exist. Keep that in mind if you choose to run the script multiple times to create more usable data.
  • The script only creates small files in the Teams. If you want to do a migration test with a large volume of files, you’ll have to upload them manually.
  • The script leverages the Graph API, which is the optimal way to create messages and upload files into Teams, but it's also a beta API, so you might see random timeouts from time to time.

We welcome all feedback you might have. Enjoy!

Apply file permissions in a SharePoint Online document library using PowerShell

Hi all, this is a follow-up post to the one I published yesterday, about applying permissions to folders in a SharePoint Online document library using PowerShell.

In this post we will look at how to apply those permissions to files, not folders. We will also take a different approach with the code: the code I am sharing with you applies permissions to all files within the top-level folders of the SharePoint library.

So what PowerShell module should you use?

Let me start by saying that there are multiple ways to programmatically apply those permissions to a SharePoint library. In this case I am using the SharePoint PnP PowerShell module.

In the link above, you can learn a bit more about the SharePoint Patterns and Practices module, as well as follow the steps to install it. Be aware that the PnP commands use CSOM, so you might get throttled at some point if you execute too many.

Now let's look at the code in detail

I will try to break down the script so you understand everything it does and can adapt it to your needs.

Configuration hard-coded values

It's always best not to hard-code any values in your script, so you don't have to edit it each time you want to run it for a different scenario, but I wanted to keep this one simple, so here it goes:

#This value is for your SharePoint Online Team site URL. In my case my team name is "Test1". Change yours accordingly
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
#This is your list name. If you run Get-PnPList you'll see that Documents is the Shared Documents library. You will need this for the cmdlet that sets the permissions
$ListName = "Documents"
#This is the user account you want to give permissions to
$UserAccount = "user1@domain.com"
#Role that you want to add (see permissions section for more information)
$Role = "Contribute"

Connect to the SharePoint PnP Online PowerShell

#Connect to PnP Online. You will get prompted for credentials.
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Grab all Folders

#I created a small try/catch to exit the script if we can't grab the folders
Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

Create a loop to process each folder, grabbing all files and applying the permissions

#And finally the code for the loop to go folder by folder, grab the files and apply the permissions
Foreach ($Folder in $AllFolders){
    $FolderName = $Folder.Name
    $FolderRelativeURL = "/Shared Documents/" + $FolderName
    Try{
        $AllFiles = Get-PnPFolderItem -FolderSiteRelativeUrl $FolderRelativeURL -ItemType File -ErrorAction Stop
    }
    Catch{
        Write-Host "Failed to list the files for '$($FolderName)'" -ForegroundColor Red
        #Skip this folder if we couldn't list its files
        Continue
    }
    if ($AllFiles.Count -ne 0){
        Foreach ($File in $AllFiles){
            try{
                Set-PnPListItemPermission -List $ListName -Identity $File.ListItemAllFields -User $UserAccount -AddRole $Role -ErrorAction Stop
                Write-Host "Folder $($FolderName): File $($File.Name) processed with success" -ForegroundColor Green
            }
            Catch{
                Write-Host "Folder $($FolderName): Failed to apply permissions to file $($File.Name). Error: $($_.Exception.Message)" -ForegroundColor Red
            }
        }
    }
    Else{
        Write-Host "'$($FolderName)' does not have any files" -ForegroundColor Yellow
    }
}

Now the entire script for you to copy

#Config Variables
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
$ListName = "Documents"
$UserAccount = "user1@yourtenant.onmicrosoft.com"
$Role = "Contribute"

#Connect to PnP Online
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

Foreach ($Folder in $AllFolders){
    $FolderName = $Folder.Name
    $FolderRelativeURL = "/Shared Documents/" + $FolderName
    Try{
        $AllFiles = Get-PnPFolderItem -FolderSiteRelativeUrl $FolderRelativeURL -ItemType File -ErrorAction Stop
    }
    Catch{
        Write-Host "Failed to list the files for '$($FolderName)'" -ForegroundColor Red
        #Skip this folder if we couldn't list its files
        Continue
    }
    if ($AllFiles.Count -ne 0){
        Foreach ($File in $AllFiles){
            try{
                Set-PnPListItemPermission -List $ListName -Identity $File.ListItemAllFields -User $UserAccount -AddRole $Role -ErrorAction Stop
                Write-Host "Folder $($FolderName): File $($File.Name) processed with success" -ForegroundColor Green
            }
            Catch{
                Write-Host "Folder $($FolderName): Failed to apply permissions to file $($File.Name). Error: $($_.Exception.Message)" -ForegroundColor Red
            }
        }
    }
    Else{
        Write-Host "'$($FolderName)' does not have any files" -ForegroundColor Yellow
    }
}

What if I want to do a different folder

The code above applies permissions to the files in the top-level folders within the Shared Documents library of your Team site. If you want to target a different folder, edit the following in the code:

  • The $FolderRelativeURL assignment inside the loop: add the folder structure here (e.g., "/Shared Documents/FolderA/SubFolderB/")
  • The -FolderSiteRelativeUrl parameter of the first Get-PnPFolderItem call: add the folder structure here as well (e.g., "/Shared Documents/FolderA/SubFolderB")

How about the subfolders

This script does not process files inside subfolders, i.e., if you have a top-level folder "FolderA", the script will add permissions to all files inside that folder, but it won't touch files in the subfolder "FolderA\SubFolderA". It's much more complex to create an iteration that analyzes and processes the depth of the folder structure, and I wanted to keep this simple.
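
That said, if you want to experiment, here's a rough recursive sketch reusing the same cmdlets (untested, and it assumes $ListName, $UserAccount and $Role from the config section are already set):

#Apply the permissions to the files in a folder, then recurse into its subfolders
function Set-FilePermissionsRecursive {
    param([string]$SiteRelativeUrl)
    Foreach ($File in (Get-PnPFolderItem -FolderSiteRelativeUrl $SiteRelativeUrl -ItemType File)){
        Set-PnPListItemPermission -List $ListName -Identity $File.ListItemAllFields -User $UserAccount -AddRole $Role
    }
    Foreach ($SubFolder in (Get-PnPFolderItem -FolderSiteRelativeUrl $SiteRelativeUrl -ItemType Folder)){
        Set-FilePermissionsRecursive -SiteRelativeUrl ($SiteRelativeUrl + "/" + $SubFolder.Name)
    }
}
Set-FilePermissionsRecursive -SiteRelativeUrl "/Shared Documents"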

Alternatively, you can process subfolders separately by targeting them individually, following the steps in the section above.

How about permissions

The example above applies the role "Contribute" to the files, for the user defined. If you want to know more details about which role to apply, please go to this excellent article to understand permission levels in SharePoint.

Final notes

If you haven't read my previous post about SharePoint permissions, do it to learn more about folder-level permissions and how to apply them via PowerShell. The two posts complement each other really well.

There are ways to make this script more complex and do more things (like processing subfolders, processing multiple users, etc.), but just like in the other post, the code shared here gives you a good baseline and solid basic features.

I hope it's useful!

Apply folder permissions in a SharePoint Online document library using PowerShell

Being a consultant with a primarily messaging background, it’s always interesting for me to blog about SharePoint and be out of my comfort zone.

What I am going to show you today is how to apply permissions to folders in a SharePoint Online document library, using PowerShell.

So what PowerShell module should you use?

Let me start by saying that there are multiple ways to programmatically apply those permissions to a SharePoint library. In this case I am using the SharePoint PnP PowerShell module.

In the link above, you can learn a bit more about the SharePoint Patterns and Practices module, as well as follow the steps to install it. Be aware that the PnP commands use CSOM, so you might get throttled at some point if you execute too many.

Now let's look at the code in detail

I will try to break down the script so you understand everything it does and can adapt it to your needs.

Configuration hard-coded values

It's always best not to hard-code any values in your script, so you don't have to edit it each time you want to run it for a different scenario, but I wanted to keep this one simple, so here it goes:

#This value is for your SharePoint Online Team site URL. In my case my team name is "Test1". Change yours accordingly
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
#This is your list name. If you run Get-PnPList you'll see that Documents is the Shared Documents library. You will need this for the cmdlet that sets the permissions
$ListName = "Documents"
#This is the user account you want to give permissions to
$UserAccount = "user1@domain.com"
#Role that you want to add (see permissions section for more information)
$Role = "Contribute"
#Relative URL of the parent folder for all folders you are applying permissions to (see the different folder section below for more information on how to change this to target another folder)
$FolderRelativeURL = "/sites/Test1/Shared Documents/"

Connect to the SharePoint PnP Online PowerShell

#Connect to PnP Online. You will get prompted for credentials.
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Grab all Folders to apply permissions to

#I created a small try/catch to exit the script if we can't grab the folders
Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

Apply the permissions

#And finally the code for the loop to go folder by folder and apply the permissions
Foreach ($Folder in $AllFolders){
    $RelativeURL = $FolderRelativeURL + $Folder.Name
    Write-Host $RelativeURL
    $FolderItem = Get-PnPFolder -Url $RelativeURL
    Set-PnPListItemPermission -List $ListName -Identity $FolderItem.ListItemAllFields -User $UserAccount -AddRole $Role
}

Now the entire script for you to copy

#Config Variables
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
$ListName = "Documents"
$FolderRelativeURL = "/sites/Test1/Shared Documents/"
$UserAccount = "user1@yourtenant.onmicrosoft.com"
$Role = "Contribute"

#Connect to PnP Online
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

Foreach ($Folder in $AllFolders){
    $RelativeURL = $FolderRelativeURL + $Folder.Name
    Write-Host $RelativeURL
    $FolderItem = Get-PnPFolder -Url $RelativeURL
    Set-PnPListItemPermission -List $ListName -Identity $FolderItem.ListItemAllFields -User $UserAccount -AddRole $Role
}

What if I want to do a different folder

The code above applies permissions to the top-level folders within the Shared Documents library of your Team site. If you want to target a different folder, edit the following in the code:

  • The $FolderRelativeURL variable in the config section: add the folder structure here (e.g., "/sites/Test1/Shared Documents/FolderA/SubFolderB/")
  • The -FolderSiteRelativeUrl parameter of the Get-PnPFolderItem call: add the folder structure here as well (e.g., "/Shared Documents/FolderA/SubFolderB")

How about permissions

The example above applies the role "Contribute" to the folders, for the user defined. If you want to know more details about which role to apply, please go to this excellent article to understand permission levels in SharePoint.

Final notes

I hope this post is helpful. Like I stated initially, there are easy ways to make the script more complex but easier to manage, such as removing hard-coded values or creating a loop to add permissions to multiple users. Using the code above as a reference will surely save you some time or give you that quick win you need.

Microsoft Teams PowerShell – A simple use case to get you started

Not long ago I blogged about the new Microsoft Teams PowerShell module. Today I want to give you a quick example of how you can leverage it to automate and make your work more efficient: I'll show you how to list all Team Channels in your organization.

Connect to your Microsoft Teams Organization using PowerShell

The first thing you need to do is connect your Teams PowerShell module and authenticate to your Office 365 tenant.

  • If you don't have the Microsoft Teams PowerShell module installed, click on the link in this article and install it
  • Once you have it installed, the Connect-MicrosoftTeams cmdlet will be available. It's as easy as running it and using the authentication prompt to pass the credentials, but you can also pass basic credentials using the -Credential parameter

List all Teams in your Microsoft Teams organization

To list all Teams in your organization, you can use the Get-Team cmdlet. By default, the cmdlet's output includes the GroupId, DisplayName, Visibility, Archived, MailNickName and Description.

You can format your output to include any relevant Team attribute. Run Get-Team | Format-List to list them all.

List all Team Channels in your organization

Now let's finally execute the use case of this post. To list all Team Channels in your organization, you can leverage the Get-TeamChannel cmdlet.

This cmdlet has a mandatory parameter, -GroupId, which is basically the ID of each Team. That said, you have two options:

Option 1: you run "Get-TeamChannel -GroupId <TeamGroupID>"

You can use the Get-Team cmdlet to get the GroupId value for each team.

Option 2: you grab all Teams into an array and process each Team to list its channels, using the code snippet below.

$AllTeams = Get-Team

Foreach ($team in $AllTeams) {Get-TeamChannel -GroupId $team.GroupId | Ft $team.DisplayName, DisplayName}

What I did above was change the output of the command to show, in a readable way, which Team the Channels belong to. There are more organized ways to format the output, both to the console and to an output file. Nevertheless, this can easily guide you in that direction, and if you need any help let me know.
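
For example, here's a slightly more organized take, using a calculated property and exporting to CSV (the output path is just an example):

$AllTeams = Get-Team
$AllTeams | ForEach-Object {
    $team = $_
    Get-TeamChannel -GroupId $team.GroupId |
        Select-Object @{Name='Team';Expression={$team.DisplayName}}, DisplayName
} | Export-Csv -Path .\TeamChannels.csv -NoTypeInformation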

And that’s it. I can and will blog much more around Teams PowerShell. If you haven’t used it yet, you should.

Happy coding!

Manage your Exchange Online from the Azure Cloud Shell

The Microsoft Exchange product group recently announced that you can now manage your Exchange Online via the Azure Cloud Shell.

If you're not familiar with the Azure Cloud Shell, it's basically a browser-based shell experience in the cloud, fully maintained by Microsoft, that you can leverage to manage your Azure subscriptions and now also Exchange Online.

It's a step towards running automation from anywhere, with minimal to no effort, which to me is one of the next big things coming for us as IT consultants.

I recently wrote a blog article on how to use a multi-factor authentication account to connect to Exchange Online, and what Microsoft just did was provide the Exchange Online Remote PowerShell module by default in the Azure Cloud Shell. Smart idea, and I bet an awesome quick win for them.

So are there any gotchas?

The quick answer is no major ones, but I still want to point out a few things. The first one is that you need an Azure subscription; without one, the Cloud Shell won't let you proceed.

Although many organizations embracing the Microsoft cloud are already using Office 365 and Azure, some are not. Some just use Office 365, and it's good to point out that if you want to leverage this new feature, it's time to create that Azure subscription. The only cost of using the Azure Cloud Shell is the Azure Storage cost (storage is also mandatory), which is almost insignificant.

Another smaller thing worth pointing out is MFA (Multi-Factor Authentication), as Microsoft expects you to have MFA enabled across all accounts. I guess that's directly related to the fact that the module you're leveraging is for logins with MFA-enabled admin accounts.

Finally, Microsoft also points out that sessions inactive for more than 20 minutes will be reclaimed. Keep that in mind when you build your automation, or just when you have your session open for daily work. This is expected behavior for this type of cloud and container-based shell.

What else should you know?

I am not going to transcribe the article I pointed you to at the top of this post, but I do want to highlight the main takeaways:

  • You can access the Azure Cloud Shell in several different ways, but the Azure portal (portal.azure.com) and the dedicated Shell portal (shell.azure.com) are the two main ones
  • All Exchange Online cmdlets should be available
  • RBAC fidelity is intact
  • This does not mean there are plans to decommission the current Exchange PowerShell module (yet?) 🙂
  • You'll use the Connect-EXOPSSession cmdlet to start the session
  • Microsoft provides SSO (Single Sign-On), so you don't have to log in twice (i.e., Azure portal and Exchange Online). Yay!!!

And that's it, enjoy!!!

Use a Multi Factor Authentication enabled account to connect to Exchange Online PowerShell

Security is a big theme for any online application, and multi-factor authentication is used more and more every day. Those security standards easily extend to PowerShell or any other admin session used to execute tasks in your tenant. It will be more and more common to see organizations that no longer allow their IT administrators to connect to Exchange Online using basic auth.

If you’re a heavy PowerShell user like me, this is for you. Microsoft has an excellent article on how to leverage an MFA account to connect to Exchange Online PowerShell.

You should read the article, as these things tend to be updated and you'll be sure to have the latest steps, but in essence what you need to do is:

  • Make sure you have all the prerequisites (i.e., .NET Framework), especially on older versions of Windows
  • Install the Exchange Online Remote PowerShell module (from the Exchange Online EAC)
  • Make sure that WinRM allows basic authentication

Finally, you can run the following command to connect to Office 365 commercial:

Connect-EXOPSSession -UserPrincipalName chris@contoso.com

And if you are connecting to a different instance, such as GCC High or Office 365 Germany, you need to specify the ConnectionUri and AzureADAuthorizationEndPointUri parameters (see the official article for the parameter configurations).

Connect-EXOPSSession -UserPrincipalName <UPN> [-ConnectionUri <ConnectionUri> -AzureADAuthorizationEndPointUri <AzureADUri>]

Here's what the PowerShell session looks like after you install the module, followed by the authentication process (screenshots in the original post).

And that's it. Happy scripting!!!

The Microsoft Teams PowerShell module GA is here

Great news for those who, like me, are heavy users of PowerShell and its available modules for the different Microsoft workloads: the Microsoft Teams PowerShell module is now generally available, after being in beta since last year.

There are several new functionalities, but the one I want to highlight is the new -TeamsEnvironmentName switch on the Connect-MicrosoftTeams cmdlet, which allows you to connect to environments other than 365 commercial, such as Government tenants.
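
For example, something like the line below should get you into a GCC High tenant (the environment value is my assumption; check the cmdlet help for the supported values):

Connect-MicrosoftTeams -TeamsEnvironmentName TeamsGCCH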

If you haven't already, read Microsoft's Teams PowerShell Overview documentation.

I will be posting about Teams PowerShell scripting soon. Enjoy!

The value of the BitTitan SDK to your Enterprise migration project

For those of you with consulting experience, especially in enterprise projects, currently working in the migration business as a consultant or in other roles, you'll know that there's a huge difference between small/medium-size projects and enterprise projects, especially in the way the project is executed.

It's usually very easy to use a tool's user interface to perform actions for 10, 20 or 50 users, but when you have dozens or hundreds of thousands of users to migrate, that quickly becomes a challenge. In this post, we'll discuss some of the key areas where leveraging the BitTitan PowerShell module as a key element for task execution in your project brings huge value.

Automate recurring tasks

In enterprise migration projects you’ll have hourly, daily or weekly tasks that you have to perform, such as:

  • start/restart migrations
  • create reports
  • move users between migration stages
  • retry failed migrations
  • add new batch of users to migration project
  • schedule the Outlook profile reconfiguration for a batch of users

During your project, most if not all of the tasks described above will have to be executed multiple times a week or even multiple times a day, depending on the task.

So why should you automate those tasks? The answer depends on the task, but reasons include saving man-hours, making sure tasks get executed on time, linking task executions (e.g., move a user to a different project and start another migration pass), and so on.

Create your own reports or integrate with an external reporting system

The BitTitan user interface provides a whole variety of out-of-the-box reports, such as user migration statistics, user device details, etc. But from my experience there's always one report, or several, that you or your customer needs and that can't easily be extracted from any user interface. With PowerShell you can extract any information and combine it into custom reports built to cover the exact needs you have.

As an example, if you want the total number of users across all projects with migrations completed, running, failed, completed with errors or queued, the best way to get that is via the BitTitan PowerShell. You can filter in the user interface, and you can also send the project statistics CSV to your email, but that is done per project and it doesn't scale, especially if you have a large number of projects.

Another example is getting item counts and sizes of successfully migrated items for all users in all projects. Again, you could use the project statistics CSV sent via email, but that does not scale if you have 50 projects and need that report twice a day, does it?
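
To make the first example concrete, here's a rough sketch (the cmdlet names follow the SDK's Get-MW_* pattern; the exact parameters and property names below are assumptions to confirm in the cmdlet reference):

#Tally the status of the latest migration pass per mailbox, across all projects
$summary = @{}
Foreach ($connector in Get-MW_MailboxConnector -Ticket $MWTicket){
    Foreach ($mailbox in Get-MW_Mailbox -Ticket $MWTicket -ConnectorId $connector.Id){
        $lastPass = Get-MW_MailboxMigration -Ticket $MWTicket -MailboxId $mailbox.Id |
            Sort-Object StartDate | Select-Object -Last 1
        if ($lastPass) { $summary["$($lastPass.Status)"] = 1 + $summary["$($lastPass.Status)"] }
    }
}
$summary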

You can also leverage the information provided by PowerShell to feed a reporting portal, from which you can monitor all major aspects of the migration (like the ones mentioned in the examples above), and which PowerShell can update at short intervals (e.g., every 5 minutes). That gives you the ability to synthesize the information you consider most relevant and make it available to the entire team executing the project, keeping it up to date without human interaction, instead of, for example, having to compile it and send it via email multiple times a day.

Multi-platform task execution

In the scope of an enterprise migration project, the "BitTitan tasks" are just part of the equation. There's much more to do than just move the data and reconfigure the Outlook clients. That being said, it's very common for consultants to build and use complex scripting for enterprise-level migrations that leverages multiple SDKs and PowerShell modules. Some examples:

  • Leverage the MSOnline PowerShell module to create users and assign licenses in Office 365 and, after that task is checked and completed, leverage the BitTitan PowerShell to start the pre-stage migration for those users.
  • Stamp forwarding addresses, via either the Exchange Online or the Exchange on-premises PowerShell module, before triggering the full migration pass with the BitTitan PowerShell (see the sketch after this list).
  • Verify that the BitTitan migration is completed, via the BitTitan PowerShell, and convert mailboxes to mail users or remote mailboxes once the completion status is marked as successful.

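As a hedged illustration of the second example (Set-Mailbox is the Exchange Online cmdlet; the BitTitan parameter and property names, such as ExportEmailAddress, are assumptions to validate against the cmdlet reference):

#For each user in a CSV batch, stamp a forward in Exchange Online, then start a full pass
Foreach ($row in Import-Csv .\batch.csv){
    Set-Mailbox -Identity $row.SourceAddress -ForwardingSmtpAddress $row.TargetAddress
    $mailbox = Get-MW_Mailbox -Ticket $MWTicket -ConnectorId $connectorId |
        Where-Object { $_.ExportEmailAddress -eq $row.SourceAddress }
    Add-MW_MailboxMigration -Ticket $MWTicket -ConnectorId $connectorId -MailboxId $mailbox.Id -Type Full
}
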
The three examples above highlight some of the most common tasks that require multi-platform execution, but it can get much more complex than that. You can automate task precedence and execute much more than just two tasks, leveraging all available PowerShell modules.

Trying to link those tasks via the user interfaces, with human interaction, is very challenging and time-consuming. It requires access to multiple user interfaces, and during the execution you can't automate the task dependencies, which makes the process prone to error.

Optimize execution of complex or lengthy tasks

We talked about the benefits of automating recurring tasks and about multi-platform task execution, but it's also very relevant to mention task optimization for very complex or lengthy tasks. The task doesn't need to be recurring, nor does it need to be cross-platform; all you need to consider when you plan to code a task in PowerShell is, "Can I remove a lot of complexity and execution time if I run this task via PowerShell?". If the answer is yes, then code it.

Removing the complexity will allow you to have literally anyone executing that task, and not just a senior resource in your project.

Removing execution time is self-explanatory: it saves you money and helps keep you within the estimated project timelines, since it's easier to predict a task's execution time when it's scripted versus when it's executed by a human.

Now to put this into context, let me give you some examples of complex or lengthy BitTitan tasks:

  • Retry errors for all eligible users in a project with 10k users
  • Schedule DeploymentPro for users with 5 different vanity domains
  • Start a migration for a list of users provided via CSV
  • Create 25 MigrationWiz projects, one per user batch
  • Create 10 MigrationWiz endpoints for multiple admin accounts
  • Add 5k recipient mappings to a MigrationWiz project

To be honest, all of the tasks described above are more lengthy than complex, since in my opinion there's not a lot of complexity in the BitTitan tools; nevertheless, you might not want lower-level resources executing them.

On the other hand, all of the tasks above take a significant amount of time to complete, and scripting them will save you a lot of precious hours.

Proactive monitoring and task execution

The BitTitan user interface already has some monitoring features, such as sending an email to the administrator when a migration fails, but with PowerShell you can take monitoring and proactive task execution to the next level.

Instead of just notifying the project executor that a migration failed, you can trigger another migration retry, and you can go to the detail of only doing so if the failure reason is not something like bad admin credentials. You can make it as clever as you want.

You can also choose not to have to check your email to see if migrations failed, and just create a script that checks for failed migrations every 30 minutes, checks the reason for the failure and, if it's worth retrying, starts another migration pass.

The sentence above explains why I think proactive monitoring and proactive task execution are tied together. Let that sink in: imagine a project with 75 thousand users and around 5 thousand concurrent migrations running at any given point in time, and think about how proactive monitoring and task execution can not only save you hundreds of work hours, but also keep your project timelines on schedule. The last thing you want, in a large and complex migration project, is to lose hours of migration time, where nothing is being migrated for anyone, because an error occurred and the resolution time was high.

Summary

So how do you, based on everything you read above, plan and execute an enterprise migration project with automation?

Use a tool that you can automate with:

When you're choosing a tool for your enterprise project, you should heavily consider one that has a powerful PowerShell module, like the BitTitan tools do. You can check the BitTitan PowerShell documentation here.

Prepare and test all of your automation:

Make sure you have all your scripts ready and tested. If you need help coding with the BitTitan SDK, please reach out to me directly or to BitTitan support, who will forward you to the appropriate department to provide the help you need.

And that's it. I hope this post has been helpful, and please reach out if you have any questions!