A mix of PowerShell and Graph API to create a Microsoft Teams test environment and test the BitTitan MigrationWiz Teams migration tool

For those of us who work a lot on cloud data migration projects, one challenge that I, at least, keep running into is creating random data to migrate, whether to test a new migration workload or endpoint, run a proof of concept or even troubleshoot a specific issue.

This blog post is focused specifically on adding data to Microsoft Teams, so if, for any of the reasons stated above or otherwise, you need to populate your Microsoft Teams environment, keep reading.

And of course, if you're considering migrating Microsoft Teams, you should go to the BitTitan website and read more about it. We have an awesome tool that you should definitely use to migrate Teams, and if you reach out to me I can get you some help testing it, after you create your test environment with the help of this blog post!

What we provide in this blog post is a script, authored by Ash Karczag and co-authored by me, that leverages both PowerShell and the Graph API (yep, that's how awesome the script is) to create and populate a bunch of stuff in your Teams environment, in your Office 365 test tenant.

Note: This script wasn't designed to be executed in production tenants, since everything it creates is based on random names (i.e. Team names, Channel names, etc.) and it doesn't have error handling or logging.

What will the script create?

The script will execute the following actions to create objects in Office 365:

  • Create 2 users
  • Create 10 Teams
  • Create 5 public channels per Team
  • Publish 5 conversations in each channel of each Team
  • Upload 5 files to the SharePoint document library of each Team

Which SDK modules or APIs do you need to configure?

The script leverages multiple SDKs, for different purposes that include reading and creating objects, and the Microsoft Teams Graph API is used to create the conversations and upload the files. In summary, you need:

  • Microsoft Azure MSOL Module to connect to your Office 365 tenant (if you don’t have it installed, run “Install-Module MSOnline”)
  • Microsoft Teams PowerShell (if you don’t have it installed, run “Install-Module -name MicrosoftTeams”)
  • Microsoft Teams Graph API (instructions below on how to set it up in your tenant)
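
If you want to sanity-check the two PowerShell modules before running the script, something like this should work (a quick verification only; the script performs its own connections):

#Install the modules if needed, then connect
Install-Module MSOnline
Install-Module -Name MicrosoftTeams
Connect-MsolService        #prompts for your Office 365 global admin credentials
Connect-MicrosoftTeams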

How to configure the Microsoft Teams Graph API authentication

The script requires Microsoft Teams Graph API access, which is done via OAuth2 authentication. The Graph API will be used to create the conversations and to upload the files.

To configure the authentication, follow the steps below:

  1. Go to portal.azure.com, sign in with global admin
  2. Select Azure Active Directory
  3. Select App Registrations
  4. Select + New Registration
  5. Enter a name for the application, for example “Microsoft Graph Native App”
  6. Select “accounts in this organizational directory only”
  7. Under Redirect URI, select the drop down and choose “Public client/native” and enter “https://redirecturi.com/”
  8. Select “Register”
  9. Make a note of your Application (client) ID, and your Directory (tenant) ID
  10. Under Manage, select “API Permissions”
  11. Click + Add Permission
  12. In the Request API Permissions blade, select “Microsoft Graph”
  13. Select “Delegated Permissions”
  14. Type “Group” in the Search
  15. Under the “Group” drop down, select “Group.ReadWrite.All”
  16. Select “Add Permissions”
  17. You will get a warning message that says “Permissions have changed, please wait a few minutes and then grant admin consent. Users and/or admins will have to consent even if they have already done so previously.”
  18. Click “Grant admin consent for <tenant>”
  19. Wait for permissions to finish propagating, you’ll see a green check-mark if it was successful
  20. Under Manage, select Certificates & Secrets
  21. Select “+ New client secret”
  22. Give the secret a name that indicates its purpose (ex. PowerShell automation secret)
  23. Under Expires, select Never
  24. Copy the secret value. YOU WILL NOT SEE THIS SECRET AGAIN AFTER THIS
  25. Now you have the Client ID, Tenant ID, and Secret to authenticate to Graph using PowerShell

Once the authentication is configured and you have your secret key, you can proceed to executing the script.
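
The script takes care of this authentication for you, but if you want to sanity-check the app registration first, a minimal sketch like the following (using the resource owner password flow, which matches the delegated permission and the admin credentials the script expects) should return an access token. The placeholder values are the ones collected in the steps above:

#Hypothetical stand-alone check; the script performs the equivalent internally
$tenantId = "<DirectoryID>"
$clientId = "<AppID>"
$clientSecret = "<ClientSecret>"
$body = @{
    grant_type    = "password"
    resource      = "https://graph.microsoft.com"
    client_id     = $clientId
    client_secret = $clientSecret
    username      = "<AdminUsername>"
    password      = "<AdminPass>"
}
$auth = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" -Body $body
$headers = @{ Authorization = "Bearer $($auth.access_token)" }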

How do I get the script

The script is published on Ash's GitHub and is called Populate_Teams_Data.ps1. Copy the content into Notepad or any script editor on your machine and save it with the .ps1 extension.

How to execute the script

So now let's go over the steps to execute the script:

  • Open PowerShell – it is recommended that you open it as Administrator, since the script will try to set the execution policy to RemoteSigned

[Screenshot: TeamsScript2]

  • Browse to the .ps1 file location and execute the following:
.\Populate_Teams_Data.ps1 -AdminUser "<AdminUsername>" -AdminPass "<AdminPass>" -License "<LicenseSkuID>" -tenantId "<DirectoryID>" -clientId "<AppID>" -ClientSecret "<ClientSecret>"

The placeholders above map to the following values:

    • Admin User – your Office 365 global admin
    • Admin Pass – the password for that admin
    • License – the license AccountSkuId that you want to apply to the newly created users (Note: connect to the MSOnline module and run the Get-MsolAccountSku cmdlet if you don't know the value – see the example after this list)
    • TenantId – the value you obtained in step 9 of the section above (Directory ID)
    • ClientId – the value you obtained in step 9 of the section above (Application ID)
    • Secret – the value you obtained in step 24 of the section above
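
If you don't know your AccountSkuId, finding it is as simple as this; the value looks something like "yourtenant:ENTERPRISEPACK":

Connect-MsolService
Get-MsolAccountSku | Select-Object AccountSkuId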

Script Output

The script describes the steps it is taking during its execution, such as:

  • Creating the users and the teams

[Screenshot: TeamScript4]

  • Adding users to Teams

[Screenshot: TeamScript5]

  • Creating channels per team

[Screenshot: TeamScript6]

  • Creating usable data in your teams

[Screenshot: TeamScript7]

Additional notes about the script

The following should be considered when executing the script:

  • This script was designed and created to be run against empty tenants. It's OK if you have users or are using other workloads, but do not run this in a production tenant, since the script was not designed for that.
  • The script can be executed multiple times, although it was created for a single execution. It will check whether Teams and Channels need to be created, but it will always try to create the users, unless they already exist. Keep that in mind if you choose to run the script multiple times to create more usable data.
  • The script only creates small files in the Teams. If you want to do a migration test with a large volume of files, you’ll have to upload them manually.
  • The script leverages the Graph API, which is the optimal way to create messages and upload files into Teams, but it's also a beta API, so you might occasionally see random timeouts (see the sketch after this list for the kind of call involved).
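
For reference, the kind of beta Graph call the script issues to publish a conversation looks roughly like this (a sketch: $teamId, $channelId and $headers are assumed from the authentication step above, and the beta endpoint shape may change):

$message = @{ body = @{ content = "Random conversation text" } } | ConvertTo-Json -Depth 3
Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/beta/teams/$teamId/channels/$channelId/messages" -Headers $headers -ContentType "application/json" -Body $message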

We welcome all feedback you might have. Enjoy!


Exchange room booking and recurring meetings were finally simplified

If you follow the Microsoft Exchange Team blog, you probably noticed this post from around 1 month ago, “Easier Room Booking in Outlook on the Web”.

I know it’s been a month, but I haven’t blogged my 2 cents around this, so here it goes.

Why this change

This was an old ask from the community, so well done to the Exchange Team (and in this case more specifically the Calendar Team) for making this happen.

Selecting a room

The initial focus is on the user experience as it relates to room filtering. You can use filters like room location (multiple locations are allowed), room availability and room features (audio, video, etc.).

Recurring meetings and room availability

This is one of the major changes implemented. Although Exchange has mechanisms to allow you to coordinate the availability of all meeting attendees, the availability of meeting rooms for the entire series was always a challenge.

The Exchange Team is addressing the above by having Exchange perform an availability query for all meeting dates, until it finds one unavailable, and letting you know for how many instances the room is available.

Multiple rooms

In my opinion this is the second major change. For geo-diverse teams, with attendees in multiple office locations, you can select "browse more rooms" and add a local room for each of the attendees' locations.

How does an Admin implement this

Basically, by leveraging the Set-Place cmdlet (only available in Exchange Online) to define the room characteristics.

Bottom line

I really like this new feature. If I had to point out some negatives, they would be that it's not supported for Exchange on-premises, that it launched as an Outlook on the web feature only (for now – it's on the roadmap to make it available in Outlook) and also, in my opinion, that the Exchange Team should look at allowing the organizer to select an additional room (or rooms) when the one selected does not cover all instances.

Finally, I just want to point out the -GeoCoordinates parameter of the Set-Place cmdlet. It's really cool: it allows you to enter the coordinates of the room and integrates with Bing Maps!
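
For illustration, here's what that could look like (a sketch; the room identity and values are hypothetical):

Set-Place -Identity "room-3A@contoso.com" -Capacity 12 -Building "HQ" -Floor 3 -AudioDeviceName "Polycom" -VideoDeviceName "Surface Hub" -IsWheelChairAccessible $true -GeoCoordinates "47.644125;-122.122411"

The -GeoCoordinates value is in "latitude;longitude" format.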

 

Apply file permissions in a SharePoint Online document library using PowerShell

Hi all, this is a follow-up post to the one I published yesterday, about applying permissions to folders in a SharePoint Online document library using PowerShell.

In this post we will look at how to apply those permissions to files, not folders. We will also take a different approach in the code. The code that I am sharing with you will apply permissions to all files within the top-level folders of the SharePoint library.

So what PowerShell module should you use?

Let me start by saying that there are multiple ways to programmatically apply those permissions to a SharePoint library. In this case I am using the SharePoint PnP PowerShell module.

In the link above, you will be able to learn a bit more about the SharePoint Patterns and Practices module, as well as follow the steps to install it. Be aware that the PnP commands use CSOM, so you might get throttled at some point if you execute too many.

Now let's look at the code in detail

I will try to break down the script so that you understand everything it does and can adapt it to your needs.

Configuration hard-coded values

It's always best not to hard-code values in your script, so that you don't have to edit it each time you want to run it for a different scenario, but I wanted to keep this one simple, so here it goes:

#This value is for your SharePoint Online Team site URL. In my case my site name is "Test1". Change yours accordingly
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
#This is your list name. If you run Get-PnPList you'll see that Documents is the Shared Documents library. You will need this for the cmdlet that sets the permissions
$ListName = "Documents"
#This is the user account you want to give permissions to
$UserAccount = "user1@domain.com"
#Role that you want to add (see the permissions section for more information)
$Role = "Contribute"

Connect to the SharePoint PnP Online PowerShell

#Connect to PnP Online. You will get prompted for credentials.
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Grab all Folders

#I created a small try/catch to exit the script if we can't grab the folders
Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

Create a loop to process each folder, grabbing all files and applying the permissions

#And finally the code for the loop to go folder by folder, grab the files and apply the permissions
Foreach ($Folder in $AllFolders){
    $FolderName = $Folder.Name
    $FolderRelativeURL = "/Shared Documents/" + $FolderName
    Try{
        $AllFiles = Get-PnPFolderItem -FolderSiteRelativeUrl $FolderRelativeURL -ItemType File -ErrorAction Stop
    }
    Catch{
        Write-Host "Failed to list the files for '$($FolderName)'" -ForegroundColor Red
        #Skip this folder if its files can't be listed
        Continue
    }
    if ($AllFiles.Count -ne 0){
        Foreach ($File in $AllFiles){
            try{
                Set-PnPListItemPermission -List $ListName -Identity $File.ListItemAllFields -User $UserAccount -AddRole $Role -ErrorAction Stop
                Write-Host "Folder $($FolderName): File $($File.Name) processed with success" -ForegroundColor Green
            }
            Catch{
                Write-Host "Folder $($FolderName): Failed to apply permissions to file $($File.Name). Error: $($_.Exception.Message)" -ForegroundColor Red
            }
        }
    }
    Else{
        Write-Host "'$($FolderName)' does not have any files" -ForegroundColor Yellow
    }
}

Now the entire script for you to copy

#Config Variables
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
$ListName = "Documents"
$UserAccount = "user1@yourtenant.onmicrosoft.com"
$Role = "Contribute"

#Connect to PnP Online
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

Foreach ($Folder in $AllFolders){
    $FolderName = $Folder.Name
    $FolderRelativeURL = "/Shared Documents/" + $FolderName
    Try{
        $AllFiles = Get-PnPFolderItem -FolderSiteRelativeUrl $FolderRelativeURL -ItemType File -ErrorAction Stop
    }
    Catch{
        Write-Host "Failed to list the files for '$($FolderName)'" -ForegroundColor Red
        #Skip this folder if its files can't be listed
        Continue
    }
    if ($AllFiles.Count -ne 0){
        Foreach ($File in $AllFiles){
            try{
                Set-PnPListItemPermission -List $ListName -Identity $File.ListItemAllFields -User $UserAccount -AddRole $Role -ErrorAction Stop
                Write-Host "Folder $($FolderName): File $($File.Name) processed with success" -ForegroundColor Green
            }
            Catch{
                Write-Host "Folder $($FolderName): Failed to apply permissions to file $($File.Name). Error: $($_.Exception.Message)" -ForegroundColor Red
            }
        }
    }
    Else{
        Write-Host "'$($FolderName)' does not have any files" -ForegroundColor Yellow
    }
}

What if I want to do a different folder

The code above applies permissions to the files in the top-level folders within the Shared Documents library of your Team site. If you want to target a different folder, edit the following in the code:

  • The FolderSiteRelativeUrl value in the first Get-PnPFolderItem call – add the folder structure here (i.e. "/Shared Documents/FolderA/SubFolderB")
  • The $FolderRelativeURL prefix inside the loop – add the same folder structure there, with a trailing slash (i.e. "/Shared Documents/FolderA/SubFolderB/")

How about the subfolders

This script does not process files inside subfolders, i.e. if you have a top-level folder "FolderA", the script will add permissions to all files inside that folder, but it won't add them to files in the subfolder "FolderA\SubFolderA". It's much more complex to create an iteration that analyses and processes the depth of the folder structure, and I wanted to keep this simple.

You can process subfolders separately by targeting them individually, following the steps in the section above.
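
Alternatively, if you do want recursion, a minimal hypothetical sketch along these lines (reusing the variables from the script above) could serve as a starting point; test it carefully before using it for real:

function Set-FilePermissionsRecursively {
    param([string]$FolderSiteRelativeUrl)
    #Apply the role to every file in this folder
    $Files = Get-PnPFolderItem -FolderSiteRelativeUrl $FolderSiteRelativeUrl -ItemType File
    Foreach ($File in $Files){
        Set-PnPListItemPermission -List $ListName -Identity $File.ListItemAllFields -User $UserAccount -AddRole $Role
    }
    #Then descend into each subfolder and do the same
    $SubFolders = Get-PnPFolderItem -FolderSiteRelativeUrl $FolderSiteRelativeUrl -ItemType Folder
    Foreach ($SubFolder in $SubFolders){
        Set-FilePermissionsRecursively ($FolderSiteRelativeUrl + "/" + $SubFolder.Name)
    }
}

#Start at the library root (or at any folder)
Set-FilePermissionsRecursively "/Shared Documents"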

How about permissions

The example above applies the "Contribute" role to the files, for the user defined. If you want to know more details about which role to apply, please go to this excellent article to understand permission levels in SharePoint.

Final notes

If you haven't read my previous post on SharePoint permissions, do it, to learn more about folder-level permissions and how to apply them via PowerShell. The two posts complement each other really well.

There are ways of making this script more complex and able to do more things (like processing subfolders, processing multiple users, etc.), but just like the other post, the code shared in this one gives you a good baseline and good basic features.

I hope it’s useful!

 

Apply folder permissions in a SharePoint Online document library using PowerShell

Being a consultant with a primarily messaging background, it’s always interesting for me to blog about SharePoint and be out of my comfort zone.

What I am going to show you today is how to apply permissions to folders in a SharePoint Online document library, using PowerShell.

So what PowerShell module should you use?

Let me start by saying that there are multiple ways to programmatically apply those permissions to a SharePoint library. In this case I am using the SharePoint PnP PowerShell module.

In the link above, you will be able to learn a bit more about the SharePoint Patterns and Practices module, as well as follow the steps to install it. Be aware that the PnP commands use CSOM, so you might get throttled at some point if you execute too many.

Now let's look at the code in detail

I will try to break down the script so that you understand everything it does and can adapt it to your needs.

Configuration hard-coded values

It's always best not to hard-code values in your script, so that you don't have to edit it each time you want to run it for a different scenario, but I wanted to keep this one simple, so here it goes:

#This value is for your SharePoint Online Team site URL. In my case my site name is "Test1". Change yours accordingly
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
#This is your list name. If you run Get-PnPList you'll see that Documents is the Shared Documents library. You will need this for the cmdlet that sets the permissions
$ListName = "Documents"
#This is the user account you want to give permissions to
$UserAccount = "user1@domain.com"
#Role that you want to add (see permissions section for more information)
$Role = "Contribute"
#Relative URL of the parent folder for all folders you are applying permissions to (see the different folder section below for more information on how to change this to target another folder)
$FolderRelativeURL = "/sites/Test1/Shared Documents/"

Connect to the SharePoint PnP Online PowerShell

#Connect to PnP Online. You will get prompted for credentials.
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Grab all Folders to apply permissions to

#I created a small try/catch to exit the script if we can't grab the folders
Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

 

Apply the permissions

#And finally the code for the loop to go folder by folder and apply the permissions
Foreach ($Folder in $AllFolders){
    $RelativeURL = $FolderRelativeURL + $Folder.Name
    Write-Host $RelativeURL
    $FolderItem = Get-PnPFolder -Url $RelativeURL
    Set-PnPListItemPermission -List $ListName -Identity $FolderItem.ListItemAllFields -User $UserAccount -AddRole $Role
}

Now the entire script for you to copy

#Config Variables
$SiteURL = "https://yourtenant.sharepoint.com/sites/Test1"
$ListName = "Documents"
$FolderRelativeURL = "/sites/Test1/Shared Documents/"
$UserAccount = "user1@yourtenant.onmicrosoft.com"
$Role = "Contribute"

#Connect to PnP Online
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)

Try{
    $AllFolders = Get-PnPFolderItem -FolderSiteRelativeUrl "/Shared Documents" -ItemType Folder -ErrorAction Stop
}
Catch{
    Write-Host "Failed to list the folders" -ForegroundColor Red
    Exit
}

Foreach ($Folder in $AllFolders){
    $RelativeURL = $FolderRelativeURL + $Folder.Name
    Write-Host $RelativeURL
    $FolderItem = Get-PnPFolder -Url $RelativeURL
    Set-PnPListItemPermission -List $ListName -Identity $FolderItem.ListItemAllFields -User $UserAccount -AddRole $Role
}

What if I want to do a different folder

The code above applies permissions to the top-level folders within the Shared Documents library of your Team site. If you want to target a different folder, edit the following in the code:

  • $FolderRelativeURL – add the folder structure here (i.e. "/sites/Test1/Shared Documents/FolderA/SubFolderB/")
  • The FolderSiteRelativeUrl parameter of Get-PnPFolderItem – add the folder structure here as well (i.e. "/Shared Documents/FolderA/SubFolderB")

How about permissions

The example above applies the "Contribute" role to the folders, for the user defined. If you want to know more details about which role to apply, please go to this excellent article to understand permission levels in SharePoint.

Final notes

I hope this post is helpful. As I stated initially, there are easy ways to make the script more complex but easier to manage, such as removing the hard-coded values or, for example, creating a loop to add permissions to multiple users. Using the code above as a reference will for sure save you some time or give you that quick win you need.
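
As an example of that last idea, a hypothetical multi-user variant could wrap the permission call in an inner loop (the $Users list is an assumption; the rest reuses the variables from the script above):

$Users = @("user1@yourtenant.onmicrosoft.com", "user2@yourtenant.onmicrosoft.com")
Foreach ($Folder in $AllFolders){
    $RelativeURL = $FolderRelativeURL + $Folder.Name
    $FolderItem = Get-PnPFolder -Url $RelativeURL
    Foreach ($User in $Users){
        Set-PnPListItemPermission -List $ListName -Identity $FolderItem.ListItemAllFields -User $User -AddRole $Role
    }
}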

Microsoft Teams PowerShell – A simple use case to get you started

Not long ago I blogged about the new Microsoft Teams PowerShell module. Today I want to give you a quick example of how you can leverage it to automate and make your work more efficient: I'll show you how to list all Team channels in your organization.

Connect to your Microsoft Teams Organization using PowerShell

The first thing you need to do is connect your Teams PowerShell module and authenticate to your Office 365 tenant.

  • If you don't have the Microsoft Teams PowerShell module installed, click on the link in this article and install it
  • Once you have it installed, the Connect-MicrosoftTeams cmdlet will be available. It's as easy as running it and using the authentication prompt to pass the credentials, but you can also pass basic credentials if you want to, using the -Credential parameter

[Screenshot: TeamsPS01]

List all Teams in your Microsoft Teams organization

To list all Teams in your organization, you can use the Get-Team cmdlet. By default, the cmdlet outputs the GroupId, DisplayName, Visibility, Archived, MailNickName and Description.

[Screenshot: TeamsPS02]

You can format your output to include any relevant Team attribute. Do a “Get-Team |fl” to list them all.

List all Team Channels in your organization

Now, finally, let's execute the use case of this post. To list all Team channels in your organization, you can leverage the Get-TeamChannel cmdlet.

This cmdlet has a mandatory parameter, -GroupId, which is basically the ID of each Team. That said, you have two options:

Option 1: you run “Get-TeamChannel -groupid <TeamGroupID>”

[Screenshot: TeamsPS03]

You can use the Get-Team cmdlet to get the GroupId value for each team.

Option 2: you grab all Teams into an array and process each Team to list their channels, using the code snippet below.

$AllTeams = Get-Team

Foreach ($team in $AllTeams) {Get-TeamChannel -groupid $team.groupid |Ft $team.DisplayName, DisplayName}

[Screenshot: TeamsPS04]

What I did above was change the output of the command, to list in a readable way which Team the channels belong to. There are other, more organized ways to format the output, both to the console and to an output file. Nevertheless, this can easily guide you in that direction, and if you need any help let me know.
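
For example, one more organized way (a sketch) is to export the Team/Channel pairs to a CSV file:

$AllTeams = Get-Team
$AllTeams | ForEach-Object {
    $team = $_
    Get-TeamChannel -GroupId $team.GroupId |
        Select-Object @{n="Team";e={$team.DisplayName}}, @{n="Channel";e={$_.DisplayName}}
} | Export-Csv -Path .\TeamChannels.csv -NoTypeInformation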

And that’s it. I can and will blog much more around Teams PowerShell. If you haven’t used it yet, you should.

Happy coding!

 

How to use MigrationWiz to migrate Public Folder calendars into mailbox calendars

It’s very common to see, in Public Folder migrations, customers that want to migrate and transform that data. But how exactly is that done?

If you're familiar with MigrationWiz, you'll know that to migrate data all you have to do is follow some simple steps, like configuring access to the source and the destination, creating the migration project and defining, within the project, what the source and the destination are.

The steps above are as simple as they sound; however, to transform data you'll need to do some advanced configurations. MigrationWiz gives you flexibility that probably no other tool does, by allowing you to filter or map (I'll elaborate in a second), which are the foundation features to transform data, but to do it properly you need to configure your project accordingly.

So how exactly should you configure a project, to migrate a Public Folder calendar into a mailbox calendar?

I won't give you details about the basic steps to create a project – you can look for the migration guides in the BitTitan Help Center – but basically you need to create a normal Public Folder project and make some changes to it.

The first and most basic change you need to make is to set a mailbox as the destination.

[Screenshot: PFShare01]

Within the advanced options of your MigrationWiz project, go to the Destination settings and select “Migrate to Shared Mailbox”.

Now that you have your destination defined, add the Calendar Public Folder that you want to migrate to your MigrationWiz project, along with the corresponding destination mailbox address.

[Screenshot: PFShare02]

So now that you have your 1:1 matching done in the project, can you migrate? The answer is no, but let's see what happens if you do.

[Screenshot: PFShare03]

What you are seeing above is the PowerShell output that lists all folders of the destination mailbox, after the migration. So what happened?

Basically, instead of putting all the data into the default calendar folder at the destination, we created 2 new folders of type IPF.Appointment (calendar folders) in that mailbox.

What this means for the end user is that they will see 2 new calendars: "Folder1", which will be empty since it had no calendar data at the source, and "MyCalendarFolder1", which will have all the data. Additionally, the default Calendar folder won't have any migrated data.

The above is rarely the intended goal, so just migrating is usually not the solution. You'll need some additional configuration. Let's get to it.

PFShare04

Edit the line item you added previously and in the Support options add a Folder mapping.

The regex in this folder mapping basically moves all source data to the destination folder called "Calendar". Since the mapping is in place and has a defined destination, we no longer create any folders at the destination. It's also the mapping that makes all the data get copied into that destination folder.
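
For reference, a mapping of that kind is a Support option along these lines (a sketch; the exact regex depends on your source folder names):

FolderMapping="^.*->Calendar"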

So, with the configuration above, all data will end up in what would eventually be the folder you want. If you adjust the filter you can put it in whatever folder you want, keeping in mind that if the folder doesn't exist we will create it.

Hope that helps and happy migrations!!

 

Manage your Exchange Online from the Azure Cloud Shell

The Microsoft Exchange product group recently announced that you can now manage your Exchange Online via the Azure Cloud Shell.

If you're not familiar with the Azure Cloud Shell, it's basically a browser-based shell experience in the cloud, fully maintained by Microsoft, that you can leverage to manage your Azure and now also your Exchange Online subscriptions.

It’s a step towards running automation from anywhere, with minimal to no effort, which to me is one of the next big things coming to us as IT Consultants.

I wrote a blog article recently on how to use a multi-factor authentication account to connect to Exchange Online, and what Microsoft just did was provide, by default, the Exchange Online Remote PowerShell module in the Azure Cloud Shell. Smart idea, and I bet an awesome quick win for them.

So are there any gotchas?

The quick answer is that there are no major ones, but I still want to point out a few things. The first is that you need an Azure subscription, otherwise you'll see the below.

[Screenshot: ExShell01]

Although many organizations embracing the Microsoft cloud are already using both Office 365 and Azure, some are not. Some just use Office 365, and it's good to point out that if you want to leverage this new feature, it's time to create that Azure subscription. The only cost of using the Azure Cloud Shell is the Azure Storage cost (storage is mandatory), which is almost insignificant.

Another smaller thing worth pointing out is MFA (Multi Factor Authentication), as Microsoft expects you to have MFA enabled across all accounts. I guess that's directly related to the fact that the module you're leveraging is for logins with MFA-enabled admin accounts.

Finally, Microsoft also points out that sessions inactive for more than 20 minutes will be reclaimed. Keep that in mind when you build your automation, or just when you have your session open for daily work. This is expected behavior for this type of cloud and container-based shell.

What else should you know?

I am not going to transcribe the article I pointed you to at the top of this post, but I just want to highlight the main takeaways:

  • You can access the Azure Cloud Shell in several different ways, but the Azure portal (portal.azure.com) and the dedicated Shell portal (shell.azure.com) are the two main ones.
  • All Exchange Online cmdlets should be available.
  • RBAC fidelity is intact.
  • This does not mean there are plans to decommission the current Exchange PowerShell module (yet?) 🙂
  • You'll use the Connect-EXOPSSession cmdlet to start the session.
  • Microsoft provides SSO (Single Sign-On), so you don't have to log in twice (i.e. Azure portal and Exchange Online). Yay!!!
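
Putting it together, a first session from the Cloud Shell would look something like this (the UPN is hypothetical):

#The Exchange Online module is pre-loaded in the Azure Cloud Shell
Connect-EXOPSSession -UserPrincipalName admin@contoso.onmicrosoft.com
#Then run any Exchange Online cmdlet you're authorized to use, for example:
Get-Mailbox -ResultSize 10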

And that’s it, enjoy!!!

 

Use Azure automation to start and stop Virtual Machines

If you have Virtual Machines in your Azure subscription, that don’t require 24×7 uptime, this is the blog post for you.

My blog post, where I hope to provide a detailed step by step, is based on the Microsoft official article Start/Stop VMs during off-hours solution in Azure Automation. I highly recommend that you read that article, since this post is more focused on the execution and not necessarily on a detailed explanation of every component.

So what's the goal here? To be able to start and stop virtual machines in your Azure subscription daily, without human interaction. Cool, right?

Before I continue: in case all you need is to stop virtual machines, you can leverage a much simpler process by configuring the "Auto-shutdown" option, under the "Operations" section of a virtual machine's settings.

[Screenshot: autoshut1]

But if you need to do both, shutdown and boot up machines, then continue reading.

What are the prerequisites to configure this solution?

To be able to configure the solution to start and stop virtual machines, you need the following:

  • An Azure Automation account (created as a "Run As account")
  • A Log Analytics workspace

You can create those resources when you’re enabling the solution, or separately by adding new resources in the “All Resources” tab.

How do I configure the solution?

You have two easy ways of configuring the solution.

Add a new resource in the “All resources” tab

This is the ideal option, especially if you haven't created the Automation account yet.

[Screenshot: StartStop00]

On the left-hand side menu, browse to "All Resources", click New and type "Start/Stop". The solution will pop up for selection. Click "Create".

Via your Azure Automation account

If you already have an automation account created, use it to access the Start/Stop VM solution.

[Screenshot: ss07]

Browse to the automation account and under “Related Resources” click “Start/Stop VM”. Then click “Learn more about and enable the solution”.

You will end up in the same creation page as shown in the option above.

Configure the solution

The step-by-step configuration of the solution is actually very simple. As noted at the beginning of this post, all you need is to select an automation account and a Log Analytics workspace, and configure the solution details.

[Screenshot: startStop01]

First, you start by selecting an existing Log Analytics workspace or configuring a new one. If you create a new one, all you have to do is give it a name, associate it with a resource group (new or existing), select a location and keep the "Per GB" pricing tier.

[Screenshot: StartStop02]

In the second step, you can select an existing automation account or create a new one. If you create a new one, just select the name. The resource group and corresponding location will be locked to the ones where the solution is being deployed. The Automation Account will also be created as a "Run As Account".

If you're creating an automation account separately and you can't see it for selection here, it might be for several reasons, such as the account not being created as "Run As" (mandatory), or being in a resource group or location that makes it unavailable.

[Screenshot: StartStop03]

Finally, you can configure the most important part: the solution parameters. These include the following:

  • Target resource group – enter the name of the resource group(s) that you want to target. Names are case-sensitive. If you want to target all groups, enter "*". If you want to target multiple groups, use a comma as the separator between group names.
  • VM Exclude List – use this field to exclude any VMs in your resource group that you don't want the solution to affect. It's important to understand that this solution will, by default, target the entire resource group, unless you exclude VMs here.
  • Daily Start and Stop Time – select the times at which you want your VMs to be booted up and shut down, every day.
  • Email functionality – if you want to receive an email notification each time an action is taken on a VM (i.e. shutdown), select yes and enter the email address you want to receive the email on (multiple addresses separated by commas).

How do I check if it worked?

Browse to your Automation Account and go to "Process Automation > Jobs".

[Screenshot: SS10]

Click on the latest job to see more details.

[Screenshot: ss11]

You can browse between tabs to check the details of the job execution. Pay special attention to the "All Logs" tab, where you can see the actions executed, the number of errors and the number of warnings.

The bottom line

Personally, I love this solution. It’s easy to deploy and saves me a ton of my Azure monthly credit.

You can go beyond what I showed you in this post and manually edit the job details, to do things like create an end date for the job. And although this turnkey Azure solution is not extremely flexible (i.e. it targets entire resource groups and it's tricky to specify exceptions in groups with a large number of VMs; it's designed for daily boot-up and shut-down actions, etc.), it's very useful. 5 stars!!

Use it and give your own opinion. As always, any questions let me know.

Azure automation error “Client assertion contains invalid signature” – Time to renew your Automation account certificate

I was just recently playing with some Azure runbooks and noticed that one of my automation accounts, which I had selected to execute some of that automation, wasn't working properly.

I had a Virtual Machine scheduled to boot at a specific time and that wasn’t happening. So this was what I did to troubleshoot it.

[Screenshot: autoerror01]

In the Azure Portal, I went to "All Resources", filtered by "Automation Accounts" and clicked on the Automation Account that was supposed to be running that runbook.

[Screenshot: autoerror02]

I was able to immediately see that something wasn't OK. As you can see above, the automation account shows that the certificates for both the "Run As Account" and the "Classic Run As Account" are expired. Nevertheless, the job statistics tell me that 4 jobs ran, all with success. Odd, right? So let's investigate further.

[Screenshot: autoerror03]

In the Automation Account menu I went to Process Automation > Jobs, to try to understand which jobs were executed. As you can see, the 4 jobs are there, but were they executed with success?

[Screenshot: autoerror04]

I clicked on one of the jobs. The status was "Completed", but browsing to the "Errors" tab you could easily see that it failed. "Client assertion contains an invalid signature" was the error.

So let's jump to the quick fix: renew the automation account certificates.

[Screenshot: autoerror05]

Back on the automation account overview page, I clicked the link to resolve the issue and renewed both certificates.

And that’s it, problem solved.

Lessons learned: make sure your automation account is functional and don’t always trust the job statistics shown in the portal.

Use a Multi Factor Authentication enabled account to connect to Exchange Online PowerShell

Security is a big theme for any online application, and Multi Factor Authentication is used more and more every day. Those security standards easily extend to PowerShell or any other admin session used to execute tasks in your tenant. It will be more and more common to see organizations that no longer allow their IT administrators to connect to Exchange Online using basic auth.

If you’re a heavy PowerShell user like me, this is for you. Microsoft has an excellent article on how to leverage an MFA account to connect to Exchange Online PowerShell.

You should read the article, as these things tend to get updated and you'll be sure to have the latest steps, but in essence what you need to do is:

  • Make sure you have all the prerequisites (i.e. .NET Framework), especially on older versions of Windows
  • Install the Exchange Online Remote PowerShell module (from the Exchange Online EAC)
  • Make sure that WinRM allows basic authentication (see the check after this list)
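
To verify that last point, you can check the WinRM client configuration from a Command Prompt; it looks something like this:

winrm get winrm/config/client/auth

If "Basic" shows as false, enable it from an elevated prompt:

winrm set winrm/config/client/auth @{Basic="true"}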

Finally you can run the following command to connect to Office 365 commercial:

Connect-EXOPSSession -UserPrincipalName chris@contoso.com

And if you are connecting to a different instance, such as GCC High or Office 365 Germany, you need to specify the ConnectionUri and AzureADAuthorizationEndPointUri parameters (see the official article for the parameter configurations).

Connect-EXOPSSession -UserPrincipalName <UPN> [-ConnectionUri <ConnectionUri> -AzureADAuthorizationEndPointUri <AzureADUri>]

Here's how the PowerShell session looks after you install the module.

[Screenshot: MFAPS01]

And the authentication process.

[Screenshot: MFAPS02]

And that’s it. Happy scripting!!!