When it comes to setting up backups in Azure, using Infrastructure as Code (IaC) with Bicep is a great way to keep things tidy and repeatable. However, when you’re dealing with Azure Backup and Storage Accounts, things can get a bit tricky—especially if you want to automate the process of backing up all containers within a storage account.
The Challenge: No Wildcard Backup Policy
If you’ve tried to set up a Backup Vault in Azure to automatically back up all containers in a storage account, you’ve likely run into a wall. Azure Backup requires you to select specific containers for backup. There’s no “backup everything” checkbox, which is a pain if your containers are dynamic. Manually updating backup policies every time a new container is added is not exactly the definition of automation.
The Solution: Bicep and Deployment Scripts to the Rescue!
To get around this limitation, you can use a combination of Bicep and a deployment script. The idea is simple:
- Use a Bicep Deployment Script to query the storage account for all existing containers.
- Pass those container names into your backup policy.
- Automate the process so your backup policy is always up to date.
Step-by-Step
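Before we start: the snippets below use literal placeholders such as '<storageAccountName>', but the deployment script further down interpolates a ${storageAccountName} value, so the template is assumed to declare a parameter along these lines (a minimal sketch, not shown in the original):

// Assumed parameter: the name of the storage account we create and back up,
// referenced later by the deployment script
param storageAccountName string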
- Create a storage account
This will hold our containers. We expect some other services to automatically create new containers.
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = {
  name: '<storageAccountName>'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    publicNetworkAccess: 'Enabled'
    minimumTlsVersion: 'TLS1_2'
    allowBlobPublicAccess: false
    allowSharedKeyAccess: true
    largeFileSharesState: 'Disabled'
    networkAcls: {
      resourceAccessRules: []
      bypass: 'AzureServices'
      defaultAction: 'Allow'
    }
    supportsHttpsTrafficOnly: true
    encryption: {
      identity: {}
      services: {
        blob: {
          keyType: 'Account'
          enabled: true
        }
      }
      keySource: 'Microsoft.Storage'
    }
    accessTier: 'Hot'
  }
}
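In this walkthrough other services are expected to create the containers, but if you want one in place to test the backup flow end to end, a child-resource sketch like the following should do (the container name is purely illustrative):

resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-05-01' = {
  parent: storageAccount
  name: 'default'
}

// Illustrative container so the deployment script has something to find
resource demoContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-05-01' = {
  parent: blobService
  name: 'demo-container'
}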
- Create a backup vault
This vault will be responsible for holding our backup policies, executing backup jobs, and storing the backed-up data.
resource storageBackupVault 'Microsoft.DataProtection/BackupVaults@2024-04-01' = {
  name: '<backupVaultName>'
  location: resourceGroup().location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    storageSettings: [
      {
        datastoreType: 'VaultStore'
        type: 'LocallyRedundant'
      }
    ]
    securitySettings: {
      softDeleteSettings: {
        state: 'On'
        retentionDurationInDays: 14
      }
      immutabilitySettings: {
        state: 'Unlocked'
      }
    }
    featureSettings: {
      crossSubscriptionRestoreSettings: {
        state: 'Disabled'
      }
    }
    replicatedRegions: []
  }
}
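One prerequisite the snippets above do not show: before a blob backup instance can be validated, the vault's system-assigned identity needs permissions on the storage account, typically via the built-in Storage Account Backup Contributor role. A hedged sketch of that assignment, with the built-in role definition ID left as a parameter for you to look up:

// Assumed parameter: the role definition GUID of the built-in
// 'Storage Account Backup Contributor' role (see the Azure built-in roles docs)
param storageAccountBackupContributorRoleId string

resource vaultBackupRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storageBackupVault.id, storageAccount.id, storageAccountBackupContributorRoleId)
  scope: storageAccount
  properties: {
    principalType: 'ServicePrincipal'
    principalId: storageBackupVault.identity.principalId
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', storageAccountBackupContributorRoleId)
  }
}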
- Create a backup policy
The policy determines the parameters of our backup jobs: retention periods, backup intervals, and other properties. Multiple backup instances can be associated with this policy.
resource storageBackupPolicy 'Microsoft.DataProtection/BackupVaults/backupPolicies@2024-04-01' = {
  parent: storageBackupVault
  name: '<BackupPolicyName>'
  properties: {
    policyRules: [
      {
        lifecycles: [
          {
            deleteAfter: {
              objectType: 'AbsoluteDeleteOption'
              duration: 'P30D'
            }
            targetDataStoreCopySettings: []
            sourceDataStore: {
              dataStoreType: 'OperationalStore'
              objectType: 'DataStoreInfoBase'
            }
          }
        ]
        isDefault: true
        name: 'Default'
        objectType: 'AzureRetentionRule'
      }
      {
        lifecycles: [
          {
            deleteAfter: {
              objectType: 'AbsoluteDeleteOption'
              duration: 'P7D'
            }
            targetDataStoreCopySettings: []
            sourceDataStore: {
              dataStoreType: 'VaultStore'
              objectType: 'DataStoreInfoBase'
            }
          }
        ]
        isDefault: true
        name: 'Default'
        objectType: 'AzureRetentionRule'
      }
      {
        backupParameters: {
          backupType: 'Discrete'
          objectType: 'AzureBackupParams'
        }
        trigger: {
          schedule: {
            repeatingTimeIntervals: ['R/2024-03-03T03:00:00+01:00/P1W']
            timeZone: 'W. Europe Standard Time'
          }
          taggingCriteria: [
            {
              tagInfo: {
                tagName: 'Default'
              }
              taggingPriority: 99
              isDefault: true
            }
          ]
          objectType: 'ScheduleBasedTriggerContext'
        }
        dataStore: {
          dataStoreType: 'VaultStore'
          objectType: 'DataStoreInfoBase'
        }
        name: 'BackupWeekly'
        objectType: 'AzureBackupRule'
      }
    ]
    datasourceTypes: ['Microsoft.Storage/storageAccounts/blobServices']
    objectType: 'BackupPolicy'
  }
}
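The repeatingTimeIntervals value uses the ISO 8601 repeating-interval format: R (repeat), a start timestamp, and a period. The weekly schedule above starts on 3 March 2024 at 03:00 (+01:00) and repeats every week (P1W). If you would rather back up daily, a variant like the following should work (date and time are illustrative):

schedule: {
  // Illustrative: repeat every day (P1D) instead of every week (P1W)
  repeatingTimeIntervals: ['R/2024-03-03T03:00:00+01:00/P1D']
  timeZone: 'W. Europe Standard Time'
}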
- Create a deployment script that lists all the containers in the storage account
The deployment script runs during the deployment and dynamically retrieves all container names. These are then injected into the backup policy configuration, giving you a hands-off, automated solution.
// First we create a custom role to limit the permissions of the script.
// This step is optional; you can use a built-in RBAC role instead.
resource customRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  name: guid('ListStorageAccountsRole')
  properties: {
    roleName: 'Storage Container Observer'
    description: 'Can list and read the names of storage account containers but cannot access their contents.'
    permissions: [
      {
        actions: [
          'Microsoft.Storage/storageAccounts/blobServices/containers/read'
          'Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action'
        ]
      }
    ]
    assignableScopes: [
      resourceGroup().id
    ]
  }
}
// Give the script a user-assigned managed identity we can assign the role to
resource storageAccountDeploymentScriptIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-07-31-preview' = {
  name: 'script-identity'
  location: resourceGroup().location
}

// Create a role assignment for the script with the custom role
resource storageAccountDeploymentScriptRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(customRole.id, storageAccountDeploymentScriptIdentity.id, resourceGroup().id)
  scope: storageAccount
  properties: {
    principalType: 'ServicePrincipal'
    principalId: storageAccountDeploymentScriptIdentity.properties.principalId
    roleDefinitionId: customRole.id
  }
}
// Here we create the inline deployment script using the Azure CLI
resource storageAccountDeploymentScript 'Microsoft.Resources/deploymentScripts@2023-08-01' = {
  name: '<DeploymentScriptName>'
  location: resourceGroup().location
  kind: 'AzureCLI'
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${storageAccountDeploymentScriptIdentity.id}': {}
    }
  }
  properties: {
    azCliVersion: '2.59.0'
    // The script runs the Azure CLI command and writes the output to the built-in output path
    scriptContent: 'az storage container list --auth-mode login --account-name ${storageAccountName} --query "{text: [].name}" > $AZ_SCRIPTS_OUTPUT_PATH'
    cleanupPreference: 'OnSuccess'
    retentionInterval: 'PT1H'
  }
}
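Because the az query projects the container names into an object of the shape { "text": [...] } and writes it to $AZ_SCRIPTS_OUTPUT_PATH, the array surfaces on the resource as properties.outputs.text. If you want to inspect what the script found, you can expose it as a deployment output (the output name here is just an example):

// Expose the discovered container names as a deployment output for easy inspection
output discoveredContainers array = storageAccountDeploymentScript.properties.outputs.text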
- Lastly, create the backup instance
The backup instance applies the policy we defined earlier to a specific data source: in this case, the storage account we created, along with any containers that exist at deployment time. We pass the output of the deployment script as a parameter to the backup instance, which is how we enable 'all' known containers for the instance. When the deployment is run again, the list of covered containers is extended with any new containers, and they will be picked up in the next backup job.
resource storageBackupInstance 'Microsoft.DataProtection/backupVaults/backupInstances@2024-04-01' = {
  parent: storageBackupVault
  name: '<BackupInstanceName>'
  properties: {
    friendlyName: '<BackupInstanceSimpleName>'
    dataSourceInfo: {
      resourceID: storageAccount.id
      resourceUri: storageAccount.id
      datasourceType: 'Microsoft.Storage/storageAccounts/blobServices'
      resourceName: storageAccount.name
      resourceType: 'Microsoft.Storage/storageAccounts'
      resourceLocation: resourceGroup().location
      objectType: 'Datasource'
    }
    policyInfo: {
      policyId: storageBackupPolicy.id
      policyParameters: {
        backupDatasourceParametersList: [
          {
            objectType: 'BlobBackupDatasourceParameters'
            containersList: storageAccountDeploymentScript.properties.outputs.text
          }
        ]
      }
    }
    objectType: 'BackupInstance'
  }
}
Warning! This will not continuously update the backup instance
This setup adds the current list of containers to your backup instance. It will pick up containers created later, but only when you re-run the Bicep deployment. To continuously update the instance in real time, we will need a separate system (coming soon 😉).
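Re-running the deployment is all it takes to refresh the list; for example, with the Azure CLI (assuming the template is saved as main.bicep):

az deployment group create \
  --resource-group <resourceGroupName> \
  --template-file main.bicep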
Why Bother with This Setup?
- Hands-free updates: No more manually updating your backup policy when containers change.
- IaC purity: Keep all your infrastructure in Bicep without needing external automation tools.
- Scalability: Great for environments where containers are created and destroyed frequently.
Final Thoughts
Setting up an Azure Backup policy for a storage account using Bicep can feel like a puzzle, but with deployment scripts, you can automate the entire process. This solution keeps your infrastructure dynamic and your backup strategy robust—exactly what you need in a modern cloud environment.
Have you faced similar challenges with Azure Backup? Drop a comment below and share your experience!