Automated Destruction: Azure DevOps Pipeline for Terraform Cleanup
Nov 08, 2023

In a previous article, I shared how to create a pipeline for your Terraform code. That pipeline is used to create new infrastructure.
Today, I want to talk about a pipeline to destroy that infrastructure. Such a pipeline is useful for saving costs in a non-production environment. For instance, I use it to destroy a staging environment.
Let’s see how to do that.
If you don’t have an Azure DevOps account yet, you can start for free here: https://azure.microsoft.com/en-ca/products/devops.
You might want to read the previous article before this one. In it, I explained how to create the service connection to Azure, how to update its permissions so it can create and delete locks, and more.
Why a Separate Pipeline for Destruction?
While it may seem convenient to integrate resource creation and destruction in a single pipeline, doing so can introduce risks and complications. By keeping these two critical processes separate, we not only maintain clarity and transparency in our DevOps workflow but also gain several crucial advantages.
A dedicated destruction pipeline allows for streamlined automation of resource cleanup, reducing the chances of unintentional deletions and ensuring that resources are disposed of in a controlled and efficient manner.
This approach becomes particularly valuable in complex environments and large-scale applications. Ultimately, the use of a separate pipeline for destruction is a key component in achieving better resource management, minimizing potential errors, and enhancing overall operational efficiency in the Azure ecosystem.
Pipeline
This pipeline is really similar to the previous one. The only difference is that we add the -destroy option to the plan and apply commands.
We also deactivate the trigger to prevent this pipeline from running automatically after each commit. This is a pipeline that we run manually whenever we need to destroy the environment.
The trigger
trigger: none
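If your goal is purely cost savings, the manual run could also be replaced with a scheduled one so the environment is torn down automatically, for example every evening. Here is a minimal sketch; the cron expression and branch name are assumptions to adapt to your setup:

```yaml
# Hypothetical alternative to a manual run: destroy the environment
# every weekday at 19:00 UTC. Branch name and schedule are assumptions.
trigger: none

schedules:
- cron: "0 19 * * 1-5"
  displayName: Nightly staging teardown
  branches:
    include:
    - main
  always: true # run even when there are no new commits
```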
The variables
variables:
  artifactName: terraform
  serviceConnection: AzureSubscription
  buildAgent: ubuntu-latest
  terraformDir: $(System.DefaultWorkingDirectory)/2023-11-terraform-pipeline-destroy/terraform
  target: $(build.artifactstagingdirectory)
  publishedArtifactsDirectory: "$(Pipeline.Workspace)/$(artifactName)"
  planName: tfplan
  resourceGroupToLock: rg-01
  resourceLockName: terraform-lock
The Plan stage
Notice the -destroy option in the terraform plan commandOptions parameter.
stages:
- stage: Plan
  jobs:
  - job: Plan
    pool:
      vmImage: $(buildAgent)
    steps:
    - task: TerraformCLI@1
      displayName: terraform init
      inputs:
        command: "init"
        backendType: "azurerm"
        workingDirectory: "$(terraformDir)"
        backendServiceArm: "$(serviceConnection)"
    - task: TerraformCLI@1
      displayName: terraform validate
      inputs:
        command: "validate"
        backendType: "azurerm"
        workingDirectory: "$(terraformDir)"
        environmentServiceName: "$(serviceConnection)"
    - task: AzureCLI@2
      displayName: Delete lock
      inputs:
        azureSubscription: $(serviceConnection)
        scriptType: "pscore"
        scriptLocation: "inlineScript"
        inlineScript: |
          az lock delete --name $(resourceLockName) --resource-group $(resourceGroupToLock)
    - task: TerraformCLI@1
      displayName: terraform plan
      inputs:
        command: "plan"
        backendType: "azurerm"
        workingDirectory: "$(terraformDir)"
        commandOptions: "-destroy -input=false -out=$(planName)"
        environmentServiceName: "$(serviceConnection)"
        publishPlanResults: "Terraform plan"
    - task: AzureCLI@2
      displayName: Create lock
      inputs:
        azureSubscription: $(serviceConnection)
        scriptType: "pscore"
        scriptLocation: "inlineScript"
        inlineScript: |
          if($(az group exists --name $(resourceGroupToLock)) -eq $true)
          {
            az lock create --name $(resourceLockName) --resource-group $(resourceGroupToLock) --lock-type ReadOnly --notes "This resource is managed by Terraform."
          }
    - task: CopyFiles@2
      displayName: Copy files
      inputs:
        SourceFolder: "$(terraformDir)"
        Contents: |
          .terraform.lock.hcl
          **/*.tf
          **/*.tfvars
          **/*tfplan*
        TargetFolder: "$(target)"
    - publish: "$(target)"
      artifact: "$(artifactName)"
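Before approving the Destroy stage, reviewers may want a human-readable summary of the destroy plan in the logs. Assuming your version of the TerraformCLI task supports the show command, an extra step like this sketch could render the saved plan after the plan step:

```yaml
# Hypothetical extra step after terraform plan: print a readable
# summary of the saved destroy plan in the pipeline logs.
- task: TerraformCLI@1
  displayName: terraform show
  inputs:
    command: "show"
    commandOptions: "$(planName)"
    backendType: "azurerm"
    workingDirectory: "$(terraformDir)"
    environmentServiceName: "$(serviceConnection)"
```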
The Destroy stage
As in the Plan stage, notice the -destroy option in the commandOptions parameter. The task runs the apply command against the saved destroy plan.
- stage: Destroy
  displayName: Destroy
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  jobs:
  - deployment: Destroy
    environment: Azure
    pool:
      vmImage: $(buildAgent)
    strategy:
      runOnce:
        deploy:
          steps:
          - download: "current"
            artifact: $(artifactName)
          - task: AzureCLI@2
            displayName: Delete lock
            inputs:
              azureSubscription: $(serviceConnection)
              scriptType: "pscore"
              scriptLocation: "inlineScript"
              inlineScript: |
                az lock delete --name $(resourceLockName) --resource-group $(resourceGroupToLock)
          - task: TerraformCLI@1
            displayName: terraform init
            inputs:
              command: "init"
              backendType: "azurerm"
              workingDirectory: "$(publishedArtifactsDirectory)"
              backendServiceArm: "$(serviceConnection)"
          - task: TerraformCLI@1
            displayName: terraform destroy
            inputs:
              command: "apply"
              commandOptions: '-destroy -input=false "$(planName)"'
              backendType: "azurerm"
              workingDirectory: "$(publishedArtifactsDirectory)"
              environmentServiceName: "$(serviceConnection)"
          - task: AzureCLI@2
            displayName: Create lock
            inputs:
              azureSubscription: $(serviceConnection)
              scriptType: "pscore"
              scriptLocation: "inlineScript"
              inlineScript: |
                if($(az group exists --name $(resourceGroupToLock)) -eq $true)
                {
                  az lock create --name $(resourceLockName) --resource-group $(resourceGroupToLock) --lock-type ReadOnly --notes "This resource is managed by Terraform."
                }
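After the destroy completes, it can be useful to surface any leftover resources directly in the logs. This sketch reuses the same AzureCLI task pattern as the lock steps; it assumes the resource group itself survives the destroy, as the lock re-creation step above also assumes:

```yaml
# Hypothetical verification step: list any resources still present
# in the resource group after the destroy, so leftovers are visible.
- task: AzureCLI@2
  displayName: List remaining resources
  inputs:
    azureSubscription: $(serviceConnection)
    scriptType: "pscore"
    scriptLocation: "inlineScript"
    inlineScript: |
      if($(az group exists --name $(resourceGroupToLock)) -eq $true)
      {
        az resource list --resource-group $(resourceGroupToLock) --output table
      }
```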
Run the pipeline
Import the pipeline into Azure DevOps and run it manually.
After the Plan stage, you will see the list of resources that will be destroyed.
You can then approve the next stage to destroy them.
Conclusion
With the Azure DevOps pipeline we've explored in this article, you now have a powerful tool at your disposal for safely and efficiently dismantling your Terraform infrastructure.
Embracing automation not only simplifies the cleanup process but also enhances the overall management of your cloud resources.
As you continue to refine your DevOps practices, remember that the ability to create and destroy infrastructure at will brings greater control and agility to your cloud operations.
As usual, you can access the source code on GitHub.