
Mastering GitOps: Deploy Your Application on AKS with Azure DevOps and ArgoCD

Tags: Azure, Azure DevOps, CI/CD, Kubernetes · Dec 06, 2023

In a prior blog post, I discussed the ins and outs of my CI/CD pipeline for deploying infrastructure using Terraform. Now, I'll demonstrate how to leverage GitOps for deploying your applications on AKS with ArgoCD.

By the end of this guide, you'll be equipped with the knowledge to seamlessly install ArgoCD on AKS, structure your deployment repository effectively, and automate the entire process using Azure DevOps.

Throughout this solution, we'll be orchestrating the deployment of an application across two distinct environments: staging and production.

As always, you can access the source code on GitHub.

Let’s get started!

 

Pipeline diagram

First, here is an overview of the solution.

In this CI/CD pipeline, we have two different repositories:

  • One for the application code, which contains our ASP.NET web application.
  • One for the deployment code, which contains the Kubernetes manifests.

Developers always commit to the Application repository.

In this repository, we add a deployment folder that contains the Kubernetes manifests.

We use Kustomize to group multiple manifests together. With Kustomize, we can have what we call a base, where we declare all the manifests that are shared across environments. Then, we add overlays that represent the changes per environment. The overlays and the base get merged together. See the Application Repository in the picture below.

The application repository contains all the base files + all the overlays.

I want to highlight step 5, Update manifests, where we create a new commit in the deployment repository.

We’ll have one stage per environment in this pipeline. So, if you deploy to the staging environment, we copy the base + the staging overlay to the deployment repository. Then, in the production stage, we copy the base + the production overlay to the deployment repository.

Only what is in the deployment repository represents what has been deployed.

 

Infrastructure

I included a Bicep script that will create a test environment for you in Azure.

This Bicep script will create:

  • A virtual network with 4 subnets.
  • A bastion to remotely access private resources in the VNET.
  • A virtual machine that will be used as a jumpbox.
  • A private AKS cluster.
  • A public container registry.
  • An Application Gateway.
  • A private DNS zone for the custom domain mycompany.com.
  • Two A records for the staging and production applications: app-01-staging.mycompany.com and app-01.mycompany.com.

Here is the infrastructure diagram.

 

To deploy this infrastructure, do the following:

Connect to Azure with Azure CLI and retrieve your current User Object ID.

az login
az ad signed-in-user show -o tsv --query id

From the Infrastructure folder, update the main.bicepparam file.

param currentUserObjectId = '{your current user object ID}'

Run script.ps1 from the Infrastructure folder.
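
If you'd rather run the deployment by hand, it boils down to something like the following sketch. The resource group name matches the one used later in this article, but the location and the exact scoping are assumptions; the actual script.ps1 may differ. Recent Azure CLI versions accept .bicepparam files directly with --parameters.

# Assumption: resource-group scoped deployment; adjust the location to your preference.
az group create --name rg-argocd-01 --location eastus
az deployment group create --resource-group rg-argocd-01 --template-file main.bicep --parameters main.bicepparam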

 

Application

For this demonstration, we create a simple ASP.NET application called app-01. Use Visual Studio to create a new ASP.NET Core Web App.

Next, add a Dockerfile to the project.

FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY /output/ .
ENTRYPOINT ["dotnet", "app-01.dll"]

In this Dockerfile, we reference the aspnet base image, set the working directory to /app, expose ports 80 and 443, copy everything from the output directory into the container, and start the application by running app-01.dll.

Notice that we don’t build the solution in the container. This will be done by our CI/CD pipeline. It is a good practice to let the pipeline build the solution and run other processes like unit tests and static code analysis, and only create the container image if everything goes well.
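
You can sanity-check the Dockerfile locally by reproducing what the pipeline will do: publish the app to the output folder, then build and run the image. This is just a local sketch; the app-01:local tag is a throwaway name for testing.

# Run from the project folder, next to the Dockerfile.
dotnet publish -c Release --output ./output
docker build -t app-01:local .
docker run --rm -p 8080:80 app-01:local

Then browse to http://localhost:8080 to confirm the app starts.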

Now, let’s create the Kubernetes manifests.

Add a deployment folder to the solution. In this folder, we add 2 folders: base and overlays.

This is what our application looks like.
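
For reference, the resulting layout looks roughly like this (reconstructed from the files described in the next sections):

app-01/
├── Dockerfile
└── deployment/
    ├── base/
    │   ├── deployment.yaml
    │   ├── service.yaml
    │   ├── ingress.yaml
    │   └── kustomization.yaml
    └── overlays/
        ├── staging/
        │   ├── deployment.yaml
        │   ├── ingress.yaml
        │   └── kustomization.yaml
        └── production/
            ├── deployment.yaml
            ├── ingress.yaml
            └── kustomization.yaml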

 

The base folder

In the base folder, we have the following files:

The deployment.yaml file

Notice the image property: you have to change the container registry name. Go to the Azure portal to retrieve your container registry name. Mine is crj5yn5abgxaivw.

Notice also the tag __Build.BuildId__. It is a token that will be replaced with the actual build ID during pipeline execution.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-01
  name: app-01
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-01
  template:
    metadata:
      labels:
        app: app-01
    spec:
      containers:
        - name: app-01
          image: crj5yn5abgxaivw.azurecr.io/app-01:__Build.BuildId__
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "512Mi"
              cpu: "256m"
            requests:
              memory: "256Mi"
              cpu: "128m"
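
If you want to render this manifest before the pipeline exists, you can substitute the token yourself with a fake build ID. A quick throwaway PowerShell one-liner, run from the deployment folder:

# Prints the manifest with the token replaced; does not modify the file.
(Get-Content .\base\deployment.yaml) -replace '__Build.BuildId__', '1234'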

The service.yaml file

apiVersion: v1
kind: Service
metadata:
  name: app-01
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: app-01

The ingress.yaml file

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-01
  annotations:
    appgw.ingress.kubernetes.io/use-private-ip: "true"
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: app-01-staging.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-01
                port:
                  number: 80

The kustomization.yaml file

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - ingress.yaml
  - service.yaml

 

The overlays folder

Production

In the production folder, we have the following files:

The deployment.yaml file

Here, we are just changing the environment variable ASPNETCORE_ENVIRONMENT.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-01
  name: app-01
spec:
  template:
    spec:
      containers:
        - name: app-01
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: production
 
The ingress.yaml file

Here, we are configuring ingress for the production application. Notice that we changed the host to be app-01.mycompany.com.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-01
  annotations:
    appgw.ingress.kubernetes.io/use-private-ip: "true"
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: app-01.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-01
                port:
                  number: 80

 

The kustomization.yaml file

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: deployment.yaml
  - path: ingress.yaml

 

Staging

For the staging environment, we change the environment variable in the deployment.yaml file and the host name in the ingress.yaml file.

The deployment.yaml file

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-01
  name: app-01
spec:
  template:
    spec:
      containers:
        - name: app-01
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: staging

The ingress.yaml file

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-01
  annotations:
    appgw.ingress.kubernetes.io/use-private-ip: "true"
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: app-01-staging.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-01
                port:
                  number: 80

The kustomization.yaml file stays the same as the production one.
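
Before wiring anything into ArgoCD, it's worth checking that each overlay renders as expected. kubectl has Kustomize built in; the paths below are relative to the app-01 project folder:

kubectl kustomize deployment/overlays/staging
kubectl kustomize deployment/overlays/production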

 

ArgoCD

Now, let’s configure ArgoCD.

First, connect to the AKS cluster with Azure CLI.

az aks get-credentials --name aks-01 --resource-group rg-argocd-01

Then, create the argocd namespace and install ArgoCD.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Install the argocd CLI (https://github.com/argoproj/argo-cd/releases/latest) on your local computer.

Copy the binary to a folder (for instance C:\devops_tools) and rename it to argocd.exe. Then, add that folder to your system PATH environment variable: click the Edit button, then New, and add the folder to the path.

 

To access the ArgoCD server, you can use the Kubernetes port-forwarding feature. In a production environment, you would configure an ingress, but for the purpose of this article port forwarding will do the job.

kubectl port-forward svc/argocd-server -n argocd 8080:443

Retrieve the default argocd password with the following command.

argocd admin initial-password -n argocd
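
The update-password command below requires a logged-in CLI session, so first log in through the port-forward. The --insecure flag skips verification of the self-signed certificate:

argocd login localhost:8080 --username admin --insecure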

It is a good practice to change the password with the following command.

argocd account update-password

Then open your browser and navigate to http://localhost:8080.

Sign in to argocd with the admin username and the password from the previous command.

You will notice that the connection is flagged as not secure since ArgoCD uses a self-signed certificate by default. For the purpose of this article, we will ignore this warning. In production, you would configure ingress with a verified SSL certificate.

Once signed in to ArgoCD, navigate to Settings > Repositories > Connect Repo.

Add the repository URL and a username (it can be anything), and paste your GitHub Personal Access Token in the password field.

Follow the GitHub documentation to create a personal access token for your repository.

Make sure to use the repository URL of the Deployment repository, not the Application repository.
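
If you prefer the CLI over the UI, the same connection can be created with a single command. The URL below is a placeholder; use your own deployment repository and token:

argocd repo add https://github.com/<your-account>/devops-argocd.git --username git --password <your-github-pat>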

 

Deployment repository

In the deployment repository, we have 4 folders:

  • The apps folder that contains the definition for all the applications that we will sync with ArgoCD. We use a Helm chart to group all those applications together.
  • The bootstraps folder that contains the argocd application, which allows ArgoCD to manage itself. It is basically an application that points to the apps folder.
  • The kustomize folder that contains all the manifests for a specific application. Basically, an ArgoCD application created in the apps folder will point to the manifests in the kustomize folder. For our example, we will have the app-01 manifests in the kustomize folder.
  • Lastly, we have a projects folder where we add all the ArgoCD projects that we want. In our case, we have one project per environment: production and staging. Notice that there is an app in the apps folder that points to the projects folder, allowing everything in ArgoCD to be synchronized with Git. We want Git to always be the source of truth. (A sketch of a project manifest follows below.)
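
The project manifests themselves are plain AppProject resources. Here is a minimal sketch of what the staging project could look like; the actual files in the repository may restrict sourceRepos and destinations more tightly:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: staging
  namespace: argocd
spec:
  description: Staging applications
  # Which Git repositories applications in this project may pull from
  sourceRepos:
    - "*"
  # Which cluster/namespace applications in this project may deploy to
  destinations:
    - namespace: staging
      server: https://kubernetes.default.svc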

 

argocd.yaml

The argocd.yaml file in the bootstraps folder represents the argocd application. We use a pattern called App of Apps: we have one app that references all the apps that will be deployed to our cluster.

Notice the repoURL property in the source object and the path property.

We are telling ArgoCD which repository to use and where to find our application manifests.

Make sure to use the Deployment repository and not the Application repository.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
  labels:
    name: argocd
spec:
  project: default

  # Source of the application manifests
  source:
    repoURL: https://github.com/rceraline/devops-argocd.git
    targetRevision: HEAD
    path: apps

  # Destination cluster and namespace to deploy the application
  destination:
    # cluster API URL
    server: https://kubernetes.default.svc

  # Sync policy
  syncPolicy:
    automated: # automated sync by default retries failed attempts 5 times with following delays between attempts ( 5s, 10s, 20s, 40s, 80s ); retry controlled using `retry` field.
      prune: true # Specifies if resources should be pruned during auto-syncing ( false by default ).
      selfHeal: true # Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected ( false by default ).
      allowEmpty: false # Allows deleting all application resources during automatic syncing ( false by default ).

    retry:
      limit: 3

 

apps folder

In the apps folder, we create a Helm application chart. We need to add a Chart.yaml file and a values.yaml file, which contains some default values. In the templates folder, we add all our application templates.
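
The chart layout looks like this (reconstructed from the files shown below):

apps/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── app-01.yaml
    └── projects.yaml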

Chart.yaml

apiVersion: v2
name: applications
description: Applications

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: "1.0"

values.yaml

spec:
  destination:
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/rceraline/devops-argocd.git
    targetRevision: HEAD

 

app-01.yaml

The app-01.yaml file in the apps/templates folder represents an ApplicationSet. An ApplicationSet is a custom resource that creates multiple applications. In our case, it will create an application for each environment: staging and production. Each application will be in a separate namespace. We use a list generator to do that.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: app-01
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - namespace: staging
          - namespace: production
  template:
    metadata:
      name: "{{`{{namespace}}`}}-app-01"
    spec:
      project: "{{`{{namespace}}`}}"
      source:
        repoURL: "{{ .Values.spec.source.repoURL }}"
        targetRevision: HEAD
        path: kustomize/app-01/overlays/{{`{{namespace}}`}}
      destination:
        server: "{{ .Values.spec.destination.server }}"
        namespace: "{{`{{namespace}}`}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

        retry:
          limit: 3
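
Note the backtick escaping ({{`{{namespace}}`}}): it makes Helm emit the literal {{namespace}} so that the ApplicationSet list generator, not Helm, resolves the placeholder. You can preview the rendered Application resources locally with Helm, run from the repository root:

helm template applications ./apps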

 

projects.yaml

The projects.yaml file defines an application that syncs all the ArgoCD projects from the projects folder.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: projects
  namespace: argocd
spec:
  project: default
  source:
    repoURL: "{{ .Values.spec.source.repoURL }}"
    targetRevision: HEAD
    path: projects
  destination:
    server: "{{ .Values.spec.destination.server }}"
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

    retry:
      limit: 3

To install all the applications in ArgoCD, run the following command from the root folder.

kubectl apply -k .\bootstraps\
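
After a few seconds, ArgoCD should have created all the applications. You can verify with:

kubectl get applications -n argocd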

 

Configure Azure DevOps

To be able to push the container images to ACR, we have to create a new Docker Registry service connection.

In Azure DevOps, open your project, navigate to Project Settings > Service connections > New service connection.

Choose Azure Container Registry for the registry type and fill in the other fields as shown in the picture below.

Your container registry name will be different.

Once the service connection is created, go to Pipelines > Environments and create 2 environments:

  • aks-staging
  • aks-production

 

For the aks-production environment, add your user to the Approvers list as shown in the picture below.

 

Pipeline file

In the application repository, create a new azure-pipelines.yml file with the following content.

Replace the repository name property in the resources block with the name of your Deployment repository. Mine is rceraline/devops-argocd.

You might also need to update the applicationFolder variable.

trigger:
  branches:
    include:
      - main

resources:
  repositories:
    - repository: deployment-repo
      type: github
      endpoint: rceraline
      name: rceraline/devops-argocd

variables:
  - name: artifactName
    value: deployment
  - name: componentName
    value: app-01
  - name: gitUserEmailAddress
    value: [email protected]

stages:
  - stage: Build
    jobs:
      - job: Build

        pool:
          vmImage: ubuntu-latest

        variables:
          applicationFolder: $(Build.Repository.LocalPath)/2023-12-aks-argocd-pipeline/Application/app-01
          containerRepository: app-01
          buildConfiguration: "Release"
          deploymentFolder: $(applicationFolder)/deployment
          outputFolder: $(applicationFolder)/output/

        steps:
          - task: DotNetCoreCLI@2
            displayName: Build & Publish
            inputs:
              command: "publish"
              publishWebProjects: false
              modifyOutputPath: false
              workingDirectory: $(applicationFolder)
              arguments: --output $(outputFolder) -c $(buildConfiguration) --self-contained true
              zipAfterPublish: false

          - task: Docker@2
            displayName: Login to ACR
            inputs:
              command: login
              containerRegistry: AzureContainerRegistry

          - task: Docker@2
            displayName: Build Docker image
            inputs:
              containerRegistry: AzureContainerRegistry
              repository: $(containerRepository)
              command: "build"
              Dockerfile: $(applicationFolder)/Dockerfile
              buildContext: $(applicationFolder)
              tags: |
                $(Build.BuildId)
                latest

          - task: Docker@2
            displayName: Push Docker image
            condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
            inputs:
              containerRegistry: AzureContainerRegistry
              repository: $(containerRepository)
              command: "push"
              tags: |
                $(Build.BuildId)
                latest

          - task: CopyFiles@2
            displayName: Copy deployment manifests
            inputs:
              sourceFolder: $(deploymentFolder)
              contents: "**"
              targetFolder: $(Build.ArtifactStagingDirectory)
              overwrite: true

          - task: qetza.replacetokens.replacetokens-task.replacetokens@3
            displayName: Replace tokens
            inputs:
              rootDirectory: $(Build.ArtifactStagingDirectory)/base
              targetFiles: "deployment.yaml"
              encoding: auto
              writeBOM: true
              escapeType: no escaping
              actionOnMissing: log warning
              tokenPrefix: __
              tokenSuffix: __

          - publish: $(Build.ArtifactStagingDirectory)
            artifact: $(artifactName)

  - stage: Staging
    jobs:
      - deployment: Staging

        environment:
          name: aks-staging

        pool:
          vmImage: "ubuntu-latest"

        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: deployment-repo
                  persistCredentials: true
                  clean: true

                - task: DownloadPipelineArtifact@2
                  inputs:
                    artifact: $(artifactName)
                    patterns: |
                      base/**
                      overlays/staging/**

                    path: $(Build.Repository.LocalPath)/kustomize/$(componentName)

                - task: PowerShell@2
                  displayName: Commit to Deployment repo
                  inputs:
                    targetType: "inline"
                    workingDirectory: $(Build.Repository.LocalPath)
                    script: |
                      git config user.email "$(gitUserEmailAddress)"
                      git config user.name "build"
                      git stash
                      git remote update
                      git fetch
                      git checkout --track origin/main
                      git stash pop
                      git add .
                      git commit -m "$(componentName): staging deployment"
                      git push origin HEAD:main

  - stage: Production
    jobs:
      - deployment: Production

        environment:
          name: aks-production

        pool:
          vmImage: "ubuntu-latest"

        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: deployment-repo
                  persistCredentials: true
                  clean: true

                - task: DownloadPipelineArtifact@2
                  inputs:
                    artifact: $(artifactName)
                    patterns: |
                      base/**
                      overlays/production/**

                    path: $(Build.Repository.LocalPath)/kustomize/$(componentName)

                - task: PowerShell@2
                  displayName: Commit to Deployment repo
                  inputs:
                    targetType: "inline"
                    workingDirectory: $(Build.Repository.LocalPath)
                    script: |
                      git config user.email "$(gitUserEmailAddress)"
                      git config user.name "build"
                      git stash
                      git remote update
                      git fetch
                      git checkout --track origin/main
                      git stash pop
                      git add .
                      git commit -m "$(componentName): production deployment"
                      git push origin HEAD:main

We have 3 different stages: Build, Staging and Production.

In the Build stage, we build and publish the dotnet web application.

Then we log in to ACR, build and push the container image.

Next, we copy the content of the deployment folder (the Kubernetes manifests for all environments) to the artifact directory.

We run a replacetokens task that replaces the image tag token __Build.BuildId__ in the deployment.yaml file with the actual build ID.

Lastly, we publish the artifact.

In the Staging stage, we fetch the artifact (only the base and staging overlay) and commit all the files to the Deployment repository. Notice how we reference the deployment repository in the resources block at the beginning of the file.

In the Production stage, we do exactly the same as in the Staging stage, but with the base and the production overlay.

Notice how we use a deployment job for the Staging and Production stages. This allows us to specify a pipeline environment. With an environment, we can configure approvals and checks that must pass before the stage runs.

Commit and push the azure-pipelines.yml to the application repository.

 

Create the pipeline in Azure DevOps

Go to Azure DevOps > Pipelines > New pipeline.

Reference your application repository from GitHub. When you create your GitHub service connection, make sure to give Azure DevOps access to your Deployment repository too.

Once you are authenticated to GitHub, choose to use an Existing Azure Pipelines YAML file and select your newly created azure-pipelines.yml file.

Save the pipeline and run it.

The pipeline will automatically deploy the application to Staging and wait for your approval to go to Production.

You can click on the Review button to approve the deployment.

After a couple of minutes, ArgoCD will automatically fetch the changes and deploy the application. You should have something similar to this picture on your ArgoCD dashboard.

Connect to vm-01 through the Bastion. The username is useradmin and the password is in the main.bicepparam file.

Once connected to the VM, you can test the staging website with the URL app-01-staging.mycompany.com and the production website with the URL app-01.mycompany.com.

You should see something like this screenshot.
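
Alternatively, a quick smoke test from a terminal on the jumpbox also works, since the private DNS zone resolves both names. This assumes the Application Gateway listens on HTTP port 80:

curl -I http://app-01-staging.mycompany.com
curl -I http://app-01.mycompany.com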

 

Conclusion

In summary, we've delved into the mechanics of implementing GitOps using Azure DevOps, ArgoCD, and AKS. The solution presented here uses two separate repositories: one for the application and one for the deployment.

Why the segregation? Maintaining separate repositories for the application and deployment is a best practice to enhance organizational clarity, streamline collaboration, and ensure a clean separation of concerns in the GitOps workflow.

Furthermore, the presented solution offers scalability, particularly in a microservices architecture. With each application residing in its separate repository, individual pipelines seamlessly push their manifests to the Deployment repository, establishing it as the singular source of truth.

For those moments requiring manual adjustments, a DevOps administrator can opt for direct interventions by committing changes directly to the Deployment repository.

An additional layer of security is introduced because the pipeline is deliberately isolated from the Kubernetes control plane, a security benefit inherent to the pull-based deployment model.

And with that, we conclude this technical exploration. I hope you liked it.

 

Work With Me

Ready to take your Azure solutions to the next level and streamline your DevOps processes? Let's work together! As an experienced Azure solutions architect and DevOps expert, I can help you achieve your goals. Click the button below to get in touch.

Get In Touch