Creating a Self-Service Platform on AWS for Users to Manage Non-AWS Resources
I’ve tried using AWS Service Catalog in conjunction with CloudFormation, Lambda (Python), Azure DevOps pipelines, Terraform, and other tools to create a simple self-service platform on AWS that lets users provision non-AWS resources on their own.
How Does It Work? What Tools/Services Are Used?
AWS Service Catalog
AWS Service Catalog is used to create a user interface (UI) that controls user input as we want. It works together with IAM to define which users can provision what. This is the only front end that users interact with.
AWS IAM
As mentioned earlier, Service Catalog can derive permissions from IAM, making good use of AWS’s existing authentication and authorization capabilities.
AWS CloudFormation
CloudFormation isn’t used to create resources here; instead, it receives user input from the Service Catalog UI and then passes all the parameters to a Lambda function for processing.
This technique is called a “CloudFormation Custom Resource”, which lets us use CloudFormation to manage anything we need (of course, because we write the code ourselves).
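To make the mechanics concrete, here is a minimal sketch (not the project’s actual code) of the contract behind a custom resource: CloudFormation invokes the Lambda with a request event, and the function must report success or failure back to the pre-signed ResponseURL (crhelper, used later in this article, handles that callback for you). The field names follow the CloudFormation custom resource request/response schema; the event values are made up for illustration.

```python
import json

def build_custom_resource_response(event: dict, status: str, reason: str = "") -> dict:
    """Build the JSON body CloudFormation expects at the pre-signed ResponseURL."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason or f"See CloudWatch Logs for {event['LogicalResourceId']}",
        "PhysicalResourceId": event.get("PhysicalResourceId", "iac-repo-init"),
        # These three must be echoed back unchanged from the request
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }

# Abridged example of the request event CloudFormation sends
event = {
    "RequestType": "Create",
    "StackId": "arn:aws:cloudformation:ap-southeast-1:123456789012:stack/demo/abc",
    "RequestId": "unique-request-id",
    "LogicalResourceId": "CreateIacRepository",
    "ResourceProperties": {"Project": "demo", "Environments": "dev,uat,prd"},
}

body = build_custom_resource_response(event, "SUCCESS")
print(json.dumps(body, indent=2))
```

Until CloudFormation receives this response body, the stack operation stays in progress, which is why a custom resource that crashes without responding appears to hang.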
AWS Lambda and Python
Once the Lambda function receives parameters from CloudFormation, the Python code will:
- Use Git to clone the Terraform files used for creating resources.
- Generate .tfvars files and then push them back to the remote Git repository.
AWS Secrets Manager
Although it’s not the primary focus of this article, I use Secrets Manager to store the Azure DevOps personal access token that the Lambda function uses to access the Git repository on Azure DevOps.
Azure DevOps (Git Repository and Pipeline)
I prepared the Terraform code for creating non-AWS resources and stored it all in Azure DevOps, along with the automation pipeline.
When a .tfvars file is pushed to this repository, the pipeline creates a Terraform workspace and runs Terraform to create resources according to the variables in that file.
Terraform
Terraform is the IaC tool I use to create the resources. Why not CloudFormation? Simply because CloudFormation cannot manage resources outside of AWS.
Service Catalog can now use Terraform instead of CloudFormation, but I didn’t want to maintain a Terraform engine on AWS myself. So I moved the provisioning engine to Azure DevOps, which is a managed service.
Bash
It’s not a significant part of this article either, but I used Bash to write the decision-making logic inside the automation pipeline running behind the scenes.
What Can Users Provision in This Project?
Your goal doesn’t have to match mine. In this project, I wanted anyone with the right permissions to be able to create, on their own, a Git repository that comes with code and a CI/CD pipeline on the DevOps platform: an initialization step so everything is ready to go whenever we start a new project.
Of course, the idea comes from the internal developer platform (IDP). I just wanted to try a lite version first. You can read more about IDP here.
How Do Users Create Resources via Service Catalog?
- Go to the AWS Management Console and navigate to AWS Service Catalog.
- Select the product and click Launch product (users will only see products that they’re allowed to create).
- Fill in the details via the UI.
- Click Launch product and wait for provisioning to complete.
If you then look at Azure DevOps, you’ll see that a new .tfvars file has been created in the projects directory.
And an automation pipeline will be triggered to execute Terraform for provisioning non-AWS resources.
Code Examples for this Project
In this section, I’ll share all the code used in this project except the Terraform code, which I won’t share because it’s quite complex, contains company-related details, and isn’t the main focus of this article.
CloudFormation Template
This is where the user interface (UI) for users is defined. Note that we can control the input data coming from users through the Service Catalog form (UI).
---
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  Environments:
    Description: "List of environments to be created (e.g., dev, uat, prd)"
    Type: List<String>
    AllowedValues:
      - dev
      - uat
      - prd
    Default: "dev,uat,prd"
  Stacks:
    Description: "List of stacks to be created (e.g., network, database)"
    Type: List<String>
    AllowedValues:
      - network
      - database
    Default: "network,database"
  ApproverEmail:
    Description: "Email address of the approver"
    Type: String
    Default: "nopnithi@example.com"
    AllowedPattern: "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
    ConstraintDescription: "Must be a valid email address format"
  InitialEnvironments:
    Description: "List of initial environments to be created (e.g., dev, uat, prd)"
    Type: List<String>
    AllowedValues:
      - dev
      - uat
      - prd
    Default: "dev"
  InitialStacks:
    Description: "List of initial stacks to be created (e.g., network, database)"
    Type: List<String>
    AllowedValues:
      - network
      - database
    Default: "network"
  AwsAccountId:
    Description: "AWS Account ID where resources will be created"
    Type: String
    Default: "123456789012"
    AllowedPattern: "^[0-9]{12}$"
    ConstraintDescription: "Must be a valid 12-digit AWS Account ID"
  AwsRegion:
    Description: "AWS Region where resources will be created"
    Type: String
    Default: "ap-southeast-1"
    AllowedValues:
      - ap-southeast-1
  Project:
    Description: "Name of the project"
    Type: String
    AllowedPattern: "^[a-z0-9-]+$"
    ConstraintDescription: "Must contain only lowercase letters, numbers, and hyphens"
Resources:
  CreateIacRepository:
    Type: "Custom::CreateIacRepository"
    Properties:
      ServiceToken: "arn:aws:lambda:ap-southeast-1:123456789012:function:iac-repo-initializer"
      Environments: !Join [",", !Ref Environments]
      Stacks: !Join [",", !Ref Stacks]
      ApproverEmail: !Ref ApproverEmail
      InitialEnvironments: !Join [",", !Ref InitialEnvironments]
      InitialStacks: !Join [",", !Ref InitialStacks]
      AwsAccountId: !Ref AwsAccountId
      AwsRegion: !Ref AwsRegion
      Project: !Ref Project
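One detail worth noting: the template uses !Join to flatten each list parameter into a single comma-separated string property, and the Lambda code recovers the list with str.split. A tiny sketch of that round trip (values are illustrative):

```python
# The template's !Join [",", !Ref Environments] turns the list parameter
# into one comma-separated string property on the custom resource...
environments_property = ",".join(["dev", "uat", "prd"])

# ...which the Lambda handler later recovers with str.split(',')
environments = environments_property.split(",")
print(environments)  # ['dev', 'uat', 'prd']
```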
Lambda Python Code
I used crhelper to make the coding for CloudFormation Custom Resource easier. Also, libraries like GitPython and pytz were used. These should be installed locally using pip
and upload to function with the Lambda code below.
import os
import shutil
import json
import logging
import datetime
from typing import Dict

import boto3
import pytz
from git import Repo
from crhelper import CfnResource
from botocore.exceptions import ClientError

# Make sure INFO-level messages show up in CloudWatch Logs
logging.getLogger().setLevel(logging.INFO)

helper = CfnResource()


@helper.create
def initialize_repo(event, _):
    try:
        logging.info('Starting script...')
        # Get the Azure DevOps data from the environment variables
        azdo_org = os.environ.get('AZDO_ORG')
        azdo_project = os.environ.get('AZDO_PROJECT')
        azdo_init_iac_repo = os.environ.get('AZDO_INIT_IAC_REPO')
        azdo_pat_secret_name = os.environ.get('AZDO_PAT_SECRET_NAME')
        base_path = os.environ.get('BASE_PATH')
        # Retrieve the Azure DevOps personal access token from AWS Secrets Manager
        azdo_pat = get_azdo_pat(azdo_pat_secret_name)
        # Get the parameters from the event
        params = {
            'aws_account_id': event['ResourceProperties']['AwsAccountId'],
            'project': event['ResourceProperties']['Project'],
            'environments': event['ResourceProperties']['Environments'].split(','),
            'stacks': event['ResourceProperties']['Stacks'].split(','),
            'initial_environments': event['ResourceProperties']['InitialEnvironments'].split(','),
            'initial_stacks': event['ResourceProperties']['InitialStacks'].split(','),
            'approver_email': event['ResourceProperties']['ApproverEmail']
        }
        # Clone the repository to the destination path
        repo_url = f'https://{azdo_pat}@dev.azure.com/{azdo_org}/{azdo_project}/_git/{azdo_init_iac_repo}'
        clone_repository(repo_url, f'{base_path}/{azdo_init_iac_repo}')
        # Set Git user and email
        git_user = 'Nopnithi Khaokaew (Game)'
        git_email = 'me@nopnithi.dev'
        set_git_config(f'{base_path}/{azdo_init_iac_repo}', git_user, git_email)
        # Generate the .tfvars file from the parameters
        generate_tfvars_file(params, f'{base_path}/{azdo_init_iac_repo}')
        # Commit and push the changes to the remote repository
        commit_and_push_changes(
            f'{base_path}/{azdo_init_iac_repo}', params['project'],
            f'feat: Initialize IaC repository for project "{params["project"]}"'
        )
        logging.info('Script completed successfully.')
    except Exception as e:
        logging.error(f'Error running script: {str(e)}')
        raise e


@helper.update
@helper.delete
def no_op(_, __):
    pass


def handler(event, context):
    helper(event, context)


def clone_repository(repo_url: str, repo_path: str) -> None:
    """
    Clones a Git repository using the GitPython library.
    """
    try:
        logging.info('Cloning repository...')
        if os.path.exists(repo_path):
            shutil.rmtree(repo_path)
        Repo.clone_from(repo_url, repo_path)
        logging.info('Repository cloned successfully.')
    except Exception as e:
        logging.error(f'Error cloning repository: {str(e)}')
        raise e


def set_git_config(repo_path: str, user: str, email: str) -> None:
    """
    Sets the Git user and email for the repository.
    """
    try:
        logging.info('Setting Git user and email...')
        with Repo(repo_path) as repo:
            repo.config_writer().set_value("user", "name", user).release()
            repo.config_writer().set_value("user", "email", email).release()
        logging.info('Git user and email set successfully.')
    except Exception as e:
        logging.error(f"Error setting Git user and email: {str(e)}")
        raise e


def build_tfvars_content(params: Dict[str, str]) -> str:
    """
    Builds the content of the .tfvars file from the received parameters.
    """
    tz = pytz.timezone('Asia/Bangkok')
    now = datetime.datetime.now(tz=tz)
    content = f'# Auto-generated via service catalog at {now}\n\n'
    for key, value in params.items():
        if isinstance(value, list):
            value_str = '[' + ', '.join([f'"{v}"' for v in value]) + ']'
        else:
            value_str = f'"{value}"'
        content += f'{key:<21}= {value_str}\n'
    return content


def generate_tfvars_file(params: Dict[str, str], repo_path: str) -> None:
    """
    Generates the .tfvars file from the received parameters.
    """
    try:
        logging.info('Generating .tfvars file...')
        content = build_tfvars_content(params)
        project_path = f'{repo_path}/projects'
        if not os.path.exists(project_path):
            os.makedirs(project_path)
        tfvars_path = f'{project_path}/{params["project"]}.tfvars'
        with open(tfvars_path, 'w') as file:
            file.write(content)
        logging.info(f'{params["project"]}.tfvars file generated successfully.')
    except Exception as e:
        logging.error(f'Error generating .tfvars file: {str(e)}')
        raise e


def commit_and_push_changes(repo_path: str, project: str, commit_message: str) -> None:
    """
    Commits and pushes the changes to the remote repository.
    """
    try:
        logging.info('Committing and pushing changes...')
        with Repo(repo_path) as repo:
            index = repo.index
            index.add([f'projects/{project}.tfvars'])
            index.commit(commit_message)
            origin = repo.remote(name='origin')
            origin.push()
        logging.info('Changes committed and pushed successfully.')
    except Exception as e:
        logging.error(f'Error committing and pushing changes: {str(e)}')
        raise e


def get_azdo_pat(secret_name: str) -> str:
    """
    Retrieves the Azure DevOps personal access token from AWS Secrets Manager.
    """
    secrets_manager = boto3.client('secretsmanager')
    try:
        response = secrets_manager.get_secret_value(SecretId=secret_name)
        secret = json.loads(response['SecretString'])
        return secret['token']
    except ClientError as e:
        logging.error(f'Error getting secret {secret_name}: {e}')
        raise e
As you can see from the code, this project only uses CloudFormation for provisioning; I didn’t write handlers for “update” and “delete” (just to be clear, that’s possible too).
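If you did want to support those operations, the core idea is simply dispatching on the RequestType field of the event (crhelper does this via its @helper.update and @helper.delete decorators). The sketch below is hypothetical and not part of the project; it shows the dispatch shape without crhelper, with the branches left as stubs:

```python
def handle_custom_resource(event: dict) -> str:
    """Dispatch a CloudFormation custom resource event to the right action."""
    request_type = event["RequestType"]  # "Create", "Update", or "Delete"
    if request_type == "Create":
        return "initialized repository"   # would call initialize_repo(...)
    if request_type == "Update":
        return "updated repository"       # e.g. regenerate and push the .tfvars file
    if request_type == "Delete":
        return "deleted repository"       # e.g. remove the .tfvars file, destroy workspace
    raise ValueError(f"Unknown RequestType: {request_type}")

print(handle_custom_resource({"RequestType": "Delete"}))  # deleted repository
```

A Delete handler deserves care: CloudFormation sends it on stack deletion and on rollback, so it should be idempotent and tolerate resources that were never fully created.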
Azure Pipeline YAML
The logic in my pipeline is quite complex, but I’ll explain it briefly. (It depends on what you want to do, which doesn’t have to be as complex as mine.)
The repository used for resource provisioning contains the Terraform code, and the projects directory is where the .tfvars files are stored. When a user requests a resource via AWS Service Catalog, a file is eventually pushed here. The pipeline then runs and performs the following actions:
- Check all the .tfvars files in the projects directory.
- Check which Terraform workspaces already exist.
- Compare the two to determine which workspaces need to be created (nothing is done for ones that already exist).
- Run Terraform to create the resources. Each .tfvars file corresponds to one project and gets its own workspace (without separate workspaces, a single shared state would be risky and would slow down as the number of projects grows).
- You may wonder why there is more than one terraform apply task. There is a bug between Terraform and Azure DevOps that can cause an error (the run gets stuck) during provisioning, which requires re-applying to resolve. So I wrote a Bash script that reruns the task if the error occurs.
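The comparison step above is done in Bash with sort | uniq -u: it combines both name lists and keeps the names that appear exactly once (a symmetric difference, which equals the set of new projects as long as every existing workspace still has its .tfvars file). The same logic can be sketched in Python:

```python
def find_new_projects(tfvars_names: list, workspaces: list) -> list:
    """Mirror the pipeline's `sort | uniq -u`: keep names that appear only once."""
    combined = sorted(tfvars_names + workspaces)
    return [name for name in combined if combined.count(name) == 1]

# A workspace already exists for "alpha", so only "bravo" needs to be created.
print(find_new_projects(["alpha", "bravo"], ["alpha"]))  # ['bravo']
```

Note the caveat baked into uniq -u: a workspace whose .tfvars file was deleted would also surface as "new", so this approach assumes files are only ever added.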
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - "projects/*.tfvars"

pool:
  vmImage: ubuntu-latest

variables:
  - group: myvargroup
  - name: region
    value: ap-southeast-1

jobs:
  - job: CreateIacRepo
    displayName: "Create IaC Repository"
    steps:
      - checkout: self
      - task: TerraformInstaller@0
        displayName: "Install Terraform"
        inputs:
          terraformVersion: "latest"
      - bash: |
          files_in_dir=$(ls "$(Build.SourcesDirectory)/projects"/*.tfvars | xargs -n1 basename | sed 's/\.tfvars//' | tr '\n' ' ')
          echo "##vso[task.setvariable variable=filesInDir]$files_in_dir"
          echo "------------------------------------------"
          echo "Output:"
          echo $files_in_dir
          echo "------------------------------------------"
        displayName: "List .tfvars file names in projects directory"
        name: setfilesInDir
      - bash: |
          terraform init \
            -backend-config="access_key=${AWS_ACCESS_KEY_ID}" \
            -backend-config="secret_key=${AWS_SECRET_ACCESS_KEY}" \
            -backend-config="region=$(region)" \
            -backend-config="bucket=nopnithi-tfstate-xyz123" \
            -backend-config="key=iacrepo/terraform.tfstate" \
            -backend-config="role_arn=arn:aws:iam::123456789012:role/terraform-role" \
            -backend-config="encrypt=true"
          workspaces=$(terraform workspace list | sed 's/^\*//;s/ //g' | grep -v 'default' | tr '\n' ' ')
          echo "##vso[task.setvariable variable=workspaces]$workspaces"
          echo "------------------------------------------"
          echo "Output:"
          echo $workspaces
          echo "------------------------------------------"
        displayName: "List Terraform workspaces"
        name: setworkspaces
        env:
          TF_TOKEN_app_terraform_io: $(tf_token_app_terraform_io)
          AWS_ACCESS_KEY_ID: $(aws_access_key_id)
          AWS_SECRET_ACCESS_KEY: $(aws_secret_access_key)
          AZDO_PERSONAL_ACCESS_TOKEN: $(azdo_personal_access_token)
          AZDO_ORG_SERVICE_URL: $(azdo_org_service_url)
      - bash: |
          IFS=' ' read -ra files_in_dir <<< "$(filesInDir)"
          IFS=' ' read -ra workspaces <<< "$(workspaces)"
          new_files=($(echo ${files_in_dir[@]} ${workspaces[@]} | tr ' ' '\n' | sort | uniq -u))
          echo "##vso[task.setvariable variable=newFiles]${new_files[*]}"
          echo "------------------------------------------"
          echo "Output:"
          echo ${new_files[*]}
          echo "------------------------------------------"
        displayName: "Find new .tfvars files"
        name: setnewFiles
      - bash: |
          overall_status=0
          echo "------------------------------------------"
          echo "Input:"
          echo $(newFiles)
          echo "------------------------------------------"
          echo "Running terraform init"
          terraform init \
            -backend-config="access_key=${AWS_ACCESS_KEY_ID}" \
            -backend-config="secret_key=${AWS_SECRET_ACCESS_KEY}" \
            -backend-config="region=$(region)" \
            -backend-config="bucket=nopnithi-tfstate-xyz123" \
            -backend-config="key=iacrepo/terraform.tfstate" \
            -backend-config="role_arn=arn:aws:iam::123456789012:role/terraform-role" \
            -backend-config="encrypt=true"
          IFS=' ' read -ra new_files <<< "$(newFiles)"
          for file in "${new_files[@]}"; do
            workspace_name=$file
            echo "Creating workspace: $workspace_name"
            terraform workspace select -or-create=true "$workspace_name"
            echo "Running terraform apply"
            terraform apply -auto-approve -var-file=projects/$workspace_name.tfvars
            # Check the exit status of the terraform command
            if [ $? -ne 0 ]; then
              echo "Terraform apply failed for $workspace_name"
              overall_status=1
            fi
          done
          echo "##vso[task.setvariable variable=overallStatus]$overall_status"
          echo "------------------------------------------"
          echo "Output:"
          echo $overall_status
          echo "------------------------------------------"
        name: TerraformApply
        displayName: "Run terraform apply"
        condition: ne(variables.newFiles, '')
        continueOnError: true
        env:
          TF_TOKEN_app_terraform_io: $(tf_token_app_terraform_io)
          AWS_ACCESS_KEY_ID: $(aws_access_key_id)
          AWS_SECRET_ACCESS_KEY: $(aws_secret_access_key)
          AZDO_PERSONAL_ACCESS_TOKEN: $(azdo_personal_access_token)
          AZDO_ORG_SERVICE_URL: $(azdo_org_service_url)
      - bash: |
          echo "------------------------------------------"
          echo "Input:"
          echo $(newFiles)
          echo "------------------------------------------"
          echo "Running terraform init"
          terraform init \
            -backend-config="access_key=${AWS_ACCESS_KEY_ID}" \
            -backend-config="secret_key=${AWS_SECRET_ACCESS_KEY}" \
            -backend-config="region=$(region)" \
            -backend-config="bucket=nopnithi-tfstate-xyz123" \
            -backend-config="key=iacrepo/terraform.tfstate" \
            -backend-config="role_arn=arn:aws:iam::123456789012:role/terraform-role" \
            -backend-config="encrypt=true"
          IFS=' ' read -ra new_files <<< "$(newFiles)"
          for file in "${new_files[@]}"; do
            workspace_name=$file
            echo "Selecting workspace: $workspace_name"
            terraform workspace select -or-create=true "$workspace_name"
            echo "Running terraform apply"
            terraform apply -auto-approve -var-file=projects/$workspace_name.tfvars
          done
        name: TerraformApply2
        displayName: "Run terraform apply 2"
        condition: eq(variables.overallStatus, 1)
        env:
          TF_TOKEN_app_terraform_io: $(tf_token_app_terraform_io)
          AWS_ACCESS_KEY_ID: $(aws_access_key_id)
          AWS_SECRET_ACCESS_KEY: $(aws_secret_access_key)
          AZDO_PERSONAL_ACCESS_TOKEN: $(azdo_personal_access_token)
          AZDO_ORG_SERVICE_URL: $(azdo_org_service_url)
That’s all! I hope this article will be useful for those who are about to try this approach 😄.