CodePipeline Blue-Green X-Ray Tracing: CI/CD Mastery

AWS CI/CD and Monitoring: CodePipeline, X-Ray, and CloudWatch

Introduction

Continuous integration and continuous delivery (CI/CD) pipelines and comprehensive monitoring are essential for modern cloud applications. AWS provides CodePipeline for orchestrating release workflows, CodeBuild for building artifacts, CodeDeploy for automated deployments, X-Ray for distributed tracing, and CloudWatch for metrics and logs.

This guide explores CI/CD pipeline automation, blue/green deployments, distributed tracing patterns, and production monitoring. We'll cover practical deployment strategies, debugging microservices, and observability best practices.


AWS Developer Certification Series

📚 View Complete AWS Developer Certification Guide - Master all 7 parts with our comprehensive learning path.

This is Part VII (Current Article) of our comprehensive 7-part AWS developer guide:

  1. Part I: IAM EC2 & Auto Scaling
  2. Part II: RDS Aurora & DynamoDB
  3. Part III: SQS SNS & Kinesis
  4. Part IV: Lambda API Gateway
  5. Part V: ECS Fargate & IaC
  6. Part VI: Cognito KMS Security
  7. Part VII: CodePipeline & Monitoring (Current Article)

← Part VI: Cognito KMS Security


AWS CodePipeline

Pipeline Architecture

CodePipeline orchestrates CI/CD workflows through stages (Source, Build, Test, Deploy) with automated approvals.

CodePipeline Workflow:

  Source Stage (CodeCommit/GitHub)
    --trigger--> Build Stage (CodeBuild)
    --> Test Stage (CodeBuild)
    --> Manual Approval (SNS notification)
    --> Deploy Stage (CodeDeploy)
    --> Production (EC2/ECS)

  Artifacts are stored in S3 and passed between stages.

Key Concepts:

  • Stage: Logical phase in pipeline (Source, Build, Test, Deploy)
  • Action: Task within stage (e.g., CodeBuild, CodeDeploy, Lambda invoke)
  • Artifact: Output from one stage passed to next stage
  • Transition: Connection between stages (can be disabled)

Create Pipeline with AWS CLI

# Create pipeline (requires pipeline.json definition)
aws codepipeline create-pipeline --cli-input-json file://pipeline.json

# Start pipeline execution
aws codepipeline start-pipeline-execution --name my-pipeline

# Get pipeline state
aws codepipeline get-pipeline-state --name my-pipeline

# Enable/disable stage transition
aws codepipeline disable-stage-transition \
  --pipeline-name my-pipeline \
  --stage-name Deploy \
  --transition-type Inbound \
  --reason "Testing in progress"

Example pipeline definition:

{
  "pipeline": {
    "name": "my-web-app-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
    "artifactStore": {
      "type": "S3",
      "location": "my-pipeline-artifacts-bucket"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "SourceAction",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "CodeCommit",
              "version": "1"
            },
            "configuration": {
              "RepositoryName": "my-app",
              "BranchName": "main"
            },
            "outputArtifacts": [{ "name": "SourceOutput" }]
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "name": "BuildAction",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "configuration": {
              "ProjectName": "my-build-project"
            },
            "inputArtifacts": [{ "name": "SourceOutput" }],
            "outputArtifacts": [{ "name": "BuildOutput" }]
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [
          {
            "name": "DeployAction",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "CodeDeploy",
              "version": "1"
            },
            "configuration": {
              "ApplicationName": "my-app",
              "DeploymentGroupName": "production"
            },
            "inputArtifacts": [{ "name": "BuildOutput" }]
          }
        ]
      }
    ]
  }
}

AWS CodeBuild

Build Architecture

CodeBuild compiles code, runs tests, and produces deployment artifacts using buildspec.yml.

CodeBuild Environment:

  Source code (CodeCommit/GitHub) + buildspec.yml
    --> Install phase (dependencies)
    --> Pre-build phase (tests)
    --> Build phase (compile)
    --> Post-build phase (package)
    --> Artifacts to S3; build output to CloudWatch Logs

buildspec.yml Example

version: 0.2

env:
  parameter-store:
    DB_PASSWORD: /myapp/db/password
  secrets-manager:
    API_KEY: prod/api:key

phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm install

  pre_build:
    commands:
      - echo "Running tests..."
      - npm test
      - echo "Logging in to Amazon ECR..."
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com

  build:
    commands:
      - echo "Building Docker image..."
      - docker build -t my-app:$CODEBUILD_RESOLVED_SOURCE_VERSION .
      - docker tag my-app:$CODEBUILD_RESOLVED_SOURCE_VERSION $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my-app:latest

  post_build:
    commands:
      - echo "Pushing Docker image..."
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my-app:latest
      - echo "Build completed on $(date)"

artifacts:
  files:
    - '**/*'
  base-directory: dist

cache:
  paths:
    - 'node_modules/**/*'

Start build:

# Start build
aws codebuild start-build --project-name my-project

# Start build with environment variable override
aws codebuild start-build \
  --project-name my-project \
  --environment-variables-override name=ENV,value=production,type=PLAINTEXT

AWS CodeDeploy

Deployment Strategies

CodeDeploy supports in-place and blue/green deployments for EC2, Lambda, and ECS.

In-Place Deployment (EC2):

  • Updates existing instances incrementally
  • No infrastructure changes
  • Potential downtime during deployment

Blue/Green Deployment (EC2/ECS/Lambda):

  • Creates new environment (green) alongside existing (blue)
  • Routes traffic to green after validation
  • Easy rollback by switching traffic back to blue
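For ECS and Lambda, CodeDeploy shifts traffic using canary, linear, or all-at-once configurations (e.g. the `Linear10PercentEvery1Minutes`-style presets). A rough sketch of what a linear schedule computes:

```javascript
// Sketch: compute a linear traffic-shift schedule in the spirit of
// CodeDeploy's LinearXPercentEveryYMinutes configs (illustrative only).
function linearShiftSchedule(percentPerStep, intervalMinutes) {
  const steps = [];
  let green = 0;
  let minute = 0;
  while (green < 100) {
    green = Math.min(100, green + percentPerStep);
    minute += intervalMinutes;
    steps.push({ minute, greenPercent: green });
  }
  return steps;
}
```

For example, 10% every minute reaches 100% on the green fleet after ten steps; a rollback simply routes 100% back to blue.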

appspec.yml for EC2

version: 0.0
os: linux

files:
  - source: /
    destination: /var/www/html

hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: root

  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root

  AfterInstall:
    - location: scripts/configure_app.sh
      timeout: 300
      runas: root

  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root

  ValidateService:
    - location: scripts/validate_service.sh
      timeout: 300
      runas: root

CodeDeploy CLI Commands

# Create deployment
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name production \
  --s3-location bucket=my-deployments,key=app.zip,bundleType=zip

# Get deployment status
aws deploy get-deployment --deployment-id d-ABCDEFGH

# Stop deployment
aws deploy stop-deployment \
  --deployment-id d-ABCDEFGH \
  --auto-rollback-enabled

Blue/Green deployment config:

{
  "deploymentGroupName": "my-app-blue-green",
  "blueGreenDeploymentConfiguration": {
    "terminateBlueInstancesOnDeploymentSuccess": {
      "action": "TERMINATE",
      "terminationWaitTimeInMinutes": 5
    },
    "deploymentReadyOption": {
      "actionOnTimeout": "CONTINUE_DEPLOYMENT"
    },
    "greenFleetProvisioningOption": {
      "action": "COPY_AUTO_SCALING_GROUP"
    }
  }
}

AWS X-Ray

Distributed Tracing Architecture

X-Ray traces requests across microservices to identify performance bottlenecks and errors.

X-Ray Tracing Flow:

  Client request --> API Gateway (Trace ID: abc123)
    --> Lambda function (segment)
    --> DynamoDB call (subsegment)

  Each service sends trace data to the X-Ray service, which powers trace
  analysis and the service map in the X-Ray console.

Key Concepts:

  • Trace: End-to-end request journey (identified by Trace ID)
  • Segment: Data from single service (e.g., Lambda function)
  • Subsegment: Granular operation within segment (e.g., DynamoDB call)
  • Annotations: Indexed key-value pairs for filtering
  • Metadata: Non-indexed additional data

Instrumenting Node.js with X-Ray

const AWSXRay = require('aws-xray-sdk-core');
const AWS = AWSXRay.captureAWS(require('aws-sdk'));
const express = require('express');

const app = express();

// Enable X-Ray for Express
app.use(AWSXRay.express.openSegment('MyApp'));

const dynamodb = new AWS.DynamoDB.DocumentClient();

app.get('/api/users/:id', async (req, res) => {
  // Create subsegment for DynamoDB call
  const segment = AWSXRay.getSegment();
  const subsegment = segment.addNewSubsegment('DynamoDB-GetUser');

  try {
    const result = await dynamodb
      .get({
        TableName: 'Users',
        Key: { userId: req.params.id },
      })
      .promise();

    // Add annotation (indexed, filterable)
    subsegment.addAnnotation('userId', req.params.id);

    // Add metadata (not indexed)
    subsegment.addMetadata('userDetails', result.Item);

    subsegment.close();
    res.json(result.Item);
  } catch (error) {
    subsegment.addError(error);
    subsegment.close();
    res.status(500).json({ error: error.message });
  }
});

app.use(AWSXRay.express.closeSegment());

app.listen(3000);
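The try/catch/close pattern above repeats in every handler. A small helper (hypothetical, not part of aws-xray-sdk) can centralize it so a subsegment is always closed and errors are always recorded:

```javascript
// Hypothetical helper (not part of aws-xray-sdk): wraps an async operation
// in a subsegment, recording errors and always closing the subsegment.
async function withSubsegment(parentSegment, name, fn) {
  const sub = parentSegment.addNewSubsegment(name);
  try {
    return await fn(sub);
  } catch (err) {
    sub.addError(err); // error is captured before rethrowing
    throw err;
  } finally {
    sub.close(); // closed on both success and failure
  }
}
```

The handler body then shrinks to `await withSubsegment(AWSXRay.getSegment(), 'DynamoDB-GetUser', async (sub) => { ... })`.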

X-Ray Sampling Rules

Control trace volume and cost with sampling rules. Note that the sampling decision is made when a request arrives, so rules match request attributes such as host, method, and URL path; a response status code is not known at sampling time:

{
  "version": 2,
  "rules": [
    {
      "description": "Trace all checkout requests",
      "host": "*",
      "http_method": "*",
      "url_path": "/api/checkout*",
      "fixed_target": 2,
      "rate": 1.0,
      "priority": 1,
      "service_name": "*",
      "service_type": "*",
      "resource_arn": "*",
      "attributes": {}
    },
    {
      "description": "Sample 10% of successful requests",
      "host": "*",
      "http_method": "*",
      "url_path": "*",
      "fixed_target": 1,
      "rate": 0.1,
      "priority": 100
    }
  ],
  "default": {
    "fixed_target": 1,
    "rate": 0.05
  }
}
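The `fixed_target`/`rate` pair implements reservoir-plus-rate sampling: trace up to `fixed_target` requests each second, then sample the remainder at `rate`. A minimal sketch of that decision logic (illustrative, not the SDK's implementation):

```javascript
// Sketch of fixed_target + rate sampling: the first `fixedTarget` requests
// each second are always traced; the rest are traced with probability `rate`.
function makeSampler(fixedTarget, rate) {
  let currentSecond = -1;
  let takenThisSecond = 0;
  return function shouldSample(nowSeconds, rand = Math.random()) {
    const second = Math.floor(nowSeconds);
    if (second !== currentSecond) {
      currentSecond = second;
      takenThisSecond = 0; // new second: reset the reservoir
    }
    if (takenThisSecond < fixedTarget) {
      takenThisSecond++;
      return true; // reservoir: guaranteed traces per second
    }
    return rand < rate; // remainder sampled probabilistically
  };
}
```

With the default rule (`fixed_target: 1`, `rate: 0.05`), a quiet service still produces one trace per second while a busy one stays near 5%.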

Query traces with AWS CLI:

# Get trace summaries
aws xray get-trace-summaries \
  --start-time 2025-01-01T00:00:00Z \
  --end-time 2025-01-01T23:59:59Z \
  --filter-expression 'service("my-api") AND http.status = 500'

# Get full trace details
aws xray batch-get-traces --trace-ids 1-5f27cd8a-abc123def456abc123def456

AWS CloudWatch

Monitoring Architecture

CloudWatch collects metrics, logs, and events for comprehensive observability.

CloudWatch Architecture:

  Data sources (EC2 metrics, Lambda logs, application logs, custom metrics)
    --> CloudWatch (Metrics, Logs, Alarms, Dashboards)
    --> SNS --> Alerts

Custom Metrics with Node.js SDK

const {
  CloudWatchClient,
  PutMetricDataCommand,
} = require('@aws-sdk/client-cloudwatch');

const client = new CloudWatchClient({ region: 'us-east-1' });

async function publishMetric(metricName, value, unit = 'Count') {
  const command = new PutMetricDataCommand({
    Namespace: 'MyApp/Production',
    MetricData: [
      {
        MetricName: metricName,
        Value: value,
        Unit: unit,
        Timestamp: new Date(),
        Dimensions: [
          { Name: 'Environment', Value: 'production' },
          { Name: 'Service', Value: 'api' },
        ],
      },
    ],
  });

  await client.send(command);
}

// Usage
await publishMetric('OrdersProcessed', 150, 'Count');
await publishMetric('ResponseTime', 245, 'Milliseconds');
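PutMetricData accepts multiple data points per call, so buffering points and flushing in batches reduces API calls and cost (check current CloudWatch quotas for the per-call limit). A sketch of the batching logic, where `sendBatch` stands in for the `client.send(command)` call above:

```javascript
// Sketch: buffer metric data points and flush them in fixed-size batches.
// `sendBatch` stands in for the PutMetricDataCommand/client.send call above.
function createMetricBuffer(sendBatch, batchSize = 20) {
  const buffer = [];
  return {
    add(metricName, value, unit = 'Count') {
      buffer.push({ MetricName: metricName, Value: value, Unit: unit });
      if (buffer.length >= batchSize) this.flush(); // auto-flush when full
    },
    flush() {
      while (buffer.length > 0) {
        sendBatch(buffer.splice(0, batchSize)); // drain in batch-sized chunks
      }
    },
  };
}
```

Remember to call `flush()` on shutdown (or a timer) so trailing points are not lost.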

CloudWatch Alarms

# Create alarm for high CPU
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu-alarm \
  --alarm-description "Alert when CPU exceeds 80%" \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:alerts

# Create composite alarm (multiple conditions)
aws cloudwatch put-composite-alarm \
  --alarm-name critical-system-alarm \
  --alarm-rule "ALARM(high-cpu-alarm) OR ALARM(high-memory-alarm)" \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:critical-alerts

CloudWatch Logs Insights

Query logs with the purpose-built Logs Insights query language:

# Example query: Find errors in last hour
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 100

Run query via CLI:

# Note: 'date -d' is GNU date; on macOS use: date -u -v-1H +%s
aws logs start-query \
  --log-group-name /aws/lambda/my-function \
  --start-time $(date -u -d '1 hour ago' +%s) \
  --end-time $(date -u +%s) \
  --query-string 'fields @timestamp, @message | filter @message like /ERROR/ | limit 100'

Production Best Practices

CI/CD Best Practices

  1. Automated Testing: Run unit, integration, and security tests in build stage
  2. Manual Approvals: Require approval before production deployments
  3. Blue/Green Deployments: Use for zero-downtime deployments
  4. Rollback Strategy: Enable automatic rollback on deployment failure
  5. Artifact Versioning: Tag artifacts with commit SHA or build number
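Point 5 can be as simple as combining the build number with a short commit SHA; the helper below is illustrative (the name and format are assumptions, not an AWS convention):

```javascript
// Illustrative helper: build an immutable artifact tag from a commit SHA
// and a build number, e.g. for tagging Docker images pushed to ECR.
function artifactTag(commitSha, buildNumber) {
  if (!/^[0-9a-f]{7,40}$/i.test(commitSha)) {
    throw new Error('expected a hex commit SHA');
  }
  return `build-${buildNumber}-${commitSha.slice(0, 7).toLowerCase()}`;
}
```

In CodeBuild, `$CODEBUILD_RESOLVED_SOURCE_VERSION` and `$CODEBUILD_BUILD_NUMBER` supply both inputs.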

X-Ray Best Practices

  1. Sampling: Use rules to control trace volume and cost
  2. Annotations: Index important fields (userId, environment) for filtering
  3. Subsegments: Track external calls (DynamoDB, S3, HTTP) separately
  4. Error Tracking: Always capture errors with subsegment.addError()
  5. Service Map: Use to visualize dependencies and latency

CloudWatch Best Practices

  1. Custom Metrics: Publish business metrics (orders, revenue) alongside system metrics
  2. Log Aggregation: Use log groups with retention policies
  3. Alarms: Set thresholds based on historical data, avoid false positives
  4. Dashboards: Create role-specific dashboards (dev, ops, business)
  5. Metric Filters: Extract metrics from logs (error count, latency percentiles)
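Point 5 describes metric filters, which scan log events for a pattern and emit a metric value per time bucket. The core idea, done client-side purely for illustration:

```javascript
// Sketch of a metric filter's job, done client-side for illustration:
// scan log events for a pattern and emit one count per minute bucket.
function errorCountPerMinute(logEvents, pattern = /ERROR/) {
  const buckets = {};
  for (const { timestamp, message } of logEvents) {
    if (!pattern.test(message)) continue;
    const minute = new Date(timestamp).toISOString().slice(0, 16); // YYYY-MM-DDTHH:MM
    buckets[minute] = (buckets[minute] || 0) + 1;
  }
  return buckets;
}
```

A real metric filter does this inside CloudWatch Logs and publishes the counts as a metric you can alarm on.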

Troubleshooting

Common Issues:

  • CodePipeline stuck: Check IAM permissions for pipeline service role
  • CodeBuild fails: Review CloudWatch Logs, check buildspec.yml syntax
  • CodeDeploy timeout: Verify EC2 instance has CodeDeploy agent running
  • X-Ray missing traces: Ensure X-Ray daemon running, check IAM permissions
  • CloudWatch no data: Verify metric namespace/dimensions, check timestamp

Debug commands:

# Check CodeDeploy agent status
sudo service codedeploy-agent status

# View X-Ray daemon logs
tail -f /var/log/xray/xray.log

# Test CloudWatch metric publish
aws cloudwatch put-metric-data \
  --namespace Test \
  --metric-name TestMetric \
  --value 1

Exam Tips

Key Concepts:

  1. CodePipeline: Stages (Source, Build, Deploy), Artifacts stored in S3
  2. CodeBuild: Uses buildspec.yml, supports Docker, integrates with ECR
  3. CodeDeploy: In-place vs Blue/Green, requires appspec.yml
  4. X-Ray: Trace = full request, Segment = service, Subsegment = operation
  5. CloudWatch: Metrics (5-min default, 1-min detailed), Logs retention, Alarms

Common Scenarios:

  • "Need automated deployments" → CodePipeline + CodeDeploy
  • "Zero-downtime deployment" → Blue/Green with CodeDeploy
  • "Debug microservice latency" → X-Ray distributed tracing
  • "Monitor application errors" → CloudWatch Logs + Alarms
  • "Track business metrics" → CloudWatch custom metrics

Frequently Asked Questions

Q: What is AWS CodePipeline and how does it work?

CodePipeline is a continuous delivery service that automates release workflows through stages. Each stage contains actions like source retrieval from GitHub, building with CodeBuild, testing, and deploying with CodeDeploy. Artifacts pass between stages via S3. Changes trigger pipelines automatically, enabling automated deployments from commit to production.

Q: What is the difference between CodeDeploy in-place and blue/green deployment?

In-place deployment updates the existing instances, which can cause brief downtime while each instance is updated (mitigated by deploying in batches behind a load balancer). Blue/green deployment provisions new instances (green), validates them, then shifts traffic from old instances (blue) via load balancer. Blue/green enables zero-downtime deployments and instant rollback. Use blue/green for production, in-place for cost savings.

Q: How does AWS X-Ray distributed tracing work?

X-Ray traces requests across microservices by instrumenting code with the X-Ray SDK and running the X-Ray daemon. Each service creates segments containing timing and metadata. Subsegments track downstream calls to databases or APIs. X-Ray aggregates traces, builds service maps, and identifies bottlenecks and errors across distributed architectures.

Q: What are CloudWatch custom metrics and when should I use them?

CloudWatch custom metrics track application-specific data like orders processed, cart additions, or business KPIs beyond standard AWS metrics. Publish using PutMetricData API with custom namespaces and dimensions. Use custom metrics for business monitoring, application performance tracking, and creating alarms on domain-specific events.

Q: How does CodeBuild work?

CodeBuild is a fully managed build service that compiles code, runs tests, and produces artifacts using buildspec.yml configuration. It provisions build containers, executes commands in phases (install, pre_build, build, post_build), and stores artifacts in S3. Pay only for build time. Supports Docker and integrates with ECR.

Q: What is X-Ray sampling and why is it important?

X-Ray sampling controls what percentage of requests are traced to manage costs and performance overhead. Sampling rules define rates based on request attributes. The default rule traces 1 request per second plus 5% of additional traffic. Custom rules enable higher sampling for errors or specific services. Proper sampling balances visibility and cost.

Q: How do CloudWatch Alarms work?

CloudWatch Alarms monitor metrics and trigger actions when thresholds are breached. Configure evaluation periods (how many data points), comparison operators (greater than, less than), and threshold values. Alarms transition between OK, ALARM, and INSUFFICIENT_DATA states. Actions include SNS notifications, Auto Scaling policies, or EC2 actions like reboot.
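The evaluation-period logic above can be sketched as: look at the last N datapoints and go to ALARM only when all of them breach the threshold. This simplified version ignores "M out of N" alarms and missing-data treatment:

```javascript
// Sketch of basic alarm evaluation: ALARM when the last `evaluationPeriods`
// datapoints all breach the threshold; OK otherwise; INSUFFICIENT_DATA when
// there are not yet enough datapoints. (Simplified: real alarms also support
// "M out of N" evaluation and configurable missing-data handling.)
function evaluateAlarm(datapoints, threshold, evaluationPeriods) {
  if (datapoints.length < evaluationPeriods) return 'INSUFFICIENT_DATA';
  const recent = datapoints.slice(-evaluationPeriods);
  return recent.every((v) => v > threshold) ? 'ALARM' : 'OK';
}
```

This is why the `high-cpu-alarm` example earlier uses `--evaluation-periods 2`: a single CPU spike above 80% does not page anyone; two consecutive 5-minute periods do.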


Conclusion

AWS CI/CD and monitoring services provide complete automation and observability for cloud applications. CodePipeline orchestrates workflows, CodeBuild compiles artifacts, CodeDeploy executes deployments, X-Ray traces requests across services, and CloudWatch monitors metrics and logs.

Choose CodePipeline for end-to-end automation, blue/green deployments for zero downtime, X-Ray for debugging distributed systems, and CloudWatch for comprehensive monitoring. Understanding deployment patterns, tracing strategies, and alarm configuration is essential for both production reliability and the AWS Certified Developer Associate exam.