
Bastion-Host in GCP using Terraform - Secure Access to Private Instances

Creating a bastion-host instance, accessing a private instance through it using Terraform, and automating the deployment with GitLab CI/CD.


Introduction: In cloud environments, it's crucial to ensure secure access to private instances while maintaining control over network traffic. One commonly used approach is to set up a bastion host, also known as a jump host or jump server. In this blog post, we will explore how to create a bastion host instance in GCP using Terraform and establish secure access to a private instance. Additionally, we will leverage GitLab CI/CD to automate the infrastructure deployment process.

Prerequisites: Before getting started, make sure you have the following prerequisites in place:

  1. A GCP account with the necessary permissions to create instances and networking resources.

  2. GitLab account and a repository set up to manage your Terraform code.

Repo Structure: To keep the project organized, we will use the following structure within our GitLab repository:

You can simply clone my public repository: GitLab-repo

Now, let's dive into the details of each component.

  1. Terraform Configuration: The main.tf file contains the Terraform configuration for creating a VPC network, subnets, firewall rules, and VM instances for the bastion host and private server. It defines the necessary resources using the Google Cloud Platform provider.

    Let's break down the code to understand each component:

  • VPC Network: The google_compute_network resource creates a VPC network named "bastion-vpc" with auto-created subnetworks disabled.

  • Subnets: Two subnets, "subnet-a" and "subnet-b," are created using the google_compute_subnetwork resource. Each subnet has an IP CIDR range and is associated with the previously defined VPC network.

  • Firewall Rules: Two firewall rules, allow_bastion_host and allow_bastion_host_to_private_server, are defined using the google_compute_firewall resource. These rules allow incoming TCP traffic on port 22 (SSH) from any source IP to the bastion host and from the bastion host to the private server, respectively. The rules are associated with the VPC network using the network attribute and are controlled by tags.

  • VM Instances: Two VM instances, web_server and bastion_host, are created using the google_compute_instance resource. The web_server instance represents the private server and is tagged with "private-server". The bastion_host instance represents the bastion host and is tagged with "bastion-host". Both instances have a machine type, zone, and boot disk configuration. The web_server instance is associated with subnet-a, while the bastion_host instance is associated with subnet-b and has an ephemeral public IP assigned.

# Create the VPC network
resource "google_compute_network" "bastion_vpc" {
  name                    = "bastion-vpc"
  auto_create_subnetworks = false
}

# Create the subnets
resource "google_compute_subnetwork" "subnet_a" {
  name          = "subnet-a"
  ip_cidr_range = "10.0.1.0/24"
  network       = google_compute_network.bastion_vpc.self_link
  region        = var.region
}

resource "google_compute_subnetwork" "subnet_b" {
  name          = "subnet-b"
  ip_cidr_range = "10.0.2.0/24"
  network       = google_compute_network.bastion_vpc.self_link
  region        = var.region
}

# Create firewall rules
resource "google_compute_firewall" "allow_bastion_host" {
  name    = "allow-bastion-host"
  network = google_compute_network.bastion_vpc.self_link

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["bastion-host"]
}

resource "google_compute_firewall" "allow_bastion_host_to_private_server" {
  name    = "allow-bastionhost-to-privateserver"
  network = google_compute_network.bastion_vpc.self_link

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_tags   = ["bastion-host"]
  target_tags   = ["private-server"]
}
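Note that the allow_bastion_host rule above accepts SSH from any address (0.0.0.0/0). For a tighter setup you may want to restrict source_ranges to your own address; a minimal sketch, assuming a hypothetical admin_cidr variable (not part of the original repo):

```hcl
# Hypothetical variable: the CIDR range allowed to SSH into the bastion host
variable "admin_cidr" {
  description = "CIDR block allowed to SSH into the bastion host, e.g. your office IP."
  default     = "203.0.113.10/32" // Replace with your own IP range
}
```

Then, in the allow_bastion_host rule, replace `source_ranges = ["0.0.0.0/0"]` with `source_ranges = [var.admin_cidr]` so only that range can reach port 22 on the bastion.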

# Create the VM instances
resource "google_compute_instance" "web_server" {
  name         = "web-server"
  machine_type = var.machine_type
  zone         = var.zone
  tags         = ["private-server"]

  boot_disk {
    initialize_params {
      image = var.image
    }
  }

  network_interface {
    network     = google_compute_network.bastion_vpc.self_link
    subnetwork  = google_compute_subnetwork.subnet_a.self_link
      # No nat_ip specified to prevent the creation of an external IP
  }
}


resource "google_compute_instance" "bastion_host" {
  name         = "bastion-host"
  machine_type = var.machine_type
  zone         = var.zone
  tags         = ["bastion-host"]

  boot_disk {
    initialize_params {
      image = var.image
    }
  }

  network_interface {
    network = google_compute_network.bastion_vpc.self_link
    subnetwork = google_compute_subnetwork.subnet_b.self_link
    access_config {
      // Ephemeral public IP
    }
  }
}
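To find the bastion's public IP after `terraform apply` without opening the console, you could add an outputs.tf file; a minimal sketch (the output names are my own, not part of the original repo):

```hcl
# outputs.tf - expose the addresses needed for the SSH hop
output "bastion_public_ip" {
  description = "Ephemeral external IP of the bastion host."
  value       = google_compute_instance.bastion_host.network_interface[0].access_config[0].nat_ip
}

output "web_server_internal_ip" {
  description = "Internal IP of the private web server."
  value       = google_compute_instance.web_server.network_interface[0].network_ip
}
```

After a successful apply, `terraform output bastion_public_ip` prints the address to SSH into.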
  2. Provider Configuration: The provider.tf file sets up the Google Cloud provider for Terraform with the specified project ID, region, and zone. The required_providers block pins the version of the Google provider. The backend "gcs" block configures Google Cloud Storage (GCS) as the backend, storing the Terraform state in the specified bucket.

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.58.0"
    }
  }

  backend "gcs" {
    bucket  = "your-bucket-name" // Replace with your backend bucket
    prefix  = "terraform/state"
  }
}

provider "google" {
  project     = "your_project_id" // Replace with your project_id
  region      = var.region
  zone        = var.zone
}
  3. Variables: The variables.tf file defines the variables used within the Terraform configuration, such as project ID, region, instance types, and networking details.

variable "project_id" {
  description = "The project ID where the resources will be created."
  default     = "your_project_id" // Replace with your project_id
}

variable "region" {
  description = "The region where the resources will be created."
  default     = "us-central1" // Replace with your desired region
}

variable "zone" {
  description = "The zone where the resources will be created."
  default     = "us-central1-c" // Replace with your desired zone
}

variable "machine_type" {
  description = "The machine type for the instances."
  default     = "n1-standard-1"
}

variable "image" {
  description = "The image for the instances."
  default     = "ubuntu-os-cloud/ubuntu-1804-lts"
}
  4. terraform.tfvars: The terraform.tfvars file contains the actual values for these variables.

project_id = "your_project_id" // Replace with your project_id
region = "us-central1"
zone = "us-central1-c"
machine_type = "n1-standard-1"
image = "ubuntu-os-cloud/ubuntu-1804-lts"
  5. GitLab CI/CD Configuration: The .gitlab-ci.yml file sets up the CI/CD pipeline for automating the infrastructure deployment process. It defines stages, jobs, and associated scripts to perform tasks such as validating, planning, and applying Terraform changes.

---
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH != "main" && $CI_PIPELINE_SOURCE != "merge_request_event"
      when: never
    - when: always

variables:
  TF_DIR: ${CI_PROJECT_DIR}/src
  STATE_NAME: "gitlab-terraform-gcp-tf"

stages:
  - validate
  - plan
  - apply
  - destroy

image:
  name: hashicorp/terraform:light
  entrypoint: [""]
  
before_script:
  - terraform --version
  - cd ${TF_DIR}
  - terraform init -reconfigure

validate:
  stage: validate
  script:
    - terraform validate
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
    - ${TF_DIR}/.terraform
    policy: pull-push

plan:
  stage: plan
  script:
    - terraform plan 
  dependencies:
    - validate
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
    - ${TF_DIR}/.terraform
    policy: pull


apply:
  stage: apply
  script:
    - terraform apply -auto-approve
  dependencies:
    - plan
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
    - ${TF_DIR}/.terraform
    policy: pull

destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  dependencies:
    - plan
    - apply
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
    - ${TF_DIR}/.terraform
    policy: pull
  when: manual

Make sure to replace the STATE_NAME variable with a name of your choice.
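One optional hardening of this pipeline is to save the plan as an artifact and apply exactly that plan, so the apply job cannot drift from what the plan showed. A hedged sketch of the changed jobs (the artifact path assumes the TF_DIR layout above):

```yaml
plan:
  stage: plan
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - ${TF_DIR}/tfplan

apply:
  stage: apply
  script:
    - terraform apply -auto-approve tfplan
  dependencies:
    - plan
```

With this change, `terraform apply` consumes the saved tfplan file instead of re-planning, and GitLab passes the artifact from the plan job to the apply job automatically.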

  6. module.tf: In the module.tf file, a Terraform module named "bastion" is defined. The module is sourced from the local directory "./src", indicating that the module configuration is located in the "src" directory within the same project.

module "bastion" {
  source = "./src"
}

Implementation Steps: Now, let's walk through the implementation steps to create the bastion host and establish secure access to private instances using Terraform and GitLab CI/CD.

  1. Set up GitLab Repository: Create a new repository on GitLab or use an existing one to host your Terraform code. Alternatively, you can clone my GitLab repository to your own.

  2. Configure GCP Provider: In the provider.tf file, configure the GCP provider by specifying your project ID, and set your GCS backend bucket name.

  3. Define terraform.tfvars: In the terraform.tfvars file, replace the project_id, region, zone, machine_type, and image values with your own.

  4. Set Secrets in GitLab: In your GitLab repository, navigate to Settings > CI/CD > Variables. Add a new variable named "GOOGLE_CREDENTIALS" and paste the contents of your Google Cloud service account key file into the value field. This securely provides the necessary credentials for Terraform to authenticate with GCP.


    Note: Remove any extra white space from the key content before pasting it.

  5. Run the Pipeline: Commit and push your Terraform code to the GitLab repository. This will trigger the GitLab CI/CD pipeline. Monitor the pipeline execution in the CI/CD section of your repository to ensure it completes successfully.

  6. Check Resource Creation in GCP: After the pipeline is finished, verify the creation of resources in the Google Cloud Platform (GCP) Console. Make sure that the bastion host, private instances, and associated networking resources are provisioned accurately.

    When you try to SSH directly into the "web-server" instance, the connection will fail.

    That is because the "web-server" has no external IP and accepts SSH only from the "bastion-host".

    You should be able to access the bastion host instance via SSH and then SSH into your web server (private server) using the command:

    ssh [username]@[web_server_internal_ip]
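Rather than SSHing twice (and copying your private key onto the bastion), you can hop through the bastion in one command with OpenSSH's ProxyJump. A sketch of a local ~/.ssh/config, with the host names, user, and IPs as placeholders:

```
# ~/.ssh/config - reach the private server through the bastion in one hop
Host bastion
  HostName [bastion_public_ip]
  User [username]

Host web-server
  HostName [web_server_internal_ip]
  User [username]
  ProxyJump bastion
```

With this in place, `ssh web-server` tunnels through the bastion automatically; the equivalent one-off command is `ssh -J [username]@[bastion_public_ip] [username]@[web_server_internal_ip]`.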

Conclusion: In this blog post, we explored how to create a bastion host instance in GCP using Terraform and establish secure access to private instances. We also leveraged GitLab CI/CD to automate the infrastructure deployment process. By following the steps outlined above, you can ensure secure access to your private instances while maintaining control over network traffic.

Remember to regularly update your Terraform code and pipeline to reflect any changes in your infrastructure requirements. With the combined power of Terraform and GitLab CI/CD, you can efficiently manage and automate your GCP infrastructure.

References: GitLab-repo