This article brings the previous three to a natural conclusion:
- We created a VM Image of Apache Kafka
- Stored the Apache Kafka Image on Google Cloud
- Created a dev subdomain so we can refer to our server using DNS names
Today we will again be using Terraform to:
- Create a static IP address for our Apache Kafka server
- Create a new compute server using our Apache Kafka VM image
- Assign the DNS name kafka.dev.example.com to our server
Monetary Cost
The usual warning applies: using any cloud provider will cost some amount of money. Clean-up instructions are provided at the end.
Setup
- Run through the tools setup from the previous articles mentioned above
- Clone the github repo Create Customer Image For GCE Using Packer
- Clone the github repo Creating DNS Subzone With Terraform
- Clone the github repo Deploy Kafka Server in a Terraform Managed Subdomain
Create a new Google Image if required
In your terminal, change directory to blog2_createCustomImageForGCE_usingPacker. You may first need to run the script installAnsibleGalaxyRoles.sh, as described in the corresponding blog.
Then run,
```shell
packer build kafka-instance-template.json
```
After a few minutes the image should have been built and saved on Google Cloud. For me the image is called kafka-1490689213, and if I check on the Google Cloud Console I can see it there.
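If you prefer to verify from the command line rather than the console, a quick check can be wrapped in a small helper like the one below. This is a sketch of mine, not from the repo; it assumes the gcloud CLI is installed and authenticated against your project.

```shell
# List custom images whose names start with "kafka", e.g. kafka-1490689213.
# Assumes `gcloud` is installed and authenticated for the right project.
list_kafka_images() {
  gcloud compute images list --filter="name ~ ^kafka"
}
```

Calling list_kafka_images should show the freshly built image in the NAME column.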
DNS Subdomain
Let’s assume you’ve run through the relevant blog. Otherwise, change directory to blog3_creating_dns_subzone_with_terraform, run createDnsZone.sh, and update your manual root DNS entries with the new nameservers for the dev subdomain.
Create Kafka Server Using Image With Terraform
Referring to code cloned from the related github repo we will go over the salient points.
The file main.tf is used simply to store the high-level Google project details and credentials.
There are also 2 helper scripts, terraformCreate.sh and terraformDestroy.sh, which either apply or destroy the resources defined in the *.tf scripts.
One thing to note here is that each script refers to 2 different Terraform state files. Each Terraform project manages its own state. We could have had ALL state under one project, but the separation is a design choice. The DNS-managed state should change far less often than a development environment's infrastructure. Further, every time we destroy and recreate the DNS Managed Zone, the nameservers associated with the zone change – meaning we would have to update our root DNS every time, with all the issues that would cause. So in this instance it is useful to separate these concerns at the project level.
Given that choice, it means that in the scripts you can see lines like this
```shell
export TERRAFORM_DNS_STATE_DIR=${HOME}/.terraform/dns
mkdir -p ${TERRAFORM_DNS_STATE_DIR}
export DEV_DNS_NAME=`terraform output -state=${TERRAFORM_DNS_STATE_DIR}/terraform.tfstate dev_dns_name`
export DEV_DNS_ZONE_NAME=`terraform output -state=${TERRAFORM_DNS_STATE_DIR}/terraform.tfstate dev_dns_zone_name`
```
This is simply reading the state of the DNS Terraform project to get the current values of the DNS name and zone name. We need these to assign to our new Kafka Server.
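As an aside, the same cross-project lookup can be expressed inside Terraform itself via the terraform_remote_state data source, rather than shelling out to terraform output. A minimal sketch, assuming the DNS state lives at the local path used above (the data source name dns and the literal path are mine):

```hcl
# Read the DNS project's outputs directly from its local state file.
# The path mirrors ${HOME}/.terraform/dns from the scripts; adjust to taste.
data "terraform_remote_state" "dns" {
  backend = "local"
  config {
    path = "/home/youruser/.terraform/dns/terraform.tfstate"
  }
}

# The DNS project's outputs are then available as, e.g.:
#   ${data.terraform_remote_state.dns.dev_dns_name}
#   ${data.terraform_remote_state.dns.dev_dns_zone_name}
```

This would remove the need to pass the values in as -var flags, at the cost of hard-wiring the state location into the configuration.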
Create a static IP address for our Apache Kafka server
See Terraform documentation for Google Compute Address.
In the file kafka.tf, the first lines we see are,
```hcl
resource "google_compute_address" "kafka-server-address" {
  name = "kafka-server-address"
}
```
As the documentation mentions, this creates a new static IP address for the selected project, which can be overridden by optional arguments. As well as creating resources, Terraform allows inspection of their useful attributes; in the case of a google_compute_address, the main one is address, which accesses the IP address just created. This is used next.
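As an aside, the claimed address could equally be surfaced as an output for easy inspection. A hypothetical snippet – the resource name is from kafka.tf, but the output name is my own:

```hcl
# Expose the reserved static IP so `terraform output` can print it.
output "kafka_static_ip" {
  value = "${google_compute_address.kafka-server-address.address}"
}
```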
Create a new compute server using our Apache Kafka VM image
The next resource in kafka.tf is a "google_compute_instance", where we create a new compute resource and plug in the IP address just claimed above. See the Terraform Google Compute Instance docs for more details.
```hcl
resource "google_compute_instance" "kafka-server" {
  name         = "kafka-server-1"
  machine_type = "n1-standard-1"
  zone         = "europe-west1-b"
  tags         = ["kafka", "messaging"]

  disk {
    image = "kafka-1490689213"
  }

  disk {
    type    = "local-ssd"
    scratch = true
  }

  network_interface {
    network = "default"
    access_config {
      nat_ip = "${google_compute_address.kafka-server-address.address}"
    }
  }

  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
```
A few points of interest here.
- `image = "kafka-1490689213"` – refers to the Kafka image we created earlier. If there were any configuration changes (new topics, say), security updates, etc., we would build a new image and then update the reference here. All in the code: testable, repeatable, and lots of other “ibles” – or “ilities” if you prefer abstract nouns! The point being that the very version you built earlier is enshrined in version control.
- `nat_ip = "${google_compute_address.kafka-server-address.address}"` – sets the external IP address of the new compute server to the static IP resource created previously.
Apply the DNS name kafka.dev.example.com to our server
Next, we add a new DNS record set binding our Kafka server above to the desired DNS name, kafka.dev.example.com.
```hcl
variable "dev_dns_name" {}
variable "dev_dns_zone_name" {}

resource "google_dns_record_set" "kafka" {
  name         = "kafka.${var.dev_dns_name}"
  type         = "A"
  ttl          = 300
  managed_zone = "${var.dev_dns_zone_name}"
  rrdatas      = ["${google_compute_instance.kafka-server.network_interface.0.access_config.0.assigned_nat_ip}"]
}
```
Here,

- `variable "dev_dns_name" {}` – defines a Terraform variable with no default value, meaning it has to be specified. Looking in the terraformCreate.sh file, we see that variables are passed into Terraform using the -var flag. In this instance the variable is the DNS name of the subdomain, namely dev.example.com.
- `variable "dev_dns_zone_name" {}` – as above, except this is the zone name, which is essentially our own name for the zone and has to be unique across the Google Cloud project.
- `name = "kafka.${var.dev_dns_name}"` – uses the variable to append dev.example.com to our server name of choice, kafka, giving our server the full DNS name kafka.dev.example.com.
- `managed_zone = "${var.dev_dns_zone_name}"` – again uses the passed-in variable, this time naming the DNS managed zone we wish to add this record to.
- `rrdatas = ["${google_compute_instance.kafka-server.network_interface.0.access_config.0.assigned_nat_ip}"]` – inspects the compute resource created above and adds its associated IP address to the record set.
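Putting the two halves together, the plumbing in terraformCreate.sh presumably looks something like the sketch below. The DNS state path is the one shown earlier in the article; the dev state path and the function names are my own invention.

```shell
# Pull one output value out of the DNS project's state file.
TERRAFORM_DNS_STATE_DIR="${HOME}/.terraform/dns"

read_dns_output() {
  terraform output -state="${TERRAFORM_DNS_STATE_DIR}/terraform.tfstate" "$1"
}

# Apply the dev project, feeding the DNS values in via -var flags.
# The dev-side state path here is hypothetical.
create_kafka_server() {
  terraform apply \
    -state="${HOME}/.terraform/dev/terraform.tfstate" \
    -var "dev_dns_name=$(read_dns_output dev_dns_name)" \
    -var "dev_dns_zone_name=$(read_dns_output dev_dns_zone_name)"
}
```

The key point is that the DNS project's state is read-only here: the dev project consumes its outputs but never mutates it.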
Output Values
And for completeness, the last two outputs of kafka.tf look like,
```hcl
output "kafka_external_ips" {
  value = "${join(" ", google_compute_instance.kafka-server.*.network_interface.0.access_config.0.assigned_nat_ip)}"
}

output "kafka_internal_ips" {
  value = "${join(" ", google_compute_instance.kafka-server.*.network_interface.0.address)}"
}
```
… which simply output the external and internal IP addresses of the new Kafka server.
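A nice property of outputs is that they can be queried later without touching any resources. A sketch, again wrapped in a helper of my own naming, with the dev state path being an assumption:

```shell
# Print the Kafka server's external IP(s) from the dev project's state.
# The -state path is hypothetical; the output name matches kafka.tf.
kafka_external_ips() {
  terraform output -state="${HOME}/.terraform/dev/terraform.tfstate" kafka_external_ips
}
```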
Running It
For the screenshots here I’m using the domain brownian-motion-driven-dev.com rather than example.com.
Having already run through the Creating A DNS Subdomain With Terraform article, I can take a screenshot of the DNS Managed Zone before we create our new resources …
Then, if we run ./terraformCreate.sh, the DNS record for the new Kafka server shows up, something like …
… where the new record is highlighted.
Even more exciting, I can now do,
```
$ ping -c 2 kafka.dev.brownian-motion-driven-dev.com
PING kafka.dev.brownian-motion-driven-dev.com (35.187.39.138) 56(84) bytes of data.
64 bytes from 138.39.187.35.bc.googleusercontent.com (35.187.39.138): icmp_seq=1 ttl=57 time=25.9 ms
64 bytes from 138.39.187.35.bc.googleusercontent.com (35.187.39.138): icmp_seq=2 ttl=57 time=24.1 ms

--- kafka.dev.brownian-motion-driven-dev.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 24.172/25.067/25.963/0.909 ms
```
… which resolves to the same IP. (I have since destroyed this instance 🙂)
Whoa!
And that’s that!
Over the last few articles we’ve come a long way. We now have a server whose configuration is entirely under version control, from
- the OS it sits on
- the software it runs ( Apache Kafka & Zookeeper )
- the configuration of the application ( topics, logs etc )
- to the DNS name we refer to it by
- and potentially a whole lot more, for example Load Balancers, SSL certs, etc, etc.
The list goes on. It does stop, but man alive that is a great level of control to have over your infrastructure. The whole development environment can be built up in the same Terraform project, giving an entirely repeatable infrastructure definition.
Annihilate Work
Just run terraformDestroy.sh to destroy your work above.
You’ll also want to check out the other articles to destroy the DNS Managed Zone and the Kafka image, as each will likely also have an associated ongoing cost.
If all else fails you can use the Google Cloud Console to obliterate anything you have created – which would of course put your Terraform managed state out of sync.
What Next?
Not entirely sure. But for now we can draw a line under the HashiCorp stack 🙂
We will return to it in future to plug in a Kubernetes cluster for our microservices, but topic-wise we will be taking a few turns until then.