AWS Project:
Deploy a Dynamic Website on AWS (+ Terraform)

This project builds a dynamic website in the AWS Management Console on a three-tier VPC architecture with public and private subnets, an Internet Gateway (IGW), and NAT Gateways to provide a highly secure network environment. A MySQL RDS instance, managed through MySQL Workbench, provides a scalable and highly available database. EC2 instances, an Application Load Balancer (ALB), an Auto Scaling Group (ASG), and Route 53 keep the website highly available and scalable, while S3 handles storage and backup. IAM roles manage access to resources, and an AMI is used to create and launch instances. Finally, SNS and AWS Certificate Manager handle notifications and SSL/TLS certificates, securing the website. Terraform code for building the same site is also provided. Project provided by AOSNote.

Dynamic Website Terraform Code

Step 1: Build a Three-Tier AWS Network VPC from Scratch

Building a three-tier AWS network VPC from scratch involves creating a Virtual Private Cloud (VPC) in the AWS Management Console, setting up subnets for each tier (public, private app, and private data), and configuring route tables and security groups so traffic flows securely between the tiers. Instances can then be launched in each tier, with load balancers added for high availability and scalability.

1. Create a new VPC using the CIDR range from our reference architecture.

2. Enable DNS hostnames. Enabling DNS hostnames in an AWS VPC allows instances within the VPC to have DNS names associated with their IP addresses.

3. Create an Internet Gateway. The Internet Gateway is crucial for enabling internet traffic to enter and exit a VPC, allowing instances, NAT gateways, etc. within public subnets to have a public IP address and be directly accessible from the internet.

4. Attach the Internet Gateway to our VPC. We can only attach one Internet Gateway to a VPC at a time.

5. Create two public subnets in two different availability zones for high availability. We will create these with different CIDR blocks, since subnets cannot have overlapping CIDR blocks.

6. Enable auto-assign public IPv4 address for both subnets.

7. Create a new public route table. When a new VPC is created, a Main route table is automatically created and associated with all subnets within the VPC. We will add a public route to our own public route table and associate our previously made public subnets with it.

8. Add a public route to the table. We'll do this by adding a target for our Internet Gateway.

9. Associate our previously made public subnets with the public route table.

10. Lastly, we will create four private subnets that will host our apps and databases. Two in Availability Zone 1 (AZ1), and two in AZ2. This will leave us with six subnets.

In a VPC, subnets can be designated as public or private based on their route table configuration. Public subnets are associated with a route table that has a route to an internet gateway, and private subnets are associated with a route table that does not have a route to an internet gateway.

Subnets not associated with a route table default to the Main route table, which is private by default.

Terraform Code: VPC
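
The Step 1 resources map onto Terraform roughly as follows. This is a minimal sketch: the CIDR blocks, AZ names, and resource names are illustrative placeholders, not the project's actual values, and only one public subnet is shown — the second public subnet and the four private subnets follow the same pattern in the other AZ.

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true # step 2

  tags = { Name = "dynamic-website-vpc" }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id # attaching the IGW (step 4)
}

resource "aws_subnet" "public_az1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true # auto-assign public IPv4 (step 6)
}

# The 0.0.0.0/0 route to the Internet Gateway is what makes the
# subnets associated with this route table "public".
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public_az1" {
  subnet_id      = aws_subnet.public_az1.id
  route_table_id = aws_route_table.public.id
}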

Step 2: Create NAT Gateways

A NAT gateway allows instances in private subnets to securely access the internet or other AWS services, without needing public IP addresses or self-managed NAT.

1. Create a NAT gateway for AZ1 and allocate an Elastic IP. The Elastic IP gives the NAT gateway a static public address that does not change even if the gateway is recreated, making it easier to maintain connectivity and security rules for external communication.

2. Create a private route table.

3. Add a route to our NAT gateway to route traffic to the internet.

4. Associate our private web and data subnets in AZ1 to the table.

5. Replicate steps 1-4, but this time for the subnets in AZ2.

Terraform Code: NAT Gateways
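
A minimal sketch of the AZ1 half of this step (AZ2 mirrors it); resource names are illustrative, and the subnet references assume the VPC resources were declared earlier:

```hcl
resource "aws_eip" "nat_az1" {
  domain = "vpc" # use `vpc = true` on AWS provider versions before v5
}

# The NAT gateway itself lives in a *public* subnet so it can reach the internet.
resource "aws_nat_gateway" "az1" {
  allocation_id = aws_eip.nat_az1.id
  subnet_id     = aws_subnet.public_az1.id
}

# Private route table: 0.0.0.0/0 goes to the NAT gateway, not the IGW.
resource "aws_route_table" "private_az1" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.az1.id
  }
}

resource "aws_route_table_association" "private_app_az1" {
  subnet_id      = aws_subnet.private_app_az1.id
  route_table_id = aws_route_table.private_az1.id
}
```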

Step 3: Create Security Groups

Security groups control the inbound and outbound traffic for resources in a VPC. They use rules to allow or block traffic based on protocol, IP addresses, and ports. We will create four security groups to control inbound traffic to our resources.

1. Create the application load balancer security group. Inbound rules will allow access from HTTP (Port 80) and HTTPS (Port 443). We will add this security group to the application load balancer we create.

2. Create the SSH security group. Inbound rules will allow access from SSH (Port 22). We will limit this to only our IP address.

3. Create the webserver security group. Inbound rules will allow access from HTTP (Port 80), HTTPS (Port 443), and SSH (Port 22), and the source is our ALB and SSH security groups. We will add this security group to our EC2 instances.

4. Create the database security group. Inbound rules will allow access from port 3306, and the source will be our webserver security group. We will add this to our database instance.

Terraform Code: Security Groups
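
Two of the four groups can be sketched as below; the SSH and database groups follow the same pattern. Names are illustrative.

```hcl
resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS from anywhere"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "webserver" {
  name   = "webserver-sg"
  vpc_id = aws_vpc.main.id

  # Referencing the ALB group as the source means only traffic coming
  # through the load balancer can reach the webservers on port 80.
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }
}
```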

Step 4: Launch a MySQL RDS Instance

Amazon Relational Database Service (Amazon RDS) is a fully managed database service provided by AWS that makes it easy to set up, operate, and scale a relational database in the cloud. With Amazon RDS, you can choose from six popular relational database engines: MySQL, PostgreSQL, Oracle, SQL Server, MariaDB, and Amazon Aurora.

In this project, we will use MySQL as our database engine.

1. Create the subnet group. This will allow us to specify which subnets we want to create our RDS database in (Private Data Subnet AZ1 and Private Data Subnet AZ2).

2. Create the RDS database. We will create the master DB in Availability Zone 2, with a standby in AZ1.

Terraform Code: RDS
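
A minimal sketch of the two RDS resources; instance sizing and names are illustrative, and the credentials assume a `var.db_password` variable rather than a hard-coded value.

```hcl
resource "aws_db_subnet_group" "rds" {
  name       = "rds-subnet-group"
  subnet_ids = [aws_subnet.private_data_az1.id, aws_subnet.private_data_az2.id]
}

resource "aws_db_instance" "mysql" {
  identifier             = "dynamic-website-db"
  engine                 = "mysql"
  engine_version         = "5.7"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  username               = "admin"
  password               = var.db_password # never hard-code credentials
  db_subnet_group_name   = aws_db_subnet_group.rds.name
  vpc_security_group_ids = [aws_security_group.database.id]

  # multi_az provisions the synchronous standby in the other AZ
  # automatically; RDS does not let you pin availability_zone when
  # multi_az is enabled.
  multi_az            = true
  skip_final_snapshot = true
}
```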

Step 5: Create an S3 Bucket and Upload a File

Amazon S3 is a cloud-based storage service that provides users with scalable, reliable, and secure object storage capabilities. We will use it to upload and store our application webfiles.

1. Create the S3 bucket that will hold our webfiles.

2. Upload the webfiles into our newly created bucket.

3. Create the S3 bucket that will hold our site's dummy data.

4. Upload the dummy data into our newly created bucket.
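
The buckets and uploads above could be declared in Terraform roughly as follows. The bucket names are the ones used later in the shell commands; the local `source` paths are illustrative, and note that S3 bucket names must be globally unique.

```hcl
resource "aws_s3_bucket" "webfiles" {
  bucket = "jordancampbell-webserver-files"
}

resource "aws_s3_object" "app_zip" {
  bucket = aws_s3_bucket.webfiles.id
  key    = "FleetCart.zip"
  source = "FleetCart.zip" # path on the machine running Terraform
}

resource "aws_s3_bucket" "dummy_data" {
  bucket = "jordancampbell-dummy-data-files"
}

resource "aws_s3_object" "dummy_zip" {
  bucket = aws_s3_bucket.dummy_data.id
  key    = "dummy.zip"
  source = "dummy.zip"
}
```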

Step 6: Create an IAM Role for the S3 Policy

An IAM role can be used to allow users or services to assume specific permissions to access S3 buckets. It helps to ensure that only authorized users or services can access the S3 bucket, and data remains secure.

1. Create an IAM role (called 'S3-Role') and attach the AmazonS3FullAccess permissions policy. This will allow our EC2 instance to download the file inside the bucket.

This role will be attached to our Setup Server instance in the next step.
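
In Terraform the same role looks roughly like this. The role name matches the console step; in Terraform, an EC2 instance picks the role up through an instance profile, so one is declared alongside the role.

```hcl
resource "aws_iam_role" "s3_role" {
  name = "S3-Role"

  # Trust policy: only the EC2 service may assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "s3_full_access" {
  role       = aws_iam_role.s3_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

# The instance profile is what actually gets attached to the EC2 instance.
resource "aws_iam_instance_profile" "s3_profile" {
  name = "S3-Role"
  role = aws_iam_role.s3_role.name
}
```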

Step 7: Deploy the Setup Server

We will now launch the Setup Server EC2 instance. We will utilize this instance to install and configure our application. We'll deploy it in the public subnet for ease of installation.

1. Create a key pair. We will use this to SSH into our EC2 instance.

2. Change permissions on our key. This will set the read-only permission for the owner of the file, and no permissions for anyone else.

3. Launch our setup EC2 instance with our three security groups: webserver security group, ALB security group, and SSH security group.

4. SSH into our Setup Server.

5. Update the EC2 instance.

sudo su
sudo yum update -y

6. Install Apache.

sudo yum install -y httpd httpd-tools mod_ssl
sudo systemctl enable httpd
sudo systemctl start httpd

7. Install PHP 7.4.

sudo amazon-linux-extras enable php7.4
sudo yum clean metadata
sudo yum install php php-common php-pear -y
sudo yum install php-{cgi,curl,mbstring,gd,mysqlnd,gettext,json,xml,fpm,intl,zip} -y

8. Install MySQL 5.7.

sudo rpm -Uvh https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm
sudo rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
sudo yum install mysql-community-server -y
sudo systemctl enable mysqld
sudo systemctl start mysqld

9. Set permissions.

sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
sudo find /var/www -type f -exec sudo chmod 0664 {} \;

10. Download the FleetCart zip folder from S3 to the HTML directory on the EC2 instance.

sudo aws s3 sync s3://jordancampbell-webserver-files /var/www/html

11. Unzip the FleetCart zip folder.

cd /var/www/html
sudo unzip FleetCart.zip

12. Move all the files and folders from the FleetCart directory to the HTML directory.

sudo mv FleetCart/* /var/www/html

13. Move all the hidden files from the FleetCart directory to the HTML directory.

sudo mv FleetCart/.DS_Store /var/www/html
sudo mv FleetCart/.editorconfig /var/www/html
sudo mv FleetCart/.env /var/www/html
sudo mv FleetCart/.env.example /var/www/html
sudo mv FleetCart/.eslintignore /var/www/html
sudo mv FleetCart/.eslintrc /var/www/html
sudo mv FleetCart/.gitignore /var/www/html
sudo mv FleetCart/.htaccess /var/www/html
sudo mv FleetCart/.npmrc /var/www/html
sudo mv FleetCart/.php_cs /var/www/html
sudo mv FleetCart/.rtlcssrc /var/www/html

14. Delete the FleetCart and FleetCart.zip folder.

sudo rm -rf FleetCart FleetCart.zip

15. Enable mod_rewrite on EC2 Linux, add Apache to group, and then restart the server.

sudo sed -i '/<Directory "\/var\/www\/html">/,/<\/Directory>/ s/AllowOverride None/AllowOverride All/' /etc/httpd/conf/httpd.conf
sudo chown apache:apache -R /var/www/html
sudo service httpd restart

16. Check if we can access our website through the browser.

17. Connect the EC2 instance to our RDS database.

18. We have successfully connected to our (currently empty) website.

Step 8: Import the Dummy Data for the Website

We will use MySQL Workbench and create a new EC2 instance to import the SQL data for our application into the RDS database. To connect to an AWS RDS instance using MySQL Workbench, we can create a new connection using the "Standard TCP/IP over SSH" method, specifying the appropriate credentials and SSH settings. Once connected, we can manage and administer the RDS instance using MySQL Workbench's graphical interface.

1. Create a new key pair we will use to SSH into a Dummy Server.

2. Create the Dummy EC2 instance.

3. Create a Dummy security group to allow our RDS instance to connect to the Dummy instance.

4. Add the Dummy security group to the RDS instance.

5. Connect to the database.

6. Import the SQL file for our website into our RDS database.

7. We no longer need the Dummy Server or any of the resources we used with it, so we will delete them now.

8. Run the following commands on our Setup Server to add the dummy data to our website. This script downloads files from our S3 bucket, unzips them, moves them to the web server's public directory, and performs some cleanup tasks to ensure the web application is in a consistent state.

sudo su
sudo aws s3 sync s3://jordancampbell-dummy-data-files /home/ec2-user
cd /home/ec2-user
sudo unzip dummy.zip
sudo mv dummy/* /var/www/html/public
sudo mv -f dummy/.DS_Store /var/www/html/public
sudo rm -rf /var/www/html/storage/framework/cache/data/cache
sudo rm -rf dummy dummy.zip
sudo chown apache:apache -R /var/www/html
sudo service httpd restart

Step 9: Create an AMI

Now that we've finished using our Setup Server instance to install and configure our web application, we can stop it and create an Amazon Machine Image (AMI) from it. This AMI can be used to launch new instances with the same configuration and settings as the original instance, making it easier to replicate the web application environment.

1. Stop the Setup Server instance.

2. Create the AMI.

Step 10: Create an Application Load Balancer

An Application Load Balancer (ALB) is a service that routes incoming traffic to multiple targets based on the content of the request, such as the URL or HTTP header. ALBs operate at the application layer (Layer 7) and support features like SSL/TLS termination, health checks, and content-based routing.

1. Launch an EC2 instance in each of the private app subnets (AZ1 and AZ2).

2. Create a target group. To access the website we installed on the EC2 instances, we will put them in the target group to allow the ALB to route traffic to them.

Make sure to add success status codes 301 and 302 to the health check, so that the HTTP-to-HTTPS redirects we set up later are treated as healthy responses.

3. Create the application load balancer.

Terraform Code: ALB
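
A minimal sketch of the ALB and its target group; names are illustrative, and the subnet and security-group references assume the resources declared in the earlier steps.

```hcl
resource "aws_lb" "app" {
  name               = "dynamic-website-alb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.public_az1.id, aws_subnet.public_az2.id]
}

resource "aws_lb_target_group" "web" {
  name     = "dynamic-website-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path    = "/"
    matcher = "200,301,302" # treat HTTP-to-HTTPS redirects as healthy
  }
}
```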

Step 11: Register a New Domain Name in Route 53 and Create a Record Set

We will register a domain name for our website and use Route 53, AWS's DNS service, to route users to it, ensuring people can access the website easily and reliably.

1. Create a domain name.

2. Create a record. Creating a Route 53 alias record for an Application Load Balancer involves mapping the website or application's domain name to the ALB. This directs traffic to the targets behind the ALB. The user must specify the DNS name of the ALB and routing policy when creating the alias record.

Terraform Code: Route 53
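
The alias record can be sketched as below; "example.com" stands in for the real registered domain, and the hosted zone (created automatically when the domain was registered) is read via a data source.

```hcl
data "aws_route53_zone" "main" {
  name = "example.com" # placeholder for the registered domain
}

resource "aws_route53_record" "apex" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "example.com"
  type    = "A"

  # Alias records map the domain to the ALB's DNS name and
  # the ALB's own hosted zone ID.
  alias {
    name                   = aws_lb.app.dns_name
    zone_id                = aws_lb.app.zone_id
    evaluate_target_health = true
  }
}
```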

Step 12: Register for an SSL Certificate in AWS Certificate Manager

We will use an SSL Certificate to encrypt all communications between the web browser and our webservers. This is also referred to as encryption in transit.

Currently we are not secure.

1. Create a public SSL Certificate in AWS Certificate Manager.

2. Create DNS records in Amazon Route 53. This is a validation process designed to ensure that only the domain owner can obtain the SSL certificate.

Our certificate is good to go.
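
Steps 1 and 2 can be sketched in Terraform as a DNS-validated certificate plus the validation records; "example.com" is a placeholder domain, and the zone lookup assumes the Route 53 hosted zone already exists.

```hcl
resource "aws_acm_certificate" "cert" {
  domain_name               = "example.com"
  subject_alternative_names = ["*.example.com"]
  validation_method         = "DNS"
}

# One validation CNAME per domain, created in the hosted zone.
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options :
    dvo.domain_name => dvo
  }

  zone_id         = data.aws_route53_zone.main.zone_id
  name            = each.value.resource_record_name
  type            = each.value.resource_record_type
  records         = [each.value.resource_record_value]
  ttl             = 60
  allow_overwrite = true
}

# Blocks until ACM sees the DNS records and issues the certificate.
resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for r in aws_route53_record.cert_validation : r.fqdn]
}
```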

Step 13: Create an HTTPS Listener

Using the SSL Certificate we just registered, we will secure our website. We will create an HTTPS (SSL) listener for our ALB. This involves configuring the ALB to handle SSL/TLS encryption for incoming requests and requires associating the SSL certificate we created with the ALB's listener configuration. Once configured, the ALB can decrypt and forward incoming HTTPS requests to the appropriate backend target group.

1. Add listener.

2. Redirect traffic to the HTTPS listener from the HTTP listener.

3. Check that our website is now secure.
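
The two listeners above can be sketched as follows; the SSL policy name is one of AWS's predefined ELB security policies, and the certificate and target-group references assume the resources from the earlier steps.

```hcl
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.app.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = aws_acm_certificate.cert.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}

# Port 80 no longer forwards; it 301-redirects everything to HTTPS.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
```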

Our website doesn't look quite right because we need to update the settings for our domain name in the website's configuration file. We must next SSH into our EC2 instance and update the configuration file.

Step 14: SSH into an Instance in the Private Subnet

We will now SSH into an EC2 instance in our private subnet to update the configuration file so that our website can load properly. To do so, we'll first have to launch a bastion host.

A bastion host is a dedicated server instance used to securely connect to other servers within a private network. It provides an additional layer of security by acting as a gateway that provides access control for remote connections.

1. Create the bastion host in the public subnet.

2. Run the commands that will allow us to SSH from the bastion host to any instance in the private subnet (for example, by adding our private key to the SSH agent and forwarding the agent when connecting).

3. SSH into our bastion host.

4. SSH into the private webserver in AZ1. We will do this by using the instance's private IP address.

5. Edit the .env file with our updated app url.

6. After restarting the Apache server, our website should properly function again.

Step 15: Create Another AMI

Since we made changes to the configuration files on our instance, we will create a new AMI to reflect those changes.

1. Create a new AMI.

2. Check that the AMI has a snapshot. A snapshot is a point-in-time backup of the instance's EBS volume that contains all the information needed to launch a new instance with the same configuration as the original. Snapshot data is stored in S3.

Step 16: Create an Auto Scaling Group

An Auto Scaling Group (ASG) is a group of EC2 instances that can automatically scale up or down based on demand. This helps maintain the required number of instances for the application to handle variable traffic loads without downtime or performance degradation.

1. Create a launch template. This contains the configurations of our AMI that the ASG will use to launch new instances in the private app subnets.

2. Create the auto scaling group.

3. Add an SNS topic to the ASG. This will notify subscribers (we'll get notified via email) when events are triggered, such as when instances launch, terminate, fail to launch, or fail to terminate.

We now have two instances running in our ASG, one in AZ1 and one in AZ2.

4. Check that these instances have been added to our target group.

Our dynamic website is complete and running!

Terraform Code: SNS

Terraform Code: ASG
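
The launch template, ASG, and SNS notification fit together roughly as below. The AMI ID is a placeholder for the Step 15 AMI, sizing is illustrative, and the subnet, security-group, and target-group references assume the resources declared in the earlier steps.

```hcl
resource "aws_launch_template" "web" {
  name_prefix            = "dynamic-website-"
  image_id               = "ami-0123456789abcdef0" # placeholder for the Step 15 AMI
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.webserver.id]
}

resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.private_app_az1.id, aws_subnet.private_app_az2.id]
  target_group_arns   = [aws_lb_target_group.web.arn]
  health_check_type   = "ELB" # replace instances the ALB reports unhealthy

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}

resource "aws_sns_topic" "asg_events" {
  name = "asg-notifications"
}

# Publish lifecycle events (launch/terminate and their failures) to the topic.
resource "aws_autoscaling_notification" "asg" {
  group_names = [aws_autoscaling_group.web.name]
  topic_arn   = aws_sns_topic.asg_events.arn

  notifications = [
    "autoscaling:EC2_INSTANCE_LAUNCH",
    "autoscaling:EC2_INSTANCE_TERMINATE",
    "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
    "autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
  ]
}
```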

Step 17: Terminate Resources

To complete this project, we will delete the resources we created to avoid unwanted charges. This includes our ASG, launch templates, ALB, target group, RDS, bastion host, security groups, NAT gateways, VPC, elastic IPs, S3 buckets, and record sets.

We will not delete our newest AMI version, nor our database snapshot. We will use these when we want to launch our app using Terraform.

Project Complete!