AWS 101 – Series 1 of N – Monitoring and Maintenance – How to check if an Amazon instance is a valid approved golden image




  • Problem Detection 
    • You can use the CLI as well as the Console to perform this check
    • Using CLI
      •  Save the below script as
      • Change the mode – chmod u+x
      • The AWS command ‘aws ec2 describe-instances’ returns all instances in a specific region; the output is filtered down to each instance's ImageId
      • The array arrImages holds all the ImageId values
      • Run a for loop with the command ‘aws ec2 describe-images’ to get each image's owner
      • If the image owner alias is not “self”, the image is not a valid approved image built from your own customized base image

        #!/bin/bash
        # regions to check
        arrRegions=("us-east-1" "us-east-2")

        # image list
        for regionId in "${arrRegions[@]}"; do
          echo "Instances for $regionId region:"
          arrImages=$(aws ec2 describe-instances --region "$regionId" --output text --query 'Reservations[*].Instances[*].ImageId')

          # get each image's owner alias ("self" means your own customized image)
          for imgId in $arrImages; do
            aws ec2 describe-images --region "$regionId" --output text --image-ids "$imgId" --query 'Images[*].ImageOwnerAlias'
          done
        done

    • Using Console
      • Go to the EC2 Dashboard, select the Instances tab and then select a specific instance
      • In the Description tab, click on the AMI ID link and select the AMI ID from the pop-up. Copy the AMI ID.
      • Now go to the AMIs tab in the Images section of the EC2 Dashboard
      • Select “Owned by me” from the drop-down and filter by AMI ID. Paste the copied AMI ID into the filter.
      • If no rows are returned, it means the images are either from the Marketplace or from Amazon and are not “self” customized, approved, valid images.
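The owner-alias check above can also be scripted. Here is a minimal sketch; the helper function name is our own, and the commented-out aws call assumes a configured AWS CLI:

```shell
# Hypothetical helper: decide from an AMI's owner alias whether the image
# counts as a self-owned (approved) golden image.
is_approved_image() {
  local owner_alias="$1"
  if [ "$owner_alias" = "self" ]; then
    echo "approved"
  else
    echo "not-approved"
  fi
}

# In practice the alias would come from the CLI, e.g. (not run here):
# owner=$(aws ec2 describe-images --image-ids "$imgId" \
#         --query 'Images[0].ImageOwnerAlias' --output text)
# is_approved_image "$owner"
```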


  • Problem Remedy – to continue

Launch a LAMP (Linux-Apache-MariaDB-PHP-Python) server in AWS

  • First of all, launch an EC2 Linux instance in a public subnet and connect to the server using PuTTY or MobaXterm – (For details on how to launch and connect to a public instance, read this – )
  • Once connected to the server, run the following commands:
  • To install, start and configure Apache, use the following commands:

echo "Installing Apache..."
yum -y update
yum -y install httpd
service httpd start
chkconfig httpd on
echo "Installed Apache..."
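Once httpd is running, a quick local check is possible. This is a sketch (it assumes curl is installed; for remote checks the security group must also allow HTTP):

```shell
# Print only the HTTP status code for a URL. 200 means Apache is answering;
# a fresh install may return 403 until content is added to the docroot.
http_status() {
  curl -s -o /dev/null -w "%{http_code}" "$1"
}

# Example (run on the instance itself):
# http_status http://localhost/
```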

  • To install, start and configure MariaDB, use the following commands:

echo "Installing MariaDB..."
# package/service names below are for Amazon Linux 2; on older Amazon Linux
# the equivalents are mysql-server / mysqld
yum -y install mariadb-server
service mariadb start
chkconfig mariadb on
echo "Installed MariaDB..."

echo "Logging in to MariaDB..."
mysql -u root -p

  • To test MariaDB, log in to it using the following commands:
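As a sketch of such a test (the database name lamp_test is just an example of ours), you can put a few sanity-check statements in a file and feed them to the mysql client:

```shell
# Write a small sanity-check script for MariaDB: list databases, create a
# throwaway database, confirm it exists, then drop it again.
cat <<'SQL' > /tmp/db-check.sql
SHOW DATABASES;
CREATE DATABASE IF NOT EXISTS lamp_test;
SHOW DATABASES LIKE 'lamp_test';
DROP DATABASE lamp_test;
SQL

# Run it on the server (prompts for the root password):
# mysql -u root -p < /tmp/db-check.sql
```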


  • To install, start and configure PHP, use the following commands:

echo "Installing PHP..."
yum -y install php php-mysql
service httpd restart
echo "Installed PHP..."
echo "<?php phpinfo(); ?>" > /var/www/html/info.php

echo "<html><body><h1>Server ready</h1></body></html>" > /var/www/html/info.html
echo "PHP Installation complete"
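The two echo lines that create the test pages can be wrapped so they are stageable locally first. DOCROOT is our own override, not part of the original setup; on the real server the files belong in /var/www/html:

```shell
# Create the PHP test page and a static page in an (overridable) docroot.
docroot="${DOCROOT:-/tmp/html}"
mkdir -p "$docroot"
echo "<?php phpinfo(); ?>" > "$docroot/info.php"
echo "<html><body><h1>Server ready</h1></body></html>" > "$docroot/info.html"

# Verify from a browser once Apache serves the docroot, e.g.:
# curl http://<public-ip>/info.php
```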

  • To install Python, use the following commands:

echo "Installing Python..."
yum -y install python python-pip
python --version
echo "Installed Python..."

  • To test Python, execute “python --version” from the command line; you will see the default Python version

Hope this helps…

How to build an Amazon VPC with public and private subnets from scratch

The following tutorial creates an Amazon VPC with public and private subnets from scratch.

  • Creation of the AWS VPC(AWS Virtual Private Cloud):
    • AWS VPC is a virtual network dedicated to your AWS account where you can launch and configure AWS resources.
    • To create a VPC – go to the VPC Dashboard, click on “Your VPCs” and then on “Create VPC”.
    • Then enter the name and CIDR block of the VPC to be created.


  • Creation of the Subnets:
    • Subnets are sub-networks of a VPC where we can launch our AWS resources.
    • To create a subnet – on the VPC Dashboard, click on “Subnets” and then on “Create Subnet”.
    • Then enter the name of the subnet (Naeem Public Subnet), select the VPC we want to add the subnet to, enter the CIDR block for the subnet and click “Yes, Create”.
    • Similarly, follow the same process to create another subnet – Naeem Private Subnet.


  • Creation of Internet Gateway:
    • An Internet gateway allows Internet accessibility inside the VPC.
    • To create an Internet gateway – go to the VPC Dashboard, click on “Internet Gateways” and then on “Create Internet gateway”.
    • Enter the name of the Internet gateway and click “Yes, Create”.
    • Now attach this Internet gateway to the VPC by clicking on “Attach to VPC”.


  • Creation of Public Route tables:
    • A route table contains a set of rules, called routes, that determine where network traffic is directed.
    • To create a route table – go to the VPC Dashboard, click on “Route Tables” and then on “Create Route Table”.
    • Now, select “Naeem Public RouteTable” and then click on the “Routes” tab to edit the routes for the route table to allow Internet access.
    • Then, click on “Add another route” to add a route for the Internet gateway. Add “0.0.0.0/0” (meaning access from anywhere) as the destination and the Internet gateway ID as the target, and click save.
    • Now click on the “Subnet Associations” tab, click “Edit” and add the subnet to be mapped to this route table.
    • Now, select “Naeem Public Subnet” by ticking its check-box and save changes.
    • Now “Naeem Public Subnet” is a public subnet, and instances launched with a public IP in this subnet will be accessible to the public.
    • The other subnet, “Naeem Private Subnet”, is private, and instances launched in this subnet will not be accessible to the public.
    • We need a NAT gateway/NAT instance (used by private subnets to access the Internet).


  • Creation of the NAT Gateway:
    • To create a NAT gateway – go to the VPC Dashboard, click on “NAT Gateways” and then on “Create NAT Gateway”.
    • Select the public subnet as the location where the NAT gateway will be created, assign an EIP (Elastic IP) to it (by clicking on “Create New EIP”) and then click on “Create NAT Gateway”.


  • Creation of the Private Route Table:
    • Now we need to create the private route table and add the NAT gateway to its routes so that instances inside the private subnet can access the Internet.
    • Select “Naeem Private Route Table” and click on the “Routes” tab to edit and add the route for the NAT gateway.
    • Select “0.0.0.0/0” as the destination and the NAT gateway ID as the target, and save changes.
    • Now, similarly, click on the “Subnet Associations” tab to edit and associate the private subnet with this route table.
  • Adding Network level security to the subnets:
    • NACLs (Network Access Control Lists) are used to provide network-level security and are stateless (which means you have to grant both inbound and outbound rules).
    • To create a NACL – go to the VPC Dashboard, click on “Network ACLs” and then on “Create Network ACL”; provide a name, select the VPC and save.
    • If you create a custom NACL, then by default it denies all inbound and outbound access (a default NACL, or one created through the wizard, allows all inbound and outbound access).
    • We are creating a custom NACL, so all inbound and outbound access is denied by default; we will edit it to add new inbound as well as outbound rules.
    • Let's add inbound and outbound rules. Rules are evaluated in order from the lowest rule number to the highest, and the first rule that matches the traffic is applied.
    • Add “allow” rules for both inbound and outbound for SSH (for remote access), ICMP (for ping), HTTP/HTTPS (for web) and MySQL (for database).
    • All other access will be denied.
    • Now associate both subnets with this NACL by editing the association from the “Subnet Associations” tab; select both subnets and save changes.
    • Now we are ready to launch instances in the public as well as private subnets.
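The console steps above map onto AWS CLI calls. Here is a dry-run sketch: resource IDs like vpc-xxxx are placeholders you would substitute from each command's output, and the run wrapper only prints each command so nothing is provisioned by accident:

```shell
# Dry-run: print each provisioning command instead of executing it.
run() { echo "$@"; }              # to execute for real: run() { "$@"; }

run aws ec2 create-vpc --cidr-block 10.0.0.0/16
run aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.1.0/24   # public
run aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.2.0/24   # private
run aws ec2 create-internet-gateway
run aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx
run aws ec2 create-route-table --vpc-id vpc-xxxx
run aws ec2 create-route --route-table-id rtb-xxxx \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx
run aws ec2 create-nat-gateway --subnet-id subnet-xxxx --allocation-id eipalloc-xxxx
```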


  • Adding a new instance in public subnet:
    • To add an instance – go to the EC2 Dashboard, click on “Instances” and then on “Launch Instance”.
    • Follow the step-by-step wizard:
      • In step 1 – choose an Amazon AMI – we will select “Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type – ami-6057e21a”
      • In step 2 – choose the instance type based on CPU, memory and processing power – we will select “General Purpose – t2.micro”
      • In step 3 – configure instance details – select the VPC, the subnet and “Auto-assign Public IP”, and leave everything else as default
      • Leave step 4 (storage) and step 5 (tags) as default
      • In step 6 – choose the security group – we will create a new security group for this instance, called “Web DMZ Security Group”
      • Security groups provide instance-level security in AWS, where we can define which ports to open; we will open SSH, ICMP, HTTP and HTTPS for the public-facing instance.
      • Then “Review and Launch”; you will be prompted to create a key pair and download it for security, then launch the instance.
      • Wait for the instance to become ready and use SSH tools like PuTTY or MobaXterm to connect to this instance.
      • Follow the “Connect” instructions to connect to this server.
      • Using MobaXterm to connect to this instance – enter the public IP, the user name (ec2-user by default) and the private key.
      • Here is the connected server.


  • Adding a new instance in the private subnet:
    • To add an instance – go to the EC2 Dashboard, click on “Instances” and then on “Launch Instance”.
    • Follow the step-by-step wizard:
      • Follow steps 1 to 7 as in the previous public-instance creation, except step 3: this time we will launch the instance in the private subnet with “Auto-assign Public IP” disabled.
      • Also, since we are not using a VPN, we can connect to this private instance only through a jump server, i.e. a server in the public subnet.
      • We will have to upload the private key to the public server (not the best practice, but done here for the sake of example), change its mode using chmod and then use “ssh” as per the connection instructions.
      • Let's upload the private key to the public instance and change its mode using chmod.
      • You can see the uploaded file “NaeemKey.pem” in the right-side pane.
      • In the left-side pane, we use the “chmod 400 NaeemKey.pem” command to change the security mode of the private key.
      • Finally, we use – ssh -i “NaeemKey.pem” ec2-user@<private-IP> – where <private-IP> is the private IP of the instance launched in the private subnet.

Hope this tutorial helps.

How to monitor EC2, CloudWatch, EBS, RDS, ELB, ElasticCache using metrics

AWS is the front runner when it comes to highly available, fault-tolerant, secure and highly scalable services that can integrate with almost everything in the cloud as well as in your own data center.

For monitoring your VPC(Virtual Private Cloud), AWS has these two very renowned services:
– CloudWatch and
– CloudTrail

While ‘CloudTrail’ is primarily used to record the API calls made to AWS services and applications, ‘CloudWatch’ is used for monitoring and logging metrics periodically (by default every 5 minutes; with detailed monitoring, every minute).

It can be used to monitor:
– Compute resources like EC2 instances, ELBs, Route 53, Auto Scaling Groups,
– Storage & CDN resources like S3, CloudFront, EBS Volumes,
– Database and analytics services like DynamoDB, ElastiCache, RDS, Elastic MapReduce, Redshift
– SNS, SQS etc.
Let’s cover them one by one on how CloudWatch monitors different services:

A- EC2(Elastic Compute Cloud)

CloudWatch can monitor the following metrics:

# CPU 

– CPUCreditUsage (number of CPU credits consumed by the instance. One CPU credit equals one vCPU running at 100% utilization for one minute)

– CPUCreditBalance (number of CPU credits available for the instance to burst beyond its base CPU utilization, expire every 24 hrs)

– CPUUtilization (percentage of allocated EC2 compute units)

# Network

– NetworkIn (number of bytes received on all network interfaces by the instance)

– NetworkOut (number of bytes sent out on all network interfaces by the instance)

– NetworkPacketsIn (number of packets received on all network interfaces by the instance)

– NetworkPacketsOut (number of packets sent out on all network interfaces by the instance)

# Disk

– DiskReadOps (Completed read operations from all instance volumes)

– DiskWriteOps (Completed write operations to all instance store volumes )

– DiskReadBytes (Bytes read from all instance store volumes )

– DiskWriteBytes (Bytes written to all instance store volumes)

# Status Check

– StatusCheckFailed (Reports whether the instance has passed both the instance status check and the system status check in the last minute)

– StatusCheckFailed_Instance (whether the instance has passed the instance status check in the last minute.)

– StatusCheckFailed_System (whether the instance has passed the system status check in the last minute.)

– You can create alarms based on these above metrics to watch the health of the host and the instances.
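For instance, an alarm on CPUUtilization can be sketched as follows. The alarm name, instance ID and threshold are illustrative, and the command is stored and printed so you can inspect it before running it:

```shell
# Build (but don't execute) a put-metric-alarm command: fire when average
# CPUUtilization exceeds 80% for two consecutive 5-minute periods.
alarm_cmd=(aws cloudwatch put-metric-alarm
  --alarm-name high-cpu-demo
  --namespace AWS/EC2
  --metric-name CPUUtilization
  --dimensions Name=InstanceId,Value=i-xxxx
  --statistic Average --period 300
  --evaluation-periods 2 --threshold 80
  --comparison-operator GreaterThanThreshold)

echo "${alarm_cmd[@]}"            # inspect; execute with: "${alarm_cmd[@]}"
```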

Rest of the article to be continued…


How to manage website Failover using AWS Route 53 and a website hosted on an external domain


You have 2 websites:

  • “”, which is the primary website, is hosted on an AWS EC2 instance behind an ELB (Elastic Load Balancer). Please keep in mind that the registrar only holds the site name; the site is not hosted there.
  • “”, which is the secondary/failover website, is hosted on an external domain.
  • You are using the Route 53 DNS service to resolve the domains.


  • “” should be the primary website and should be served while the EC2 instance is healthy.
  • In case of failure, the external website “” becomes the failover website. Requests for “” should be routed to this external secondary website when the primary website fails.


  • There can be various solutions. Let's see them one by one.
  • Solution 1:
    • This is a very primitive solution. Just write a domain-forwarding rule in your registrar's domain manager panel. Any request for the “futureCloud.Technology” website will automatically be forwarded to “”
    • See snapshot: GoDaddy forwarding configuration.
    • Issue – this is a very basic solution, and AWS Route 53 never comes into the picture. It should be Amazon Route 53 that dictates the failover logic, not the registrar.


  • Solution 2:
    • We will use AWS Route 53 to dictate the website failover from Primary to Secondary.
    • I am assuming the following:
      • That you created an AWS EC2 instance. (Ideally you would create a couple of instances hosting the WordPress website and a couple of MySQL instances to make your site highly available and fault tolerant. But assume that you have installed the WordPress website on only one LAMP server (Linux, Apache, MySQL and PHP), so that if this primary website goes down you can serve the secondary from an external domain.)
      • That you already installed and configured ‘WordPress’ on the server.
      • That this site is configured behind an ELB (Elastic Load Balancer).
      • That your primary website is running on the EC2 instance and the failover secondary website (“”) is running on an external WordPress domain.
      • That you have registered a domain name (say “”) at a domain registrar. I did at GoDaddy.
    • So now let’s take the things forward from here.
      • First you will have to create a hosted zone in Route 53 exactly matching your registered domain name “” for your primary website.
      • When you create a hosted zone in Route 53, it creates a couple of NS (Name Server) records and an SOA (Start of Authority) record. You will have to add an ‘A’ record for your EC2 website configured behind the ELB, and another ‘A’ record for an S3 bucket (exactly matching the name of the registered website and configured as a static website). We will use this bucket for redirection.
      • See snapshots: AWS hosted zone records for EC2 and for S3.
  • Note down the 4 name servers, log in to your registrar's domain manager control panel (mine was GoDaddy) and add those 4 entries as Name Server records.
  • See snapshot: GoDaddy name server entries.
  • Now that you are configured to run your primary website, go ahead, open a browser and run the website – “”.
  • See snapshot: AWS S3 buckets.
  • Now you have to configure failover so that when the EC2 instance is unhealthy or not running, your secondary failover website is served.
  • To do that, first create another bucket exactly matching the name of the external failover website “”.
  • See snapshot: S3 redirect from the futureCloud bucket to the wordpress bucket.
  • Configure the bucket “” as a website with a redirection rule that redirects to the other bucket “”.
  • See snapshot: S3 redirect configuration.
  • Configure the bucket “” as a website with a redirection rule that redirects to the external website “”.
  • See snapshot: AWS S3 buckets.
  • When the EC2 instance is not healthy or not running, the request first reaches the “” bucket, which redirects to the other bucket, which in turn is configured to redirect to your external site.
  • Go to Route 53 and configure another hosted zone exactly matching the name of the external domain name (“”), and add an ‘A’ record as failover to the bucket of the same name.
  • With the EC2 instance down, Route 53 first looks at the S3 bucket “”, sees its redirection rule to the bucket “”, which in turn redirects to the actual external website “”. The hosted-zone entry thus maps this failover and redirects you to the external website.
  • Now go to the browser and run “”; this time you will see that the website is redirected to “”
  • See snapshot: external website running.
  • All these redirection settings are needed because AWS Route 53 is still not a very powerful DNS tool.
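The failover record itself can be expressed as a Route 53 change batch. The sketch below writes a PRIMARY failover alias record; the domain, hosted-zone IDs and ELB DNS name are placeholders of ours, and a matching SECONDARY record would point at the S3 redirect bucket:

```shell
# Write a PRIMARY failover alias record as a Route 53 change batch.
cat > /tmp/failover-primary.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "SetIdentifier": "primary",
      "Failover": "PRIMARY",
      "AliasTarget": {
        "HostedZoneId": "ZELBXXXXXXXXXX",
        "DNSName": "my-elb-123456.us-east-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
EOF

# Apply with (hosted zone ID is a placeholder):
# aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXX \
#   --change-batch file:///tmp/failover-primary.json
```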

Thanks for reading this article. Please contact me in case you need any help. Please go to “About Us” page to view my details.

CloudWatch – a service to do EC2 Instance Health Check/Monitoring , Troubleshooting, Metrics and Analysis

The health check/monitoring, troubleshooting, metrics and analysis of EC2 instances – and getting timely alerts to fix problems so as to keep your cloud architecture highly available, auto-scaling and fault tolerant – are among the important roles and responsibilities of a cloud architect or SysOps admin. Let's check how we can achieve this.

So let's first try to understand what CloudWatch is – it is AWS's health-monitoring service for AWS resources and applications. It can monitor the following:
– Compute resources like Auto Scaling groups, load balancers, Route 53,
– Storage resources like EBS volumes, storage gateways, CloudFront,
– Database services like relational RDS instances and non-relational services like DynamoDB,
– Analytics services like Elastic MapReduce, Redshift,
– In-memory cache services like ElastiCache, to name a few.

CloudWatch can monitor the following metrics:
 – CPU utilization
 – Disk reads
 – Network in and out
 – Status checks
But it can't check a few other metrics, like memory utilization; for that we have to add custom metrics, which we will see later in this post.

The default monitoring checks these metrics every 5 minutes whereas the detailed monitoring is every 1 minute.
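Detailed monitoring is enabled per instance. A dry-run sketch (the instance ID is a placeholder, and the command is printed rather than executed):

```shell
# Build the command that switches an instance to 1-minute detailed monitoring.
cmd=(aws ec2 monitor-instances --instance-ids i-xxxx)
echo "${cmd[@]}"                  # execute with: "${cmd[@]}"

# Revert to basic 5-minute monitoring with:
# aws ec2 unmonitor-instances --instance-ids i-xxxx
```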

The status checks listed above can be of two types:
 – System Status Checks – checks related to the host on which the instance is virtualized, e.g. loss of network or power, or software or hardware issues on the host machine. Normally restarting/terminating the instance or contacting AWS are the options available.
 – Instance Status Checks – checks related to the VM (virtual machine) itself, e.g. memory leaks, a corrupted or incompatible file system, or a misconfigured network. Normally restarting/terminating the instance or checking/troubleshooting your own application for bugs are the options.

On the AWS console go to the CloudWatch service :
 – Click “Create dashboard”
 – Add a widget to dashboard based on the metrics listed above
 – Save the dashboard.(See snapshot below)

Now, what if we want to monitor a custom metric (memory utilization) that is not monitored by CloudWatch by default? Well, then we have to use some custom scripts. Let's see how it is done.

- Install the required packages:
 sudo yum install perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https
- Download the CloudWatch Custom Monitoring Scripts:
 curl -O
- Unzip the scripts:
 cd aws-scripts-mon
- Execute the script (you will get a "Successfully reported metrics to CloudWatch. Reference Id: 84bf63d3-2841-11e7-a20f-7786b8297dbd" message on success):
 ./ --mem-util --mem-used --mem-avail
- Add a crontab job for 5 minutes intervals:
 */5 * * * * ~/aws-scripts-mon/ --mem-util --disk-space-util --disk-path=/ --from-cron

Once you have run these scripts successfully, the custom metrics for memory utilization will also be available and you can add it as a widget. See below.
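As an alternative sketch to the monitoring scripts above (this is not what the article uses; the namespace and metric name are our own choices), the same memory figure can be computed and pushed with the AWS CLI:

```shell
# Compute the used-memory percentage from /proc/meminfo (Linux only).
mem_used_pct=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "%.1f", (t-a)/t*100}' /proc/meminfo)
echo "MemoryUtilization: ${mem_used_pct}%"

# Publish it as a custom CloudWatch metric (not run here):
# aws cloudwatch put-metric-data --namespace Custom/System \
#   --metric-name MemoryUtilization --unit Percent --value "$mem_used_pct"
```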

Very Intuitive and Easy going step by step AWS tutorial for Beginners

Please try this link –

I have tried a lot of sites, to name a few, but the best one is this one.

It is a very intuitive and easy-going, step-by-step AWS tutorial for beginners. It helps you create a highly available, auto-scaling WordPress website using EC2 instances and a MySQL database, with a pair of public subnets hosting the instances and a pair of private subnets hosting the databases, fronted by a load balancer and Route 53 DNS management. Further, it also uses CloudFront as a CDN for website media, and CloudWatch and SNS for alerts and failover management.

I have never seen all this put into one tutorial. In fact, this website is hosted using the same approach. If you are an AWS enthusiast, this website will light a fire in you to start sailing on this AWS boat and charm you to dive deeper into more advanced topics and services in AWS.

Wish you guys best of luck.