Wednesday, March 25, 2020

Releasing TIME_WAIT connection

If you notice heavy connections piling up on a port in TIME_WAIT and never being released, try the kernel tuning below (add it to /etc/sysctl.conf and apply with `sysctl -p`). One caution: `net.ipv4.tcp_tw_recycle` is known to break connections from clients behind NAT and was removed entirely in Linux 4.12, so enable it with care.

# Network tuning
net.ipv4.tcp_fin_timeout = 35
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_keepalive_intvl = 35
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1 

[root@host1 hiuy]# netstat -atunlp | grep TIME
tcp        0      0      TIME_WAIT   -
tcp        0      0      TIME_WAIT   -
tcp        0      0      TIME_WAIT   -
tcp        0      0      TIME_WAIT   -
tcp        0      0      TIME_WAIT   -
tcp        0      0      TIME_WAIT   -
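To watch whether the tuning takes effect, TIME_WAIT sockets can be counted straight from netstat output. A minimal sketch in python; the sample mimics the abbreviated output above:

```python
def count_time_wait(netstat_output: str) -> int:
    """Count connections currently in TIME_WAIT in `netstat -atn` output."""
    return sum(1 for line in netstat_output.splitlines() if "TIME_WAIT" in line)

# Abbreviated sample in the same shape as the netstat output above
sample = (
    "tcp        0      0      TIME_WAIT   -\n"
    "tcp        0      0      ESTABLISHED -\n"
)
print(count_time_wait(sample))  # 1
```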

Sunday, November 17, 2019

Exposing the system journal as an HTTP REST endpoint.

The journal naturally logs a lot of useful information, ranging from kernel messages to critical and informational entries. If I were able to expose those logs through an HTTP REST endpoint, I could easily query the status of a server without logging in to it. How cool is that? The code below uses Flask and exposes the journal on an arbitrary port (e.g. 5577).

from flask import Response
from flask import Flask
import json
import subprocess
import socket

app = Flask(__name__)

@app.route("/journal.json", methods = ['GET'])
def get_journal():
  journal_dict = {}
  cmd = "sudo journalctl -k -S today -o json"
  (output, _) = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, encoding='utf-8').communicate()
  json_list = [line.strip() for line in output.split("\n") if len(line) != 0]
  journal_dict["journal"] = json_list
  js = json.dumps(journal_dict)
  resp = Response(js, status=200, mimetype='application/json')
  resp.headers['Link'] = 'http://{}'.format(socket.gethostname())
  return resp

def main(port=5577):
  app.run(host='0.0.0.0', port=port)

if __name__ == '__main__':
  main()
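On the client side, the journal list served above can be turned back into dicts, since `journalctl -o json` emits one JSON object per line. A minimal sketch; the sample line is made up:

```python
import json

def parse_journal(output: str) -> list:
    """Collect newline-delimited JSON records emitted by journalctl -o json."""
    return [json.loads(line) for line in output.splitlines() if line.strip()]

# Made-up sample line in the journalctl -o json shape
sample = '{"PRIORITY": "6", "MESSAGE": "kernel: example boot message"}\n'
entries = parse_journal(sample)
```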

Monday, November 4, 2019

Namenode migration in CDH cluster

  • Stop the NameNode while it is in standby mode.
  • Back up the NameNode metadata (name) directories.
  • Make sure the backup is restored to the same location on the target NameNode host, with permissions preserved.
  • The ownership is usually `hdfs:hadoop`, in case it is missing.
  • There are five important NameNode settings that need to be inherited by the target NameNode:
    1. `dfs.ha.automatic-failover.enabled` checked
    2. `NameNode Nameservice`, usually `nameservice1`, although it can vary from case to case. It is good to record it beforehand.
    3. `Mount Points`, ditto as above
    4. `Quorum-based Storage Journal name`, ditto as above
    5. Java Opts setting
  • NameNode migration goes hand in hand with the Failover Controller. Make sure they are migrated at the same time, to the same node.
  • Once all the backups and recorded settings are ready, we are good to kick the tyres and get rolling.
  • First of all, delete the "Namenode (Standby)" role and the "Failover Controller" from the old host.
  • Add new role > select "Namenode" and "Failover Controller" to the new host.
  • Make sure all above mentioned Namenode settings are in place.
  • Close your palms and pray when you hit the start button on both. You should start the Failover Controller first, followed by the NameNode.
  • You can breathe a sigh of relief once the NameNode and Failover Controller have started; now follow up with a series of service restarts:
  • Do a rolling restart on Data nodes 
  • Do a rolling restart on Hive Metastore Server
  • Do a rolling restart on Hive server
  • Do a rolling restart on Node Manager
  • Do a rolling restart on Resource Manager
  • Do a rolling restart on Oozie
  • Do a rolling restart on Httpfs
  • Do a rolling restart on Journal Nodes
  • Lastly Namenode
  • The show is complete. The End
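The first step above, stopping only the standby NameNode, can be sanity-checked from the command line with `hdfs haadmin -getServiceState`. A minimal sketch, where the service ID (e.g. `namenode1`) is an assumption and must match your cluster's HA configuration:

```python
import subprocess

def is_standby(state_output: str) -> bool:
    # `hdfs haadmin -getServiceState <serviceId>` prints "active" or "standby"
    return state_output.strip().lower() == "standby"

def namenode_state(service_id: str) -> str:
    """Ask the cluster for the HA state of the given NameNode service ID."""
    result = subprocess.run(
        ["hdfs", "haadmin", "-getServiceState", service_id],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```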

Tuesday, November 20, 2018

Hadoop utils: YML to XML parser

Dealing with XML files, especially Hadoop configuration files, is really painful. So I had an idea: keep all the configuration in YAML format, and write a parser to convert it into XML.

e.g. a hadoop-site.yml file

--- : /var/local/hadoop/hdfs/name : /var/local/hadoop/hdfs/data 
dfs.heartbeat.interval : 3
dfs.datanode.address :
dfs.datanode.http.address : | Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.

It will later be converted to hadoop-site.xml

    Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
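Since the sample files above lost some of their formatting, here is a self-contained sketch of the intended mapping for a single property, using the `dfs.heartbeat.interval` entry from the YAML above:

```python
from xml.etree.ElementTree import Element, SubElement, tostring

# Each YAML key/value becomes a <property> element under <configuration>
config = {"dfs.heartbeat.interval": 3}

top = Element("configuration")
for key, value in config.items():
    prop = SubElement(top, "property")
    SubElement(prop, "name").text = key
    SubElement(prop, "value").text = str(value)

print(tostring(top, encoding="unicode"))
# <configuration><property><name>dfs.heartbeat.interval</name><value>3</value></property></configuration>
```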

Here is the small python code that I wrote to do the conversion

import yaml
from xml.etree.ElementTree import Element, SubElement, Comment
from xml.etree import ElementTree
from xml.dom import minidom

def prettify(element):
  rough_string = ElementTree.tostring(element, 'utf-8')
  reparsed = minidom.parseString(rough_string)
  return reparsed.toprettyxml(indent="  ")

def read_yaml_file(yaml_file):
  with open(yaml_file, "r") as file:
    site_config = yaml.safe_load(file)
    return site_config

def generate_xml(yaml_file):
  config = read_yaml_file(yaml_file)
  _top = Element('configuration')
  for key, values in config.items():
    _property = SubElement(_top, 'property')
    _name = SubElement(_property, 'name')
    _value = SubElement(_property, 'value')
    if "|" in str(values):
      value, description = [s.strip() for s in values.split("|", 1)]
    else:
      value = values
      description = ""
    _name.text = key
    _value.text = str(value)
    if description:
      _description = SubElement(_property, 'description')
      _description.text = description
  xml_file = yaml_file.split(".")[0] + ".xml"
  with open(xml_file, "w") as file:
    file.write(prettify(_top))

To test it, you can import the function and use it like this

from hadoop_xml_parser import generate_xml

if __name__ == '__main__':
  generate_xml("hadoop-site.yml")

Friday, July 13, 2018

Part 2: Docker networking domain sharing

Following up on the previous post, here is the solution to the problem.

Please revise alipapa.yml, on the networks part

    driver: bridge 

If I want to give a name to the network bridge, can I do something like this?

    driver: bridge
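For reference, compose file format 3.5 and later supports an explicit `name` key on a network. A sketch of the full stanza; the network key and name below are made up, since the original snippet lost them in formatting:

```
networks:
  papanet:              # hypothetical network key
    driver: bridge
    name: alipapa.com   # hypothetical explicit bridge name
```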

Let's try it out, and docker-compose it up!

ubuntu@ip-172-31-11-243:~$ docker-compose -f alipapa.yml up -d
Creating network "" with driver "bridge"
Recreating ali01 ... done
Recreating ali02 ... done

I can sense the smell of success. But, let's find out more.

ubuntu@ip-172-31-11-243:~$ docker exec -ti ali01 bash
root@ali01:/# hostname -f
root@ali01:/# ping ali01
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from ( icmp_seq=2 ttl=64 time=0.034 ms
root@ali01:/# ping ali02
PING ali02 ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=64 time=0.076 ms
64 bytes from ( icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from ( icmp_seq=3 ttl=64 time=0.065 ms 

ubuntu@ip-172-31-11-243:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8e0851dec6fd            bridge              local
07cb97e27689        bridge              bridge              local
e81946364c3d        host                host                local
b285c6c7236e        none                null                local

5fb1bdc9f13e     bridge              local

Indeed! The problem has been solved, and the FQDN is resolving nicely as well. Thank you so much, Docker embedded DNS engine! DNS resolution works out of the box!

Part 1: Docker networking domain sharing

Take a look at this test yaml file.

ubuntu@ip-172-31-11-243:~$ cat alipapa.yml
version: "3.5"
services:
  ali01:
    image: ubuntu:16.04
    hostname: ali01
    container_name: ali01
    entrypoint: sleep infinity
  ali02:
    image: ubuntu:16.04
    hostname: ali02
    container_name: ali02
    entrypoint: sleep infinity
networks:
    driver: bridge

When you bring up these containers, consider tapping into ali01 and pinging ali02. What is your expected result?

ubuntu@ip-172-31-11-243:~$ docker-compose -f alipapa.yml up -d
Creating network "" with driver "bridge"
Creating ali02 ... done
Creating ali01 ... done
ubuntu@ip-172-31-11-243:~$ docker ps -a
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                       PORTS               NAMES
fdbfb1a48686        ubuntu:16.04           "sleep infinity"         7 seconds ago       Up 6 seconds                                     ali02
7566e8c7db8b        ubuntu:16.04           "sleep infinity"         7 seconds ago       Up 5 seconds                                     ali01

ubuntu@ip-172-31-11-243:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
07cb97e27689        bridge              bridge              local
e81946364c3d        host                host                local
b285c6c7236e        none                null                local
5fb1bdc9f13e     bridge              local

Wait... I didn't actually create a domain? Why is that? Should it simply be

root@ali01:/# ping ali02
PING ali02 ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from ( icmp_seq=2 ttl=64 time=0.053 ms
64 bytes from ( icmp_seq=3 ttl=64 time=0.062 ms

root@ali01:/# ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from ( icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from ( icmp_seq=3 ttl=64 time=0.065 ms

Scratching head moment. 😓😓😓

docker-compose dilemma, don't think twice, just upgrade it!


I have been hitting the same problem with docker-compose lately. The problem looks like this.

ubuntu@ip-172-31-11-243:~/apache-hadoop-docker$ docker-compose -v

docker-compose version 1.8.0, build unknown

ubuntu@ip-172-31-11-243:~/apache-hadoop-docker$ docker-compose -f hdfs-cluster-nonkerberized.yml  up
ERROR: Version in "./hdfs-cluster-nonkerberized.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a version of "2" (or "2.0") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see

There is nothing wrong with your docker-compose yml file when it starts with version: 3.0 or above. Don't blame your yml file, even if you are fresh to writing docker-compose yml files. The problem resides in the docker-compose that comes along with Ubuntu 16.04 (Xenial). Don't think twice, please go ahead and upgrade it. Your life will be much better after all.

Considering that you are on Ubuntu Xenial, here are the steps to upgrade:

> mv /usr/bin/docker-compose /usr/bin/docker-compose-old

> curl -L`uname -s`-`uname -m` -o /usr/bin/docker-compose

> chmod +x /usr/bin/docker-compose


Sunday, November 12, 2017

Part 2: AWS pricing

As an extension to the earlier post on AWS pricing, here is a small script that finds the price of each On-Demand instance. Please check requirements.txt for the required libraries.

Here is how it looks.

(python3.6) nasilemak:aws yenonnhiu$ python3
Asia Pacific (Singapore): server2 t2.medium 2017-10-11 06:16:19
 * Price OnDemand t2.medium effective from 2017-11-09T22:41:06Z: 0.0584000000 USD/hour
 * Total accumulated price in USD: 45.44
 * Monthly charged price in USD: 16.37
US East (N. Virginia): server1 t2.nano 2016-11-21 13:59:05
 * Price OnDemand t2.nano effective from 2017-11-09T22:41:06Z: 0.0058000000 USD/hour
 * Total accumulated price in USD: 49.57
 * Monthly charged price in USD: 1.63
US West (Oregon): ubuntu2 t2.micro 2017-01-23 04:28:35
 * Price OnDemand t2.micro effective from 2017-11-09T22:41:06Z: 0.0116000000 USD/hour
 * Total accumulated price in USD: 81.71
 * Monthly charged price in USD: 3.25
** Total monthly price for all instances in USD: 21.25
** Total accumulated price for all instances in USD: 176.72

Friday, November 10, 2017

Part 1: AWS Pricing


AWS pricing seems difficult to find out, because AWS yields a list of JSON files for users to parse prices from. This index.json will eventually guide you to download another set of index.json files per service code, for example: AmazonEC2. Here is a link describing how you could achieve the purpose.

Recently, boto3 started allowing developers to query the JSON pricing objects online without downloading the files. Amazon published a blog post recently about this very objective. With the set of APIs available from boto3.client('pricing'), you can do a full range of filtering based on the EC2 attributes and values in order to nail down an instance's price.

For example, I have a Linux instance located in US East (N. Virginia), and the instance type is t2.nano. So the filter for the function call will be as below

Filters = [
         {'Type' :'TERM_MATCH', 'Field':'operatingSystem', 'Value':'Linux'},
         {'Type' :'TERM_MATCH', 'Field':'instanceType', 'Value':'t2.nano'},
         {'Type' :'TERM_MATCH', 'Field':'location', 'Value':'US East (N. Virginia)'}
]

However, the resulting JSON object is a bit overwhelming when parsing out the price per hour. With the help of a python library such as objectpath, we can drill down to the price easily. The small python code looks like below.

import boto3
import json
import objectpath

pricing = boto3.client('pricing')
print("Selected EC2 Products")
response = pricing.get_products(
     Filters = [
         {'Type' :'TERM_MATCH', 'Field':'operatingSystem', 'Value':'Linux'},
         {'Type' :'TERM_MATCH', 'Field':'instanceType', 'Value':'t2.nano'},
         {'Type' :'TERM_MATCH', 'Field':'location', 'Value':'US East (N. Virginia)'}
     ]
)
[price_info_dump] = response['PriceList']
price_tree = objectpath.Tree(json.loads(price_info_dump))
publish_date = price_tree.execute("$.publicationDate")
[sku] = price_tree.execute("$.terms.OnDemand")
[rateCode] = price_tree.execute("$.terms.OnDemand.'{}'.priceDimensions".format(sku))
print("Price per hours for OnDemand t2.nano effective from {}: {}".format(
        publish_date,
        price_tree.execute("$.terms.OnDemand.'{}'.priceDimensions.'{}'.pricePerUnit".format(sku, rateCode))))

Here is what the output looks like

(python3-env1) nasilemak:Developments hiuy$ python3
Selected EC2 Products
Price per hours for OnDemand t2.nano effective from 2017-11-09T22:41:06Z: {'USD': '0.0116000000'}
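If you would rather avoid the extra objectpath dependency, the same drill-down works with plain dict operations. The entry below is a trimmed, hypothetical PriceList document; real ones carry many more fields:

```python
import json

# Trimmed, made-up PriceList entry in the shape returned by get_products
price_info_dump = json.dumps({
    "publicationDate": "2017-11-09T22:41:06Z",
    "terms": {"OnDemand": {
        "SKU123.JRTCKXETXF": {"priceDimensions": {
            "SKU123.JRTCKXETXF.6YS6EN2CT7": {
                "pricePerUnit": {"USD": "0.0116000000"}}}},
    }},
})

doc = json.loads(price_info_dump)
ondemand = doc["terms"]["OnDemand"]
[sku] = ondemand                                  # single offer term expected
[rate_code] = ondemand[sku]["priceDimensions"]    # single rate code expected
price = ondemand[sku]["priceDimensions"][rate_code]["pricePerUnit"]
print(price)  # {'USD': '0.0116000000'}
```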

Here is the range of Amazon EC2 filtering attributes and values that you can use.

Selected EC2 Attributes & Values
  volumeType: Cold HDD, General Purpose, Magnetic, Provisioned IOPS, Throughput Optimized HDD
  maxIopsvolume: 10000, 20000, 250 - based on 1 MiB I/O size, 40 - 200, 500 - based on 1 MiB I/O size
  instanceCapacity10xlarge: 1
  locationType: AWS Region
  instanceFamily: Compute optimized, GPU instance, General purpose, Memory optimized, Micro instances, Storage optimized
  operatingSystem: Linux, NA, RHEL, SUSE, Windows
  clockSpeed: 2 GHz, 2.3 GHz, 2.4  GHz, 2.4 GHz, 2.5 GHz, 2.6 GHz, 2.8 GHz, 2.9 GHz, 3.0 Ghz, Up to 3.0 GHz, Up to 3.3 GHz
  LeaseContractLength: 1 yr, 1yr, 3 yr, 3yr
  ecu: 0, 104, 108, 116, 124.5, 12, 132, 135, 139, 13, 14, 16, 188, 20, 26, 278, 27, 28, 2, 31, 33.5, 340, 349, 35, 3, 4, 52, 53.5, 53, 55, 56, 6.5, 62, 7, 88, 8, 94, 99, NA, Variable
  networkPerformance: 10 Gigabit, 20 Gigabit, 25 Gigabit, High, Low to Moderate, Low, Moderate, NA, Up to 10 Gigabit, Very Low
  instanceCapacity8xlarge: 1, 2
  group: EBS I/O Requests, EBS IOPS, EC2-Dedicated Usage, ELB:Balancer, ELB:Balancing, ElasticIP:AdditionalAddress, ElasticIP:Address, ElasticIP:Remap, NGW:NatGateway
  maxThroughputvolume: 160 MB/sec, 250 MiB/s, 320 MB/sec, 40 - 90 MB/sec, 500 MiB/s
  ebsOptimized: Yes
  maxVolumeSize: 1 TiB, 16 TiB
  gpu: 16, 1, 2, 4, 8
  processorFeatures: Intel AVX, Intel AVX2, Intel AVX512, Intel Turbo, Intel AVX, Intel AVX2, Intel Turbo, Intel AVX; Intel AVX2; Intel Turbo, Intel AVX; Intel Turbo
  intelAvxAvailable: Yes
  instanceCapacity4xlarge: 2, 4
  servicecode: AmazonEC2
  groupDescription: Additional Elastic IP address attached to a running instance, Charge for per GB data processed by NAT Gateways with provisioned bandwidth, Charge for per GB data processed by NatGateways, Data processed by Elastic Load Balancer, Elastic IP address attached to a running instance, Elastic IP address remap, Fee for running at least one Dedicated Instance in the region, Hourly charge for NAT Gateways, IOPS, Input/Output Operation, LoadBalancer hourly usage by Application Load Balancer, LoadBalancer hourly usage by Network Load Balancer, Per hour and per Gbps charge for NAT Gateways with provisioned bandwidth, Standard Elastic Load Balancer, Used Application load balancer capacity units-hr, Used Network load balancer capacity units-hr
  processorArchitecture: 32-bit or 64-bit, 64-bit
  physicalCores: 20, 24, 36, 72
  productFamily: Compute Instance, Dedicated Host, Fee, IP Address, Load Balancer-Application, Load Balancer-Network, Load Balancer, NAT Gateway, Storage Snapshot, Storage, System Operation
  enhancedNetworkingSupported: Yes
  intelTurboAvailable: Yes
  memory: 0.5 GiB, 0.613 GiB, 1 GiB, 1,952 Gib, 1.7 GiB, 117 GiB, 122 GiB, 144 GiB, 15 GiB, 15.25 GiB, 16 GiB, 160 GiB, 17.1 GiB, 2 GiB, 22.5 GiB, 23 GiB, 244 GiB, 256 GiB, 3,904 GiB, 3.75 GiB, 30 GiB, 30.5 GiB, 32 GiB, 34.2 GiB, 4 GiB, 488 GiB, 60 GiB, 60.5 GiB, 61 GiB, 64 GiB, 68.4 GiB, 7 GiB, 7.5 GiB, 72 GiB, 768 GiB, 8 GiB, 976 Gib, NA
  dedicatedEbsThroughput: 1000 Mbps, 10000 Mbps, 12000 Mbps, 14000 Mbps, 1600 Mbps, 1750 Mbps, 2000 Mbps, 3000 Mbps, 3500 Mbps, 400 Mbps, 4000 Mbps, 425 Mbps, 450 Mbps, 4500 Mbps, 500 Mbps, 6000 Mbps, 7000 Mbps, 750 Mbps, 800 Mbps, 850 Mbps, 9000 Mbps, Upto 2250 Mbps
  vcpu: 128, 16, 17, 1, 2, 32, 36, 40, 4, 64, 72, 8
  OfferingClass: convertible, standard
  instanceCapacityLarge: 16, 22, 32, 36
  termType: OnDemand, Reserved
  storage: 1 x 0.475 NVMe SSD, 1 x 0.95 NVMe SSD, 1 x 1,920, 1 x 1.9 NVMe SSD, 1 x 160 SSD, 1 x 160, 1 x 32 SSD, 1 x 320 SSD, 1 x 350, 1 x 4 SSD, 1 x 410, 1 x 420, 1 x 60 SSD, 1 x 80 SSD, 1 x 800 SSD, 1 x 850, 12 x 2000 HDD, 2 x 1,920, 2 x 1.9 NVMe SSD, 2 x 1024 SSD, 2 x 120 SSD, 2 x 16 SSD, 2 x 160 SSD, 2 x 320 SSD, 2 x 40 SSD, 2 x 420, 2 x 80 SSD, 2 x 800 SSD, 2 x 840 GB, 2 x 840, 24 x 2000 HDD, 24 x 2000, 3 x 2000 HDD, 4 x 1.9 NVMe SSD, 4 x 420, 4 x 800 SSD, 4 x 840, 6 x 2000 HDD, 8 x 1.9 NVMe SSD, 8 x 800 SSD, EBS only, NA
  intelAvx2Available: Yes
  storageMedia: Amazon S3, HDD-backed, SSD-backed
  physicalProcessor: High Frequency Intel Xeon E7-8880 v3 (Haswell), Intel Xeon E5-2650, Intel Xeon E5-2666 v3 (Haswell), Intel Xeon E5-2670 (Sandy Bridge), Intel Xeon E5-2670 v2 (Ivy Bridge), Intel Xeon E5-2670 v2 (Ivy Bridge/Sandy Bridge), Intel Xeon E5-2670, Intel Xeon E5-2676 v3 (Haswell), Intel Xeon E5-2676v3 (Haswell), Intel Xeon E5-2680 v2 (Ivy Bridge), Intel Xeon E5-2686 v4 (Broadwell), Intel Xeon Family, Intel Xeon Platinum 8124M, Intel Xeon x5570, Variable
  provisioned: No, Yes
  servicename: Amazon Elastic Compute Cloud
  PurchaseOption: All Upfront, AllUpfront, No Upfront, NoUpfront, Partial Upfront, PartialUpfront
  instanceCapacity18xlarge: 1
  instanceType: c1.medium, c1.xlarge, c3.2xlarge, c3.4xlarge, c3.8xlarge, c3.large, c3.xlarge, c3, c4.2xlarge, c4.4xlarge, c4.8xlarge, c4.large, c4.xlarge, c4, c5.18xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.large, c5.xlarge, c5, cc1.4xlarge, cc2.8xlarge, cg1.4xlarge, cr1.8xlarge, d2.2xlarge, d2.4xlarge, d2.8xlarge, d2.xlarge, d2, f1.16xlarge, f1.2xlarge, f1, g2.2xlarge, g2.8xlarge, g2, g3.16xlarge, g3.4xlarge, g3.8xlarge, g3, hi1.4xlarge, hs1.8xlarge, i2.2xlarge, i2.4xlarge, i2.8xlarge, i2.xlarge, i2, i3.16xlarge, i3.2xlarge, i3.4xlarge, i3.8xlarge, i3.large, i3.xlarge, i3, m1.large, m1.medium, m1.small, m1.xlarge, m2.2xlarge, m2.4xlarge, m2.xlarge, m3.2xlarge, m3.large, m3.medium, m3.xlarge, m3, m4.10xlarge, m4.16xlarge, m4.2xlarge, m4.4xlarge, m4.large, m4.xlarge, m4, p2.16xlarge, p2.8xlarge, p2.xlarge, p2, p3.16xlarge, p3.2xlarge, p3.8xlarge, p3, r3.2xlarge, r3.4xlarge, r3.8xlarge, r3.large, r3.xlarge, r3, r4.16xlarge, r4.2xlarge, r4.4xlarge, r4.8xlarge, r4.large, r4.xlarge, r4, t1.micro, t2.2xlarge, t2.large, t2.medium, t2.micro, t2.nano
  tenancy: Dedicated, Host, NA, Reserved, Shared
  usagetype: APN1-BoxUsage:c1.medium, APN1-BoxUsage:c1.xlarge, APN1-BoxUsage:c3.2xlarge, APN1-BoxUsage:c3.4xlarge, APN1-BoxUsage:c3.8xlarge, APN1-BoxUsage:c3.large, APN1-BoxUsage:c3.xlarge, APN1-BoxUsage:c4.2xlarge, APN1-BoxUsage:c4.4xlarge, APN1-BoxUsage:c4.8xlarge, APN1-BoxUsage:c4.large, APN1-BoxUsage:c4.xlarge, APN1-BoxUsage:cc2.8xlarge, APN1-BoxUsage:cr1.8xlarge, APN1-BoxUsage:d2.2xlarge, APN1-BoxUsage:d2.4xlarge, APN1-BoxUsage:d2.8xlarge, APN1-BoxUsage:d2.xlarge, APN1-BoxUsage:g2.2xlarge, APN1-BoxUsage:g2.8xlarge, APN1-BoxUsage:g3.16xlarge, APN1-BoxUsage:g3.4xlarge, APN1-BoxUsage:g3.8xlarge, APN1-BoxUsage:hi1.4xlarge, APN1-BoxUsage:hs1.8xlarge, APN1-BoxUsage:i2.2xlarge, APN1-BoxUsage:i2.4xlarge, APN1-BoxUsage:i2.8xlarge, APN1-BoxUsage:i2.xlarge, APN1-BoxUsage:i3.16xlarge, APN1-BoxUsage:i3.2xlarge, APN1-BoxUsage:i3.4xlarge, APN1-BoxUsage:i3.8xlarge, APN1-BoxUsage:i3.large, APN1-BoxUsage:i3.xlarge, APN1-BoxUsage:m1.large, APN1-BoxUsage:m1.medium, APN1-BoxUsage:m1.xlarge, APN1-BoxUsage:m2.2xlarge, APN1-BoxUsage:m2.4xlarge, APN1-BoxUsage:m2.xlarge, APN1-BoxUsage:m3.2xlarge, APN1-BoxUsage:m3.large, APN1-BoxUsage:m3.medium, APN1-BoxUsage:m3.xlarge, APN1-BoxUsage:m4.10xlarge, APN1-BoxUsage:m4.16xlarge, APN1-BoxUsage:m4.2xlarge, APN1-BoxUsage:m4.4xlarge, APN1-BoxUsage:m4.large, APN1-BoxUsage:m4.xlarge, APN1-BoxUsage:p2.16xlarge, APN1-BoxUsage:p2.8xlarge, APN1-BoxUsage:p2.xlarge, APN1-BoxUsage:p3.16xlarge, APN1-BoxUsage:p3.2xlarge, APN1-BoxUsage:p3.8xlarge, APN1-BoxUsage:r3.2xlarge, APN1-BoxUsage:r3.4xlarge, APN1-BoxUsage:r3.8xlarge, APN1-BoxUsage:r3.large, APN1-BoxUsage:r3.xlarge, APN1-BoxUsage:r4.16xlarge, APN1-BoxUsage:r4.2xlarge, APN1-BoxUsage:r4.4xlarge, APN1-BoxUsage:r4.8xlarge, APN1-BoxUsage:r4.large, APN1-BoxUsage:r4.xlarge, APN1-BoxUsage:t1.micro, APN1-BoxUsage:t2.2xlarge, APN1-BoxUsage:t2.large, APN1-BoxUsage:t2.medium, APN1-BoxUsage:t2.micro, APN1-BoxUsage:t2.nano, APN1-BoxUsage:t2.small, APN1-BoxUsage:t2.xlarge, APN1-BoxUsage:x1.16xlarge, 
APN1-BoxUsage:x1.32xlarge, APN1-BoxUsage:x1e.32xlarge, APN1-BoxUsage, APN1-DataProcessing-Bytes, APN1-DedicatedUsage:c1.medium, APN1-DedicatedUsage:c1.xlarge, APN1-DedicatedUsage:c3.2xlarge, APN1-DedicatedUsage:c3.4xlarge, APN1-DedicatedUsage:c3.8xlarge, APN1-DedicatedUsage:c3.large, APN1-DedicatedUsage:c3.xlarge, APN1-DedicatedUsage:c4.2xlarge, APN1-DedicatedUsage:c4.4xlarge, APN1-DedicatedUsage:c4.8xlarge, APN1-DedicatedUsage:c4.large, APN1-DedicatedUsage:c4.xlarge, APN1-DedicatedUsage:cc2.8xlarge, APN1-DedicatedUsage:cr1.8xlarge, APN1-DedicatedUsage:d2.2xlarge, APN1-DedicatedUsage:d2.4xlarge, APN1-DedicatedUsage:d2.8xlarge, APN1-DedicatedUsage:d2.xlarge, APN1-DedicatedUsage:g2.2xlarge
  normalizationSizeFactor: 0.25, 0.5, 128, 144, 16, 1, 256, 2, 32, 4, 64, 72, 80, 8, NA
  instanceCapacity16xlarge: 1, 2
  instanceCapacity2xlarge: 4, 5, 8
  maxIopsBurstPerformance: 3000 for volumes <= 1 TiB, Hundreds
  instanceCapacity32xlarge: 1
  instanceCapacityXlarge: 11, 16, 18, 8
  licenseModel: Bring your own license, NA, No License required
  currentGeneration: No, Yes
  preInstalledSw: NA, SQL Ent, SQL Std, SQL Web
  location: AWS GovCloud (US), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon)
  instanceCapacity9xlarge: 2
  instanceCapacityMedium: 32
  operation: Hourly, LoadBalancing:Application, LoadBalancing:Network, LoadBalancing, NatGateway, RunInstances:0002, RunInstances:0006, RunInstances:000g, RunInstances:0010, RunInstances:0102, RunInstances:0202, RunInstances:0800, RunInstances, Surcharge

Tuesday, October 17, 2017

Finding out more on aws instances

Hi all,

There is a quick way for you to print all of your AWS instances, and here is some small python code to help. However, you do need to install the boto3 library before everything starts to work; please read the documentation to set things up.

The small function show_ec2_instances will help you print out all of the instances. Enjoy!
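The original snippet did not survive the page formatting, so here is a minimal sketch of what show_ec2_instances might look like; the Name-tag lookup and the default region are assumptions:

```python
def format_instance(name: str, state: str) -> str:
    return "  * {}: {}".format(name, state)

def show_ec2_instances(region_name: str = "us-east-1") -> None:
    # boto3 is imported lazily so format_instance stays usable without it
    import boto3
    ec2 = boto3.resource("ec2", region_name=region_name)
    for instance in ec2.instances.all():
        tags = {t["Key"]: t["Value"] for t in (instance.tags or [])}
        print(format_instance(tags.get("Name", instance.id), instance.state["Name"]))
```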

(python3-env1) nasilemak:aws hiuy$ python3
Python 3.5.2 (default, Oct 11 2016, 04:59:56)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from aws_lib import *
>>> show_ec2_instances()
  * linux1: stopped
  * linux2: stopped
  * linux3: stopped