Docker Swarm

Create a Manager Node:

$docker-machine create -d amazonec2 --swarm --amazonec2-region ap-southeast-1 --amazonec2-zone a --amazonec2-vpc-id vpc-12112 --amazonec2-ssh-keypath [SSH-PRIV-KEY-FILE] master

Note: The corresponding public key should also be present in the same directory.

Create 2 Slave Nodes:

$docker-machine create -d amazonec2 --swarm --amazonec2-region ap-southeast-1 --amazonec2-zone a --amazonec2-vpc-id vpc-121212 --amazonec2-ssh-keypath [SSH-PRIV-KEY-FILE] slave1
$docker-machine create -d amazonec2 --swarm --amazonec2-region ap-southeast-1 --amazonec2-zone a --amazonec2-vpc-id vpc-121212 --amazonec2-ssh-keypath [SSH-PRIV-KEY-FILE] slave2

To point docker commands to the Manager machine:

$docker-machine env master
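
Note: The env command only prints export statements; apply them to the current shell (bash assumed) with:

$eval $(docker-machine env master)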

Log in to master, fetch its public IP from the instance metadata, and configure it as the Manager node:

$docker-machine ssh master

$curl http://169.254.169.254/latest/meta-data/public-ipv4

$docker swarm init --advertise-addr [PUBLIC-IP]

To add a worker to this swarm, run the following command:

$docker swarm join \
--token SWMTKN-1-34t11111111111111111021crh0xwoktwxzwb \
PUBLIC-IP:2377

To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.
Log in to each slave and execute the above join command to add it as a worker.
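
Once the slaves have joined, the swarm can be verified from the manager:

$docker node ls

If the worker join token is misplaced, it can be printed again on the manager with “docker swarm join-token worker”.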

 


DEPLOY A SERVICE


 

NOTE: To resolve containers by the service name you provide, you have to create a separate network with the overlay driver and use that network when creating the service (use the --publish option to expose the port outside).

$docker network create --driver overlay my_net
$docker service create --replicas 2 --network my_net -p 80:80 --name web --mount type=bind,src=/etc/hostname,dst=/usr/share/nginx/html/index.html,readonly nginx
  • This will resolve the name “web” to a virtual IP inside the containers, but the name will not be resolvable outside the containers (the published port, however, is reachable from outside; see the check below).
  • This will launch Nginx containers on both the master and slave nodes.
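
Since port 80 is published, the service itself can be checked from outside via any node's public IP (assuming the EC2 security group allows port 80; the IP below is a placeholder):

$curl http://[NODE-PUBLIC-IP]/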

Log in to one of the containers:

$apt-get update
$apt-get install dnsutils curl net-tools
$nslookup web
$ifconfig

The IPs shown by nslookup and ifconfig are different; the IP from nslookup is the service's virtual IP.
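
The virtual IP can also be cross-checked from the manager (a sketch; the template path may vary across Docker versions):

$docker service inspect --format '{{json .Endpoint.VirtualIPs}}' web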

This will round-robin between containers running on the same host only.
Status Check of Service:

$docker service ls
$docker service ps web

To scale up a service:

$docker service scale web=5

To remove the whole setup:

$docker service rm web
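
If the overlay network is no longer needed, it can be removed as well (assuming no other services are attached to it):

$docker network rm my_net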

 

 

Swarm has a built-in load balancer, so why another load balancer?

Swarm does not have:

  1. SSL Termination
  2. Content Based routing
  3. Access control and authorization
  4. Rewrites and redirects.
  5. More on Nginx: advanced LB algorithms, multi-protocol support, advanced logging, limits, scripting, security.
    (Native ModSecurity is available for Nginx.)

[BLOG Incomplete]

Install MySQL Plugin for New Relic in a Minute

To use this plugin, you must have:

  • a Java Runtime Environment (JRE) of 1.6 or higher
  • at least one database to monitor (MySQL 5.0 or higher)
  • a New Relic account

New Relic Platform Installer (NPI) is a simple, lightweight command-line tool that helps you easily download, configure, and manage New Relic Platform plugins.

Plugin for a generic Linux OS (here, openSUSE)

LICENSE_KEY=4eeeeeeeeeeeeeeeeeeeeeeee2e bash -c "$(curl -sSL https://download.newrelic.com/npi/release/install-npi-linux-x64.sh)"
npi install com.newrelic.plugins.mysql.instance

Configuration File

#vim ~/newrelic-npi/plugins/com.newrelic.plugins.mysql.instance/newrelic_mysql_plugin-2.0.0/config/plugin.json

 

{
  "agents": [
    {
      "name"    : "Host Name on New Relic UI",
      "host"    : "localhost or RDS endpoint",
      "metrics" : "status,newrelic",
      "user"    : "DB_USER_NAME",
      "passwd"  : "DB_PASSWORD"
    }
  ]
}
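
Before starting the plugin, it helps to confirm that the configured credentials can query status metrics (a quick check, assuming the mysql client is installed and DB_USER_NAME is the same placeholder as above):

#mysql -u DB_USER_NAME -p -h localhost -e "SHOW GLOBAL STATUS LIKE 'Uptime';"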

Start Plugin:

#cd /root/newrelic-npi/plugins/com.newrelic.plugins.mysql.instance/newrelic_mysql_plugin-2.0.0
#java -Xmx128m -jar plugin.jar
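
To keep the plugin running after the SSH session ends, one option is to background it (a sketch using nohup; an init script or process manager is preferable in production):

#nohup java -Xmx128m -jar plugin.jar > plugin.log 2>&1 &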

 

 

GitHub = https://github.com/newrelic-platform/newrelic_mysql_java_plugin
Plugin Home Page = https://rpm.newrelic.com/accounts/748441/plugins/directory/52

Run commands in OpsWorks instances using a Chef recipe

Using Custom Recipes in OpsWorks

My Recipe

Create the following directory structure:

myCookbookRepo -> myCustomCookbook -> recipes -> myCustomRecipe.rb

The name “recipes” must not be changed; the remaining names can be anything we like.
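
For example, the skeleton can be created with:

$mkdir -p myCookbookRepo/myCustomCookbook/recipes
$cd myCookbookRepo/myCustomCookbook/recipes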

vim myCustomRecipe.rb

execute 'bundle install' do
  cwd '/srv/www/testapp/current'
end

Save it.

ZIP the directory myCookbookRepo.zip and upload to S3 Bucket.
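
One way to package and upload it (this sketch keeps the cookbook directory at the top level of the archive; [YOUR-BUCKET] is a placeholder and the AWS CLI is assumed to be configured):

$cd myCookbookRepo && zip -r ../myCookbookRepo.zip myCustomCookbook && cd ..
$aws s3 cp myCookbookRepo.zip s3://[YOUR-BUCKET]/myCookbookRepo.zip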

In OpsWorks, click “Stack”, click “Stack Settings”, then click “Edit”.

Paste the AWS S3 URL for myCookbookRepo.zip, along with the Access Key and Secret Key.

Now click “Run Command”, select “Execute Recipes” from the Command drop-down list, and enter the following in the “Recipes to execute” box:

cookbook::recipe (e.g. myCustomCookbook::myCustomRecipe)

Click “Execute Recipes”.

DONE!

Reference:
https://docs.getchef.com/resource_execute.html
http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-commands.html

Execute a “rake task” in OpsWorks

All the OpsWorks instances have gems installed in 2 locations:

  1. System-Wide location (/usr/local/lib/ruby/gems/2.0.0/gems)
  2. User-Home location, which in OpsWorks is the deploy user (/home/deploy/.bundler/galaxylifecms/ruby/2.0.0/gems)

The gems listed in the Gemfile are installed in the User-Home location by Bundler.
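
To compare the two locations on an instance (a sketch; [APP_NAME] and [GEM_NAME] are placeholders):

#gem environment gemdir
#cd /srv/www/[APP_NAME]/current && bundle show [GEM_NAME]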

If you need to execute a custom rake task such as

#rake my_custom_script

Chances are high that you will run into gem dependency errors even though you have listed all the required gems in the Gemfile.

To verify whether the gem in the error has been installed by Bundler:

# grep gem_name Gemfile.lock

If it exists, then the issue is that the custom rake task is picking up the wrong environment, i.e. the System-Wide location and not the User-Home location.

Solution :

#bundle exec rake my_custom_script

Running under “bundle exec” ensures that the custom rake task picks up the gems from the Bundler environment.
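
Note that “bundle exec” needs to run from the application's current directory so Bundler can find the Gemfile (the app path below is a placeholder):

#cd /srv/www/[APP_NAME]/current && bundle exec rake my_custom_script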

Customize Nginx config on Ruby 1.9 Elastic Beanstalk

Requirement: Set up a URL redirection in the Nginx configuration.

Example :

download.appygeek.com => https://play.google.com/store/apps/details?id=com.mobilesrepublic.appygeek

On Beanstalk with Phusion Passenger Standalone [3.0.17] (Ruby 1.9), Nginx customization is highly discouraged. However, to accomplish this, the following workaround is used:

Beanstalk regenerates the Nginx configuration file from an ERB template each time it is restarted. Hence the configuration change has to be made in the ERB template instead of the generated Nginx configuration.

Location of the ERB Template file = /usr/share/ruby/1.9/gems/1.9.1/gems/passenger-3.0.17/lib/phusion_passenger/templates/standalone/config.erb

The following contents are appended before the last closing curly bracket:

server {
 listen 0.0.0.0:80;
 server_name download.appygeek.com;
 root '<%= app[:root] %>/public/test';
 index index.html;
}

Save the file; the Phusion Passenger/Nginx configuration is regenerated and Passenger is restarted:

/etc/init.d/passenger restart

Create the directory /var/app/current/public/test and a file called index.html in it with the below contents:

<html>
<head>
<meta http-equiv="refresh" content="0; url=https://play.google.com/store/apps/details?id=com.mobilesrepublic.appygeek" />
</head>
</html>
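
Once Passenger has restarted and the file is in place, the redirect page can be verified from the instance (a sketch using a Host header, in case DNS for download.appygeek.com does not point here yet):

curl -H "Host: download.appygeek.com" http://localhost/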

With this, Nginx is customized.

Note: For a permanent fix, we have to bake a custom AMI and launch the Elastic Beanstalk environment from it.

 

Reference: the Nginx customization thread on the AWS Forums.

 

Full Access to a specific S3 Bucket except DeleteObject

{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::testbucket-unni"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
              "s3:PutObject",
              "s3:GetObject"
              ],
            "Resource": [
                "arn:aws:s3:::testbucket-unni/*"
            ]
        }
    ]
}

Details:

  1. Get and List actions are granted on “arn:aws:s3:::*” to enable the console view.
  2. The ListBucket action is granted on the exact ARN (without the star) “arn:aws:s3:::testbucket-unni”, which protects other buckets whose names start with “testbucket-unni”.
  3. PutObject and GetObject actions are granted on “arn:aws:s3:::testbucket-unni/*”, i.e. only on objects inside the bucket (see the quick checks below).
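
A quick way to verify the behaviour with credentials attached to this policy (assuming the AWS CLI is configured with those credentials):

$aws s3 ls s3://testbucket-unni                     # allowed (ListBucket)
$aws s3 cp test.txt s3://testbucket-unni/test.txt   # allowed (PutObject)
$aws s3 cp s3://testbucket-unni/test.txt copy.txt   # allowed (GetObject)
$aws s3 rm s3://testbucket-unni/test.txt            # denied (DeleteObject is not granted)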