To Understand IF (Identity Federation)
http://aws.typepad.com/aws/2011/08/aws-identity-and-access-management-now-with-identity-federation.html

AWS Security Token Service API Actions

AssumeRole (temp creds for up to 1 hr)

Returns a set of temporary security credentials. You call this API using the credentials of an existing IAM user. This API is useful to grant AWS access to users who do not have an IAM identity (that is, to federated users). It is also useful to allow existing IAM users to access AWS resources that they don’t already have access to, such as resources in another account. For more information, see Creating Temporary Security Credentials for Delegating API Access.

AssumeRoleWithWebIdentity

Returns a set of temporary security credentials for federated users who are authenticated using a public identity provider like Login with Amazon, Facebook, or Google. This API is useful for creating mobile applications or client-based web applications that require access to AWS but where users do not have their own AWS or IAM identity. For more information, see Creating a Role to Allow AWS Access for the Mobile App.

GetFederationToken (temp creds for up to 36 hr)

Returns a set of temporary security credentials for federated users. This API differs from AssumeRole in that the default expiration period is substantially longer (up to 36 hours instead of up to 1 hour); this can help reduce the number of calls to AWS because you do not need to get new credentials as often. For more information, see Creating Temporary Security Credentials to Enable Access for Federated Users.

GetSessionToken

Returns a set of temporary security credentials to an existing IAM user. This API is useful to provide enhanced security, such as to make AWS requests when MFA is enabled for the IAM user. For more information, see Creating Temporary Security Credentials to Enable Access for IAM Users.
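As a sketch of GetSessionToken with MFA, assuming the AWS CLI is installed and configured with the IAM user's long-term keys (the account ID and MFA device name below are placeholders):

```shell
# Placeholders: substitute your own account ID and MFA device name.
aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/example-user \
  --token-code 123456 \
  --duration-seconds 3600
```

The response contains an AccessKeyId, SecretAccessKey, and SessionToken to use for subsequent requests.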
Information Available in Requests for Federated Users

Federated users are users who are authenticated using a system other than IAM. For example, a company might have an application for use in-house that makes calls to AWS. It might be impractical to give an IAM identity to every corporate user who uses the application. Instead, the company might use a proxy (middle-tier) application that has a single IAM identity. This proxy application first authenticates individual users using the corporate network; the proxy application then uses its IAM identity to get temporary security credentials for individual users and gives them to the user’s local copy of the corporate application. The user’s local copy of the corporate application can use these temporary credentials to call AWS.

Similarly, you might create an app for a mobile device in which the app needs to access AWS resources. In that case, you might use web identity federation, where the app authenticates the user using a well-known identity provider like Login with Amazon, Facebook, or Google. The app can then use the user’s authentication information from these providers to get temporary security credentials for accessing AWS resources.
Using Your Company’s Own Authentication System to Grant Access to AWS Resources
1. http://docs.aws.amazon.com/STS/latest/UsingSTS/STSUseCases.html#IdentityBrokerApplication

2. Ways to Get Temporary Security Credentials
http://docs.aws.amazon.com/STS/latest/UsingSTS/Welcome.html#AccessingSTS

3. Permissions in Temporary Security Credentials for Federated Users
http://docs.aws.amazon.com/STS/latest/UsingSTS/sts-controlling-feduser-permissions.html
Calls to the AssumeRole action are made using the long-term security credentials of an IAM user. The call must specify the ARN of the role to assume. The IAM user whose credentials are used to make the call must, at a minimum, have sts:AssumeRole permission and must be listed as a principal in the role being assumed. By default, the role being assumed determines the permissions granted to the temporary security credentials; the permissions of the IAM user that makes the AssumeRole call have no effect on the permissions granted to the temporary credentials returned by the API. Optionally, the call can include a policy that further restricts the permissions of the temporary security credentials. The resulting credentials are based on the intersection of the role’s permissions and the passed policy, which means the passed policy can never escalate the permissions defined in the role.
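A minimal AssumeRole sketch via the AWS CLI (the role ARN, session name, and optional scope-down policy file are placeholders; the caller's credentials need sts:AssumeRole on that role):

```shell
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/example-federation-role \
  --role-session-name example-session \
  --policy file://scope-down-policy.json \
  --duration-seconds 3600
```

The returned credentials carry the intersection of the role's policy and scope-down-policy.json.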

 

How do I choose which API?

When deciding which API to use, you should consider what services are required for your use case and where you want to maintain the policies associated with your federated users.

How do you want to maintain the policies associated with your delegated users?

If you prefer to maintain permissions solely within your organization, GetFederationToken is the better choice. Since the base permissions are derived from the IAM user making the request and must cover your entire delegated user base, that IAM user requires the union of the permissions of all federated users.

If you prefer to maintain permissions within AWS, choose AssumeRole, since the base permissions for the temporary credentials are derived from the policy on the role. As with a GetFederationToken request, you can optionally scope down permissions by attaching a policy to the request. With this method, the IAM user credentials used by your proxy server require only the ability to call sts:AssumeRole.
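For comparison, a GetFederationToken sketch (the federated user name and policy file are placeholders; 129600 seconds is the 36-hour maximum mentioned above):

```shell
aws sts get-federation-token \
  --name example-fed-user \
  --policy file://fed-user-policy.json \
  --duration-seconds 129600
```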

 

The Top Ten DevOps “Operational Requirements”

DevOpsGuys

Join us for “The Top 10 DevOps Operational Requirements” –  http://www.brighttalk.com/webcast/534/98059 via @BrightTALK

One of the key tenets of DevOps is to involve the Operations teams in the full software development life cycle (SDLC) and, in particular, to ensure that “operational requirements” (“ORs”, formerly known as “non-functional requirements”, “NFRs”) are incorporated into the design and build phases.

To make your life easier, the DevOpsGuys have scoured the internet to compile this list of the Top Ten DevOps Operational Requirements (OK, it was really just chatting with some of the guys down the pub, BUT we’ve been doing this a long time and we’re pretty sure that if you deliver on these, your Ops people will be very happy indeed!).

#10 – Instrumentation

Would you drive a car with a blacked out windscreen and no speedo? No, didn’t think so, but often Operations are expected to run applications…


Splunk: Centralized Machine Data Storage

Splunk Installation steps on Linux

http://docs.splunk.com/Documentation/Splunk/5.0/Installation/InstallonLinux

Installing Splunk

Download it from the Website (create a splunk account)

sudo dpkg -i splunk-5.0-140868-linux-2.6-amd64.deb

Output –

———————————————————————-
Splunk has been installed in:
/opt/splunk

To start Splunk, run the command:
/opt/splunk/bin/splunk start
To use the Splunk Web interface, point your browser at:
http://manager-desktop:8000
Complete documentation is at http://docs.splunk.com/Documentation/Splunk
———————————————————————-
Installation DONE 🙂

 

 

splunkforwarders

 

 

Forwarding Logs to the Central Server

To get remote machine data into Splunk, there are two roles: forwarders (Splunk clients) and receivers (the Splunk server).
Configuring the Splunk Server (The Receiver) :
Use Splunk Manager to set up a receiver:

1. Log into Splunk Web as admin on the server that will be receiving data from a forwarder.

2. Click the Manager link in the upper right corner.

3. Select Forwarding and receiving in the Data area.

4. Click Add new in the Receive data section.

5. Specify which TCP port you want the receiver to listen on (the listening port, also known as the receiving port). For example, if you enter “9997,” the receiver will receive data on port 9997. By convention, receivers listen on port 9997, but you can specify any unused port. You can use a tool like netstat to determine what ports are available on your system. Make sure the port you select is not in use by splunkweb or splunkd.

6. Click Save. You must restart Splunk to complete the process.
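As an alternative to the Web UI steps above, the same receiver can be enabled from the Splunk CLI (admin/changeme is the default login; 9997 is the conventional receiving port):

```shell
/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme
/opt/splunk/bin/splunk restart
```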
Quick Navigation

GO TO HOME PAGE : Click App -> Search
Create Users : Click Manager -> Access Controls
Include Data Inputs : Manager -> Data inputs -> Files & directories

Notes

If any of the universal forwarders will be running on a different operating system from the receiver, install the app for the forwarder’s OS on the receiver. For example, assume the receiver in the diagram above is running on a Linux box. In that case, you’ll need to install the Windows app on the receiver. You might need to install the *nix app, as well. — However, since the receiver is on Linux, you probably have already installed that app. Details and provisos regarding this can be found here.

After you have downloaded the relevant app, remove its inputs.conf file before enabling it, to ensure that its default inputs are not added to your indexer. For the Windows app, the location is: $SPLUNK_HOME/etc/apps/windows/default/inputs.conf.
Point Splunk at a data source. Tell Splunk a bit about the source. That source then becomes a data input to Splunk. Splunk begins to index the data stream, transforming it into a series of individual events. You can view and search those events right away. If the results aren’t exactly what you want, you can tweak the indexing process until you’re satisfied.

The data can be on the same machine as the Splunk indexer (local data), or it can be on another machine altogether (remote data). You can easily get remote data into Splunk, either by using network feeds or by installing Splunk forwarders on the machines where the data originates. Forwarders are lightweight versions of Splunk that consume data and then forward it on to the main Splunk instance for indexing and searching. For more information on local vs. remote data, see “Where is my data?”.

A Splunk instance that forwards data to another Splunk instance (an indexer or another forwarder) or to a third-party system is called a FORWARDER.

A Splunk instance that receives data from a forwarder is called a RECEIVER.

There are three types of forwarders:

The universal forwarder is a streamlined, dedicated version of Splunk that contains only the essential components needed to forward data.
A heavy forwarder is a full Splunk instance, with some features disabled to achieve a smaller footprint.
A light forwarder is also a full Splunk instance, with most features disabled to achieve as small a footprint as possible. The universal forwarder, with its even smaller footprint yet similar functionality, supersedes the light forwarder for nearly all purposes.

In most respects, the universal forwarder represents the best tool for forwarding data to indexers. Its main limitation is that it forwards only unparsed data. Therefore, you cannot use it to route data based on event contents. For that, you must use a heavy forwarder

Universal Forwarder Vs full Splunk

The universal forwarder’s sole purpose is to forward data. Unlike a full Splunk instance, you cannot use the universal forwarder to index or search data. To achieve higher performance and a lighter footprint, it has several limitations:

The universal forwarder has no searching, indexing, or alerting capability.
The universal forwarder does not parse data.
The universal forwarder does not output data via syslog.
Unlike full Splunk, the universal forwarder does not include a bundled version of Python.

@Splunk Client

Install universal forwarders on each machine that will be generating data. These will forward the data to the receiver.
Download Link
http://www.splunk.com/download/universalforwarder

#wget -O splunkforwarder-5.0-140868-linux-2.6-amd64.deb 'http://www.splunk.com/page/download_track?file=5.0/universalforwarder/linux/splunkforwarder-5.0-140868-linux-2.6-amd64.deb&ac=&wget=true&name=wget&typed=releases'
#dpkg -i splunkforwarder-5.0-140868-linux-2.6-amd64.deb

Output :::

Selecting previously deselected package splunkforwarder.
(Reading database … 160868 files and directories currently installed.)
Unpacking splunkforwarder (from splunkforwarder-5.0-140868-linux-2.6-amd64.deb) …
Setting up splunkforwarder (5.0-140868) …
———————————————————————-
Splunk has been installed in:
/opt/splunkforwarder

To start Splunk, run the command:
/opt/splunkforwarder/bin/splunk start

Complete documentation is at http://docs.splunk.com/Documentation/Splunk
———————————————————————-

Configuration steps

After you start the universal forwarder and accept the license agreement, follow these steps to configure it:

1. Configure universal forwarder to auto-start:

#splunk enable boot-start

#cd /opt/splunkforwarder/bin
./splunk add forward-server <host>:<port> -auth <username>:<password>

For <host>:<port>, substitute the host and receiver port number of the receiver. For example, splunk_indexer.acme.com:9995.

Alternatively, if you have many forwarders, you can use an outputs.conf file to specify the receiver. For example:

[tcpout:my_indexers]
server= splunk_indexer.acme.com:9995

You can create this file once, then distribute copies of it to each forwarder.

Example
./splunk add forward-server 192.168.1.21:9995 -auth admin:changeme

where
9995 is the port I had opened on the server (receiver),
user/passwd is the forwarder's username and password, which by default is admin/changeme,
IP is that of the server.

Restart Splunk Forwarder
#cd /opt/splunkforwarder/bin
#./splunk stop
#./splunk start

Test Forwarder connection:
/opt/splunkforwarder/bin/splunk list forward-server
Add Data:
/opt/splunkforwarder/bin/splunk add monitor /path/to/app/logs/ -index main -sourcetype %app%
Where /path/to/app/logs/ is the path to application logs on the host that you want to bring into Splunk, and %app% is the name you want to associate with that type of data

This will create a file: inputs.conf in /opt/splunk/etc/apps/search/local/ — here is some documentation on inputs.conf:
http://docs.splunk.com/Documentation/Splunk/4.3.2/admin/Inputsconf
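For reference, the monitor stanza that the command above generates in inputs.conf looks roughly like this (the path and sourcetype are the same placeholders used in the command):

```
[monitor:///path/to/app/logs]
index = main
sourcetype = %app%
```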

Note: System logs in /var/log/ are covered in the configuration part of Step 7. If you have application logs in /var/log/*/

Example
./splunk add monitor /var/log/ -index main -sourcetype %VishnuMachine%
output
Added monitor of ‘/var/log’.
Final Setting on the Splunk server (Receiver)

Manager -> Add data -> TCP -> Add new
TCP Port : 8088 (the port number configured on the client, i.e. the forwarder)
Set sourcetype : VishnuMachine (as mentioned in the command)

Programming

Notes !!!!

Application Server aka Container
-It's like a shopping mall, and the apps are like stores.
-Just as the shops don't have to worry about parking, waste management, security, etc.,
-similarly the developer doesn't have to worry about database connectivity, session integrity, security, etc.
-A lot of libraries and services come with the container.

Tomcat
A Java driven Web Application Server
Provides support for Java web applications
– JSP Documents
– WAR (Web Application aRchive) – Compressed Form of the application
– Self-contained HTTP server
Fig 1.
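As a sketch of the WAR deployment mentioned above (the Tomcat path varies by installation; /opt/tomcat and myapp.war are placeholders):

```shell
# Drop the WAR into Tomcat's webapps directory; Tomcat auto-deploys it.
cp myapp.war /opt/tomcat/webapps/
# The app is then served under /myapp on Tomcat's port (8080 by default).
```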

JDK and JRE
JDK – development purpose, includes execution environment also
JRE – purely runtime environment
—–
JDK -includes JRE, a set of API classes, the Java compiler, and additional files to write apps
JRE -comprises a set of base classes, the base Java API
————-
JDK – Developers Use This
JRE – End users Use This
——–
Fig 2.
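A quick way to see the distinction on a machine: the JRE ships the java launcher, while only the JDK ships the javac compiler (output varies by vendor and version).

```shell
java -version    # present with either a JRE or a JDK
javac -version   # present only with a JDK
```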

http://blog.engineering.kiip.me/post/12288961849/ec2-to-vpc-transition-worth-doing

Convert the Tamil Tutorials!!!

Cpanel

Restart Apache

#apachectl restart or /usr/local/apache/bin/apachectl restart

Virtual Host entry in Apache

/etc/httpd/conf/httpd.conf

Apache Error Log file Location

/etc/httpd/logs/error_log

To activate Cpanel License

You can check if the license is valid at http://verify.cpanel.net  by entering the IP address it is for. Then, run

/usr/local/cpanel/cpkeyclt

from the command line as root user to refresh the license.
To change the IP under the cPanel license, please open a ticket with our customer service department.

SVN Intro – Branching & Merging


We will create the SVN REPO using the

  • #svnadmin create wow.com   – @SVN Server

Now create 3 directory called Trunk, Branches, Tags

The working code is initially placed in the TRUNK dir

  • #vim wow.com/conf/passwd
  • abc = 123   (under the [users] section)
  • #svnserve -d

[Note: now a user abc with password 123 is created.]

@Dev

  • #telnet 22.22.22.22 3690
  • #svn co svn://22.22.22.22:/root/wow.com wow.com
  • #cd wow.com
  • #mkdir trunk branches tags
  • #svn add *
  • #svn commit -m "Added branches tags trunk"
  • #svn import ../repo/common/dbbackup svn://22.22.22.22:/root/wow.com/trunk -m "Adding Code to Trunk"
  • #svn cp svn://22.22.22.22:/root/wow.com/trunk svn://22.22.22.22:/root/wow.com/branches/sanoob -m "Giving Code to Developer Sanoob"
  • #svn cp svn://22.22.22.22:/root/wow.com/trunk svn://22.22.22.22:/root/wow.com/branches/ammukutty -m "Giving Code to Developer Ammukutty"
  • #svn cp svn://22.22.22.22:/root/wow.com/trunk svn://22.22.22.22:/root/wow.com/branches/production -m "Placing Code on Production"

@Sanoob’s Machine

  • #svn co svn://22.22.22.22:/root/wow.com/branches/sanoob
Creating a new file (Hi.txt), modifying it, and committing it.
  • #echo Hiiiiiiiiiiiiiii > Hi.txt
  • #svn add Hi.txt
  • #svn ci -m "Deleted all and added Hi"

[NOTE: After editing a file, an "svn ci" has to be made to reflect the change on the central repo]

  • #svn status         – shows the added, modified, or deleted status – mostly used on dev machines
  • #echo rev2 unni > Hi.txt
  • #svn ci -m "changes to Hi.txt"

On a different machine where a checkout has already been done, the following command will update the working copy.

  • #svn up Hi.txt   – updates the file to the latest revision according to the SVN repo on the SVN server

[NOTE: “SVN INFO” will show an updated Revision Number only if “SVN UP” is done]

To revert to a revision

  • #svn up -r 7             – Mass Updating

OR

  • #svn up [filename] -r 7  – Only one file is Updated

@Ammukutty’s Machine

  • #svn co svn://22.22.22.22:/root/wow.com/branches/ammukutty

@Production Server – always use “SVN UP”…….and never “SVN CO”

To DELETE a file both locally & repo simultaneously –  @ PRODUCTION SERVER

  • #svn delete testdir
  • #svn ci -m "Deleted testdir"

SVN Merging – @Sanoob (NOT applicable to merge multiple developers code)

  • #svn switch svn://22.22.22.22:/root/wow.com/branches/production
  • #svn up
  • #svn merge svn://22.22.22.22:/root/wow.com/branches/production svn://22.22.22.22:/root/wow.com/branches/sanoob
  • #svn ci -m "Merged Production with sanoob"

SVN Merging – @Production

  • #svn merge -r<NO1.>:<NO2.> path/of/developers/trunk

where

NO1. – revision number of the current project – PRODUCTION’s “Last Changed Revision”
NO2. – revision number of the developer project – “Last Changed Revision”

More Info

SVN Directory Structure
/trunk – Development sandbox for standard scenarios
/branches – Long Isolated work; Major Structural Changes
/tags – Stable, Numbered Versions Ready for Release
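Following the directory layout above, cutting a stable tag is just another cheap svn cp (the repo URL and version number follow the hypothetical examples in this post):

```shell
svn cp svn://22.22.22.22:/root/wow.com/trunk \
       svn://22.22.22.22:/root/wow.com/tags/release-1.0 \
       -m "Tagging release 1.0"
```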

SVN Status –
!  – a file is deleted locally when compared to SVN Repo
A – a file is added to SVN Repo
U – a file is updated
C – a file conflict

Quick MySQL Commands

To Login into the MySQL

#mysql -u username -ppassword

and if its RDS :

#mysql -h rds-endpoint/dns-name/ip -u username -ppassword

To List Databases

mysql>show databases;

To View tables

mysql>use databasename;
mysql>show tables;

To View Contents Inside the Table

mysql>select * from tablename;

To Delete Database

mysql>drop database dbname;

To create Database user (we also need to grant permission to a db to gain access for the new user.)

mysql>create user 'UNNI'@'ipaddress/%/localhost' identified by 'passwd3@1';

To Grant Permission to a Database for a User and Create the User at the same Time

mysql>grant all on databasename.* to 'UNNI'@'ipaddress/%/localhost' identified by 'password';

[NOTE– To provide access from all IP Address use ‘%’ instead of ipaddress. By default ‘localhost’ is the value that accepts on a normal mysql installation on a standalone EC2 machine]

To List out all Database Users

mysql>SELECT user,host FROM mysql.user;

To Create a Database

mysql>create database UNNI;

To restore a specific Database in Mysql (database UNNI has to be created already)

#mysql -u username -ppassword UNNI < UNNI.sql

[NOTE: Database UNNI had to be created before restore]

To dump/backup a specific Database in Mysql

#mysqldump -u username -ppassword UNNI > UNNI.sql

[NOTE: Database UNNI is backed up into UNNI.sql]
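A couple of related mysqldump variants (table and file names are placeholders):

```shell
# Dump a single table from database UNNI
mysqldump -u username -ppassword UNNI tablename > UNNI_tablename.sql

# Dump every database on the server into one file
mysqldump -u username -ppassword --all-databases > all_databases.sql
```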

To know the permission of Mysql User

mysql>show grants for 'unni'@'ipaddress/%/localhost';

Get MySQL Server Info

mysql> \s

To list all mysql admin tables

mysql> show tables from mysql;

To list mysql variables related to Errors

mysql> use dbname;
mysql> show variables like '%err%';

Deleting Users

To delete users from the MySQL database use the DROP command.

 mysql>DROP USER user@host;

The command in turn removes the user record from the mysql.user table.

Like the CREATE USER command, the DROP USER command was added in MySQL 5.0.2. In previous versions of MySQL you must revoke the user’s privileges first, delete the record from the user table manually, and then issue the FLUSH PRIVILEGES command.

mysql>DELETE FROM mysql.user WHERE User='technofriends' AND Host='localhost';
mysql>FLUSH PRIVILEGES;

General Notes

  • There is no concept in MySQL of an “owner” of a database or its objects, as there is in MS Access and MS SQL Server. I surmise this from the lack of an “owner” field anywhere in the mysql system tables.

Create a Read-Only User in MySQL

create user 'new_user'@'%' identified by 'my_tough_passwd';
GRANT SELECT, SHOW VIEW ON database_name.* TO new_user@'%' IDENTIFIED BY 'my_tough_passwd';
FLUSH PRIVILEGES;

To show current user

mysql>select current_user();

Find if a query is doing a full table scan

mysql> show full processlist;
| 7 | root | ip-10-142-159-56:50960 | mydb | Sleep | 0 | | NULL |
| 8 | root | ip-10-142-191-57:60270 | mydb | Query | 0 | Sending data | SELECT `mycustomtable`.* FROM `mycustomtable` WHERE `mycustomtable`.`column_id` = 8373 LIMIT 1 |
| 9 | root | ip-10-138-58-103:35042 | mydb | Sleep | 0 | | NULL
mysql> explain SELECT `mycustomtable`.* FROM `mycustomtable` WHERE `mycustomtable`.`column_id` = 8373 LIMIT 1;
+----+-------------+-----------------+------+---------------+------+---------+------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------------+------+---------------+------+---------+------+-------+-------------+
| 1 | SIMPLE | mycustomtable | ALL | NULL | NULL | NULL | NULL | 11906 | Using where |
+----+-------------+-----------------+------+---------------+------+---------+------+-------+-------------+
1 row in set (0.00 sec)

The ALL in the type column means this query is doing a full table scan.
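A sketch of one common fix, assuming column_id is not already indexed: add an index on the filtered column and re-run EXPLAIN, and the type column should change from ALL to something like ref.

```
mysql> CREATE INDEX idx_column_id ON mycustomtable (column_id);
mysql> EXPLAIN SELECT * FROM mycustomtable WHERE column_id = 8373 LIMIT 1;
```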

 

Restore a dump and you get the error: “Got a packet bigger than ‘max_allowed_packet’ bytes”

 

mysql>show global variables where variable_name='max_allowed_packet';
mysql>SET GLOBAL max_allowed_packet=1073741824;
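Note that SET GLOBAL does not survive a server restart. To make the change permanent, the variable can also be set in my.cnf (the file path varies by distro, e.g. /etc/mysql/my.cnf) followed by a MySQL restart:

```
[mysqld]
max_allowed_packet = 1G
```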

Quick Fixes

 

1.Kill MySQL sessions which are in sleep state

a. Login to the DB server

b. Execute the below command

echo "select ID FROM INFORMATION_SCHEMA.PROCESSLIST where Command='Sleep' ;" | mysql -u username -p'password' | awk '{ print "kill "$1";"}' > kill.sql

c. Execute the kill.sql in the DB

#mysql -u username -p'password'
mysql> source kill.sql

2.Error While Taking dump from mysql 5.6

mysqldump: Couldn’t execute ‘SET OPTION SQL_QUOTE_SHOW_CREATE=1’: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘OPTION SQL_QUOTE_SHOW_CREATE=1’ at line 1 (1064)

The reason for this is that MySQL 5.6 has removed support for “SET OPTION”, and your MySQL client tools are probably an older version, most likely 5.5 or 5.1. There is more info about this issue on the MySQL bugs website. The quickest solution is to update your MySQL client tools to 5.6 and your problem will be solved. Unfortunately, there is no official binary of the MySQL 5.6 tools for Ubuntu at the moment. However, I did find a solution on the good old GitHub where you can add custom MySQL 5.6 client tools to your Ubuntu repository. It works like a charm. To install MySQL client tools 5.6 on Ubuntu, run the following commands:

sudo add-apt-repository ppa:ondrej/mysql-experimental
sudo apt-get update
sudo apt-get remove mysql-client-5.5
sudo apt-get install mysql-client-5.6

Now you should be able to run mysqldump backups with MySQL 5.6.