Full Access to a specific S3 Bucket except DeleteObject

    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": "arn:aws:s3:::*"
            "Effect": "Allow",
            "Action": [
            "Resource": [
            "Effect": "Allow",
            "Action": [
            "Resource": [


  1. Get and List actions are granted on “arn:aws:s3:::*” to enable console view.
  2. The List action is granted on the exact ARN (without a trailing star) “arn:aws:s3:::testbucket-unni”, so that other buckets whose names start with “testbucket-unni” stay protected.
  3. Put and Get actions are granted on “arn:aws:s3:::testbucket-unni/*”, which applies only to objects inside the bucket.

ARN – AWS Documentation Excerpts

Here are some example ARNs:

<!-- AWS Elastic Beanstalk application version -->

arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/My App/MyEnvironment

<!-- IAM user name -->

arn:aws:iam::123456789012:user/David

<!-- Amazon RDS tag -->

arn:aws:rds:eu-west-1:123456789012:db:mysql-db

<!-- Amazon S3 bucket (and all objects in it) -->

arn:aws:s3:::my_corporate_bucket/*
The following are the general formats for ARNs; the specific components and values used depend on the AWS service.

arn:aws:service:region:account:resource
arn:aws:service:region:account:resourcetype/resource
arn:aws:service:region:account:resourcetype:resource
ARN Examples for EC2
Amazon Elastic Compute Cloud (Amazon EC2)

arn:aws:ec2:region:account:instance/instance-id
Example: arn:aws:ec2:us-east-1:123456789012:instance/i-0a1b2c3d
ARN Examples of RDS
ARNs are used in Amazon RDS only with tags for DB instances. For more information, see Tagging a DB Instance in the Amazon Relational Database Service User Guide.

arn:aws:rds:region:account:db:db-instance-name
Example: arn:aws:rds:us-east-1:123456789012:db:mysql-db
ARN Examples of Route53

Amazon Route 53

arn:aws:route53:::hostedzone/zone-id
Example: arn:aws:route53:::hostedzone/Z148QEXAMPLE8V

Note that Amazon Route 53 does not require an account number or region in ARNs.

ARN Examples of Amazon S3

arn:aws:s3:::bucket_name
arn:aws:s3:::bucket_name/key_name

Note that Amazon S3 does not require an account number or region in ARNs.

AWS Service Namespaces

When you create AWS IAM policies or work with Amazon Resource Names (ARNs), you identify an AWS service using a namespace. For example, the namespace for Amazon S3 is s3, and the namespace for Amazon EC2 is ec2. You use namespaces when identifying actions and resources.

The following example shows an IAM policy where the value of the Action elements and the values in the Resource and Condition elements use namespaces to identify the services for the actions and resources.
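The referenced example policy is not reproduced in these notes; a minimal sketch of such a policy (the bucket name and prefix are illustrative) could look like this, with the `s3:` and `ec2:` namespaces appearing in the Action values, the Resource ARN, and the Condition key:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {"StringLike": {"s3:prefix": "reports/*"}}
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": "*"
    }
  ]
}
```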


The following lists the AWS service namespaces.



Service Namespace
Auto Scaling autoscaling
AWS Account Billing aws-portal
AWS CloudFormation cloudformation
Amazon CloudFront cloudfront
CloudWatch cloudwatch
Amazon DynamoDB dynamodb
Amazon EC2 ec2
AWS Elastic Beanstalk elasticbeanstalk
Elastic Load Balancing elasticloadbalancing
Amazon Elastic MapReduce elasticmapreduce
Amazon ElastiCache elasticache
Amazon Glacier glacier
IAM iam
AWS Marketplace aws-marketplace
AWS OpsWorks opsworks
Amazon RDS rds
Amazon Route 53 route53
Amazon S3 s3
Amazon SES ses
Amazon SimpleDB sdb
Amazon SNS sns
Amazon SQS sqs
Amazon SWF swf
AWS Storage Gateway storagegateway
AWS Support support
Amazon VPC ec2

AWS EC2 Internal Security Structure


An insight into the internal structure of EC2.

The Hypervisor
Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization
(in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that
normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate
privilege modes: 0-3, called rings. Ring 0 is the most privileged and 3 the least. The host OS executes in Ring 0. However,
rather than executing in Ring 0 as most operating systems do, the guest OS runs in a lesser-privileged Ring 1 and
applications in the least privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation
between guest and hypervisor, resulting in additional security separation between the two.

Paravirtualization: In computing, paravirtualization is a virtualization technique that presents a software interface to
virtual machines that is similar but not identical to that of the underlying hardware.

Instance Isolation
Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon
is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall
resides within the hypervisor layer, between the physical network interface and the instance’s virtual interface. All
packets must pass through this layer, thus an instance’s neighbors have no more access to that instance than any other
host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using
similar mechanisms.


Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer’s data are never unintentionally exposed to another. AWS recommends customers further protect their data using appropriate means. One common solution is to run an encrypted file system on top of the virtualized disk device.

Guest Operating System: Virtual instances are completely controlled by you, the customer. You have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to your instances or the guest OS. AWS recommends a base set of security best practices, including disabling password-only access to your guests and utilizing some form of multi-factor authentication to gain access to your instances (or at a minimum certificate-based SSH Version 2 access). Additionally, you should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest OS is Linux, after hardening your instance you should utilize certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use ‘sudo’ for privilege escalation. You should generate your own key pairs in order to guarantee that they are unique, and not shared with other customers or with AWS.

You also control the updating and patching of your guest OS, including security updates. Amazon-provided Windows and Linux-based AMIs are updated regularly with the latest patches, so if you do not need to preserve data or customizations on your running Amazon AMI instances, you can simply relaunch new instances with the latest updated AMI. In addition, updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories.

Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall, and with VPNs. This can restrict both inbound and outbound traffic.
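A host-based filter of the kind described can be sketched as an iptables ruleset; the management network 203.0.113.0/24 below is purely illustrative:

```
# Sketch: default-deny inbound, allow loopback, established replies,
# and SSH only from a hypothetical management CIDR.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 22 -j ACCEPT
```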

Why take snapshots if EBS is storing data redundantly?
EBS redundancy is maintained within a single Availability Zone and protects against hardware failure. It does not protect against logical failures such as accidental deletion or data corruption, nor against an AZ-wide outage. Snapshots are stored in Amazon S3 across multiple facilities and provide point-in-time copies from which volumes can be restored in any AZ of the region.

Security Features on S3
1. Identity and Access Management (IAM) Policies.
2. Access Control Lists (ACLs).
3. Bucket Policies.
Server-side encryption (SSE): An option for S3 storage for automatically encrypting data at rest. With Amazon S3 SSE,
customers can encrypt data on upload simply by adding an additional request header when writing the object.
Decryption happens automatically when data is retrieved.
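SSE can also be enforced from the bucket side by denying uploads that lack the encryption header; a sketch of such a bucket policy (the bucket name is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
      }
    }
  ]
}
```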

Security Features on RDS
Amazon RDS has multiple features that enhance reliability for critical production databases, including DB security
groups, permissions, SSL connections, automated backups, DB snapshots, and multi-AZ deployments. DB instances can
also be deployed in an Amazon VPC for additional network isolation.


s3cmd Elaborated…

Use the --rr option (reduced redundancy) with every put and sync command.
Use the --bucket-location option to specify the nearest geographical location and avoid latency.

To view contents inside a bucket
#s3cmd ls s3://bucketname

To copy/sync a directory into a bucket
#s3cmd sync Desktop/check s3://bucket_name

To view all contents of all buckets one level down (only non empty buckets)
#s3cmd la -H

To sync contents of a local dir into a bucket under an existing directory (an S3 object)
#s3cmd sync Desktop/checkunni/ s3://writingz/check/

To sync remote s3 contents to a local directory
#s3cmd sync s3://writingz/check/ Desktop/checkunni/

To sync contents of a local dir in a bucket under a new directory name
#s3cmd sync Desktop/checkunni/ s3://homie/newname/
Here the newname directory is created on the fly and the files of checkunni are copied into s3://homie/newname.

Copy a non-empty directory (on s3) from one bucket to another bucket
#s3cmd -r cp s3://homie/newname s3://writingz/

Copy a non-empty directory (on s3) from one bucket to another bucket under a new name
#s3cmd -r cp s3://homie/newname s3://writingz/newname2/

To find the size of a bucket/directory
#s3cmd du -H s3://writingz

To download only a single file
#s3cmd get s3://homie/dirname/filename .

To download a remote directory locally.
#s3cmd get -rf s3://writingz/checkunni .
use a / (forward slash) after checkunni to download only the files in it.

To upload a single file
#s3cmd put PSY.mp3 s3://homie/newname/

To upload a local dir to bucket
#s3cmd put -rf s3test s3://homie/newname/

Delete a file
#s3cmd del s3://writingz/abc.jpg

Delete a directory
#s3cmd del -rf s3://writingz/check/

Move a file (can also be used for rename with files only)
#s3cmd mv s3://writingz/abc.png s3://haye/

Move a directory to another bucket 
#s3cmd mv -rf s3://writingz/newname2 s3://haye/

Know the s3cmd version
#s3cmd --version

Make a file public using
#s3cmd put --acl-public hangover3.jpg s3://viewzz/abc.jpg

Make a file private using
#s3cmd setacl --acl-private s3://viewzz/hangover3.jpg

Set all files in a bucket to public/private
#s3cmd setacl --acl-public -r s3://writingz/

If an MD5 checksum is needed to verify a file's integrity, use
#sudo s3cmd info s3://viewzz/hangover3.jpg (an amazon s3 object)
#md5sum hangover3.jpg (locally downloaded file from s3)
and compare the checksum value.
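The comparison step can be sketched locally; the file name is illustrative, and the "remote" sum here is a stand-in for the value that would be parsed from the s3cmd info output:

```shell
# Compute a local file's MD5 and compare it with the remote object's MD5.
# remote_md5 is a stand-in; with a real object you would extract it from
# the output of `s3cmd info s3://bucket/key`.
echo "test data" > /tmp/sample.bin
local_md5=$(md5sum /tmp/sample.bin | awk '{print $1}')
remote_md5=$local_md5   # stand-in for the MD5 reported by s3cmd info
if [ "$local_md5" = "$remote_md5" ]; then
    echo "checksums match"
else
    echo "checksum MISMATCH"
fi
```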

To delete a bucket (the bucket has to be empty; use s3cmd del to delete all files first)
#s3cmd rb s3://logix.cz-test (use the -f option if the bucket is non-empty)

Get various information about Buckets or Files
#s3cmd info s3://BUCKET[/OBJECT]

Other useful options
--delete-removed Delete remote objects with no corresponding local file

--no-delete-removed Don't delete remote objects.
--skip-existing Skip over files that exist at the destination (only
for [get] and [sync] commands).

--continue Continue getting a partially downloaded file (only for
[get] command).

--reduced-redundancy, --rr
Store object with ‘Reduced redundancy’. Lower per-GB
price. [put, cp, mv, sync]

--acl-public Store objects with ACL allowing read for anyone.

--acl-private Store objects with default ACL allowing access for you only.

--bucket-location=BUCKET_LOCATION Datacentre to create bucket in. E.g. ap-northeast-1 (Tokyo)

The ACL (Access Control List) of a file can be set at upload time using the --acl-public or --acl-private options with the ‘s3cmd put’ or ‘s3cmd sync’ commands (see the examples above).

Alternatively, the ACL can be altered for existing remote files with the ‘s3cmd setacl --acl-public’ (or --acl-private) command.

Additional Links

Netflix on AWS


DC Analogy


Cloud Analogy


Transition : DC to CLOUD


Application Restructuring




Test Cloud Efficiency  


There are Chaos Monkey (which simulates instance failures) and Chaos Gorilla (which simulates AWS region failures).

Extensive Backup Strategy


As shown on the left side, the whole infra setup is redundant across multiple AZs. On the right side the production data availability is shown at 3 different levels.

1st Level – Each data element is stored redundantly in the database to protect against hardware failure.
2nd Level – To protect from logical failures, such as software bugs or someone accidentally dropping a column, regular backups of the data are taken to S3.
3rd Level – To protect against natural disasters and catastrophic security breaches, there is a secondary backup of all customer data into a heterogeneous cloud environment.
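The second level can be automated with cron; a sketch of a crontab entry (the dump directory and bucket name are placeholders) that pushes database dumps to S3 nightly:

```
# Crontab sketch: sync local DB dumps to S3 every night at 02:00.
# /var/backups/db/ and s3://example-backup-bucket/db/ are illustrative.
0 2 * * * s3cmd sync /var/backups/db/ s3://example-backup-bucket/db/
```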


Install & Use s3cmd for S3 Storage

Amazon S3 is a reasonably priced data storage service, ideal for off-site backups, archiving, and other data storage needs. It is generally more reliable than your regular web hosting for storing your files and images. Check out the About Amazon S3 section to find out more.

S3cmd is a command line tool for uploading, retrieving and managing data in Amazon S3. It is best suited for power users who don’t fear command line. It is also ideal for scripts, automated backups triggered from cron, etc.

S3cmd is an open source project available under the GNU General Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage; none of that money goes to the S3cmd developers.


On RHEL/CentOS, add the s3tools repository:

#vim /etc/yum.repos.d/s3cmd.repo

[s3tools]
name=Tools for managing Amazon S3 - Simple Storage Service (RHEL_6)
type=rpm-md
baseurl=http://s3tools.org/repo/RHEL_6/
gpgcheck=1
gpgkey=http://s3tools.org/repo/RHEL_6/repodata/repomd.xml.key
enabled=1

#yum install s3cmd

On Debian/Ubuntu:

#apt-get install s3cmd

To configure s3cmd

#s3cmd --configure
[Enter Access Key and Secret Key]

Configuration file is saved into ~/.s3cfg
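The saved configuration is INI-style; a minimal sketch (the key values are placeholders):

```
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
use_https = True
```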

To get Help

#s3cmd --help

To List Buckets

#s3cmd ls

To Delete Non-Empty Buckets

#s3cmd rb s3://bucket_name -fv

Copy buckets to local machine

#s3cmd get s3://bucket_name -r

Create Buckets

#s3cmd mb s3://bucket_name

Syncing local dir with s3 Buckets

#s3cmd sync local_dir/ s3://bucket_name