Full Access to a specific S3 Bucket except DeleteObject

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::testbucket-unni"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
              "s3:PutObject",
              "s3:GetObject"
              ],
            "Resource": [
                "arn:aws:s3:::testbucket-unni/*"
            ]
        }
    ]
}

Details:

  1. Get and List actions are granted on "arn:aws:s3:::*" to enable the console view of all buckets.
  2. The List action is granted on the exact ARN (no trailing wildcard) "arn:aws:s3:::testbucket-unni", so other buckets whose names merely start with "testbucket-unni" remain protected.
  3. Put and Get actions are granted on "arn:aws:s3:::testbucket-unni/*", i.e. only on the objects inside the bucket (a quick verification follows below).
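
To sanity-check the policy with s3cmd (a quick sketch; the file name is a placeholder, and s3cmd is assumed to be configured with the restricted user's keys):

#s3cmd put test.txt s3://testbucket-unni/test.txt (allowed by s3:PutObject)
#s3cmd get s3://testbucket-unni/test.txt (allowed by s3:GetObject)
#s3cmd del s3://testbucket-unni/test.txt (should fail with Access Denied, since s3:DeleteObject is not granted)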

IAM Users Only for Bucket Access

Ideally, we should use IAM Roles if the access credentials are used by an app hosted on EC2; otherwise, the following can be set up:

  • Create an S3 bucket, say unni-test
  • Create an IAM user with the same name as the bucket, say unni-test
  • Now we can use IAM policy variables (here aws:username) to write just a single policy that grants each user access to their own space, and attach it to an IAM group. All similar requirements can then be handled by adding users to that group.

Example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:*"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::mybucket"],
            "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}}
        },
        {
            "Action": ["s3:*"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::mybucket/home/${aws:username}/*"]
        }
    ]
}

The policy uses a policy variable (${aws:username}) that is evaluated at run time and contains the friendly name of the IAM user who made the request.
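
For illustration, the whole setup could be scripted with the AWS CLI (assumed to be installed and configured with admin credentials; the group name, user name and policy file below are placeholders):

#aws iam create-group --group-name s3-home-users
#aws iam create-user --user-name unni-test
#aws iam add-user-to-group --user-name unni-test --group-name s3-home-users
#aws iam put-group-policy --group-name s3-home-users --policy-name s3-home-dir --policy-document file://policy.json

When unni-test makes a request, ${aws:username} resolves to unni-test, so the second statement effectively allows s3:* on arn:aws:s3:::mybucket/home/unni-test/*.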

Example IAM Policies : http://docs.aws.amazon.com/IAM/latest/UserGuide/ExampleIAMPolicies.html

s3cmd Elaborated…

Use the --rr (reduced redundancy) option with every put and sync command.
Use the --bucket-location option to choose the nearest geographical region and avoid latency.
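
For example (the bucket name and file names below are placeholders):

#s3cmd mb --bucket-location=ap-northeast-1 s3://testbucket-unni
#s3cmd put --rr backup.tar.gz s3://testbucket-unni/
#s3cmd sync --rr Desktop/check s3://testbucket-unni/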

To view contents inside a bucket
#s3cmd ls s3://bucketname

To copy/sync a directory into a bucket
#s3cmd sync Desktop/check s3://bucket_name

To view all contents of all buckets one level down (only non-empty buckets)
#s3cmd la -H

To sync the contents of a local dir into a bucket under an existing directory (S3 object)
#s3cmd sync Desktop/checkunni/ s3://writingz/check/

To sync remote s3 contents to a local directory
#s3cmd sync s3://writingz/check/ Desktop/checkunni/

To sync the contents of a local dir into a bucket under a new directory name
#s3cmd sync Desktop/checkunni/ s3://homie/newname/
Here the newname directory is created on the fly and the files of checkunni are copied into s3://homie/newname

Copy a non-empty directory (on s3) from one bucket to another bucket
#s3cmd -r cp s3://homie/newname s3://writingz/

Copy a non-empty directory (on s3) from one bucket to another bucket under a new name
#s3cmd -r cp s3://homie/newname s3://writingz/newname2/

To find the size of a bucket/directory
#s3cmd du -H s3://writingz

To download only a single file
#s3cmd get s3://homie/dirname/filename .

To download a remote directory locally
#s3cmd get -rf s3://writingz/checkunni .
Use a / (forward slash) after checkunni to download only the files inside it.

To upload a single file
#s3cmd put PSY.mp3 s3://homie/newname/

To upload a local dir to a bucket
#s3cmd put -rf s3test s3://homie/newname/

Delete a file
#s3cmd del s3://writingz/abc.jpg

Delete a directory
#s3cmd del -rf s3://writingz/check/

Move a file (can also be used to rename; works for files only)
#s3cmd mv s3://writingz/abc.png s3://haye/

Move a directory to another bucket 
#s3cmd mv -rf s3://writingz/newname2 s3://haye/

Know the s3cmd version
#s3cmd --version

Make a file public using
#s3cmd put --acl-public hangover3.jpg s3://viewzz/abc.jpg

Make a file private using
#s3cmd setacl --acl-private s3://viewzz/hangover3.jpg

Set all files in a bucket to public/private
#s3cmd setacl --acl-public -r s3://writingz/

If an MD5 checksum is needed to verify a file's integrity, use
#s3cmd info s3://viewzz/hangover3.jpg (for the Amazon S3 object)
#md5sum hangover3.jpg (for the locally downloaded file)
and compare the checksum values.
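
The comparison can also be scripted; a minimal sketch, assuming s3cmd info prints an "MD5 sum:" line (true for files uploaded in a single part, where the S3 ETag is the plain MD5):

remote=$(s3cmd info s3://viewzz/hangover3.jpg | awk '/MD5 sum:/ {print $3}')
local=$(md5sum hangover3.jpg | awk '{print $1}')
[ "$remote" = "$local" ] && echo "checksums match" || echo "checksum MISMATCH"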

To delete a bucket (the bucket has to be empty; use s3cmd del to delete all files first)
#s3cmd rb s3://logix.cz-test (use the -f option if the bucket is non-empty)

Get various information about Buckets or Files
#s3cmd info s3://BUCKET[/OBJECT]

Other useful options

--delete-removed      Delete remote objects with no corresponding local file [sync]

--no-delete-removed   Don't delete remote objects [sync]

--skip-existing       Skip over files that exist at the destination (only for [get] and [sync] commands)

--continue            Continue getting a partially downloaded file (only for [get] command)

--reduced-redundancy, --rr   Store objects with 'Reduced Redundancy'. Lower per-GB price. [put, cp, mv, sync]

--acl-public          Store objects with an ACL allowing read for anyone

--acl-private         Store objects with the default ACL allowing access for you only

--bucket-location=BUCKET_LOCATION   Datacentre to create the bucket in, e.g. ap-northeast-1 (Tokyo)
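
For instance, to mirror a local directory to S3, removing remote files that no longer exist locally and storing everything with reduced redundancy (the paths are placeholders):

#s3cmd sync --rr --delete-removed /var/www/ s3://writingz/www/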

The ACL (Access Control List) of a file can be set at upload time using the --acl-public or --acl-private options with the 's3cmd put' or 's3cmd sync' commands (see above).

Alternatively, the ACL of existing remote files can be altered with the 's3cmd setacl --acl-public' (or --acl-private) command.

Copy S3 Buckets across AWS accounts

Note: this procedure does not preserve the metadata of the files, and you will not be able to set their permissions from the new location.
 
 
Amazon S3 bucket names are UNIQUE across all AWS accounts.

For example, suppose your first account's username is acc1@gmail.com and your second is acc2@gmail.com.

#s3cmd --configure (configure for the acc1 AWS account)

Then create a similar bucket (not the same bucket name) in the acc2 account and set that bucket's permissions to Grantee: Everyone, with Upload/Delete ticked.

Then you can use s3cmd (with acc1's credentials) to do something like:

#s3cmd cp s3://acc1_bucket/folder/ s3://acc2_bucket/folder/ -r

All transfers are done on Amazon's side.
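
Since permissions cannot be set from the new location during the copy, one option is to re-run s3cmd --configure with acc2's credentials afterwards and reset the ACLs from the destination side, e.g.:

#s3cmd --configure (now enter acc2's keys)
#s3cmd setacl --acl-private -r s3://acc2_bucket/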

Install & Use s3cmd for S3 Storage

Amazon S3 is a reasonably priced data storage service, ideal for off-site backups, archiving and other data storage needs. It is generally more reliable than regular web hosting for storing your files and images.

S3cmd is a command line tool for uploading, retrieving and managing data in Amazon S3. It is best suited for power users who don't fear the command line, and is ideal for scripts, automated backups triggered from cron, etc.

S3cmd is an open-source project available under the GNU General Public License v2 (GPLv2) and is free for both commercial and private use. You only pay Amazon for the storage you use; none of that money goes to the S3cmd developers.

REDHAT

#vim /etc/yum.repos.d/s3cmd.repo
[s3cmd]
name=s3cmd
baseurl=http://s3tools.org/repo/RHEL_5/
enabled=1
gpgcheck=0

AMAZON LINUX

#vim /etc/yum.repos.d/s3tools.repo
[s3tools]
name=Tools for managing Amazon S3 - Simple Storage Service (RHEL_6)
type=rpm-md
baseurl=http://s3tools.org/repo/RHEL_6/
gpgcheck=1
gpgkey=http://s3tools.org/repo/RHEL_6/repodata/repomd.xml.key
enabled=1
#yum install s3cmd

UBUNTU

#apt-get install s3cmd

To configure s3cmd

#s3cmd --configure
[Enter Access Key and Secret Key]

The configuration file is saved to
/root/.s3cfg

To get Help

#s3cmd --help

To List Buckets

#s3cmd ls

To Delete Non-Empty Buckets

#s3cmd rb s3://bucket_name -fv

Copy a bucket to the local machine

#s3cmd get -r s3://bucket_name

Create Buckets

#s3cmd mb s3://bucket_name

Syncing a local dir with an S3 bucket

#s3cmd sync local_dir/ s3://bucket_name
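
For automated backups, the same sync can be run from cron; a sketch (the schedule, paths and log file are placeholders):

#crontab -e
0 2 * * * /usr/bin/s3cmd sync local_dir/ s3://bucket_name >> /var/log/s3sync.log 2>&1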