AWS - goatandsheep/goatandsheep.github.com GitHub Wiki
The easiest way to use the AWS CLI without running into permissions issues is to add your secret access key to your `~/.aws/credentials` file.
- If you have forgotten your `aws_access_key_id` and `aws_secret_access_key`, click your name > `My Security Credentials` > `Users` > click on `<your user name>` > `Security Credentials` > scroll down to `Access Keys`. Create a new one and deactivate the old one.
- Open your `~/.aws/credentials` file in vim, think of a relevant nickname and add:
```ini
[nickname-of-account]
aws_access_key_id=<access-id>
aws_secret_access_key=<access-key>
```
Make sure to also set a `[default]` entry.
- Open your `~/.aws/config` file in vim and add:
```ini
[profile nickname-of-account]
region=us-east-1
output=json
```
When running commands, prefix them with `AWS_PROFILE=nickname-of-account`, e.g. `AWS_PROFILE=goatandsheep aws s3 sync . s3://sample-bucket`
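The profile setup above can be sketched end to end. The key values here are placeholders, and the script writes to a scratch file rather than your real `~/.aws/credentials`:

```shell
# Sketch: write a named profile plus a default to a scratch copy of the
# credentials file (real target: ~/.aws/credentials). Values are placeholders.
creds="$(mktemp -d)/credentials"
cat > "$creds" <<'EOF'
[default]
aws_access_key_id=AKIADEFAULTXXXXXXXXX
aws_secret_access_key=defaultsecretkey

[nickname-of-account]
aws_access_key_id=AKIAEXAMPLEXXXXXXXXX
aws_secret_access_key=examplesecretkey
EOF

# Select the profile per command...
#   AWS_PROFILE=nickname-of-account aws s3 ls
# ...or for the whole shell session:
export AWS_PROFILE=nickname-of-account
grep -c '^\[' "$creds"   # counts the two profile sections
```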
- Add the SSH key to your `~/.ssh/config`. Open `vi ~/.ssh/config` and add:

```
Host goatandsheep
  Hostname git-codecommit.us-east-1.amazonaws.com
  User APKAxxxxxxxxxxxxxxxx
  IdentityFile ~/.ssh/id_rsa
```
Note: the `User` is your CodeCommit SSH key ID (listed under your IAM security credentials), not your IAM user name. This can be tested with `git clone ssh://goatandsheep/v1/repos/whatever`
- Reset your git credential helper to use CodeCommit (substitute your profile nickname):

```shell
git config --global credential.helper '!aws --profile nickname-of-account codecommit credential-helper $@'
git config --global credential.UseHttpPath true
```
- `iam:PassRole`: grants permission to hand a role to the next service being invoked (e.g. letting a deployment pass an execution role to Lambda)
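As a sketch, a policy statement granting `iam:PassRole` might look like the following. The role ARN and account ID are hypothetical; in practice you'd attach it with `aws iam put-role-policy` or the console:

```shell
# Hypothetical policy: let a deployer pass "lambda-exec-role" to Lambda.
# Written to a temp file here; attach it via the IAM CLI/console in practice.
policy="$(mktemp -d)/passrole-policy.json"
cat > "$policy" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::123456789012:role/lambda-exec-role"
  }]
}
EOF
grep -c 'iam:PassRole' "$policy"
```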
Install the Elastic Beanstalk CLI: `brew install awsebcli`
A simple, event-triggered compute service that autoscales and shuts down when not in use to save money.
Initially: `npm install -g serverless`
Setup: `serverless create -t <template-name>`, e.g. `serverless create -t aws-nodejs`
If outputting to API Gateway, use the following for your callback:

```javascript
return callback(undefined, {
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Credentials": true,
    "Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token"
  },
  body: JSON.stringify(response)
});
```
You can also just zip your project, including your `node_modules` folder, and upload that directly. Make sure the handler runs as a plain script with no build step.
Note: concurrency is capped; by default only a limited number of concurrent executions (historically 100) is allowed per account. It's a soft limit you can ask AWS to raise.
Serverless GraphQL interface to database, such as DynamoDB
- What: back end API that helps you focus on your GraphQL schema by connecting it to your database and syncing changes
- Why: if you need a simple back end that directly interfaces with your database, especially if your database is modified by and triggers existing lambdas for more complicated tasks
- How: a GUI helps you generate the queries, mutations, and subscriptions using the basic interfaces you provide it. It also allows you to test out your actions to ensure that you've configured it properly before you move from working on the GUI to your codebase.
Note: subscriptions only fire for mutations made through AppSync. To react to DynamoDB changes that happen outside AppSync, you need a workaround (e.g. a DynamoDB stream triggering a Lambda that issues a mutation via AppSync).
Two parts:
- User pools: managed login. Each application can use a separate user pool to authorize your users.
- Federated Identity Pool: ability to give OAuth and User Pool users IAM roles
After you create your user pool, you cannot modify the attributes you initially selected. To add more, you need to add a custom attribute. To reference custom attributes, you must prefix the attribute name with `custom:`. For example, an attribute named `location` is referenced as `user.attributes['custom:location']`.
If you have an API that needs to access resources protected by Cognito User Pools, you need to do something funky.
You can give your users access to anything you want them to be able to access. You control the access by whitelisting the actions the role can do.
It comes with an unauth and auth role. I usually give my unauth role nothing.
One thing that this is useful for is modifying S3 files directly through Amplify.
Great way to get an app up and running very quickly, but it is very difficult to reconfigure certain AWS features after the fact.
It comes with a number of great features:
- Cognito: login (with built-in form)
- AppSync: GraphQL synchronization with back-end
- S3: storage for files (with built-in uploader)
- Analytics
- Database: DynamoDB
- Caching
- MOAR!
The downside is that features will be auto-generated, which means you won't always be able to configure them. Some of the issues I ran into:
- Services may be generated in your default region as opposed to the region of your configuration
- Very difficult to change Cognito, such as removing SMS confirmation
- Make changes and experiment with queries in online AppSync GUI
- Run `awsmobile push`
I know this has been done countless times, but here's why my tutorial is better than the others: it handles custom routers and it keeps the S3 bucket private.
- Set up your S3 bucket for static website hosting, but don't give the public any access.
- Create a new CloudFront distribution that points to the bucket (or add it as an origin). Make sure it uses a custom origin access identity. Set your default behavior to redirect HTTP to HTTPS.
- Go to the error pages tab and make sure both 403 and 404 are handled by `/index.html` with a response code of `200`. This makes sure the routes are handled by the client. Also set the default root object to `index.html`.
- Go back to the S3 bucket and add the custom origin access identity with read and list permissions.
```shell
aws s3 sync dist s3://<name-of-bucket> --cache-control private,max-age=172800 --delete
aws cloudfront create-invalidation --distribution-id <cloudfront-distribution-id> --paths /index.html
```
If you're dynamically getting the bucket name and CloudFront distribution ID from an `aws-exports.js`, use the following npm scripts:
```json
{
  "purge": "aws cloudfront create-invalidation --distribution-id $(shx grep \"aws_content_delivery_cloudfront_id\" ./src/aws-exports.js | sed \"s/ aws_content_delivery_cloudfront_id: \\'\\(.*\\)\\'\\,/\\1/\") --paths /index.html",
  "sync": "aws s3 sync dist $(shx grep \"aws_content_delivery_bucket:\" ./src/aws-exports.js | sed \"s/ aws_content_delivery_bucket: \\'\\(.*\\)\\'\\,/s3:\\/\\/\\1/\") --cache-control private,max-age=172800 --delete",
  "publish": "npm run build && npm run sync && npm run purge"
}
```
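You can sanity-check the `grep`/`sed` extraction against a stub `aws-exports.js`; the bucket name and distribution ID below are made up:

```shell
# Stub aws-exports.js with hypothetical values
exports_file="$(mktemp -d)/aws-exports.js"
cat > "$exports_file" <<'EOF'
const awsmobile = {
    aws_content_delivery_bucket: 'sample-bucket',
    aws_content_delivery_cloudfront_id: 'E1234567890ABC',
};
EOF
# Same extraction the "sync" script performs
bucket=$(grep "aws_content_delivery_bucket:" "$exports_file" \
  | sed "s/.*aws_content_delivery_bucket: '\(.*\)',/\1/")
echo "s3://$bucket"
```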
I don't recommend using this for user data storage, but you can give all logged in users access to a particular bucket by restricting it to the federated auth role and using the following request:
```javascript
Amplify.Storage.put(this.src, this.value, {
  customPrefix: { public: '' },
  track: true,
  level: 'public',
})
```
As you can see:
- `this.src` is the path, not including the bucket name
- `this.value` is the binary / text you want to use
GET access is trickier because `Storage.get` just gives a signed URL, which you then use to fetch the file:
```javascript
Amplify.Storage.get(this.src, {
  customPrefix: { public: '' },
  expires: 30,
  track: true,
  level: 'public',
})
  .then((signed) => {
    const req = new Request(signed, { mode: 'cors' });
    fetch(req)
      .then((response) => {
        if (response.ok) {
          const responseX = response.clone();
          responseX.text().then((value) => {
            console.log(value);
          });
          return response.blob();
        }
        throw new Error(`${response.status}: ${response.statusText}`);
      })
      .then(blob => URL.createObjectURL(blob))
      .then(url => console.log('url', url))
      .catch(err => console.error(err));
  })
  .catch(err => console.error(err));
```
An HTTP proxy that allows for custom auth rules
I like API Gateway because I can specify which roles can access it.
The outline of all the inputs for an API should be
- Make sure to deploy your API (under Resources > Actions) to confirm your changes
- If you're only using GET, PUT, and HEAD, don't put an Authorizer or CORS on OPTIONS
- Be careful: you must reset your CORS config every time you change it
- If you're using a Lambda proxy, make sure you also add CORS headers to your response
- If you don't add CORS headers to your 4xx errors in Gateway responses, browsers will report CORS failures instead of the real error
- Resource policies are tricky: some defaults allow all traffic and some don't
Secure secrets manager
- infinite entries
- $0.40 per month per secret
- Whitelist roles to access it
- Secured with KMS
- Accessible through API
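For example (the secret name and values are hypothetical; the CLI calls are commented out because they need credentials), `get-secret-value` returns the raw `SecretString`, which you then parse yourself:

```shell
# Hypothetical round trip with the Secrets Manager CLI:
#   aws secretsmanager create-secret --name prod/db --secret-string '{"user":"app","pass":"s3cret"}'
#   aws secretsmanager get-secret-value --secret-id prod/db --query SecretString --output text
# Pull a field out of the returned JSON like this:
secret='{"user":"app","pass":"s3cret"}'
user=$(printf '%s' "$secret" | sed 's/.*"user":"\([^"]*\)".*/\1/')
echo "$user"
```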
Systems Manager (SSM) is a tool that gives you high-level control of your app as well as utilities to help you manage it.
Allows you to store small text across your account
There are 3 types of inputs:
- String: basic string
- StringList: comma-separated String of inputs. Think raw CSV
- SecureString: which actually encrypts your text with KMS. That means you get most of the advantages of Secrets Manager with a couple restrictions.
- Free, but 20 entries max
- Keeps a history of previous values
- Only accessible to the user, i.e. no roles
- The console converts newlines into spaces. Use the SDK or CLI instead
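A quick sketch of using the three types via the CLI (parameter names are made up; the calls are commented out since they need credentials), plus how you'd split a `StringList` locally:

```shell
# Hypothetical Parameter Store usage:
#   aws ssm put-parameter --name /app/hosts --type StringList --value "a.example.com,b.example.com"
#   aws ssm get-parameter --name /app/hosts --query Parameter.Value --output text
#   aws ssm get-parameter --name /app/token --with-decryption   # required to read a SecureString
# A StringList comes back as raw CSV; split off the first entry:
value="a.example.com,b.example.com"
first=${value%%,*}
echo "$first"
```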
Events between Amazon and your web clients
Analytics
The template of a system
CloudFormation is basically a config that lets you generate a full system, including the resources, attributes, settings, etc. Unfortunately, it does have some limitations including support for newer products, features, etc.
Each system generated from a CloudFormation template is known as a stack. Each stack is generated with the template file alone. That means that for you to use code, assets, etc. you'll likely have to upload it to S3 before using it. For that reason, you generally upload 2 templates: one for initializing the S3 bucket and things that may result in circular dependencies, and a second one that has a complete configuration including links to the code that you need to use. Initially, you create the stack using the first one. Wherever things aren't fully setup, you'd have a bit of fake data. Then you'd update the stack using the second one.
- Use this list of env variables
- You don't need to use `!Join`. Use `!Sub`, as it's less gross.
Where you need to use an ARN:
- Avoid `!Ref`
- Avoid `!Sub {sub.arn}`
- Avoid `DependsOn`
- Enter the ARN directly (using the appropriate ARN structure) by copying the string from where you declared the name of the resource, e.g.

```yaml
FunctionName: ${DeploymentName}-${Env}-config-exporter
Function: !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${DeploymentName}-${Env}-config-exporter
```
- If a resource is triggering a Lambda, you must have an `AWS::Lambda::Permission` resource where the triggering resource is the `SourceArn`
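A sketch of such a permission, assuming a hypothetical S3 bucket as the trigger for the `config-exporter` function above (the bucket name is made up):

```yaml
# Hypothetical: allow an S3 bucket to invoke the Lambda declared elsewhere
ConfigExporterInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${DeploymentName}-${Env}-config-exporter
    Principal: s3.amazonaws.com
    SourceArn: !Sub arn:aws:s3:::${DeploymentName}-${Env}-uploads
```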
Setting up a remote Windows bastion instance on EC2 can be tricky, but it isn't impossible.
- Install something to access the desktop, e.g. Microsoft Remote Desktop
- In the EC2 panel, choose the instance and click Connect to get the remote desktop file / IP / DNS name
- Give it your `.pem` file and it will give you the admin password; enter that Windows password to connect
- For an SSH client, use MobaXTerm
- Pull new changes
- `screen -ls` — list sessions and write down the process number
- `screen -r <process-number>` — reattach to that session
- `screen` — start a new session, then start the server
- `CTRL-A` then `CTRL-D` — detach, leaving the server running
- `exit` (twice) to close a screen
- Update node: https://computingforgeeks.com/how-to-install-nodejs-on-ubuntu-debian-linux-mint/
- Unzip files: https://linuxize.com/post/how-to-unzip-files-in-linux/
To use port 80 you'll probably need to run as root, i.e. `sudo`. If that fails, redirect port 80 to your app's port:

```shell
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```