Getting started with AWS SDK v3

When you install the latest aws-sdk through npm i aws-sdk, you will still get the latest v2. This is because version 3 is no longer shipped as one monolithic aws-sdk package, but as a set of modularised packages.

This means that instead of installing the whole SDK, you install the specific client that you need, for example the DynamoDBClient or the CognitoIdentityProviderClient. So if you want to use the DynamoDB SDK, instead of running npm i aws-sdk you run npm i @aws-sdk/client-dynamodb.

What may take some getting used to, but is actually a beautiful pattern once you are familiar with it, is that every call you make follows the same structure, no matter which module or part of the SDK you use: you always send a command with some input and get output back. For illustration, let's stay close to DynamoDB; I will use it as the example throughout this post.

As said, for every call to the SDK you need 4 things:
1. A client (see below)
2. A command (see below)
3. Input (the Input type)
4. Output (the Output type)

The Client
The v3 SDK provides separate clients per AWS service. These clients enable you to perform operations for that specific AWS service. Since each client is a separate module, at runtime you only need to load the client you actually use. Especially on a serverless stack, this can save some cold start time for your Lambdas. In the API Reference you can easily find which clients there are, since they all start with '@aws-sdk/client-'.
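As a minimal sketch (the region value is just a placeholder), loading and instantiating such a client looks like this:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

// Only the DynamoDB client module is loaded, not the entire SDK
const client = new DynamoDBClient({ region: 'eu-central-1' });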

The Command
On the client you can execute certain commands. To find these, I personally use the API Reference documentation. As an example, take a look at client-dynamodb. If we want to get an item from DynamoDB, we look at the GetItemCommand. There we also see the Input type, the Output type and the possible configurations to add to the command.

You can still call the v3 SDK in the v2 style, with callbacks, but it is not recommended. You can now also use Promises or the async/await pattern. Since I am personally a bigger fan of the latter, I will use that in the examples as well.
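As a quick sketch of the difference between those two styles (using the ListTablesCommand as a simple example, since it takes an empty input; the region is again a placeholder):

import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'eu-central-1' });

// Promise style
client
  .send(new ListTablesCommand({}))
  .then((output) => console.log(output.TableNames));

// async/await style, used in the rest of this post
async function listTables(): Promise<void> {
  const output = await client.send(new ListTablesCommand({}));
  console.log(output.TableNames);
}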

So before stepping into the code example, let's first explain the pattern you will see in every call. You need to go through the following steps:
1. Import the 4 things mentioned above.
2. Initialize the Client.
3. Define the Input parameters.
4. Initialize the Command with the Input params.
5. Call the 'send' method on the Client.
6. Get the Output from the send method.

So let's look at an actual code example.

import { DynamoDBClient, GetItemCommand, GetItemInput, GetItemOutput } from '@aws-sdk/client-dynamodb';
import { marshall, unmarshall } from '@aws-sdk/util-dynamodb';

export class ItemNotFoundError extends Error {}

export async function getItemFromDynamo(key: string) {
  try {
    const documentClient = new DynamoDBClient({ region: process.env.REGION });

    const getItemInput: GetItemInput = {
      TableName: process.env.tableName,
      Key: marshall({ key }),
    };
    const command = new GetItemCommand(getItemInput);

    const getItemResponse: GetItemOutput = await documentClient.send(command);

    if (!getItemResponse.Item) {
      throw new ItemNotFoundError(`Item: ${key} not found`);
    }
    return unmarshall(getItemResponse.Item);
  } catch (err) {
    if (err instanceof ItemNotFoundError) {
      throw err;
    }
    throw new Error(`Error getting Item: ${key} | Error: ${JSON.stringify(err)}`);
  }
}

Alright, at the top we see the import statements as mentioned. In this utility function, we get the key of the item we are looking for as an input parameter.
First we initialise the client with the region we have set in an environment variable (the region your DynamoDB table is in).
Then we declare the input parameters that we need to reach the correct table and set the key.
For the key (and later for the returned item) we use the marshall and unmarshall utilities (see below).
Next, we initialise the command based on the input params.
Finally, we actually send the command on the client and get the result back. If the item is not found, we throw our own ItemNotFoundError (and rethrow it unchanged in the catch block); any other error is wrapped with some context.

As mentioned, a call to a different client, for example Route53 (or pick any client you like), will have the same structure of commands, input and output. The modularised packages will make your code start up quicker, and once you are used to the structure, I personally find it very readable and understandable.
The documentation, and finding all the details in it, might take some more getting used to.

Utilities
The AWS SDK page also mentions a couple of utility libraries; make sure you scroll all the way down on the index page to check them out. In this code example, I used the DynamoDB utilities. You install these through npm i @aws-sdk/util-dynamodb. In there, the marshall and unmarshall functions are very useful: they convert plain JSON objects to and from the DynamoDB record format.
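A small sketch of what these utilities do (the item shape here is made up for illustration):

import { marshall, unmarshall } from '@aws-sdk/util-dynamodb';

// A plain JavaScript object...
const item = { key: 'item-1', price: 25, inStock: true };

// ...marshalled into the DynamoDB attribute-value format:
// { key: { S: 'item-1' }, price: { N: '25' }, inStock: { BOOL: true } }
const record = marshall(item);

// ...and unmarshalled back into a plain object again.
const plain = unmarshall(record);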

Resources:
AWS SDK for JavaScript
AWS API Reference
DynamoDB Utilities

Setting up TypeScript

In a previous blog I wrote about getting started with Serverless. We built a hello-world kind of app and deployed it to AWS, focusing on setting up the serverless.yml.

So now let's have a look at setting up the development environment, with things like initializing npm and TypeScript on this project. We start with an npm init command on the command line. After answering some questions (or accepting the defaults) you should see a summary; hit enter (yes) to confirm.

The main thing that happened is that you now have a package.json file in your folder structure, so the npm tooling is available. I also like to set up TypeScript, since I prefer TypeScript over plain JavaScript on my projects. So first let's install it with npm i typescript. After installing TypeScript, we run the init on the project with npx tsc --init (npx runs the locally installed TypeScript). This generates a tsconfig.json file.

After these two initialisations on the project, we see that a couple of files have been added: package.json, package-lock.json and tsconfig.json (plus a node_modules folder).

The tsconfig.json file holds the TypeScript configuration. For example, let's change something small here and set the outDir option to 'dist'. Now it is time to start using TypeScript instead of JavaScript. I also like to add a 'src' folder where my source files live. We move handler.js to this folder and change its extension from .js to .ts. In the message I added 'Your typescript function executed successfully!' so we can see that it worked.

With the command tsc (the TypeScript compiler) you can compile your TypeScript code to .js files. However, we would prefer to use the more common npm run build command. To do this, we can add a build script to the package.json: in the scripts section we add "build": "tsc". Now on the command line we can run npm run build.

This will run tsc and create a dist folder with the .js file.

Now remember that in the serverless.yml we have set the handler to handler.hello. We need to add the directory in front of this now, so this should be changed to dist/handler.hello.
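In the serverless.yml the relevant part then looks something like this:

functions:
  hello:
    handler: dist/handler.hello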

To see if everything is still working correctly, we do an sls deploy to redeploy our serverless project to AWS. After this, we do a curl to the endpoint and get our success message.

Last but not least, it is good practice to set up linting. For this we start by installing ESLint with npm i eslint, after which we run npx eslint --init to initialize ESLint on our project. The npx command makes sure you run the local library from your node_modules.

Answer the questions any way you like; my answers are just an example of what you can choose, not necessarily a recommendation. Depending on the config format you selected (in this specific case I decided on JSON), a '.eslintrc.json' file is added to the folder structure.

To run ESLint on all TypeScript files in the project, you can use the command npx eslint src --ext .ts. But I'd rather put this in a script as well, so I add a script with the name eslint in the package.json. The beauty of putting this in the package.json is that we can now call ESLint from the build command as well, making sure that when we build our code it is also linted. I changed the build script to "build": "tsc && npm run eslint". Now we can rerun the npm run build command.

This shows us 3 problems and the build fails! You can manually change some rules if you disagree, or you can change your code. To make the code compliant again, you can use the following:

module.exports.hello = async (event: any) => ({
  statusCode: 200,
  body: JSON.stringify(
    {
      message: 'Go Serverless v1.0! Your linted typescript function executed successfully!',
      input: event,
    },
    null,
    2,
  ),
});

Just to be totally sure everything works as expected, after the npm run build we can do an sls deploy again. Or if you like, you can create a deploy script within the package.json that runs sls deploy so you can run npm run deploy.

After this, we do another curl to the endpoint and we see the linted code has been deployed.

Getting started with Serverless Framework

While using the serverless stack on AWS, there are quite a few tools out there to help you with the Infrastructure as Code part. In the end, we use CloudFormation to deploy our stacks, but there are multiple ways to create your CloudFormation template.

I have played around with SAM, CDK and the Serverless Framework. The latter I have used most often, because it seems to be the weapon of choice for the majority of my clients. The plan is to write a few blogs on tricks I learned while using the framework (no promises there, though); for now, let's begin with a getting started.

I write my Lambdas on AWS in Node.js, so the example code will be Node.js on AWS as well.
Through npm we can simply install Serverless:
npm install -g serverless

Now, simply running the serverless command will walk you through a couple of prompts. For now I will skip the monitor & test option and create a blog-examples project for AWS.

This creates a project structure with three files in it for you: a .gitignore, a handler.js, and a serverless.yml.

Basically, this is all you need. Personally, I prefer to switch to TypeScript, but for now, let's have a closer look at the serverless.yml.

The serverless.yml is the main file of the Serverless Framework: in this YAML file you define your resources, or point to other files where you have defined them. On creation there is a lot of documentation in the file, in the form of comments, which can be helpful later on. The service is named after the project name, and you can see that the provider is prefilled with aws and nodejs (12.x at the time of writing).

If we scroll down, we hit a function. The function is called hello and its handler points to handler.hello. For now that is all (besides the commented-out examples), a total of 5 key-value pairs in the YAML file.

Now we want to override some of the defaults before creating our initial CloudFormation stack. The following properties are already present in the serverless.yml, but still commented out with the default values attached to them. I like to make the stage explicit, and I prefer to change the region to Frankfurt.

stage: dev
region: eu-central-1

Serverless uses an S3 bucket to upload the CloudFormation templates and run the stack. I have created an S3 bucket manually in my AWS account, and I want to use this bucket instead of the default one, so on the same level as the stage and region I added this bucket.
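In the provider section this looks something like the following (the bucket name is just a placeholder for the bucket you created yourself):

stage: dev
region: eu-central-1
deploymentBucket:
  name: my-manually-created-deployment-bucket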

Now we are ready to go. If you want Serverless to create your CloudFormation templates, you can run the command serverless package (or sls package for short).

This creates a .serverless folder in your folder structure. When you open the generated CloudFormation template, you might be surprised by how much is created.

For this hello function, Serverless created the following resources:
– HelloLogGroup
– IamRoleLambdaExecution
– HelloLambdaFunction
– HelloLambdaVersion
It generated a LogGroup for the Lambda function, a default execution role, the function itself, and a version of that function. We would be able to deploy this as-is with the CloudFormation stack. However, to call the function from the outside, we need an API Gateway as well.

We tell this to Serverless by adjusting the YAML file. After the handler we define an events property, telling Serverless this is an HTTP event, and we enter a path and a method.
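With the hello path and the GET method used in this example, the function definition in the YAML then looks something like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get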

After this, we rerun the sls package command and have a look at the cf-template. We see that there are now some extra resources in the file to set up the API Gateway:
– ApiGatewayRestApi
– ApiGatewayResourceHello
– ApiGatewayMethodHelloGet
– ApiGatewayDeployment
– HelloLambdaPermissionApiGateway

Now, without going into the specifics, let's run the deploy command (make sure your AWS credentials are set up in your CLI): sls deploy.

Now, let’s have a closer look in the AWS console at what actually happens. We see that Serverless uploads the CloudFormation file to S3. Navigating into my S3 bucket, I indeed now see a folder serverless created.

After this it validates the stack and starts CloudFormation, so let’s have a look at CloudFormation in the console. Here, we now see the stack with status complete.

If we open it, we can now see the Stack Info, resources, output, etc., like we are used to with CloudFormation. Now let’s also take a look at the API Gateway in the AWS console.

Here we see the API Gateway resource has been created, with the hello path as specified in the serverless.yml and with the HTTP method GET. We see it is set up as a LAMBDA_PROXY integration pointing to our Lambda function. Let's click on the Lambda function to navigate to it and inspect this function as well.

Here we see our function in the designer, with an API Gateway trigger, and in the inline code we see the function code from our sources. So now there is only one final thing left to do, and that is to call the endpoint that was given to us in the deploy output. You can put it in your browser, or call it through curl. It should give us the message back, along with the input.

So there you have it: easy to set up, and the CloudFormation templates are created for you. With just a few lines of YAML configuration, a CloudFormation template of almost 300 lines is generated.
Also notice how all our resources (the CloudFormation stack, the Lambda function and the API Gateway) have our stage (dev) in their names.

For more info and resources, check the Serverless website.

Setting up and using AWS SSO

When using the AWS CLI, you need an access key and secret key set up to talk to AWS. However, storing them on your disk is usually not what you want, and reusing the same long-lived key in your scripts over and over again also exposes some vulnerabilities. Luckily for us, AWS has an SSO feature that helps prevent this, and the even better news is that it is included in the free tier.

Just type SSO in the services search bar and it should pop up.

If this is the first time you are using this service, you should see a big button saying 'Enable AWS SSO', so that is exactly what we are going to do. For this you need to have Organizations set up in your account already.
After pressing the button, you should be brought to a success page. On this page there is a quick overview and a link to the user portal. You have a one-time option to change this user portal URL; I suggest you use it wisely.

We will go through these steps to configure a default setup and show one of the possible use cases for SSO.

In step one, clicking the 'Manage your directory' link brings you to a page where you can create users and groups. There are different ways to look at this; in our example case we will use users to control access and forget about groups. Here I will create a new user for myself and leave the groups for what they are.

Now in the next step, 'Manage SSO access to your AWS accounts', we can give this user access to the AWS accounts in our organization. If you click this link, you will see your AWS organization with the accounts that are currently in it. You can select multiple accounts and click the 'Assign Users' button; in this case I will look up my own user and select that one. This will bring you to the permission sets.

In the permission sets overview, you can create a new permission set or leverage the predefined ones. Make sure you create and/or select at least one permission set, as you will need it. For this example I have selected AdministratorAccess.
AWS will take a couple of seconds to configure your accounts and will show a completion message when ready.

Last but not least, if you want, you can now start adding applications and use step 3, 'Manage SSO access to your cloud applications', to complete the SSO configuration for this account.
Following the link and clicking the 'Add a new application' button brings up the AWS SSO Application Catalog; it also provides a link to add a custom SAML 2.0 application. We will skip this step, as we do not need it.

For your SSO users, you can configure a couple of things on the Settings tab, such as MFA requirements; you can also delete the SSO configuration and start over if you like.

Do you remember the customisable URL from the start of this post? We will follow that URL now. This brings you to your own SSO start page, where you should be asked to log in; use the user created in step 1. You should see Single Sign-On in the top bar to confirm you are where you want to be.

By default you should see your coupled AWS accounts here. Prefixed with an id, you should see your username, and if you open the details, you should see the permission set. This SSO page should be the start of your AWS journey from now on. It has lots of benefits if you have multiple accounts coupled to your user, but even in this example case, it is already more secure.

When you open the menu, you should see two links: 'Management console' and 'Command line or programmatic access'. The first one simply logs you in to the management console, but with temporary credentials for an assumed role.
The second link opens a popup with CLI information. You can switch between macOS/Linux and Windows, after which you get an easy-to-copy export of your access key id, secret access key and session token.
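For macOS/Linux the popup gives you export statements along these lines (the values below are placeholders; copy the real ones from the popup):

export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
export AWS_SESSION_TOKEN="<session-token>"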

These credentials are valid for only 60 minutes (by default). This means that if you lose your key info (or someone sniffs it or otherwise gets access to it), it will expire soon anyway, unlike the long-lived keys you can generate through the console.

With one click you can copy these lines and paste them into your terminal.
If you run aws sts get-caller-identity you will see the account number and username. However, also notice that this is an assumed role, just like in the console.

After waiting 61 minutes, the same call will fail and tell you the session token has expired. This ensures that if you lose your keys or someone gets hold of them, they are only valid for a maximum of an hour.

Resources:
AWS Getting Started with SSO.