The Serverless Framework

In a previous post I talked quite a bit about Serverless Architectures. I recommend going back to that post before moving forward, since it covers some important aspects to take into account before starting a new project. With that said, let's start getting into the practice.

The most widely-adopted toolkit for building serverless applications
Any provider. Same experience.

The first thing that needs to be clarified is that Serverless (notice the capital “S”) is not the same as Serverless Architectures. Serverless is a toolkit that makes it easier for developers to create serverless applications that can be deployed to any cloud provider. It’s a free toolkit you can download and start using right away, and the team behind it also provides enterprise support if needed.

In this post I will explain the framework by providing code examples to build a simple app, just complex enough to demonstrate the main concepts. The app will have user authentication, will allow an authenticated user to read a list of posts or messages, and will allow the user to create new posts to be appended to that list.

AWS Resources

As part of the app we will be creating, I will use Amazon AWS to deploy a “dev” environment of the application, so let's start there and learn a little about the AWS resources we will be using.

Simple Storage Service (S3)

An object storage system built to store and retrieve data. We will be using this service to store our client application (HTML and JS files).

Lambda

A service that provides compute time for running code. This is where all our serverless functions will run.

DynamoDB

Fast and flexible NoSQL database service. We will use this to store and retrieve our message posts in the application.

API Gateway

A service that lets developers create APIs that map requests to our serverless functions. This will generate the routes that trigger our Lambda functions.

Cognito

A simple user sign up, sign in and access control. We will be using this to create a user, provide authorization, verify access and map each post to the user that created it.

Identity and Access Management (IAM)

A service for securely controlling access to AWS resources. We will use it to define the roles and permissions our functions need, such as writing to and querying DynamoDB and listing Cognito users.

Architecture Design

As I mentioned in the previous post, a big part of serverless architectures is designing your solution first. This might seem unimportant in a small application like this one, but things can quickly run out of control and/or need a redesign if not properly planned.

There is a lot of noise in this diagram, but let's go top to bottom and left to right to explain what each step is doing.

Authentication

The first thing we need to define is authentication for our application. If you follow the top of the diagram, these are the steps our user will follow to authenticate with the application:

  1. User sends credentials to Cognito
  2. Cognito checks with IAM whether the credentials are valid
  3. AWS IAM generates a JWT and sends it to Cognito
  4. Cognito returns the JWT to the end user

One way to think of it is that IAM stores all user information (credentials, email, address, etc.) while Cognito handles the sign-in process for you automatically. We are not going to do it as part of this example, but Cognito would also make it fairly easy to set up a sign-up screen and send the information to be stored; in the end, that is the role of Cognito.
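The token returned in step 4 is a JSON Web Token. As a small aside, here is a sketch of what makes a JWT readable on the client side; the token below is hand-built for illustration (real ones come from Cognito and carry a signature that must be verified server-side):

```javascript
// A JWT is three base64url-encoded parts: header.payload.signature.
// This token is hand-built for illustration only.
const encode = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");

const token = [
  encode({ alg: "RS256", typ: "JWT" }),
  encode({ sub: "user-123", email: "user@example.com" }),
  "signature-placeholder",
].join(".");

// Reading the claims back out of the payload segment:
const decodePayload = (jwt) =>
  JSON.parse(Buffer.from(jwt.split(".")[1], "base64url").toString("utf8"));

console.log(decodePayload(token).sub); // "user-123"
```

This is why the client never needs to ask the server "who am I": the claims travel inside the token itself, while ApiGateway validates the signature on every request.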

Creating Comments

For any business logic we need, we generate a function that lives on Lambda. This Lambda function will be able to communicate with a DynamoDB table where all our comments will be stored, along with some additional information such as the username that generated the entry. In a serverless world all functions are event driven, and that is exactly what we need ApiGateway for: it provides a URL entry point where our UI can send a POST request to create a new comment and store it in the database.

Now, we don't want any unauthorized user posting comments, and this is where the generated JWT comes into play: that token needs to be sent along with the POST request so that ApiGateway can confirm it is valid with IAM before allowing the Lambda function to be triggered. This is why ApiGateway is so powerful; it doesn't only trigger the Lambda function, it also works as an authorization mechanism at the front lines of our server. Following this pattern, you don't REALLY need authorization mechanisms in your frontend UI, since ApiGateway has you covered, although they are still recommended so users won't be confused as to why a request might return a 401 error.

So, now in the right order of how things are supposed to happen:

  1. User sends POST request to our function url to create a comment
  2. ApiGateway receives the request and checks for authorization with IAM before sending the request to our Lambda function
  3. Lambda function sends a request for a comment to be created on DynamoDB
  4. Once it's done, the same Lambda function returns a success status to ApiGateway
  5. ApiGateway sends the success status to the user’s request

Listing Comments

That was the hard part. Now let's quickly go through the listing of comments, which follows a similar pattern to creating one:

  1. User sends a GET request to our function url to list comments
  2. ApiGateway receives the request and checks for authorization with IAM before sending the request to our Lambda function
  3. Lambda function sends a request to the DynamoDB database to retrieve a list of comments
  4. DynamoDB processes and returns the list of comments to the Lambda Function
  5. Lambda Function returns a success status with an array of comments
  6. ApiGateway sends the lambda function response to the user

Accessing the UI

So now that we have designed our API, the last thing we need to figure out is: how are we going to serve our UI to the end users? This is where S3 comes into play. We can store any object or file in that service, and it will generate a public URL through which we can access those files easily. So, for instance, we could store our HTML and compiled JavaScript there, along with any other assets, and we would basically get a URL to our own website; which is exactly what we will do.

Getting into Code

Now, let's finally start coding something we can deploy. All of the code used here is accessible in the following repo for reference, since I will only mention the important pieces in this post:

https://github.com/vikonava/twitterish-serverless-demo

The GitHub repo's README also contains instructions on how to install the required dependencies, create a user manually, deploy, and remove and clean up all instances of our application from AWS itself.

There are a few things I need to mention about aws-sdk that I won't go deep into in this post, to keep it simple and focused on the important details of the Serverless framework, but feel free to go through their documentation to understand more about why it works that way.

  • Response Helper: a reusable helper in my repository to format all responses sent to the user.
  • Request Helper: another helper in my repository that retrieves information from IAM about the user who sent the request, given that the user is authenticated.
  • Async/Await: several of the aws-sdk calls we make (such as performing a query to the database) are asynchronous and return promises. Fortunately, AWS Lambda supports Node.js 8.10, which supports the async/await pattern.
  • Serverless: the framework to be used. I won't go too deep into the configuration, but will just highlight the important settings for this application. In this link you can find all the documentation and how to get started with this framework.
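To make the async/await point concrete, here is a minimal sketch with the DynamoDB client stubbed out; the stub mimics the aws-sdk pattern where a call returns a request object exposing .promise():

```javascript
// Stub mimicking new AWS.DynamoDB.DocumentClient(): each call returns an
// object whose .promise() resolves with the result.
const dynamoDb = {
  query: (params) => ({
    promise: () => Promise.resolve({ Items: [{ text: "hello" }] }),
  }),
};

async function listComments() {
  // Without `await`, `result` would be a pending Promise, not the data.
  const result = await dynamoDb.query({ TableName: "Comments" }).promise();
  return result.Items;
}

listComments().then((items) => console.log(items.length)); // 1
```

The same shape appears in every handler below: call the SDK, chain .promise(), and await the result inside an async function.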

API

List Comments

Let’s start with a simple function that just retrieves information from the database and returns it to the user. The only logic we need is a query to the DynamoDB table. I won’t talk much about how DynamoDB or NoSQL databases work, since that is a whole other topic I might cover later, but you can read about it here.

const result = await dynamoDb.query({
  TableName: `${process.env.tablePrefix}-Comments`,
  KeyConditionExpression: "#visible = :visible",
  ExpressionAttributeNames: {
     "#visible": "visible", 
  },
  ExpressionAttributeValues: {
    ":visible": 1,
  },
  ScanIndexForward: false,
  Limit: 20,
}).promise();

In short, we are going to query a table whose name is a prefix followed by Comments. This gives us the ability to have multiple tables that are easily recognized in the AWS UI, such as “MyApp-Dev-Comments”. I limit the results returned to 20, and ScanIndexForward: false makes results come back in descending order by the sort key, which will be the date the comment was created. The full file, with imports and error handling, can be found here.
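The two details here, the prefixed table name and the descending sort, can be illustrated in plain JavaScript (the prefix value below is made up; serverless.yml sets the real one per stage):

```javascript
// The table name is assembled from an environment-variable prefix.
process.env.tablePrefix = "MyApp-dev"; // illustrative value only
const tableName = `${process.env.tablePrefix}-Comments`;
console.log(tableName); // "MyApp-dev-Comments"

// ScanIndexForward: false means "descending by sort key"; over sample
// comments, it is equivalent to this sort:
const comments = [
  { text: "first", createdAt: 100 },
  { text: "second", createdAt: 200 },
  { text: "third", createdAt: 300 },
];
const newestFirst = [...comments].sort((a, b) => b.createdAt - a.createdAt);
console.log(newestFirst[0].text); // "third"
```

The difference is that DynamoDB applies the ordering and the Limit of 20 on the server, so we never pay to transfer rows we won't show.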

Create Comment

For this function we need to do two different things. First, we need to retrieve the information we will use to generate the post: we generate a timestamp and parse the event body. The event parameter received by a Lambda function contains the body of the user's request, so we parse that data to retrieve the actual comment to be posted. We also need to retrieve the user sending the request, to be able to tie the comment to that particular user. Since ApiGateway is taking care of authentication we should not have to worry about that, but we should still verify that a user object was found when checking with IAM, and return a failure if it's not.

const timestamp = new Date().getTime();
const data = JSON.parse(event.body);

// Retrieve User Attributes
const reqContext = new RequestContext(event.requestContext);
const user = await reqContext.getUserAttributes();

if (!user) {
  // Return so we don't fall through and create a comment without a user
  return callback(null, Response.failure());
}

Now that we have those, what is left is to build the object with the item information and send the item to be created in our table.

const item = {
  id: uuid.v1(),
  userId: user.sub,
  username: user.Username,
  text: data.text,
  visible: 1,
  createdAt: timestamp,
  updatedAt: timestamp,
};

await dynamoDb.put({
  TableName: `${process.env.tablePrefix}-Comments`,
  Item: item,
}).promise();

You can find the complete file here.

Setup Configurations

Now that we have the function files created, we have to set everything up in the framework so they can be deployed and configured properly. The first thing we need to do is set up an environment variable to be used for the table name prefix. Then we also need to configure permissions for the Lambda functions to insert items and perform queries on the DynamoDB table, along with permissions for Cognito to list users, since that is used by the request helper library to identify the user generating the request.

environment:
  tablePrefix: ${self:service}-${opt:stage, self:provider.stage}
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:PutItem
      - dynamodb:Query
    Resource:
      - "Fn::GetAtt": [CommentsTable, Arn]
  - Effect: Allow
    Action:
      - cognito-idp:ListUsers
    Resource:
      - "Fn::GetAtt": [CognitoUserPool, Arn]

Next, we need to map our function handlers to the Lambda service, along with the events that will trigger them. Since we mark the events as http, the Serverless framework will automatically configure ApiGateway for them. Another important thing is that the authorizer on each event is set to aws_iam, which means the authorization of incoming requests will automatically be checked against IAM before the Lambda function is triggered.

functions:
  commentsIndex:
    handler: src/comments/retrieveAll.main
    events:
      - http:
          path: comments
          method: get
          cors: true
          authorizer: aws_iam
  commentsCreate:
    handler: src/comments/create.main
    events:
      - http:
          path: comments
          method: post
          cors: true
          authorizer: aws_iam

Finally, we just need to specify the resources and configuration that will get created in AWS automatically. You can find the configuration of each one of those resources here.

resources:
  - ${file(resources/api-gateway-errors.yml)}
  - ${file(resources/cognito-user-pool.yml)}
  - ${file(resources/cognito-identity-pool.yml)}
  - ${file(resources/dynamodb-tables.yml)}

Client

Now that we have the pieces of our API service configured, we need to create a UI for it. I’ve created a simple UI using ReactJS in my repo, which you can take a look at and mess around with, but again I will just touch on the important pieces and demonstrate how simple it is to get everything working.

AWS Amplify

The most important thing we need to set up in our UI is Amplify. Amplify is basically an SDK that helps us communicate easily with the resources we previously defined. For instance, it allows us to sign in and to store and manage the JWT automatically; it will also send the token as part of any request going to other services (such as ApiGateway) for authentication purposes.

import Amplify, { Auth } from "aws-amplify";

import awsConfig from '../config/aws.json';

Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: awsConfig.UserPoolId.split('_')[0],
    userPoolId: awsConfig.UserPoolId,
    userPoolWebClientId: awsConfig.UserPoolClientId,
    identityPoolId: awsConfig.IdentityPoolId,
  },
  API: {
    endpoints: [
      {
        name: "comments",
        endpoint: awsConfig.ServiceEndpoint,
        region: awsConfig.ServiceEndpoint.split('.')[2],
      },
    ]
  }
});

Here I’m setting up all the configuration variables required to connect to my API. Whenever you start a new service and resources get created, those resources are assigned unique ids, most of the time following naming conventions defined by AWS, so we don’t really have control over what they are; yet we need to configure them in Amplify. Since we want a “single click” deployment, we will assume those configurations are stored in a file named aws.json that gets created on every deployment; we will go into the details of how that file gets created later on. I placed my configuration in here.
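As a small illustration of those split() calls: user pool ids are formatted as region underscore id, and the API endpoint hostname carries the region as its third dot-separated segment. The sample values below are made up, in the shape AWS generates:

```javascript
// Made-up sample values in the shape AWS generates:
const awsConfig = {
  UserPoolId: "us-east-1_AbCd1234",
  ServiceEndpoint: "https://abc123.execute-api.us-east-1.amazonaws.com/dev",
};

// "<region>_<id>".split("_")[0] yields the region
const authRegion = awsConfig.UserPoolId.split("_")[0];

// "https://<id>.execute-api.<region>.amazonaws.com/dev".split(".")[2] too
const apiRegion = awsConfig.ServiceEndpoint.split(".")[2];

console.log(authRegion, apiRegion); // "us-east-1 us-east-1"
```

Deriving the regions this way keeps aws.json down to three or four values, instead of us having to remember to keep a separate region setting in sync.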

Authentication

Now that we have configured Amplify, we can start using its methods. The first thing we want to do is enable users to log in. For this, in my application, I defined a form with a field for username and another for password. Once the “Login” button is clicked, I call a function that communicates with Cognito and creates the JWT. We need to store the token locally and send it on subsequent requests to our API, so we do this inside the login function:

import { Auth } from "aws-amplify";

await Auth.signIn(username, password);

And that’s it! This function automatically communicates with Cognito and stores the token locally. We don’t need to do anything else, except perhaps catch an error that could mean the credentials entered were incorrect. The logic for my login form was placed here.

List Comments

Now that we are logged in, the next thing is to list all the comments that are in DynamoDB, so we need to trigger the list comments Lambda function. We do the following:

import { API } from "aws-amplify";

refreshComments() {
  API.get('comments', '/comments').then(this.retrieveCallback);
}

retrieveCallback(data) {
  // Parameter data contains the array of comments
  // retrieved from the database.
}

Again, so simple it doesn’t look right… but it is. We send a GET request to the ‘comments’ API defined in the Amplify configuration, and specify the route we will be using. Think of the first parameter as what determines the hostname of that API, since the URL is defined automatically and we don’t have a say (at least under this config) in what it looks like. The second parameter is the route that gets appended to that hostname. Once the request is successful, the results are passed to a function called retrieveCallback, which receives the array of comments as its parameter so we can use it however we like.

Now, we said we need to send the JWT so that ApiGateway knows we are authenticated and can check our permissions for that route. Well, guess what? Amplify has got you covered; there is no need to do anything. Amplify automatically detects whether there is a previously created token and sends it as part of the request. So my file ended up looking like this.

Create Comment

The last feature of the UI is sending a new comment to be saved in our application. So I made a form that just contains a field with the text to be stored in the database.

const options = {
  headers: {
    'Content-Type': 'application/json',
  },
  body: {
    text: this.state.text,
  },
};

API.post('comments', '/comments', options).then(this.postCallback);

In this code we are basically defining the request sent to the API: the headers, which are important for ApiGateway to properly receive our data, and the body, which is just a key named text containing the value to be stored in the database. You can see in the last line how we send a POST request this time, to the same API and same path, since that is what we defined in the serverless.yml file. The response to that request is passed to the postCallback function, which can handle any exceptions if needed. You can find that logic in this file of my repo.
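For completeness, here is one hypothetical shape postCallback could take, with API.post stubbed so the promise chain can be followed end to end (the real component in the repo differs):

```javascript
// Stub standing in for Amplify's API.post: resolves with a success payload.
const API = {
  post: (apiName, path, options) =>
    Promise.resolve({ status: true, item: options.body }),
};

// Hypothetical callback: refresh the list on success, surface errors otherwise.
function postCallback(response) {
  return response.status ? "refreshed" : "error shown";
}

API.post("comments", "/comments", { body: { text: "hi" } })
  .then(postCallback)
  .then((outcome) => console.log(outcome)); // "refreshed"
```

In the React component the success branch would call refreshComments() so the new comment appears in the list immediately.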

Connecting API and Client

Remember in the AWS Amplify section we mentioned that the Amplify configuration needs to be generated automatically during deployment of the API; well, that is the only thing we are missing now. For this we will use a plugin named serverless-stack-output, which does exactly that for us. We just need to include the plugin in our serverless file and configure the name of the file where the configuration will be stored.

plugins:
  - serverless-stack-output

custom:
  output:
    file: ../client/config/aws.json

Don’t forget to install the plugin using npm and save it in your package.json file. The serverless.yml file will end up looking something like this.

Deploying to AWS

“One-Click” Deployment

Now, since we are trying to create a single-command deployment to make it easy and fast to deploy, I will use npm to run the necessary commands automatically. If you are following a similar structure to my repo, you can probably just copy-paste the scripts I created. This is what my structure looks like:

api/
  package.json
client/
  package.json
package.json

At the root level I have a package.json with the following scripts:

"api:deploy": "cd api && npm run deploy",
"api:install": "cd api && npm install",
"api:remove": "cd api && npm run deploy:remove",
"client:deploy": "cd client && npm run deploy",
"client:install": "cd client && npm install",
"client:remove": "cd client && npm run deploy:remove",
"deploy": "npm install && npm run api:deploy && npm run client:deploy",
"deploy:remove": "npm run api:remove && npm run client:remove",
"install": "npm run api:install && npm run client:install",

Inside my api folder, I have another package.json with the following:

"deploy": "rm -f ../client/config/aws.json && sls deploy",
"deploy:remove": "rm -f ../client/config/aws.json && sls remove"

And I’ve got a third package.json in the client folder with these scripts:

"build:dist": "webpack --config webpack.prod.config.js -p",
"deploy": "npm run build:dist && sls client deploy --no-confirm",
"deploy:remove": "sls client remove --no-confirm",

Triggering Deployment

Finally, triggering the deployment is simple, just do:

npm run deploy

Once done, you will see in the output the URL you can hit to go and finally give your service a try. If you want to clear all the resources created on your AWS account, you can also trigger a cleanup by running:

npm run deploy:remove

Additional Information

More information about the setup and my project can be found in the GitHub repo I made. It might not be perfect and may have a couple of flaws, but my goal was to give a concise and clear example of how “simple” it is to get a service running.

https://github.com/vikonava/twitterish-serverless-demo/blob/master/README.md
