AWS Lambda (Part 2)

Deepika Aggarwal
Nov 8, 2020

I hope you have read my first article, AWS Lambda, so that you have a base foundation in the AWS Lambda service. As I said in that post, we will now practically implement the solution described there. So let's do it. Before proceeding further, take a step back and create an AWS account if you don't already have one. Don't worry, guys, it is free for one year under the AWS Free Tier. :)

  1. First of all, we need an S3 bucket. We can create a new one or reuse an existing bucket. Whenever a new object is uploaded to this bucket, the Lambda function will be triggered.

Steps to make an S3 bucket:-

  1. Log in to the AWS console, search for the S3 service, and click on Create bucket.
  2. Provide a bucket name. Here I am creating a basic S3 bucket with default settings, except that we need to unselect the 'Block all public access' option while creating the bucket (a scripted alternative is sketched below the image).
AWS interface to create an S3 bucket
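
If you prefer to script this step, here is a minimal boto3 sketch. The bucket name my-lambda-trigger-bucket is just an example I'm assuming for illustration; bucket names must be globally unique.

import boto3

s3 = boto3.client('s3')

# Example bucket name; S3 bucket names must be globally unique.
# Outside us-east-1 you must also pass a CreateBucketConfiguration
# with your region's LocationConstraint.
s3.create_bucket(Bucket='my-lambda-trigger-bucket')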

2. Our Lambda function will access DynamoDB, so we need to create an IAM role with DynamoDB permissions and attach it to our Lambda function.

To create an IAM role, choose IAM from the AWS dashboard and click on Create role. Select Lambda as the service that will use the role and click on Next. In the policy filter, search for and select AmazonDynamoDBFullAccess, then click on Next, and Next again.

Here, enter a role name and click on Create role. Our IAM role is now created.
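
The same role can be created from code. This is a rough boto3 sketch, with lambda-dynamodb-role as an assumed example name; the trust policy is what lets the Lambda service assume the role.

import json
import boto3

iam = boto3.client('iam')

# Trust policy allowing the Lambda service to assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# 'lambda-dynamodb-role' is an example name; choose your own
iam.create_role(
    RoleName='lambda-dynamodb-role',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the same managed policy used in the console walkthrough
iam.attach_role_policy(
    RoleName='lambda-dynamodb-role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess',
)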

3. Now we will create our Lambda function. Steps to create the Lambda function are (a scripted alternative follows the list):-

  1. Select Lambda from Services and click on Create function.
  2. Select Author from scratch and start entering the basic information.
  3. Enter a function name of your choice.
  4. Select a Runtime (the runtime is basically the language in which we want to write our Lambda function).
  5. Go to Permissions, select Use an existing role, and pick the IAM role we just created from the dropdown.
  6. Now click on Create function.
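
For completeness, the function can also be created programmatically. This is only a sketch, assuming your handler code lives in a local file named lambda_function.py and using placeholder names and a placeholder role ARN:

import io
import zipfile
import boto3

# Zip the handler file in memory (assumes lambda_function.py exists locally)
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.write('lambda_function.py')
buf.seek(0)

client = boto3.client('lambda')
client.create_function(
    FunctionName='s3-to-dynamodb',  # example name
    Runtime='python3.8',
    # Placeholder account id; use the ARN of the role created above
    Role='arn:aws:iam::123456789012:role/lambda-dynamodb-role',
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': buf.read()},
)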

4. Now we need to add a trigger through which the Lambda function will be invoked.

To add the trigger, click on Add trigger and select S3. Then select the bucket name and the events on which we want to trigger the Lambda function.

Click on Add. The trigger has now been created.
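
It helps to know the shape of the event S3 sends to our function, since the handler below reads these fields. Here is a trimmed, illustrative sample; the field names are the real ones from S3 event notifications, but the values are made up:

# Trimmed sample of an S3 "ObjectCreated" event as Lambda receives it
event = {
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "eventTime": "2020-11-08T10:00:00.000Z",
            "s3": {
                "bucket": {"name": "my-lambda-trigger-bucket"},
                "object": {"key": "photo.jpg", "size": 1024},
            },
        }
    ]
}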

5. Now our next step is to deploy our function code. For reference, below is the Python function we are going to use. You just need to replace the table name with the name of the table you created in DynamoDB to save the data.

Function:-

import boto3
from uuid import uuid4

def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    # 'newtable' is the table name I created in DynamoDB to save the data
    dynamoTable = dynamodb.Table('newtable')
    for record in event['Records']:
        # Pull the bucket, object, and event details out of each S3 record
        bucket_name = record['s3']['bucket']['name']
        object_key = record['s3']['object']['key']
        size = record['s3']['object'].get('size', -1)
        event_name = record['eventName']
        event_time = record['eventTime']
        # Write one item per record; uuid4 gives each item a unique key
        dynamoTable.put_item(
            Item={
                'unique': str(uuid4()),
                'Bucket': bucket_name,
                'Object': object_key,
                'Size': size,
                'Event': event_name,
                'EventTime': event_time,
            }
        )
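
Note that the handler assumes the DynamoDB table already exists. If you haven't created it yet, here is a minimal sketch; the partition key must match the 'unique' attribute used in put_item, and on-demand billing is just one choice:

import boto3

dynamodb = boto3.resource('dynamodb')

# Partition key 'unique' (string) matches the attribute written by put_item
table = dynamodb.create_table(
    TableName='newtable',
    KeySchema=[{'AttributeName': 'unique', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'unique', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST',  # on-demand capacity
)
table.wait_until_exists()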

Write this function in the editor and click on Deploy.

That is all we need to do. Our Lambda function is ready, and now we can test it. Just upload any file to the S3 bucket and check the DynamoDB table. You will see a new row created for each put operation in S3.
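
To exercise it end to end from code rather than the console, a quick sketch (the file, bucket, and table names are the examples assumed above):

import boto3

s3 = boto3.client('s3')
# Uploading any object fires the trigger (example file and bucket names)
s3.upload_file('hello.txt', 'my-lambda-trigger-bucket', 'hello.txt')

# A scan is fine for a small demo table; each upload should add one item
dynamodb = boto3.resource('dynamodb')
items = dynamodb.Table('newtable').scan()['Items']
print(items)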

DynamoDB table structure

Congratulations, guys, we have made our first Lambda function. It is a very basic one, but I hope you have understood it. I will post some more examples. Stay tuned. Stay healthy.
