Writing backend code – like web services or anything else really – with AWS Lambda functions is amazingly easy, particularly when you choose Node.js as your weapon of choice. The amount of code required to get going is so small, it almost feels like magic. However, as you build out your Lambda, complexity will quickly rear its head, and you'll soon feel the need to add some tests.

Unit testing is part of any good developer's workflow, but I feel it's especially important when dealing with dynamically typed languages like vanilla JavaScript. Its loose typing makes development fast, but it also introduces a degree of uncertainty when making changes or refactoring. Good test coverage can make up for this, and it can allow you to work faster. If you're able to mock your Lambda's dependencies, you can be reasonably confident that a passing unit test is representative of the eventual production code.

Dependency Injection

“Dependency Injection” is the somewhat intimidating term used in software engineering to describe something quite simple:

Dependency injection is a programming technique that makes a class independent of its dependencies. It achieves that by decoupling the usage of an object from its creation.

It’s most useful when applied in the context of unit testing, because it enables you to mock dependencies that shouldn’t be active during tests.

In Node.js Lambda functions, dependencies are imported using the require() function, whose result is typically assigned to a constant pointing at the outside code. By default, you'll do this at the top level of your Node.js file, effectively making the dependency globally accessible to that file. Consider this snippet, where we import the AWS SDK and create a new instance of the DynamoDB DocumentClient:

const AWS = require('aws-sdk')
const documentClient = new AWS.DynamoDB.DocumentClient()

What happens when you unit test code that imports the above dependency? Your test will construct a real DocumentClient and potentially start reading from and writing to the live database! While you could argue this is a test in and of itself, the situation is far from ideal. Each unit test invocation will

  • potentially incur costs
  • write data to a live database, possibly messing up its consistency
  • be slow
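
To see why, note that merely requiring the module from a test file is enough to execute its top-level code, before a single test has run:

// test.js
// Requiring the module runs its top level, constructing a real
// DocumentClient. Any handler call made through it hits the live service.
const lambda = require('../index')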

Richard Hyatt’s Medium post from 2016 is still relevant today, as it describes how we can make dependency loading asynchronous and injectable by using the exports object to store and reference dependencies.

exports.deps = () => {
  const AWS = require('aws-sdk')
  const documentClient = new AWS.DynamoDB.DocumentClient()

  return Promise.resolve({
    dynamoBatchWrite: params => documentClient.batchWrite(params).promise()
  })
}

The actual dependency import is enclosed in the deps function's scope, and the resulting object is wrapped in a Promise. Because dependencies are now resolved through exports.deps, tests can overwrite that function with one that returns mocks, while production code uses it as-is.

The production code will just await the dependencies at the top, after which you’ll be able to access the fully constructed dependencies:

exports.handler = async event => {
  const deps = await exports.deps()
  ...
}
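
To make that concrete, here's a sketch of what a full handler might look like. Everything beyond the deps call is hypothetical: the "feeds" table name, the event shape, and the response format are illustrative, not from the original code.

exports.handler = async event => {
  const { dynamoBatchWrite } = await exports.deps()

  // Hypothetical: batch-write the event's items to a "feeds" table.
  const params = {
    RequestItems: {
      feeds: event.items.map(item => ({
        PutRequest: { Item: item }
      }))
    }
  }

  const result = await dynamoBatchWrite(params)
  return { statusCode: 200, body: JSON.stringify(result) }
}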

Now, for the test:

require('chai').should()
const lambda = require('../index')
const sinon = require('sinon')

describe('importOPML', () => {
  beforeEach('mock dependencies', () => {
    const mockWriter = sinon.mock()
    mockWriter.resolves({ UnprocessedItems: [] })

    lambda.deps = () => Promise.resolve({
      dynamoBatchWrite: mockWriter
    })
  })

  it('should succeed with empty opml', async () => {
    // Invoking the handler here will use the mocked DynamoDB writer.
  })
})

This happens to be a Chai test that uses Sinon for mocking, but the premise is the same with any framework: before each test runs, the beforeEach block preps the lambda with the mock dependencies.
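
The empty test body above could then be filled in along these lines. The event shape and the response assertion are assumptions matching the hypothetical handler sketched earlier, since the real handler body was elided:

it('should succeed with empty opml', async () => {
  // Hypothetical event shape; an empty item list means the mock writer
  // resolves immediately and nothing touches DynamoDB.
  const event = { items: [] }
  const result = await lambda.handler(event)
  result.statusCode.should.equal(200)
})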

That’s it. You’re off to the races!

Android device debugging on Linux Mint: “error: insufficient permissions for device: udev requires plugdev group membership”

As I rejoiced in the achievement of finally finding a Linux distro that plays well with my Dell XPS 13 (2019 edition, model 9380), I plugged in my Moto test device, intending to continue working on an Android app. Pressing the Run button promptly made Android Studio (3.4) do its compilation magic, but it was stopped in its tracks rather quickly. An angry-looking error message awaited me:

error: insufficient permissions for device: udev requires plugdev group membership

Oops, no device debugging for you.

Android Studio also left a note pointing me to their developer page on the subject. As I suspected, it seemed I had some configuration work left to get device debugging working on Linux. However, following Google's instructions on setting up adb didn't do much to resolve my problem. The part about adding yourself to the plugdev group is important, though, as it's linked to the actual solution described in the next paragraph.

sudo usermod -aG plugdev $LOGNAME

Adds your user to the plugdev group.
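
To check whether the change took effect (a new login session may be needed for it to show up), you can list your group memberships:

# Should list plugdev among your groups
groups $LOGNAME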

Update: Someone mentioned recently that the steps below may not actually be required; just logging out and back in again at this point should also do the trick. YMMV of course :-).

One ill-timed dog walk (let's just say I hadn't expected to be caught in the middle of a downpour) and a few mildly frustrated Google searches later, I ran across a blog post from 2013 (!) on the subject of “Adding udev rules for USB debugging Android devices”, by Janos Gyerik.

I will not pretend to know why this extra configuration is required, but the post describes looking up the device's USB vendor ID and adding a udev rule that hands the device over to the aforementioned plugdev group, so Android Studio can properly access the USB device.
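
A minimal sketch of what that boils down to, assuming a Motorola device (vendor ID 22b8); look up your own device's ID with lsusb first:

# Find your device's USB vendor ID (the four hex digits after "ID")
lsusb

# /etc/udev/rules.d/51-android.rules
# 22b8 is Motorola's vendor ID; substitute the one lsusb reported.
SUBSYSTEM=="usb", ATTR{idVendor}=="22b8", MODE="0660", GROUP="plugdev"

# Reload the rules, then unplug and replug the device.
sudo udevadm control --reload-rules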

And there you go, a working USB debugging connection to my Android device, thanks to some great advice from 2013!