Load Testing in Node.js with Artillery

Jul 14, 2022

Before diving in, it's worth understanding why testing matters in Node.js and how it helps developers understand the behaviour of the systems they build.

The points below summarise the need for and significance of testing a system.

  • In software quality assurance, performance testing is, in general, a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
  • Testing is an integral part of software development.
  • It’s common for programmers to run code that tests their application as they make changes in order to confirm it’s behaving as they’d like.
  • With the right test setup, this process can even be automated, saving a lot of time.
  • Running tests consistently after writing new code ensures that new changes don’t break pre-existing features.
  • This gives the developer confidence in their code base, especially when it gets deployed to production so users can interact with it.

Broadly, an application can be tested in two ways:

  • Manual Testing - Manual testing is a software testing process in which test cases are executed by a tester by hand, without any automated tools, from the end user's perspective. It verifies whether the application works as described in the requirements document. Test cases are planned and executed to cover as much of the application as possible, and test reports are produced manually as well. Manual testing is tedious and prone to human error.

  • Automated Testing - Automated testing is a software testing technique that uses dedicated tools to execute a suite of test cases automatically. It is an effective way to increase test coverage, execution speed, and overall effectiveness in software testing.

What is Artillery?
Artillery is an open-source command-line tool purpose-built for load testing and smoke testing applications. It is written in JavaScript and it supports testing HTTP, Socket.io, and WebSockets APIs.
Interesting, but what does it actually do? In a nutshell, Artillery is a load generator. That is, it sends a lot of requests—i.e., the load—to the server you specify, very quickly.

What follows is a non-exhaustive list of Artillery features:

  • Easy installation with npm. Since Artillery is a Node.js tool, you can easily install it using the npm utility.
  • Simple CLI interface. Unlike load testing tools with clunky GUIs, Artillery exposes a straightforward command-line interface.
  • Support for HTTP, Socket.io, WebSockets, and AWS Kinesis out of the box. Artillery allows you to test virtually any back-end service.
  • Easy usage. Since test scenarios are described in plain YAML, Artillery is easy to learn, even for non-technical people.
  • Easy Automation. You can easily integrate Artillery’s CLI into other scripts and CI solutions.
  • Easy extension. You can extend Artillery by writing custom engines, plugins, or reporters in JavaScript.

Installing Artillery for Node.js
Artillery is an npm package so you can install it through npm or yarn:
npm install -g artillery
You can then check whether the installation was successful by running the following command:
artillery -V
If everything went well, Artillery would display its version number.

Basic Artillery Usage
Once you've installed the Artillery CLI, you can start using it to send traffic to a web server. It provides a quick subcommand that lets you run a test without writing a test script first.

You'll need to specify the following (an example command follows the list):

  • an endpoint
  • the rate of virtual users per second or a fixed amount of virtual users
  • how many requests should be made per user
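
For instance, a minimal quick test against a hypothetical local endpoint could look like this (the URL is only a placeholder; --count sets the number of virtual users and --num the number of requests each user makes):

artillery quick --count 20 --num 10 http://localhost:4000/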

Writing Your First Artillery Test Script

In this section, I'll demonstrate a basic test configuration that you can apply to any application.
Artillery lets you run custom test scripts written in either YAML or JSON; as shown above, you can also perform a quick test without an external script.
Artillery can simulate realistic user behaviour with scenarios, whereas the quick command is only useful for very simple tests. Let's see how to run a YAML script and interpret the load testing results.

Below is my first script, which load-tests the create-message operation of a local project's API.
The script’s written in YAML and has been split into two main sections: config and scenarios.
The config section is where you choose the target of your load test (the address of the API server under test), specify any plugins that you would like to use, and define the load phases.
The scenarios section is where you define the virtual users' behaviour during the test. In this case there is a single scenario that sends a createMessage GraphQL mutation.

config:
  target: "http://localhost:4000"
  phases:
    - duration: 60
      arrivalRate: 60
      name: Warm up
    - duration: 120
      arrivalRate: 10
      rampTo: 25
      name: Ramp up load
    - duration: 1200
      arrivalRate: 25
      name: Cruise
    - duration: 30
      arrivalRate: 100
      name: Crash
  payload:
    # path is relative to the location of the test script
    -
      path: "user-auth.csv"
      fields:
        - "name"
        - "token"
    -    
      path: "instance-ids.csv"
      fields: 
        - "id"
scenarios:
  - name: "Create Message"
    flow:
      - post:
          url: "/graphql"
          headers:
            authorization: "{{ token }}"
          json:
            operationName: "createMessage"
            variables: { "data": { "instance": "{{id}}", "message": "New post by - {{name}}" } }
            query: "mutation createMessage($data: MessageCreateInput!) {\n  createMessage(data: $data) \n}\n"

Artillery also lets you inject custom data through a payload file in CSV format. For example, instead of hard-coding a specific name and token, you can keep a predefined list of values in a CSV file:

Dovie32@example.net,rwkWspKUKy
Allen.Fay@example.org,7BaFHbaWga
Jany30@example.org,CWvc6Bznnh
Dorris47@example.com,1vlT_02i6h
Imani.Spencer21@example.net,1N0PRraQU7

To access the data in this file, you need to reference it in the test script through the config.payload.path property and specify the names of the fields you'd like to access through config.payload.fields. The config.payload property provides several other options to configure its behavior, and it's also possible to specify multiple payload files in a single script, as the script above does.
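
For instance, here's a sketch of a single payload entry with a couple of those optional settings, assuming your CSV file has a header row and you want the rows used in order rather than picked at random:

  payload:
    -
      path: "user-auth.csv"
      fields:
        - "name"
        - "token"
      # optional settings (see the Artillery docs for the full list)
      order: sequence   # use rows in order instead of the default random selection
      skipHeader: true  # ignore the first row if the CSV contains a header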

Once you have the script set up, you can run the load test with the following command:

artillery run path_to_file/createMessage.yaml
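
As an aside, you can also save the raw results to a JSON file with the --output flag and, in many versions of Artillery, turn that file into an HTML report afterwards:

artillery run --output report.json path_to_file/createMessage.yaml
artillery report report.json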

Once the load test has completed, Artillery will print a summary report like the one below:

All virtual users finished
Summary report @ 17:30:13(+0530) 2021-12-27
  Scenarios launched:  2393
  Scenarios completed: 2008
  Requests completed:  2008
  Mean response/sec: 8.44
  Response time (msec):
    min: 1449
    max: 9944
    median: 2650
    p95: 8856.7
    p99: 9613.7
  Scenario counts:
    Create Message: 2393 (100%)
  Codes:
    200: 2008
  Errors:
    ETIMEDOUT: 385

As you can see, Artillery provides detailed metrics that help us analyse how our application behaves under load.

  • Scenarios launched: the total number of virtual user sessions created during the test.
  • Requests completed: the total number of HTTP requests (or WebSocket messages) that received a response.
  • Codes: the HTTP status codes of the responses received for completed requests.
  • Response time (msec): response times in milliseconds; p95 and p99 are the 95th and 99th percentile values.

We noticed that request latency degraded rapidly during the performance test. The minimum response time was 1449ms, and the median climbed to 2650ms. This was concerning, as we were only sending around 20 requests per second on average, and we expected the application to handle a much higher number of concurrent users.

Moreover, of the 2393 scenarios that were launched, only 2008 completed; the remaining 385 failed with ETIMEDOUT errors, indicating that the server timed out or was simply unable to handle the request.

These results provide the insight you need to know how your system behaved under load. Artillery provides the number of mean responses per second, along with all kinds of statistics on response times like minimum, maximum, median, and percentiles. It also shows the response codes received, so you can see if your API begins failing under stress.

With that, you've completed your first API load test using Artillery. With just a few lines of YAML, it's remarkably simple to create basic load tests for your systems.

The examples shown in this article serve as the basics of Artillery, and they only scratch the surface of what the toolkit can do for your applications. The Artillery documentation contains lots of additional information on what the toolkit can do for you.

Conclusion:
Part of verifying that your application's APIs work well includes making sure they perform well, especially with a lot of traffic. Load testing is an essential skill to have in your repertoire these days since speed matters for your system. If your application responds slowly, there's a good chance your users will leave and not come back.
You might think that load testing is a difficult skill to acquire, but you can get started rather quickly. For this article, I took a look at the Artillery toolkit. Artillery is a Node.js package that's easy to install and lets you set up basic load tests in minutes. However, it's also a fairly robust tool that goes beyond just hammering an API endpoint for a fixed amount of time. While this article covered plenty of ground, it only scratches the surface of what Artillery can do for you.
The examples from this article show how easy and straightforward it is to create a small test script to load-test your API. Artillery can throw a constant flow of virtual users to your defined endpoints and return all kinds of stats. You can also gradually ramp up or scale down the requests throughout a given period. Combining these methods improves your chance of detecting performance issues in a controlled environment.
