Tests and mocking the API

Running all tests

# Run all tests
yarn test

Unit tests with Jest

Running unit tests

# Run unit tests
yarn test:unit

# Run unit tests in watch mode
yarn test:unit:watch

Introduction to Jest

For unit tests, we use Jest with the describe/expect syntax. If you're not familiar with Jest, I recommend first browsing through the existing tests to get a sense for them.

Then, at the very least, read up on the core concepts in the Jest docs: matchers, testing asynchronous code, setup and teardown, and mock functions.
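
If it helps, here's a minimal sketch of the describe/expect style. The formatName function below is invented purely for illustration and is not part of this codebase:

// Hypothetical example, not from this codebase
const formatName = ({ first, last }) => `${last}, ${first}`

describe('formatName', () => {
  it('formats a name as "Last, First"', () => {
    expect(formatName({ first: 'Grace', last: 'Hopper' })).toBe('Hopper, Grace')
  })
})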

Unit test files

Configuration for Jest lives in jest.config.js and support files live in tests/unit, but the tests themselves are first-class citizens: they sit alongside our source files, using the same name as the file they test, but with the extension .unit.js.

This may seem strange at first, but it makes poor test coverage obvious at a glance, even to those less familiar with the project. It also lowers the barrier to adding tests before creating a new file, adding a new feature, or fixing a bug.
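
For example, given a hypothetical utility at src/utils/format-currency.js, its tests would sit right beside it:

// Hypothetical illustration of the naming convention:
//
//   src/utils/format-currency.js        <- the source file
//   src/utils/format-currency.unit.js   <- its unit tests, co-located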

Unit test helpers

See tests/unit/setup.js for a list of helpers, including documentation in comments.

Unit test mocks

Jest offers many tools for mocks, including:

  • For a function, use jest.fn().
  • For a source file, add the mock to a __mocks__ directory adjacent to the file.
  • For a dependency in node_modules, add the mock to tests/unit/__mocks__. You can see an example of this with the axios mock, which intercepts requests with relative URLs, directing them to a local or live API when the API_BASE_URL environment variable is set.
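
As a rough sketch of the first option, a jest.fn() mock records its calls and can return canned values. The fetchUser name below is hypothetical, purely for illustration:

describe('a jest.fn() mock', () => {
  it('records calls and returns a canned value', () => {
    // Hypothetical stand-in for a real data-fetching function
    const fetchUser = jest.fn().mockReturnValue({ name: 'Grace' })

    // Code under test would normally make this call
    const user = fetchUser(42)

    // The mock remembers how it was called
    expect(fetchUser).toHaveBeenCalledWith(42)
    expect(user.name).toBe('Grace')
  })
})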

End-to-end tests with Cypress

Running end-to-end tests

# Run end-to-end tests
yarn test:e2e

# Run the dev server with the Cypress client
yarn dev:e2e

Introduction to Cypress

Cypress offers many advantages over other test frameworks, including the ability to:

  • Travel through time to dissect the source of a problem when a test fails
  • Automatically record video and screenshots of your tests
  • Easily test in a wide range of screen sizes

And much more! I recommend checking out our Cypress tests in tests/e2e/specs, then reading through at least the introductory guides in the excellent Cypress docs.

Beyond that, also know that you can access our app in Cypress through the window object.
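
For example, a spec could look roughly like the sketch below. The app property name is a placeholder, not confirmed; check what our app actually attaches to window:

describe('window access (hypothetical sketch)', () => {
  it('can reach global app state through the window', () => {
    // Simulate a small screen, since Cypress makes viewport changes easy
    cy.viewport('iphone-6')
    cy.visit('/')

    // cy.window() yields the window object of the app under test
    cy.window()
      .its('app') // placeholder property name, adjust to what our app exposes
      .should('exist')
  })
})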

Accessibility-driven end-to-end tests

Ideally, tests should only fail when either:

  • something is actually broken, or
  • the requirements have changed

Unfortunately, there are a lot of ways to get this wrong. For example, when creating a selector for a login link:

cy.get('a')
// Too general, as there could be many links

cy.get('.login-link')
// Tied to implementation detail of CSS

cy.get('#login-link')
// Tied to implementation detail of JS and prevents component reusability

cy.contains('Log in')
// Assumes the text only appears in one context

To create the right selector, think from the perspective of the user. What exactly are they looking for? They're not looking for:

cy.get('a')
// Any link

cy.get('.login-link')
// An element with a specific class

cy.get('#login-link')
// An element with a specific id

cy.contains('Log in')
// Specific text anywhere on the page

But rather:

cy.contains('a', 'Log in')
// A link containing the text "Log in"

Note that we're targeting a semantic element, meaning that it tells the web browser (and users) something about the element's role within the page. Also note that we're trying to be as general as possible. We're not looking for the link in a specific place, like a navbar or sidebar (unless that's part of the requirements), and we're not overly specific with the content. The link may also contain other content, like an icon, but that won't break the test, because we only care that some link contains the text "Log in" somewhere inside it.

Now, some will be thinking:

"But isn't this brittle? Wouldn't it be better to add another attribute to the link, like data-testid="login-link? Then we could target that attribute and even if the element or content changes, the test won't break."

I would argue that if the link's semantic element or content changes so drastically that it's no longer an anchor and doesn't even contain the text "Log in" anymore, the requirements have changed, so the test should break. And from an accessibility perspective, the app might indeed be broken.

For example, let's imagine you replaced "Log in" with an icon:

<a href="/login">
  <span class="icon icon-login"></span>
</a>

Now users browsing your page with a screen reader will have no way to find the login link. From their perspective, this is just a link with no content. You may be tempted to try to fix the test with something like:

cy.get('a[href="/login"]')
// A link going to "/login"

But when you're trying to find a login link as a user, you don't just inspect the destination of unlabeled links until you find one that looks like it's possibly a login page. That would be a very slow and painful experience!

Instead, thinking from a user's perspective forces you to stay accessible, perhaps updating your generated HTML to:

<a aria-label="Log in" href="/login">
  <span
    aria-hidden="true"
    class="icon icon-login"
  ></span>
</a>

Then the selector in your test can update as well:

cy.get('a[aria-label*="Log in"]')
// A link with a label containing the text "Log in"

And the app now works for everyone:

  • Sighted users will see an icon that they'll (hopefully) have the cultural context to interpret as "Log in".
  • Non-sighted users get a label with the text "Log in" read to them.

This strategy could be called accessibility-driven end-to-end testing, because you're navigating your own app with the same mindset as your users. It happens to be great for accessibility, but it also helps ensure that your tests break when requirements change, and never when you've merely changed the implementation.