You Shouldn’t Use Faker (or other test randomization libraries)

It's a common, and distressing, pattern to have factories in tests that call out to a library like Faker to "randomize" the returned object. So you might see something like this:

var faker = require('faker');
var uuid = require('uuid').v4;

User.create({
    id: uuid(),
    email: faker.internet.email(),
    first_name: faker.name.firstName(),
    last_name: faker.name.lastName(),
    address: faker.address.streetAddress(),
});

var pkg = Package.create({
    width: faker.random.number({ max: 100 }),
    height: faker.random.number({ max: 200 }),
    length: faker.random.number({ max: 300 }),
});

This is a bad practice, yet it's in widespread use. I want to explain some of the reasons why.

The corpus is not large enough to avoid uniqueness failures

Let's say you write a test that creates two different users, and for whatever reason the test fails if the two users have the same name (a database uniqueness constraint trips, both users write to the same map key, an array ends up with one value instead of two, &c). Faker has about three thousand possible values for a first name, which means that on average the randomly chosen names will collide once every 3,000 runs and your test will fail. In a CI environment or on a medium-sized team, it's easy to run the tests 3,000 times in a week. And if the same error exists in ten tests, it will surface on average once every 300 runs.
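To make the arithmetic concrete, here is a quick simulation sketch (not from faker itself; the 3,000-name corpus size is the estimate above):

var CORPUS_SIZE = 3000; // approximate number of first names in faker
var RUNS = 1000000;
var collisions = 0;
for (var i = 0; i < RUNS; i++) {
    var a = Math.floor(Math.random() * CORPUS_SIZE);
    var b = Math.floor(Math.random() * CORPUS_SIZE);
    if (a === b) {
        collisions++;
    }
}
// prints something close to "1 collision per 3000 runs"
console.log('1 collision per ' + Math.round(RUNS / collisions) + ' runs');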

The onus on catching failures falls on your team

Because the error appears only once in every 3,000 test runs, the odds are you've only partially probed the space of possible test values by the time you submit your code and tests for review. This means the other members of your team will be the ones who run into the error caused by your tests, and they may lack the context necessary to debug it effectively. This is a very frustrating experience for your teammates.

It's hard to reproduce failures

If you don't have enough logging to see the problem on the first failure, it's going to be difficult to track down a failure that appears only once every 3,000 test runs. Run the test ten or even fifty times in a loop and it still won't appear.
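If you're stuck with faker in an existing suite, one partial mitigation (a commenter below makes the same point about seeds) is to seed its random number generator so a failing run can at least be replayed. A minimal sketch, where the FAKER_SEED variable name is just an illustrative convention:

var faker = require('faker');
// log the seed on every run; reuse it via FAKER_SEED to replay a failure
var seed = process.env.FAKER_SEED ? Number(process.env.FAKER_SEED) : Date.now();
console.log('faker seed: ' + seed);
faker.seed(seed);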

An environment where tests randomly fail for unknown reasons is corrosive to build stability. It encourages people to just hit the "Rebuild" button and hope for a pass on the next try. Eventually this attitude will mask real problems with the build. If your test suite takes a long time to run (upwards of three minutes) it can be demoralizing to push the deploy or rebase button and watch your tests fail for an unrelated reason.

It's a large library (and growing)

It's common for me to write Node.js tests and have an entire test file with 50 or 100 tests exit in about 10 milliseconds.

By definition a "fixture" library needs to load a lot of fake data. In 2015 it took about 60 milliseconds to import faker. It now takes about 300 milliseconds:

// faker-testing/index.js
var start = Date.now();
require('faker');
console.log(Date.now() - start);

$ node faker-testing/index.js
303

You can hack around this by only importing the locale you need, but I've never seen anyone do this in practice.
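For reference, the locale-specific import looks like this (this is faker's documented entry point; the exact savings will vary by version and machine):

// load only the English locale data instead of every locale
var faker = require('faker/locale/en');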

What you need instead

So faker is a bad idea. What should you use? It depends on your use case.

  • Unique random identifiers. If you need a random unique identifier, just use a uuid generator directly. Even if you created 103 trillion objects in a given test, the odds of a uuid collision would still be only one in a billion, far lower than with faker's corpus. Alternatively, you can use an integer that increments each time you call your factory, so you get "Person 0", "Person 1", "Person 2", etc. (see the first sketch after this list).

  • A fuzzer, or QuickCheck. Faker has tools for randomizing values within some input space, say a package width that varies between 0 and 100. The problem is that by default each test run samples only a single value, so you're barely covering the input space.

    What you really want is a fuzzer that will generate a ton of random values within a set of parameters and test a significant percentage of them each time the test runs. This gives a much stronger guarantee that there are no errors. Alternatively, if there are known edge cases, you can add explicit tests for those edge cases (negative, zero, one, seven, max, or whatever). Tools like Haskell's QuickCheck or afl-fuzz make it easy to add this type of testing to your test suite (see the second sketch below).
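Here's a minimal sketch of the first suggestion. The makeUser factory and its fields are hypothetical; the uuid package and the counter pattern are the standard pieces:

var { v4: uuid } = require('uuid');

var counter = 0;
function makeUser(overrides) {
    var i = counter++;
    return Object.assign({
        id: uuid(), // 122 random bits; collisions are not a practical concern
        email: 'person' + i + '@example.com',
        first_name: 'Person ' + i, // "Person 0", "Person 1", "Person 2", ...
    }, overrides);
}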
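And a sketch of the second suggestion: a hand-rolled property test that sweeps the input space on every run. (fast-check is the usual QuickCheck-style library in JavaScript if you want generators and shrinking; the checkVolume property here is made up for illustration.)

function checkVolume(width, height, length) {
    var volume = width * height * length;
    if (!(volume >= 0)) {
        throw new Error('bad volume for ' + width + 'x' + height + 'x' + length);
    }
}

// known edge cases first, then a broad random sweep on every run
[0, 1, 7, 100].forEach(function (w) { checkVolume(w, 10, 10); });
for (var i = 0; i < 10000; i++) {
    checkVolume(Math.random() * 100, Math.random() * 200, Math.random() * 300);
}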

If you want random human-looking data, I suggest generating it yourself using a scheme that makes sense for your company. For example: users named Alice and Bob are always friends who complete actions successfully, and Eve always makes errors. Riders with Android phones always want to get picked up in Golden Gate Park and taken to the Mission; riders with iPhones want to get picked up in the Tenderloin and taken to Russian Hill (or something). This scheme carries more semantic value and pinpoints errors quickly (a "Bob" error means we should have submitted something successfully but failed to, for example).
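In code, that scheme might look something like this (the personas and fields are illustrative, following the convention described above):

var PERSONAS = {
    alice: { name: 'Alice', succeeds: true },  // always completes actions
    bob:   { name: 'Bob',   succeeds: true },  // always completes actions
    eve:   { name: 'Eve',   succeeds: false }, // always errors
};

function makeRider(persona, phone) {
    // e.g. makeRider('bob', 'android') => a successful Golden Gate Park pickup
    return Object.assign({ phone: phone }, PERSONAS[persona]);
}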

That's all; happy testing, and may your tests never be flaky!


3 thoughts on “You Shouldn’t Use Faker (or other test randomization libraries)”

  1. Christian

Do people really use faker libraries for automated tests without a seed? With seeds you can generate huge amounts of data in a reproducible way. I also think that faker libraries are very good for setting up test systems for manual tests. For example, you need a huge amount of data for usability testing of search and auto-complete features.

  2. Jorin

    Hi, I definitely agree with you about not having random behavior in your test suite.
    A non-predictable test suite can be the source of a lot of suffering. Tests need to be reproducible to be able to identify bugs; otherwise you might as well save the time of writing them altogether.

    However, I think that generating random data still has valid use cases in other places:
    You can use a tool like faker to create a large set of test data, save it to a file, and commit it with your tests. These are sometimes also referred to as Golden Files.
    Automating the creation of test cases can definitely be useful for complicated logic.

    Fuzzers like QuickCheck are another great tool to find even more edge cases.
    Since running a fuzzer is rather expensive in terms of computing, you probably don’t want it as part of your standard test suite. Instead, the fuzzing results should be added to the corresponding unit tests.
    While fuzzing is great to find additional edge cases, you want to have a decent initial data set that guides the fuzzer to look in the right direction to find any meaningful cases in a tolerable time frame.

