Suppose you start introducing some functional flavour into your code. And, you like it. The code is more expressive. It’s easier to test. It’s easier to pull apart and reconfigure. On several measures, it’s better code than you used to write.

There’s a problem, though. The team have been communicating, with not-so-subtle hints, that they don’t like it. Whether it’s through PR feedback or comments in stand-up, the message comes through loud and clear:

  • “Functional JavaScript is slow.”
  • “I don’t think this will scale well.”
  • “We fully expect to rewrite this when we hit performance issues.”

What do you do? Do you rewrite everything in spaghetti style to create the illusion of performance? Or is there a chance to do something more intelligent here?

The first thing to consider is that the team might have a point. It’s easy to trip up and write poorly performing functional code. And some functional techniques don’t work well with JS runtime engines. But let’s be fair. This is no different to writing any code in any language or style. Some ways of writing things will be faster than others. And there will be some convenient approaches that work, but perform horribly. We’re constantly striking a balance between well-factored code and well-performing code. This is what good software engineers do.

Still, we’ve asserted that there are pitfalls peculiar to functional JavaScript. What are they? And how do we avoid them?

Be careful with heuristics and rules of thumb

When it comes to JavaScript, talking about performance is tricky. As soon as you think you know something, a new browser version comes out and turns it all upside down. Consider, for example, the built-in .map() method for JavaScript arrays.

const myNewArray = someOtherArray.map(transformTheThing);

It’s simple and expressive. And for a long time, we knew that using the array method was always slower than the following:

const fasterMap = (arr, transform) => {
    const len = arr.length;
    // Pre-allocate the output array so the engine never has to resize it.
    const newArr = new Array(len);
    for (let i = 0; i < len; i++) {
        newArr[i] = transform(arr[i]);
    }
    return newArr;
};

const myNewArray = fasterMap(someOtherArray, transformTheThing);

This fasterMap() approach used to be faster 100% of the time. And it still is, in browsers that use the V8 engine (like Chrome, Edge, etc.). But in Safari at least, it gets complicated:

  • On my 2019 MacBook Pro, fasterMap() is slower than .map();
  • On my iPhone XS, fasterMap() is also slower than .map(); but,
  • On my iPad Mini (5th Generation), fasterMap() is slower until there are around 500 elements in the array. After that, it’s dramatically faster (see Table 1 at the end of this article).

All this with one browser. It gets even more complex when you consider different browsers on different devices.

Because of these complications, it pays to measure. And the closer you can measure what actual, real users are experiencing, the better. Browsers and devices will continue to change. And, over time, our habits, heuristics, and rules of thumb lose their utility.
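
To make that concrete, here’s a minimal sketch of how we might time the two implementations ourselves. The timeIt() helper is made up for illustration — a real measurement would use warm-up runs, many samples, and ideally telemetry from real users:

// A crude timing helper: runs fn() `iterations` times and reports the
// average time per call in microseconds. Good enough for a rough local
// comparison; not a substitute for proper benchmarking tools.
const timeIt = (label, fn, iterations = 1000) => {
    const start = performance.now();
    for (let i = 0; i < iterations; i++) fn();
    const elapsedMs = performance.now() - start;
    console.log(`${label}: ${((elapsedMs * 1000) / iterations).toFixed(3)} µs/op`);
};

const sampleData = Array.from({ length: 610 }, (_, i) => i);
const double = (x) => x * 2;

timeIt('built-in .map()', () => sampleData.map(double));
timeIt('fasterMap()', () => fasterMap(sampleData, double));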

Avoiding pitfalls

We can find plenty of cases where taking a naïve functional approach can get us into trouble, though. Delving into them all is beyond the scope of this article. But one of the most common pitfalls is doing a lot of object copying in a reducer function. For example, suppose we’re writing a function that takes an array of objects. It pulls a specified property out of each object and uses those values as the keys of a new object. We’ll call it keyBy(). The well-intentioned but not-so-performant approach might look like the following:

// This is a bad example, don't copy it.
const keyBy = (key, arr) => arr.reduce((obj, item) => ({
    ...obj,
    [item[key]]: item,
}), {});

This code is neat and concise. The trouble is, though, that for every item in the array, we make a fresh copy of obj. And as the array gets bigger, that object we’re copying gets larger and larger. And we make more and more copies. For an array of n items, that adds up to roughly n²/2 property copies. It ends up doing a lot of unnecessary work.

The person who wrote it likely felt like they were doing the right thing, though. They were trying to keep things safe by avoiding mutation. In some functional languages like, say, Haskell, it’s all immutable data, all the time. So you can’t mutate a record/object/map, even if you want to. If you never mutate anything, you never accidentally update shared state. And that means no nasty surprises where data is changing in unexpected ways.

That’s all well and good. But the trouble is, there’s nothing wrong with mutating state. We only get into trouble if we mutate shared state.

That’s an important distinction, so it’s worth repeating:

  • Sharing state is fine, so long as we don’t mutate it.
  • Mutating state is fine, so long as we don’t share it.
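
Here’s a minimal sketch of the difference (the function and property names are made up for illustration):

// Dangerous: mutates an argument the caller still holds a reference to.
// The caller's object changes out from under them — shared, mutated state.
const addTotalUnsafe = (order) => {
  order.total = order.price * order.quantity;
  return order;
};

// Fine: copies first, then mutates. The copy is local until we return it,
// so we never mutate anything that's shared.
const addTotalSafe = (order) => {
  const copy = { ...order };
  copy.total = copy.price * copy.quantity;
  return copy;
};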

We can fix our keyBy() example by introducing a small amount of mutation:

const keyBy = (key, arr) => {
  // A brand-new object, created fresh on every call. Nothing else
  // holds a reference to it, so mutating it is safe.
  const init = {};
  const keyByReducer = (acc, item) => {
    acc[item[key]] = item; // mutate in place instead of copying
    return acc;
  };
  return arr.reduce(keyByReducer, init);
};
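
To see what keyBy() produces, here’s a quick usage example (the user data is made up):

const users = [
  { id: 'a1', name: 'Aretha' },
  { id: 'b2', name: 'Benny' },
];

const usersById = keyBy('id', users);
// → {
//     a1: { id: 'a1', name: 'Aretha' },
//     b2: { id: 'b2', name: 'Benny' },
//   }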

In this version, we’re mutating the init object, since it’s passed by reference to keyByReducer(). But we’re not mutating any shared data. We don’t change any of the arguments, key or arr. And we don’t reach outside the function scope and change any global variables, either.

Wait a moment, though. We mutate the init object. And then return it. Isn’t that sharing mutable state?

It’s not. Because the function doesn’t keep any reference to the return value. Once the function returns, it’s gone. It can never touch that object again.

What about keyByReducer() though? It’s modifying one of its arguments, every time it’s called. How is that okay?

It would not be okay if we used keyByReducer() anywhere outside keyBy(). But we don’t. It exists solely within the scope of keyBy(). It doesn’t reach outside keyBy()’s scope. It modifies one variable, init, which ends up becoming our return value. And, as we discussed, once the function returns, it’s gone. It’s no longer shared.

In short, there’s no sharing of mutated data outside the scope of keyBy(). And, since we can verify that this function doesn’t cause any side effects, we know it’s a pure function.

So what?

Okay, so we’ve managed to describe one pitfall. And we’ve talked about one way to mitigate it. Big deal. Aren’t there thousands of other ways functional code might mess up performance?

There may well be. But our keyBy() and fasterMap() examples illustrate an important point. In both cases, our alternative version is a pure function. And (if we choose to), we can verify that the alternative version produces the same results. Because of this, we can swap one version for the other with confidence. We know no side effects are going on. And this means we don’t have to go and check the rest of the codebase, just in case something else is affected. We know that swapping one version for the other won’t change the application’s behaviour. That is, other than making it faster or slower (or both, depending on circumstances).
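
As a sketch of what that verification might look like — assuming we’ve kept the slow spread-based version around under the hypothetical name keyBySlow() — we can check that both implementations agree on some sample inputs:

// Informal equivalence check. Both versions insert keys in the same
// (array) order, so comparing JSON output is a reasonable quick test;
// a property-based testing library would be more thorough.
// keyBySlow() is assumed to be the earlier spread-based version.
const agree = (key, arr) =>
  JSON.stringify(keyBy(key, arr)) === JSON.stringify(keyBySlow(key, arr));

console.assert(agree('id', []), 'should agree on an empty array');
console.assert(
  agree('id', [{ id: 'a', v: 1 }, { id: 'b', v: 2 }]),
  'should agree on a small sample'
);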

This is where functional programming helps, rather than hinders, performance. Because working with pure functions ensures it’s safe to switch implementations. And a functional style makes it easier to swap out different parts. One small improvement to a utility function can improve performance in many locations. And do so in a single, isolated change.

Of course, the rest of the team may not be aware of all this. And they’re unlikely to read this article. What, then, do we do in the meantime?

If you are copping some of those not-so-subtle hints, first consider that they may be right. There may well be performance bottlenecks in your functional code. But this may also be an opportunity to show the strength of functional programming. See if you can pinpoint the bottleneck, and swap out the problem code with a faster pure function. You’ll then be able to show, with data, how functional programming can help make code faster. Not to mention more maintainable and expressive.


P.S.: I mentioned above that an article like this can’t delve deep into potential performance issues. But, if you want to know more about how to deal with scenarios like this, I’ve written a book that may help. It’s called A Skeptic’s Guide to Functional Programming with JavaScript.


[Chart: comparison of the two array map implementations on an iPhone, an iPad, and a MacBook Pro. See Table 1 below for the underlying data.]

Table 1: Comparison of array map implementations (microseconds per operation)

            iPhone XS               iPad Mini (5th gen)     2019 MacBook Pro
            (iOS 16.0.1)            (iOS 15.6.1)            (Safari 16.0)
Elements    fasterMap()   .map()    fasterMap()   .map()    fasterMap()   .map()
      10    0.147         0.057     0.134         0.057     0.174         0.062
     210    3.001         0.573     2.825         0.648     3.308         0.550
     410    6.214         1.026     5.918         1.096     6.787         1.086
     610    9.294         1.503     8.597         22.277    9.880         1.513
     810    12.263        2.109     11.588        30.129    13.448        2.112
    1010    14.623        1.686     13.206        36.881    16.824        2.478