In-Place Algorithms; what are they, and why use them?

Sean Welsh Brown - Jul 22 '20 - - Dev Community

In the world of computer science, one of the most important aspects of algorithmic thinking is optimization, both of system memory and of runtime complexity.

Every additional loop run within a function, or additional data structure created by it, increases that function's storage and runtime footprint. This may not be noticeable on a small scale and likely won't affect small applications, but the effect compounds, and it can and will affect large applications that make thousands or millions of function calls.

A good example to demonstrate this effect is an in-place algorithm.

An in-place algorithm is an algorithm which transforms its input using no auxiliary data structure. Essentially, this means that if an in-place algorithm is written to take in an array as an argument, it must modify that array in its original place in memory; no extra arrays, objects, or pointers in memory may be used.

The strictest definition of an in-place algorithm requires a constant and predictable amount of memory to be used, although the limitations this imposes can be more restrictive than they are productive. Looser definitions allow for auxiliary variables and function calls to aid in calculation/incrementation.
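For instance, a classic in-place operation is reversing an array: a couple of index variables and one temporary value are all it takes, which fits comfortably within even the looser definition above. Here's a minimal JavaScript sketch (the function name is just illustrative):

function reverseInPlace(arr) {
  let left = 0;
  let right = arr.length - 1;

  while (left < right) {
    // Swap the two ends using a single temporary variable (constant extra memory)
    const temp = arr[left];
    arr[left] = arr[right];
    arr[right] = temp;
    left++;
    right--;
  }

  return arr; // the same array object, modified in place
}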

The reason this is important is that allocating and freeing memory for additional data structures, particularly arrays and objects, is relatively expensive. If we can modify an array in place, without creating additional and unnecessary copies, we achieve tangible savings in memory usage and runtime.


Let's put this in context with a common interview algorithm: removing duplicates from a sorted array and returning the number of unique elements.

Say we're given the following array a as input in a JavaScript context:

const a = [0, 0, 1, 1, 2, 2, 3, 3]

We could try to solve this quickly using one of JavaScript's built-in array methods, includes(), along with modern ES6 syntax:

function removeDuplicates(a) {
  // Collect unique elements into a new, auxiliary array
  const b = [];

  for (const element of a) {
    if (!b.includes(element)) {
      b.push(element);
    }
  }

  // Return the number of unique elements
  return b.length;
}

This does give us the right answer: b ends up holding only the unique elements, and the function returns their count. However, the issue is that we allocated a second array in memory, pushed elements onto it, and left the original array in memory as well. It works, but in an enterprise-level piece of software it would take up far more memory than we need, since the array being passed in could be hundreds or thousands of elements long.
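As a quick sanity check (a sketch reusing the example array from above), the original array survives untouched alongside the auxiliary one allocated inside the function:

const input = [0, 0, 1, 1, 2, 2, 3, 3];

console.log(removeDuplicates(input)); // 4
console.log(input);                   // [0, 0, 1, 1, 2, 2, 3, 3] (untouched; a second array of unique values was also built inside the function)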

Instead, we could rewrite this to modify the original, passed-in array in place, like so:

function removeDuplicates(a) {
  if (a.length === 0) return 0; // no elements means no unique values

  // i marks the position of the last unique element written so far
  let i = 0;

  for (let j = 1; j < a.length; j++) {
    // When a new value appears, shift it forward next to the previous unique one
    if (a[i] !== a[j]) {
      i++;
      a[i] = a[j];
    }
  }

  // The first i + 1 slots of a now hold the unique values
  return i + 1;
}

In this case we've gone about the problem differently; instead of allocating a new array, pushing elements onto it, and checking its length, we overwrite duplicates in the original array in place and return the count of unique elements we've kept (i plus one, since i is a zero-based index). We get the same result, but with a smaller memory footprint.
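Here's a sketch of how this version behaves with the same input. Only the first count elements of the array are meaningful afterwards (slice() below is used purely to display that prefix):

const input = [0, 0, 1, 1, 2, 2, 3, 3];

const count = removeDuplicates(input);
console.log(count);                 // 4
console.log(input.slice(0, count)); // [0, 1, 2, 3] (the unique values now sit at the front of the original array)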

To reiterate, this may seem like an unnecessary refactoring of logic to achieve the same result, but the importance comes from understanding the fundamental computer science at play. The algorithm is now language-agnostic (it can be reproduced without relying on built-in functions) and operates with comparatively minimal storage, something that matters in a larger-scale application.

Writing in-place algorithms comes up in numerous programming interview questions for this very reason. It's important to understand not just how to get the answer, but also why considering memory footprint and runtime complexity makes a difference.


If you've come this far, thank you for reading! I'll be writing more blog posts like this one exploring Computer Science fundamentals as I learn them myself. :)
