Algorithm Efficiency

Nora LC
4 min read · Jun 10, 2021

Big O notation — Part 1


An algorithm is a step-by-step list of instructions used to accomplish a task, and like almost everything else in life, its efficiency is measurable. It's important to know how many resources an algorithm uses: the fewer resources it needs, the more efficient it is.

Efficiency covers lots of resources, including:

  • CPU (time) usage
  • memory usage
  • disk usage
  • network usage

In computer science, big O notation, also known as Landau's symbol, is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In this blog, I am going to focus on CPU (time) usage.

As MIT's course notes put it: when we are trying to find the complexity of a function, procedure, algorithm, or program, we are not interested in the exact number of operations being performed. Instead, we are interested in the relation of the number of operations to the problem size.


What Problem(s) Does Big O Notation Solve?

Big O notation helps us answer the question, “Will it scale?”

Big O notation equips us with a shared language for discussing performance with other developers (and mathematicians!).

Main Notations

Except for logarithmic and linearithmic (covered in my next blog post), the following are the most common complexities in software engineering: constant, linear, quadratic, cubic, exponential, and factorial.


O(1) — Constant Complexity

A constant time algorithm is bounded by a value that does not depend on the size of the input. No matter the size of the input data, the running time will always be the same. It is considered the safest type of complexity.

In the following example, the function always looks up the third element. No matter how big the array is, it will execute our one line of code exactly once.

function getThirdElement(arr) {
  return arr[2]; // index 2 is always the third element
}

let foods = ['bread', 'kiwi', 'orange'];
console.log(getThirdElement(foods)); // -> 'orange'

O(n) — Linear Complexity

Linear time complexity, O(n), means an algorithm takes proportionally longer to complete as the input gets bigger: each element of the input is visited once.

Here are two examples of linear time algorithms:

  • Print all the values in a list (see the quick sketch below).
  • Find the number with the longest consecutive sequence.
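
The first one is a single pass over the input:

// Printing every value visits each element exactly once: O(n).
function printAll(arr) {
  arr.forEach((value) => console.log(value));
}

printAll(['bread', 'kiwi', 'orange']); // prints three lines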

Now let's solve the second one:

function longestSequenceNumber(arr) {
  let number;        // value of the current run
  let count = 0;     // length of the current run
  let maxNumber;     // value with the longest run so far
  let maxCount = 0;  // length of the longest run so far

  // one pass over the input: O(n)
  arr.forEach((num) => {
    if (num === number) {
      count++;       // the run continues
    } else {
      number = num;  // a new run starts
      count = 1;
    }
    if (count > maxCount) {
      maxCount = count;
      maxNumber = number;
    }
  });
  return maxNumber;
}

console.log(longestSequenceNumber([4, 4, 4, 4, 7, 7, 1, 1, 1, 1, 1, 1, 4, 4, 4]));
// -> 1

O(n²) — Quadratic Complexity

With quadratic complexity, the running time grows with the square of the input size. In other words, for each element of the input, you visit every other element.

The bubble sort algorithm is a good example of O(n²) complexity: it repeatedly steps through the array, compares each pair of adjacent elements, and swaps them if they are in the wrong order. Here is a sketch of how it works in code.

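A minimal sketch in JavaScript: the outer loop makes up to n - 1 passes, and each pass compares up to n - 1 adjacent pairs, so the work grows with n × n.

// Bubble sort: repeatedly sweep the array, swapping adjacent
// elements that are out of order. Two nested loops give O(n²).
function bubbleSort(arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]; // swap neighbors
      }
    }
  }
  return arr;
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // -> [1, 2, 4, 5, 8]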

O(n³) — Cubic Complexity

Cubic time complexity is basically what you get by adding a third nested loop to the quadratic case right above: the innermost operation now runs n × n × n times.
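
For instance, here is a toy sketch (countTriples is just an illustrative name) whose innermost statement executes n³ times:

// Three nested loops over the same input: the innermost line
// executes n * n * n times, so the running time is O(n³).
function countTriples(arr) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      for (let k = 0; k < arr.length; k++) {
        count++;
      }
    }
  }
  return count;
}

console.log(countTriples([1, 2, 3])); // -> 27, i.e. 3³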

O(2^n) & O(n!) — Exponential & Factorial Complexity

The exponential complexity, O(2^n), is when the number of steps it takes to accomplish a task is a constant raised to the power of n. Even for modest input sizes, that number quickly becomes enormous.
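
A classic example is the naive recursive Fibonacci function: each call spawns two more calls, so the total number of calls roughly doubles every time n grows by one.

// Naive recursive Fibonacci: the call tree roughly doubles in size
// with each increment of n, giving O(2^n) time.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // -> 55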

(Big O complexity chart, from jarednielsen.com)

If Big O helps us identify the worst-case scenario for our algorithms, O(n!) is the worst of the worst.

The factorial time complexity, O(n!): just like you can see in the Big O chart above, the O(n!) curve is headed towards the moon. Indeed, it's the slowest of them all. Not a good sign! We like going to the moon, but definitely not while waiting for an algorithm's steps to complete and deliver our delicate solution.

While most problems we deal with don't require factorial time complexity, there are some harder ones, often involving permutations and combinations, where it might be the only way to go.
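
For example, generating every permutation of a list cannot take fewer than n! steps, since there are n! orderings to produce. A minimal recursive sketch (the function name permutations is just for illustration):

// Generating every permutation of an array is inherently O(n!):
// there are n! possible orderings, so the work grows factorially.
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  arr.forEach((item, i) => {
    // all permutations of the remaining elements, with `item` in front
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    permutations(rest).forEach((perm) => result.push([item, ...perm]));
  });
  return result;
}

console.log(permutations([1, 2, 3]).length); // -> 6, i.e. 3!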

Conclusion

I hope that what I covered above is helpful enough to understand the concept of time complexity. My next blog post will focus on the logarithmic and linearithmic time complexities.
