Introduction to Big O Notation

“Big O notation” is a well-known concept in computer science and a must-ask topic in any technical interview round for a developer job. If you are hearing this term for the very first time, no worries: here you will understand what “Big O notation” is all about.

“Big O notation” is a general computer science concept that helps developers measure the time and memory (space) complexity of an algorithm or block of code. Measuring complexity is common practice in development for several reasons, such as:

  • To determine the efficiency of an algorithm (asymptotic analysis).
  • To decide whether an algorithm needs optimization.
  • To estimate the total execution time for a given input size.
  • To estimate the cost of that execution time.

Here, “Big O notation” plays a significant role in determining the time and memory complexity of an algorithm as a function of the number of inputs. It helps you understand how the input size affects both the amount of work a block of code performs and the space it consumes. Mathematically, Big O expresses the relationship between an algorithm's cost and the number of inputs it processes. Let's look at the different Big O classes and what they mean for your Swift programming.

O(1) - Constant Time

Probably the best possible performance. Regardless of the number of inputs, an O(1) operation always takes constant time.

For example,

let anArr = [11, 20, 32, 41, 55, 63]
anArr[1] // 20
anArr[4] // 55

In the above code, you can pick any Array element by accessing its index. Whether the array has 10 elements or 1,000, picking an element by index takes the same constant time, so the complexity is O(1). Other examples of O(1) are appending an element to an Array, or push/pop operations on a Stack, as sketched below.
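For example, a minimal Array-backed stack sketch (this Stack type is a hypothetical illustration, not a standard library API) shows why push and pop are O(1):

struct Stack<Element> {
    private var storage: [Element] = []

    // Appending at the end of an Array is O(1) (amortized).
    mutating func push(_ element: Element) {
        storage.append(element)
    }

    // Removing the last element is also O(1).
    mutating func pop() -> Element? {
        storage.popLast()
    }
}

var stack = Stack<Int>()
stack.push(42) // O(1)
stack.pop()    // O(1)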

O(log n) - Logarithmic Time

Probably the second-best performance. An O(log n) operation reduces the size of the remaining input on each cycle.

For example,

func binarySearch<T: Comparable>(_ a: [T], key: T) -> Int? {
    var lowerBound = 0
    var upperBound = a.count
    while lowerBound < upperBound {
        // Check the middle of the remaining range.
        let midIndex = lowerBound + (upperBound - lowerBound) / 2
        if a[midIndex] == key {
            return midIndex
        } else if a[midIndex] < key {
            lowerBound = midIndex + 1  // Discard the lower half.
        } else {
            upperBound = midIndex      // Discard the upper half.
        }
    }
    return nil
}

let numbers = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67]
binarySearch(numbers, key: 43) // returns 13

Consider the above Swift function, a typical binary search on a sorted Array. Every iteration of the while loop halves the remaining search range, so the whole process takes at most log2(n) steps for n inputs.

Mathematically, say we have 100 elements to binary search; the complexity would be log2(100), i.e. approximately 7 steps. Likewise, for 1,000 elements it would be about 10 steps.
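If you want to sanity-check these numbers, Foundation's log2 does the arithmetic (a quick playground check, not part of the algorithm itself):

import Foundation

log2(100.0)  // ≈ 6.64, so at most 7 halving steps
log2(1000.0) // ≈ 9.97, so about 10 steps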

O(n) - Linear Time

The most common complexity, and usually considered good performance. Here the running time grows in direct proportion to the size of the input.

For example,

for i in 1...10 {
    print(i)
}

The above for loop runs 10 times because it traverses 10 elements; for n elements it would run n times, so the complexity is O(n). Likewise, traversing an Array, inserting an element at a specific index, or doing a linear search on an Array all have complexity O(n), where n is the total number of Array elements.
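For example, a minimal linear search sketch scans each element once, so its running time grows linearly with the Array's size:

// Linear search: O(n) in the worst case.
func linearSearch<T: Equatable>(_ array: [T], key: T) -> Int? {
    for (index, element) in array.enumerated() {
        if element == key {
            return index // Found early: best case O(1).
        }
    }
    return nil // Not found: all n elements were examined.
}

linearSearch([11, 20, 32, 41, 55, 63], key: 41) // returns 3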

O(n log n) - Linearithmic Time

Slightly worse than linear, but still acceptable. Many divide-and-conquer algorithms, which split a big problem into smaller subproblems, solve them, and combine the results, have complexity O(n log n), where n is the total number of inputs.

For example,

func mergeSort(_ array: [Int]) -> [Int] {
    guard array.count > 1 else {
        return array
    }
    // Split the array in half, sort each half recursively, then merge.
    let middleIndex = array.count / 2
    let leftArray = mergeSort(Array(array[0..<middleIndex]))
    let rightArray = mergeSort(Array(array[middleIndex..<array.count]))
    return merge(leftPile: leftArray, rightPile: rightArray)
}

The above is an example of merge sort in Swift: we sort an Array by first dividing it into two parts, sorting each part recursively, and finally merging the sorted parts back together. Likewise, heap sort and quick sort have complexity O(n log n).
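The merge(leftPile:rightPile:) helper called above is not shown in the original listing; a minimal sketch of it, which combines two already-sorted Arrays in O(n), could look like this:

// Merges two already-sorted arrays into one sorted array in O(n).
func merge(leftPile: [Int], rightPile: [Int]) -> [Int] {
    var left = 0
    var right = 0
    var merged: [Int] = []
    merged.reserveCapacity(leftPile.count + rightPile.count)
    // Repeatedly take the smaller head element from either pile.
    while left < leftPile.count && right < rightPile.count {
        if leftPile[left] <= rightPile[right] {
            merged.append(leftPile[left])
            left += 1
        } else {
            merged.append(rightPile[right])
            right += 1
        }
    }
    // Append whatever remains in the non-empty pile.
    merged.append(contentsOf: leftPile[left...])
    merged.append(contentsOf: rightPile[right...])
    return merged
}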

O(n^2) - Quadratic Time

Comparatively slow in performance. The cost of such an algorithm grows with the square of the number of inputs: if the input size is 10, the work is on the order of 10 × 10 = 10^2 = 100.

For example,

var twoDimensionalArray: [[Int]] = [[0, 1], [2, 3]]
for i in 0..<twoDimensionalArray.count {
    for n in 0..<twoDimensionalArray[i].count {
        print(twoDimensionalArray[i][n])
    }
}

The above is an example of traversing a 2D Array. Because it contains two nested loops, the total number of iterations is n × n. Similarly, bubble sort, insertion sort, and selection sort have complexity O(n^2), where n is the total number of elements.

The exponent grows with the number of nested loops. For example, matrix multiplication needs 3 nested loops, so its complexity is O(n^3); likewise, 5 nested loops give a complexity of O(n^5), and so on.
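For example, a sketch of naive matrix multiplication makes the three nested loops, and hence the O(n^3) cost, visible:

// Naive n×n matrix multiplication: three nested loops, O(n^3).
func multiply(_ a: [[Int]], _ b: [[Int]]) -> [[Int]] {
    let n = a.count
    var result = Array(repeating: Array(repeating: 0, count: n), count: n)
    for i in 0..<n {         // n iterations
        for j in 0..<n {     // × n iterations
            for k in 0..<n { // × n iterations = n^3 in total
                result[i][j] += a[i][k] * b[k][j]
            }
        }
    }
    return result
}

multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]) // [[19, 22], [43, 50]]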

O(2^n) - Exponential Time

Poor in performance; it is recommended to avoid these kinds of algorithms. Algorithms with O(2^n) complexity are often recursive algorithms that solve a problem of size n by recursively solving two smaller problems of size n - 1.

func solveHanoi(n: Int, from: String, to: String, spare: String) {
    guard n >= 1 else {
        return
    }
    // Move n - 1 disks out of the way, move the largest disk,
    // then move the n - 1 disks back on top: two recursive calls per level.
    solveHanoi(n: n - 1, from: from, to: spare, spare: to)
    print("Move disk \(n) from \(from) to \(to)")
    solveHanoi(n: n - 1, from: spare, to: to, spare: from)
}

The above Swift code prints the steps needed to solve the famous "Towers of Hanoi" problem for n disks. As the number of disks n grows, the number of moves grows exponentially, O(2^n). A naive recursive Fibonacci is another example, as sketched below.
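For example, a naive recursive Fibonacci sketch spawns two more calls at every step, so the amount of work roughly doubles with each increase of n:

// Naive recursive Fibonacci: each call spawns two more, O(2^n) overall.
func fibonacci(_ n: Int) -> Int {
    guard n > 1 else { return n }
    return fibonacci(n - 1) + fibonacci(n - 2)
}

fibonacci(10) // 55, but it already takes 177 calls to get there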

O(n!) - Factorial Time

The slowest in performance; avoid such algorithms whenever you can. The cost explodes with the number of inputs, since it grows as the factorial of the input size. The classic example is generating all orderings (permutations) of n items: there are n! of them, as sketched below.
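For example, a sketch that generates every ordering of an Array makes the factorial growth visible:

// Generates all n! orderings of the input: O(n!) time and output size.
func permutations<T>(_ items: [T]) -> [[T]] {
    guard items.count > 1 else { return [items] }
    var result: [[T]] = []
    for (index, item) in items.enumerated() {
        var rest = items
        rest.remove(at: index)
        // Prepend the chosen item to every permutation of the rest.
        for sub in permutations(rest) {
            result.append([item] + sub)
        }
    }
    return result
}

permutations([1, 2, 3]).count // 6, i.e. 3!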

Usually you don't need formal mathematical analysis to figure out the Big-O of an algorithm; you can simply use your intuition. If your code uses a single loop that looks at all n elements of your input, the algorithm is O(n). If the code has two nested loops, it is O(n^2). Three nested loops give O(n^3), and so on.

Remember, Big-O notation gives an estimate and is only really useful for large values of n. For example, the worst-case running time for the insertion sort algorithm is O(n^2). In theory, that is worse than the running time for merge sort, which is O(n log n). But for small amounts of data, insertion sort is often faster, especially if the array is already partially sorted. Here, the developer's experience helps in choosing the better algorithm.
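A sketch of insertion sort illustrates the point: its worst case is O(n^2), but on (nearly) sorted input the inner loop rarely runs and the behaviour approaches O(n):

// Insertion sort: O(n^2) worst case, close to O(n) on nearly sorted input.
func insertionSort(_ array: [Int]) -> [Int] {
    guard array.count > 1 else { return array }
    var a = array
    for x in 1..<a.count {
        var y = x
        // Shift the current element left until it is in place;
        // on sorted input this loop body almost never executes.
        while y > 0 && a[y] < a[y - 1] {
            a.swapAt(y, y - 1)
            y -= 1
        }
    }
    return a
}

insertionSort([10, 1, 31, 9]) // [1, 9, 10, 31]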

The following graph gives a better idea of an algorithm's performance under “Big O notation”:

[Graph: growth of O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), and O(n!) against input size]

Problem Statement

Say we have a requirement to sort a numeric Array. The Array may look like this:

let list = [10, 1, 31, 9, 21, 27, 8, 50, 15, 35]

Developers may pick any of the sorting algorithms available in data-structures theory. Assume a developer picks the selection sort algorithm for this job. Selection sort in Swift may look like this:

func selectionSort(_ array: [Int]) -> [Int] {
    guard array.count > 1 else {
        return array
    }
    var a = array
    for x in 0..<a.count - 1 {
        // Find the lowest element in the unsorted remainder...
        var lowest = x
        for y in x + 1..<a.count {
            if a[y] < a[lowest] {
                lowest = y
            }
        }
        // ...and swap it into position x.
        if x != lowest {
            a.swapAt(x, lowest)
        }
    }
    return a
}
selectionSort(list) // O(n^2)

If we closely observe the flow of this selection sort, we find that:

  1. First, it finds the lowest element in the entire Array list and places it at the beginning of the Array.
  2. From the second element onwards, it again searches for the lowest remaining element and places it at the second position of the Array.
  3. Step 2 continues until the end of the Array.

So, for selection sort, the Best-Case complexity is O(n^2) and the Worst-Case complexity is also O(n^2). That means selection sort is slow in performance.

Solutions

You can pick a better sorting algorithm, such as heap sort or quick sort. In this example, let's look into quick sort to get better performance than the selection sort above.

If we rewrite the sort for quick sort, it may look like this:

func quicksort<T: Comparable>(_ a: [T]) -> [T] {
    guard a.count > 1 else {
        return a
    }
    // Partition around the middle element (the "pivot") using filter.
    let pivot = a[a.count / 2]
    let less = a.filter { $0 < pivot }
    let equal = a.filter { $0 == pivot }
    let greater = a.filter { $0 > pivot }
    // Recursively sort the outer parts and glue everything together.
    return quicksort(less) + equal + quicksort(greater)
}
quicksort(list) // O(n log n) on average

The above algorithm works on the divide-and-conquer approach. quicksort() is generic over Comparable so that the <, ==, and > comparisons work for any comparable element type, and the partitioning is done with Swift's functional filter.

  1. Based on a central element, the “pivot”, the Array list is sliced into three parts:
    • Any element less than the pivot goes into the less part.
    • Any element greater than the pivot goes into the greater part.
    • Any element equal to the pivot goes into the equal part.
  2. The less and greater parts are then recursively sorted.
  3. Once the less and greater parts are sorted, they are glued together with the equal part to form the complete sorted Array.

So, for quick sort, the Best-Case and Average-Case complexity is O(n log n); the Worst-Case, with consistently unlucky pivots, degrades to O(n^2), though this is rare in practice.

Conclusion

Usually, “Big O notation” tells a developer two things about an algorithm: how fast its runtime grows for arbitrarily large input sizes, and what the worst-case scenario looks like. As a developer, you must consider the worst case to decide whether an algorithm performs well across all scenarios.

As we saw here, quick sort is better than bubble, selection, or insertion sort. But that doesn't mean we should ignore those simpler sorts completely; as noted above, insertion sort can beat the more sophisticated algorithms on small or partially sorted arrays.

For example, if you do a linear search for an element in an Array:

  • If the element is the 1st element of the Array, the search takes one step: the best case, O(1).
  • If the element is somewhere in the middle, the search looks at about n/2 elements, which is still O(n).
  • If the element is the last one (or absent), the search examines all n elements: the worst case, O(n).

So, as iPhone application programmers, we have to consider the worst case, O(n) in this scenario, and then decide whether the algorithm is good enough. We should all put “Big O notation” into practice for better performance.
