We are revisiting a topic we have encountered before: given a list of values, say numbers, we want to sort it so that the new list is in order (ascending, descending, etc.). This seems like a simple task - when faced with this sort of problem we would naturally reach for something like insertion or selection sort, or, for simplicity's sake, an algorithm like bubble sort, which is very easy to implement (see the sketch below). These methods, however, are very inefficient: when executed by a computer, each value may need to be compared against every other value over the course of the algorithm. That doesn't really matter for smallish lists, where our brains and computers have plenty of processing power to sort the list with no trouble at all. But sometimes we have BIG lists. Like, very big. Say n = a million, or n = ten million, or anything so big and incomprehensible that we start to play with the number by adding and knocking off zeroes here and there.
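To make the "very easy to implement" claim concrete, here is a minimal bubble sort sketch in Python (the function name and in-place style are just illustrative choices, not taken from any particular library):

```python
def bubble_sort(values):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(values)
    for i in range(n):
        swapped = False
        # After each pass the largest remaining value has "bubbled" to the end,
        # so the last i positions are already in order and can be skipped.
        for j in range(n - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
                swapped = True
        if not swapped:  # no swaps on this pass means the list is already sorted
            break
    return values

print(bubble_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

Easy to write, but notice the nested loops: in the worst case every pass compares nearly every pair of neighbours, which is exactly the inefficiency described next.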
The efficiency of a sorting algorithm is roughly measured by the number of comparisons needed to sort the list relative to the list size, in the best or worst case (some algorithms work better depending on features of the list, e.g. the number of repeated values). For something like selection sort, each time you add a value to the "sorted" part of the list it must be compared to every value in the "unsorted" part, giving it a worst-case efficiency of O(n²), so the processing time grows quadratically as n goes up. For very large lists we have better options - algorithms like quick sort and merge sort break the list up into smaller sublists based on each value's size relative to a reference value. That structure can then be exploited when searching for a larger or smaller value: larger values will be in the "larger" sublist and smaller values in the "smaller" sublist, allowing us to ignore half the remaining values each time we run a comparison, which is why searching a sorted list takes only about O(log₂ n) comparisons. The sorting itself still has to visit every value at each level of the divide-and-conquer recursion, so the average performance of these "divide and conquer" algorithms is more like O(n log n). It takes a bit of effort to wrap your head around the details, but the point is, that's a lot better than O(n²) when n is big. So when n is big, one should strive to use more efficient algorithms rather than just simple ones, because even if there are more steps (i.e. sorting a list or tree into a binary-searchable list or tree and then running binary search, rather than attempting to run a search algorithm straight away), the difference in efficiency for large n is substantial.
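As a rough illustration of the divide-and-conquer idea, here is a merge sort plus binary search sketch in Python (the function names and list-slicing style are illustrative assumptions, not a reference implementation):

```python
def merge_sort(values):
    """Sort by splitting the list in half, sorting each half, then merging."""
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Merge the two sorted halves, always taking the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def binary_search(sorted_values, target):
    """Return the index of target in a sorted list (or None), halving the range each step."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            lo = mid + 1   # target can only be in the "larger" half
        else:
            hi = mid - 1   # target can only be in the "smaller" half
    return None

data = merge_sort([42, 7, 19, 3, 88, 56, 1])
print(data)                      # [1, 3, 7, 19, 42, 56, 88]
print(binary_search(data, 56))   # 5
```

The sort splits the list about log₂(n) times and does O(n) work merging at each level, which is where the O(n log n) figure comes from; the search, by contrast, only ever looks at one half of what remains at each step, hence O(log₂ n) per lookup.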