So my title seems a bit ambiguous because of 'Big O'. I know. So let me explain Big O
notation first. When we think of how efficient something is, we would normally look
at the worst-case run time and the best-case run time. The best-case run time
is the time it takes for an algorithm to run under optimal conditions, and the
worst-case run time is the opposite, i.e. the worst/longest time it takes for
an algorithm to run. Big O describes the worst-case run time.
Big O isn't simply a number. It describes the type of worst-case run
time we expect, i.e. if we have to run
through every element in a list we would say it is in O(n); which just means it
is in Big O of n, and n tells us the run time grows linearly with the size of the input.
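To make that concrete, here is a minimal sketch of an O(n) operation: a linear search that, in the worst case, looks at every element once (the function name `contains` is just for illustration):

```python
def contains(items, target):
    """Return True if target is in items, checking elements one by one."""
    # Worst case (target absent or last): we visit all n elements, so O(n).
    for item in items:
        if item == target:
            return True
    return False

print(contains([4, 8, 15, 16, 23, 42], 23))  # True
print(contains([4, 8, 15, 16, 23, 42], 7))   # False
```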
Imagine a pile of ten cards, from Ace to ten, all of the same suit.
I then come along and shuffle them up. Think of how many ways we could put them
back into order. We have a similar concept in Python. When given an object with
elements that can be sorted, there are multiple ways to sort that
object in Python. The most common sorting algorithms are merge
sort, bubble sort, selection sort, quick sort, insertion sort and Tim sort
(Python's built-in sort). All these algorithms have different methods of
sorting, and all have particular cases in which they work best. One thing
to note is that a lot of recursion, or looking through the elements of what is
being sorted, happens when we call on sort.
Merge sort
Merge sort splits the pile in half again and again until it is down to single
cards, then merges the pieces back together in order: two sorted piles are
combined into one larger sorted pile, and once all the pieces have been merged,
we have a new, fully sorted pile of cards.
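Here is a rough sketch of that split-and-merge idea in Python (the function name `merge_sort` is my own; this is not Python's built-in sort):

```python
def merge_sort(cards):
    """Sort a list by splitting it in half and merging the sorted halves."""
    # Base case: a pile of zero or one cards is already sorted.
    if len(cards) <= 1:
        return cards
    mid = len(cards) // 2
    left = merge_sort(cards[:mid])
    right = merge_sort(cards[mid:])
    # Merge the two sorted sub-piles into one new sorted pile.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([7, 3, 10, 1, 5, 8, 2, 9, 4, 6]))  # [1, 2, ..., 10]
```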
Insertion sort
Insertion sort is good for short lists in Python. Look at our pile of cards.
What insertion sort simply does is look at each card, see where it belongs,
and place it in its correct position in a new pile.
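That card-by-card placement can be sketched like this (building a new pile, as described above; the name `insertion_sort` is my own):

```python
def insertion_sort(cards):
    """Sort by taking each card and inserting it where it belongs in a new pile."""
    sorted_pile = []
    for card in cards:
        # Walk along the sorted pile until we find where this card fits.
        pos = 0
        while pos < len(sorted_pile) and sorted_pile[pos] < card:
            pos += 1
        sorted_pile.insert(pos, card)
    return sorted_pile

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```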
Bubble sort
Bubble sort is simpler than some of the others I have explained and
will explain. You could say it is similar to insertion sort in that it compares cards
in pairs. What it does is compare the first two cards in the pile, and if the
second card is smaller in value than the first, it swaps them; if not, it
leaves them as is. It then compares the second and third cards in the
pile and swaps them if the third card is smaller than the second. This process
continues, pass after pass, until the list has been sorted.
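Those repeated pairwise swaps can be sketched like this (the name `bubble_sort` is my own):

```python
def bubble_sort(cards):
    """Sort by repeatedly swapping adjacent out-of-order cards."""
    cards = list(cards)  # work on a copy
    n = len(cards)
    for i in range(n):
        # After each pass, the largest remaining card has "bubbled" to the end,
        # so later passes can stop one card earlier.
        for j in range(n - 1 - i):
            if cards[j] > cards[j + 1]:
                cards[j], cards[j + 1] = cards[j + 1], cards[j]
    return cards

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```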
Selection sort
We will look through all the cards until we find the Ace,
i.e. the smallest card. When it is found, we just move it to the top of the
pile. Because we know that the first card is now in its correct position, we can
start with the second card of the pile, and we will look through all the remaining
cards until we find the card with two on it. That card is then moved to the second
position (or you could think of it as the card being moved to the top of the
unsorted part of the pile). This process continues until all the
cards are sorted.
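The find-the-smallest-and-move-it process can be sketched like this (the name `selection_sort` is my own):

```python
def selection_sort(cards):
    """Sort by repeatedly moving the smallest unsorted card into position."""
    cards = list(cards)  # work on a copy
    for i in range(len(cards)):
        # Look through the unsorted part of the pile for the smallest card.
        smallest = i
        for j in range(i + 1, len(cards)):
            if cards[j] < cards[smallest]:
                smallest = j
        # Move it to the top of the unsorted part of the pile.
        cards[i], cards[smallest] = cards[smallest], cards[i]
    return cards

print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```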
Quick sort
Quick sort
works similarly to a binary search tree. It picks a pivot, for example the card
five. It then compares the elements with
the pivot: all the elements that are less than five are moved before it,
and all those larger than five are moved after 5 in the pile. It then
analyses the smaller piles, i.e. all the cards before 5 are one sub-pile and all the
cards after 5 are another sub-pile. This is repeated until all the cards are
organised.
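A simple (not in-place) sketch of that pivot-and-partition idea, with the name `quick_sort` being my own:

```python
def quick_sort(cards):
    """Sort by partitioning around a pivot, then sorting each sub-pile."""
    if len(cards) <= 1:
        return cards
    pivot = cards[0]  # here we simply take the first card as the pivot
    # Everything smaller than the pivot goes before it, the rest after it.
    smaller = [c for c in cards[1:] if c < pivot]
    larger = [c for c in cards[1:] if c >= pivot]
    # Sort each sub-pile the same way, then join them around the pivot.
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

Real implementations usually partition in place and pick the pivot more carefully; this version copies the list for clarity.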
Tim sort
Tim sort is
awesome. That's first. Tim sort is an algorithm that was derived from both
merge sort and insertion sort. Python uses it to sort lists as it is the most
efficient. I am not too sure how it works, but I have attached a link for anyone
interested in finding out the mechanics behind it.
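In practice you never implement Tim sort yourself; you just use Python's built-ins, both of which are Timsort under the hood:

```python
cards = [7, 3, 10, 1, 5, 8, 2, 9, 4, 6]

# sorted() returns a new sorted list and leaves the original untouched.
print(sorted(cards))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# list.sort() sorts the list in place and returns None.
cards.sort()
print(cards)          # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```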
The table
below is something I found on Wikipedia and edited a little bit. It gives a little
insight into each algorithm's best- and worst-case run times.
| Name | Best | Worst | Method | Other notes |
| --- | --- | --- | --- | --- |
| Merge sort | n log n | n log n | Merging | Highly parallelizable (up to O(log n) using the Three Hungarians' Algorithm or, more practically, Cole's parallel merge sort) for processing large amounts of data. |
| Tim sort | n | n log n | Insertion & Merging | Makes n comparisons when the data is already sorted or reverse sorted. |
| Bubble sort | n | n² | Exchanging | Tiny code size. |
| Insertion sort | n | n² | Insertion | O(n + d), where d is the number of inversions. |
| Quick sort | n log n | n² | Partitioning | Quicksort is usually done in place with O(log n) stack space. Most implementations are unstable, as stable in-place partitioning is more complex. Naïve variants use an O(n) space array to store the partition. A quicksort variant using three-way (fat) partitioning takes O(n) comparisons when sorting an array of equal keys. |
| Selection sort | n² | n² | Selection | Stable with O(n) extra space, for example using lists. |