def merge(L1, L2):
    '''Return the sorted version of L1 + L2

    Arguments:
    L1, L2 -- lists of floats
    '''
    if len(L1) == 0:
        return L2[:]
    if len(L2) == 0:
        return L1[:]
    if L1[0] < L2[0]:
        return [L1[0]] + merge(L1[1:], L2)
    else:
        return [L2[0]] + merge(L1, L2[1:])
What's the complexity here? Let's start with the call tree for a worst case. Here's an example: with L1 = [1, 2, 3, 4, 5] and L2 = [6], we keep calling merge on ever-smaller slices of L1, since we use up all of 1, 2, 3, 4, 5 before we ever take anything from L2.
Let's keep track of the sizes of L1 and L2.
(0, 1)
.
.
.
(n-3, 1)
|
(n-2, 1)
|
(n-1, 1)
So we have n calls in total. However, the calls don't all take the same amount of time. That's because the slice L1[1:], which involves copying all but one of the elements of L1, takes longer when L1 is larger.
(Likewise, the list addition takes longer for larger lists, but we'll ignore it in what follows -- the + in [L1[0]] + merge(L1[1:], L2) takes time proportional to the time that L1[1:] takes, since both involve creating new lists of approximately the same size, so including it would only change the constant factor.)
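To see this concretely, here is a small instrumented sketch. The name merge_counted and the list copies_per_call are introduced here just for illustration, not part of the original function; the sketch records how many elements each call copies when it slices, and the length of that list also confirms the number of calls that do any slicing.

def merge_counted(L1, L2, copies_per_call):
    '''Like merge, but append to copies_per_call the number of elements
    copied by the slice in each recursive call.'''
    if len(L1) == 0:
        return L2[:]
    if len(L2) == 0:
        return L1[:]
    if L1[0] < L2[0]:
        copies_per_call.append(len(L1) - 1)   # cost of the slice L1[1:]
        return [L1[0]] + merge_counted(L1[1:], L2, copies_per_call)
    else:
        copies_per_call.append(len(L2) - 1)   # cost of the slice L2[1:]
        return [L2[0]] + merge_counted(L1, L2[1:], copies_per_call)

copies = []
merge_counted([1, 2, 3, 4, 5], [6], copies)
print(copies)   # [4, 3, 2, 1, 0] -- the largest call copies 4 elements,
                # and the final base-case call does no slicing at all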
We can now keep track of how long each call takes, counting only the slicing cost: L1[1:] takes k(len(L1)-1) time for some constant k.
time
(0, 1) 0
. .
. .
. .
(n-3, 1) k(n-4)
|
(n-2, 1) k(n-3)
|
(n-1, 1) k(n-2)
So the total is $k\sum_{i=1}^{n-2} i = k\frac{(n-2)(n-1)}{2}$, which is $\mathcal{O}(n^2)$.
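As a rough empirical check (a sketch, not a careful benchmark -- the exact numbers depend on your machine), doubling n should roughly quadruple the running time. The recursion limit has to be raised first, because merge makes one recursive call per element:

import sys
import time

sys.setrecursionlimit(10000)   # merge recurses once per element

for n in [1000, 2000, 4000]:
    L1 = [float(i) for i in range(n - 1)]   # worst case: all of L1 is used up first
    L2 = [float(n)]
    start = time.perf_counter()
    merge(L1, L2)
    print(n, time.perf_counter() - start)   # times should grow roughly 4x per doubling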