Sunday, March 30, 2014

Week 11 (Sorting and Efficiency)

Sorting is one of the most important topics we've come across in CSC148. We've recently been looking at several sorting algorithms, most notably selection sort, quick sort, and merge sort (sketches of all three appear after the list below).

  • Selection sort, for each position in the list, finds the minimum value in the remaining unsorted portion and swaps it into that position.
  • Quick sort chooses a “pivot” (here, the first element of the list) and partitions the rest of the unsorted list into two parts: the elements smaller than the pivot and the elements larger than it. The pivot is now in its final sorted position, and the same process is applied recursively to each of the two parts.
  • Merge sort, on the other hand, repeatedly halves the unsorted list, recursively sorts those halves, and then merges the sorted halves back together to produce a fully sorted list.
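
To make the descriptions above concrete, here are minimal Python sketches of my own (the function names like `selection_sort` and `_merge` are just illustrative choices, not the exact code from lecture):

    def selection_sort(lst):
        """Sort lst in place by repeatedly selecting the minimum of the unsorted part."""
        for i in range(len(lst)):
            # Index of the smallest element in lst[i:]
            min_index = min(range(i, len(lst)), key=lambda j: lst[j])
            lst[i], lst[min_index] = lst[min_index], lst[i]


    def quick_sort(lst):
        """Return a new sorted list, using the first element as the pivot."""
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quick_sort(smaller) + [pivot] + quick_sort(larger)


    def merge_sort(lst):
        """Return a new sorted list by halving, sorting the halves, and merging."""
        if len(lst) <= 1:
            return lst
        mid = len(lst) // 2
        return _merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))


    def _merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        return result + left[i:] + right[j:]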

In class, I came to learn that the efficiency of an algorithm can essentially be modelled as a function of its input size. Big-O notation expresses an upper bound on how the running time grows in the worst case: we keep only the fastest-growing term of that function and drop the constant factors and slower-growing terms.
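
As a small illustration (a hypothetical example of my own, not one from lecture), counting the basic steps of a function with a doubly nested loop shows why only the fastest-growing term matters:

    def count_pairs(lst):
        """Count ordered pairs (i, j) with i < j -- roughly n*(n-1)/2 steps."""
        count = 0
        for i in range(len(lst)):
            for j in range(i + 1, len(lst)):
                count += 1
        return count

    # For a list of size n, the inner body runs n*(n-1)/2 = 0.5*n**2 - 0.5*n times.
    # Dropping the constant factor and the slower-growing 0.5*n term leaves O(n**2).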

That's all for the last slog! LoL!


Sunday, March 2, 2014

Week 7 (Recursion)

Recursion is a problem-solving technique in which a function calls itself. A recursive function has two parts. The first is the base case: the smallest instance of the problem, which can be solved directly and usually just returns a value. The second part is the recursive case, where the problem is broken down into one or more smaller instances and the function calls itself on those smaller pieces, which is why it is called recursion. This helps us solve many problems. However, one cost of recursion is memory usage: each time the function calls itself, the variables of the previous call have to be kept on the call stack. By the time the function reaches the base case, the stack can be huge, which slows down the program. For example, A1.
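
To make the two parts concrete, here is a minimal recursive sketch of my own (not the A1 code): summing the numbers in a possibly nested list.

    def nested_sum(obj):
        """Return the sum of all numbers in obj, where obj is either a number
        or a (possibly nested) list of numbers."""
        if isinstance(obj, (int, float)):
            # Base case: a single number is its own sum.
            return obj
        # Recursive case: break the list into its elements and sum each part.
        return sum(nested_sum(x) for x in obj)

    # Example: nested_sum([1, [2, 3], [[4], 5]]) returns 15.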