
Showing posts from September, 2009

Recursion in Computer Science

Recursion is a technique for defining a thing in terms of the thing being defined. For example, we can define an expression as either a number, or a concatenation, in order, of an expression, an operator, and another expression. This definition of an expression is recursive because it uses expressions to define expressions. In Computer Science, "divide and conquer" is a very general strategy for solving problems. The idea of this strategy is to simplify a problem into smaller instances of the same problem. For example, suppose we want to sort the books in a book store. We can partition the books to be sorted into several small groups; for each group, we can partition the books again, and so on. Once each small group of books is sorted, we can combine the sorted groups together, and eventually the whole lot of books will be sorted. Recursion can be used to write sorting algorithms based on the divide-and-conquer strategy. Indeed, the Quick Sort algorithm…
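As a sketch of this divide-and-conquer idea, here is a minimal recursive Quick Sort in Python. The function name and the list-based partitioning are my own illustrative choices, not taken from the original post:

```python
def quicksort(items):
    # Base case: a list of 0 or 1 books is already sorted.
    if len(items) <= 1:
        return items
    # Divide: split the remaining items around a pivot.
    pivot = items[0]
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    # Conquer each part recursively, then combine the sorted parts.
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

Each recursive call works on a strictly smaller list, so the recursion eventually reaches the trivial base case, just as the book groups eventually become small enough to sort directly.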

Study with Open Mind

When studying mathematical proof techniques, I found that it is important to study with an open mind. To prove a statement, it is important first to formulate the problem clearly, then to collect the relevant facts, and then to follow logical reasoning. For example, suppose we want to prove that log(n!) is O(nlogn). The problem is clear. The fact we can use, in addition to the problem formulation, is the definition of Big-O: a function f(n) is O(g(n)) if there exist constants c > 0 (a real number) and n_0 > 0 (an integer) such that f(n) <= c*g(n) whenever n >= n_0. So the goal during our reasoning should be to find such c and n_0. Here is the sequence:

log(n!) = log(1 * 2 * ... * n) = log1 + ... + logn <= logn + ... + logn = nlogn

Therefore, with c = 1 and n_0 = 1, we have log(n!) <= nlogn whenever n >= 1, so by the definition of Big-O, log(n!) = O(nlogn).
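The inequality behind this proof can also be checked numerically, which is a nice sanity check on the reasoning. This is only an illustrative sketch (the helper name is mine), using Python's standard math module:

```python
import math

def log_factorial(n):
    # log(n!) computed as log(1) + log(2) + ... + log(n),
    # avoiding the huge intermediate value of n! itself.
    return sum(math.log(k) for k in range(1, n + 1))

# Witness for log(n!) = O(n log n) with c = 1 and n_0 = 1:
# log(n!) <= n * log(n) holds for every n >= 1.
for n in range(1, 1000):
    assert log_factorial(n) <= n * math.log(n)
```

Of course, a finite check is no substitute for the proof; it only confirms the bound on the values tested.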

Why Asymptotic Notations and Analysis?

When developing code for a software application, we frequently need to estimate how the application code consumes resources such as processor time and memory, and whether the code scales well. It turns out asymptotic notations and analysis can help answer these questions. The asymptotic notations include Big-O, little-o, Big-Omega, little-omega, and Big-Theta. Most algorithms handle input whose size affects the time and memory needed to execute them. If we use a function f(n), where n stands for the size of the input, to denote the cost of executing an algorithm, then f(n) is a non-decreasing function, which grows as the input n does. Usually, the larger the input n, the more time or memory an algorithm uses. Asymptotic notations are usually discussed in theoretical CS courses. For a cost function f(n), Big-O notation is used to describe an upper bound on the cost of an algorithm; little-o notation describes a strict (non-tight) upper bound; Big-Omega…
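To get a feel for how such cost functions grow with the input size n, here is a small illustrative sketch. The particular complexity classes and sample sizes are my own choices for demonstration:

```python
import math

def costs(n):
    # Representative cost functions f(n) for common complexity classes.
    return {
        "O(1)": 1,
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
    }

# As n grows, the gap between the classes widens dramatically,
# which is exactly what asymptotic analysis captures.
for n in (10, 100, 1000):
    print(n, costs(n))
```

Even at n = 1000, the O(n^2) cost is already a thousand times the O(n) cost; asymptotic notation lets us state this kind of scaling behavior without measuring any particular machine.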

Abstraction in Mathematics and Computer Science

Abstraction is a purposeful process of focusing on essentials, the observable behavior, while ignoring irrelevant details. Abstraction is used frequently in both mathematics and computer science. In computer science, abstraction has a complementary process called encapsulation. Encapsulation applies information hiding to conceal secrets so that only the observable behavior is exposed. For example, suppose we want to build a virtual pizza store in which a chef is responsible for making pizzas. A customer orders a cheese pizza, the order is delivered to the chef, and the chef cooks the pizza to fulfill the order. Neither the customer nor the waiter needs to know what actions the chef takes to make the cheese pizza; the chef hides the 'secrets' of making it. Abstraction and encapsulation work together to build the automated agents in the pizza store. In mathematics, abstraction is important for producing simple, elegant proofs. Assuming that we want to prove…
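The pizza-store example can be sketched in a few lines of Python. The class and method names here are hypothetical illustrations of mine, using Python's leading-underscore convention to mark the chef's encapsulated details:

```python
class Chef:
    """Abstraction: a chef's observable behavior is just 'order in, pizza out'."""

    def make_pizza(self, kind):
        # Public behavior: this is all a customer or waiter ever sees.
        dough = self._prepare_dough()
        return f"{kind} pizza ({dough})"

    def _prepare_dough(self):
        # Encapsulation: an internal 'secret' of the chef, hidden from
        # customers, who never call this method directly.
        return "hand-tossed dough"

order = Chef().make_pizza("cheese")
```

The customer interacts only with make_pizza; the chef is free to change _prepare_dough at any time without affecting anyone who relies on the observable behavior.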

Write Interesting Computing Books

I was once asked, "Can I find a data structure book written for dummies?" I knew the boy was joking, but I think it is a good question. When we read a book, we are supposed to interact with it as we would with a friend: we ask questions, and the friend answers and guides our exploration. The unfortunate thing is that so many books are dry to read, like hanging out with very boring friends; sometimes the friend is even arrogant and monotonous. I personally think this is a big problem in Computer Science as well as Mathematics. So many terrific things are happening in the computing fields, and yet it is hard to find exciting textbooks for students to enjoy.