Algorithms can be classified according to their design paradigm. Some common paradigms include:
- Exhaustive search. This is the naive method of trying every possible solution to see which is best.
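As a minimal illustration of this paradigm (the subset-sum packing problem here is an assumed example, not from the text), an exhaustive search simply enumerates every candidate and keeps the best:

```python
from itertools import combinations

def best_subset_total(weights, capacity):
    """Exhaustively try every subset of weights and return the largest
    total that does not exceed capacity -- the naive method."""
    best_total = 0
    for r in range(len(weights) + 1):
        for combo in combinations(weights, r):
            total = sum(combo)
            if best_total < total <= capacity:
                best_total = total
    return best_total

print(best_subset_total([3, 5, 8, 11], 17))  # -> 16
```

With n items this inspects all 2^n subsets, which is why exhaustive search is usually only a baseline.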
- Divide and conquer. A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem until the instances are small enough to solve easily.
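Merge sort is the textbook instance of this paradigm; a minimal sketch (illustrative, not from the original text):

```python
def merge_sort(xs):
    # Base case: instances of size 0 or 1 are small enough to solve directly.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # solve each smaller instance recursively
    right = merge_sort(xs[mid:])
    # Combine step: merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

Each level of recursion halves the instance size, giving the familiar O(n log n) running time.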
- Dynamic programming. When a problem shows optimal substructure, meaning the optimal solution to the problem can be constructed from optimal solutions to subproblems, and has overlapping subproblems, dynamic programming avoids recomputing solutions that have already been computed. Dynamic programming and memoization go together: by maintaining a table of subproblems already solved (memoization), dynamic programming reduces the exponential nature of many problems to polynomial complexity. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming.
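The Fibonacci numbers are a standard (assumed, not from the text) demonstration: naive recursion recomputes the same overlapping subproblems exponentially often, while a memoization table makes the work linear:

```python
from functools import lru_cache

# lru_cache plays the role of the memoization table: each fib(k)
# is computed once and then looked up, turning O(2^n) recursion
# into O(n) work.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # -> 832040
```

Without the decorator, fib(30) already makes over a million recursive calls; with it, 31 distinct subproblems are solved.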
- The greedy method. A greedy algorithm is similar to dynamic programming, but the difference is that solutions to the subproblems do not have to be known at each stage; instead, a "greedy" choice can be made of what looks best at the moment.
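Making change with coins is a common (assumed) illustration: at each step the algorithm commits to the largest coin that fits and never reconsiders. Note the caveat that greedy change-making is optimal for the US coin system used here but not for arbitrary coin systems, which is exactly where dynamic programming would be needed instead:

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """At each step take the largest coin that still fits -- the
    'greedy' choice -- without revisiting earlier decisions."""
    used = []
    for coin in coins:          # coins assumed sorted largest first
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

print(greedy_change(63))  # -> [25, 25, 10, 1, 1, 1]
```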
- Linear programming. When solving a problem using linear programming, specific inequalities the inputs are found and then an attempt is made to maximize or minimize some linear function of the inputs.
- Reduction. This technique involves solving a difficult problem by transforming it into a better known problem for which we have asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's.
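A small (assumed) example of this technique: the element-distinctness problem, i.e. deciding whether a list contains a duplicate, can be reduced to sorting, for which asymptotically optimal comparison-based algorithms are known:

```python
def has_duplicate(xs):
    """Reduce element distinctness to sorting: after sorting, equal
    elements must be adjacent, so one linear scan of neighboring
    pairs decides the question."""
    ys = sorted(xs)  # the better-known problem; dominates the cost at O(n log n)
    return any(a == b for a, b in zip(ys, ys[1:]))

print(has_duplicate([3, 1, 4, 1, 5]))  # -> True
```

The reducing step (the final scan) costs only O(n), so the overall complexity is dominated by the sort, as the paradigm requires.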
- Search and enumeration. Many problems, such as playing chess, can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
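Breadth-first search is a basic graph exploration algorithm of this kind; a minimal sketch (the adjacency-list representation is an assumption, not from the text):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Explore the graph level by level and return a shortest path
    from start to goal as a list of nodes, or None if unreachable."""
    frontier = deque([[start]])   # queue of partial paths
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:   # rule: never revisit a node
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs_path(g, 'a', 'd'))  # -> ['a', 'b', 'd']
```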
- Randomized algorithms are those that make some choices randomly (or pseudo-randomly); for some problems, it can in fact be proven that the fastest solutions must involve some randomness. There are two large classes of such algorithms:
  - Monte Carlo algorithms return a correct answer with high probability (e.g., RP is the subclass of these that run in polynomial time).
  - Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded (e.g., ZPP).
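The Fermat primality test is a classic Monte Carlo example (chosen here as an illustration; the function name is ours). A "composite" answer is always correct; a "probably prime" answer is correct with high probability, and repeating independent random trials drives the error down. (Carmichael numbers are a known blind spot of this simple test; Miller-Rabin fixes that.)

```python
import random

def probably_prime(n, trials=20):
    """Fermat test: if n is prime, a**(n-1) % n == 1 for every base a
    coprime to n; a base that violates this proves n composite."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False      # witness found: n is certainly composite
    return True               # no witness in any trial: probably prime

print(probably_prime(97))   # -> True
print(probably_prime(100))  # -> False
```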
- In optimization problems, heuristic algorithms do not try to find an optimal solution, but an approximate solution, for cases where the time or resources needed to find a perfect solution are impractical. Examples include local search, tabu search, and simulated annealing, a class of heuristic probabilistic algorithms that vary the solution of a problem by a random amount. The name "simulated annealing" alludes to the metallurgical term for the heating and cooling of metal to achieve freedom from defects. The purpose of the random variation is to find solutions close to the global optimum rather than merely locally optimal ones, the idea being that the random element is decreased as the algorithm settles down to a solution. Approximation algorithms are those heuristic algorithms that additionally provide some bounds on the error. Genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of "solutions"; thus, they emulate reproduction and "survival of the fittest". In genetic programming, this approach is extended by regarding the algorithm itself as a "solution" to a problem.
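A minimal simulated annealing sketch, under assumed parameters (starting temperature, cooling rate, and step size are all illustrative choices, not from the text): worsening moves are accepted with a probability that shrinks as the temperature cools, so the search can escape local optima early and settles down later.

```python
import math
import random

def anneal(f, x, temp=10.0, cooling=0.995, steps=5000):
    """Minimize f by random perturbation: always accept improvements,
    and accept worsening moves with probability exp(-delta/temp),
    which decreases as the temperature is lowered each step."""
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)   # random variation
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if f(x) < f(best):
                best = x
        temp *= cooling    # the random element is decreased over time
    return best

random.seed(0)
print(anneal(lambda x: (x - 3) ** 2, 0.0))  # settles near x = 3
```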