SELECTION SORT

This type of sorting is called "selection sort" because it works by repeatedly selecting the smallest remaining element. It works as follows: first find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.

SELECTION_SORT (A)

for i ← 1 to n − 1 do
    min_j ← i
    min_x ← A[i]
    for j ← i + 1 to n do
        if A[j] < min_x then
            min_j ← j
            min_x ← A[j]
    A[min_j] ← A[i]
    A[i] ← min_x

Selection sort is among the simplest of sorting techniques, and it works very well for small files. Furthermore, despite its evidently naive approach, selection sort has one quite important property: each item is actually moved at most once, so selection sort is a method of choice for sorting files with very large objects (records) and small keys.

The worst case occurs if the array is already sorted in descending order. Nonetheless, the time required by the selection sort algorithm is not very sensitive to the original order of the array to be sorted: the test "if A[j] < min_x" is executed exactly the same number of times in every case; only the number of times the assignments "min_j ← j; min_x ← A[j]" are executed depends on the input.

Selection sort spends most of its time trying to find the minimum element in the "unsorted" part of the array. This clearly shows the similarity between selection sort and bubble sort: bubble sort "selects" the maximum remaining element at each stage, but wastes some effort imparting some order to the "unsorted" part of the array. Selection sort is quadratic in both the worst and the average case, and requires no extra memory.

For each i from 1 to n − 1, there is one exchange and n − i comparisons, so there is a total of n − 1 exchanges and (n − 1) + (n − 2) + . . . + 2 + 1 = n(n − 1)/2 comparisons. These observations hold no matter what the input data is. The number of times the minimum is updated is at most quadratic in the worst case, but O(n log n) on average. This implies that the running time of selection sort is quite insensitive to the input.
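The comparison count above can be checked empirically. Below is a minimal sketch of an instrumented variant in C; the function name `selectionSortCount` is our own, not from the source:

```c
/* Selection sort that also counts comparisons; returns the count.
   For any input of length n it performs exactly n(n-1)/2 comparisons. */
long selectionSortCount(int a[], int n)
{
    long comparisons = 0;
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++) {
            comparisons++;            /* one comparison per inner iteration */
            if (a[j] < a[min])
                min = j;
        }
        int temp = a[i];              /* at most one exchange per outer iteration */
        a[i] = a[min];
        a[min] = temp;
    }
    return comparisons;
}
```

Running this on any 9-element array, sorted or not, reports 9 · 8 / 2 = 36 comparisons, confirming that the count does not depend on the input order.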

Implementation

void selectionSort(int numbers[], int array_size)
{
  int i, j;
  int min, temp;

  for (i = 0; i < array_size - 1; i++)
  {
    min = i;
    for (j = i + 1; j < array_size; j++)
    {
      if (numbers[j] < numbers[min])
        min = j;
    }
    temp = numbers[i];
    numbers[i] = numbers[min];
    numbers[min] = temp;
  }
}
http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Sorting/selectionSort.htm

INSERTION SORT

If the first few objects are already sorted, an unsorted object can be inserted into the sorted set in its proper place. This is called insertion sort. The algorithm considers the elements one at a time, inserting each into its suitable place among those already considered (keeping them sorted). Insertion sort is an example of an incremental algorithm; it builds the sorted sequence one number at a time. This is perhaps the simplest example of the incremental insertion technique, where we build up a complicated structure on n items by first building it on n − 1 items and then making the necessary changes to add the last item. The given sequences are typically stored in arrays. We also refer to the numbers as keys. Along with each key may be additional information, known as satellite data. [Note that "satellite data" does not necessarily come from a satellite!]

Algorithm: Insertion Sort

It works the way you might sort a hand of playing cards:

  1. We start with an empty left hand [sorted array] and the cards face down on the table [unsorted array].

  2. Then remove one card [key] at a time from the table [unsorted array], and insert it into the correct position in the left hand [sorted array].

  3. To find the correct position for the card, we compare it with each of the cards already in the hand, from right to left.

Note that at all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.

Pseudocode

We use a procedure INSERTION_SORT. It takes as parameters an array A[1 . . n] and the length n of the array. The array A is sorted in place: the numbers are rearranged within the array, with at most a constant number of them stored outside the array at any time.

INSERTION_SORT (A)

1. FOR j ← 2 TO length[A]
2. DO key ← A[j]
3. {Put A[j] into the sorted sequence A[1 . . j − 1]}
4. i ← j − 1
5. WHILE i > 0 and A[i] > key
6. DO A[i + 1] ← A[i]
7. i ← i − 1
8. A[i + 1] ← key

Example: Following figure (from CLRS) shows the operation of INSERTION-SORT on the array A= (5, 2, 4, 6, 1, 3). Each part shows what happens for a particular iteration with the value of j indicated. j indexes the "current card" being inserted into the hand.

[Figure: the operation of INSERTION-SORT on A = (5, 2, 4, 6, 1, 3); image source: http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Sorting/Gifs/insertion-fig2-2.gif]

Read the figure row by row. Elements to the left of A[j] that are greater than A[j] move one position to the right, and A[j] moves into the evacuated position.

Implementation

void insertionSort(int numbers[], int array_size)
{
  int i, j, index;

  for (i = 1; i < array_size; i++)
  {
    index = numbers[i];
    j = i;
    while ((j > 0) && (numbers[j - 1] > index))
    {
      numbers[j] = numbers[j - 1];
      j = j - 1;
    }
    numbers[j] = index;
  }
}

http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Sorting/insertionSort.htm
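The routine above does almost no work on input that is already sorted: the while loop body never runs, so the running time is linear. A sketch that makes this visible by counting element shifts (the helper name `insertionShifts` is our own):

```c
/* Insertion sort instrumented to count element shifts. On already-sorted
   input the inner while loop never executes, so the count is 0; on a
   reversed array every pair is shifted, giving n(n-1)/2 shifts. */
long insertionShifts(int numbers[], int n)
{
    long shifts = 0;
    for (int i = 1; i < n; i++) {
        int index = numbers[i];       /* the key being inserted */
        int j = i;
        while (j > 0 && numbers[j - 1] > index) {
            numbers[j] = numbers[j - 1];  /* shift one slot right */
            j = j - 1;
            shifts++;
        }
        numbers[j] = index;
    }
    return shifts;
}
```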

BUBBLE SORT

Bubble sort is a simple and well-known sorting algorithm. It is rarely used in practice; its main application is as an introduction to sorting algorithms. Bubble sort belongs to the O(n²) sorting algorithms, which makes it quite inefficient for sorting large data volumes. Bubble sort is stable and adaptive.

Algorithm

  1. Compare each pair of adjacent elements from the beginning of an array and, if they are in reversed order, swap them.
  2. If at least one swap has been done, repeat step 1.

You can imagine that on every pass the big bubbles float to the surface and stay there. At the pass when no bubble moves, sorting stops. Let us see an example of sorting an array to make the idea of bubble sort clearer.

Example. Sort {5, 1, 12, -5, 16} using bubble sort.

Bubble sort example

Complexity analysis

The average and worst case complexity of bubble sort is O(n²). It also makes O(n²) swaps in the worst case. Bubble sort is adaptive, meaning that for an almost-sorted array it runs in O(n) time. Avoid implementations that do not check on every pass whether any swaps were made; this check is necessary to preserve the adaptive property.

Turtles and rabbits

One more problem of bubble sort is that its running time depends heavily on the initial order of the elements. Big elements (rabbits) go up fast, while small ones (turtles) go down very slowly. This problem is solved in the cocktail sort.

Turtle example. Though the array {2, 3, 4, 5, 1} is almost sorted, it takes O(n²) iterations to sort. Element {1} is a turtle.

Turtle example

Rabbit example. The array {6, 1, 2, 3, 4, 5} is almost sorted too, but it takes only O(n) iterations to sort. Element {6} is a rabbit. This example demonstrates the adaptive property of bubble sort.

Rabbit example
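The turtle and rabbit examples can be checked with a pass-counting variant of bubble sort (the helper name `bubblePasses` is ours):

```c
/* Bubble sort that returns the number of passes over the data.
   A single trailing "turtle" forces close to n passes, while a
   leading "rabbit" settles after a couple of passes. */
int bubblePasses(int arr[], int n)
{
    int passes = 0;
    int swapped = 1;
    int j = 0;
    while (swapped) {
        swapped = 0;
        j++;
        passes++;
        for (int i = 0; i < n - j; i++) {
            if (arr[i] > arr[i + 1]) {
                int tmp = arr[i];     /* swap the out-of-order pair */
                arr[i] = arr[i + 1];
                arr[i + 1] = tmp;
                swapped = 1;
            }
        }
    }
    return passes;
}
```

On the turtle array {2, 3, 4, 5, 1} the element 1 moves left only one slot per pass, so all 5 passes are needed; on the rabbit array {6, 1, 2, 3, 4, 5} the 6 bubbles to the end in the first pass and the second pass only confirms the order.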

Code snippets

There are several ways to implement the bubble sort. Notice that the "swapped" check is absolutely necessary in order to preserve the adaptive property.

C++

void bubbleSort(int arr[], int n) {
  bool swapped = true;
  int j = 0;
  int tmp;
  while (swapped) {
    swapped = false;
    j++;
    for (int i = 0; i < n - j; i++) {
      if (arr[i] > arr[i + 1]) {
        tmp = arr[i];
        arr[i] = arr[i + 1];
        arr[i + 1] = tmp;
        swapped = true;
      }
    }
  }
}





SHELL SORT

Invented in 1959 by Donald Shell, the shell sort is a relatively fast algorithm and is easy to code. It attempts to roughly sort the data first, moving large elements towards one end and small elements towards the other. It performs several passes over the data, each finer than the last. After the final pass, the data is fully sorted.

It is important to note that the shell sort algorithm does not actually sort the data itself; it increases the efficiency of other sorting algorithms. By roughly grouping the data, it reduces the number of times the data elements need to be rearranged in the sort process. Usually an insertion or bubble sort is used to arrange the data at each step, but other algorithms can be used. Shell sort does not noticeably benefit the faster algorithms (such as merge sort and quicksort), and in some cases will actually reduce their performance.

Sort routines that rely on directly swapping elements are most effectively combined with a shell sort. Shell sort quickly arranges data by sorting every nth element, where n can be any number less than half the number of data elements. Once the initial sort is performed, n is reduced, and the data is sorted again, until n equals 1. It is vital that the data is finally sorted with n = 1, otherwise there may be out-of-order elements remaining.

Selecting good values for n

Choosing n is not as difficult as it might seem. The only sequence you have to avoid is one constructed from the powers of 2. Do not choose (for example) 16 as your first n and then keep dividing by 2 until you reach 1. It has been mathematically proven that using only numbers from the power series {1, 2, 4, 8, 16, 32, ...} produces the worst sorting times. The fastest times are (on average) obtained by choosing an initial n somewhere close to the maximum allowed and continually dividing by 2.2 until you reach 1 or less. Remember to always sort the data with n = 1 as the last step.
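The gap scheme just described can be sketched in C. The worked example below uses a selection sort at each step; the sketch here uses the more common gapped insertion sort as the inner routine, and the function name and details are our own choices:

```c
/* Shell sort with the gap sequence described above: start near n/2,
   divide by 2.2 each pass, and always finish with a gap of 1. */
void shellSort(int a[], int n)
{
    int gap = n / 2;
    while (gap >= 1) {
        /* Gapped insertion sort: every gap-th element forms a subsequence
           that is sorted independently of the others. */
        for (int i = gap; i < n; i++) {
            int value = a[i];
            int j = i;
            while (j >= gap && a[j - gap] > value) {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = value;
        }
        if (gap == 1)
            break;                    /* the final gap-1 pass has been done */
        gap = (int)(gap / 2.2);       /* shrink the gap by the 2.2 factor */
        if (gap < 1)
            gap = 1;                  /* never skip the gap = 1 pass */
    }
}
```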


Example:

The data in the table needs to be sorted into ascending order. For simplicity, we have chosen the sequence {3, 2, 1} for our n. Elements in the table that have a white background are being ignored in that step. Elements with a blue background are the ones we are interested in. The top line of each table shows the data before we performed that step, the bottom shows the data afterwards.

In this example, we will assume there is a selection sort being used to do the actual sorting.

    The unsorted data
    8 4 1 5 7 6 9 3 2

  1. As mentioned above, we will be using an initial value of 3 for n. This is less than the maximum (which in this case is 4, because this is the largest number less than half the number of elements we have).

    We pretend that the only elements in the data set are elements containing 5, 8 and 9 (highlighted blue). Notice that this is every 3rd element (n is 3). After sorting these, we look at the elements to the right of the ones we just looked at. Repeat the sort and shift routine until all of the elements have been looked at once. So if n = 3, you will need to repeat this step 3 times to sort every element. Note that only the highlighted elements are ever changed; the ones with a white background are totally ignored.


    Put the 8, 5 and 9 into ascending order (ignoring the other elements)
    8 4 1 5 7 6 9 3 2
    5 4 1 8 7 6 9 3 2

    Do the same for 4, 7 and 3
    5 4 1 8 7 6 9 3 2
    5 3 1 8 4 6 9 7 2

    As well as 1, 6 and 2...
    5 3 1 8 4 6 9 7 2
    5 3 1 8 4 2 9 7 6

  2. Now that all of the elements have been sorted once with n = 3, we repeat the process with the next value of n (2). If you look carefully, there is a general congregation of large numbers at the right hand side, and the smaller numbers are at the left. There are still quite a few misplaced numbers (most notably 8, 2, 5 and 6), but it is better sorted than it was.


    Place the odd elements into ascending order
    5 3 1 8 4 2 9 7 6
    1 3 4 8 5 2 6 7 9

    And the same for the even elements...
    1 3 4 8 5 2 6 7 9
    1 2 4 3 5 7 6 8 9

  3. You can see now that the data is almost completely sorted - after just 2 more steps! All that remains is to sort it again with n = 1 to fix up any elements that are still out of order. When n = 1, we are just performing a normal sort, making sure every element in the dataset is in its correct place.

    You may wonder why we don't just skip to this step. Doing that would work; however, the selection and bubble sorts are fastest when the data is already sorted (or close to it). The shell sort method orders the data in fewer steps than would be required for either of the above methods alone.


    The few elements which are still out of order are fixed.
    1 2 4 3 5 7 6 8 9
    1 2 3 4 5 6 7 8 9

  4. All sorted!
    1 2 3 4 5 6 7 8 9
    http://goanna.cs.rmit.edu.au/~stbird/Tutorials/ShellSort.html

Assignment II: Empirical Analysis, Analysis of Algorithms and Big-Oh Notation

Empirical Analysis of Algorithms
*empirical denotes information gained by means of observation, experience, or experiment

- tells about a specific implementation of the abstract algorithm, so you have to be careful about what conclusions you can draw about an algorithm's efficiency from the empirical analysis of a specific implementation. (For deciding on the time it takes to check membership in a set, did you implement your set as a list, a hash table, ...?)

- its strength lies in its applicability to any algorithm, but its results can depend on the particular sample of instances and the computer used in the experiment.

- the principal alternative to mathematical analysis.

The General Plan
1. Decide on the efficiency metric M to be measured and the measurement unit
(an operation’s count vs. a time unit).
2. Decide on characteristics of the input sample (its range, size, and so on).
3. Prepare a program implementing the algorithm for the experimentation.
4. Generate a sample of inputs.
5. Run the algorithm on the sample inputs and record the data observed.
6. Analyze the data obtained.
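The six steps above can be sketched as a small C harness. The choices below (wall-clock time as the metric, random arrays as the sample, selection sort as the algorithm under test) are our own illustrative assumptions:

```c
#include <stdlib.h>
#include <time.h>

/* Step 3: a program implementing the algorithm (plain selection sort). */
static void selectionSortTimed(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        int t = a[i]; a[i] = a[min]; a[min] = t;
    }
}

/* Steps 1, 4, 5: the metric is CPU time in seconds, the input is a
   random array of size n; run the algorithm and record the time. */
double timeSortOnRandomInput(int n)
{
    int *a = malloc(n * sizeof *a);
    for (int i = 0; i < n; i++)
        a[i] = rand();                 /* step 4: generate a sample input */
    clock_t start = clock();           /* step 5: run and record */
    selectionSortTimed(a, n);
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    free(a);
    return seconds;                    /* step 6: analyze the data obtained */
}
```

Step 6 would then compare the recorded times across sizes, e.g. doubling n and checking whether the time roughly quadruples, as the quadratic analysis predicts.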



Analysis of Algorithms


-An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.

-In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions.

In order to learn more about an algorithm, we can "analyze" it. By this we mean to study the specification of the algorithm and to draw conclusions about how the implementation of that algorithm, the program, will perform in general. But what can we analyze? We can

  • determine the running time of a program as a function of its inputs;
  • determine the total or maximum memory space needed for program data;
  • determine the total size of the program code;
  • determine whether the program correctly computes the desired result;
  • determine the complexity of the program, e.g., how easy is it to read, understand, and modify; and,
  • determine the robustness of the program, e.g., how well does it deal with unexpected or erroneous inputs?


Big-oh Notation


Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that
f(n) ≤ c·g(n) for n ≥ n0

Big-Oh Rules

-If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
drop lower-order terms,
drop constant factors.
-Use the smallest possible class of functions:
say “2n is O(n)” instead of “2n is O(n²)”.
-Use the simplest expression of the class:
say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”.
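As a worked instance of the definition, the rule "3n + 5 is O(n)" can be justified by exhibiting the constants explicitly (the constants c = 8 and n0 = 1 are our own choice; many others work):

```latex
3n + 5 \le 3n + 5n = 8n \quad \text{for all } n \ge 1,
```

so taking c = 8 and n0 = 1 satisfies f(n) ≤ c·g(n) for all n ≥ n0, which is exactly what "3n + 5 is O(n)" requires.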


I. O(1) - constant time

This means that the algorithm requires the same fixed number of steps regardless of the size of the task.

Examples (assuming a reasonable implementation of the task):

A. Push and Pop operations for a stack (containing n elements);

B. Insert and Remove operations for a queue.

II. O(n) - linear time

This means that the algorithm requires a number of steps proportional to the size of the task.

Examples (assuming a reasonable implementation of the task):

A. Traversal of a list (a linked list or an array) with n elements;

B. Finding the maximum or minimum element in a list, or sequential search in an unsorted list of n elements;

C. Traversal of a tree with n nodes;

D. Calculating iteratively n-factorial; finding iteratively the nth Fibonacci number.

III. O(n²) - quadratic time

The number of operations is proportional to the size of the task squared.

Examples:

A. Some more simplistic sorting algorithms, for instance a selection sort of n elements;

B. Comparing two two-dimensional arrays of size n by n;

C. Finding duplicates in an unsorted list of n elements (implemented with two nested loops).
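Example C above can be sketched as the pair of nested loops that makes the quadratic count visible (the function name is ours):

```c
/* Finding duplicates in an unsorted list with two nested loops:
   every pair (i, j) with i < j is compared, about n(n-1)/2 checks
   in the worst case, hence quadratic time. Returns 1 if any value
   occurs more than once, 0 otherwise. */
int hasDuplicate(const int a[], int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return 1;
    return 0;
}
```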

IV. O(log n) - logarithmic time

Examples:

A. Binary search in a sorted list of n elements;

B. Insert and Find operations for a binary search tree with n nodes;

C. Insert and Remove operations for a heap with n nodes.
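Binary search (example A above) shows where the logarithm comes from: each comparison halves the remaining range, so at most about log₂(n) + 1 steps are needed. A minimal sketch:

```c
/* Binary search in a sorted array of n elements. Returns the index
   of key, or -1 if key is absent. Each iteration halves the range
   [lo, hi], giving O(log n) comparisons. */
int binarySearch(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* key is in the right half */
        else
            hi = mid - 1;               /* key is in the left half */
    }
    return -1;
}
```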

V. O(n log n) - "n log n " time

Examples:

A. More advanced sorting algorithms - quicksort, mergesort

VI. O(aⁿ) (a > 1) - exponential time

Examples:

A. Recursive Fibonacci implementation

B. Towers of Hanoi

C. Generating all permutations of n symbols
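The recursive Fibonacci implementation (example A above) can be instrumented to expose the exponential growth; the global call counter is our own addition, not part of the standard definition:

```c
/* Naive recursive Fibonacci. Each call for n >= 2 spawns two more
   calls, so the call tree grows exponentially in n; fibCalls counts
   the total number of calls made. */
long fibCalls;

long fib(int n)
{
    fibCalls++;
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}
```

Even for n = 10 the function is called well over a hundred times; an iterative version would need only about 10 steps.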

The best time in the above list is obviously constant time, and the worst is exponential time which, as we have seen, quickly overwhelms even the fastest computers even for relatively small n. Polynomial growth (linear, quadratic, cubic, etc.) is considered manageable as compared to exponential growth.







UNION-FIND ALGORITHM


In computing, given a set of elements, it is often useful to break them up or partition them into a number of separate, nonoverlapping sets. A disjoint-set data structure is a data structure that keeps track of such a partitioning. A union-find algorithm is an algorithm that performs two useful operations on such a data structure:

  • Find: Determine which set a particular element is in. Also useful for determining if two elements are in the same set.
  • Union: Combine or merge two sets into a single set.
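The two operations can be sketched as a parent-array forest over the integers 0..n−1, with path compression in Find; the fixed capacity and all names below are our own choices, not from the source:

```c
#define MAX_ELEMS 100

/* parent[x] points toward the representative of x's set;
   a root satisfies parent[x] == x. */
int parent[MAX_ELEMS];

/* Initially every element is in its own singleton set. */
void makeSets(int n)
{
    for (int i = 0; i < n; i++)
        parent[i] = i;
}

/* Find: return the representative of x's set, compressing the path
   so that future finds are faster. */
int find(int x)
{
    if (parent[x] != x)
        parent[x] = find(parent[x]);
    return parent[x];
}

/* Union: merge the sets containing x and y (named unite because
   "union" is a C keyword). */
void unite(int x, int y)
{
    int rx = find(x), ry = find(y);
    if (rx != ry)
        parent[rx] = ry;
}
```

Two elements are in the same set exactly when their finds return the same representative.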

The union-find algorithm is used for maintaining a number of non-overlapping sets from a finite universe of elements. The algorithm has applications in a number of areas, including the computation of spanning trees, image processing, and scientific computations. Although the algorithm is inherently sequential, there have been some previous efforts at constructing parallel implementations. These have mainly focused on shared-memory computers.


By REX LOUIE B PILONGO

Piclyf.com

Check this page out!!!





These pictures were taken on our recent field trip last September 22, 2010 at Sutherland. We had a great time; it was a good hour to spend in your company. Thank you, Mr. Eric Mendoza, for giving us a tour and accommodating us well on that field trip.


Life is good!! I hope to be in good shape when the Major Major field trip arrives, and I hope it will be the happiest field trip of my life, for it will be the only field trip in my college life. Ü

It is nice to be with good people. Thanks to my dear friends for these pictures. Fun and happiness are in the air, and though thrill and worry are in our hearts, we can make it through!!!!


It was a fantastic trip, though I was not satisfied with the information we gathered. We went to see the company's systems: what computers they use, how they process data, and all the other information about the company.

Hope to graduate soon, on time or not on time, no matter how long!!!


About the Dino

I am someone who can communicate. I speak up about things that I know will fire up the sitch and make things interesting. I really am a positive person and do believe that nothing is impossible and that only our thoughts make it impossible. I don't frankly tell a person if I like or hate what he or she is or does; I do it by telling my friends.

Yes, it's Plankton, the sea creature who won't give up on his interests, like me. And yes, I'm a big fan of SpongeBob SquarePants and a lot of funny, interesting series.