Saturday, February 22, 2025

CST370 - Week 7

 

CST370-30_2252: Design & Analysis of Algorithms – Week 7

In this module, we discuss two non-comparison sorting techniques: counting sort and radix sort. Counting sort assumes the keys fall within a known range; it tallies the frequency of each value and then accumulates those frequencies into a distribution that gives each key its final position. Radix sort orders numbers by applying a stable counting sort to each digit, starting with the least significant digit. Unlike plain counting sort, radix sort does not depend on the overall range of the values; it runs once per digit, for a time efficiency of O(d*(n + k)), where d is the number of digits and k is the size of the digit alphabet (the base).
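As a minimal sketch of both sorts in Python (the sample lists and base-10 digit handling are my own illustrative choices, with per-digit buckets standing in for the counting step of radix sort):

```python
def counting_sort(a, k):
    """Sort a list of integers in the range 0..k using counting sort."""
    count = [0] * (k + 1)
    for v in a:                      # tally the frequency of each value
        count[v] += 1
    for i in range(1, k + 1):        # accumulate: prefix sums give final positions
        count[i] += count[i - 1]
    out = [0] * len(a)
    for v in reversed(a):            # scan right to left to keep the sort stable
        count[v] -= 1
        out[count[v]] = v
    return out

def radix_sort(a, base=10):
    """LSD radix sort for nonnegative integers: one stable pass per digit."""
    if not a:
        return a
    digits = len(str(max(a)))
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for v in a:
            buckets[(v // base**d) % base].append(v)
        a = [v for b in buckets for v in b]
    return a

print(counting_sort([4, 1, 3, 4, 3], 4))  # → [1, 3, 3, 4, 4]
```

Scanning right to left in the output pass is what makes counting sort stable, which radix sort relies on for correctness.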

Additionally, the lecture explored dynamic programming as a design approach for problems with overlapping subproblems. For example, the naive recursive algorithm for computing Fibonacci numbers repeatedly recomputes the same values. Applying dynamic programming, we store the results of previously computed calls in an array so that a subproblem that has already been solved is never solved again.
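A minimal memoized Fibonacci sketch in Python (a dictionary plays the role of the results array here):

```python
def fib(n, memo=None):
    """Fibonacci with memoization: each subproblem is solved only once."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:                          # solve only if not already stored
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(30))  # → 832040
```

Without the memo table, the plain recursion makes an exponential number of calls; with it, each value 0..n is computed once.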

The coin-collection problem is an example of applying the dynamic programming strategy to find an optimal path. The problem is modeled as a robot moving over an m x n board of cells, restricted to moving only right or down. The solution illustrates the principle of optimality for an algorithm that maximizes the number of coins collected along such a restricted path. Furthermore, we reviewed the coin-row problem, which computes the maximum value of coins that can be picked up from a row under the constraint that no two adjacent coins may be picked.
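The coin-row recurrence F(i) = max(F(i−1), coins[i] + F(i−2)) can be sketched as follows (the sample row of coins is my own example):

```python
def coin_row(coins):
    """Maximum value pickable from a row of coins when no two adjacent
    coins may be picked. Recurrence: F(i) = max(F(i-1), coins[i] + F(i-2))."""
    prev, cur = 0, 0                 # F(i-2) and F(i-1)
    for c in coins:
        prev, cur = cur, max(cur, c + prev)   # skip coin i, or take it
    return cur

print(coin_row([5, 1, 2, 10, 6, 2]))  # → 17 (pick 5, 10, and 2)
```

Keeping only the last two subproblem results reduces the table to two variables, so the space cost is constant.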

Warshall’s algorithm determines the reachability of the jth vertex from the ith vertex, forming the transitive closure matrix of a digraph. The main idea is to build up path information iteratively: each matrix R(k) is computed from its immediate predecessor R(k−1), marking entry (i, j) with 1 whenever a path exists from vertex vi to vertex vj using only the first k vertices as intermediates. Similarly, Floyd’s algorithm for the all-pairs shortest-paths problem uses the same approach as Warshall’s: starting from the n x n weight matrix of a directed or undirected weighted graph, the distance matrix is updated once for each vertex until it holds the shortest-path distances between all pairs of vertices.
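Both algorithms can be sketched with the same triple loop; only the update rule differs (or/and for reachability versus min/+ for distances). The list-of-lists matrices and small example graphs below are my own illustrative choices:

```python
INF = float('inf')

def warshall(adj):
    """Transitive closure of a digraph from its 0/1 adjacency matrix.
    After pass k, r[i][j] = 1 iff a path i -> j exists using only
    vertices 0..k as intermediates."""
    n = len(adj)
    r = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def floyd(w):
    """All-pairs shortest paths: same loop structure with min/+ updates.
    w[i][j] is the edge weight, or INF when no edge exists (w[i][i] = 0)."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# Path graph 0 -> 1 -> 2: vertex 2 becomes reachable from 0.
print(warshall([[0, 1, 0], [0, 0, 1], [0, 0, 0]]))
```

Both run in Θ(n^3) time; the matrices can even be updated in place, since using partially updated rows never produces a wrong answer for these two problems.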

Finally, the module explains the greedy technique. The algorithm makes the locally optimal (greedy) choice at each step while adhering to the problem's constraints; once a choice is made, it is never revisited, and the steps continue until a complete solution is assembled. For some problems this yields only an approximate solution, but for the minimum spanning tree (MST) problem the greedy concept is exact: Prim's algorithm finds a spanning tree of minimum total edge weight in a weighted connected undirected graph.
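A sketch of Prim's algorithm using a min-priority queue (the adjacency-list dictionary format and the sample triangle graph are illustrative assumptions):

```python
import heapq

def prim(graph, start):
    """Prim's MST on an undirected weighted graph given as
    {vertex: [(neighbor, weight), ...]}. Returns the total MST weight.
    Greedy step: always add the cheapest edge that crosses the cut
    between the tree built so far and the remaining vertices."""
    visited = {start}
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)
    total = 0
    while edges and len(visited) < len(graph):
        w, _, v = heapq.heappop(edges)
        if v in visited:                      # stale edge: endpoint already in tree
            continue
        visited.add(v)
        total += w
        for u, wu in graph[v]:
            if u not in visited:
                heapq.heappush(edges, (wu, v, u))
    return total

g = {'a': [('b', 1), ('c', 3)],
     'b': [('a', 1), ('c', 2)],
     'c': [('a', 3), ('b', 2)]}
print(prim(g, 'a'))  # → 3 (edges a-b and b-c)
```

With a binary heap, this runs in O(E log V) time; each greedy edge choice is final, which is exactly the irrevocability the technique requires.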

Sunday, February 16, 2025

CST370 - Week 6

CST370-30_2252: Design & Analysis of Algorithms – Week 6

The lecture this week covers a variety of topics, starting with the AVL tree and the 2-3 tree. A binary search tree should be kept balanced to preserve the time efficiency of searching, insertion, and deletion. The AVL tree is a self-balancing structure that keeps the average time complexity of these BST operations at Θ(log n). Its main property is that at every node, the difference between the left and right subtree heights (the balance factor) must be −1, 0, or +1. Rotation techniques (R, L, LR, RL) are used to rebalance an AVL tree after an insertion or deletion.
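A small sketch of checking the AVL balance property (the (key, left, right) tuple representation is an assumption for illustration; real AVL code would also perform the rotations to restore this property):

```python
def is_avl_balanced(root):
    """Check the AVL property: |height(left) - height(right)| <= 1 at
    every node. A node is a (key, left, right) tuple; None is empty."""
    def height(node):
        if node is None:
            return -1
        hl = height(node[1])
        hr = height(node[2])
        if hl is None or hr is None or abs(hl - hr) > 1:
            return None                # propagate "unbalanced" upward
        return 1 + max(hl, hr)
    return height(root) is not None

balanced = (2, (1, None, None), (3, None, None))
chain = (1, None, (2, None, (3, None, None)))   # degenerate right chain
print(is_avl_balanced(balanced), is_avl_balanced(chain))  # → True False
```

The degenerate chain is exactly the shape that rotations prevent: without them, BST operations degrade to Θ(n).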


A 2-3 Tree data structure with 2 children and one data element or 3 children with 2 data elements

Another structure that can be used in place of a plain BST is the 2-3 tree. A 2-node holds a single key K and has two children: a left subtree with keys less than K and a right subtree with keys greater than K. The other node type is the 3-node, with two keys K1 and K2 (K1 < K2) and three children: the left subtree contains keys less than K1, the middle child holds keys between K1 and K2, and the right child holds keys greater than K2. A defining property of 2-3 trees is that all leaves are on the same level. We also reviewed how a key is inserted into a 2-3 tree. For both the worst and average cases, the time complexity of searching, insertion, and deletion is Θ(log n).

The module introduces the heap data structure as an efficient way to implement priority queue operations. A max-heap must satisfy two conditions: 1) the binary tree must be complete, meaning every level is full except possibly the last, which is filled from left to right; 2) the key at each node must be greater than or equal to the keys of its children. A heap can be represented as an array of n elements starting at index 1, listing the nodes in top-down, left-to-right order. The bottom-up heap construction algorithm first lays out the keys as a complete binary tree and then heapifies its nodes from the last parent up to the root. A key can also be inserted by placing it at the end of the array and exchanging it with its parent as needed. Heapsort is accomplished by constructing a max-heap and then repeatedly deleting the maximum key; the deleted keys come out in sorted order.
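A sketch of bottom-up construction and heapsort using the 1-indexed array representation described above (the sample input is my own):

```python
def heapify(h, i, n):
    """Sift h[i] down within the 1-indexed max-heap array h[1..n]."""
    while 2 * i <= n:
        j = 2 * i                        # left child
        if j < n and h[j + 1] > h[j]:    # pick the larger of the two children
            j += 1
        if h[i] >= h[j]:                 # heap property already holds here
            break
        h[i], h[j] = h[j], h[i]
        i = j

def heapsort(a):
    h = [None] + a[:]                    # pad index 0 so children are 2i, 2i+1
    n = len(a)
    for i in range(n // 2, 0, -1):       # bottom-up construction: last parent up
        heapify(h, i, n)
    for last in range(n, 1, -1):         # repeatedly delete the maximum
        h[1], h[last] = h[last], h[1]    # move max to its final position
        heapify(h, 1, last - 1)
    return h[1:]

print(heapsort([2, 9, 7, 6, 5, 8]))  # → [2, 5, 6, 7, 8, 9]
```

Construction is Θ(n) and each of the n deletions is O(log n), so heapsort runs in O(n log n) overall, in place.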

The final section of this module discusses hashing as a method of distributing keys across a hash table. The hash function h(K) = K mod m maps each key to a cell in the range 0 to m−1 (the possible remainders of division by m). When two keys are assigned to the same cell of the hash table, a collision occurs. We reviewed two techniques for resolving collisions: separate chaining and linear probing.
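A minimal separate-chaining sketch with h(K) = K mod m (integer keys and the table size m = 11 are illustrative assumptions):

```python
class ChainedHashTable:
    """Hash table using separate chaining: each cell holds a list (chain)
    of all keys that hash to it, so collisions simply extend the chain."""
    def __init__(self, m=11):
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def insert(self, key):
        self.buckets[key % self.m].append(key)

    def search(self, key):
        return key in self.buckets[key % self.m]

t = ChainedHashTable(m=11)
for k in (19, 30, 8):        # 19, 30, and 8 all hash to cell 8: collisions
    t.insert(k)
print(t.search(30), t.search(7))  # → True False
```

With a good hash function, the average chain length is the load factor n/m, giving near-constant expected search time; linear probing instead stores colliding keys in the next empty cell of the table itself.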

 


Friday, February 7, 2025

CST370 - Week 5

 

CST370-30_2252: Design & Analysis of Algorithms – Week 5

Module 5 introduces quicksort, another algorithm that uses the divide-and-conquer technique. Unlike mergesort, the main work happens in the division (partitioning) stage, with little work needed to combine the subproblems. A pivot is used to partition the array so that elements smaller than the pivot end up to its left and elements larger than the pivot end up to its right. The best-case scenario occurs when each partition splits the array in the middle. On the contrary, the worst case occurs when one subarray comes out empty; for example, an already sorted (increasing) array with the pivot always chosen at index zero.
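A sketch of quicksort with the first element as pivot (a Lomuto-style partition is assumed here for brevity; the textbook's Hoare partition differs in mechanics but achieves the same split):

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around pivot a[lo]; return the pivot's final index."""
    pivot = a[lo]
    i = lo
    for j in range(lo + 1, hi + 1):
        if a[j] < pivot:               # elements smaller than the pivot go left
            i += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[i] = a[i], a[lo]          # place the pivot between the two parts
    return i

def quicksort(a, lo=0, hi=None):
    """In-place quicksort: partition, then sort the two sides recursively."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
    return a

print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))  # → [1, 2, 3, 4, 5, 7, 8, 9]
```

Feeding an already sorted array into this version triggers the worst case: every partition leaves one side empty, and the recursion depth (and time) becomes quadratic.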

The decrease-and-conquer algorithm design technique takes three forms: decrease by a constant, decrease by a constant factor, and variable-size decrease. The textbook illustrates the concept by computing a^n, where a > 0 and n is a nonnegative integer. The problem can be solved by decreasing n by the constant 1, using the relationship a^n = a^(n−1) * a; applying the recursive function obtained from this relationship leads to the solution of the original problem. Binary search is an example of decrease by a constant factor, since the portion of the list under consideration is cut in half at each step.
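Both decrease-by-one exponentiation and binary search can be sketched briefly (sample inputs are my own):

```python
def power(a, n):
    """a^n by decrease-by-a-constant: a^n = a^(n-1) * a, with a^0 = 1."""
    return 1 if n == 0 else power(a, n - 1) * a

def binary_search(a, key):
    """Decrease by a constant factor: halve the sorted range each step.
    Returns the index of key, or -1 if it is absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(power(2, 10))                       # → 1024
print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```

The size reduction dictates the cost: decreasing n by 1 gives Θ(n) multiplications, while halving the range gives binary search its Θ(log n) bound.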



Binary Search Tree (BST) with root node 10

In this section, we applied the divide-and-conquer strategy to problems defined on binary trees. The standard representation of a binary tree consists of a root together with its left and right subtrees. Additionally, we reviewed the three types of binary tree traversal: preorder, inorder, and postorder.
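The three traversals can be sketched recursively (the (key, left, right) tuple representation and the sample BST are illustrative assumptions):

```python
def preorder(t):
    """t is (key, left, right) or None; visit root, then left, then right."""
    return [] if t is None else [t[0]] + preorder(t[1]) + preorder(t[2])

def inorder(t):
    """Visit left subtree, then root, then right subtree."""
    return [] if t is None else inorder(t[1]) + [t[0]] + inorder(t[2])

def postorder(t):
    """Visit both subtrees before the root."""
    return [] if t is None else postorder(t[1]) + postorder(t[2]) + [t[0]]

bst = (10, (5, None, None), (15, (12, None, None), None))
print(inorder(bst))  # → [5, 10, 12, 15]
```

For a binary search tree, the inorder traversal visits the keys in sorted order, which is a handy correctness check.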

We also discussed topological sorting for directed acyclic graphs (DAGs). An important property of the depth-first search (DFS) forest is the absence of back edges, which is essential for topological sorting; in other words, the directed graph must be acyclic for a topological ordering to exist. Lastly, the module covered presorting as a way to improve overall time complexity. For example, checking element uniqueness in a sorted array reduces the complexity from Θ(n^2) to Θ(n log n).
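A DFS-based topological sort sketch (vertices are recorded in the reverse order of when DFS finishes them; the adjacency-list dictionary and sample DAG are my own assumptions):

```python
def topological_sort(graph):
    """Topological order of a DAG given as {vertex: [neighbors]}.
    Each vertex is appended when DFS pops back out of it, i.e. after all
    of its descendants; reversing that list gives a valid ordering."""
    order, visited = [], set()

    def dfs(v):
        visited.add(v)
        for u in graph.get(v, []):
            if u not in visited:
                dfs(u)
        order.append(v)                # finished: all successors already listed

    for v in graph:
        if v not in visited:
            dfs(v)
    return order[::-1]

dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(topological_sort(dag))
```

If the graph contained a cycle, DFS would encounter a back edge and no ordering consistent with every edge could exist, which is exactly the acyclicity requirement above.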


Sunday, February 2, 2025

CST370 - Week 4

 

CST370-30_2252: Design & Analysis of Algorithms – Week 4

We are now halfway through the course after completing the midterm exam. This week was shorter because we were given time to study for the exam. I dedicated most of my study time to reviewing topics covered in previous weeks. Some of the key concepts required significant focus and practice in specific procedures to solve the problems. I also attended the midterm Q&A session, which was very helpful in discussing several topics.

This week's material was relatively light, focusing on introducing the merge sort algorithm. Merge sort is based on the divide-and-conquer strategy: the array to be sorted is divided into halves, each half is sorted individually, and then the halves are merged. During the merging, the elements at the front of the two sorted subarrays are compared and the smaller one is moved into the output. This process continues until all elements are combined into a single sorted array. The time efficiency of merge sort can be obtained from the Master Theorem, giving a time complexity of Θ(n log n).
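The divide-and-merge steps described above can be sketched as (the sample input is my own):

```python
def merge_sort(a):
    """Divide the array in half, sort each half recursively, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # compare heads of the two halves
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]          # append whichever half remains

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))  # → [1, 2, 3, 4, 5, 7, 8, 9]
```

The recurrence T(n) = 2T(n/2) + Θ(n) for the two halves plus the linear merge is what the Master Theorem resolves to Θ(n log n).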

