# Algorithms
## Sorting Algorithms
Here's a table that lists common sorting algorithms along with their average time complexity (in Big O notation) and space complexity. Please note that these complexities can vary depending on implementation and specific cases.
| Sorting Algorithm | Average Time Complexity | Space Complexity |
|---|---|---|
| Bubble Sort | O(n^2) | O(1) |
| Selection Sort | O(n^2) | O(1) |
| Insertion Sort | O(n^2) | O(1) |
| Merge Sort | O(n log n) | O(n) |
| Quick Sort | O(n log n) | O(log n) to O(n) |
| Heap Sort | O(n log n) | O(1) |
| Counting Sort | O(n + k) | O(n + k) |
| Radix Sort | O(n * k) | O(n + k) |
| Bucket Sort | O(n^2) | O(n) |
| Tim Sort | O(n log n) | O(n) |
- n represents the number of elements in the array to be sorted.
- k represents the range of input values in Counting Sort, or the number of digits (key length) in Radix Sort.
Merge Sort and Quick Sort are two efficient sorting algorithms commonly used in computer science and programming. Here, I'll explain each algorithm in detail and provide JavaScript code examples for both.
### Merge Sort
Merge Sort is a popular divide-and-conquer sorting algorithm known for its stability and efficiency. It works by recursively dividing the unsorted array into smaller subarrays, sorting those subarrays, and then merging them back together to obtain the final sorted array.
Algorithm Steps:
- Divide the unsorted array into two halves.
- Recursively sort each half.
- Merge the sorted halves to produce the final sorted array.
JavaScript Code Example:
```javascript
function mergeSort(arr) {
  if (arr.length <= 1) {
    return arr; // Base case: an array with 0 or 1 elements is already sorted.
  }

  // Divide the array into two halves
  const mid = Math.floor(arr.length / 2);
  const leftHalf = arr.slice(0, mid);
  const rightHalf = arr.slice(mid);

  // Recursively sort each half
  const sortedLeft = mergeSort(leftHalf);
  const sortedRight = mergeSort(rightHalf);

  // Merge the sorted halves
  return merge(sortedLeft, sortedRight);
}

function merge(left, right) {
  const result = [];
  let leftIndex = 0;
  let rightIndex = 0;

  // Compare elements from both halves and merge them in sorted order
  while (leftIndex < left.length && rightIndex < right.length) {
    if (left[leftIndex] < right[rightIndex]) {
      result.push(left[leftIndex]);
      leftIndex++;
    } else {
      result.push(right[rightIndex]);
      rightIndex++;
    }
  }

  // Append remaining elements from either half (at most one is non-empty)
  return result.concat(left.slice(leftIndex), right.slice(rightIndex));
}

const unsortedArray = [64, 34, 25, 12, 22, 11, 90];
const sortedArray = mergeSort(unsortedArray);
console.log(sortedArray); // Output: [11, 12, 22, 25, 34, 64, 90]
```
### Quick Sort
Quick Sort is another efficient divide-and-conquer sorting algorithm known for its speed. It works by selecting a "pivot" element from the array and partitioning the other elements into two subarrays, according to whether they are less than or greater than the pivot. The subarrays are then sorted recursively.
Algorithm Steps:
- Choose a pivot element from the array.
- Partition the array into two subarrays: elements less than or equal to the pivot, and elements greater than the pivot.
- Recursively apply Quick Sort to both subarrays.
- Combine the sorted subarrays and the pivot to obtain the final sorted array.
JavaScript Code Example:
```javascript
function quickSort(arr) {
  if (arr.length <= 1) {
    return arr; // Base case: an array with 0 or 1 elements is already sorted.
  }

  const pivot = arr[0];
  const left = [];
  const right = [];

  // Partition the array into left (<= pivot) and right (> pivot) subarrays
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] <= pivot) {
      left.push(arr[i]);
    } else {
      right.push(arr[i]);
    }
  }

  // Recursively sort left and right subarrays, and combine them with the pivot
  return [...quickSort(left), pivot, ...quickSort(right)];
}

const unsortedArray = [64, 34, 25, 12, 22, 11, 90];
const sortedArray = quickSort(unsortedArray);
console.log(sortedArray); // Output: [11, 12, 22, 25, 34, 64, 90]
```
Both Merge Sort and Quick Sort have an average time complexity of O(n log n) and are widely used for sorting large datasets efficiently. Note, however, that the Quick Sort above always picks the first element as the pivot, so it degrades to O(n^2) on already-sorted input; randomized or median-of-three pivot selection avoids this in practice.
Sorting algorithms are essential operations in computer science and programming, as they allow you to arrange a list of items in a specific order, such as ascending or descending. Here, I'll explain three common sorting algorithms—Bubble Sort, Selection Sort, and Insertion Sort—using JavaScript code examples for each.
### Bubble Sort
Bubble Sort is a simple comparison-based sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass-through is repeated until no swaps are needed, indicating that the list is sorted.
JavaScript Code Example:
```javascript
function bubbleSort(arr) {
  let swapped;
  do {
    swapped = false;
    for (let i = 0; i < arr.length - 1; i++) {
      if (arr[i] > arr[i + 1]) {
        // Swap arr[i] and arr[i+1]
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swapped = true;
      }
    }
  } while (swapped); // Stop once a full pass makes no swaps
  return arr;
}

const unsortedArray = [64, 34, 25, 12, 22, 11, 90];
const sortedArray = bubbleSort(unsortedArray);
console.log(sortedArray); // Output: [11, 12, 22, 25, 34, 64, 90]
```
### Selection Sort
Selection Sort divides the input list into two parts: the sorted part and the unsorted part. Initially, the sorted part is empty, and the algorithm repeatedly selects the minimum element from the unsorted part and moves it to the end of the sorted part.
JavaScript Code Example:
```javascript
function selectionSort(arr) {
  const n = arr.length;
  for (let i = 0; i < n - 1; i++) {
    // Find the minimum element in the unsorted part arr[i..n-1]
    let minIndex = i;
    for (let j = i + 1; j < n; j++) {
      if (arr[j] < arr[minIndex]) {
        minIndex = j;
      }
    }
    if (minIndex !== i) {
      // Swap arr[i] and arr[minIndex]
      [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
    }
  }
  return arr;
}

const unsortedArray = [64, 25, 12, 22, 11];
const sortedArray = selectionSort(unsortedArray);
console.log(sortedArray); // Output: [11, 12, 22, 25, 64]
```
### Insertion Sort
Insertion Sort builds the final sorted array one item at a time. It takes each element from the unsorted part and inserts it into its correct position within the sorted part.
JavaScript Code Example:
```javascript
function insertionSort(arr) {
  const n = arr.length;
  for (let i = 1; i < n; i++) {
    const key = arr[i];
    let j = i - 1;
    // Shift larger elements of the sorted part one position right
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = key; // Insert key into its correct position
  }
  return arr;
}

const unsortedArray = [64, 34, 25, 12, 22, 11, 90];
const sortedArray = insertionSort(unsortedArray);
console.log(sortedArray); // Output: [11, 12, 22, 25, 34, 64, 90]
```
These are three basic sorting algorithms in JavaScript. While they are educational and useful for small datasets, for larger datasets, more efficient algorithms like Merge Sort, Quick Sort, or built-in methods like Array.prototype.sort() are typically preferred due to their better time complexity.
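One caveat about the built-in method: by default, `Array.prototype.sort()` converts elements to strings and compares them lexicographically, so numeric arrays need an explicit comparator. A quick illustration:

```javascript
const values = [100, 25, 9];

// Without a comparator, elements are compared as strings,
// and "100" < "25" < "9" lexicographically:
console.log([...values].sort()); // Output: [100, 25, 9]

// With a numeric comparator, the array sorts as expected:
console.log([...values].sort((a, b) => a - b)); // Output: [9, 25, 100]
```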
## Searching Algorithms
Here's a table that lists common searching algorithms along with their average and worst-case time complexities (in Big O notation) and space complexities:
| Searching Algorithm | Average Time Complexity | Worst-Case Time Complexity | Space Complexity |
|---|---|---|---|
| Linear Search | O(n) | O(n) | O(1) |
| Binary Search | O(log n) | O(log n) | O(1) |
| Jump Search | O(√n) | O(√n) | O(1) |
| Interpolation Search | O(log log n) | O(n) | O(1) |
| Exponential Search | O(log n) | O(log n) | O(1) |
| Hash Table Lookup | O(1) average | O(n) worst-case | O(n) |
- n represents the number of elements in the data structure being searched.
- Interpolation Search achieves O(log log n) on average only when the data is sorted and roughly uniformly distributed; on skewed data it degrades to O(n).
- Hash Table Lookup's average and worst cases differ because performance depends on the distribution of keys and the quality of the hash function.
- Space complexity for most searching algorithms is O(1) because they do not require additional data structures proportional to the size of the input.
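As a quick illustration of the hash-table case, JavaScript's built-in `Map` provides hash-style average O(1) lookups (the word list below is just a made-up example):

```javascript
// Build an index from word -> position using a hash-based Map
const wordIndex = new Map();
const words = ['apple', 'banana', 'cherry'];
words.forEach((word, i) => wordIndex.set(word, i));

console.log(wordIndex.get('banana')); // Output: 1
console.log(wordIndex.has('durian')); // Output: false
```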
Searching algorithms are fundamental operations in computer science that help locate a specific item within a collection of data. Two commonly used searching algorithms are Binary Search and Linear Search. Here, I'll explain each algorithm in depth and provide JavaScript code examples for both.
### 1. Linear Search
Linear Search, also known as Sequential Search, is the simplest searching algorithm. It sequentially checks each element of the list until a match is found or the entire list has been searched. It's suitable for both sorted and unsorted lists.
Algorithm Steps:
- Start at the beginning of the list.
- Compare the target value with the current element.
- If they match, return the index of the current element.
- If not, move to the next element and repeat the comparison.
- Continue until the end of the list is reached or the target value is found.
JavaScript Code Example:
```javascript
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i; // Found the target at index i
    }
  }
  return -1; // Target not found in the array
}

const myList = [4, 8, 2, 1, 9, 6];
const targetValue = 9;
const result = linearSearch(myList, targetValue);

if (result !== -1) {
  console.log(`Target ${targetValue} found at index ${result}`);
} else {
  console.log(`Target ${targetValue} not found in the array`);
}
```
### 2. Binary Search
Binary Search is a more efficient searching algorithm, but it requires the list to be sorted in ascending or descending order. It works by repeatedly dividing the search interval in half, reducing the number of elements to be checked.
Algorithm Steps:
- Start with the entire sorted list.
- Compare the target value with the middle element.
- If they match, return the index of the middle element.
- If the target is smaller, search the left half of the list.
- If the target is larger, search the right half of the list.
- Repeat steps 2-5 until the target is found or the search interval becomes empty.
JavaScript Code Example:
```javascript
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) {
      return mid; // Found the target at index mid
    }
    if (arr[mid] < target) {
      left = mid + 1; // Search the right half
    } else {
      right = mid - 1; // Search the left half
    }
  }
  return -1; // Target not found in the array
}

const sortedList = [1, 2, 4, 6, 8, 9];
const targetValue = 4;
const result = binarySearch(sortedList, targetValue);

if (result !== -1) {
  console.log(`Target ${targetValue} found at index ${result}`);
} else {
  console.log(`Target ${targetValue} not found in the array`);
}
```
Binary Search is particularly efficient for large sorted lists as it reduces the search space logarithmically, resulting in faster search times compared to Linear Search. However, it requires the list to be sorted initially, which is a limitation in some cases.
## Graph Algorithms
Graph algorithms are essential for solving problems involving networks, relationships, and connectivity. Here's a table that lists common graph algorithms along with their time and space complexities:
| Graph Algorithm | Time Complexity | Space Complexity |
|---|---|---|
| Breadth-First Search | O(V + E) | O(V) |
| Depth-First Search | O(V + E) | O(V) |
| Dijkstra's Algorithm | O((V + E) * log(V)) | O(V) |
| Bellman-Ford Algorithm | O(V * E) | O(V) |
| Kruskal's Algorithm | O(E * log(E)) | O(V + E) |
| Prim's Algorithm | O(V^2) (with adjacency matrix) or O(E + V log V) (with priority queue) | O(V) or O(E + V) |
| Topological Sort | O(V + E) | O(V) |
| Floyd-Warshall Algorithm | O(V^3) | O(V^2) |
- V represents the number of vertices in the graph.
- E represents the number of edges in the graph.
- Time complexities may vary depending on the specific implementation and data structures used (e.g., adjacency lists vs. adjacency matrices).
- Space complexities are typically related to data structures used for traversal and storage of information during the algorithm's execution.
These complexities provide a general understanding of the efficiency of these graph algorithms. However, real-world performance may vary based on factors such as the graph's size, density, and the specific use case.
Here's a brief summary of how these algorithms work:
- Breadth-First Search (BFS) (queue): explores all neighbors of a node before moving on to their neighbors, ensuring that it visits nodes in order of their distance from the source.
- Depth-First Search (DFS) (recursion / stack): explores as far as possible along one branch before backtracking. It can be implemented either recursively or with an explicit stack.
- Dijkstra's Algorithm (min-heap): finds shortest paths in weighted graphs by repeatedly selecting the node with the smallest known distance and updating the distances of its neighbors. A priority queue makes this selection efficient.
### 1. Breadth-First Search (BFS)
Breadth-First Search is a graph traversal algorithm used to explore or search through a graph or tree data structure. It starts from a source vertex and systematically explores all the vertices in the graph in breadth-first order, level by level.
Key Concepts:
- BFS is often used to find the shortest path in an unweighted graph because it explores nodes in order of their distance from the source.
- It uses a queue data structure to keep track of nodes to visit next.
- BFS is typically implemented using iterative techniques (queue-based) rather than recursion.
Use Cases:
- Shortest path finding in unweighted graphs.
- Detecting connected components in a graph.
- Solving puzzles like the 15-puzzle or the Rubik's Cube.
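A minimal queue-based BFS sketch in JavaScript (the adjacency-list graph below is a hypothetical example, not from the text above):

```javascript
function bfs(graph, start) {
  const visited = new Set([start]);
  const queue = [start]; // FIFO queue of nodes to visit
  const order = [];

  while (queue.length > 0) {
    const node = queue.shift(); // dequeue the oldest node
    order.push(node);
    for (const neighbor of graph[node] || []) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor); // mark on enqueue to avoid duplicates
        queue.push(neighbor);
      }
    }
  }
  return order; // nodes in breadth-first (level-by-level) order
}

const graph = {
  A: ['B', 'C'],
  B: ['D'],
  C: ['D'],
  D: []
};
console.log(bfs(graph, 'A')); // Output: ['A', 'B', 'C', 'D']
```

Because nodes are dequeued in the order they were discovered, `B` and `C` (distance 1) are visited before `D` (distance 2).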
### 2. Depth-First Search (DFS)
Depth-First Search is another graph traversal algorithm used to explore or search through a graph or tree data structure. It explores as far as possible along one branch before backtracking. DFS can be implemented using either recursion or a stack data structure.
Key Concepts:
- DFS explores as deeply as possible along each branch before moving to the next branch.
- It can be used for tasks such as topological sorting, cycle detection, and finding connected components.
- Recursive DFS visits nodes depth-first, while iterative DFS uses a stack to mimic the same behavior.
Use Cases:
- Topological sorting of directed acyclic graphs.
- Detecting cycles in graphs.
- Solving problems related to graph connectivity.
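The iterative, stack-based variant described above can be sketched like this (the adjacency-list graph is a made-up example):

```javascript
function dfs(graph, start) {
  const visited = new Set();
  const order = [];
  const stack = [start]; // LIFO stack of nodes to visit

  while (stack.length > 0) {
    const node = stack.pop();
    if (visited.has(node)) continue;
    visited.add(node);
    order.push(node);
    // Push neighbors in reverse so they are visited in listed order
    const neighbors = graph[node] || [];
    for (let i = neighbors.length - 1; i >= 0; i--) {
      stack.push(neighbors[i]);
    }
  }
  return order; // nodes in depth-first order
}

const graph = {
  A: ['B', 'C'],
  B: ['D'],
  C: ['D'],
  D: []
};
console.log(dfs(graph, 'A')); // Output: ['A', 'B', 'D', 'C']
```

Unlike BFS, the traversal follows the branch `A -> B -> D` to its end before backtracking to visit `C`.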
### 3. Dijkstra's Algorithm
Dijkstra's Algorithm is used to find the shortest path in a weighted graph, where each edge has a non-negative weight. It maintains a set of vertices whose shortest distance from the source is known and gradually improves these distances until the shortest paths are determined.
Key Concepts:
- Dijkstra's Algorithm is greedy in nature, always choosing the node with the smallest known distance.
- It uses a priority queue (min-heap) to efficiently select the node with the shortest distance.
- The algorithm updates distances from the source node to all other nodes in the graph.
Use Cases:
- Finding the shortest path between two locations on a map, considering road lengths as weights.
- Network routing to determine the most efficient path for data transmission.
- Optimizing resource allocation in computer networks.
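A compact sketch of the idea. For brevity, this version selects the closest unvisited node with a linear scan rather than a true min-heap, making it O(V^2) overall; the weighted graph is a made-up example:

```javascript
function dijkstra(graph, source) {
  const dist = {};
  const visited = new Set();
  for (const node of Object.keys(graph)) dist[node] = Infinity;
  dist[source] = 0;

  while (visited.size < Object.keys(graph).length) {
    // Pick the unvisited node with the smallest known distance
    // (a min-heap would make this step O(log V))
    let current = null;
    for (const node of Object.keys(graph)) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (dist[current] === Infinity) break; // remaining nodes are unreachable
    visited.add(current);

    // Relax every edge leaving the current node
    for (const [neighbor, weight] of Object.entries(graph[current])) {
      const candidate = dist[current] + weight;
      if (candidate < dist[neighbor]) dist[neighbor] = candidate;
    }
  }
  return dist; // shortest known distance from source to every node
}

const weightedGraph = {
  A: { B: 1, C: 4 },
  B: { C: 2, D: 6 },
  C: { D: 3 },
  D: {}
};
console.log(dijkstra(weightedGraph, 'A')); // Output: { A: 0, B: 1, C: 3, D: 6 }
```

Note how the direct edge `A -> C` (weight 4) loses to the path `A -> B -> C` (weight 3) once the edge out of `B` is relaxed.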
Each of these algorithms has its strengths and is suited to different types of problems and graph structures. The choice of which algorithm to use depends on the specific requirements of the task at hand.