"In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them."
This subfield focuses on analyzing the computational complexity of algorithms, including time and space complexity and upper and lower bounds on resource usage.
Big-O Notation: A notation giving an upper bound on an algorithm's growth rate as the input size grows; most commonly used to describe worst-case performance.
Time and Space Complexity: Measuring the amount of time and space required by an algorithm to solve a problem of a specified size.
Data Structures: Building blocks that algorithms use for storing and manipulating data.
Arrays and Linked Lists: Ordered collections of items in memory; arrays store elements contiguously, giving O(1) indexing, while linked lists chain nodes with pointers, giving cheap insertion and deletion.
Stacks and Queues: Linear collections with restricted access order: a stack is last-in, first-out (LIFO) and a queue is first-in, first-out (FIFO). Used throughout parsing, traversal, and scheduling algorithms.
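As a quick, hedged illustration of the stacks-and-queues entry above, the Python sketch below uses a built-in list as a stack and collections.deque as a queue; the variable names are purely illustrative.

    from collections import deque

    stack = []               # stack: last in, first out
    stack.append(1)          # push 1
    stack.append(2)          # push 2
    top = stack.pop()        # pop -> 2

    queue = deque()          # queue: first in, first out
    queue.append(1)          # enqueue 1
    queue.append(2)          # enqueue 2
    front = queue.popleft()  # dequeue -> 1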
Hash Tables: A data structure that maps keys to associated values by hashing each key into an array of buckets, giving O(1) average-case lookup, insertion, and deletion.
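To make the hash-table entry concrete, here is a minimal separate-chaining sketch in Python; the class name ChainedHashTable and the fixed bucket count are illustrative assumptions (real implementations typically resize to keep chains short and preserve O(1) average-case access).

    class ChainedHashTable:
        def __init__(self, num_buckets=16):
            # One list ("chain") of key-value pairs per bucket.
            self.buckets = [[] for _ in range(num_buckets)]

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def put(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)  # replace existing entry
                    return
            bucket.append((key, value))

        def get(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)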
Graphs and Trees: Collections of nodes (vertices) connected by edges; trees are connected, acyclic graphs with a hierarchical structure. Useful for modelling relationships between data items.
Sorting Algorithms: Algorithms that arrange data in a particular order, enabling efficient searching and data manipulation.
Searching Algorithms: Algorithms that locate a target item within a data set quickly and efficiently.
Optimization Techniques: Refining algorithms to minimize time, space, or other resources used.
Dynamic Programming: A method of solving complex problems by breaking them into overlapping subproblems and storing each subproblem's solution so it is computed only once.
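A standard illustration of dynamic programming is computing Fibonacci numbers with memoization; the sketch below uses Python's functools.lru_cache to store each subproblem's result, reducing an exponential-time recursion to linear time.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Each fib(k) is computed once and then reused from the cache.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(50))  # 12586269025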
Greedy Algorithms: A heuristic approach that makes the locally optimal choice at each stage, hoping that the final result will also be globally optimal.
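A small greedy example is making change by always taking the largest coin that still fits; this is optimal for canonical coin systems such as 25/10/5/1 but can fail for arbitrary denominations, which is why greedy results generally need a separate proof of optimality. The denominations below are an illustrative assumption.

    def greedy_change(amount, coins=(25, 10, 5, 1)):
        # Coins are assumed to be listed in decreasing order.
        result = []
        for coin in coins:
            while amount >= coin:
                amount -= coin
                result.append(coin)
        return result

    print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]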
Divide and Conquer Algorithms: A problem-solving technique that splits a problem into smaller subproblems, solves each subproblem independently (often recursively), and combines their solutions into a solution for the original problem.
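Merge sort is the textbook divide-and-conquer example: split the list in half, sort each half recursively, then merge the two sorted halves. A minimal Python sketch:

    def merge_sort(items):
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])      # conquer the left half
        right = merge_sort(items[mid:])     # conquer the right half
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]  # combine

    print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]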
Backtracking Algorithms: A method that builds candidate solutions incrementally and abandons (backtracks from) a partial candidate as soon as it cannot be extended to a valid solution.
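A classic backtracking example is the n-queens puzzle; the sketch below places one queen per column and backtracks whenever a partial placement cannot be extended.

    def n_queens(n):
        def safe(cols, row):
            # cols[c] is the row of the queen already placed in column c.
            col = len(cols)
            return all(r != row and abs(r - row) != col - c
                       for c, r in enumerate(cols))

        def place(cols):
            if len(cols) == n:
                return 1  # every column filled: one complete solution
            return sum(place(cols + [row])
                       for row in range(n) if safe(cols, row))

        return place([])

    print(n_queens(8))  # 92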
Randomized Algorithms: Algorithms that make random choices during execution, often to achieve good expected performance or simpler logic.
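A common randomized algorithm is quickselect with a random pivot, which finds the k-th smallest element in expected linear time; a minimal sketch (k is 0-indexed):

    import random

    def quickselect(items, k):
        pivot = random.choice(items)          # random pivot choice
        lows = [x for x in items if x < pivot]
        pivots = [x for x in items if x == pivot]
        highs = [x for x in items if x > pivot]
        if k < len(lows):
            return quickselect(lows, k)
        if k < len(lows) + len(pivots):
            return pivot
        return quickselect(highs, k - len(lows) - len(pivots))

    print(quickselect([7, 1, 5, 3, 9], 2))  # 5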
Approximation Algorithms: Algorithms that provide approximate solutions to optimization problems in polynomial time.
String Algorithms: Algorithms that work with strings, such as searching a string for a particular pattern or replacing parts of a string with other text.
Geometry Algorithms: Algorithms that work with geometry, including finding the area or perimeter of a shape, or determining the intersection of multiple shapes.
Number Theory Algorithms: Algorithms that work with numbers, including factorization and primality testing.
Parallel Algorithms: Algorithms designed to take advantage of multiple processors or cores to perform tasks in parallel, resulting in improved efficiency and processing speed.
Asymptotic analysis: An analysis of an algorithm's behavior as the input size approaches infinity.
Worst-case analysis: Analysis of an algorithm's performance in the worst-case scenario.
Average-case analysis: An analysis of an algorithm's expected performance averaged over all inputs of a given size, under an assumed input distribution.
Best-case analysis: Analysis of an algorithm's performance in the best-case scenario.
Time complexity analysis: Analysis of how an algorithm's running time grows with the size of its input.
Space complexity analysis: Analysis of how an algorithm's memory usage grows with the size of its input.
Big O notation: A mathematical notation used to describe the upper bound of an algorithm's complexity.
Omega notation: A mathematical notation used to describe the lower bound of an algorithm's complexity.
Theta notation: A mathematical notation used to describe a tight bound on an algorithm's complexity, i.e. matching upper and lower bounds up to constant factors.
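For reference, the standard formal definitions of these three notations (written here in LaTeX notation):

    % f is bounded above, below, or tightly by g, up to a constant factor.
    f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0:\ f(n) \le c\, g(n) \ \text{for all } n \ge n_0
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0:\ f(n) \ge c\, g(n) \ \text{for all } n \ge n_0
    f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))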
Amortized analysis: An analysis of an algorithm's performance over a sequence of operations rather than just a single operation.
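A standard amortized-analysis example is a dynamic array that doubles its capacity when full: an individual append is occasionally O(n), but any sequence of n appends costs O(n) in total, i.e. O(1) amortized per append. The class below is an illustrative sketch, not a library API.

    class DynamicArray:
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.data = [None]

        def append(self, value):
            if self.size == self.capacity:
                # Occasional expensive step: double capacity and copy.
                self.capacity *= 2
                new_data = [None] * self.capacity
                new_data[:self.size] = self.data
                self.data = new_data
            self.data[self.size] = value
            self.size += 1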
Constant time analysis: Analysis of an algorithm that takes constant time to complete regardless of input size.
Logarithmic time analysis: Analysis of an algorithm whose running time grows logarithmically with input size, such as binary search.
Linear time analysis: Analysis of an algorithm that has a time complexity that is proportional to the input size.
Polynomial time analysis: Analysis of an algorithm whose time complexity is bounded by a polynomial in the input size.
Exponential time analysis: Analysis of an algorithm that has a time complexity that grows exponentially with input size.
Linearithmic time analysis: Analysis of an algorithm with O(n log n) time complexity, the product of linear and logarithmic factors, as in merge sort.
Recursive analysis: Analysis of an algorithm that uses recursion to solve a problem.
Greedy algorithm analysis: Analysis of an algorithm that chooses the locally optimal solution at each step in the hope of reaching a globally optimal solution.
Divide and conquer analysis: Analysis of an algorithm that breaks down a problem into smaller sub-problems to solve.
Dynamic programming analysis: Analysis of an algorithm that reuses previously computed subproblem solutions to build up solutions to larger subproblems.
"Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity)."
"An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input."
"Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest."
"The term 'analysis of algorithms' was coined by Donald Knuth."
"Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem."
"In theoretical analysis of algorithms, it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input."
"Big O notation, Big-omega notation, and Big-theta notation are used to this end."
"Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency."
"However, the efficiencies of any two 'reasonable' implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant."
"Exact (not asymptotic) measures of efficiency can sometimes be computed, but they usually require certain assumptions concerning the particular implementation of the algorithm, called model of computation."
"A model of computation may be defined in terms of an abstract computer, e.g., Turing machine, and/or by postulating that certain operations are executed in unit time."
"For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer."