Computational complexity
Introduction
Computational complexity theory is a branch of the theory of computation in computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. This field deals with the resources required during computation to solve a given problem. The most common resources include time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
Computational Problems and Complexity Measures
A computational problem is understood to be a task solved by a computer. A computational problem is said to be solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying their computational complexity, i.e., the amount of resources needed to solve them.
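To make this concrete, here is a minimal Python sketch of a computational problem and an algorithm that solves it by mechanical steps: the decision problem "is n prime?" answered by trial division. The function name and the trial-division approach are illustrative choices for this sketch, not part of any formal definition.

    def is_prime(n: int) -> bool:
        """Decide the problem "is n prime?" by trial division.

        Each step is a mechanical arithmetic operation, so this is an
        algorithm in the sense described above; the resources it uses
        (steps and memory) grow with the size of the input.
        """
        if n < 2:
            return False
        d = 2
        while d * d <= n:  # at most about sqrt(n) iterations
            if n % d == 0:
                return False  # found a divisor, so n is composite
            d += 1
        return True

    print(is_prime(97))   # True
    print(is_prime(100))  # False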
Complexity Classes
Complexity classes are sets of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input. Complexity classes are concerned with the rate of growth of the resource requirement as the input size n grows; this rate gives a quantitative measure of how the cost of solving a problem scales with its size.
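As an illustration of growth rates (a toy Python experiment; the two routines are hypothetical stand-ins for real algorithms), the following sketch counts the elementary steps taken by a linear-time and a quadratic-time routine as n grows:

    def linear_steps(n: int) -> int:
        """A routine whose step count grows as O(n)."""
        steps = 0
        for _ in range(n):
            steps += 1
        return steps

    def quadratic_steps(n: int) -> int:
        """A routine whose step count grows as O(n**2)."""
        steps = 0
        for _ in range(n):
            for _ in range(n):
                steps += 1
        return steps

    for n in (10, 100, 1000):
        print(n, linear_steps(n), quadratic_steps(n))
    # Multiplying n by 10 multiplies the linear count by 10 but the
    # quadratic count by 100: this difference in growth rate is exactly
    # what complexity classes describe.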
P and NP
The most well-known complexity classes, P and NP, capture the intuitive notion of efficient, polynomial-time computation. Informally, P is the class of problems for which a deterministic Turing machine (a mathematical model of a general computing machine) can find a solution in polynomial time. NP, by contrast, is the class of problems for which a deterministic Turing machine can verify a proposed solution in polynomial time; equivalently, it is the class of problems that a nondeterministic Turing machine can solve in polynomial time.
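The asymmetry between finding and verifying a solution can be seen in a small Python sketch of the subset-sum problem, which is in NP. The solver and verifier below are illustrative, not canonical definitions: the point is that verify runs in polynomial time, while solve may examine exponentially many subsets.

    from itertools import combinations

    def verify(nums, target, candidate):
        """Check a proposed subset in polynomial time.

        A full verifier would also check that candidate is drawn from
        nums; that check is also polynomial-time.
        """
        return candidate is not None and sum(candidate) == target

    def solve(nums, target):
        """Find a subset summing to target by brute force.

        Tries every subset; there are 2**len(nums) of them, so the
        running time can grow exponentially with the input size.
        """
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == target:
                    return subset
        return None

    nums, target = [3, 34, 4, 12, 5, 2], 9
    cert = solve(nums, target)
    print(cert, verify(nums, target, cert))  # (4, 5) True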
NP-Completeness
A decision problem C is NP-complete (the class of such problems is denoted NPC) if it is in NP and every problem in NP is reducible to C in polynomial time. In other words, C is among the hardest problems in NP, in the sense that any proposed solution to C can be verified quickly (in polynomial time), but there is no known efficient way to find a solution in the first place.
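A reduction here is an efficient translation of instances of one problem into instances of another. As a hedged illustration in Python, the textbook reduction from Independent Set to Vertex Cover (both NP-complete) takes only a few lines; the data representation below is an arbitrary choice made for this sketch.

    def independent_set_to_vertex_cover(vertices, edges, k):
        """Translate Independent Set(G, k) into Vertex Cover(G, n - k).

        A set S is independent in G exactly when vertices - S covers
        every edge, so the two instances have the same answer. The
        translation itself runs in polynomial time, which is what
        makes it a valid reduction.
        """
        return vertices, edges, len(vertices) - k

    # A triangle has an independent set of size 1
    # iff it has a vertex cover of size 2.
    V = {1, 2, 3}
    E = {(1, 2), (2, 3), (1, 3)}
    print(independent_set_to_vertex_cover(V, E, 1))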
P versus NP Problem
The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified (technically, verified in polynomial time) can also be solved quickly (again, in polynomial time). The underlying issues were first discussed in the 1950s, in letters from John Nash to the National Security Agency and from Kurt Gödel to John von Neumann. The precise statement of the problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem-proving procedures", and independently by Leonid Levin.
Space Complexity
Space complexity is a measure of the amount of memory an algorithm needs. Its study can be traced back to Pippenger and Fischer (1979), who introduced the notion of a Turing machine with a write-restricted tape and studied the function that maps a natural number n to the maximum space used by any computation on an input of length n.
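To illustrate the measure itself (a toy comparison in Python, not the formal tape-based model), consider two ways of summing the integers 1 through n: one materializes all n values and so uses O(n) extra memory, while the other keeps a single running total and uses O(1) extra memory.

    def sum_linear_space(n: int) -> int:
        """Stores all n values: memory grows linearly with the input."""
        values = list(range(1, n + 1))  # O(n) extra space
        return sum(values)

    def sum_constant_space(n: int) -> int:
        """Keeps one running total: memory use is independent of n."""
        total = 0  # O(1) extra space
        for i in range(1, n + 1):
            total += i
        return total

    print(sum_linear_space(1000) == sum_constant_space(1000))  # True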
Time Complexity
Time complexity is a measure of the computational time an algorithm takes to solve a problem. It quantifies the number of steps the algorithm performs as a function of the size of the input, formally the length of the string representing it, and is most often stated for the worst case using big O notation.
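As a concrete illustration (a small Python experiment with illustrative function names), the sketch below counts the loop iterations performed by linear search and by binary search on a sorted list, showing how running time is naturally expressed as a function of input size: roughly n steps in the worst case for the former versus about log2(n) for the latter.

    def linear_search_steps(data, target):
        """Worst case O(n): may examine every element."""
        steps = 0
        for x in data:
            steps += 1
            if x == target:
                break
        return steps

    def binary_search_steps(data, target):
        """Worst case O(log n): each iteration halves the search range."""
        lo, hi, steps = 0, len(data) - 1, 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if data[mid] == target:
                break
            if data[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    data = list(range(1_000_000))
    print(linear_search_steps(data, 999_999))   # 1000000 steps (target is last)
    print(binary_search_steps(data, 999_999))   # about 20 steps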