Computational Complexity Theory
Introduction
Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and on relating those classes of problems to one another. More specifically, it quantifies the computational resources needed to solve a given problem. These resources include time (how many steps it takes to solve a problem) and space (how much memory it requires).
Complexity Classes
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. The two most commonly studied complexity classes, P and NP, are defined by bounds on the resources a problem requires, such as the time needed to solve it or to verify a proposed solution.
P
The class P consists of those problems that are solvable in polynomial time, i.e., the set of problems that can be solved in time O(n^k) for some constant k, where n is the size of the input. Examples of problems in P include the deterministic finite automaton acceptance problem and the problem of determining whether a given number is prime (shown to be in P by the AKS primality test in 2002).
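The DFA acceptance problem can be sketched concretely: simulating a DFA costs one table lookup per input symbol, which is linear (and hence polynomial) in the input length. The particular machine below, which accepts binary strings containing an even number of 1s, is an illustrative assumption.

```python
# A sketch of polynomial-time DFA acceptance. The sample machine accepts
# binary strings with an even number of 1s; its states and transition
# table are assumptions made for illustration.

def dfa_accepts(transitions, start, accepting, word):
    """Simulate a DFA: one table lookup per input symbol, so O(n) time."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Two states: 'even' (accepting) and 'odd', tracking the parity of 1s.
transitions = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}

print(dfa_accepts(transitions, 'even', {'even'}, '1101'))  # → False
print(dfa_accepts(transitions, 'even', {'even'}, '1001'))  # → True
```

Because the simulation touches each input symbol exactly once, the running time is O(n), placing DFA acceptance comfortably inside P.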
NP
The class NP (nondeterministic polynomial time) consists of problems whose solutions can be verified in polynomial time. An example of an NP problem is the boolean satisfiability problem (SAT), where the task is to determine whether there exists an assignment of truth values to boolean variables that makes a given boolean formula true.
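The verification half of this definition is easy to make concrete: given a candidate truth assignment, checking it against a formula takes time linear in the formula's size. The CNF encoding below (each clause is a list of signed integers, where 3 means x3 and -3 means NOT x3) is an assumption chosen for illustration.

```python
# A sketch of polynomial-time verification for boolean satisfiability.
# The CNF encoding (clauses as lists of signed integers) is an
# illustrative assumption, not a standard library format.

def verify_assignment(clauses, assignment):
    """Check a candidate assignment against a CNF formula in O(formula size)."""
    for clause in clauses:
        # A clause is satisfied if at least one of its literals is true.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False
    return True

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(verify_assignment(formula, {1: True, 2: False, 3: True}))   # → True
print(verify_assignment(formula, {1: False, 2: True, 3: False}))  # → False
```

Finding a satisfying assignment may require searching exponentially many candidates, but verifying one is fast; that asymmetry is exactly what membership in NP captures.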
P versus NP Problem
One of the most important open questions in computer science is the P versus NP problem. This problem asks whether every problem for which a solution can be checked quickly (in polynomial time) can also be solved quickly. Despite much effort, the question remains open.
Complexity Measures
In computational complexity theory, a complexity measure is a way of measuring the complexity of a problem or an algorithm. Common complexity measures include time complexity, space complexity, and circuit complexity.
Time Complexity
Time complexity is a measure of the amount of time an algorithm takes in terms of the size of the input to the problem. The time complexity of an algorithm is commonly expressed using big O notation, which gives an asymptotic upper bound on the running time, conventionally for the worst-case input.
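The difference between growth rates is easiest to see side by side. The two toy routines below, whose names are illustrative rather than from any library, run in O(n) and O(n^2) time respectively.

```python
# Two toy routines illustrating big O notation; function names are
# assumptions chosen for illustration.

def contains(items, target):
    """Linear search: at most one comparison per element, so O(n) time."""
    for item in items:
        if item == target:
            return True
    return False

def has_duplicate_pair(items):
    """Compare every pair: roughly n*(n-1)/2 comparisons, so O(n^2) time."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(contains([4, 8, 15], 8))            # → True
print(has_duplicate_pair([4, 8, 15, 4]))  # → True
```

Doubling the input roughly doubles the work of the first routine but quadruples the work of the second, which is what the O(n) versus O(n^2) classification records.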
Space Complexity
Space complexity is a measure of the amount of memory an algorithm needs in terms of the size of the input. Like time complexity, space complexity is also commonly expressed using big O notation.
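Space complexity can diverge from time complexity even for very similar tasks. Both routines below take O(n) time, but the first uses O(1) extra memory while the second uses O(n); the functions are illustrative sketches.

```python
# Two routines with the same O(n) time but different space complexity;
# the function names are assumptions made for illustration.

def running_total(items):
    """O(1) extra space: a single accumulator regardless of input size."""
    total = 0
    for x in items:
        total += x
    return total

def prefix_sums(items):
    """O(n) extra space: stores one partial sum per input element."""
    sums, total = [], 0
    for x in items:
        total += x
        sums.append(total)
    return sums

print(running_total([1, 2, 3]))  # → 6
print(prefix_sums([1, 2, 3]))    # → [1, 3, 6]
```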
Computational Models
A computational model is a mathematical model that represents a particular class of computational processes. The choice of computational model can greatly affect the classification of a problem's complexity.
Turing Machine
The Turing machine is one of the most common models used in computational complexity theory. It provides a simple model that captures the essential elements of any digital computer.
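A single-tape Turing machine is simple enough to simulate in a few lines. The simulator below, together with its sample machine (which flips every bit until it reads a blank), is a minimal sketch whose encoding of the transition table is an assumption for illustration.

```python
# A minimal single-tape Turing machine simulator. The transition-table
# encoding and the sample bit-flipping machine are illustrative assumptions.

def run_tm(transitions, tape, start, halt, blank='_'):
    """Run until the halt state. transitions maps (state, symbol) to
    (new_state, written_symbol, move), with move in {-1, +1}."""
    tape = dict(enumerate(tape))  # sparse tape, indexed by head position
    state, head = start, 0
    while state != halt:
        symbol = tape.get(head, blank)
        state, written, move = transitions[(state, symbol)]
        tape[head] = written
        head += move
    return ''.join(tape[i] for i in sorted(tape)).rstrip(blank)

# Flip 0s and 1s left to right, halting at the first blank.
flip = {
    ('s', '0'): ('s', '1', +1),
    ('s', '1'): ('s', '0', +1),
    ('s', '_'): ('h', '_', +1),
}
print(run_tm(flip, '0110', 's', 'h'))  # → 1001
```

Despite its simplicity, this model is the standard reference point for complexity classes: time is the number of loop iterations and space is the number of tape cells visited.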
Random Access Machine
The random access machine (RAM) is another common model used in computational complexity theory. It is an abstract machine akin to a computer running an idealized assembly language, in which any memory cell can be read or written in a single step.
Notable Problems and Results
There are many notable problems and results in computational complexity theory. Some of these include the P versus NP problem, the existence of problems that are NP-complete, and the existence of problems that are undecidable.
NP-Complete Problems
An NP-complete problem is a problem in NP to which every other problem in NP can be reduced in polynomial time; consequently, a polynomial-time algorithm for any one NP-complete problem would yield polynomial-time algorithms for all of NP. The concept of NP-completeness was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem-proving procedures" and independently by Leonid Levin.