Computational Complexity


In theoretical computer science, computational complexity theory focuses on classifying computational problems according to the amount of resources they require to be solved, and on relating these complexity classes to each other. A computational problem is understood as a task that can in principle be solved by a computer, i.e., by the mechanical application of a well-defined procedure such as an algorithm.

Key Concepts

  • Algorithms and Problems: An algorithm is a step-by-step procedure for solving a computational problem. A problem is considered inherently difficult if no efficient algorithm for it exists, regardless of the computing power available: the difficulty is a property of the problem itself, not of any particular machine.
  • Resource Usage: The primary resources measured in computational complexity are:
    • Time complexity: The number of elementary computational steps required to solve a problem.
    • Space complexity: The amount of memory (e.g., RAM) required to solve a problem.
  • Complexity Classes: Computational problems are grouped into complexity classes based on the resources needed to solve them. Important examples include:
    • P: Problems solvable in polynomial time (time complexity grows as a polynomial function of the input size).
    • NP: Problems where a proposed solution can be verified in polynomial time (may or may not be solvable in polynomial time themselves).
    • EXPTIME: Problems solvable in exponential time.
  • NP-completeness: The problems in NP that are, in a precise sense, the hardest in the class. Every problem in NP can be reduced to any NP-complete problem in polynomial time, so a polynomial-time algorithm for a single NP-complete problem would yield efficient algorithms for all of NP.
  • P versus NP Problem: One of the most fundamental open questions in computer science is whether P equals NP. In other words, if a solution to a problem can be quickly verified, does that mean an efficient algorithm to find that solution exists?
  • Intractability: Problems that are extremely difficult to solve, requiring exponential time or worse, are considered intractable. Such problems might be practically impossible to solve for large input sizes.
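The distinction between finding and verifying a solution (the heart of P versus NP) can be made concrete with SUBSET-SUM, a standard NP-complete problem. The sketch below (function name illustrative, not from any particular library) checks a proposed certificate in linear time, even though no polynomial-time algorithm is known for finding one:

```python
# A minimal sketch: polynomial-time verification of a SUBSET-SUM certificate.
# SUBSET-SUM asks: does some subset of `numbers` sum to `target`?
# The problem is NP-complete, yet *checking* a proposed subset is a
# single linear pass over the certificate.

def verify_subset_sum(numbers, target, certificate):
    """Verify a proposed subset, given as a list of indices into `numbers`.

    Runs in O(len(certificate)) time -- polynomial in the input size --
    regardless of how hard the subset was to find in the first place.
    """
    if len(set(certificate)) != len(certificate):
        return False  # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False  # indices must be in range
    return sum(numbers[i] for i in certificate) == target
```

For example, `verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4])` returns `True`, since 4 + 5 = 9. Finding that certificate among all 2^n subsets is the expensive part; this asymmetry is exactly what the P versus NP question formalizes.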

Methods and Techniques

  • Big O Notation: Used to express the asymptotic behavior of an algorithm's time or space complexity, describing how the resource requirements grow with the input size.
  • Reductions: A way to demonstrate relative difficulty between problems. If problem A can be reduced to problem B, it implies problem B is at least as hard as problem A.
  • Lower Bounds: Proving minimum resource requirements for a problem, which is significantly more challenging than establishing upper bounds (provided by an algorithm).
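Big O notation can be illustrated with a toy step counter (the function below is purely illustrative). An algorithm that examines every unordered pair of n items performs n(n-1)/2 elementary steps, so its time complexity is O(n²): doubling the input size roughly quadruples the work.

```python
# A minimal sketch of quadratic growth: count the elementary steps of a
# nested-loop pass over all unordered pairs of n items.

def count_pairs_steps(n):
    """Return the number of pair comparisons made, which is n*(n-1)/2."""
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1  # one elementary comparison per pair
    return steps

# count_pairs_steps(10) -> 45, count_pairs_steps(20) -> 190:
# doubling n from 10 to 20 multiplies the step count by about 4,
# the signature of O(n^2) growth.
```

Big O deliberately discards constant factors and lower-order terms: n(n-1)/2 = n²/2 - n/2 is still written O(n²), because only the dominant term matters as the input grows.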


Applications

Computational complexity has far-reaching implications in various fields, including:

  • Algorithm Design: Understanding complexity helps design efficient algorithms and predict their performance.
  • Cryptography: Secure encryption often relies on computationally hard problems.
  • Artificial Intelligence: The complexity of learning and optimization problems is a major concern in AI.
  • Other fields: Computational complexity impacts biology (e.g., sequence analysis), physics, economics, and more.