
DSPACE


In computational complexity theory , DSPACE or SPACE is the computational resource describing the resource of memory space for a deterministic Turing machine . It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm .


The measure DSPACE is used to define complexity classes , sets of all of the decision problems that can be solved using a certain amount of memory space. For each function f ( n ), there is a complexity class SPACE( f ( n )), the set of decision problems that can be solved by a deterministic Turing machine using space O ( f ( n )). There is no restriction on the amount of computation time that can be used, though there may be restrictions on some other complexity measures (like alternation ). Several important complexity classes are defined in terms of DSPACE . These include:

Proof: Suppose that there exists

A Turing machine with bounded time or space resources. For example, the complexity class P is defined as the set of decision problems that can be solved by a deterministic Turing machine in polynomial time . Intuitively, a computational problem is just a question that can be solved by an algorithm . For example, "is the natural number n {\displaystyle n} prime ?"

A model of computation , and a bounded resource like time or memory . In particular, most complexity classes consist of decision problems that are solvable with a Turing machine , and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time . There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems ) and using other models of computation (e.g. probabilistic Turing machines , interactive proof systems , Boolean circuits , and quantum computers ). The study of

A two-tape Turing machine so that it is possible for the machine to store the entire input (it can be shown that in terms of computability the two-tape Turing machine is equivalent to the single-tape Turing machine). In the two-tape Turing machine model, one tape is the input tape, which is read-only. The other is the work tape, which allows both reading and writing and is the tape on which the Turing machine performs computations. The space complexity of

A 1 and change to state 6". The Turing machine starts with only the input string on its tape and blanks everywhere else. The TM accepts the input if it enters a designated accept state and rejects the input if it enters a reject state. The deterministic Turing machine (DTM) is the most basic type of Turing machine. It uses a fixed set of rules to determine its future actions (which is why it is called " deterministic "). A computational problem can then be defined in terms of
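To make the accept/reject, step-counting, and cell-counting notions above concrete, here is a minimal deterministic Turing machine simulator (an illustrative sketch, not part of the original article; the machine, its states, and all names are hypothetical). It reports whether the machine accepts, how many elementary steps it took (its time on that input), and how many tape cells it touched (its space):

```python
# Minimal DTM simulator (illustrative sketch). The transition table maps
# (state, symbol) to (new_state, symbol_to_write, head_move). Time is the
# number of elementary steps; space is the number of distinct cells visited.

BLANK = "_"

def run_dtm(transitions, input_string, start="q0", accept="qacc", reject="qrej"):
    tape = dict(enumerate(input_string))   # sparse tape, blank everywhere else
    head, state, steps = 0, start, 0
    visited = {0}
    while state not in (accept, reject):
        symbol = tape.get(head, BLANK)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        visited.add(head)
        steps += 1
    return state == accept, steps, len(visited)

# Hypothetical example machine: accept unary strings "11...1" of even length.
even_ones = {
    ("q0", "1"): ("q1", "1", "R"),
    ("q1", "1"): ("q0", "1", "R"),
    ("q0", BLANK): ("qacc", BLANK, "R"),
    ("q1", BLANK): ("qrej", BLANK, "R"),
}

accepted, time_used, space_used = run_dtm(even_ones, "1111")
print(accepted, time_used, space_used)  # True 5 6
```

On input "1111" the machine takes 5 steps and touches 6 cells, matching the step and cell counts used to define time and space complexity below.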

A TM on a particular input is the number of elementary steps that the Turing machine takes to reach either an accept or reject state. The space complexity is the number of cells on its tape that it uses to reach either an accept or reject state. The deterministic Turing machine (DTM) is a variant of the nondeterministic Turing machine (NTM). Intuitively, an NTM is just a regular Turing machine that has

A Turing machine M {\displaystyle M} is defined as the function t M : N → N {\displaystyle t_{M}:\mathbb {N} \to \mathbb {N} } , where t M ( n ) {\displaystyle t_{M}(n)} is the maximum number of steps that M {\displaystyle M} takes on any input of length n {\displaystyle n} . In computational complexity theory, theoretical computer scientists are concerned less with particular runtime values and more with

A Turing machine M which, given a string 1^n consisting of n ones, outputs the binary (or unary) representation of f ( n ), while using only O ( f ( n )) space. Also, a function f is fully space-constructible if there exists a Turing machine M which, given a string 1^n consisting of n ones, halts after using exactly f ( n ) cells. All the commonly used functions f ( n ) (such as n , n^k , 2^n ) are time- and space-constructible, as long as f ( n )

A Turing machine M which, given a string 1^n consisting of n ones, outputs the binary representation of f ( n ) in O ( f ( n )) time (a unary representation may be used instead, since the two can be interconverted in O ( f ( n )) time). There is also a notion of a fully time-constructible function. A function f is called fully time-constructible if there exists a Turing machine M which, given a string 1^n consisting of n ones, stops after exactly f ( n ) steps. This definition

A Turing machine as the set of input strings that a particular Turing machine accepts. For example, the primality problem PRIME {\displaystyle {\texttt {PRIME}}} from above is the set of strings (representing natural numbers) that a Turing machine running an algorithm that correctly tests for primality accepts. A Turing machine is said to recognize a language (recall that "problem" and "language" are largely synonymous in computability and complexity theory) if it accepts all inputs that are in

A broad range of computational problems. Other types of problems that certain complexity classes are defined in terms of include: To make concrete the notion of a "computer", in theoretical computer science problems are analyzed in the context of a computational model . Computational models make exact the notions of computational resources like "time" and "memory". In computational complexity theory , complexity classes deal with


A non-regular language L ∈ DSPACE( s ( n )), for s ( n ) = o (log log n ). Let M be a Turing machine deciding L in space s ( n ). By our assumption L ∉ DSPACE( O (1)); thus, for any arbitrary k ∈ N {\displaystyle k\in \mathbb {N} } , there exists an input of M requiring more space than k . Let x be an input of smallest size, denoted by n, that requires more space than k , and C {\displaystyle {\mathcal {C}}} be

A nondeterministic Turing machine can solve a problem using f ( n ) {\displaystyle f(n)} space, then a deterministic Turing machine can solve the same problem in f ( n ) 2 {\displaystyle f(n)^{2}} space, i.e. in the square of the space. Formally, Savitch's theorem states that for any f ( n ) > n {\displaystyle f(n)>n} , Important corollaries of Savitch's theorem are that PSPACE = NPSPACE (since
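The midpoint-recursion idea at the heart of Savitch's theorem can be sketched on an explicit graph standing in for a machine's configuration graph (an illustrative sketch; the graph and names are hypothetical, not from the article). Asking whether v is reachable from u in at most 2^k steps by trying every possible midpoint keeps only k nested calls alive at once, each storing a constant number of vertex names, which is what yields the quadratic space bound:

```python
import math

# reach(adj, u, v, k): is v reachable from u in at most 2**k steps?
# Recursing on a midpoint w means the stack never holds more than k frames,
# so the space used is O(k * log |V|) -- quadratic in log |V| when k ~ log |V|.

def reach(adj, u, v, k):
    if u == v:
        return True
    if k == 0:
        return v in adj.get(u, ())
    # Try every vertex as the midpoint of a path of length <= 2**k.
    return any(
        reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
        for w in adj
    )

# Hypothetical configuration graph: a path 0 -> 1 -> 2 -> 3.
graph = {0: [1], 1: [2], 2: [3], 3: []}
k = math.ceil(math.log2(len(graph)))  # simple paths are shorter than 2**k
print(reach(graph, 0, 3, k))  # True
print(reach(graph, 3, 0, k))  # False
```

A deterministic machine deciding reachability this way never materializes the whole configuration graph; it only re-derives configurations on demand, trading time for space.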

A nondeterministic Turing machine can solve in exponential space, a deterministic Turing machine can also solve in exponential space. By definition of DTIME , it follows that D T I M E ( n k 1 ) {\displaystyle {\mathsf {DTIME}}(n^{k_{1}})} is contained in D T I M E ( n k 2 ) {\displaystyle {\mathsf {DTIME}}(n^{k_{2}})} if k 1 ≤ k 2 {\displaystyle k_{1}\leq k_{2}} , since O ( n k 1 ) ⊆ O ( n k 2 ) {\displaystyle O(n^{k_{1}})\subseteq O(n^{k_{2}})} if k 1 ≤ k 2 {\displaystyle k_{1}\leq k_{2}} . However, this definition gives no indication of whether this inclusion

A polynomial-size certificate string c {\displaystyle c} , and accepts w {\displaystyle w} if w {\displaystyle w} is in the language and rejects w {\displaystyle w} if w {\displaystyle w} is not in the language. Intuitively, the certificate acts as a proof that the input w {\displaystyle w}

A problem X {\displaystyle X} reduces to a problem Y {\displaystyle Y} if there exists a function f {\displaystyle f} such that for every x ∈ Σ ∗ {\displaystyle x\in \Sigma ^{*}} , x ∈ X {\displaystyle x\in X} if and only if f ( x ) ∈ Y {\displaystyle f(x)\in Y} . Generally, reductions are used to capture

A subclass of nondeterministic Turing machines that don't make use of their nondeterminism; or under the verifier definition, P is the class of problems whose polynomial time verifiers need only receive the empty string as their certificate), it is not known whether NP is strictly larger than P . If P = NP , then it follows that nondeterminism provides no additional computational power over determinism with regards to

A suitable power of the alphabet, for all c ≥ 1 and f such that f ( n ) ≥ 1 , the class of languages recognizable in c f ( n ) space is the same as the class of languages recognizable in f ( n ) space. This justifies usage of big O notation in the definition. The space hierarchy theorem shows that, for every space-constructible function f : N → N {\displaystyle f:\mathbb {N} \to \mathbb {N} } , there exists some language L which

Is a computational problem. A computational problem is mathematically represented as the set of answers to the problem. In the primality example, the problem (call it PRIME {\displaystyle {\texttt {PRIME}}} ) is represented by the set of all natural numbers that are prime: PRIME = { n ∈ N | n  is prime } {\displaystyle {\texttt {PRIME}}=\{n\in \mathbb {N} |n{\text{

Is also known that P ⊆ P S P A C E {\displaystyle {\mathsf {P}}\subseteq {\mathsf {PSPACE}}} , which follows intuitively from the fact that, since writing to a cell on a Turing machine's tape is defined as taking one unit of time, a Turing machine operating in polynomial time can only write to polynomially many cells. It is suspected that P

Is also possible to simulate any NTM using a DTM (the DTM will simply compute every possible computational branch one-by-one). Hence, the two are equivalent in terms of computability. However, simulating an NTM with a DTM often requires greater time and/or memory resources; as will be seen, how significant this slowdown is for certain classes of computational problems is an important question in computational complexity theory. Complexity classes group computational problems by their resource requirements. To do this, computational problems are differentiated by upper bounds on
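The branch-by-branch simulation of a nondeterministic machine by a deterministic one can be sketched as a deterministic search over the tree of configurations (an illustrative sketch; the toy "machine" and all names are hypothetical). The simulator accepts exactly when some branch accepts, but it may visit exponentially many configurations, which is the slowdown referred to above:

```python
from collections import deque

# Deterministically explore every branch of a nondeterministic computation,
# breadth-first, accepting if any branch reaches an accepting configuration.
def nd_accepts(start, successors, is_accepting):
    frontier = deque([start])
    seen = {start}
    while frontier:
        config = frontier.popleft()
        if is_accepting(config):
            return True          # some branch accepts
        for nxt in successors(config):
            if nxt not in seen:  # avoid re-exploring a configuration
                seen.add(nxt)
                frontier.append(nxt)
    return False                 # every branch rejects

# Toy nondeterministic "machine": guess a binary string of length 3, accept
# if the guess equals "101". Configurations are the partial guesses.
succ = lambda s: {s + "0", s + "1"} if len(s) < 3 else set()
print(nd_accepts("", succ, lambda s: s == "101"))  # True
print(nd_accepts("", succ, lambda s: s == "000111"))  # False
```

Here the deterministic simulator examines up to 2^3 guesses where the NTM "finds" the accepting branch in 3 steps, illustrating the potential exponential cost of determinization.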


Is also possible to use the Blum axioms to define complexity classes without referring to a concrete computational model , but this approach is less frequently used in complexity theory. A Turing machine is a mathematical model of a general computing machine. It is the most commonly used model in complexity theory, owing in large part to the fact that it is believed to be as powerful as any other model of computation and

Is an example of a decision problem as it can be represented by the yes–no question "is the natural number n {\displaystyle n} prime ?". In terms of the theory of computation, a decision problem is represented as the set of input strings that a computer running a correct algorithm would answer "yes" to. In the primality example, PRIME {\displaystyle {\texttt {PRIME}}}

Is as hard as the hardest problems in C ). Of particular importance is the class of NP -complete problems—the most difficult problems in NP . Because all problems in NP can be polynomial-time reduced to NP -complete problems, finding an NP -complete problem that can be solved in polynomial time would mean that P  =  NP . Savitch's theorem establishes the relationship between deterministic and nondeterministic space resources. It shows that if

Is at least cn for a constant c > 0. No function which is o ( n ) can be time-constructible unless it is eventually constant, since there is insufficient time to read the entire input. However, ⁠ log ⁡ ( n ) {\displaystyle \log(n)} ⁠ is a space-constructible function. Time-constructible functions are used in results from complexity theory such as

Is close to 1. ) Many complexity classes are defined using the concept of a reduction . A reduction is a transformation of one problem into another problem, i.e. a reduction takes inputs from one problem and transforms them into inputs of another problem. For instance, you can reduce ordinary base-10 addition x + y {\displaystyle x+y} to base-2 addition by transforming x {\displaystyle x} and y {\displaystyle y} to their base-2 notation (e.g. 5+7 becomes 101+111). Formally,
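The base-10-to-base-2 addition example above can be sketched directly in code (an illustrative sketch; the function names are hypothetical). The reduction transforms an instance of the source problem into an instance of the target problem, and a solver for the target problem then answers the original question:

```python
# Reduce base-10 addition to base-2 addition: convert the operands,
# solve in the target representation, then map the answer back.

def to_binary(n: int) -> str:
    return bin(n)[2:]

def base2_add(a: str, b: str) -> str:          # the "target problem" solver
    return bin(int(a, 2) + int(b, 2))[2:]

def base10_add_via_reduction(x: int, y: int) -> int:
    a, b = to_binary(x), to_binary(y)          # transform the instance
    return int(base2_add(a, b), 2)             # interpret the answer

print(to_binary(5), to_binary(7))       # 101 111  (5+7 becomes 101+111)
print(base10_add_via_reduction(5, 7))   # 12
```

The key property is that the transformation itself is cheap relative to the problems being related, which is what the resource-bounded reductions defined next make precise.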

Is closed under all Boolean operations, and under quantification over polynomially sized domains. Closure properties can be helpful in separating classes—one possible route to separating two complexity classes is to find some closure property possessed by one class but not by the other. Each class X that is not closed under negation has a complement class co-X , which consists of the complements of

Is comparatively slow compared to problems in the exponential complexity class EXPTIME (or more accurately, for problems in EXPTIME that are outside of P , since P ⊆ E X P T I M E {\displaystyle {\mathsf {P}}\subseteq {\mathsf {EXPTIME}}} ). Note that the study of complexity classes is intended primarily to understand

Is decidable in space O ( f ( n ) ) {\displaystyle O(f(n))} but not in space o ( f ( n ) ) {\displaystyle o(f(n))} . DSPACE is the deterministic counterpart of NSPACE , the class of memory space on a non-deterministic Turing machine . By Savitch's theorem , we have that NTIME is related to DSPACE in

Is easy to analyze mathematically. Importantly, it is believed that if there exists an algorithm that solves a particular problem then there also exists a Turing machine that solves that same problem (this is known as the Church–Turing thesis ); this means that it is believed that every algorithm can be represented as a Turing machine. Mechanically, a Turing machine (TM) manipulates symbols (generally restricted to

Is extremely broad: it is known to be a strict superset of PSPACE , NP , and P , and is believed to be a strict superset of EXPTIME . Complexity classes have a variety of closure properties. For example, decision classes may be closed under negation , disjunction , conjunction , or even under all Boolean operations . Moreover, they might also be closed under a variety of quantification schemes. P , for instance,


Is in the language. Formally: This equivalence between the nondeterministic definition and the verifier definition highlights a fundamental connection between nondeterminism and solution verifiability. Furthermore, it also provides a useful method for proving that a language is in NP —simply identify a suitable certificate and show that it can be verified in polynomial time. While there might seem to be an obvious difference between
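The certificate-and-verifier method can be sketched for a concrete language (an illustrative sketch, not from the article): COMPOSITE, the set of composite natural numbers, is in NP because a nontrivial factor serves as a certificate that a polynomial-time verifier can check:

```python
# Verifier sketch for COMPOSITE: a certificate for n is a nontrivial
# divisor d with 1 < d < n. Checking divisibility takes time polynomial
# in the length (number of digits) of n.

def composite_verifier(n: int, certificate: int) -> bool:
    """Accept (n, c) iff c is a nontrivial divisor of n."""
    return 1 < certificate < n and n % certificate == 0

# 91 = 7 * 13 is composite; the certificate 7 convinces the verifier.
print(composite_verifier(91, 7))   # True
# For the prime 97, no certificate exists, so every (97, c) is rejected.
print(any(composite_verifier(97, c) for c in range(2, 97)))  # False
```

Membership requires only that *some* certificate is accepted; non-membership means *every* certificate is rejected, mirroring the formal definition above.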

Is needed in order to increase the number of problems that can be solved.

Space-constructible function

In complexity theory , a time-constructible function is a function f from natural numbers to natural numbers with the property that f ( n ) can be constructed from n by a Turing machine in the time of order f ( n ). The purpose of such a definition is to exclude functions that do not provide an upper bound on

Is not known whether any of these relationships is proper. The complexity classes PSPACE and NPSPACE are the space analogues to P and NP . That is, PSPACE is the class of problems solvable in polynomial space by a deterministic Turing machine and NPSPACE is the class of problems solvable in polynomial space by a nondeterministic Turing machine. More formally, While it is not known whether P = NP , Savitch's theorem famously showed that PSPACE = NPSPACE . It

Is not known whether this is proper, but if P = NP then EXPTIME must equal NEXPTIME . While it is possible to define logarithmic time complexity classes, these are extremely narrow classes as sublinear times do not even enable a Turing machine to read the entire input (because log ⁡ n < n {\displaystyle \log n<n} ). However, there are a meaningful number of problems that can be solved in logarithmic space. The definitions of these classes require

Is polynomial-time reducible to a problem Y {\displaystyle Y} if there exists a polynomial-time computable function p {\displaystyle p} such that for all x ∈ Σ ∗ {\displaystyle x\in \Sigma ^{*}} , x ∈ X {\displaystyle x\in X} if and only if p ( x ) ∈ Y {\displaystyle p(x)\in Y} . Note that reductions can be defined in many different ways. Common reductions are Cook reductions , Karp reductions and Levin reductions , and can vary based on resource bounds, such as polynomial-time reductions and log-space reductions . Reductions motivate
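A classic polynomial-time (Karp-style) reduction can be sketched between two well-known problems (an illustrative sketch; the graph and helper names are hypothetical): INDEPENDENT-SET reduces to CLIQUE by complementing the graph, since a vertex set is independent in G exactly when it is a clique in G's complement:

```python
from itertools import combinations

# Reduction: an INDEPENDENT-SET instance (G, k) maps to the CLIQUE
# instance (complement(G), k). Complementing the edge set is clearly
# computable in polynomial time.

def complement(vertices, edges):
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - {frozenset(e) for e in edges}

def has_clique(vertices, edges, k):            # brute-force CLIQUE "solver"
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(p) in edge_set for p in combinations(group, 2))
        for group in combinations(vertices, k)
    )

def has_independent_set(vertices, edges, k):
    # x in X  iff  p(x) in Y: solve the transformed instance instead.
    return has_clique(vertices, complement(vertices, edges), k)

# Hypothetical 4-cycle 0-1-2-3-0: {0, 2} is independent; no 3 vertices are.
V, E = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_independent_set(V, E, 2))  # True
print(has_independent_set(V, E, 3))  # False
```

Only the instance transformation needs to be polynomial-time; the brute-force solver here merely plays the role of the target problem's oracle for demonstration.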

Is prime}}\}} . In the theory of computation, these answers are represented as strings ; for example, in the primality example the natural numbers could be represented as strings of bits that represent binary numbers . For this reason, computational problems are often synonymously referred to as languages, since strings of bits represent formal languages (a concept borrowed from linguistics ); for example, saying that

Is slightly less general than the first two but, for most applications, either definition can be used. Similarly, a function f is space-constructible if there exists a positive integer n₀ and a Turing machine M which, given a string 1^n consisting of n ones, halts after using exactly f ( n ) cells for all n ≥ n₀ . Equivalently, a function f is space-constructible if there exists

Is solved by defining the multi-tape Turing machine with input and output , which is a standard multi-tape Turing machine, except that the input tape may never be written-to, and the output tape may never be read from. This allows smaller space classes, such as L (logarithmic space), to be defined in terms of the amount of space used by all of the work tapes (excluding the special input and output tapes). Since many symbols might be packed into one by taking

Is strict. For time and space requirements, the conditions under which the inclusion is strict are given by the time and space hierarchy theorems, respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. The hierarchy theorems enable one to make quantitative statements about how much more additional time or space

Is strictly smaller than PSPACE , but this has not been proven. The complexity classes EXPSPACE and NEXPSPACE are the space analogues to EXPTIME and NEXPTIME . That is, EXPSPACE is the class of problems solvable in exponential space by a deterministic Turing machine and NEXPSPACE is the class of problems solvable in exponential space by a nondeterministic Turing machine. Or more formally, Savitch's theorem showed that EXPSPACE = NEXPSPACE . This class


Is the class of decision problems solvable by a deterministic Turing machine in exponential time and NEXPTIME (sometimes shortened to NEXP ) is the class of decision problems solvable by a nondeterministic Turing machine in exponential time. Or more formally, EXPTIME is a strict superset of P and NEXPTIME is a strict superset of NP . It is further the case that EXPTIME ⊆ {\displaystyle \subseteq } NEXPTIME . It

Is the class of problems that are solvable by a deterministic Turing machine in polynomial time and NP is the class of problems that are solvable by a nondeterministic Turing machine in polynomial time. Or more formally, P is often said to be the class of problems that can be solved "quickly" or "efficiently" by a deterministic computer, since the time complexity of solving a problem in P increases relatively slowly with

Is the maximum number of steps that the NTM uses on any branch of its computation. Similarly, the space complexity of an NTM is the maximum number of cells that the NTM uses on any branch of its computation. DTMs can be viewed as a special case of NTMs that do not make use of the power of nondeterminism. Hence, every computation that can be carried out by a DTM can also be carried out by an equivalent NTM. It

Is the set of strings representing natural numbers that, when input into a computer running an algorithm that correctly tests for primality , the algorithm answers "yes, this number is prime". This "yes-no" format is often equivalently stated as "accept-reject"; that is, an algorithm "accepts" an input string if the answer to the decision problem is "yes" and "rejects" if the answer is "no". While some problems cannot easily be expressed as decision problems, they nonetheless encompass

The PRIME {\displaystyle {\texttt {PRIME}}} problem is in the complexity class NP is equivalent to saying that the language PRIME {\displaystyle {\texttt {PRIME}}} is in NP . The most commonly analyzed problems in theoretical computer science are decision problems —the kinds of problems that can be posed as yes–no questions . The primality example above, for instance,

The inherent complexity required to solve computational problems. Complexity theorists are thus generally concerned with finding the smallest complexity class that a problem falls into and are therefore concerned with identifying which class a computational problem falls into using the most efficient algorithm. There may be an algorithm, for instance, that solves a particular problem in exponential time, but if

The inherent resource requirements of problems and not the resource requirements that depend upon how a physical computer is constructed. For example, in the real world different computers may require different amounts of time and memory to solve the same problem because of the way that they have been engineered. By providing abstract mathematical representations of computers, computational models abstract away superfluous complexities of

The subset relation). However, many relationships are not yet known; for example, one of the most famous open problems in computer science concerns whether P equals NP . The relationships between classes often answer questions about the fundamental nature of computation. The P versus NP problem, for instance, is directly related to questions of whether nondeterminism adds any computational power to computers and whether problems having solutions that can be quickly checked for correctness can also be quickly solved. Complexity classes are sets of related computational problems . They are defined in terms of

The time hierarchy theorem . They are important because the time hierarchy theorem relies on Turing machines that must determine in O ( f ( n )) time whether an algorithm has taken more than f ( n ) steps. This is, of course, impossible without being able to calculate f ( n ) in that time. Such results are typically true for all natural functions f but not necessarily true for artificially constructed f . To formulate them precisely, it

The NTM accepts the input. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch. NTMs are not meant to be physically realizable models; they are simply theoretically interesting abstract machines that give rise to a number of interesting complexity classes (which often do have physically realizable equivalent definitions). The time complexity of an NTM


The Turing machine is measured as the number of cells that are used on the work tape. L (sometimes lengthened to LOGSPACE ) is then defined as the class of problems solvable in logarithmic space on a deterministic Turing machine and NL (sometimes lengthened to NLOGSPACE ) is the class of problems solvable in logarithmic space on a nondeterministic Turing machine. Or more formally, It is known that L ⊆ N L ⊆ P {\displaystyle {\mathsf {L}}\subseteq {\mathsf {NL}}\subseteq {\mathsf {P}}} . However, it
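A problem in L can be sketched in the spirit of the read-only-input model (an illustrative sketch; the function name is hypothetical): deciding whether the input string is a palindrome needs only two cursors into the input, each an index of magnitude at most n, so the work space is O(log n) bits even though the input itself may be long:

```python
# Palindrome test using only two O(log n)-bit counters as "work tape".
# The input string is accessed read-only and never copied or modified,
# mirroring the read-only input tape in the definition of L.

def is_palindrome_logspace(tape: str) -> bool:
    i, j = 0, len(tape) - 1       # the entire work-space: two indices
    while i < j:
        if tape[i] != tape[j]:    # read-only access to the input
            return False
        i, j = i + 1, j - 1
    return True

print(is_palindrome_logspace("10101"))  # True
print(is_palindrome_logspace("1010"))   # False
```

Counting only the indices, and not the input, is exactly the accounting that the input/work-tape model above formalizes.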

The ability to quickly find a solution to a problem; that is, being able to explore all possible branches of computation provides at most a polynomial speedup over being able to explore only a single branch. Furthermore, it would follow that if there exists a proof for a problem instance and that proof can quickly be checked for correctness (that is, if the problem is in NP ), then there also exists an algorithm that can quickly construct that proof (that is,

The added capability of being able to explore multiple possible future actions from a given state, and "choosing" a branch that accepts (if any accept). That is, while a DTM must follow only one branch of computation, an NTM can be imagined as a computation tree, branching into many possible computational pathways at each step (see image). If at least one branch of the tree halts with an "accept" condition, then

The bits 0 and 1 to provide an intuitive connection to real-life computers) contained on an infinitely long strip of tape. The TM can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write

The class of problems that are efficiently solvable and the class of problems whose solutions are merely efficiently checkable, P and NP are actually at the center of one of the most famous unsolved problems in computer science: the P versus NP problem. While it is known that P ⊆ N P {\displaystyle {\mathsf {P}}\subseteq {\mathsf {NP}}} (intuitively, deterministic Turing machines are just

The computational difficulty of solving the problems contained within them with respect to particular computational resources like time or memory. More formally, the definition of a complexity class consists of three things: a type of computational problem, a model of computation, and a bounded computational resource. In particular, most complexity classes consist of decision problems that can be solved by

The concept of a problem being hard for a complexity class. A problem X {\displaystyle X} is hard for a class of problems C if every problem in C can be polynomial-time reduced to X {\displaystyle X} . Thus no problem in C is harder than X {\displaystyle X} , since an algorithm for X {\displaystyle X} allows us to solve any problem in C with at most polynomial slowdown. Of particular importance,

The crossing sequences at boundary i and j , respectively. Let x' be the string obtained from x by removing all cells from i + 1 to j . The machine M still behaves exactly the same way on input x' as on input x , so it needs the same space to compute x' as to compute x . However, | x' | < | x | , contradicting the definition of x . Hence, there does not exist such a language L as assumed. □ The above theorem implies

The following way. For any time constructible function t ( n ), we have NTIME( t ( n )) ⊆ DSPACE( t ( n )).

Complexity class

In computational complexity theory , a complexity class is a set of computational problems "of related resource-based complexity ". The two most commonly analyzed resources are time and memory . In general, a complexity class is defined in terms of a type of computational problem,

The general class of functions that the time complexity function falls into. For instance, is the time complexity function a polynomial ? A logarithmic function ? An exponential function ? Or another kind of function? The space complexity of an algorithm with respect to the Turing machine model is the number of cells on the Turing machine's tape that are required to run an algorithm on a given input size. Formally,


The input size. An important characteristic of the class NP is that it can be equivalently defined as the class of problems whose solutions are verifiable by a deterministic Turing machine in polynomial time. That is, a language is in NP if there exists a deterministic polynomial time Turing machine, referred to as the verifier, that takes as input a string w {\displaystyle w} and

The key reasons many complexity classes are defined in the way that they are. Take, for example, a problem that can be solved in O ( n ) {\displaystyle O(n)} time (that is, in linear time) and one that can be solved in, at best, O ( n 1000 ) {\displaystyle O(n^{1000})} time. Both of these problems are in P , yet the runtime of

The language and is said to decide a language if it additionally rejects all inputs that are not in the language (certain inputs may cause a Turing machine to run forever, so decidability places the additional constraint over recognizability that the Turing machine must halt on all inputs). A Turing machine that "solves" a problem is generally meant to mean one that decides the language. Turing machines enable intuitive notions of "time" and "space". The time complexity of

The languages contained in X (i.e. co-X = { L | L ¯ ∈ X } {\displaystyle {\textsf {co-X}}=\{L|{\overline {L}}\in {\mathsf {X}}\}} ). co-NP , for instance, is one important complement complexity class, and sits at the center of the unsolved problem over whether co-NP = NP . Closure properties are one of

The length of a crossing sequence of M on x is at most | C | {\displaystyle |{\mathcal {C}}|} : if it is longer than that, then some configuration will repeat, and M will go into an infinite loop. There are also at most | C | {\displaystyle |{\mathcal {C}}|} possibilities for every element of a crossing sequence, so

The maximum amount of resources that the most efficient algorithm takes to solve them. More specifically, complexity classes are concerned with the rate of growth in the resources required to solve particular computational problems as the input size increases. For example, the amount of time it takes to solve problems in the complexity class P grows at a polynomial rate as the input size increases, which

The most efficient algorithm for solving this problem runs in polynomial time then the inherent time complexity of that problem is better described as polynomial. The time complexity of an algorithm with respect to the Turing machine model is the number of steps it takes for a Turing machine to run an algorithm on a given input size. Formally, the time complexity for an algorithm implemented with

The necessity of the space-constructible function assumption in the space hierarchy theorem . DSPACE is traditionally measured on a deterministic Turing machine . Several important space complexity classes are sublinear , that is, smaller than the size of the input. Thus, "charging" the algorithm for the size of the input, or for the size of the output, would not truly capture the memory space used. This

The notion of a problem being at least as difficult as another problem. Thus we are generally interested in using a polynomial-time reduction, since any problem X {\displaystyle X} that can be efficiently reduced to another problem Y {\displaystyle Y} is no more difficult than Y {\displaystyle Y} . Formally, a problem X {\displaystyle X}

The number of different crossing sequences of M on x is According to the pigeonhole principle , there exist indexes i < j such that C i ( x ) = C j ( x ) {\displaystyle {\mathcal {C}}_{i}(x)={\mathcal {C}}_{j}(x)} , where C i ( x ) {\displaystyle {\mathcal {C}}_{i}(x)} and C j ( x ) {\displaystyle {\mathcal {C}}_{j}(x)} are

The polynomials are the smallest class that ensures composition of "efficient algorithms". (Note that the definition of P is also useful because, empirically, almost all problems in P that are practically useful do in fact have low order polynomial runtimes, and almost all problems outside of P that are practically useful do not have any known algorithms with small exponential runtimes, i.e. with O ( c n ) {\displaystyle O(c^{n})} runtimes where c

It is not known whether every problem whose solutions can be verified in polynomial time can also be solved in polynomial time (that is, whether the problem is in P). However, the overwhelming majority of computer scientists believe that {\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}}, and most cryptographic schemes employed today rely on the assumption that {\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}}. EXPTIME (sometimes shortened to EXP)

Computational models abstract away incidental features of the real world (like differences in processor speed) that obstruct an understanding of fundamental principles. The most commonly used computational model is the Turing machine. While other models exist and many complexity classes are defined in terms of them (see section "Other models of computation"), the Turing machine is used to define most basic complexity classes. With the Turing machine, instead of using standard units of time like

The study of the relationships between complexity classes is a major area of research in theoretical computer science. There are often general hierarchies of complexity classes; for example, it is known that a number of fundamental time and space complexity classes relate to each other in the following way: L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE (where ⊆ denotes the subset relation).

the runtime of some Turing machine. There are two different definitions of a time-constructible function. In the first definition, a function f is called time-constructible if there exists a positive integer n 0 and a Turing machine M which, given a string {\displaystyle 1^{n}} consisting of n ones, stops after exactly f ( n ) steps for all n ≥ n 0 . In the second definition, a function f is called time-constructible if there exists a Turing machine M which, given a string {\displaystyle 1^{n}}, outputs the binary representation of f ( n ) in O ( f ( n )) time.
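As a toy illustration of the first definition (a sketch; an ordinary loop stands in for the Turing machine M, and f(n) = 2n is an arbitrary choice), on the unary input 1^n the "machine" performs exactly f(n) elementary ticks and then halts:

```python
def clocked_run(unary_input):
    """Toy stand-in for M with f(n) = 2n: on input 1^n,
    perform exactly 2*n elementary ticks, then halt."""
    n = len(unary_input)
    ticks = 0
    for _ in range(2 * n):
        ticks += 1  # one elementary step
    return ticks

print(clocked_run("1" * 5))  # 10, i.e. exactly f(5)
```

The point of time-constructibility is precisely this: the bound f(n) can itself be "clocked" by a machine, which is what hierarchy-theorem constructions need.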

the second (which make it impossible to disentangle running time from the speed of physical hardware) and standard units of memory like bytes, the notion of time is abstracted as the number of elementary steps that a Turing machine takes to solve a problem, and the notion of memory is abstracted as the number of cells that are used on the machine's tape. These are explained in greater detail below. It
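Both abstractions can be seen in a minimal simulator (a sketch; the machine encoding and the example machine, which scans right and marks the first blank, are illustrative assumptions, not from the article) that reports the number of steps taken and the number of tape cells touched:

```python
def run_tm(transitions, accept, tape_input, blank="_", max_steps=10_000):
    """Simulate a deterministic single-tape Turing machine and report
    (steps taken, tape cells touched) -- the abstract time and space measures."""
    tape = dict(enumerate(tape_input))
    state, head, steps = "q0", 0, 0
    visited = {0}
    while state != accept:
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        visited.add(head)
        steps += 1
        if steps > max_steps:
            raise RuntimeError("step budget exceeded")
    return steps, len(visited)

# Example machine: scan right over 1s, mark the first blank with 'x', accept.
trans = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("qa", "x", "R"),
}
print(run_tm(trans, "qa", "111"))  # (4, 5): 4 steps, 5 cells touched
```

On an input of n ones this machine takes n + 1 steps and touches n + 2 cells, so both resource functions grow linearly with the input size.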

The second grows considerably faster than the runtime of the first as the input size increases. One might ask whether it would be better to define the class of "efficiently solvable" problems using some smaller polynomial bound, like O ( n 3 ) {\displaystyle O(n^{3})} , rather than all polynomials, which allows for such large discrepancies. It turns out, however, that

Let {\displaystyle {\mathcal {C}}} denote the set of all configurations of M on input x. Since M ∈ DSPACE( s ( n )) with {\displaystyle s(n)=o(\log \log n)}, we have {\displaystyle |{\mathcal {C}}|\leq 2^{c\cdot s(n)}=o(\log n)}, where c is a constant depending on M . Let S denote the set of all possible crossing sequences of M on x . Note that

The set of all polynomials is the smallest class of functions containing the linear functions that is also closed under addition, multiplication, and composition (for instance, {\displaystyle O(n^{3})\circ O(n^{2})=O(n^{6})}, which is still a polynomial, though of higher degree than either factor). Since we would like composing one efficient algorithm with another efficient algorithm to still be considered efficient,
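The role of composition can be seen in a toy sketch (the stage names and step counts are made up, purely illustrative): a first stage doing O(n^2) work whose output has size n^2, fed into a second stage that takes time cubic in its input size, costs on the order of (n^2)^3 = n^6 steps overall, still a polynomial.

```python
def quadratic_stage(n):
    """First stage: produces an output of size n**2 (O(n^2) work)."""
    return ["x"] * (n * n)

def cubic_step_count(data):
    """Second stage: models an O(m^3) algorithm's step count on input size m."""
    return len(data) ** 3

for n in (2, 3, 4):
    assert cubic_step_count(quadratic_stage(n)) == n ** 6  # (n^2)^3 = n^6
print("composition of polynomials is polynomial")
```

Had "efficient" been defined as, say, O(n^3) only, this pipeline would fall outside the class even though each stage is efficient, which is why the full polynomial class is used.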

The set of problems that are hard for NP is called the set of NP-hard problems. If a problem {\displaystyle X} is hard for C and is also in C , then {\displaystyle X} is said to be complete for C . This means that {\displaystyle X} is the hardest problem in C (since there could be many problems that are equally hard, it is more precise to say that {\displaystyle X} is at least as hard as every problem in C ).

the space complexity of an algorithm implemented with a Turing machine {\displaystyle M} is defined as the function {\displaystyle s_{M}:\mathbb {N} \to \mathbb {N} }, where {\displaystyle s_{M}(n)} is the maximum number of cells that {\displaystyle M} uses on any input of length {\displaystyle n}. Complexity classes are often defined using granular sets of complexity classes called DTIME and NTIME (for time complexity) and DSPACE and NSPACE (for space complexity). Using big O notation, they are defined as follows: DTIME( t ( n )) is the set of decision problems solvable by a deterministic Turing machine in time O ( t ( n )), NTIME( t ( n )) is its nondeterministic analogue, and DSPACE( s ( n )) and NSPACE( s ( n )) are defined in the same way with space O ( s ( n )) in place of time. For example, {\displaystyle {\mathsf {P}}=\bigcup _{k\in \mathbb {N} }{\mathsf {DTIME}}(n^{k})}.

By Savitch's theorem, PSPACE = NPSPACE (since the square of a polynomial is still a polynomial) and EXPSPACE = NEXPSPACE (since the square of an exponential is still an exponential). These relationships answer fundamental questions about the power of nondeterminism compared to determinism. Specifically, Savitch's theorem shows that any problem that a nondeterministic Turing machine can solve in polynomial space, a deterministic Turing machine can also solve in polynomial space. Similarly, any problem that
