Graph traversal

In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.

Unlike tree traversal, graph traversal may require that some vertices be visited more than once, since it is not necessarily known before transitioning to a vertex that it has already been explored. As graphs become more dense, this redundancy becomes more prevalent, causing computation time to increase; as graphs become more sparse, the opposite holds true. Thus, it is usually necessary to remember which vertices have already been explored by the algorithm, so that vertices are revisited as infrequently as possible (or, in the worst case, to prevent the traversal from continuing indefinitely). This may be accomplished by associating each vertex of the graph with a "color" or "visitation" state during the traversal, which is then checked and updated as the algorithm visits each vertex. If the vertex has already been visited, it is ignored and the path is pursued no further; otherwise, the algorithm checks/updates the vertex and continues down its current path.

Several special cases of graphs imply the visitation of other vertices in their structure, and thus do not require that visitation be explicitly recorded during the traversal. An important example of this is a tree: during a traversal it may be assumed that all "ancestor" vertices of the current vertex (and others depending on the algorithm) have already been visited. Both the depth-first and breadth-first graph searches are adaptations of tree-based algorithms, distinguished primarily by the lack of a structurally determined "root" vertex and the addition of a data structure to record the traversal's visitation state. Note that if each vertex in a graph is to be traversed by a tree-based algorithm (such as DFS or BFS), then the algorithm must be called at least once for each connected component of the graph. This is easily accomplished by iterating through all the vertices of the graph, performing the algorithm on each vertex that is still unvisited when examined.

A depth-first search (DFS) is an algorithm for traversing a finite graph. DFS visits the child vertices before visiting the sibling vertices; that is, it traverses the depth of any particular path before exploring its breadth. A stack (often the program's call stack via recursion) is generally used when implementing the algorithm. The algorithm begins with a chosen "root" vertex; it then iteratively transitions from the current vertex to an adjacent, unvisited vertex, until it can no longer find an unexplored vertex to transition to from its current location. The algorithm then backtracks along previously visited vertices until it finds a vertex connected to yet more uncharted territory. It will then proceed down the new path as it had before, backtracking as it encounters dead ends, and ending only when the algorithm has backtracked past the original "root" vertex from the very first step. DFS is the basis for many graph-related algorithms, including topological sorts and planarity testing.

A breadth-first search (BFS) is another technique for traversing a finite graph. BFS visits the sibling vertices before visiting the child vertices, and a queue is used in the search process. This algorithm is often used to find the shortest path from one vertex to another, and it can be used to solve many other problems in graph theory, for example finding all vertices reachable from a starting vertex or testing a graph for bipartiteness.

The problem of graph exploration can be seen as a variant of graph traversal. It is an online problem, meaning that the information about the graph is only revealed during the runtime of the algorithm. A common model is as follows: given a connected graph G = (V, E) with non-negative edge weights, the algorithm starts at some vertex and knows all incident outgoing edges and the vertices at the end of these edges—but not more. When a new vertex is visited, then again all incident outgoing edges and the vertices at the end are known. The goal is to visit all n vertices and return to the starting vertex, while keeping the sum of the weights of the tour as small as possible. The problem can also be understood as a specific version of the travelling salesman problem, where the salesman has to discover the graph on the go. For general graphs, the best known algorithm for both undirected and directed graphs is a simple greedy algorithm.

A universal traversal sequence is a sequence of instructions comprising a graph traversal for any regular graph with a set number of vertices and for any starting vertex. A probabilistic proof was used by Aleliunas et al. to show that there exists a universal traversal sequence with a number of instructions proportional to O(n) for any regular graph with n vertices. The steps specified in the sequence are relative to the current node, not absolute. For example, if the current node is v_j and v_j has d neighbors, then the traversal sequence will specify the next node to visit, v_{j+1}, as the i-th neighbor of v_j, where 1 ≤ i ≤ d.

Call stack

In computer science, a call stack is a stack data structure that stores information about the active subroutines of a computer program. This type of stack is also known as an execution stack, program stack, control stack, run-time stack, or machine stack, and is often shortened to simply "the stack". Although maintenance of the call stack is important for the proper functioning of most software, the details are normally hidden and automatic in high-level programming languages. Many computer instruction sets provide special instructions for manipulating stacks.

A call stack is used for several related purposes, but the main reason for having one is to keep track of the point to which each active subroutine should return control when it finishes executing. An active subroutine is one that has been called but is yet to complete execution, after which control should be handed back to the point of call. Such activations of subroutines may be nested to any level (recursive as a special case), hence the stack structure. For example, if a subroutine DrawSquare calls a subroutine DrawLine from four different places, DrawLine must know where to return when its execution completes. To accomplish this, the address following the instruction that jumps to DrawLine, the return address, is pushed onto the top of the call stack as part of each call. Since the call stack is organized as a stack, the caller pushes the return address onto the stack, and the called subroutine, when it finishes, pulls or pops the return address off the call stack and transfers control to that address. If a called subroutine calls on yet another subroutine, it will push another return address onto the call stack, and so on, with the information stacking up and unstacking as the program dictates. If the pushing consumes all of the space allocated for the call stack, an error called a stack overflow occurs, generally causing the program to crash. Adding a block's or subroutine's entry to the call stack is sometimes called "winding", and removing entries "unwinding".

There is usually exactly one call stack associated with a running program (or more accurately, with each task or thread of a process), although additional stacks may be created for signal handling or cooperative multitasking (as with setcontext). Since there is only one in this important context, it can be referred to as the stack (implicitly "of the task"); however, in the Forth programming language the data stack or parameter stack is accessed more explicitly than the call stack and is commonly referred to as the stack (see below).

In high-level programming languages, the specifics of the call stack are usually hidden from the programmer. They are given access only to a set of functions, and not the memory on the stack itself. This is an example of abstraction. Most assembly languages, on the other hand, require programmers to be involved in manipulating the stack. The actual details of the stack in a programming language depend upon the compiler, the operating system, and the available instruction set.

As noted above, the primary purpose of a call stack is to store the return addresses. When a subroutine is called, the location (address) of the instruction at which the calling routine can later resume must be saved somewhere. Using a stack to save the return address has important advantages over some alternative calling conventions, such as saving the return address before the beginning of the called subroutine or in some other fixed location. One is that each task can have its own stack, and thus the subroutine can be thread-safe, that is, able to be active simultaneously for different tasks doing different things. Another benefit is that by providing reentrancy, recursion is automatically supported: when a function calls itself recursively, a return address needs to be stored for each activation of the function so that it can later be used to return from the function activation. Stack structures provide this capability automatically.

Depending on the language, operating system, and machine environment, a call stack may serve additional purposes, such as storing local data and passing parameters. The typical call stack is used for the return address, locals, and parameters (known as a call frame). In some environments there may be more or fewer functions assigned to the call stack. In the Forth programming language, for example, ordinarily only the return address, counted loop parameters and indexes, and possibly local variables are stored on the call stack (which in that environment is named the return stack), although any data can be temporarily placed there using special return-stack handling code so long as the needs of calls and returns are respected; parameters are ordinarily stored on a separate data stack or parameter stack, typically called the stack in Forth terminology even though there is a call stack, since it is usually accessed more explicitly. Some Forths also have a third stack for floating-point parameters.

A call stack is composed of stack frames (also called activation records or activation frames). These are machine-dependent and ABI-dependent data structures containing subroutine state information. Each stack frame corresponds to a call to a subroutine which has not yet terminated with a return. For example, if a subroutine named DrawLine is currently running, having been called by a subroutine DrawSquare, the top part of the call stack might be laid out like in the adjacent picture. A diagram like this can be drawn in either direction as long as the placement of the top, and so the direction of stack growth, is understood. Architectures differ as to whether call stacks grow towards higher addresses or towards lower addresses, so the logic of any diagram is not dependent on this addressing choice by convention. The stack frame at the top of the stack is for the currently executing routine, which can access information within its frame (such as parameters or local variables) in any order. The stack frame usually includes at least the parameters passed to the routine, the return address back to the caller, and space for the routine's local variables (in push order).

When stack frame sizes can differ, such as between different functions or between invocations of a particular function, popping a frame off the stack does not constitute a fixed decrement of the stack pointer. At function return, the stack pointer is instead restored to the frame pointer, the value of the stack pointer just before the function was called. Each stack frame contains a stack pointer to the top of the frame immediately below. The stack pointer is a mutable register shared between all invocations. A frame pointer of a given invocation of a function is a copy of the stack pointer as it was before the function was invoked. The locations of all other fields in the frame can be defined relative either to the top of the frame, as negative offsets of the stack pointer, or relative to the top of the frame below, as positive offsets of the frame pointer. The location of the frame pointer itself must inherently be defined as a negative offset of the stack pointer.

In most systems a stack frame has a field to contain the previous value of the frame pointer register, the value it had while the caller was executing. For example, the stack frame of DrawLine would have a memory location holding the frame pointer value that DrawSquare uses (not shown in the diagram above). The value is saved upon entry to the subroutine. Having such a field in a known location in the stack frame enables code to access each frame successively underneath the currently executing routine's frame, and also allows the routine to easily restore the frame pointer to the caller's frame just before it returns.

Programming languages that support nested subroutines also have a field in the call frame that points to the stack frame of the latest activation of the procedure that most closely encapsulates the callee, i.e. the immediate scope of the callee. This is called an access link or static link (as it keeps track of static nesting during dynamic and recursive calls) and provides the routine (as well as any other routines it may invoke) access to the local data of its encapsulating routines at every nesting level. Some architectures, compilers, or optimization cases store one link for each enclosing level (not just the immediately enclosing), so that deeply nested routines that access shallow data do not have to traverse several links; this strategy is often called a "display". Access links can be optimized away when an inner function does not access any (non-constant) local data in the encapsulation, as is the case with pure functions communicating only via arguments and return values, for example. Some historical computers, such as the Electrologica X8 and somewhat later the Burroughs large systems, had special "display registers" to support nested functions, while compilers for most modern machines (such as the ubiquitous x86) simply reserve a few words on the stack for the pointers as needed.

For some purposes, the stack frame of a subroutine and that of its caller can be considered to overlap, the overlap consisting of the area where the parameters are passed from the caller to the callee. In some environments, the caller pushes each argument onto the stack, thus extending its stack frame, then invokes the callee. In other environments, the caller has a preallocated area at the top of its stack frame to hold the arguments it supplies to other subroutines it calls. This area is sometimes termed the outgoing arguments area or callout area. Under this approach, the size of the area is calculated by the compiler to be the largest needed by any called subroutine.

Usually the call stack manipulation needed at the site of a call to a subroutine is minimal (which is good since there can be many call sites for each subroutine to be called). The values for the actual arguments are evaluated at the call site, since they are specific to the particular call, and either pushed onto the stack or placed into registers, as determined by the calling convention used. The actual call instruction, such as "branch and link", is then typically executed to transfer control to the code of the target subroutine.

In the called subroutine, the first code executed is usually termed the subroutine prologue, since it does the necessary housekeeping before the code for the statements of the routine is begun. For instruction set architectures in which the instruction used to call a subroutine puts the return address into a register rather than pushing it onto the stack, the prologue will commonly save the return address by pushing the value onto the call stack, although if the called subroutine does not call any other routines it may leave the value in the register. Similarly, the current stack pointer and/or frame pointer values may be pushed. If frame pointers are being used, the prologue will typically set the new value of the frame pointer register from the stack pointer. Space on the stack for local variables can then be allocated by incrementally changing the stack pointer. The Forth programming language allows explicit winding of the call stack (called there the "return stack").

When a subroutine is ready to return, it executes an epilogue that undoes the steps of the prologue. This will typically restore saved register values (such as the frame pointer value) from the stack frame, pop the entire stack frame off the stack by changing the stack pointer value, and finally branch to the instruction at the return address. Under many calling conventions, the items popped off the stack by the epilogue include the original argument values, in which case there usually are no further stack manipulations that need to be done by the caller. With some calling conventions, however, it is the caller's responsibility to remove the arguments from the stack after the return.

Returning from the called function will pop the top frame off the stack, perhaps leaving a return value. The more general act of popping one or more frames off the stack to resume execution elsewhere in the program is called stack unwinding and must be performed when non-local control structures are used, such as those used for exception handling. In this case, the stack frame of a function contains one or more entries specifying exception handlers. When an exception is thrown, the stack is unwound until a handler is found that is prepared to handle (catch) the type of the thrown exception.

Some languages have other control structures that require general unwinding. Pascal allows a global goto statement to transfer control out of a nested function and into a previously invoked outer function. This operation requires the stack to be unwound, removing as many stack frames as necessary to restore the proper context to transfer control to the target statement within the enclosing outer function. Similarly, C has the setjmp and longjmp functions that act as non-local gotos. Common Lisp allows control of what happens when the stack is unwound by using the unwind-protect special operator.

When applying a continuation, the stack is (logically) unwound and then rewound with the stack of the continuation. This is not the only way to implement continuations; for example, using multiple, explicit stacks, application of a continuation can simply activate its stack and wind a value to be passed. The Scheme programming language allows arbitrary thunks to be executed at specified points on "unwinding" or "rewinding" of the control stack when a continuation is invoked.

The call stack can sometimes be inspected as the program is running. Depending on how the program is written and compiled, the information on the stack can be used to determine intermediate values and function call traces. This has been used to generate fine-grained automated tests, and in cases like Ruby and Smalltalk, to implement first-class continuations. As an example, the GNU Debugger (GDB) implements interactive inspection of the call stack of a running, but paused, C program. Taking regular-time samples of the call stack can be useful in profiling the performance of programs: if a subroutine's address appears in the call stack sampling data many times, it is likely to act as a code bottleneck and should be inspected for performance problems.

In a language with free pointers or unchecked array writes (such as C), the mixing of control flow data which affects the execution of code (the return addresses or the saved frame pointers) and simple program data (parameters or return values) in a call stack is a security risk, possibly exploitable through stack buffer overflows, which are the most common type of buffer overflow. One such attack involves filling one buffer with arbitrary executable code and then overflowing this or some other buffer to overwrite some return address with a value that points directly to the executable code. As a result, when the function returns, the computer executes that code. This kind of attack can be blocked with W^X, but similar attacks can succeed even with W^X protection enabled, including the return-to-libc attack or attacks coming from return-oriented programming. Various mitigations have been proposed.