Ocean Drilling Program

In hydrology, an oceanic basin (or ocean basin) is anywhere on Earth that is covered by seawater. Geologically, most ocean basins are large geologic basins that lie below sea level.


The Ocean Drilling Program (ODP) was a multinational effort to explore and study the composition and structure of the Earth's oceanic basins, running from 1985 to 2004. ODP was the successor to the Deep Sea Drilling Project initiated in 1968 by the United States. It was an international effort with contributions from Australia, Germany, France, Japan, the United Kingdom and the ESF Consortium for Ocean Drilling (ECOD), comprising 12 further countries. The program used

A Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives
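The "depends only on now" rule can be sketched as a one-step sampler over a small transition table; the two weather-style states and their probabilities below are invented for illustration:

```python
import random

# Hypothetical two-state chain: the next state is sampled using only
# the current state, never the earlier history (the Markov property).
P = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state, rng=random):
    """Sample the next state from the current state's row of P."""
    r = rng.random()
    cum = 0.0
    for nxt, p in P[state].items():
        cum += p
        if r < cum:
            return nxt
    return nxt  # guard against floating-point round-off

state = "sunny"
for _ in range(10):
    state = step(state)
```

Note that `step` receives only the current state as input, so by construction no earlier state can influence the draw.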

a countable set S called the state space of the chain. A continuous-time Markov chain (X_t)_{t ≥ 0} is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements q_ij are non-negative and describe
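As a minimal sketch of these conditions (the 3-state rate matrix is invented for illustration), a candidate Q can be checked for non-negative off-diagonal rates and zero row sums:

```python
import numpy as np

# Hypothetical 3-state transition rate matrix: off-diagonal entries
# q_ij >= 0 are transition rates; each diagonal entry q_ii is chosen
# so that its row sums to zero.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 1.0, -2.0,  1.0],
    [ 0.5,  0.5, -1.0],
])

assert np.allclose(Q.sum(axis=1), 0.0)           # rows sum to zero
assert (Q - np.diag(np.diag(Q)) >= 0).all()      # off-diagonals non-negative
```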

a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing. The adjectives Markovian and Markov are used to describe something that

a detailed study on Markov chains. Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market, as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived

a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938

a global ocean model. These trajectories are of particles that move only on the surface of the ocean. The model outcome gives the probability that a particle at a certain grid point ends up somewhere else on the ocean's surface. From the model outcome a matrix can be created, from which the eigenvectors and eigenvalues are taken. These eigenvectors show regions of attraction, i.e., regions where things on
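A hedged sketch of this construction: count transitions between grid cells from short (start cell, end cell) trajectory pairs, normalize rows into a transition matrix, and inspect the leading eigenvectors. The trajectory data here is synthetic, purely for illustration:

```python
import numpy as np

# Synthetic (start_cell, end_cell) pairs standing in for short-term
# surface-drifter trajectories; cells 0-1 and 2-3 form two regions
# with no exchange between them.
n_cells = 4
pairs = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 2), (2, 3), (3, 3), (3, 2)]

counts = np.zeros((n_cells, n_cells))
for start, end in pairs:
    counts[start, end] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic matrix

# Left eigenvectors of P (right eigenvectors of P.T); eigenvalues at
# or near 1 flag weakly connected regions of attraction. Here the two
# disconnected 2-cell regions make eigenvalue 1 appear twice.
vals, vecs = np.linalg.eig(P.T)
leading = vecs[:, np.argsort(-vals.real)[0]].real
```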

a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n × n matrix, and define Q = lim_{k→∞} P^k. It is always true that QP = Q. Subtracting Q from both sides and factoring then yields Q(P − I_n) = 0_{n,n}, where I_n
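A numerical sketch of this limit (the 2-state matrix is illustrative): a high power of P approximates Q, which then satisfies QP = Q and has identical rows:

```python
import numpy as np

# Illustrative 2-state regular chain; a high power of P approximates
# Q = lim_{k->inf} P^k.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

Q = np.linalg.matrix_power(P, 200)

assert np.allclose(Q @ P, Q)     # QP = Q, equivalently Q(P - I) = 0
assert np.allclose(Q[0], Q[1])   # each row of Q is the stationary pi
```

For this P the stationary distribution is (5/6, 1/6), which each row of Q approaches.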

a particle on the ocean surface in a certain region is more likely to stay in the same region than to pass over to a different one. Depending on the chemical composition and the physical state, the Earth can be divided into three major components: the mantle, the core, and the crust. The crust is the outermost layer of the Earth. It is made of solid rock, mostly basalt and granite. The crust that lies below sea level

a quarter are drawn. Thus X_6 = $0.50. If we know not just X_6, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that X_7 ≥ $0.60 with probability 1. But if we do not know

a rank-one matrix in which each row is the stationary distribution π: lim_{k→∞} P^k = 1π, where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, lim_{k→∞} P^k is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matrices P,


a set of differential equations describing the processes. Independently of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to

Oceanic basin

Most commonly the ocean is divided into basins following the distribution of the continents: the North and South Atlantic (together approximately 75 million km² / 29 million mi²), North and South Pacific (together approximately 155 million km² / 59 million mi²), Indian Ocean (68 million km² / 26 million mi²) and Arctic Ocean (14 million km² / 5.4 million mi²). Also recognized

is a left eigenvector of P, and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ_1, λ_2, λ_3, ..., λ_n). Then by eigendecomposition, let the eigenvalues be enumerated such that: Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector
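Numerically, the stationary distribution can be read off as the left eigenvector of P for eigenvalue 1 (i.e., a right eigenvector of P transposed), normalized to sum to 1. The 3-state matrix below is illustrative:

```python
import numpy as np

# Illustrative 3-state row-stochastic transition matrix.
P = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])

# Left eigenvectors of P are right eigenvectors of P.T.
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1.0))    # index of the eigenvalue closest to 1
pi = np.real(vecs[:, i])
pi = pi / pi.sum()                   # normalize so the entries sum to 1

assert np.allclose(pi @ P, pi)       # pi P = pi: pi is stationary
```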

is affected not only by the volume of the ocean basins, but also by the volume of water in them. Factors that influence the volume of the ocean basins are: The Atlantic Ocean and the Arctic Ocean are good examples of active, growing oceanic basins, whereas the Mediterranean Sea is shrinking. The Pacific Ocean is also active but is shrinking, even though it has both a spreading ridge and oceanic trenches. Perhaps

is diagonalizable, or equivalently that P has n linearly independent eigenvectors, speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a somewhat more involved set of arguments in a similar way.) Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1), where each column

is known as the oceanic crust, while on land it is known as the continental crust. The former is thinner and is composed of relatively dense basalt, while the latter is less dense and mainly composed of granite. The lithosphere is composed of the crust (oceanic and continental) and the uppermost part of the mantle. The lithosphere is broken into sections called plates. Tectonic plates move very slowly (5 to 10 cm (2 to 4 inches) per year) relative to each other and interact along their boundaries. This movement

is more than one unit eigenvector, then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution. The values of a stationary distribution π_i are associated with

is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover,

is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the state X_n = i, j, k depends exclusively on

is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − I_n)]^{-1} exists, then Q = f(0_{n,n}) · [f(P − I_n)]^{-1}. One thing to notice is that if P has an element P_{i,i} on its main diagonal that is equal to 1 and the i-th row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of
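A sketch of this method in NumPy (the 3-state matrix P is illustrative, and the inverse is assumed to exist, as the text requires):

```python
import numpy as np

def f(A):
    """Return a copy of A with its right-most column replaced by ones."""
    B = A.copy()
    B[:, -1] = 1.0
    return B

# Illustrative 3-state row-stochastic transition matrix.
P = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

# Q = f(0) @ inv(f(P - I)): each row of Q is the stationary distribution.
Q = f(np.zeros((n, n))) @ np.linalg.inv(f(P - np.eye(n)))
pi = Q[0]

assert np.allclose(pi @ P, pi)   # pi is stationary
```

The trick works because π solves π(P − I) = 0 together with the normalization π·1 = 1; replacing the last column by ones folds the normalization into a single invertible system.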


is possible to model this scenario as a Markov process. Instead of defining X_n to represent the total value of the coins on the table, we could define X_n to represent the count of the various coin types on the table. For instance, X_6 = 1, 0, 5 could be defined to represent
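A hedged simulation sketch of this count-based state (coin names and the drawing routine are written out for illustration): the tuple of counts is a valid Markov state, and the running total X_n is recoverable from it:

```python
import random

# Coin values for the purse example: five of each coin type.
VALUES = {"quarter": 0.25, "dime": 0.10, "nickel": 0.05}

def draw_sequence(rng):
    """Draw six coins and record the count-state after each draw."""
    purse = ["quarter"] * 5 + ["dime"] * 5 + ["nickel"] * 5
    rng.shuffle(purse)
    counts = {"quarter": 0, "dime": 0, "nickel": 0}
    states = []
    for coin in purse[:6]:
        counts[coin] += 1
        states.append(tuple(counts[c] for c in ("quarter", "dime", "nickel")))
    return states

states = draw_sequence(random.Random(0))
q, d, n = states[-1]
total = 0.25 * q + 0.10 * d + 0.05 * n   # X_6 is a function of the count-state
```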

is related to a Markov process. A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and, most importantly, such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on

is responsible for most of the Earth's seismic and volcanic activity. Depending on how the plates interact with each other, there are three types of boundaries. The Earth's deepest trench is the Mariana Trench, which extends for about 2,500 km (1,600 miles) across the seabed. It is near the Mariana Islands, a volcanic archipelago in the West Pacific. Its deepest point is 10,994 m (nearly 7 miles) below

is the Kronecker delta, using the little-o notation. The q_ij can be seen as measuring how quickly the transition from i to j happens. Define a discrete-time Markov chain Y_n to describe the n-th jump of the process and variables S_1, S_2, S_3, ... to describe holding times in each of the states, where S_i follows

is the Southern Ocean (20 million km² / 7 million mi²). All ocean basins collectively cover 71% of the Earth's surface, and together they contain almost 97% of all water on the planet. They have an average depth of almost 4 km (about 2.5 miles). "Limits of Oceans and Seas", published by the International Hydrographic Office in 1953, is the document that defined the ocean's basins as they are largely known today. The main ocean basins are

is the identity matrix of size n, and 0_{n,n} is the zero matrix of size n × n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each

is the identity matrix. If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)-th element of P equal to Pr(X_{n+1} = j | X_n = i). Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix. A stationary distribution π is a (row) vector, whose entries are non-negative and sum to 1,

is unchanged by the operation of transition matrix P on it, and so is defined by πP = π. By comparing this definition with that of an eigenvector, we see that the two concepts are related, and that π is a normalized (Σ_i π_i = 1) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there

is unique too (because there is no other π which solves the stationary distribution equation above). Let u_i be the i-th column of the U matrix, that is, u_i is the left eigenvector of P corresponding to λ_i. Also let x be a length-n row vector that represents a valid probability distribution; since the eigenvectors u_i span R^n, we can write x = a_1 u_1 + a_2 u_2 + ⋯ + a_n u_n. If we multiply x with P from the right and continue this operation with

is unity and that π lies on a simplex. If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, P^k. If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P^k converges to


the exponential distribution with rate parameter −q_{Y_i Y_i}. For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t_0, t_1, t_2, ..., and all states recorded at these times i_0, i_1, i_2, i_3, ..., it holds that Pr(X_{t_{n+1}} = i_{n+1} | X_{t_0} = i_0, ..., X_{t_n} = i_n) = p_{i_n i_{n+1}}(t_{n+1} − t_n), where p_ij is the solution of the forward equation (a first-order differential equation) with initial condition P(0)
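The jump chain Y_n with exponential holding times S_i suggests a direct simulation recipe; a hedged sketch with an invented 3-state rate matrix:

```python
import random

# Hypothetical 3-state rate matrix Q (rows sum to zero).
Q = [[-3.0,  2.0,  1.0],
     [ 1.0, -2.0,  1.0],
     [ 0.5,  0.5, -1.0]]

def simulate(t_end, state=0, rng=random):
    """Simulate the CTMC up to time t_end: in state i, hold for an
    Exponential(-q_ii) time, then jump to j != i with probability
    q_ij / (-q_ii) (the embedded jump chain Y_n)."""
    t = 0.0
    path = [(t, state)]
    while True:
        rate = -Q[state][state]
        t += rng.expovariate(rate)          # holding time S ~ Exp(rate)
        if t >= t_end:
            return path
        r = rng.random() * rate
        cum = 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            cum += q
            if r < cum:
                state = j
                break
        path.append((t, state))

path = simulate(5.0, rng=random.Random(1))
```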

the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps. Since the system changes randomly, it is generally impossible to predict with certainty

the Atlantic and Arctic basins. The Atlantic Basin began to form around 180 million years ago, when the continent Laurasia (North America and Eurasia) started to drift away from Africa and South America. The Pacific plate grew, and subduction led to a shrinking of its bordering plates. The Pacific plate continues to move northward. Around 130 million years ago the South Atlantic started to form, as South America and Africa began to separate. At around this time India and Madagascar rifted northwards, away from Australia and Antarctica, creating seafloor around Western Australia and East Antarctica. When Madagascar and India separated between 90 and 80 million years ago,

the Mariana Islands. It is located far away from oceanic spreading centers, where oceanic crust is constantly created or destroyed. The oldest crust is estimated to be only around 200 million years old, compared to the age of the Earth, which is 4.6 billion years. 200 million years ago nearly all land mass was one large continent called Pangea, which started to split up. During the splitting of Pangea, some ocean basins shrank, such as the Pacific, while others were created, such as

the Markov chain would converge to a fixed vector of values, thus proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains. In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include

the best example of an inactive oceanic basin is the Gulf of Mexico, which formed in Jurassic times and has been collecting sediments ever since. The Aleutian Basin is another example of a relatively inactive oceanic basin. The Japan Basin in the Sea of Japan, which formed in the Miocene, is still tectonically active, although recent changes have been relatively mild.

Markov chain

In probability theory and statistics,

the definition of the process, so there is always a next state, and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are

the drillship JOIDES Resolution on 110 expeditions (legs) to collect about 2,000 deep-sea cores from major geological features located in the ocean basins of the world. Drilling discoveries led to further questions and hypotheses, as well as to new disciplines in the earth sciences, such as the field of paleoceanography. In 2004 ODP transformed into the Integrated Ocean Drilling Program (IODP).

the earlier values, then based only on the value X_6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X_7 are impacted by our knowledge of values prior to X_6. However, it

the early 20th century in the form of the Poisson process. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov, who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of


the first draw results in state X_1 = 0, 1, 0. The probability of achieving X_2 now depends on X_1; for example, the state X_2 = 1, 0, 1

the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s. Suppose that there is a coin purse containing five quarters (each worth 25¢), five dimes (each worth 10¢), and five nickels (each worth 5¢), and that, one by one, coins are randomly drawn from the purse and set on a table. If X_n represents

the individual ocean basins has fluctuated in the past due to, among other things, tectonic plate movements. Therefore, an oceanic basin can be actively changing size and/or depth or can be relatively inactive. The elements of an active and growing oceanic basin include an elevated mid-ocean ridge, flanking abyssal hills leading down to abyssal plains, and an oceanic trench. Changes in biodiversity, floodings and other climate variations are linked to sea level, and are reconstructed with different models and observations (e.g., age of oceanic crust). Sea level

the limit lim_{k→∞} P^k does not exist while the stationary distribution does, as shown by this example: (This example illustrates a periodic Markov chain.) Because there are a number of different special cases to consider, the process of finding this limit if it exists can be
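A minimal chain with this behavior (a stationary distribution exists, but lim P^k does not) is the two-state swap, shown here as a sketch:

```python
import numpy as np

# Period-2 chain: P swaps the two states, so P^k alternates between
# P (odd k) and I (even k) and never converges, yet pi = (1/2, 1/2)
# still satisfies pi P = pi.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
pi = np.array([0.5, 0.5])

assert np.allclose(pi @ P, pi)                            # pi is stationary
assert np.allclose(np.linalg.matrix_power(P, 2), np.eye(2))
assert not np.allclose(np.linalg.matrix_power(P, 3), np.eye(2))
```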

the line between the North and South Atlantic is set at the equator. The Antarctic or Southern Ocean, which reaches from 60° south to Antarctica, had been omitted until 2000, but is now also recognized by the International Hydrographic Office. Nevertheless, since ocean basins are interconnected, many oceanographers prefer to refer to one single ocean basin instead of multiple ones. Older references (e.g., Littlehales 1930) consider

the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). The system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time: Note that there

the ocean is very slow compared to horizontal flow, and observing the deep ocean is difficult. Defining the ocean basins based on connectivity of the entire ocean (depth and width) is therefore not possible. Froyland et al. (2014) defined ocean basins based on surface connectivity. This is achieved by creating a Markov chain model of the surface ocean dynamics using short-term trajectory data from

the oceanic basins to be the complement to the continents, with erosion dominating the latter, and the sediments so derived ending up in the ocean basins. This vision is supported by the fact that oceans lie lower than continents, so the former serve as sedimentary basins that collect sediment eroded from the continents, known as clastic sediments, as well as precipitation sediments. Ocean basins also serve as repositories for

the ones named in the previous section. These main basins are divided into smaller parts. Some examples are: the Baltic Sea (with three subdivisions), the North Sea, the Greenland Sea, the Norwegian Sea, the Laptev Sea, the Gulf of Mexico, the South China Sea, and many more. The limits were set for the convenience of compiling sailing directions but had no geographical or physical basis, and to this day they have no political significance. For instance,

the outcome of the X_{n−1} = ℓ, m, p state. A discrete-time Markov chain is a sequence of random variables X_1, X_2, X_3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n). The possible values of X_i form


the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of

the rate of the process transitions from state i to state j. The elements q_ii are chosen such that each row of the transition rate matrix sums to zero, while the row sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. There are three equivalent definitions of the process. Let X_t be the random variable describing

the results, in the end we get the stationary distribution π. In other words, π = a_1 u_1 is the limit of xP^k as k → ∞. Since π is parallel to u_1 (normalized by L2 norm) and π is a probability vector, xP^k approaches a_1 u_1 = π as k → ∞ exponentially, with a speed on the order of λ_2/λ_1. This follows because |λ_2| ≥ ⋯ ≥ |λ_n|, hence λ_2/λ_1
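This geometric decay rate can be observed directly; the 2-state matrix and starting vector below are illustrative (its eigenvalues are 1 and 0.4, so |λ_2|/|λ_1| = 0.4):

```python
import numpy as np

# Illustrative chain whose eigenvalues are 1 and 0.4.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([5/6, 1/6])    # stationary distribution: pi P = pi
lam2 = 0.4

# Track the L1 error of x P^k against pi: it shrinks by a factor of
# |lambda_2| per step.
x = np.array([1.0, 0.0])
errors = []
for _ in range(10):
    x = x @ P
    errors.append(np.abs(x - pi).sum())

ratios = [b / a for a, b in zip(errors, errors[1:])]
assert all(abs(r - lam2) < 1e-6 for r in ratios)   # geometric decay at 0.4
```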

the rows in P is 1, there are n + 1 equations for determining n unknowns, so it is computationally easier if, on the one hand, one selects one row in Q and substitutes each of its elements by one, and on the other, one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q. Here

the skeletons of carbonate- and silica-secreting organisms such as coral reefs, diatoms, radiolarians, and foraminifera. More modern sources (e.g., Floyd 1991) regard the ocean basins more as basaltic plains than as sedimentary depositories, since most sedimentation occurs on the continental shelves and not in the geologically defined ocean basins. The flow in the ocean is not uniform but varies with depth. Vertical circulation in

the spreading ridges in the Indian Ocean were reorganized. The northernmost part of the Atlantic Ocean was also formed at this time, when Europe and Greenland separated. About 60 million years ago a new rift and oceanic ridge formed between Greenland and Europe, separating them and initiating the formation of oceanic crust in the Norwegian Sea and the Eurasian Basin in the eastern Arctic Ocean. The area occupied by

the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important. Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Markov processes in continuous time were discovered long before his work in

the state of the process at time t, and assume the process is in a state i at time t. Then, knowing X_t = i, X_{t+h} = j is independent of previous values (X_s : s < t), and as h → 0, for all j and for all t, Pr(X(t + h) = j | X(t) = i) = δ_ij + q_ij h + o(h), where δ_ij
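This first-order relation can be checked numerically: over a short interval h, the transition matrix is the matrix exponential of Qh, which agrees with I + Qh to first order. The rate matrix and the series-based exponential below are illustrative (a truncated power series is fine here because Qh is tiny):

```python
import numpy as np

# Hypothetical 3-state rate matrix (rows sum to zero).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 0.5,  0.5, -1.0]])

def expm_series(A, terms=20):
    """Matrix exponential by truncated power series (adequate for small A)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

h = 1e-4
Ph = expm_series(Q * h)    # P(h) = exp(Qh)

assert np.allclose(Ph.sum(axis=1), 1.0)                  # rows still sum to 1
assert np.allclose(Ph, np.eye(3) + Q * h, atol=1e-6)     # delta_ij + q_ij h + o(h)
```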

the state space of P, and its eigenvectors have their relative proportions preserved. Since the components of π are positive, the constraint that their sum is unity can be rewritten as Σ_i 1·π_i = 1, and we see that the dot product of π with a vector whose components are all 1

the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by 6 × 6 × 6 = 216 possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that


the subsequent powers P^k. Hence, the i-th row or column of Q will have the 1 and the 0's in the same positions as in P. As stated earlier, from the equation π = πP (if it exists), the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then, assuming that P

the surface of the ocean (plastic, biomass, water, etc.) become trapped. One of these regions is, for example, the Atlantic garbage patch. With this approach the five main ocean basins are still the North and South Atlantic, North and South Pacific and the Arctic Ocean, but with different boundaries between the basins. These boundaries show the lines of very little surface connectivity between the different regions, which means that

the surface of the sea. The Earth's longest trench runs alongside the coast of Peru and Chile, reaching a depth of 8,065 m (26,460 ft) and extending for approximately 5,900 km (3,700 miles). It occurs where the oceanic Nazca plate slides under the continental South American plate and is associated with the upthrust and volcanic activity of the Andes. The oldest oceanic crust is in the far western equatorial Pacific, east of

the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in

the term may refer to a process on an arbitrary state space. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. The changes of state of

the time index need not necessarily be real-valued; as with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state-space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions:

the total value of the coins set on the table after n draws, with X_0 = 0, then the sequence {X_n : n ∈ N} is not a Markov process. To see why this is the case, suppose that in the first six draws, all five nickels and
