
OBO Foundry


The Open Biological and Biomedical Ontologies (OBO) Foundry is a group of people who build and maintain ontologies related to the life sciences. The OBO Foundry establishes a set of principles for ontology development, with the goal of creating a suite of interoperable reference ontologies in the biomedical domain. Currently, more than a hundred ontologies follow the OBO Foundry principles.


The OBO Foundry effort makes it easier to integrate biomedical results and carry out analysis in bioinformatics. It does so by offering a structured reference for terms from different research fields and their interconnections (e.g. a phenotype in a mouse model and its related phenotype in zebrafish). The Foundry initiative aims at improving the integration of data in the life sciences. One approach to integration

A human-readable way. That means that besides the alphanumeric identification for each item, items should be described in natural language by logical statements following Aristotelian logic, in a way that is unique within the ontology. The ontologies should use relations between items from the Relations Ontology (RO). This ensures that different ontologies can be integrated seamlessly, which

A central element of the NCBO's BioPortal. It is an initiative led by the OBO Foundry. The OBO Foundry is open to participation by any interested individual. Ontologies that intend to be officially part of the OBO Foundry have to adhere to the OBO principles and pass a series of reviews done by the members, where "the Foundry coordinators serve as analogs of journal editors". There are ontologies that follow OBO principles but are not officially part of OBO, such as eagle-i's Reagent Application Ontology, and

A community effort, standard common mappings have been created for lossless round-trip transformations between the Open Biomedical Ontologies (OBO) format and OWL. The research contains a methodical examination of each of the constructs of OBO and a layer cake for OBO, similar to the Semantic Web stack. The initial set of OBO Foundry ontologies was composed of mature ontologies (such as the Gene Ontology, GO, and

A comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures. Important sub-disciplines within bioinformatics and computational biology include: The primary goal of bioinformatics

A critical area of bioinformatics research. In genomics, annotation refers to the process of marking the start and stop regions of genes and other biological features in a sequenced DNA sequence. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics. Genome annotation can be classified into three levels:

A field parallel to biochemistry (the study of chemical processes in biological systems). Bioinformatics and computational biology involve the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology. Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics. There has been

A particular population of cancer cells. Protein microarrays and high-throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and

A pioneer in the field, compiled one of the first protein sequence databases, initially published as books, as well as methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences, released online with Tai Te Wu between 1980 and 1991. In

A protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of

A single Uniform Resource Identifier (URI). New ontologies then have to reuse work done in other efforts. Despite the ideal of uniqueness of terms and interoperability, in practice this is difficult to enforce, leading to the occurrence of term duplication. Furthermore, some ontologies do not reuse terms, or even reuse terms inappropriately. Ontologies evolve in time, refining concepts and descriptions according to advances in


A spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models. Many of these studies are based on the detection of sequence homology to assign sequences to protein families. Pan genomics

A tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less. Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff,

A tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes. Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures. Since

Is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error). While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict

Is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context such as genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and

Is an effort to create ontologies (controlled vocabularies) for use across biological and medical domains. A subset of the original OBO ontologies started the OBO Foundry, which has led the OBO efforts since 2007. The creation of OBO in 2001 was largely inspired by the efforts of the Gene Ontology project. OBO forms part of the resources of the U.S. National Center for Biomedical Ontology (NCBO) and

Is an open competition in which worldwide research groups submit protein models for evaluating unknown protein structures. The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of
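The determination of primary structure from codons can be sketched in a few lines. This is a minimal illustration, not a full translation tool: the codon table below is a small excerpt of the standard genetic code, and real coding sequences require full 64-codon handling, frame detection, and strand awareness.

```python
# Minimal sketch: deriving a protein's primary structure from codons.
# CODON_TABLE is a small illustrative excerpt of the standard genetic code.
CODON_TABLE = {
    "ATG": "M", "TGG": "W", "TTT": "F", "GGC": "G",
    "AAA": "K", "GAT": "D", "TAA": "*", "TGA": "*", "TAG": "*",
}

def translate(dna: str) -> str:
    """Translate a coding DNA sequence into its primary structure,
    stopping at the first stop codon ('*')."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGCTAA"))  # → MFG
```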

Is arguably easier to understand and use than the traditional ontology models (which require a high degree of specific expertise). Summary of OBO Foundry principles for development of an OBO-compatible life sciences ontology: The ontologies are openly available and have to be released either under the CC-BY 3.0 license or into the public domain (CC0). The openness of the ontologies has enabled, for example,

Is based on a series of surveys aimed at cataloguing the naming conventions of current ontologies, as well as discovering issues relating to these conventions. The ontologies should be updated with regard to changes in scientific consensus. The OBO Foundry defines scientific consensus as "multiple publications by independent labs over a year come to the same conclusion, and there is no or limited (<10%) dissenting opinions published in

Is called protein function prediction. For instance, if a protein is found in the nucleus, it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well-developed protein subcellular localization prediction resources available, including protein subcellular location databases and prediction tools. Data from high-throughput chromosome conformation capture experiments, such as Hi-C and ChIA-PET, can provide information on


Is often found to contain considerable variability, or noise, and thus hidden Markov model and change-point analysis methods are being developed to infer real copy number changes. Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations, which need to be distinguished from passengers. Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer-driven mutations in
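The change-point idea mentioned above can be illustrated with a toy example: find the single split point that best explains a noisy copy-number signal as two constant segments, by minimizing total squared error. This is an illustrative sketch only; real copy-number callers use multi-segment models such as HMMs, and the simulated signal is invented.

```python
# Illustrative single change-point detector (not a production CNV caller):
# choose the split minimizing the summed squared error of two flat segments.
def best_changepoint(signal):
    def sse(xs):
        if not xs:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(signal)):
        cost = sse(signal[:k]) + sse(signal[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Simulated log2 ratios: diploid baseline, then a gained segment.
signal = [0.0, 0.1, -0.1, 0.05, 0.9, 1.1, 1.0, 0.95]
print(best_changepoint(signal))  # → 4
```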

Is especially important for logical inference. The Relations Ontology (RO) is an ontology designed to represent the relationships between different biomedical concepts. It rigorously describes relations like "part_of", "located_in" and "preceded_by" that are reused by many OBO Foundry ontologies. OBO ontologies need to be thoroughly documented. Frequently this is done via GitHub repositories for each specific ontology (see List of OBO Foundry ontologies). The ontologies should be useful to multiple different people, and ontology developers should document
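One reason rigorously defined relations such as "part_of" matter for logical inference is transitivity: if X is part of Y and Y is part of Z, a reasoner can infer that X is part of Z. A minimal sketch, using invented example terms rather than real RO axioms:

```python
# Sketch of transitive reasoning over part_of (illustrative terms, not real axioms).
from collections import deque

part_of = {
    "nucleolus": ["nucleus"],
    "nucleus": ["cell"],
    "mitochondrion": ["cell"],
}

def all_wholes(term):
    """Return every term reachable via the transitive closure of part_of."""
    seen, queue = set(), deque(part_of.get(term, []))
    while queue:
        t = queue.popleft()
        if t not in seen:
            seen.add(t)
            queue.extend(part_of.get(t, []))
    return seen

print(sorted(all_wholes("nucleolus")))  # → ['cell', 'nucleus']
```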

Is the annotation of data from different sources using controlled vocabularies. Ideally, such controlled vocabularies take the form of ontologies, which support logical reasoning over the data annotated using the terms in the vocabulary. The formalization of concepts in the biomedical domain is especially known via the work of the Gene Ontology Consortium, a part of the OBO Foundry. This has led to

Is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to: Future work endeavours to reconstruct the now more complex tree of life. The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace

Is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function. Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been

Is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies,

Is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments. Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about

Is used to predict the structure of an unknown protein from existing homologous proteins. One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near-identical purposes and shared ancestor. Foundational Model of Anatomy The Foundational Model of Anatomy Ontology (FMA)

The Disease Ontology, the Plant Ontology, the Sequence Ontology, the Ontology for Biomedical Investigations and the Protein Ontology. The number of ontologies in OBO has grown to the order of hundreds, and they are gathered in the list of OBO Foundry ontologies. A number of different OBO Foundry ontologies have also been integrated into the Wikidata knowledge graph. This has led to

The Foundational Model of Anatomy, FMAO), by mergers of previously existing ontologies (e.g. the Cell Ontology, CL, formed from different dedicated ontologies and related parts of GO and FMAO) and by development of new ontologies based on its principles. The original set of ontologies also included the Zebrafish Anatomical Ontology (a part of the Zebrafish Information Network), the ChEBI ontology,


The Java Virtual Machine. Another tool related to the OBO effort is OBO-Edit, an ontology editor and reasoner funded by the Gene Ontology Consortium. There are also plugins for OBO-Edit which facilitate the development of ontologies, such as the semi-automatic ontology generator DOG4DAG. The OBO file format is a biology-oriented language for building ontologies. It is based on the principles of the Web Ontology Language (OWL). As

The Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many individual genetic regions that are individually weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as the fact that we don't know which genes are important, or how stable

The Protégé ontology editor and the Web Ontology Language (OWL) for building ontologies. To facilitate command-line management of ontologies in a Protégé- and OWL-compatible format, the OBO Foundry has developed the tool ROBOT (ROBOT is an OBO Tool). ROBOT aggregates functions for routine tasks in ontology development, is open source, and can be used either via the command line or as a library for any language on

The nucleotide, protein, and process levels. Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome. The principal aim of protein-level annotation
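The simplest form of nucleotide-level gene finding is scanning for open reading frames (ORFs): a start codon followed by an in-frame stop codon. The sketch below is a toy single-strand illustration; real ab initio gene finders use far richer statistical models (and handle both strands, splicing, and codon bias).

```python
# Toy ORF scanner for one strand: ATG to the first in-frame stop codon.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=2):
    """Return (start, end) half-open intervals of ORFs in all three frames."""
    orfs = []
    for frame in range(3):
        i = frame
        while i < len(dna) - 2:
            if dna[i:i + 3] == "ATG":
                for j in range(i + 3, len(dna) - 2, 3):
                    if dna[j:j + 3] in STOPS:
                        if (j - i) // 3 >= min_codons:
                            orfs.append((i, j + 3))
                        i = j  # resume scanning after this ORF
                        break
            i += 3
    return orfs

print(find_orfs("CCATGAAATTTTAGGG"))  # → [(2, 14)]
```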

The 1970s, new techniques for sequencing DNA were applied to the bacteriophages MS2 and ΦX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well-known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses, and were thus proof of the concept that bioinformatics would be insightful. In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form

The Animals in Context Ontology. An integration into OBO of OntoClean's theory of rigidity has been proposed as a step to standardize candidate ontologies. This integration would make it easier to develop software to automatically check candidates. The OBO Foundry community is also dedicated to developing tools to facilitate creating and maintaining ontologies. Most ontology developers in OBO use

The dispensable/flexible genome: a set of genes present not in all but only in one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species. As of 2013, the existence of efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders that have been identified at
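The core/dispensable split described above is essentially set arithmetic over the gene sets of the genomes studied: the core genome is their intersection, and the dispensable genome is everything in the union but not in the core. A minimal sketch with invented gene names:

```python
# Pan-genome partition as set operations (gene names are illustrative).
genomes = {
    "strain_A": {"dnaA", "gyrB", "recA", "toxX"},
    "strain_B": {"dnaA", "gyrB", "recA", "capY"},
    "strain_C": {"dnaA", "gyrB", "recA"},
}

core = set.intersection(*genomes.values())   # genes shared by every genome
pan = set.union(*genomes.values())           # complete gene repertoire
dispensable = pan - core                     # flexible genome

print(sorted(core))         # → ['dnaA', 'gyrB', 'recA']
print(sorted(dispensable))  # → ['capY', 'toxX']
```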

The OBO space, meaning that each item has a unique ontology prefix (such as CHEBI, GO, PRO) and a local numeric identifier within the ontology. The choice of a numerical ID was made in order to improve maintenance and evolution of the resources. In order to participate in the OBO Foundry, ontologies have to be orthogonal and the concepts they model must be unique within OBO, so each concept has
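The prefix-plus-local-ID scheme described above (e.g. GO:0008150, CHEBI:15377) is easy to validate mechanically. The regular expression below is an approximation of the convention for illustration, not the normative OBO identifier specification:

```python
# Sketch of parsing OBO-style identifiers: ontology prefix + numeric local ID.
import re

CURIE_RE = re.compile(r"^(?P<prefix>[A-Za-z]+):(?P<local_id>\d+)$")

def parse_obo_id(curie):
    """Split an OBO-style identifier into (prefix, local numeric ID)."""
    m = CURIE_RE.match(curie)
    if not m:
        raise ValueError(f"not an OBO-style identifier: {curie!r}")
    return m.group("prefix"), m.group("local_id")

print(parse_obo_id("GO:0008150"))   # → ('GO', '0008150')
print(parse_obo_id("CHEBI:15377"))  # → ('CHEBI', '15377')
```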

The activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process. For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region

The bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With


The biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in

The biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, and proteins, as well as biomolecular interactions. The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as

The choices an algorithm provides. Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole-genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict

The community. Naming conventions for OBO ontologies aim at making primary labels unambiguous and unique inside the ontology (and preferably inside OBO). Labels and synonyms should be written in English, avoiding the use of underscores and camel case. OBO lacks a mechanism for multilingual support, in contrast to Wikidata, which allows labels in different languages. The naming system in OBO

The complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays. Gene regulation is a complex process in which a signal, such as an extracellular signal like a hormone, eventually leads to an increase or decrease in

The development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data, and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue

The development of certain proposed principles of good practice in ontology development, which are now being put into practice within the framework of the Open Biomedical Ontologies consortium through its OBO Foundry initiative. OBO ontologies form part of the resources of the National Center for Biomedical Ontology, where they form a central component of the NCBO's BioPortal. The Open Biological and Biomedical Ontologies (OBO; formerly Open Biomedical Ontologies)

The effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of rare-variant association analysis in whole-genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole-genome sequencing studies provides an attractive solution to

The evidence of use. This criterion is important for the review process. Examples of use include linking to terms by other ontologies, use in semantic web projects, and use in annotations or other research applications. The ontologies should be developed in a way that allows collaboration with other OBO Foundry members. The ontologies should have one person responsible for the ontology who mediates interaction with

The evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to


The first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling

The fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are
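The overlap-and-merge idea behind shotgun assembly can be shown with a toy greedy assembler: repeatedly merge the two fragments with the longest suffix/prefix overlap. This is an illustration of the principle only; real assemblers use overlap graphs or de Bruijn graphs and must cope with sequencing errors and repeats.

```python
# Toy greedy shotgun assembler (error-free, repeat-free fragments assumed).
def overlap(a, b):
    """Length of the longest suffix of a that equals a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(frags):
    frags = list(frags)
    while len(frags) > 1:
        # Find the ordered pair with the longest overlap and merge it.
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(frags)
                      for j, b in enumerate(frags) if i != j)
        merged = frags[i] + frags[j][k:]
        frags = [f for n, f in enumerate(frags) if n not in (i, j)] + [merged]
    return frags[0]

print(assemble(["GGCTA", "TACCG", "ATGGC"]))  # → ATGGCTACCG
```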

The function of genes. In fact, most gene function prediction methods focus on protein sequences, as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein–protein interactions. Evolutionary biology
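The hydrophobicity claim above can be illustrated with a Kyte-Doolittle sliding window: runs of strongly hydrophobic residues flag candidate transmembrane segments. The window size and threshold below are common textbook choices, and the sequence is synthetic, so treat this as a sketch rather than a prediction tool.

```python
# Kyte-Doolittle hydropathy sliding window (illustrative parameters).
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def hydrophobic_windows(seq, window=9, threshold=1.6):
    """Start positions of windows whose mean hydropathy exceeds the threshold."""
    hits = []
    for i in range(len(seq) - window + 1):
        score = sum(KD[aa] for aa in seq[i:i + window]) / window
        if score > threshold:
            hits.append(i)
    return hits

# Synthetic protein: a hydrophobic core flanked by polar tails.
seq = "DDEEKK" + "LLVVILLVVIL" + "KKEEDD"
print(hydrophobic_windows(seq))  # → [4, 5, 6, 7, 8, 9, 10]
```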

The genes encoding all proteins, transfer RNAs, and ribosomal RNAs, in order to make initial functional assignments. The GeneMark program, trained to find protein-coding genes in Haemophilus influenzae, is constantly changing and improving. Following the goals that the Human Genome Project left to achieve after its closure in 2003, the ENCODE project was developed by the National Human Genome Research Institute. This project

The genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can then be applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods. Several approaches have been developed to analyze
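The k-means idea mentioned above can be sketched on tiny made-up expression profiles (one tuple of expression values per gene, one value per condition). This is a minimal from-scratch sketch with naive initialization; real analyses normalize the data, choose k carefully, and use robust initialization.

```python
# Minimal k-means over gene expression profiles (invented data, naive init).
def kmeans(points, k, iters=20):
    centers = points[:k]  # naive initialization: first k profiles
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

profiles = [(0.1, 0.2, 0.1), (0.0, 0.1, 0.2),   # low across all conditions
            (2.0, 2.1, 1.9), (1.8, 2.2, 2.0)]   # high: likely co-expressed
for cluster in kmeans(profiles, k=2):
    print(cluster)
```

The two high-expression profiles end up in one cluster and the two low-expression profiles in the other, which is the "co-expressed genes" grouping the text describes.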

The genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences. Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and

The genome. Furthermore, tracking of patients while the disease progresses may be possible in the future with the sequencing of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors. The expression of many genes can be determined by measuring mRNA levels with multiple techniques, including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), and various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in

The growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences from, as of 2008, more than 260,000 organisms, containing over 190 billion nucleotides. Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem, as

The import of terms from the Gene Ontology (one of the ontologies that follow the OBO principles) to the Wikidata project. The ontologies have to be available in a common formal language. In practice, that means that ontologies that are part of the OBO Foundry need to describe items using the OWL/OWL2 or OBO formats, with an RDF/XML syntax, to maximize interoperability. Terms should be unique in
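As an illustration of the common formal languages mentioned above, a term in the OBO flat-file format is written as a stanza of tag-value pairs. The stanza below is an illustrative sketch modeled on Gene Ontology entries, not copied from any actual release file:

```
[Term]
id: GO:0008150
name: biological_process
def: "A process carried out by a living organism." [illustrative definition]
relationship: part_of GO:0000000 ! illustrative parent term
```

Each tag (id, name, def, relationship) maps onto a corresponding OWL construct, which is what makes the lossless OBO-to-OWL round-trip transformations described earlier possible.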

The inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem. The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies


The integration of OBO-structured ontologies with data from other, non-OBO databases. For example, the integration of the Human Disease Ontology into Wikidata has enabled its link to the description of cell lines from the resource Cellosaurus. One of the goals of the integration of the OBO Foundry into Wikidata has been to lower the barriers for non-ontologists to contribute to and use ontologies. Wikidata

The knowledge of their specific domains. In order to ensure that new versions are kept up to date but tools that use older versions of the ontologies still function, OBO enforces a versioning system, with each ontology version receiving a unique identifier, in the format of either a date or a numbering system, plus metadata tags. The ontologies should have a clearly specified scope (the domain they intend to cover). The ontologies should have textual definitions for each item, in

The location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases. Microscopic pictures allow for the location of organelles as well as molecules, which may be the source of abnormalities in diseases. Finding the location of proteins allows us to predict what they do. This

The modeling of evolution and cell division/mitosis. Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data. Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce

The problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes. In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data

The raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing. Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence

The same time frame." Bioinformatics Bioinformatics (/ˌbaɪ.oʊˌɪnfərˈmætɪks/) is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The process of analyzing and interpreting data can sometimes be referred to as computational biology; however, this distinction between

The three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as topologically associating domains (TADs), that are organised together in three-dimensional space. Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP)

The time. In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling

The two terms is often disputed. To some, the term computational biology refers to building and using models of biological systems. Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reusable specific analysis "pipelines", particularly in the field of genomics, such as the identification of genes and single-nucleotide polymorphisms (SNPs). These pipelines are used to better understand
