International Image Interoperability Framework

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

The International Image Interoperability Framework (IIIF, spoken as 'triple-I-eff') defines several application programming interfaces that provide a standardised method of describing and delivering images over the web, as well as "presentation based metadata" (that is, structural metadata) about structured sequences of images. If institutions holding artworks, books, newspapers, manuscripts, maps, scrolls, single sheet collections, and archival materials provide IIIF endpoints for their content, any IIIF-compliant viewer or application can consume and display both the images and their structural and presentation metadata.

There are many digitisation programmes that have resulted in a particular collection's content being exposed on the web in a particular viewer application, but these various collections have not typically been interoperable with one another, and end users or institutions cannot substitute a viewer of their choice to consume the digitised material. The IIIF aims to cultivate shared technologies for both client and server to enable interoperability across repositories, and to foster a market in compatible servers and viewing applications.

Microsoft Live Labs created an application for the App Store called Seadragon Mobile. It runs over the internet and includes Deep Zoom content in the following categories: art, history, maps, photos, Photosynth (to which anybody can upload), space, and technology & web.

The Manifest contains references to Image API endpoints. A viewer application consuming the manifest can produce a coherent user experience for the artefact by implementing features such as page-by-page navigation, deep zooming into images, and annotations on images. The IIIF Search API allows for "searching annotation content within a single IIIF resource, such as a Manifest, Range or Collection."
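
As a rough illustration of how a client might use such a search service, the sketch below queries a hypothetical Content Search endpoint advertised by a manifest and prints the targets of the matching annotations; the service URL is invented, and the exact response fields ("resources" versus "items") depend on the Search API version, so treat them as assumptions.

    import requests

    # Hypothetical search service URL; a real one is advertised in the
    # manifest's "service" block. The q parameter carries the search term.
    service = "https://example.org/iiif/book1/search"
    response = requests.get(service, params={"q": "tiger"}).json()

    # Search API 1.x returns matches under "resources"; 2.x uses "items".
    for annotation in response.get("resources", response.get("items", [])):
        # Each annotation points at the region of the resource where the
        # text was found ("on" in 1.x, "target" in 2.x).
        print(annotation.get("on") or annotation.get("target"))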

In November 2009, 352 Media Group, a Silverlight developer in the Microsoft Silverlight Partner Program, created an example of Deep Zoom using Microsoft Silverlight version 3. It is online at 352 Media Group's website. The Winston Churchill Deep Zoom mosaic, created by Silverlight developers Shoothill, features as both an online interactive deep zoom and a standalone deep zoom which forms part of the Churchill exhibit in the Churchill War Rooms in Whitehall.

The IIIF Presentation API specifies a web service that returns JSON-LD structured documents that together describe the structure and layout of a digitized object or other collection of images and related content. An institution would publish a Manifest (a JSON-LD document) that describes the structure of each book, artwork, manuscript or other artefact.
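
A heavily simplified sketch of the shape of such a Manifest is shown below as a Python dictionary; the identifiers are hypothetical placeholders, and a real manifest carries more required fields (painting annotations linking each Canvas to an Image API service, metadata, and so on).

    manifest = {
        "@context": "http://iiif.io/api/presentation/3/context.json",
        "id": "https://example.org/iiif/book1/manifest",
        "type": "Manifest",
        "label": {"en": ["Example Book"]},
        # One Canvas per page (or leaf), in reading order. Each Canvas would
        # in turn carry a painting annotation whose body is an Image API
        # resource serving the page image.
        "items": [
            {
                "id": "https://example.org/iiif/book1/canvas/p1",
                "type": "Canvas",
                "width": 3000,
                "height": 4000,
            },
        ],
    }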

The IIIF Image API specifies a web service that returns an image in response to a standard HTTP or HTTPS request. The URI can specify the region, size, rotation, quality characteristics and format of the requested image. A URI can also be constructed to request basic technical information about the image to support client applications. One major use of an Image API endpoint for a given high resolution source image is to allow clients to request low resolution tiles for use in a Deep Zoom style viewing tool such as OpenSeadragon.
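
A minimal sketch of composing such request URIs follows; the base URL and identifier are hypothetical, the path segments follow the {region}/{size}/{rotation}/{quality}.{format} pattern the API defines, and info.json returns the technical description of the image.

    BASE = "https://example.org/iiif"   # hypothetical Image API endpoint
    IDENTIFIER = "book1-page1"          # hypothetical image identifier

    def image_uri(region="full", size="max", rotation="0",
                  quality="default", fmt="jpg"):
        """Compose {base}/{id}/{region}/{size}/{rotation}/{quality}.{format}."""
        return f"{BASE}/{IDENTIFIER}/{region}/{size}/{rotation}/{quality}.{fmt}"

    # The whole image at maximum size, unrotated, default quality, as JPEG.
    print(image_uri())
    # A 512x512 region starting at (1024, 2048), scaled to 256 pixels wide.
    print(image_uri(region="1024,2048,512,512", size="256,"))
    # Basic technical information (dimensions, available sizes, tiling).
    print(f"{BASE}/{IDENTIFIER}/info.json")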

The main difference is that with Google Maps the actual details on the image change from one zoom level to another, while with Deep Zoom the same image is displayed at each zoom level. Seadragon Software, formerly Sand Codex, first created the Seadragon technology and its implementation of what is now called Deep Zoom.

A use case for IIIF would be to allow a user to view a manuscript that has been dismembered in the past, with its leaves now scattered across various collections. If each collection exposes its digitized images via the Image API, then a scholar can construct and publish a manifest that digitally recombines the leaves to present a single coherent user experience for the manuscript in any compatible viewer.
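
A sketch of that recombination step, under the assumption that each holding institution exposes its leaf through an Image API endpoint (the URLs below are invented placeholders), might look like this:

    # Image API endpoints for the scattered leaves, one per holding institution.
    leaves = [
        "https://library-a.example.org/iiif/ms-leaf-04",
        "https://museum-b.example.org/iiif/ms-fragment-17",
    ]

    canvases = []
    for n, image_service in enumerate(leaves, start=1):
        canvases.append({
            "id": f"https://scholar.example.org/reunited-ms/canvas/{n}",
            "type": "Canvas",
            # A painting annotation here would reference
            # f"{image_service}/full/max/0/default.jpg" and the service itself.
        })

    manifest = {
        "@context": "http://iiif.io/api/presentation/3/context.json",
        "id": "https://scholar.example.org/reunited-ms/manifest",
        "type": "Manifest",
        "label": {"en": ["Reunited manuscript (virtual reconstruction)"]},
        "items": canvases,
    }

Publishing such a document at a stable URL is enough for any compliant viewer to present the scattered leaves as one object.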

Photosynth is based on Photo Tourism, a research project by University of Washington graduate student Noah Snavely. Shortly after Microsoft's acquisition of Seadragon in early 2006, that team began work on Photosynth, under the direction of Seadragon founder Blaise Agüera y Arcas. Microsoft released a free tech preview version on November 9, 2006. Users could view models generated by Microsoft or the BBC, but not create their own models at that time.

In 2010, Shoothill built the Sumatran Tiger Deep Zoom, the largest seen to date, for worldwide conservation charity Fauna and Flora International, featuring thousands of images of endangered species. An early example of Deep Zoom-like technology was implemented at the Department of Maori Affairs in New Zealand in 1997. The technology was used to display Maori land ownership.

At a high level, a collection is a number of image thumbnails whose locations are kept track of by the .dzc/.xml file; when zooming into an image, the viewer accesses greater-resolution tiles. A DZC's structure is similar to that of a DZI; the .dzc/.xml file defines the collection, and the subdirectory of folders maps to the DZI file structure, each with its own set of .dzi/.xml files and image tiles.
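
The short sketch below lists the sub-images referenced by a collection file; the element and attribute names (an Items element containing I entries whose Source attribute points at each per-image .dzi) reflect the commonly seen layout of the format and should be treated as assumptions rather than a definitive schema, and the file path is hypothetical.

    import xml.etree.ElementTree as ET

    tree = ET.parse("collection.dzc")                  # hypothetical path
    root = tree.getroot()
    # Handle the XML namespace, if the file declares one.
    ns = root.tag.split("}")[0] + "}" if "}" in root.tag else ""

    for item in root.iter(f"{ns}I"):                   # one entry per sub-image
        print(item.get("Id"), item.get("Source"))      # Source names the .dzi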

Photosynth

Photosynth is a discontinued app and service from Microsoft Live Labs and the University of Washington that analyzes digital photographs and generates a three-dimensional model of the photos and a point cloud of a photographed object. Pattern recognition components compare portions of images to create points, which are then compared to convert the image into a model. Users are able to view and generate their own models using a software tool available for download at the Photosynth website.

Though used in the proprietary Deep Zoom, the DZI format is open and can be used by anyone. A DZI has two parts: a DZI file (with either a .dzi or .xml extension) and a subdirectory of image folders. Each folder in the image subdirectory is labeled with its level of resolution. Higher numbers correspond to a higher resolution level; inside each folder are the image tiles corresponding to that level of resolution, numbered consecutively in columns from top left to bottom right. A DZC is a collection of some number of DZIs linked and referenced by a DZC file (with either a .dzc or .xml extension).
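
A small sketch of how a client could compose the path of one tile in that layout is shown below; the "_files" directory suffix and the column_row tile naming follow the usual Deep Zoom convention, while the image name, level and format here are made up.

    import os

    def tile_path(name, level, column, row, fmt="jpg"):
        # e.g. bike_files/10/3_2.jpg : level folder, then column_row tile
        return os.path.join(f"{name}_files", str(level), f"{column}_{row}.{fmt}")

    print(tile_path("bike", level=10, column=3, row=2))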

This first step of analyzing the photographs is extremely computationally intensive, but only has to be performed once on each set of photographs. The second step involves the display of and navigation through the 3D point cloud of features identified in the first step.
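
Much of the first step's cost comes from jointly refining the estimated camera poses and 3D point positions so that reprojection error is minimised. The toy sketch below illustrates that idea on synthetic, noise-free data with SciPy's generic least-squares solver; it is a stand-in for the real pipeline, which handles many cameras, outliers and sparse Jacobians, and every value in it is invented.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    # Shared (hypothetical) camera intrinsics.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    rng = np.random.default_rng(1)
    points3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], (20, 3))
    cam_rvecs = np.zeros((2, 3))                               # two cameras
    cam_tvecs = np.array([[0.0, 0.0, 0.0], [-0.5, 0.0, 0.0]])

    def project(points, rvec, tvec):
        cam = Rotation.from_rotvec(rvec).apply(points) + tvec  # world -> camera
        pix = cam @ K.T
        return pix[:, :2] / pix[:, 2:]                         # perspective divide

    observations = np.stack([project(points3d, r, t)
                             for r, t in zip(cam_rvecs, cam_tvecs)])

    def residuals(params):
        rvecs = params[:6].reshape(2, 3)
        tvecs = params[6:12].reshape(2, 3)
        pts = params[12:].reshape(-1, 3)
        pred = np.stack([project(pts, r, t) for r, t in zip(rvecs, tvecs)])
        return (pred - observations).ravel()

    # Start from a slightly perturbed guess and refine everything jointly.
    x0 = np.concatenate([cam_rvecs.ravel(), cam_tvecs.ravel(), points3d.ravel()])
    x0 = x0 + rng.normal(scale=0.01, size=x0.shape)
    result = least_squares(residuals, x0)
    print("final reprojection cost:", result.cost)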

This technology was then absorbed into Microsoft Live Labs when Seadragon Software was acquired. Engineers from Seadragon now work with Microsoft to integrate their work into technology such as Silverlight and Photosynth. The most famous implementation of Deep Zoom was probably the first: the memorabilia collection at the Hard Rock website. Conceived and designed by Duncan/Channon and built by Vertigo, it was demonstrated for the first time in March 2008 at the Microsoft MIX convention in Las Vegas.

The interest point detection step identifies specific features, for example the corner of a window frame or a door handle. Features in one photograph are then compared to and matched with the same features in the other photographs. Thus, photographs of the same areas are identified.
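
A sketch of this detection-and-matching step follows, using OpenCV's SIFT implementation and a brute-force matcher with a ratio test as a stand-in for Microsoft Research's proprietary algorithm; the image file names are hypothetical.

    import cv2

    img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect interest points (corners of window frames, door handles, ...)
    # and compute a descriptor for each.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors between the two photographs and keep only clearly
    # distinctive matches (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} features matched between the two photographs")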

The Deep Earth project is described by its creators as "a community project focused on creating a rich interactive mapping control using Silverlight2 Deep Zoom. Concentrating on Microsoft Virtual Earth imagery and data the project offers team members the opportunity to learn and share while creating something cool and useful." A paintings collection project at http://galleryzoom.co.uk/ shows 1000 high resolution/sensor images, individually indexed (using Deep Zoom Composer). Blaise Aguera y Arcas gave a demonstration of Seadragon and Photosynth at the 2007 TED conference.

Microsoft teamed up with NASA on August 6, 2007, allowing users to preview its Photosynth technology showing the Space Shuttle Endeavour. On August 20, 2007, a preview showing the tiles of Endeavour during the backflip process was made available for viewing. On August 20, 2008, Microsoft officially released Photosynth to the public, allowing users to upload their images and generate their own Photosynth models. In March 2010, Photosynth added support for gigapixel panoramas stitched in Microsoft ICE. The panoramas use Seadragon-based technology similar to the system already used in synths.

The DZC is used in Microsoft's Pivot, but not in Seadragon per se. Sparse images are a sub-classification of the DZI file type. A sparse image is normally a number of separate photographs with varying resolution levels that have been placed in a single DZI instead of a DZC. Sparse images have the same file structure as a DZI and differ only in that there is not a single "highest resolution" level for the entire DZI.

On 20 December 2017, Photosynth returned as a feature of the Microsoft Pix app. In the development of Microsoft Flight Simulator, Microsoft's Photosynth technology returned to recreate buildings and terrain across the entire world. The Photosynth technology works in two steps. The first step involves the analysis of multiple photographs taken of the same area. Each photograph is processed using an interest point detection and matching algorithm developed by Microsoft Research which is similar in function to UBC's Scale-invariant feature transform (SIFT).

Deep Zoom

Deep Zoom is a technology developed by Microsoft for efficiently transmitting and viewing images. It allows users to pan around and zoom in on a large, high-resolution image or a large collection of images. It reduces the time required for initial load by downloading only the region being viewed, or only at the resolution it is displayed at.

The latest generation of photosynths is easy to capture, as photographs taken by any regular digital camera or mobile phone can be uploaded to Photosynth. Users have the option to geotag their digital shots on sites such as Flickr and then upload them to Photosynth.

This is done with the publicly downloadable Photosynth viewer. The viewer resides on a client computer and maintains a connection to a server that stores the original photographs. It enables a user to, among other things, see any of the photographs from their original vantage point. It incorporates the Deep Zoom technology Microsoft obtained through its acquisition of Seadragon in January 2006. The Seadragon technology enables smooth zooming into the high-resolution photographs without downloading them to the user's machine.

The Photosynth Direct3D-based viewing software is available only for the Windows 7, Windows Vista and Windows XP operating systems. However, the team released a Silverlight version of the viewer, which has succeeded the D3D viewer as the main option to view photosynths. As of March 2009, user-uploaded Photosynth collections were available for viewing on iPhones using iSynth (3D) or Seadragon Mobile (2D only). The Photosynth application was also available from the App Store to download on iPod Touch and iPhone.

The Image API was proposed in late 2011 as a collaboration between The British Library, Stanford University, the Bodleian Libraries (Oxford University), the Bibliothèque nationale de France, Nasjonalbiblioteket (National Library of Norway), the Los Alamos National Laboratory Research Library, and Cornell University. Version 1.0 was published in 2012. Version 1.0 of the Presentation API was published in 2013, and of the Search API in 2016.

Subsequent regions are downloaded as the user pans to (or zooms into) them; animations are used to hide any jerkiness in the transition. The libraries are also available on other platforms, including Java and Flash. The Deep Zoom file format is very similar to the Google Maps image format, where images are broken into tiles and then displayed as required. The tiling typically follows a quadtree pattern of increasing image resolution (in other words, twice the zoom and twice the resolution).
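
The sketch below works through that quadtree arithmetic for a hypothetical 8192 x 8192 image with 256-pixel tiles: each level doubles the linear resolution of the one below it, so the tile count grows roughly fourfold per level, and a viewer only ever needs the tiles that intersect the current viewport.

    import math

    TILE = 256
    WIDTH, HEIGHT = 8192, 8192          # hypothetical full-resolution image

    max_level = math.ceil(math.log2(max(WIDTH, HEIGHT)))
    for level in range(max_level + 1):
        scale = 2 ** (max_level - level)        # halve dimensions per step down
        w, h = math.ceil(WIDTH / scale), math.ceil(HEIGHT / scale)
        cols, rows = math.ceil(w / TILE), math.ceil(h / TILE)
        print(f"level {level:2d}: {w:5d} x {h:5d} px, {cols * rows} tiles")

    def visible_tiles(x, y, view_w, view_h):
        """Tile (column, row) pairs that intersect a viewport at one level."""
        first_col, first_row = x // TILE, y // TILE
        last_col, last_row = (x + view_w - 1) // TILE, (y + view_h - 1) // TILE
        return [(c, r)
                for r in range(first_row, last_row + 1)
                for c in range(first_col, last_col + 1)]

    # Only these tiles need to be fetched for a 1280 x 720 view at this level.
    print(visible_tiles(1000, 600, 1280, 720))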

By analyzing the position of matching features within each photograph, the program can identify which photographs belong on which side of others. By analyzing subtle differences in the relationships between the features (angle, distance, etc.), the program identifies the 3D position of each feature, as well as the position and angle at which each photograph was taken. This process is known scientifically as bundle adjustment and is commonly used in the field of photogrammetry, with similar products available such as Imodeller and D-Sculptor.
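
The sketch below illustrates that recovery step for a single pair of photographs, using OpenCV's essential-matrix estimation rather than Photosynth's own solver; synthetic matched points keep it self-contained, and the intrinsic matrix is invented.

    import numpy as np
    import cv2

    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])               # hypothetical intrinsics

    # Synthetic scene points seen by two cameras: camera 1 at the origin,
    # camera 2 shifted 0.5 units along x. Real input would be matched
    # feature coordinates from the previous step.
    rng = np.random.default_rng(0)
    pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], (50, 3))

    def project(points):
        pix = points @ K.T
        return (pix[:, :2] / pix[:, 2:]).astype(np.float32)

    pts1 = project(pts3d)
    pts2 = project(pts3d - np.array([0.5, 0.0, 0.0]))

    # Estimate the essential matrix and decompose it into the relative
    # rotation and (unit-length) translation between the two cameras.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    print("recovered rotation:\n", np.round(R, 3))
    print("recovered translation direction:", np.round(t.ravel(), 3))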

In July 2015, Microsoft announced it would be retiring the Photosynth mobile apps. As Photosynth prepared to shut down in early 2017, Mapillary, a crowdsourced street-level imaging platform, reached out to the Photosynth community with their Photosynth-to-Mapillary blog post, and the official Photosynth Twitter account suggested users "check them out". On 6 February 2017, Microsoft decommissioned the Photosynth website and services.

In May 2012, Microsoft released a Photosynth app for its mobile platform, Windows Phone. On July 10, 2015, Microsoft announced that it was retiring the Photosynth mobile apps, removing them from its stores, and no longer supporting or updating them. While the Photosynth platform was shut down in early 2017, its features re-appeared in the fall within the Microsoft Pix app for iOS; however, as of late 2020, the Photosynth features appear to no longer be part of the Microsoft Pix app.

In 2010, Microsoft Live Labs partnered with the University of California, Berkeley to create ChronoZoom, a Deep Zoom-powered time visualization tool that pushed the limits of Deep Zoom, since it required zooming from the scale of 13 billion years down to a single day. The project has since graduated to development under Microsoft Research. Another example is the Deep Earth project.

The file format used by Deep Zoom (as well as Photosynth and Seadragon Ajax) is XML-based. Users can specify a single large image (DZI) or a collection of images (DZC). It also allows for "sparse images", where some parts of the image have greater resolution than others; an example can be found on the Seadragon Ajax home page, where the bike image displayed is a sparse image.
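
As a rough sketch, the descriptor of a single image can be read with a few lines of XML parsing; the file name below is hypothetical, and the Image/Size elements with TileSize, Overlap, Format, Width and Height attributes follow the commonly published single-image (.dzi) layout, so treat them as assumptions.

    import xml.etree.ElementTree as ET

    root = ET.parse("bike.dzi").getroot()
    # Handle the XML namespace, if the descriptor declares one.
    ns = root.tag.split("}")[0] + "}" if "}" in root.tag else ""
    size = root.find(f"{ns}Size")

    print("tile size:", root.get("TileSize"),
          "overlap:", root.get("Overlap"),
          "format:", root.get("Format"))
    print("dimensions:", size.get("Width"), "x", size.get("Height"))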
