Wave field synthesis

Wave field synthesis (WFS) is a spatial audio rendering technique, characterized by the creation of virtual acoustic environments. It produces artificial wavefronts, synthesized from elementary waves by a large number of individually driven loudspeakers. Such wavefronts seem to originate from a virtual starting point, the virtual sound source. In contrast to traditional phantom sound sources, the localization of virtual sound sources established by WFS does not depend on the listener's position: like a genuine sound source, the virtual source remains at a fixed position.

A few transmission channels cause a significant loss of spatial information. This spatial distribution can be synthesized much more accurately on the rendition side. Compared to conventional channel-oriented rendition procedures, WFS provides a clear advantage: virtual acoustic sources guided by the signal content of the associated channels can be positioned far beyond the conventional material rendition area. This reduces

A separate transmission of content (the dry recorded audio signal) and form (the impulse response or the acoustic model). Each virtual acoustic source needs its own (mono) audio channel. The spatial sound field in the recording room consists of the direct wave of the acoustic source and a spatially distributed pattern of mirror acoustic sources caused by reflections from the room surfaces. Reducing that spatial mirror-source distribution onto

A volume free of sources, if sound pressure and velocity are determined at all points on its surface. Therefore, any sound field can be reconstructed if sound pressure and acoustic velocity are restored at all points on the surface of its volume. This approach is the underlying principle of holophony. For reproduction, the entire surface of the volume would have to be covered with closely spaced loudspeakers, each individually driven with its own signal. Moreover,

Wave field synthesis

WFS is based on the Huygens–Fresnel principle, which states that any wavefront can be regarded as a superposition of spherical elementary waves. Therefore, any wavefront can be synthesized from such elementary waves. In practice, a computer controls a large array of individual loudspeakers and actuates each one exactly by

Is generally sufficient. Another cause of disturbance of the spherical wavefront is the truncation effect. Because the resulting wavefront is a composite of elementary waves, a sudden change in pressure can occur where the speaker row ends and no further speakers deliver elementary waves. This causes a 'shadow-wave' effect. For virtual acoustic sources placed in front of the loudspeaker arrangement, this pressure change runs ahead of

Is hardly any reverberation even in highly reflective environments. The company's largest project to date is the Sphere in the Las Vegas Valley. The venue's sound system is made up of 1,586 permanently installed X1 Matrix Arrays comprising 167,000 speaker drivers, and it combines elementary waves into common wavefronts.

Sweet spot (acoustics)

The sweet spot is a term used by audiophiles and recording engineers to describe

Is high cost. A large number of individual transducers must be placed very close together. Reducing the number of transducers by increasing their spacing introduces spatial aliasing artifacts. Reducing the number of transducers at a given spacing shrinks the emitter field and limits the representation range; outside its borders no virtual acoustic sources can be produced. Early development of WFS began in 1988 at Delft University of Technology. Further work

The X1 modules from the Berlin-based technology company Holoplot. The startup eschewed the usual restriction to a horizontal plane and installed 96 individually controlled speaker drivers in a modular system. Optimized according to WFS principles, the beams are able to deliver sound very evenly to large, arbitrarily shaped audience areas, even simultaneously with beams of different content. Because reflective surfaces are not hit unintentionally, there

The sweet spot can be adjusted dynamically to the actual position of the listener. Therefore, correct phantom source localization is possible over the whole listening area. This approach is implemented in the open-source project SweetSpotter. Massive multi-channel audio systems that apply wave field synthesis or higher-order ambisonics exhibit an extended optimal listening area instead of

The acoustic characteristics of the recording space, the acoustics of the rendition area must be suppressed. One possible solution is the use of acoustic damping, or otherwise arranging the walls in an absorbing, non-reflective configuration. A second possibility is playback within the near field. For this to work effectively, the loudspeakers must couple very closely to the hearing zone, or the diaphragm surface must be very large. In some cases,

The actual wavefront, whereby it becomes clearly audible. In signal-processing terms, this is spectral leakage in the spatial domain, caused by applying a rectangular function as a window function to what would otherwise be an infinite array of speakers. The shadow wave can be reduced by lowering the volume of the outer loudspeakers; this corresponds to using a different window function that tapers off instead of being truncated. A further and resultant problem
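The window-function analogy can be made concrete by treating the loudspeaker gains as a spatial window over the array. The following is a minimal sketch, assuming a half-cosine (Hann-style) taper over the outermost speakers; the taper length, shape, and function name are illustrative choices, not prescribed by the text:

```python
import math

def taper_weights(n_speakers, n_taper):
    """Gain weights for a finite loudspeaker row.

    A rectangular window (all gains 1.0) truncates abruptly at the
    array ends and produces the 'shadow wave'; here the outermost
    n_taper speakers on each side are instead faded with a
    half-cosine (Hann) ramp.
    """
    w = [1.0] * n_speakers
    for i in range(n_taper):
        ramp = 0.5 * (1.0 - math.cos(math.pi * (i + 1) / (n_taper + 1)))
        w[i] = ramp                    # left edge fades in
        w[n_speakers - 1 - i] = ramp   # right edge fades out symmetrically
    return w

print([round(x, 2) for x in taper_weights(10, 3)])
```

Because the outer gains fall off gradually instead of dropping from 1.0 to nothing, the spatial spectrum of the array has lower side lobes, which is exactly the spectral-leakage reduction described above.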

The focal point between two speakers, where an individual is fully capable of hearing the stereo audio mix the way it was intended to be heard by the mixer. The sweet spot is the location that forms an equilateral triangle with the two stereo loudspeakers, the stereo triangle. In the case of surround sound, this is the focal point between four or more speakers, i.e., the location at which all wavefronts arrive simultaneously. In international recommendations,
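The stereo triangle admits a direct geometric computation: for a speaker base of length b, the equilateral sweet spot lies on the perpendicular bisector of the base at a distance of (√3/2)·b. A small sketch, with illustrative names and an assumed orientation convention:

```python
import math

def stereo_sweet_spot(left, right):
    """Listener position forming an equilateral triangle with two speakers.

    The sweet spot sits on the perpendicular bisector of the speaker
    base, at a height of (sqrt(3)/2) * base_length in front of it.
    Assumes the speakers face the +y half-plane (illustrative convention).
    """
    base = math.dist(left, right)
    mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
    return (mid[0], mid[1] + math.sqrt(3) / 2 * base)

x, y = stereo_sweet_spot((-1.0, 0.0), (1.0, 0.0))
print(f"sweet spot at ({x:.2f}, {y:.2f}) m")
```

At that point the listener's distance to each speaker equals the base length, so both wavefronts arrive simultaneously and at equal level.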

The frequency response within the rendition range. Their frequency depends on the angle of the virtual acoustic source and on the angle of the listener relative to the loudspeaker arrangement: for aliasing-free rendition over the entire audio range, a spacing of the individual emitters below 2 cm would be necessary. Fortunately, the ear is not particularly sensitive to spatial aliasing. A 10–15 cm emitter spacing
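The spacing trade-off can be estimated with the common rule-of-thumb f_alias ≈ c / (2·Δx·sin θ_max), where θ_max is the largest angle of incidence to be reproduced. This formula and the numbers it yields are a back-of-the-envelope sketch, not taken from the article; note that the angle dependence it captures is why a spacing near 2 cm can suffice for the full audio range when the angles involved are restricted:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def alias_frequency(spacing_m, max_angle_deg=90.0):
    """Rule-of-thumb spatial aliasing frequency for emitter spacing Δx.

    f_alias = c / (2 * Δx * sin(θ_max)). θ_max = 90° (grazing incidence)
    is the most demanding case; smaller angles raise the limit.
    """
    return SPEED_OF_SOUND / (2.0 * spacing_m * math.sin(math.radians(max_angle_deg)))

for dx in (0.02, 0.10, 0.15):
    print(f"{dx*100:4.0f} cm spacing -> aliasing above ~{alias_frequency(dx)/1000:.1f} kHz (worst case)")
```

With 10–15 cm spacing the worst-case aliasing limit falls to roughly 1–2 kHz, which is why the rendering relies on the ear's relative insensitivity to these artifacts.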

The horizontal plane of the loudspeakers. True 3D audio is not possible with such loudspeaker rows. For sources behind the loudspeakers, the array produces convex wavefronts. Sources in front of the speakers can be rendered by concave wavefronts that focus at the virtual source inside the playback area and diverge again as a convex wave. Hence the reproduction inside the volume is incomplete; it breaks down if

The influence of the listener position, because the relative changes in angles and levels are clearly smaller than with conventional loudspeakers located within the rendition area. This extends the sweet spot considerably; it can now cover nearly the entire rendition area. WFS is thus not only compatible with conventional channel-oriented methods but can even improve their reproduction. Since WFS attempts to simulate

The listener is situated between the speakers and the virtual source. If the restriction to the horizontal plane is overcome, it becomes possible to establish a virtual copy of a genuine sound field that is indistinguishable from the real one. Changes of the listener position in the rendition area produce the same impression as a corresponding change of location in the recording room. Two-dimensional arrays can establish parallel wavefronts, which are no louder directly at

The listening area would have to be anechoic in order to avoid sound reflections that would violate the source-free volume assumption. In practice, this is hardly feasible. Because our acoustic perception is most exact in the horizontal plane, practical approaches generally reduce the array to a horizontal loudspeaker line, circle or rectangle around the listener. The origin of the synthesized wavefront is then restricted to points on

The loudspeakers than at some meters' distance. Horizontal arrays can only produce cylindrical waves, which lose 3 dB of level with each doubling of distance. Even with that restriction, listeners in wave field synthesis are no longer relegated to a sweet-spot area within the room. The Moving Picture Experts Group standardized the object-oriented transmission standard MPEG-4, which allows
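The 3 dB figure follows from cylindrical spreading, where pressure falls as 1/√r, versus 1/r (6 dB per doubling) for spherical waves. A quick numeric check, with illustrative function names:

```python
import math

def level_drop_db(r1, r2, cylindrical=True):
    """SPL drop when moving from distance r1 to r2 from the source.

    Cylindrical wave (line array): pressure ~ 1/sqrt(r) -> 10*log10(r2/r1) dB
    Spherical wave (point source): pressure ~ 1/r       -> 20*log10(r2/r1) dB
    """
    factor = 10.0 if cylindrical else 20.0
    return factor * math.log10(r2 / r1)

print(f"cylindrical, 1 m -> 2 m: {level_drop_db(1, 2):.1f} dB")
print(f"spherical,   1 m -> 2 m: {level_drop_db(1, 2, cylindrical=False):.1f} dB")
```

The halved decay rate is what keeps the level reasonably uniform across the listening area, in contrast to a point source that is clearly loudest close up.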

The most perceptible difference compared to the original sound field is the reduction of the sound field to two dimensions along the horizontal plane of the loudspeaker lines. This is particularly noticeable in the reproduction of ambience. The suppression of room acoustics in the rendition area does not complement the playback of natural acoustic ambient sources. There are undesirable spatial aliasing distortions caused by position-dependent narrow-band break-downs in

The sweet spot is referred to as the reference listening point. Different static methods exist to broaden the area of the sweet spot; a discussion of methods and their benefits can be found in Merchel et al. By means of such methods, more than one listener can enjoy the sound experience as intended by the audio engineer, including the desired phantom source locations, spectral and spatial balance, and degree of immersion. Alternatively,

The time and level at which the desired virtual wavefront would pass through its position. In that way, a genuine wavefront of a sound source may be restored from a mono signal. The basic procedure was developed in 1988 by Professor A. J. Berkhout at the Delft University of Technology. Its mathematical basis is the Kirchhoff–Helmholtz integral, which states that the sound pressure is completely determined within
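The per-loudspeaker timing and level just described can be sketched numerically. This is a minimal illustration assuming a simple point-source model with 1/r amplitude decay; the real WFS driving function also includes a frequency-dependent filter derived from the Kirchhoff–Helmholtz integral, and all names here are illustrative:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def driving_parameters(source_xy, speaker_positions):
    """Per-speaker (delay_s, gain) for a virtual point source.

    Simplified sketch: each speaker plays the mono source signal
    delayed by the travel time from the virtual source (r / c) and
    attenuated by 1/r (spherical spreading). The exact WFS driving
    function adds a spectral correction filter, omitted here.
    """
    params = []
    for sx, sy in speaker_positions:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        params.append((r / SPEED_OF_SOUND, 1.0 / max(r, 1e-6)))
    return params

# Virtual source 2 m behind a three-speaker line array on the x-axis
speakers = [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]
for delay, gain in driving_parameters((0.0, -2.0), speakers):
    print(f"delay = {delay * 1000:.3f} ms, gain = {gain:.3f}")
```

Because the outer speakers are farther from the virtual source, they fire slightly later and more quietly, and the superposition of their elementary waves approximates the desired spherical wavefront.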

The world's largest speaker system, with 2,700 loudspeakers on 832 independent channels. Research trends in wave field synthesis include the consideration of psychoacoustics to reduce the necessary number of loudspeakers, and the implementation of complicated sound radiation properties, so that a virtual grand piano sounds as grand as in real life. A practical breakthrough of WFS technology came only with

Was carried out from January 2001 to June 2003 in the context of the CARROUSO project of the European Union, which included ten institutes. The WFS sound system IOSONO was developed by the Fraunhofer Institute for Digital Media Technology (IDMT) at the Technische Universität Ilmenau in 2004. The first live WFS transmission took place in July 2008, recreating an organ recital at Cologne Cathedral in lecture hall 104 of the Technische Universität Berlin. The room contains
