Direct-to-Indirect Acoustic Radiance Transfer


Lakulish Antani, Anish Chandak, Micah Taylor and Dinesh Manocha

Abstract—We present an efficient algorithm for simulating diffuse reflections of sound in a static scene. Our approach is built on recent advances in precomputed light transport techniques for visual rendering and uses them to develop an improved acoustic radiance transfer technique. We precompute a direct-to-indirect acoustic transfer operator for a scene, and use it to map direct sound incident on the surfaces of the scene to multi-bounce diffuse indirect sound, which is gathered at the listener to compute the final impulse response. Our algorithm decouples the transfer operator from the source position, so we can efficiently update the acoustic response at the listener when the source moves. We highlight its performance on various benchmarks and observe significant speedups over prior methods based on acoustic radiance transfer.

Index Terms—sound propagation, radiosity, virtual reality, precomputed transport

1 INTRODUCTION

Sound rendering can augment visual rendering and provide an enhanced spatial sense of presence. Some of the driving applications of sound rendering include video games, architectural acoustics and VR simulations.

The modeling of sound propagation effects needs to account for different wave propagation phenomena such as specular reflections, diffuse reflections, edge diffraction and interference. In this paper, we focus on modeling diffuse reflections. Many objective [1], [2] and perceptual [3] studies have demonstrated the importance of diffuse reflections in sound propagation. Further, it is computationally challenging to model high orders of diffuse reflection. Hence, modeling diffuse reflections for sound propagation is an active area of interest in many interactive sound rendering applications.

Sound propagation algorithms can be broadly classified into wave-based and geometric methods.
Wave-based methods numerically solve the acoustic wave equation. However, their complexity is proportional to the volume of the scene and the fourth power of the maximum frequency of sound; therefore they can be very slow for large acoustic spaces or high-frequency sound sources. Geometric methods approximate sound waves by rays. Two standard methods used to simulate diffuse sound reflections are based on ray (or volume) tracing and radiance transfer.

(L. Antani, A. Chandak, M. Taylor and D. Manocha are with the Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599.)

Our approach is motivated by recent developments in global illumination based on precomputed light transport algorithms [4], [5], [6]. Specifically, our work is based on direct-to-indirect transfer algorithms for visual rendering, which map direct light incident on the surfaces of a scene to indirect light on the surfaces of the scene after multiple bounces.

Main Results. We present a new algorithm for modeling diffuse reflections of sound based on the direct-to-indirect transfer approach. The algorithm computes an acoustic transfer operator in matrix form which is decoupled from both the source and the listener positions, and can efficiently update the acoustic response at the listener whenever the source moves.

The algorithm approximates the transfer matrix using the singular value decomposition (SVD) to perform higher-order diffuse reflections. We show that this approximation reduces the memory requirements and increases the performance of our algorithm.

We highlight the performance of our algorithm on various models. In practice, it is much faster than prior methods based on radiance transfer. To the best of our knowledge, it is the first approach that can perform high orders of diffuse reflections in static scenes with moving sources and listeners at almost interactive rates.

The rest of this paper is organized as follows. We briefly survey related work in Section 2, and discuss mathematical prerequisites in Section 3. We present our algorithm in Section 4 and discuss implementation details in Section 5. We present experimental results in Section 6 and conclude in Section 7.

2 RELATED WORK

Wave-based Acoustics. The propagation of sound in a medium is described by the acoustic wave equation, a second-order partial differential equation [7]. Several techniques are known for numerically solving the wave equation [8], [9] and accurately modeling sound propagation in a scene. Diffuse reflections can be modeled by performing the simulation on a grid fine enough to capture the detailed "roughness" of the surfaces that results in acoustic wave scattering [10]. However, despite recent advances [9], these methods can be rather slow and are mainly limited to scenes with static sources. Precomputation-based methods have recently been developed [11] that use a numerical wave equation solver to compute the acoustic response of a scene from several sampled source positions; at run-time these responses are interpolated given the actual position of a moving source. These methods are fast, but require large amounts of precomputed data.

Geometric Acoustics. Most interactive systems model sound waves in terms of rays or 3D volumes. These geometric acoustics techniques cannot accurately solve the wave equation, and cannot easily model all kinds of propagation effects, but allow efficient simulation of early reflections. However, geometric techniques have trouble handling some acoustic phenomena such as finite-size diffracting edges, or absorbers with complex boundary conditions.

Methods based on ray tracing [12], [13] can model both diffuse and specular reflections of sound.
Since early specular reflections provide the listener with important directional cues, specialized techniques have been developed for modeling specular reflections, such as volume tracing [14], [15] and the image source method [16], [17]. For static scenes, which frequently arise in architectural acoustics and virtual environments, algorithms based on acoustic radiosity [18], [19], [20] or radiance transfer methods can be used to model reflections from surfaces with arbitrary bidirectional reflectance distribution functions (BRDFs) [21], [22]. Many techniques have also been designed to model edge diffraction [23], [24], [25].

Precomputed Light Transport. Radiosity [26] is the classic precomputed light transport algorithm. However, it computes a full solution that has to be recomputed whenever the light source moves. In contrast, precomputed radiance transfer (PRT) algorithms decouple light transport effects from the light source configuration by computing a linear operator that defines how a variable light source configuration affects the radiances at surface sample points. PRT techniques can support both distant [4], [27] and local [28] source configurations.

Direct-to-indirect transfer algorithms [5], [6] are one class of precomputed light transport algorithms. These algorithms compute linear operators which map direct light incident on the surface samples to multi-bounce indirect light at the samples. They are designed to handle diffuse reflections, and some of them can also support limited glossy reflections. Our approach is based on applying these ideas to sound propagation.

3 PRELIMINARIES

This section briefly describes the mathematical background on which our algorithm is based.

3.1 Sound Rendering vs. Visual Rendering

Light transport simulation is concerned with the steady-state values of radiance over the surface of the scene, since light travels fast enough (3 × 10^8 m/s) that transient radiance values are not observed and can be ignored. However, the speed of sound in air is much slower (340 m/s), and hence it is important to compute time-varying radiances over the surface.

Fig. 1. Parts of a typical impulse response [30] (absolute values shown). The early response is dominated by specular reflections and diffraction; the late reverberation is dominated by diffuse reflections [29].

Furthermore, sound waves have much larger wavelengths than light waves, and are comparable in size to obstacles in typical architectural and game scenes. Therefore, diffraction plays an important role in sound propagation, and it must be modeled in order to generate plausible sounds.

The computation of sound propagation paths takes into account the knowledge of sound sources, listener locations, the 3D model of the environment, material absorption and scattering properties, and air absorption. Given the signal emitted by a sound source (i.e., a time-varying pressure wave), the signal heard by a listener (after modeling reflections, diffractions and interference) is typically computed using impulse responses (IRs). An IR is the signal received at the listener if the source emits a unit impulse at t = 0. Since room acoustics are modeled as a linear time-invariant system [29], the response at the listener to an arbitrary source sound can be computed by convolving the source sound with the IR.

IRs can be divided into three parts [30]: (a) direct, sound arriving directly from the source; (b) early response, sound reaching the listener soon after the direct sound, via a small number of reflections or diffractions; and (c) late reverberation, the gradual decay of amplitude after the early response (see Figure 1). Late reverberation gives an impression of the size of an acoustic space, and is mostly composed of diffuse reflections [29]; therefore in this paper we focus on computing higher-order diffuse reflections. Our approach can be used in tandem with existing approaches for modeling specular reflections and edge diffraction.

3.2 Impulse Response Representation

The IR at a point is a function of time.
Attenuation and delay can be applied using a unified formulation [22] by representing our IRs in Fourier space. For a continuous function f : [0, T] → R, the Fourier transform projects f into a sinusoidal basis (with basis vectors of the form e^(ιω_i t)). For discrete signals, we compute the Discrete Fourier Transform (DFT) using the Fast Fourier Transform (FFT) algorithm. The signal is discretized into N time-domain samples, where the value of N is chosen based on the desired audio sampling frequency and the length of the IR modeled (which could be tuned based on the expected reverberation time of a room).

Since the Fourier transform is linear, attenuations and accumulation of IRs can be performed easily (n denotes a discrete sample index):

F(a f1(n) + b f2(n)) = a F(f1(n)) + b F(f2(n)).   (1)

Unlike in the time domain, in the frequency domain delays can also be applied using a scale factor, since the Fourier basis vectors are eigenvectors of linear time-invariant operators:

F(f(n − Δn)) = e^(−ιω Δn) F(f(n)).   (2)

Note that care must be taken to ensure that the delays align on time-domain sample boundaries, otherwise the inverse Fourier transform will contain non-zero imaginary parts. An alternative solution to this issue would be to use fractional delays [31]; we choose to address this in future work in the interests of simplicity of implementation.

A unit impulse emitted by the source at time t = 0 has all Fourier coefficients set to 1. Computing the IR using the above expressions for delay and attenuation results in a frequency-domain signal. Computing the inverse Fourier transform of this signal using the frequency-domain replication method described by Siltanen et al. [22] yields a periodic function which is approximately equal to the time-domain IR at the listener. Note that this method does not compute the steady-state acoustic response, but the time-varying impulse response. The key to this is the frequency-domain delay equations described above.

3.3 Acoustic Rendering Equation

The propagation of sound in a scene can be modeled using an extension of the standard graphics rendering equation [32], called the acoustic rendering equation [21]:

L(x, Ω) = L0(x, Ω) + ∫_S R(x, x′, Ω) L(x′, (x − x′)/|x − x′|) dx′,   (3)

where L is the total outgoing radiance, L0 is the emitted radiance and R is the reflection kernel, which describes how radiance at point x′ influences radiance at point x. Ω is the exitant radiance direction at x; the incident radiance direction at x is implicit in the specification of x′:

R(x, x′, Ω) = ρ(x, x′, Ω) G(x, x′) V(x, x′) P(x, x′).   (4)

Here, ρ is the BRDF of the surface at x, G is the form factor between x and x′, V is the point-to-point visibility function, and P is a propagation term [21] that accounts for propagation delays (as per Equation 2).

The radiances in Equation 3 are IRs; the time variable t is hidden for the sake of brevity. This added dimension of time complicates the storage and processing requirements of algorithms based on the acoustic rendering equation.

4 ALGORITHM

Our algorithm provides two main improvements over the state-of-the-art acoustic radiance transfer algorithms: (a) we decouple the source position from the precomputed data by computing an acoustic transfer operator, as opposed to simply precomputing the IRs at surface samples due to a sound source as per the method of Siltanen et al. [22]; and (b) we use the SVD to compress the transfer operator and quickly compute higher-order reflections. The rest of this section details how our algorithm achieves these improvements over the state-of-the-art.

Our overall approach is as follows (see Figure 2 and Algorithms 1 and 2):

- Preprocessing. We sample the surface of the scene and compute a transfer operator which models one or more orders of diffuse reflections of sound among the surface samples.
- Run-time. First, we shoot rays from the source to determine the direct IR at each surface sample. Next, we apply the transfer operator to the direct response to obtain the indirect response. Finally, we shoot rays from the listener and gather the direct and indirect responses from each surface sample hit by a ray. These are added to obtain the final IR at the listener.

Algorithm 1 Preprocessing
  P ← set of samples on scene surface (p_i denotes the ith element of P)
  T ← 0
  for all i ∈ [0, |P| − 1] do
    for all l ∈ [0, N_rays] do
      r ← random path traced from p_i
      p_j ← final sample hit by r
      T_ij ← T_ij + IR contribution along r
    end for
  end for

Algorithm 2 Run-time
  l_0 ← direct IR from source at each sample
  l_n ← T · l_0
  IR ← gather from (l_0 + l_n)

4.1 Acoustic Transfer Operator

The acoustic transfer operator is expressed over a set of p samples chosen over the surface of the scene. The transfer operator is computed in terms of the responses at all surface samples to impulses emitted from every other surface sample. We use Fourier coefficients to represent the sample-to-sample IRs. Let there be f Fourier coefficients per surface sample. All subsequent computations are performed on each Fourier coefficient independently.

For each frequency ω_m, we define the acoustic radiance vector l(ω_m), which contains p elements that represent the mth Fourier coefficients of the IRs at each surface sample.

Fig. 2. Overview of our algorithm. In a precomputation step, we sample the surfaces of the scene, and compute a one-bounce transfer operator for these samples (T). We then use the SVD to compute the modes of the transfer operator. At runtime, we shoot rays from the source (which may move freely) and compute direct IRs at the surface samples. We then apply the transfer operator (with a user-specified number of modes retained) repeatedly to quickly obtain the multi-bounce indirect IRs at the surface samples. We then compute the final IR at the listener position in a final gathering step.

For the sake of brevity, we shall omit the parameter ω_m from the equations in the rest of the paper where it is obvious from the context.

The Neumann series expansion of Equation 3 expressed in matrix form is:

l_{n+1}(ω_m) = T(ω_m) l_n(ω_m),   (5)

where l_n(ω_m) contains the mth Fourier coefficients of the IRs at each surface sample after n reflections. The transfer matrix T(ω_m) models the effect of one diffuse reflection. The (i, j)th element of T(ω_m) describes how the mth Fourier coefficient at surface sample j affects the mth Fourier coefficient at surface sample i after one diffuse reflection. The entries in row i of T are computed by tracing paths sampled over the hemisphere at surface sample i; the delays and attenuations along each path terminating at any other surface sample j are added to the entry T_ij [22]. We can compute a multi-bounce transfer operator with n orders of reflection as the matrix sum T_n = T + T^2 + · · · + T^n.

Existing acoustic radiance transfer algorithms [22] implicitly apply the transfer operator by performing path tracing from the source and precomputing the IR at each surface sample after several orders of reflection. This approach has the disadvantage of having to repeat the entire process if the source moves. We eliminate this disadvantage by precomputing T, and multiplying it with the direct response of the source at run-time.
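As an illustration of Equation 5 and the multi-bounce sum T_n = T + T^2 + · · · + T^n, the following sketch (with made-up random data, not a real transfer operator) applies a one-bounce operator iteratively for a single Fourier coefficient and checks it against the closed-form matrix sum:

```python
import numpy as np

# Sketch of applying the one-bounce transfer matrix T(omega_m) repeatedly,
# per Equation 5: l_{n+1} = T l_n. For one Fourier coefficient we hold a
# p-vector of surface-sample responses. Data here is random, for demo only.
rng = np.random.default_rng(0)
p = 8                                    # number of surface samples (tiny)
T = (rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))) * 0.05
l0 = rng.standard_normal(p) + 1j * rng.standard_normal(p)   # direct IRs

orders = 4
l_n = l0.copy()
indirect = np.zeros(p, dtype=complex)
for _ in range(orders):                  # accumulate T l0 + T^2 l0 + ...
    l_n = T @ l_n
    indirect += l_n

# Equivalent closed form: (T + T^2 + ... + T^n) l0
T_multi = sum(np.linalg.matrix_power(T, n) for n in range(1, orders + 1))
assert np.allclose(indirect, T_multi @ l0)
```

In a full implementation this loop would run once per Fourier coefficient; the iterative form avoids ever materializing the multi-bounce matrix.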
This decoupling of the source position from the precomputed data allows rapid updates of the IR at the listener whenever the source moves.

4.2 Transfer Operator Compression

To apply the transfer operator once, the matrix-vector multiplication in Equation 5 needs to be performed once per Fourier coefficient at run-time. However, even for scenes of moderate complexity, the number of surface samples, p, can be very large. Since T is a p × p matrix and l_n is a p × 1 vector, this step takes O(p^2) time per Fourier coefficient per order of reflection, which can quickly become quite expensive. We use the SVD to compute a rank-k approximation of T. This allows us to reduce the complexity to O(pk).

Intuitively, truncating T to k modes using the SVD removes some of the high spatial frequencies in the transfer operator. A lower-order mode of T might model reflections from an entire wall, while higher-order modes might model details added to the acoustic response due to local variations in the wall's geometry (such as a painting on the wall). In effect, the parameter k can be used to control the level-of-detail of the acoustic response.

As we shall discuss in Section 7, there are use cases where we wish to precompute a one-bounce transfer operator and apply it repeatedly to obtain higher-order reflections. In such cases, the cost of computing transfer matrices that represent additional bounces can be further reduced to O(k^2) by precomputing appropriate matrices as follows. The direct IRs at each surface sample are stored in the vector l_0. Suppose we have a rank-k approximation of T, given by T̃ = Ũ S̃ Ṽ^T, where Ũ is a p × k matrix, S̃ is a k × k diagonal matrix and Ṽ^T is a k × p matrix. Then the first-order IR at each surface sample is given by:

l̃_1 = T̃ l_0 = Ũ S̃ Ṽ^T l_0 = Ũ b̃,

where b̃ = S̃ Ṽ^T l_0 is l_0 projected into the span of the first k right singular vectors of T. The second-order response is:

l̃_2 = T̃ T̃ l_0 = Ũ (S̃ Ṽ^T Ũ) S̃ Ṽ^T l_0 = Ũ D b̃,

where D = S̃ Ṽ^T Ũ is essentially the one-bounce operator in the k-dimensional subspace spanned by the singular vectors corresponding to the top k singular values of T. The cost of multiplying b̃ by D is simply O(k^2). Notice that the third-order response can be written as Ũ D^2 b̃, and so on. This allows us to compute higher-order responses using a k × k matrix instead of a p × p matrix.

5 IMPLEMENTATION

Our implementation is CPU-based, and uses Microsoft DirectX 9 for visualization, and Intel Math Kernel Library (MKL) for the matrix operations.

5.1 Approximations

Our algorithm allows for the following user-controlled approximations:

Surface Samples. We parameterize the scene surface by mapping the primitives to the unit square (a uv texture mapping) using Least Squares Conformal Mapping (LSCM) [33]. The user specifies the texture dimensions; each texel of the resulting texture is mapped to a single surface sample using an inverse mapping process. The number of texels mapped to a given primitive is weighted by the area of the primitive, to ensure a roughly even distribution of samples. We chose the LSCM algorithm for this purpose since our modeling tools (Blender¹) have an implementation built-in; it can be replaced with any other technique for sampling the surfaces as long as the number of samples generated on a primitive is proportional to its area.

Frequency Samples. We allow the user to vary the number of Fourier coefficients used to represent the IRs. We use 1K Fourier coefficients in all our experiments, since it has been shown [22] that this provides an acceptable compromise between performance and quality.

Transfer Operator Modes. The SVD approximation error of the transfer operator is measured using the Frobenius norm. Figure 3 plots the error against the number of modes retained in the transfer operator. The figure suggests that we could potentially use a very small number of modes to compute IRs with diffuse reflections at runtime. Figure 4 plots the SVD approximation error (at 50 modes) with increasing orders of reflection. The figure clearly shows that the error introduced by the SVD approximation for higher orders of reflection quickly converges. In other words, the IR energy due to higher-order reflections can be modeled using very few SVD modes of the transfer operator.
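The subspace construction of Section 4.2 can be sketched as follows (illustrative sizes and random data; `numpy.linalg.svd` stands in for the production SVD routine). The check confirms that applying the small k × k operator D in the subspace reproduces the rank-k approximation applied twice in full:

```python
import numpy as np

# Sketch of the rank-k compression: T ~ Uk Sk Vtk, with the one-bounce
# operator D = Sk Vtk Uk acting in the k-dimensional subspace.
rng = np.random.default_rng(1)
p, k = 100, 10
T = rng.standard_normal((p, p)) * 0.02    # stand-in one-bounce matrix
l0 = rng.standard_normal(p)               # stand-in direct response

U, s, Vt = np.linalg.svd(T)
Uk, Sk, Vtk = U[:, :k], np.diag(s[:k]), Vt[:k, :]   # rank-k truncation
Tk = Uk @ Sk @ Vtk                        # T-tilde, the rank-k operator

b = Sk @ Vtk @ l0          # l0 projected into the k-dim subspace (b-tilde)
D = Sk @ Vtk @ Uk          # one-bounce operator in that subspace (k x k)

# Second-order response via the k x k matrix: Uk @ D @ b ...
l2_subspace = Uk @ (D @ b)
# ... equals the full rank-k operator applied twice, at O(k^2) per bounce
l2_full = Tk @ (Tk @ l0)
assert np.allclose(l2_subspace, l2_full)
```

Higher orders follow the same pattern: the nth-order response is Uk @ matrix_power(D, n-1) @ b, so only a k × k multiplication is needed per additional bounce.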
This matches the intuition that higher-order reflections have low spatial frequency. As a result, when computing very high orders of reflection (say 50), we can use very few SVD modes beyond the first 2-3 orders while still capturing the higher-order energy (which must be captured to model the late reverberation tail of the IR) accurately.

1. http://www.blender.org
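A minimal stand-in for the area-proportional sample allocation described in Section 5.1 (this is not the LSCM texel mapping itself, only the area-weighting invariant any replacement sampler must satisfy: samples per primitive proportional to its area):

```python
import numpy as np

# Illustrative helper: allocate a fixed budget of surface samples across
# primitives in proportion to their areas (largest-remainder rounding).
def allocate_samples(areas, total_samples):
    areas = np.asarray(areas, dtype=float)
    quotas = areas / areas.sum() * total_samples
    counts = np.floor(quotas).astype(int)
    # hand out the leftover samples to the largest fractional parts
    leftover = total_samples - counts.sum()
    for i in np.argsort(quotas - counts)[::-1][:leftover]:
        counts[i] += 1
    return counts

# Four primitives; the first has half the total area, so it gets half
# the samples.
counts = allocate_samples([4.0, 1.0, 1.0, 2.0], 16)
assert counts.sum() == 16
assert counts[0] == 8
```

Any sampler with this property (texel-based or otherwise) can be substituted, as the text above notes.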

Fig. 3. SVD approximation error for transfer operators. For each benchmark scene, the plots show the relative Frobenius norm error of rank-k approximations of T (for one value of ω) for all possible values of k. From left to right: (a) Room (252 samples), (b) Hall (177 samples), (c) Sigyn (1024 samples).

Fig. 4. SVD approximation error for each higher order of reflection, for the Sigyn scene (see Figure 5).

5.2 Audio Processing

The algorithm presented in Section 4 computes a frequency-domain energy IR with 1K Fourier coefficients. The pressure IR is computed from the energy IR [34] and upsampled to encode the desired propagation delay in the IR [22].

Moving Sources and Listeners. In typical virtual environment applications, the source and listener tend to move, and the audio is streamed from the source in chunks of audio samples (called audio frames). The frame size is determined by the allowed latency for the application. We choose audio frames of 4800 samples at a sampling rate of 48 KHz, leading to a 100 ms latency in our audio output. For a static source and listener, computing the final audio is trivial and amounts to convolving each audio frame with the IR to compute output audio frames. For moving sources and listeners, IRs evolve over time, which could lead to discontinuities in the final audio when using different IRs for two adjacent audio frames. In order to minimize such discontinuity artifacts, windowing [35] is applied at the source frame and the listener frame when the source and listener are moving, respectively. We use a windowing method similar to Siltanen et al. [22].

Note that the audio used in the accompanying video is generated by convolving the dry input audio with the listener IR.
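The per-frame convolution described above can be sketched with a standard overlap-add loop. This is a simplified stand-in for the paper's audio pipeline: it assumes a fixed IR and omits the cross-frame windowing needed when the source or listener moves.

```python
import numpy as np

# Minimal overlap-add streaming convolution: each incoming audio frame is
# convolved with the current IR, and the tail that extends past the frame
# boundary is carried into the next frame.
def stream_convolve(frames, ir):
    frame_len = len(frames[0])
    tail = np.zeros(len(ir) - 1)
    out = []
    for frame in frames:
        y = np.convolve(frame, ir)           # length frame_len + len(ir) - 1
        y[: len(tail)] += tail               # add the previous frame's tail
        out.append(y[:frame_len])
        tail = y[frame_len:]
    return np.concatenate(out)

# Check against one whole-signal convolution (toy sizes, random data).
rng = np.random.default_rng(2)
sig = rng.standard_normal(4 * 16)
frames = sig.reshape(4, 16)
ir = rng.standard_normal(8)
full = np.convolve(sig, ir)[: len(sig)]
assert np.allclose(stream_convolve(list(frames), ir), full)
```

With the paper's parameters, `frame_len` would be 4800 samples at 48 KHz; a real-time implementation would typically do the per-frame convolution with FFTs rather than `np.convolve`.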
Ideally, one would apply the listener's Head-Related Transfer Function (HRTF) in order to model the shape of the listener's head and inter-reflections due to the head and shoulders. However, this step has been skipped in the video for simplicity, especially since HRTFs are not related to our main contributions.

6 EXPERIMENTS

We now present some experimental results. All tests were performed on an Intel Xeon X5560 workstation with 4 cores (each operating at 2.80 GHz) and 4 GB of RAM running Windows Vista. We report timings for all 4 cores since MKL automatically parallelizes our matrix operations over all cores of the test machine. We have benchmarked our implementation on three scenes whose complexity is typical of scenes encountered in acoustics applications. Figure 5 shows these scenes along with some details.

Fig. 5. Benchmark scenes. From left to right: (a) Room (252 samples), (b) Hall (177 samples), (c) Sigyn (1024 samples).

For comparison, we chose the state-of-the-art frequency-domain acoustic radiance transfer algorithm [22]. To the best of our knowledge, the only other algorithms that model diffuse sound reflections are time-domain radiosity and path tracing. Since time-domain radiosity requires a prohibitive amount of memory, we chose not to compare against it. Path tracing, while well-suited for dynamic scenes, requires even static scenes to be traversed millions of times per frame for higher-order reflections. Part of the reduction in complexity (and the memory usage) for the frequency-domain approach is due to the restriction to a relatively small number of Fourier coefficients.

Frequency-domain acoustic radiance transfer (ART) [22] computes the transfer operator (without any SVD approximation) and iteratively applies it to the direct acoustic response until the solution converges. In order to perform a fair comparison, we restrict ART to computing as many orders of reflection as our algorithm.

Table 1 summarizes the performance of the precomputation and run-time stages of our algorithm. The run-time complexity depends on the number of modes retained during the SVD approximation; the table clearly highlights this dependency. As shown by the table, our algorithm very efficiently updates IRs when the source position changes at run-time.

TABLE 1
Performance characteristics of our algorithm. For each scene, we present the precomputation time required by our algorithm for 1K Fourier coefficients: the time required to compute the transfer operator, T, and the time required to compute its SVD approximation. We also compare running times for varying numbers of modes from the SVD: the time spent at runtime in initial shooting from the source, applying the transfer operator, and gathering the final IR at the listener position.

Scene | Surface Samples | Precomp. (T) | Precomp. (SVD) | Modes | Initial Scatter | Transfer Operator | Final Gather
Room  | 252  | 14.2 s   | 94.5 s   | 10 | 43.2 ms  | 24.0 ms  | 33.7 ms
      |      |          |          | 25 | 45.8 ms  | 43.8 ms  | 35.0 ms
      |      |          |          | 50 | 42.4 ms  | 84.3 ms  | 36.4 ms
Hall  | 177  | 13.1 s   | 93.1 s   | 10 | 37.8 ms  | 26.8 ms  | 31.5 ms
      |      |          |          | 25 | 37.1 ms  | 45.5 ms  | 30.2 ms
      |      |          |          | 50 | 36.6 ms  | 79.7 ms  | 31.2 ms
Sigyn | 1024 | 6.31 min | 50.9 min | 50 | 164.1 ms | 218.1 ms | 109.9 ms
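The storage side of the SVD compression from Section 4.2 can be estimated with a quick back-of-the-envelope helper. This is illustrative only: it counts raw complex entries at 8 bytes each per Fourier coefficient and does not attempt to reproduce the paper's exact tabulated figures.

```python
# Illustrative memory estimate for the transfer operator, per set of
# Fourier coefficients. Uncompressed: p x p entries. Compressed (rank k):
# p*k for U-tilde, k*k for D, and k*p for S-tilde V-tilde^T.
def operator_mb(p, k=None, entry_bytes=8):
    entries = p * p if k is None else p * k + k * k + k * p
    return entries * entry_bytes / (1024 * 1024)

# e.g. a scene with p = 1024 surface samples and k = 50 retained modes:
full = operator_mb(1024)            # 8.0 MB per Fourier coefficient
compressed = operator_mb(1024, k=50)
assert full == 8.0
assert compressed < full / 5        # compression is roughly 10x here
```

Multiplying by the 1K Fourier coefficients used in the experiments shows why the uncompressed operator quickly becomes impractical on commodity hardware.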
Note that we precompute a one-bounce transfer operator, and use the approach described in Section 4.2 to compute higher-order reflections at run-time. Depending on the application, we could also precompute a multi-bounce operator and apply it directly at run-time, further improving our performance. Our implementation uses the more flexible approach of varying the orders of reflection at runtime. As a result, it is possible to further improve the performance of our implementation.

Table 2 shows the benefit of the SVD in compressing the transfer operator. The table shows that without using SVD, the transfer operators may be too large to be used on commodity hardware. For the uncompressed ("Reference") case, the transfer operator size is p × p, for each Fourier coefficient (1K in our case). For the compressed ("50 Modes") case, the transfer operator size is p × k for Ũ, k × k for D and k × p for S̃Ṽ^T, where k is the number of modes retained. In the table, k = 50, and p is the number of surface samples in the scene.

TABLE 2
Memory requirements of the transfer operators computed by our algorithm with and without SVD compression. Note that since the entries of each matrix are complex numbers, each entry requires 8 bytes of storage. All sizes in the table are in MB.

Scene | Samples | Reference | 50 Modes
… | … | 21.68 | 39.2 …

Table 3 compares the run-time performance of our method and ART. The table shows the time required to update IRs at the listener when the source is moved. The table clearly shows the advantage of our approach. Since our precomputed transfer operator is decoupled from the source position, moving the source does not require recomputing the transfer operator, allowing the source position to be updated much faster than would be possible with ART.

Table 3 can also be used to derive the performance of our algorithm for the case when a multi-bounce transfer operator is precomputed. For example, suppose we precompute a transfer operator with 10 orders of reflection for the Sigyn scene. Then the run-time cost would be the same as that of the one-bounce operator, i.e., 468.5 ms. The difference, i.e., (512.8 ms − 468.5 ms) × 1024 ≈ 45.4 s, would be the additional time spent during preprocessing to derive the multi-bounce operator from the one-bounce operator (the factor of 1024 arises due to the fact that the timings in Table 3 are for matrix-vector multiplication, whereas precomputing the multi-bounce operator from the one-bounce operator requires matrix-matrix multiplications).

Figure 6 compares the output of our algorithm and ART. The figure shows squared IRs, smoothed using a moving-average lowpass filter, for different numbers of modes. As the figure shows, reducing the number of modes significantly (down to 50 modes) has very little effect; however, if far fewer modes are used, significant errors appear in the energy decays, as expected. Coupled with the memory savings demonstrated in Table 2 and the performance advantage demonstrated in Table 3, we see that using the SVD allows us to significantly reduce memory requirements and increase performance without significant degradation of the computed IRs. Along with the plots, Figure 6 shows RT60 (reverberation time) values estimated from the decay curves. The data demonstrates that SVD approximation up to 50 modes does not lead to a significant change in reverberation time. Of course, the best way to demonstrate the benefit of our approach is by comparing audio clips; for this we refer the reader to the accompanying video.

7 CONCLUSION

We have described a precomputed direct-to-indirect transfer approach to solving the acoustic rendering equation in the frequency domain for diffuse reflections. We have demonstrated that our approach is able to efficiently simulate diffuse reflections for a moving source and listener in static scenes. In comparison with existing methods, our approach offers a significant performance advantage when ha

