The New MCNP6 Depletion Capability

2y ago
3 Views
1 Downloads
943.17 KB
10 Pages
Last View : 1m ago
Last Download : 3m ago
Upload by : Joanna Keil
Transcription

Proceedings of ICAPP '12, Chicago, USA, June 24-28, 2012, Paper 12305
LA-UR 11-07032

The New MCNP6 Depletion Capability

Michael L. Fensin1, Michael R. James1, John S. Hendricks1, John T. Goorley2
1D-5/2XCP-3 MCNP Code Development Project, MS C921
Los Alamos National Laboratory, Los Alamos, New Mexico, 87545
Tel: 505-606-0145, Fax: 505-665-2897, Email: mfensin@lanl.gov

Abstract - The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial-geometry-based, continuous-energy Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code, dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported.

The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) a new performance-enhancing parallel architecture that implements both shared- and distributed-memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction.

MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

I.
INTRODUCTION

Over the past several years, there have been several publications on Monte Carlo linked depletion methods, advertising varied implementation strategies for externally linking some version of MCNP, TRIPOLI, MVP, etc., to a depletion calculator such as ORIGEN, CINDER, and/or PEPIN.1-10 The main reason for the continued interest in this field is the belief that, by using particle simulation with combinatorial geometry and continuous-energy cross sections, the Monte Carlo method will best simulate complex 3-D geometries, with exotic material combinations and highly anisotropic flux behavior, expected to be encountered in test reactors and new advanced reactor systems such as small modular reactors (SMRs) and Generation 3 and 4 systems.11-14

Deterministic flux calculators have historically been the method of choice for industry inline depletion calculations.15-18 The deterministic method uses various approximations to discretize the phase space of the Boltzmann transport equation. These approximations, such as multi-group representation of the cross section, angular averaging (Sn or diffusion theory), and spatially approximating smooth curved surfaces with triangular or square meshes, influence the flux solution accuracy.19 Nonetheless, for industry, these approximations were (and continue to be) tuned to a plethora of operating reactor data, and the computational errors were deemed "acceptable enough" for the reactivity-type calculations necessary to license a reactor (i.e., cycle length, power distribution, safety margin, etc.).15-18 Deterministic methods are generally computationally less expensive than the Monte Carlo method; therefore, because reactor designers may be required to run hundreds to thousands of calculations to license a core, qualified fast-running deterministic methods make the most sense for typical light water reactor (LWR) core design. But what if a designer was not just interested in reactivity? What if the designer was interested in a system that did not have a large amount of experimental data for qualifying the simulation accuracy?

The Monte Carlo method is well suited for looking at "details," as the simulation process has fewer approximations during the particle transport. "Details"

represents any calculation involving high anisotropy, large streaming effects, and/or cases where cross section fidelity is extremely important, such as when computing: (a) low-capture-cross-section, high-decay-yield isotopes used in a material characterization for nonproliferation; (b) material combinations that result in appreciable spectra over varying significant resonances, such as high-burnup or advanced-clad systems; and (c) the fuel/reflector interface for highly leaky systems such as SMRs.11-14, 20 The Monte Carlo method can also be used to complement deterministic solutions by qualifying the design space of implemented approximations in the deterministic solution technique.21

As mentioned before, several externally linked technologies exist for computing Monte Carlo linked depletion solutions.1-10 These technologies utilize various scripts for linking a transport code to a depletion solver. In most cases the author of the script only supports development of the linking script and has no access to the codes being linked. To accommodate robustness, these scripts usually coordinate several files to generate the decks for each stage of the calculation. The coordination usually depends on a specific directory structure that may or may not be automated during installation, as well as an input structure that utilizes rules that may or may not be confined to the rules of the other codes, further obfuscating the typical calculation. Furthermore, flux calculations and depletion solutions for reactors involve an immense amount of fidelity that is extremely data heavy (i.e., many isotopes and reactions); therefore, once the proper physics can be tallied, the real limitation is memory management and performance, which may have nothing to do with the linking script.

To best accommodate these limitations, the first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0.22 The capability utilized a consistent, easy-to-use, and easy-to-install framework that supports the development of the link, transport, and depletion solver such that physics, performance enhancements, and memory management improvements are more tractable and easier to implement. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial-geometry-based, continuous-energy Monte Carlo radiation transport solution for advanced reactor modeling and simulation.22, 23 However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology (e.g., MCNPX burnup and MCNP5 Shannon entropy). MCNP6, the next evolution in the MCNP suite of codes, now combines the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code.24 We describe here the new capabilities of the MCNP6 depletion code, dating from the official RSICC release, MCNPX 2.6.0, reported previously, to the now current state of MCNP6.

The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) a new performance-enhancing parallel architecture that implements both shared- and distributed-memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction.

II.
PARALLEL ARCHITECTURE

At the Advances in Nuclear Fuel Management conference in 2009, preliminary reactor modeling work identified that running the depletion solver in a serial loop caused the time-dependent nuclide density calculation to rival the computational expense of the actual transport solution.25 Though CINDER90 took seconds to run, running hundreds of materials could take hours.

Eq. 1a-c displays the depletion equations:

dN_i/dt = -D_i(t) N_i(t) + Σ_j C_ij(t) N_j(t)   (Eq. 1a)
D_i(t) = λ_i + φ(t) σ_a,i(t)                    (Eq. 1b)
C_ij(t) = λ_j→i + φ(t) σ_j→i(t)                 (Eq. 1c)

where N_i is the nuclide density of isotope i, D_i is the destruction operator, C_ij is the creation operator for production of isotope i from isotope j, λ denotes decay constants, φ is the region-integrated scalar flux, and σ denotes one-group spectrum-averaged cross sections. The reaction rate term, in the destruction and creation operators, depends upon the time-dependent flux, and the time-dependent flux depends upon the time-dependent number density, making these coupled equations non-linear (the coupling is between isotopes). Therefore, to solve these equations, we assume reaction rates are constant over a time step, leading to the destruction and creation operators being constant over a time step, making Eq. 1a a coupled first-order differential equation with constant coefficients. The depletion solution therefore marches through time, updating fluxes at each time step, using time step lengths that are only as long as can be assumed that the nuclide density does not change enough to significantly alter the flux (i.e., flux shape and magnitude should not significantly change over a time step). Using these assumptions, there are no transverse leakage terms in the depletion equations, and the solution depends only on the integral scalar flux in a given region. Therefore the depletion solution for each region is completely independent of any other region, making the solution very amenable to parallelization.

In MCNPX 2.7.A, a distributed memory paradigm was implemented, using the

Message Passing Interface (MPI) to distribute the depletion calculation over several nodes to maximize computational performance.26 Fig. 1 displays the MPI work distribution algorithm, where M is the number of burn regions, S the number of slave processes, and CS the index of the current process (CS = 0 for the master). If the user is not parallelizing the depletion calculation, a serial loop is executed over all burn regions. If the user is parallelizing the burnup calculation, there are two cases: (1) if the user has more materials than available processors, the load is distributed evenly amongst processors (i.e., each computes the range of regions between M1 and M2); (2) if the user has more available processors than regions, a single calculation is executed on each processor, with processors beyond the number of regions receiving no work. Notice that the parallelization scheme also utilizes the master for doing useful work (1 + S includes the master).

IF (MPI) then
  M1 = (1 + S + CS*M)/(1 + S)
  M2 = (1 + CS)*M/(1 + S)
  IF ((1 + S) >= M) then
    M1 = 1 + CS
    M2 = M1
  ENDIF
ELSE
  M1 = 1
  M2 = M
ENDIF

Fig. 1. MPI work distribution algorithm

Because of the extreme independence of the solution method, it was hypothesized that the parallelization would result in linear speedup; however, bottlenecks were identified. Theoretically, the CINDER90 interface need only be sent interaction rates, fluxes, and atom densities (along with other variables to identify isotopes, flag predictor-corrector, and compute various normalization coefficients), and the CINDER90 interface need only send out atom densities (along with other variables for computing region-specific quantities). Because these reaction rate and flux arrays are large, and because a copy must be sent to each slave processor in a linear loop, for large-scale calculations involving many regions there exists a bottleneck in the send and receive procedures, resulting in a "not-exactly linear" speedup in implementation.
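The Fig. 1 load split can be sketched in Python (a sketch assuming integer division and the M, S, CS conventions above; not MCNP6 source code, and the exact operators in the listing are an assumed reconstruction):

```python
# Sketch of the Fig. 1 MPI work-distribution arithmetic. "//" mirrors Fortran
# integer division. Regions are numbered 1..M; processes 0 (master) .. S.
def burn_region_range(M, S, CS, mpi=True):
    """Return the 1-based, inclusive (M1, M2) region range for one process."""
    if not mpi:
        return 1, M                       # serial: one loop over all regions
    M1 = (1 + S + CS * M) // (1 + S)      # even split over the 1 + S processes
    M2 = (1 + CS) * M // (1 + S)
    if (1 + S) >= M:                      # more processes than regions:
        M1 = 1 + CS                       # one region each; processes with
        M2 = M1                           # M1 > M simply receive no work
    return M1, M2
```

With M = 10 regions and S = 3 slaves, the four processes receive the contiguous ranges 1-2, 3-5, 6-7, and 8-10, so the master (CS = 0) also does useful work, as the text notes.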
Furthermore, by only using MPI, a copy of each array, used as intent-in only, is loaded on each processor, even when several processors share a common piece of RAM (i.e., a node containing 4 processors can share one piece of common RAM). This wasted memory usage can limit the amount of fidelity used in a calculation (i.e., less memory available for using more burnable regions).

To limit the bottleneck, we could have chosen to use tree collection procedures available in MPI-2 for parallelizing the collection; however, we would still have been stuck with the wasted memory allocation problem. A combination of MPI and threading was already available in MCNP5 for regular transport calculations, utilizing MPI with OPENMP.23 Therefore in MCNP6 we chose to also implement this paradigm for parallelizing the burnup calculation. A collection of burnable regions is sent to a node via MPI, and those burnable regions are then further threaded, using OPENMP, across the available processors. The work distribution algorithm for each thread within each node is displayed in Fig. 2. The algorithm is similar to Fig. 1, except that the load is now distributed evenly over each node and thread (T is the number of threads per node and CT the current thread index).

IF (MPI) then
  M1 = ((1 + S)*T + (CS*T + CT)*M)/((1 + S)*T)
  M2 = (1 + CT + CS*T)*M/((1 + S)*T)
  IF ((1 + S)*T >= M) then
    M1 = 1 + CT + T*CS
    M2 = M1
  ENDIF
ELSEIF (THREADING .AND. .NOT. MPI) then
  M1 = (T + (CS*T + CT)*M)/T
  M2 = M*(1 + CT + CS*T)/T
  IF (M2 > M) M2 = M
  IF ((1 + S)*T >= M) then
    M1 = 1 + CT
    M2 = M1
  ENDIF
ELSE
  M1 = 1
  M2 = M
ENDIF

Fig. 2. Threading with MPI work distribution algorithm

A simple test case using 28 concentric spheres, with 28 burnable regions containing 76 total nuclides per region, was executed using single processor mode, across several threads on a single node, across several nodes with a single thread per node, and with a combination of shared and distributed memory across several nodes and threads per node. The settings for each case were 5000 particles per cycle, for 33 cycles, skipping the first 2 cycles.
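Once a process or thread receives its M1-M2 range, each region's work is an independent constant-coefficient solve of Eq. 1 over the time step. As a minimal illustration (a hypothetical two-nuclide parent/daughter chain solved exactly; this is a sketch of the stepwise assumption, not CINDER90's actual solution algorithm), one depletion step might look like:

```python
import math

# One depletion time step for a hypothetical parent -> daughter chain with
# reaction rates held constant over the step, as assumed for Eq. 1.
# d1, d2 = total destruction rates (decay + flux*absorption) of each nuclide;
# c12    = creation rate of the daughter from the parent.
# These names are illustrative, not MCNP6/CINDER90 internals.
def deplete_step(n1, n2, d1, d2, c12, dt):
    """Advance the densities (n1, n2) by dt using the exact solution."""
    e1 = math.exp(-d1 * dt)
    e2 = math.exp(-d2 * dt)
    n1_new = n1 * e1
    if abs(d1 - d2) > 1e-12:
        n2_new = n2 * e2 + c12 * n1 * (e1 - e2) / (d2 - d1)
    else:                                 # degenerate case: equal rates
        n2_new = n2 * e2 + c12 * n1 * dt * e1
    return n1_new, n2_new
```

If the daughter is stable and every parent destruction creates a daughter (c12 = d1, d2 = 0), the total density n1 + n2 is conserved over the step, a quick sanity check on the solution. Because each region's step touches only its own densities and region-integrated flux, regions can be advanced in parallel exactly as the Fig. 1 and Fig. 2 algorithms distribute them.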
Table I shows the increase in performance when using a combination of MPI and threading. Comparing the single processor case to the 1-node, 8-thread case, we see a speedup of 4.88 times. The 1-node, 8-thread case is also 50% faster than the 8-node, 1-thread case, which is evidence of the bottleneck in only using MPI instead of threading. The 3-node, 8-thread case is 33% faster than the 24-node, 1-thread case, which is not as large a speedup as comparing the 1-node, 8-thread case to the 8-node, 1-thread case. Using MPI for any number of nodes initiates communication logic, which is itself part of the bottleneck. Also included is the 3-node, 1-thread case, which appears to have an almost linear speedup (actual linear speedup would be 3.0); however, the 8-node, 1-thread case definitely does not have linear speedup, as more communication is involved to reach more of the slaves. Because the burnup calculations are independent between

regions, large arrays passed in by MPI can all be made THREADSHARED and therefore do not require further superfluous copying in the shared RAM. The threading improves computational performance by: (1) decreasing the number of distributed-memory sends, which decreases the computational expense of the main bottleneck (sending information to and from threads is much faster than communicating to separate distributed memory space); and (2) decreasing the amount of memory needed at a slave.

TABLE I
Computational Speed from Distributed and Shared Memory
* Single Processor = C; Test = A; Speedup = C/A.

III. MEMORY MANAGEMENT

The initial purpose of the MCNPX code was to combine the MCNP4B and LAHET 2.8 codes, to transport all particles at all energies, in support of the Accelerator Production of Tritium (APT) project.27 Because tabular ENDF/B data did not exist in the higher-energy (>100 MeV) regime, the MCNPX code implemented physics models, which use various event estimator codes, to predict interaction rates at high energies.27 Because MCNPX offered the ability to mix and match tabular data with physics models, such that a particle could be simulated at any energy, the arrays associated with these auxiliary event estimator codes (as well as the interface arrays used to communicate with auxiliary codes) were allocated regardless of whether they were needed or not.

Furthermore, during transport, secondary particles may be created from inelastic reactions, banked, and then transported (if the particle is present on the mode card). MCNPX takes the banked particles and stores information about the particles in arrays, such that they can be emitted at the termination of the interacting particle history. The storage array information is saved on a per-initial-history basis. If the number of banked particles exceeds the size of the storage array, MCNPX writes the particle information to a file, which slows down the calculation through use of I/O.
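The bank overflow behavior just described can be pictured with a toy sketch (the class and names are illustrative, not MCNP internals): a fixed-capacity in-memory bank whose overflow list stands in for the slow file-backed storage.

```python
# Toy model of a fixed-size secondary-particle bank that "spills to file"
# when full, as the text describes. The spill list stands in for file I/O;
# in the real code that spill is what slows the calculation down.
class ParticleBank:
    def __init__(self, capacity):
        self.capacity = capacity
        self.bank = []        # fast, preallocated in-memory storage
        self.spilled = []     # stand-in for the slow file-backed overflow

    def push(self, particle):
        if len(self.bank) < self.capacity:
            self.bank.append(particle)
        else:
            self.spilled.append(particle)   # real code pays an I/O cost here

    def pop(self):
        """Emit banked secondaries at the end of the history (LIFO)."""
        if self.bank:
            return self.bank.pop()
        if self.spilled:
            return self.spilled.pop()
        return None
```

Enlarging the capacity, as MCNPX 2.7.C did by an order of magnitude, avoids the spill but permanently inflates the allocation, which is exactly the trade-off the memory reduction capability addresses.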
To accelerate high-energy calculations involving the creation of showers of particles per interaction per starting history (>1000 particles), MCNPX 2.7.C increased, by an order of magnitude, the number of particles that could be saved in the bank per history. This adjustment was made statically and was not physics dependent, and therefore greatly increased the allocated memory for storage arrays.

In a typical eigenvalue reactor calculation (mode n p), the energy of an emitted neutron is not expected to exceed 20 MeV (as χ(E) has an extremely low probability at 20 MeV), and because the number of secondary particles generated per history is not expected to be large, banked secondaries from neutron-only transport are only generated through (n, 2n) and (n, 3n) events. It is true that the number of banked secondaries per history can increase through the use of variance reduction, such as splitting; however, in typical eigenvalue calculations, variance reduction is useless, as we are interested in computing global quantities such as keff or reaction rates in every region. Therefore, if examining isotopes containing ENDF/B transport data, there should be no reason to implement a high-energy event estimator model.
If simulating interactions that do not result in many banked secondaries, then the storage space for these banked events should be minimized.

In MCNPX 2.7.D, a memory reduction capability was introduced that used a combination of options on the phys:n and phys:p cards to eliminate physics model allocation as well as to intelligently set banked secondary allocation based on problem-dependent physics.28, 29 On the phys:n card, if the maximum particle energy (phys:n 1st entry) is less than the maximum energy for using tabular data (phys:n 5th entry in MCNPX, 8th entry in MCNP6), then the code will never encounter a particle energy that requires a physics model (the code will interpolate the higher-energy cross section from tabular data); however, the code may still need physics models if using photonuclear physics, as the code will use tabular data for nuclides with a specified extension but use models for every other nuclide. Therefore, to initiate the memory reduction capability in MCNPX 2.7.D, the user had to set the 5th entry on the phys:n card greater than the 1st entry, and also turn off photonuclear physics if running both neutron and photon transport calculations (phys:p 4th entry, which is off by default). MCNP6 includes the capability of MCNPX 2.7.D as well as eliminating more arrays associated with non-neutron-photon transport (i.e., heavy ion and electron transport) if the user only transports neutrons and photons (i.e., using the settings mentioned for the MCNPX 2.7.D capability as well as setting the 2nd entry on the phys:p card to zero; turning off electron generation from photons causes bremsstrahlung photon generation to be neglected). MCNP6 also expunges all reactions from the ACE libraries that are not directly used for burnup, saving about 8% of the total cross section allocation space.

A test case using 600 concentric spheres, with 600 burnable regions containing 277 total nuclides per region, was run using neutrons only to test the impact of the memory reduction capability.
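A hypothetical input fragment showing the settings just described (the entry positions follow the text; the energy values and "j" jump entries are illustrative assumptions, not a validated deck):

```
c  illustrative memory-reduction settings -- values are assumptions
mode n p
phys:n 20 3j 150    $ 5th entry (150) > 1st entry (20); 8th entry in MCNP6
phys:p j 0          $ 2nd entry = 0 turns off electron generation (MCNP6)
c  photonuclear physics stays at its default (off), as the text requires
```

With settings of this form the code never encounters a neutron energy requiring a physics model, so the associated event-estimator arrays can be skipped at allocation time.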
Table II shows the memory savings when comparing the base MCNPX 2.7.D capability to the MCNPX 2.7.D memory reduction

capability and the MCNP6 memory reduction capability. The memory reduction capability in MCNP6 saves nearly an order of magnitude of space, which can be used to greatly increase the amount of available memory for more burnable regions.

TABLE II
Memory Savings from Memory Reduction Capability.
M = Memory Reduction Option turned on
* During runtime after cross section processing (xact)

IV. BURNUP PHYSICS

Several burnup physics enhancements have been incorporated into MCNPX 2.7.0, and thus also into MCNP6, since the release of MCNPX 2.6.0.30 These enhancements include: (1) lowering the thermal fission cutoff upper band limit to 1 eV for assessing burn-region energy-dependent fission yield; (2) using actual (n, γ) instead of summed capture for computing (n, γ) collision rates for CINDER90; and (3) correcting isomer branching based upon a combination of continuous-energy integrated (n, γ) from MCNP and computed 63-group energy-integrated (n, γ*) from CINDER90.

In MCNPX 2.6.B a capability was introduced to select a burn-region-dependent thermal, fast, or high-energy spectrum-based fission yield for CINDER90.31, 32 The fission yields in CINDER90 were based on ENDF/B VI.0 and therefore thought to best represent thermal reactor, fast reactor, and fusion spectra. Initially, the energy bounds were set at 1 MeV and 14 MeV (if below 1 MeV use thermal; if between 1 and 14 MeV use fast; if greater than 14 MeV use high energy). The bounds were arbitrarily set to these values to capture the minor amount of fission events in a thermal reactor occurring between 1 eV and 1 MeV; however, when modeling epithermal systems, where using the fast yields is more correct, this approximation fails. Therefore in MCNPX 2.7.D the thermal cutoff was lowered to 1 eV.

MCNPX automatically computes the total absorption reaction (not including fission) during each track traverse and collision and stores this information for accelerating reaction sampling. Initially, the burn capability attempted to approximate the (n, γ) using total capture in order to accelerate looking up these reactions during burnup reaction tracking in transport. This approximation is usually correct for most heavier nuclides, as (n, γ) dominates all capture reactions by orders of magnitude; however, for light nuclides such as B-10 the dominant reaction can be (n, α) (or other capture events like (n, p), (n, t), etc.), and therefore this approximation has since been eliminated in MCNPX 2.7.D.

MCNPX 2.6.0 over-predicted the (n, γ) contribution because the tallied (n, γ) in MCNPX was total (n, γ) and not adjusted for isomer branching. At ICAPP 2008, it was stated that due to the energy-dependent nature of the isomer branching, the future focus would be to include ENDF/B File 9 MT 102 in the ACE file and alter MCNPX to process this information.1 Fig. 3 displays the energy dependence, and fidelity, of the isomer branching for Am-242, Am-242m, Am-244, and Am-244m in ENDF/B VII.0.33

Fig. 3. Energy dependent isomer branching

The VESTA code actually does post-process File 9, the isomer branching ratios, and File 10, cross sections for the production of the isomer state, to compute the actual branching based upon ENDF/B and JEFF data.34 Though the isomer branching is energy dependent (changing drastically at 1 MeV), the fidelity of this energy dependence in the file is actually not greater than the fidelity of the multi-group cross sections in CINDER90 (which used a combination of File 9 and File 10 "like" data to compute the 63-group cross sections). Therefore in MCNPX 2.7.B, a new method was developed that leverages the 63-group (n, γ*) reactions from CINDER90 to adjust the continuous-energy integrated (n, γ) cross sections computed in MCNPX. Eq. 2 displays the new method:

R(n, γ*) = R(n, γ) × [R63(n, γ*) / R63(n, γ)]   (Eq. 2)

where R(n, γ) is the neutron-flux-corrected capture rate from MCNPX, and R63(n, γ*) and R63(n, γ) are the CINDER90 63-group isomer production and total capture rates, respectively. This method therefore provides energy dependence of the isomer branching without having to: (1) change the format of the ACE files and the NJOY code; (2) accommodate more storage in the cross section arrays; and

(3) increase computational expense by having to look up more information on the ACE file.

V. H. B. ROBINSON BENCHMARK

Geometry and burnup specifications used for the H. B. Robinson benchmark were taken from the Oak Ridge National Laboratory report ORNL/TM-12667.35 The calculation setup (i.e., time steps, boundary conditions, etc.) was taken from Ref. 1. The benchmark calculation uses an infinitely reflected 15 by 15 UO2-fueled, Zircaloy-4-clad pressurized water reactor (PWR) fuel assembly. Fig. 4 shows a diagram of the computational model; the legend identifies the analyzed fuel rod, the burnable poison positions, the instrument tube, and the guide tubes. In the actual calculation there is no excess water region; the outer pin cell boundary on the outer pins is the reflective surface.

Fig. 4. H. B. Robinson infinitely reflected lattice model.

Cases A-D represent the different burnup cases from the benchmark: (1) Case A, 16.02 GWD/MTU; (2) Case B, 23.8 GWD/MTU; (3) Case C, 28.47 GWD/MTU; (4) Case D, 31.66 GWD/MTU. MCNP6 is compared to the best available results from SCALE/SAS2H, MCNPX 2.6.0, and MONTEBURNS.1, 35, 36 The results of each case for each code are displayed in Tables III-VI.

Each benchmark calculation was run using a separate set of ENDF/B (V-VII.0) cross sections, generated at a separate set of temperatures using different tolerance parameters in the cross section processing codes (details of cross section generation are listed in Refs. 1, 35, and 36). All MCNP6 results are representative of MCNPX 2.7.0; thus MCNP6 in Tables III-VI represents MCNPX 2.7.0 and MCNP6, and MCNPX in Tables III-VI represents MCNPX 2.6.0.

At lower burnups, Cases A and B, MCNP6 does not compute U-235, U-236, Pu-239, Pu-241, and Cs-137 as well as MCNPX 2.6.0 and SCALE (results are similar to MONTEBURNS). For Case C, MCNP6 computes similar results to MONTEBURNS, which are superior to MCNPX 2.6.0 and SCALE/SAS2H; however, at the higher burnup of Case D, MCNP6 computes the best results for almost every isotope (except Np-237).

TABLE III
Percent Difference* between Measured and Computed Nuclide Compositions for H. B. Robinson Benchmark Case A.
* (Calculated/Measured-1)*100

TABLE IV
Percent Difference* between Measured and Computed Nuclide Compositions for H. B. Robinson Benchmark Case B.
* (Calculated/Measured-1)*100

Because of the assumptions used in constructing the benchmark and the use of different data for each calculation, one cannot easily conclude that MCNP6 is the superior technology for this specific calculation. Furthermore, in all cases, no code best predicts all isotopes. For example, in Case A, MCNP6 has not burned up enough U-235; however, MCNP6 has transmuted more U-238, resulting in more Pu-239 and Pu-241. The creation and destruction of all isotopes is dictated by the spectrum and by the shielding of one isotope by another; therefore it is difficult to determine the specific reaction where the methods differ. Furthermore, the difference in data or calculation setup may be generating the largest difference.

TABLE V

Percent Difference* between Measured and Computed Nuclide Compositions for H. B. Robinson Benchmark Case C.
* (Calculated/Measured-1)*100

TABLE VI
Percent Difference* between Measured and Computed Nuclide Compositions for H. B. Robinson Benchmark Case D.
* (Calculated/Measured-1)*100

Using MCNP6, each actinide and Cs-137 was computed to within a few percent, and Tc-99 was computed to within 12%, which is only slightly better than the other codes. However, one can conclude that the physics updates in MCNP6 do not produce worse results; and since these physics enhancements help to better represent the actual model, these improvements should improve accuracy in more complicated calculations.

VI. CONCLUSIONS

With the merger of MCNPX and MCNP5, MCNP6 is now the next evolution in the MCNP suite of codes, and the depletion capability in MCNP6 is the next generation in complete, relatively easy-to-use Monte Carlo linked burnup. The new parallel architecture, using both THREADING and MPI as compared to MPI only, offers significant speedup in burnup calculations by speeding up both the particle transport and the burnup calculation. The tests presented here show speedups of 30%-50% from using a combination of THREADING and MPI as compared to using MPI alone. The new memory management capability significantly reduces the memory footprint of each burn region, allowing for more burn regions per gigabyte of RAM to improve calculation fidelity. For the simple 600-region test case mentioned in this work, memory usage was improved by nearly an order of magnitude. Finally, the new physics enhancements provide a more correct representation of the burnup physics as compared to MCNPX 2.6.0. Calculation results of the H. B. Robinson benchmark show that SCALE/SAS2H, MCNPX 2.6.0, MONTEBURNS, and MCNP6 produce similar results for 16-28 GWD/MTU burnups and that MCNP6 produces superior results at 31.66 GWD/MTU. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

VII. FUTURE WORK

The memory reduction capability eliminates 22 large dynamically allocated arrays. Over 64 subroutines/modules allocate variables in MCNP6; therefore future work will focus on eliminating excess allocation from the rest of the MCNP6 code. Furthermore, large bookkeeping arrays for tracking variance reduction summary information are dimensioned by the product of the number of cells, nuclides per cell, and number of summary reactions; therefore these tracking arrays are enormous for large problems. Since variance reduction tracking is meaningless for typical reactor eigenvalue calculations, eliminating these tracking arrays can further increase memory savings. A preliminary capability to remove these arrays was tested, and resulted in a further 200 MB of savings for the 600 burn region test case (total memory reduction savings greater than an order of magnitude). However, eliminating these arrays causes a computational hit, as "if" tests are required throughout transport; therefore further testing is required before introducing this capability into a production version of MCNP6. Furthermore, as problems get larger, data arrays may become so large that storing a complete array on a single node may become impractical, and future implementations of burnup may require data decomposition across several nodes. This implementation will require severe restructuring of the

code, but still should be examined to accommodate larger-scale calculations.

ACKNOWLEDGMENTS

This work was supported by the DOE-NNSA Advanced Simulation and Computing program.
