BEST PRACTICES FOR OPTIMIZING YOUR DBT AND SNOWFLAKE DEPLOYMENT

WHITE PAPER
TABLE OF CONTENTS

Introduction
What Is Snowflake?
    Snowflake architecture
    Benefits of using Snowflake
What Is dbt?
    dbt Cloud
Customer Use Case
Optimizing Snowflake
    Automated resource optimization for dbt query tuning
        Automatic clustering
        Materialized views
        Query acceleration services
    Resource management and monitoring
        Auto-suspend policies
        Resource monitors
        Naming conventions
        Role-based access control (RBAC)
        Monitoring
            Monitoring credit usage
            Monitoring storage usage
    Individual dbt workload elasticity
        Scaling up for performance
        Scaling out for concurrency
    Writing effective SQL statements
        Query order of execution
        Applying filters as early as possible
        Querying only what you need
        Joining on unique keys
        Avoiding complex functions and UDFs in WHERE clauses
Optimizing dbt
    Use environments
    Use the ref() function and sources
    Write modular, DRY code
    Use dbt tests and documentation
    Use packages
    Be intentional about your materializations
    Optimize for scalability
        Plan for project scalability from the outset
        Follow a process for upgrading dbt
Document Revisions
About dbt Labs
About Snowflake
INTRODUCTION

Companies in every industry acknowledge that data is one of their most important assets. And yet, companies consistently fall short of realizing the potential of their data.

Why is this the case? One key reason is the proliferation of data silos, which create expensive and time-consuming bottlenecks, erode trust, and render governance and collaboration nearly impossible.

This is where Snowflake and dbt come in.

The Snowflake Data Cloud is one global, unified system connecting companies and data providers to relevant data for their business. Wherever data or users live, Snowflake delivers a single and seamless experience across multiple public clouds, eliminating previous silos.

dbt is a transformation workflow that lets teams quickly and collaboratively deploy analytics code following software engineering best practices such as modularity, portability, CI/CD, and documentation. With dbt, anyone who knows SQL can contribute to production-grade data pipelines.

By combining dbt with Snowflake, data teams can collaborate on data transformation workflows while operating out of a central source of truth. Snowflake and dbt form the backbone of a data infrastructure designed for collaboration, agility, and scalability.

When Snowflake is combined with dbt, customers can operationalize and automate Snowflake's hallmark scalability within dbt as part of their analytics engineering workflow. The result is that Snowflake customers pay only for the resources they need, when they need them, which maximizes efficiency and results in minimal waste and lower costs.

This paper will provide some best practices for using dbt with Snowflake to create this efficient workflow.

WHAT IS SNOWFLAKE?

Snowflake's Data Cloud is a global network where thousands of organizations mobilize data with near-unlimited scale, concurrency, and performance. Inside the Data Cloud, organizations have a single unified view of data so they can easily discover and securely share governed data, and execute diverse analytics workloads. Snowflake provides a tightly integrated analytics data platform as a service, billed based on consumption. It is faster, easier to use, and far more flexible than traditional data warehouse offerings.

Snowflake uses a SQL database engine and a unique architecture designed specifically for the cloud. There is no hardware (virtual or physical) or software for you to select, install, configure, or manage. In addition, ongoing maintenance, management, and tuning are handled by Snowflake. All components of Snowflake's service (other than optional customer clients) run in a secure cloud infrastructure.

Snowflake is cloud-agnostic and uses virtual compute instances from each cloud provider (Amazon EC2, Azure VM, and Google Compute Engine). In addition, it uses object or file storage from Amazon S3, Azure Blob Storage, or Google Cloud Storage for persistent storage of data. Due to Snowflake's unique architecture and cloud independence, you can seamlessly replicate data and operate from any of these clouds simultaneously.

SNOWFLAKE ARCHITECTURE

Snowflake's architecture is a hybrid of traditional shared-disk database architectures and shared-nothing database architectures. Similar to shared-disk architectures, Snowflake uses a central data repository for persisted data that is accessible from all compute nodes in the platform. But similar to shared-nothing architectures, Snowflake processes queries using massively parallel processing (MPP) compute clusters where each node in the cluster stores a portion of the entire data set locally.
This approach offers the data management simplicity of a shared-disk architecture, but with the performance and scale-out benefits of a shared-nothing architecture.

As shown in Figure 1, Snowflake's unique architecture consists of three layers built upon a public cloud infrastructure:

• Cloud services: Cloud services coordinate activities across Snowflake, processing user requests from login to query dispatch. This layer provides optimization, management, security, sharing, and other features.

• Multi-cluster compute: Snowflake processes queries using virtual warehouses. Each virtual warehouse is an MPP compute cluster composed of multiple compute nodes allocated by Snowflake from Amazon EC2, Azure VM, or Google Cloud Compute. Each virtual warehouse has independent compute resources, so high demand in one virtual warehouse has no impact on the performance of other virtual warehouses. For more information, see "Virtual Warehouses" in the Snowflake documentation.
• Centralized storage: Snowflake uses Amazon S3, Azure Blob Storage, or Google Cloud Storage to store data in its internal optimized, compressed, columnar format using micro-partitions. Snowflake manages the data organization, file size, structure, compression, metadata, statistics, and replication. Data objects stored by Snowflake are not directly visible to customers, but they are accessible through SQL query operations that are run using Snowflake.

Figure 1: Three layers of Snowflake's architecture

BENEFITS OF USING SNOWFLAKE

Snowflake is a cross-cloud platform, which means there are several things users coming from a more traditional on-premises solution will no longer need to worry about:

• Installing, provisioning, and maintaining hardware and software: All you need to do is create an account and load your data. You can then immediately connect from dbt and start transforming data.

• Determining the capacity of a data warehouse: Snowflake has scalable compute and storage, so it can accommodate all of your data and all of your users. You can adjust the count and size of your virtual warehouses to handle peaks and lulls in your data usage. You can even turn your warehouses completely off to stop incurring costs when you are not using them.

• Learning new tools and expanded SQL capabilities: Snowflake is fully compliant with ANSI-SQL, so you can use the skills and tools you already have. Snowflake provides connectors for ODBC, JDBC, Python, Spark, and Node.js, as well as web and command-line interfaces. On top of that, Snowpark is an initiative that will provide even more options for data engineers to express their business logic by directly working with Scala, Java, and Python DataFrames.

• Siloed structured and semi-structured data: Business users increasingly need to work with both traditionally structured data (for example, data in VARCHAR, INT, and DATE columns in tables) as well as semi-structured data in formats such as XML, JSON, and Parquet. Snowflake provides a special data type called VARIANT that enables you to load your semi-structured data natively and then query it with SQL (see the sketch after this list).

• Optimizing and maintaining your data: You can run analytic queries quickly and easily without worrying about managing how your data is indexed or distributed across partitions. Snowflake also provides built-in data protection capabilities, so you don't need to worry about snapshots, backups, or other administrative tasks such as running VACUUM jobs.

• Securing data and complying with international privacy regulations: All data is encrypted when it is loaded into Snowflake, and it is kept encrypted at all times when at rest and in transit. If your business requirements include working with data that requires HIPAA, PII, PCI DSS, FedRAMP compliance, and more, Snowflake's Business Critical edition and higher editions can support these validations.

• Sharing data securely: Snowflake Secure Data Sharing enables you to share near real-time data internally and externally between Snowflake accounts without copying and moving data sets. Data providers provide secure data shares to their data consumers, who can view and seamlessly combine the data with their own data sources. Snowflake Data Marketplace includes many data sets that you can incorporate into your existing business data—such as data for weather, demographics, or traffic—for greater data-driven insights.
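To make the VARIANT point concrete, the following is a minimal, hypothetical sketch; the table, column, and JSON field names are assumptions for illustration and do not come from this paper:

    -- Load JSON into a VARIANT column and query nested fields with path notation.
    -- All object names here are illustrative only.
    create table raw_events (payload variant);

    insert into raw_events
    select parse_json('{"user": {"id": 42, "city": "Boston"}, "event": "login"}');

    select
        payload:user.id::number   as user_id,
        payload:user.city::string as city,
        payload:event::string     as event_type
    from raw_events;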
WHAT IS DBT?

When data teams work in silos, data quality suffers. dbt provides a common space for analysts, data engineers, and data scientists to collaborate on transformation workflows using their shared knowledge of SQL.

By applying proven software development best practices such as modularity, portability, version control, testing, and documentation, dbt's analytics engineering workflow helps data teams build trusted data, faster.

dbt transforms the data already in your data warehouse. Transformations are expressed in simple SQL SELECT statements and, when executed, dbt compiles the code, infers dependency graphs, runs models in order, and writes the necessary DDL/DML to execute against your Snowflake instance. This makes it possible for users to focus on writing SQL and not worry about the rest. For writing code that is DRY (don't repeat yourself), users can use Jinja alongside SQL to express repeated logic using control structures such as loops and conditional statements.
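As a minimal illustration of DRY Jinja in a dbt model, a loop can generate one aggregated column per payment method instead of hand-writing each CASE expression. The payment methods, column names, and the payments model referenced here are hypothetical, not taken from this paper:

    -- Hypothetical dbt model: pivot payment amounts by method using a Jinja loop.
    {% set payment_methods = ['bank_transfer', 'credit_card', 'gift_card'] %}

    select
        order_id,
        {% for method in payment_methods %}
        sum(case when payment_method = '{{ method }}' then amount else 0 end)
            as {{ method }}_amount{% if not loop.last %},{% endif %}
        {% endfor %}
    from {{ ref('payments') }}
    group by order_id

When dbt compiles this model, the loop expands into one aggregate expression per method, so supporting a new payment method only requires editing the list.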
DBT CLOUD

dbt Cloud is the fastest and most reliable way to deploy dbt. It provides a centralized experience for teams to develop, test, schedule, and investigate data models—all in one web-based UI (see Figure 2). This is made possible through features such as an intuitive IDE, automated testing and documentation, in-app scheduling and alerting, access control, and a native Git integration.

dbt Cloud also eliminates the setup and maintenance work required to manage data transformations in Snowflake at scale. A turn-key adapter establishes a secure connection built to handle enterprise loads, while allowing for fine-grained policies and permissions.

Figure 2: dbt Cloud provides a centralized experience for developing, testing, scheduling, and investigating data models.

CUSTOMER USE CASE

When Ben Singleton joined JetBlue as its Director of Data Science & Analytics, he stepped into a whirlpool of demands that his team struggled to keep up with. The data team was facing a barrage of concerns and low stakeholder trust.

"My welcome to JetBlue involved a group of senior leaders making it clear that they were frustrated with the current state of data," Singleton said.

What made matters worse was that the experts were not empowered to take ownership of their own data due to the inaccessibility of the data stack.

As Singleton dug in, he realized the solution wasn't incremental performance improvement but rather a complete infrastructure overhaul. By pairing Snowflake with dbt, JetBlue was able to transform the data team from being a bottleneck to being the enablers of a data democracy.

"Every C-level executive wants more questions answered with data, they want that data faster, and they want it in many different ways. It's critical for us," Singleton said. All of this was done without an increase in infrastructure costs. To read more about JetBlue's success story, see the JetBlue case study.¹

The remainder of this paper dives into the exact dbt and Snowflake best practices that JetBlue and thousands of other clients have implemented to optimize performance.

OPTIMIZING SNOWFLAKE

Your business logic is defined in dbt, but dbt ultimately pushes down all processing to Snowflake. For that reason, optimizing the Snowflake side of your deployment is critical to maximizing your query performance and minimizing deployment costs. The table below summarizes the main areas and relevant best practices for Snowflake and serves as a checklist for your deployment.
AREA | BEST PRACTICES | WHY
Automated resource optimization for dbt query tuning | Automatic clustering | Automated table maintenance
 | Materialized views | Pre-compute complex logic
 | Query acceleration services | Automated scale-out of part of a query to speed up performance without resizing the warehouse
Resource management and monitoring | Auto-suspend policies | Automatic stop of a warehouse to reduce costs
 | Resource monitors | Control of resource utilization and cost
 | Naming conventions | Ease of tracking, allocation, and reporting
 | Role-based access control | Governance and cost allocation
 | Monitoring | Resource and cost consumption monitoring
Individual dbt workload elasticity | Scaling up for performance | Resizing the warehouse to increase performance for complex workloads
 | Scaling out for concurrency | Spinning up additional warehouses to support a spike in concurrency
Writing effective SQL statements | Applying filters as early as possible | Optimizing row operations and reducing records in subsequent operations
 | Querying only what you need | Selecting only the columns needed to optimize the columnar store
 | Joining on unique keys | Optimizing JOIN operations and avoiding cross-joins
 | Avoiding complex functions and UDFs in WHERE clauses | Pruning
AUTOMATED RESOURCE OPTIMIZATION FOR DBT QUERY TUNING

Performance and scale are core to Snowflake. Snowflake's functionality is designed such that users can focus on core analytical tasks instead of on tuning the platform or investing in complicated workload management.

Automatic clustering

Traditionally, legacy on-premises and cloud data warehouses relied on static partitioning of large tables to achieve acceptable performance and enable better scaling. In these systems, a partition is a unit of management that is manipulated independently using specialized DDL and syntax; however, static partitioning has a number of well-known limitations, such as maintenance overhead and data skew, which can result in disproportionately sized partitions. It was the user's responsibility to constantly optimize the underlying data storage. This involved work such as updating indexes and statistics, post-load vacuuming procedures, choosing the right distribution keys, dealing with slow partitions due to growing skews, and manually reordering data as new data arrived or got modified.

In contrast, Snowflake implements a powerful and unique form of partitioning called micro-partitioning, which delivers all the advantages of static partitioning without the known limitations, as well as providing additional significant benefits. Snowflake's scalable, multi-cluster virtual warehouse technology automates the maintenance of micro-partitions. This means Snowflake efficiently and automatically executes the re-clustering in the background. There's no need to create, size, or resize a virtual warehouse. The compute service continuously monitors the clustering quality of all registered clustered tables. It starts with the most unclustered micro-partitions and iteratively performs the clustering until an optimal clustering depth is achieved.

With Snowflake, you can define clustered tables if the natural ingestion order is not sufficient in the presence of varying data access patterns. Automatic clustering is a Snowflake service that seamlessly and continually manages all reclustering, as needed, of clustered tables. Its benefits include the following:

• You no longer need to run manual operations to recluster data.
• Incremental clustering is done as new data arrives or a large amount of data is modified.
• Data pipelines consisting of DML operations (INSERT, DELETE, UPDATE, MERGE) can run concurrently and are not blocked.
• Snowflake performs automatic reclustering in the background, and you do not need to specify a warehouse to use.
• You can resume and suspend automatic clustering on a per-table basis, and you are billed by the second for only the compute resources used.
• Snowflake internally manages the state of clustered tables, as well as the resources (servers, memory, and so on) used for all automated clustering operations. This allows Snowflake to dynamically allocate resources as needed, resulting in the most efficient and effective reclustering. The Automatic Clustering service does not perform any unnecessary reclustering. Reclustering is triggered only when a table would benefit from the operation.

dbt supports table clustering on Snowflake. To control clustering for a table or incremental model, use the cluster_by configuration, as shown in the sketch below. Refer to the Snowflake configuration guide for more details.
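A minimal sketch of such a configuration, assuming a hypothetical incremental model that is commonly filtered by an event date column (the model body, file name, and column names are illustrative):

    -- Hypothetical dbt model (models/fct_events.sql) clustered by event_date.
    {{
        config(
            materialized='incremental',
            unique_key='event_id',
            cluster_by=['event_date']
        )
    }}

    select
        event_id,
        event_date,
        user_id,
        event_type
    from {{ ref('stg_events') }}
    {% if is_incremental() %}
    where event_date > (select max(event_date) from {{ this }})
    {% endif %}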
Materialized views

A materialized view is a pre-computed data set derived from a query specification (the SELECT in the view definition) and stored for later use. Because the data is pre-computed, querying a materialized view (MV) is faster than executing a query against the base table of the view. This performance difference can be significant when a query is run frequently or is sufficiently complex. As a result, MVs can speed up expensive aggregation, projection, and selection operations, especially those that run frequently and that run on large data sets. dbt does not support MVs out of the box as materializations; therefore, we recommend using custom materializations as a solution to achieve similar purposes. The dbt materializations section in this white paper explains how MVs can be used in dbt via a custom materialization.
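For context, the DDL that such a custom materialization would ultimately issue against Snowflake looks roughly like the following sketch; the schema, table, and column names are assumptions for illustration:

    -- Hypothetical Snowflake DDL a custom dbt materialization might generate.
    -- Snowflake MVs are defined on a single base table (no joins).
    create materialized view analytics.mv_daily_sales as
    select
        order_date,
        count(*)    as order_count,
        sum(amount) as total_amount
    from analytics.orders
    group by order_date;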
MVs are particularly useful when:

• Query results contain a small number of rows and/or columns relative to the base table (the table on which the view is defined)
• Query results contain results that require significant processing, including:
– Analysis of semi-structured data
– Aggregates that take a long time to calculate
• The query is on an external table (that is, data sets stored in files in an external stage), which might have slower performance compared to querying native database tables
• The view's base table does not change frequently

In general, when deciding whether to create an MV or a regular view, use the following criteria:

• Create an MV when all of the following are true:
– The query results from the view don't change often. This almost always means that the underlying/base table for the view doesn't change often, or at least that the subset of base table rows used in the MV doesn't change often.
– The results of the view are used often (typically, significantly more often than the query results change).
– The query consumes a lot of resources. Typically, this means that the query consumes a lot of processing time or credits, but it could also mean that the query consumes a lot of storage space for intermediate results.

• Create a regular view when any of the following are true:
– The results of the view change often.
– The results are not used often (relative to the rate at which the results change).
– The query is not resource-intensive, so it is not costly to re-run it.

These criteria are just guidelines. An MV might provide benefits even if it is not used often—especially if the results change less frequently than the usage of the view.

There are also other factors to consider when deciding whether to use a regular view or an MV. One such example is the cost of storing and maintaining the MV. If the results are not used very often (even if they are used more often than they change), the additional storage and compute resource costs might not be worth the performance gain.

Snowflake's compute service monitors the base tables for MVs and kicks off refresh statements for the corresponding MVs if significant changes are detected. This maintenance process of all dependent MVs is asynchronous. In scenarios where a user is accessing an MV that has yet to be updated, Snowflake's query engine will perform a combined execution with the base table to always ensure consistent query results. Similar to Snowflake's automatic clustering with the ability to resume or suspend per table, a user can resume and suspend the automatic maintenance on a per-MV basis. The automatic refresh process consumes resources and can result in increased credit usage. However, Snowflake ensures efficient credit usage by billing your account only for the actual resources used. Billing is calculated in one-second increments.

You can control the cost of maintaining MVs by carefully choosing how many views to create, which tables to create them on, and each view's definition (including the number of rows and columns in that view).

You can also control costs by suspending or resuming an MV; however, suspending maintenance typically only defers costs rather than reducing them. The longer that maintenance has been deferred, the more maintenance there is to do.
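Suspending and resuming maintenance is done per materialized view; a hedged sketch, reusing the illustrative view name from the earlier example:

    -- Pause and later resume background maintenance for a hypothetical MV.
    alter materialized view analytics.mv_daily_sales suspend;
    alter materialized view analytics.mv_daily_sales resume;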
If you are concerned about the cost of maintaining MVs, we recommend you start slowly with this feature (that is, create only a few MVs on selected tables) and monitor the costs over time.

It's a good idea to carefully evaluate these guidelines based on your dbt deployment to see if querying from MVs will boost performance compared to base tables or regular views without cost overhead.

Query acceleration services

Sizing the warehouse just right for a workload is generally a hard trade-off between minimizing cost and maximizing query performance. You'll usually have to monitor, measure, and pick an acceptable point in this price-performance spectrum and readjust as required. Workloads that are unpredictable in terms of either the number of concurrent queries or the amount of data required for a given query make this challenging.
Multi-cluster warehouses handle the first case well and scale out only when there are enough queries to justify it. For the case where there is an unpredictable amount of data in the queries, you usually have to either wait longer for queries that look at larger data sets or resize the entire warehouse, which affects all clusters in the warehouse and the entire workload.

Snowflake's Query Acceleration Service provides a good default for the price-performance spectrum by automatically identifying and scaling out parts of the query plan that are easily parallelizable (for example, per-file operations such as filters, aggregations, scans, and join probes using bloom filters). The benefit is a much reduced query runtime at a lower cost than would result from just using a larger warehouse.

The Query Acceleration Service achieves this by elastically recruiting ephemeral worker nodes to lend a helping hand to the warehouse. Parallelizable fragments of the query plan are queued up for processing on leased workers, and the output of this fragment execution is materialized and consumed by the warehouse workers as a stream. As a result, a query over a large data set can finish faster, use fewer resources on the warehouse, and, potentially, cost fewer total credits than it would with the current model.

What makes this feature unique is:

• It supports filter types, including joins
• No specialized hardware is required
• You can enable, disable, or configure the service without disrupting your workload

This is a great feature to use in your dbt deployment if you are looking to:

• Accelerate long-running dbt queries that scan a lot of data
• Reduce the impact of scan-heavy outliers
• Scale performance beyond the largest warehouse size
• Speed up performance without changing the warehouse size

Please note that this feature is currently managed outside of dbt. This feature is in private preview at the time of this white paper's first publication; please reach out to your Snowflake representative if you are interested in experiencing this feature with your dbt deployment.
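In Snowflake releases where the service is generally available, it is exposed as warehouse-level properties. The following is a hedged sketch only; the warehouse name is illustrative, and the parameter names and availability may differ from the preview behavior described above, so verify against current Snowflake documentation before use:

    -- Hypothetical example of enabling query acceleration on a warehouse
    -- (assumes the generally available warehouse parameters).
    alter warehouse dbt_transform_wh set
        enable_query_acceleration = true
        query_acceleration_max_scale_factor = 8;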
RESOURCE MANAGEMENT AND MONITORING

A virtual warehouse consumes Snowflake credits while it runs, and the amount consumed depends on the size of the warehouse and how long it runs. Snowflake provides a rich set of resource management and monitoring capabilities to help control costs and avoid unexpected credit usage, not just for dbt transformation jobs but for all workloads.

Auto-suspend policies

The very first resource control that you should implement is setting auto-suspend policies for each of your warehouses. This feature automatically stops warehouses after they've been idle for a predetermined amount of time.

We recommend setting auto-suspend according to your workload and your requirements for warehouse availability:

• If you enable auto-suspend for your dbt workload, we recommend setting a more aggressive policy, with the standard recommendation being 60 seconds, because there is little benefit from caching.
• You might want to consider disabling auto-suspend for a warehouse if:
– You have a heavy, steady workload for the warehouse.
– You require the warehouse to be available with no delay or lag time. While warehouse provisioning is generally very fast (for example, 1 or 2 seconds), it's not entirely instant; depending on the size of the warehouse and the availability of compute resources to provision, it can take longer.

If you do choose to disable auto-suspend, you should carefully consider the costs associated with running a warehouse continually, even when the warehouse is not processing queries. The costs can be significant, especially for larger warehouses (X-Large, 2X-Large, or larger).

We recommend that you customize auto-suspend thresholds for warehouses assigned to different workloads to assist in warehouse responsiveness:

• Warehouses used for queries that benefit from caching should have a longer auto-suspend period to allow for the reuse of results in the query cache.
• Warehouses used for data loading can be suspended immediately after queries are completed. Enabling auto-resume will restart a virtual warehouse as soon as it receives a query.
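A minimal sketch of these settings, assuming a hypothetical warehouse dedicated to dbt runs (the warehouse names and size are illustrative):

    -- Hypothetical warehouse for dbt jobs: aggressive 60-second auto-suspend,
    -- auto-resume on the next query. AUTO_SUSPEND is specified in seconds.
    create warehouse if not exists dbt_transform_wh
        warehouse_size = 'MEDIUM'
        auto_suspend = 60
        auto_resume = true
        initially_suspended = true;

    -- A warehouse serving cached BI queries might use a longer idle window.
    alter warehouse bi_reporting_wh set auto_suspend = 600;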
Resource monitors

Resource monitors can be used by account administrators to impose limits on the number of credits that are consumed by different workloads, including dbt jobs, within each monthly billing period, by:

• User-managed virtual warehouses
• Virtual warehouses used by cloud services

When these limits are either close to being reached or have been reached, the resource monitor can send alert notifications or suspend the warehouses.

It is essential to be aware of the following rules about resource monitors:

• A monitor can be assigned to one or more warehouses.
• Each warehouse can be assigned to only one resource monitor.
• A monitor can be set at the account level to control credit usage for all warehouses in your account.
• An account-level resource monitor does not override resource monitor assignment for individual warehouses. If either the warehouse-level or account-level resource monitor reaches its defined threshold, the warehouse is suspended. This enables controlling global credit usage while also providing fine-grained control over credit usage for individual or specific warehouses.
• In addition, an account-level resource monitor does not control credit usage by the Snowflake-provided warehouses (used for Snowpipe, automatic reclustering, and MVs); the monitor controls only the virtual warehouses created in your account.

Considering these rules, the following are some recommendations on resource monitoring strategy:

• Define an account-level budget.
• Define priority warehouse(s), including warehouses for dbt workloads, and carve from the master budget for priority warehouses.
• Create a resource allocation story and map.

Figure 3 illustrates an example scenario for a resource monitoring strategy in which one resource monitor is set at the account level, and individual warehouses are assigned to two other resource monitors.

Figure 3: Example scenario for a resource monitoring strategy. Resource Monitor 1 (credit quota 5,000) is set for the account; Resource Monitor 2 (credit quota 1,000) is assigned to Warehouse 3; Resource Monitor 3 (credit quota 2,500) is assigned to Warehouses 4 and 5; Warehouses 1 and 2 are covered only by the account-level monitor.
In the example (Figure 3), the credit quota for the entire account is 5,000 per month; if this quota is reached within the interval, the actions defined for the resource monitor (Suspend, Suspend Immediate, and so on) are enforced for all five warehouses.

Warehouse 3 performs ETL, including ETL for dbt jobs. From historical ETL loads, we estimated it can consume a maximum of 1,000 credits for the month. We assigned this warehouse to Resource Monitor 2.

Warehouses 4 and 5 are dedicated to the business intelligence and data science teams. Based on their historical usage, we estimated they can consume a maximum combined total of 2,500 credits for the month. We assigned these warehouses to Resource Monitor 3.

Warehouses 1 and 2 are for development and testing. Based on historical usage, we don't need to place a specific resource monitor on them.

The credits consumed by Warehouses 3, 4, and 5 may be less than their quotas if the account-level quota is reached first.

The used credits for a resource monitor reflect the sum of all credits consumed by all assigned warehouses within the specified interval. If a monitor has a Suspend or Suspend Immediately action defined and its used credits reach the threshold for the action, any warehouses assigned to the monitor are suspended and cannot be resumed until one of the following conditions is met:

• The next interval, if any, starts, as dictated by the start date for the monitor.
• The credit quota for the monitor is increased.
• The credit threshold for the suspend action is increased.
• The warehouses are no longer assigned to the monitor.
• The monitor is dropped.
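A minimal sketch of how the Figure 3 setup could be expressed in Snowflake SQL, assuming illustrative monitor and warehouse names and the quotas described above (the trigger thresholds are hypothetical, and the statements require an administrator role such as ACCOUNTADMIN):

    -- Hypothetical account-level monitor (Resource Monitor 1): 5,000 credits per month.
    create resource monitor resource_monitor_1 with
        credit_quota = 5000
        frequency = monthly
        start_timestamp = immediately
        triggers on 90 percent do notify
                 on 100 percent do suspend;

    alter account set resource_monitor = resource_monitor_1;

    -- Hypothetical monitor for the ETL/dbt warehouse (Resource Monitor 2): 1,000 credits.
    create resource monitor resource_monitor_2 with
        credit_quota = 1000
        frequency = monthly
        start_timestamp = immediately
        triggers on 100 percent do suspend;

    alter warehouse warehouse_3 set resource_monitor = resource_monitor_2;

    -- Hypothetical monitor for the BI and data science warehouses (Resource Monitor 3): 2,500 credits.
    create resource monitor resource_monitor_3 with
        credit_quota = 2500
        frequency = monthly
        start_timestamp = immediately
        triggers on 100 percent do suspend;

    alter warehouse warehouse_4 set resource_monitor = resource_monitor_3;
    alter warehouse warehouse_5 set resource_monitor = resource_monitor_3;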