
Serving Machine Learning Models
A Guide to Architecture, Stream Processing Engines, and Frameworks

Boris Lublinsky

Serving Machine Learning Models
by Boris Lublinsky

Copyright © 2017 Lightbend, Inc. All rights reserved. Printed in the United States of America. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com/safari). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Editors: Brian Foster and Virginia Wilson
Production Editor: Justin Billing
Copyeditor: Octal Publishing, Inc.
Proofreader: Charles Roumeliotis
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Rebecca Demarest

First Edition: October 2017
Revision History for the First Edition: 2017-10-11: First Release

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Serving Machine Learning Models, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

978-1-492-02406-4
[LSI]

Table of Contents

Introduction

1. Proposed Implementation
   Overall Architecture
   Model Learning Pipeline

2. Exporting Models
   TensorFlow
   PMML

3. Implementing Model Scoring
   Model Representation
   Model Stream
   Model Factory
   Test Harness

4. Apache Flink Implementation
   Overall Architecture
   Using Key-Based Joins
   Using Partition-Based Joins

5. Apache Beam Implementation
   Overall Architecture
   Implementing Model Serving Using Beam

6. Apache Spark Implementation
   Overall Architecture
   Implementing Model Serving Using Spark Streaming

7. Apache Kafka Streams Implementation
   Implementing the Custom State Store
   Implementing Model Serving
   Scaling the Kafka Streams Implementation

8. Akka Streams Implementation
   Overall Architecture
   Implementing Model Serving Using Akka Streams
   Scaling Akka Streams Implementation
   Saving Execution State

9. Monitoring
   Flink
   Kafka Streams
   Akka Streams
   Spark and Beam
   Conclusion

Introduction

Machine learning is the hottest thing in software engineering today. There are a lot of publications on machine learning appearing daily, and new machine learning products are appearing all the time. Amazon, Microsoft, Google, IBM, and others have introduced machine learning as managed cloud offerings.

However, one of the areas of machine learning that is not getting enough attention is model serving: how to serve the models that have been trained using machine learning.

The complexity of this problem comes from the fact that model training and model serving are typically the responsibilities of two different groups in the enterprise who have different functions, concerns, and tools. As a result, the transition between these two activities is often nontrivial. In addition, as new machine learning tools appear, developers are often forced to create new model serving frameworks compatible with the new tooling.

This book introduces a slightly different approach to model serving, based on a standardized, document-based intermediate representation of trained machine learning models and the use of such representations for serving in a stream-processing context. It proposes an overall architecture implementing controlled streams of both data and models, which enables not only serving models in real time as part of processing the input streams, but also updating models without restarting existing applications.

Who This Book Is For

This book is intended for people who are interested in approaches to real-time serving of machine learning models that support real-time model updates. It describes step-by-step options for exporting models, what exactly to export, and how to use these models for real-time serving.

The book is also intended for people who are trying to implement such solutions using modern stream processing engines and frameworks such as Apache Flink, Apache Spark Streaming, Apache Beam, Apache Kafka Streams, and Akka Streams. It provides a set of working examples of using these technologies for a model serving implementation.

Why Is Model Serving Difficult?

When it comes to machine learning implementations, organizations typically employ two very different groups of people: data scientists, who are typically responsible for the creation and training of models, and software engineers, who concentrate on model scoring. These two groups typically use completely different tools: data scientists work with R, Python, notebooks, and so on, whereas software engineers typically use Java, Scala, Go, and so forth. Their activities are driven by different concerns: data scientists need to cope with the amount of data, data cleaning issues, model design and comparison, and so on; software engineers are concerned with production issues such as performance, maintainability, monitoring, scalability, and failover.

These differences are currently fairly well understood and result in many "proprietary" model scoring solutions, for example, TensorFlow model serving and Spark-based model serving. Additionally, all of the managed machine learning implementations (Amazon, Microsoft, Google, IBM, etc.) provide model serving capabilities.

Tools Proliferation Makes Things Worse

In a recent talk, Ted Dunning describes the fact that with multiple tools available to data scientists, they tend to use different tools to solve different problems (because every tool has its own sweet spot and the number of tools grows daily), and, as a result, they are not very keen on tools standardization. This creates a problem for software engineers trying to use "proprietary" model serving tools supporting specific machine learning technologies. As data scientists evaluate and introduce new technologies for machine learning, software engineers are forced to introduce new software packages supporting model scoring for these additional technologies.

One of the approaches to dealing with these problems is the introduction of an API gateway on top of the proprietary systems. Although this hides the disparity of the backend systems from consumers behind unified APIs, for model serving it still requires installation and maintenance of the actual model serving implementations.

Model Standardization to the Rescue

To overcome these complexities, the Data Mining Group has introduced two model representation standards: Predictive Model Markup Language (PMML) and Portable Format for Analytics (PFA).

The Data Mining Group defines PMML as:

    an XML-based language that provides a way for applications to define statistical and data-mining models as well as to share models between PMML-compliant applications.

    PMML provides applications a vendor-independent method of defining models so that proprietary issues and incompatibilities are no longer a barrier to the exchange of models between applications. It allows users to develop models within one vendor's application, and use other vendors' applications to visualize, analyze, evaluate or otherwise use the models. Previously, this was very difficult, but with PMML, the exchange of models between compliant applications is now straightforward. Because PMML is an XML-based standard, the specification comes in the form of an XML Schema.

The Data Mining Group describes PFA as:

    an emerging standard for statistical models and data transformation engines. PFA combines the ease of portability across systems with algorithmic flexibility: models, pre-processing, and post-processing are all functions that can be arbitrarily composed, chained, or built into complex workflows. PFA may be as simple as a raw data transformation or as sophisticated as a suite of concurrent data mining models, all described as a JSON or YAML configuration file.
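To make these descriptions concrete, here is what the two formats look like on the wire. Both snippets are illustrative sketches of my own, not taken from the book: first a minimal PMML document describing a trivial linear regression y = 2x + 1, then an equally minimal PFA document that adds one to its input.

    <PMML version="4.3" xmlns="http://www.dmg.org/PMML-4_3">
      <Header description="Illustrative model: y = 2x + 1"/>
      <DataDictionary numberOfFields="2">
        <DataField name="x" optype="continuous" dataType="double"/>
        <DataField name="y" optype="continuous" dataType="double"/>
      </DataDictionary>
      <RegressionModel functionName="regression">
        <MiningSchema>
          <MiningField name="x"/>
          <MiningField name="y" usageType="target"/>
        </MiningSchema>
        <RegressionTable intercept="1.0">
          <NumericPredictor name="x" coefficient="2.0"/>
        </RegressionTable>
      </RegressionModel>
    </PMML>

    {"input": "double",
     "output": "double",
     "action": [{"+": ["input", 1]}]}

Note that in both cases the model is pure data: it can be versioned, validated, and, as this book proposes, sent over a stream like any other message.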

Another de facto standard in machine learning today is TensorFlow, an open-source software library for machine intelligence. TensorFlow can be defined as follows:

    At a high level, TensorFlow is a Python library that allows users to express arbitrary computation as a graph of data flows. Nodes in this graph represent mathematical operations, whereas edges represent data that is communicated from one node to another. Data in TensorFlow are represented as tensors, which are multidimensional arrays.

TensorFlow was released by Google in 2015 to make it easier for developers to design, build, and train deep learning models, and since then, it has become one of the most used software libraries for machine learning. You can also use TensorFlow as a backend for some of the other popular machine learning libraries, for example, Keras. TensorFlow allows for the exporting of trained models in protocol buffer formats (both text and binary) that you can use for transferring models between machine learning and model serving. In an attempt to make TensorFlow more Java friendly, TensorFlow Java APIs were released in 2017, which enable scoring TensorFlow models using any Java Virtual Machine (JVM)-based language.

All of the aforementioned model export approaches are designed for platform-neutral descriptions of the models that need to be served. The introduction of these model export approaches led to the creation of several software products dedicated to "generic" model serving, for example, Openscoring and Open Data Group.

Another result of this standardization is the creation of open source projects building generic "evaluators" based on these formats. JPMML and Hadrian are two examples that are being adopted more and more for building model-serving implementations, such as in these example projects: ING, R implementation, SparkML support, Flink support, and so on.

Additionally, because models are represented not as code but as data, usage of such a model description allows manipulation of models as a special type of data that is fundamental for our proposed solution.

Why I Wrote This Book

This book describes the problem of serving models resulting from machine learning in streaming applications. It shows how to export trained models in TensorFlow and PMML formats and use them for model serving, using several popular streaming engines and frameworks.

I deliberately do not favor any specific solution. Instead, I outline options, with some pros and cons. The choice of the best solution depends greatly on the concrete use case that you are trying to solve, more precisely:

- The number of models to serve. Increasing the number of models will skew your preference toward the use of the key-based approach, like Flink key-based joins.
- The amount of data to be scored by each model. Increasing the volume of data suggests partition-based approaches, like Spark or Flink partition-based joins.
- The number of models that will be used to score each data item. You'll need a solution that easily supports the use of composite keys to match each data item to multiple models.
- The complexity of the calculations during scoring and additional processing of scored results. As the complexity grows, so will the load, which suggests using streaming engines rather than streaming libraries.
- Scalability requirements. If they are low, using streaming libraries like Akka and Kafka Streams can be a better option due to their relative simplicity compared to engines like Spark and Flink, their ease of adoption, and the relative ease of maintaining these applications.
- Your organization's existing expertise, which can suggest making choices that might be suboptimal, all other considerations being equal, but that are more comfortable for your organization.

I hope this book provides the guidance you need for implementing your own solution.

How This Book Is Organized

The book is organized as follows:

- Chapter 1 describes the overall proposed architecture.

- Chapter 2 talks about exporting models using examples of TensorFlow and PMML.
- Chapter 3 describes common components used in all solutions.
- Chapter 4 through Chapter 8 describe model serving implementations for different stream processing engines and frameworks.
- Chapter 9 covers monitoring approaches for model serving implementations.

A Note About Code

The book contains a lot of code snippets. You can find the complete code in the following Git repositories:

- Python examples is the repository containing Python code for exporting TensorFlow models described in Chapter 2.
- Beam model server is the repository containing code for the Beam solution described in Chapter 5.
- Model serving is the repository containing the rest of the code described in the book.

Acknowledgments

I would like to thank the people who helped me in writing this book and making it better, especially:

- Konrad Malawski, for his help with the Akka implementation and overall review
- Dean Wampler, who did a thorough review of the overall book and provided many useful suggestions
- Trevor Grant, for conducting a technical review
- The entire Lightbend Fast Data team, especially Stavros Kontopoulos, Debasish Ghosh, and Jim Powers, for many useful comments and suggestions about the original text and code

CHAPTER 1
Proposed Implementation

The majority of model serving implementations today are based on representational state transfer (REST), which might not be appropriate for high-volume data processing or for use in streaming systems. Using REST requires streaming applications to go "outside" of their execution environment and make an over-the-network call to obtain model serving results.

The "native" implementations in new streaming engines (for example, Flink TensorFlow or Flink JPMML) do not have this problem, but they require that you restart the implementation to update the model, because the model itself is part of the overall code implementation.

Here we present an architecture for scoring models natively in a streaming system that allows you to update models without interruption of execution.

Overall Architecture

Figure 1-1 presents a high-level view of the proposed model serving architecture (similar to a dynamically controlled stream).

Figure 1-1. Overall architecture of model serving

This architecture assumes two data streams: one containing data that needs to be scored, and one containing the model updates. The streaming engine holds the current model used for the actual scoring in memory. The results of scoring can be either delivered to the customer or used by the streaming engine internally as a new stream, that is, as input for additional calculations. If there is no model currently defined, the input data is dropped. When a new model is received, it is instantiated in memory, and when instantiation is complete, scoring is switched to the new model. The model stream can either contain the binary blob of the model data itself or a reference to model data stored externally (pass by reference) in a database or a filesystem, like Hadoop Distributed File System (HDFS) or Amazon Web Services Simple Storage Service (S3).

This approach effectively treats model scoring as a new type of functional transformation, which any other stream functional transformations can use.

Although the overall architecture above shows a single model, a single streaming engine could score multiple models simultaneously.

Model Learning Pipeline

For the longest period of time, model building implementation was ad hoc: people would transform source data any way they saw fit, do some feature extraction, and then train their models based on these features. The problem with this approach is that when someone wants to serve this model, he must discover all of those intermediate transformations and reimplement them in the serving application.

In an attempt to formalize this process, UC Berkeley AMPLab introduced the machine learning pipeline (Figure 1-2), which is a graph defining the complete chain of data transformation steps.

Figure 1-2. The machine learning pipeline

The advantage of this approach is twofold:

- It captures the entire processing pipeline, including data preparation transformations, machine learning itself, and any required postprocessing of the machine learning results. This means that the pipeline defines the complete transformation from well-defined inputs to outputs, thus simplifying updates of the model.
- The definition of the complete pipeline allows for optimization of the processing.

A given pipeline can encapsulate more than one model (see, for example, PMML model composition). In this case, we consider such models internal, that is, not visible for scoring. From a scoring point of view, a single pipeline always represents a single unit, regardless of how many models it encapsulates.

This notion of machine learning pipelines has been adopted by many applications including SparkML, TensorFlow, and PMML. From this point forward in this book, when I refer to model serving, I mean serving the complete pipeline.
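To make the dynamically controlled stream idea from Figure 1-1 concrete in code, here is a minimal sketch in the spirit of the book's Scala examples, but not taken from its repositories; Model, updateModel, and scoreRecord are illustrative names. It shows the state a streaming operator keeps: the current model used for scoring, and the switch to a newly arrived one.

    // Minimal sketch of the state behind the architecture in Figure 1-1.
    // All names are illustrative; real implementations appear in Chapters 4-8.
    trait Model {
      def score(record: Array[Double]): Double   // score a single data record
      def cleanup(): Unit                        // release model resources
    }

    final class ModelServingState {
      private var currentModel: Option[Model] = None   // model used for scoring

      // Invoked for every element of the model stream: once the new model is
      // fully instantiated, scoring is switched to it.
      def updateModel(model: Model): Unit = {
        currentModel.foreach(_.cleanup())
        currentModel = Some(model)
      }

      // Invoked for every element of the data stream. Returns None when no
      // model is defined yet, in which case the input record is dropped.
      def scoreRecord(record: Array[Double]): Option[Double] =
        currentModel.map(_.score(record))
    }

Chapters 4 through 8 realize this two-streams-plus-state pattern with the specific facilities (joins, state stores, actors) of each engine or framework.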

CHAPTER 2
Exporting Models

Before delving into model serving, it is necessary to discuss the topic of exporting models. As discussed previously, data scientists define models, and engineers implement model serving. Hence, the ability to export models from data science tools is now important.

For this book, I will use two different examples: Predictive Model Markup Language (PMML) and TensorFlow. Let's look at the ways in which you can export models using these tools.

TensorFlow

To facilitate easier implementation of model scoring, TensorFlow supports export of the trained models, which Java APIs can use to implement scoring. TensorFlow Java APIs are not doing the actual processing; they are just thin Java Native Interface (JNI) wrappers on top of the actual TensorFlow C++ code. Consequently, their usage requires "linking" the TensorFlow C++ executable to your Java application.

TensorFlow currently supports two types of model export: export of the execution graph, which can be optimized for inference, and a new SavedModel format, introduced this year.

Exporting the Execution Graph

Exporting the execution graph is a "standard" TensorFlow approach to saving the model. Let's take a look at an example of adding an execution graph export to a multiclass classification problem implementation using Keras with a TensorFlow backend, applied to an open source wine quality dataset (complete code).

Example 2-1. Exporting an execution graph from a Keras model

    # Imports needed by this snippet (TF 1.x)
    import tensorflow as tf
    from keras import backend as K
    from tensorflow.python.tools import freeze_graph
    from tensorflow.python.tools import optimize_for_inference_lib

    # Create TF session and set it in Keras
    sess = tf.Session()
    K.set_session(sess)
    # ... model definition and training elided ...
    # Saver op to save and restore all the variables
    saver = tf.train.Saver()
    # Save produced model
    model_path = "path"
    model_name = "WineQuality"
    save_path = saver.save(sess, model_path + model_name + ".ckpt")
    print("Saved model at ", save_path)
    # Save the graph definition used by freeze_graph below
    # (reconstructed line; lost in the transcription)
    graph_path = tf.train.write_graph(sess.graph_def, model_path,
                                      model_name + ".pbtxt")
    # Now freeze the graph (put variables into graph)
    input_saver_def_path = ""
    input_binary = False
    output_node_names = "dense_3/Sigmoid"
    restore_op_name = "save/restore_all"
    filename_tensor_name = "save/Const:0"
    output_frozen_graph_name = model_path + 'frozen_' + model_name + '.pb'
    clear_devices = True
    freeze_graph.freeze_graph(graph_path, input_saver_def_path,
                              input_binary, save_path, output_node_names,
                              restore_op_name, filename_tensor_name,
                              output_frozen_graph_name, clear_devices, "")
    # Optimizing graph
    input_graph_def = tf.GraphDef()
    with tf.gfile.Open(output_frozen_graph_name, "rb") as f:
        data = f.read()
        input_graph_def.ParseFromString(data)
    output_graph_def = optimize_for_inference_lib.optimize_for_inference(
        input_graph_def,
        ["dense_1_input"],
        ["dense_3/Sigmoid"],
        tf.float32.as_datatype_enum)
    # Save the optimized graph
    tf.train.write_graph(output_graph_def, model_path,
                         "optimized_" + model_name + ".pb", as_text=False)

Example 2-1 is adapted from a Keras machine learning example to demonstrate how to export a TensorFlow graph. To do this, it is necessary to explicitly set the TensorFlow session for Keras execution.

The TensorFlow execution graph is tied to the execution session, so the session is required to gain access to the graph. The actual graph export implementation involves the following steps:

1. Save the initial graph.
2. Freeze the graph (this means merging the graph definition with the parameters).
3. Optimize the graph for serving (remove elements that do not affect serving).
4. Save the optimized graph.

The saved graph is an optimized graph stored using the binary Google protocol buffer (protobuf) format, which contains only the portions of the overall graph and data relevant for model serving (the portions of the graph implementing learning and intermediate calculations are dropped).

After the model is exported, you can use it for scoring. Example 2-2 uses the TensorFlow Java APIs to load and score the model (full code available here).

Example 2-2. Serving the model created from the execution graph of the Keras model

    class WineModelServing(path : String) {
      import WineModelServing._
      // Constructor
      val lg = readGraph(Paths.get(path))
      val ls = new Session(lg)

      def score(record : Array[Float]) : Double = {
        val input = Tensor.create(Array(record))
        val result = ls.runner.feed("dense_1_input", input)
          .fetch("dense_3/Sigmoid").run().get(0)
        // Extract result value
        val rshape = result.shape
        var rMatrix = Array.ofDim[Float](rshape(0).asInstanceOf[Int],
          rshape(1).asInstanceOf[Int])
        result.copyTo(rMatrix)
        // Pick the class with the highest score
        var value = (0, rMatrix(0)(0))
        1 to (rshape(1).asInstanceOf[Int] - 1) foreach { i =>
          if (rMatrix(0)(i) > value._2)
            value = (i, rMatrix(0)(i))
        }
        value._1.toDouble
      }

      def cleanup() : Unit = {
        ls.close
      }
    }

    object WineModelServing {
      def main(args: Array[String]): Unit = {
        val model_path = "/optimized_WineQuality.pb"   // model
        val data_path = "/winequality_red.csv"         // data
        val lmodel = new WineModelServing(model_path)
        val inputs = getListOfRecords(data_path)
        inputs.foreach(record => println(s"result ${lmodel.score(record._1)}"))
        lmodel.cleanup()
      }

      private def readGraph(path: Path) : Graph = {
        try {
          val graphData = Files.readAllBytes(path)
          val g = new Graph
          g.importGraphDef(graphData)
          g
        } ...
      }
      ...
    }

In this simple code, the constructor uses the readGraph method to read the execution graph and creates a TensorFlow session with this graph attached to it.

The score method takes an input record containing wine quality observations and converts it to a tensor format, which is used as an input to the running graph. Because the exported graph does not provide any information about names and shapes of either inputs or outputs (the execution signature), when using this approach, it is necessary to know which variable(s) (i.e., input parameters) your flow accepts (feed) and which tensor(s) (and their shape) to fetch as a result. After the result is received (in the form of a tensor), its value is extracted.

The execution is orchestrated by the main method in the WineModelServing object. This method first creates an instance of the WineModelServing class, then reads the list of input records and, for each record, invokes the score method on the WineModelServing class instance.
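One practical way to discover valid feed and fetch names for a frozen graph is to iterate over the operations of the loaded graph. The following sketch is mine, not from the book; it assumes the Graph.operations() iterator of the TensorFlow Java API, and the file name is illustrative.

    import java.nio.file.{Files, Paths}
    import org.tensorflow.Graph
    import scala.collection.JavaConverters._

    object ListGraphOperations {
      def main(args: Array[String]): Unit = {
        // Load the frozen graph and print every operation name, so that
        // feed ("dense_1_input") and fetch ("dense_3/Sigmoid") candidates
        // can be identified by inspection.
        val graphData = Files.readAllBytes(Paths.get("optimized_WineQuality.pb"))
        val g = new Graph
        g.importGraphDef(graphData)
        g.operations().asScala.foreach { op =>
          val opType = op.`type`
          println(s"${op.name} [$opType], outputs: ${op.numOutputs}")
        }
        g.close()
      }
    }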

To run this code, in addition to the TensorFlow Java library, you must also have the TensorFlow C++ implementation library (.dll or .so) installed on the machine that will run the code.

Advantages of execution graph export include the following:

- Due to the optimizations, the exported graph has a relatively small size.
- The model is self-contained in a single file, which makes it easy to transport it as a binary blob, for instance, using a Kafka topic.

A disadvantage is that the user of the model must know explicitly both the input and output (and their shape and type) of the model to use the graph correctly; however, this is typically not a serious problem.

Exporting the Saved Model

TensorFlow SavedModel is a new export format, introduced in 2017, in which the model is exported as a directory with the following structure:

    assets/
    assets.extra/
    variables/
        variables.data-?????-of-?????
        variables.index
    saved_model.pb

where:

- assets is a subfolder containing auxiliary files such as vocabularies, etc.
- assets.extra is a subfolder where higher-level libraries and users can add their own assets that coexist with the model but are not loaded by the graph. It is not managed by the SavedModel libraries.
- variables is a subfolder containing output from the TensorFlow Saver: both the variables index and data.
- saved_model.pb contains the graph and MetaGraph definitions in binary protocol buffer format.

The advantages of the SavedModel format are:

- You can add multiple graphs sharing a single set of variables and assets to a single SavedModel. Each graph is associated with a specific set of tags to allow identification during a load or restore operation.
- Support for SignatureDefs. The definition of graph inputs and outputs (including shape and type for each of them) is called a signature. SavedModel uses SignatureDefs to allow generic support for signatures that might need to be saved with the graphs.
- Support for assets. In some cases, TensorFlow operations depend on external files for initialization, for example, vocabularies. SavedModel exports these additional files in the assets directory.

Here is a Python code snippet (complete code available here) that shows you how to save a trained model in the SavedModel format:

Example 2-3. Exporting a saved model from a Keras model

    # Imports used by this snippet (TF 1.x)
    from tensorflow.python.saved_model import builder as saved_model_builder
    from tensorflow.python.saved_model import tag_constants
    from tensorflow.python.saved_model.signature_def_utils import predict_signature_def

    # export_version = ...  # version number (integer)
    export_dir = "savedmodels/WineQuality/"
    builder = saved_model_builder.SavedModelBuilder(export_dir)
    signature = predict_signature_def(
        inputs={'winedata': model.input},
        outputs={'quality': model.output})
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        signature_def_map={'predict': signature})
    builder.save()

By replacing the execution graph export in Example 2-1 with this code, it is possible to get a saved model from your multiclass classification problem.

After you export the model into a directory, you can use it for serving. Example 2-4 (complete code available here) takes advantage of the TensorFlow Java APIs to load and score with the model.

Example 2-4. Serving a model based on the saved model from a Keras model

    object WineModelServingBundle {
      def apply(path: String, label: String): WineModelServingBundle =
        new WineModelServingBundle(path, label)

      def main(args: Array[String]): Unit = {
        val data_path = "/winequality_red.csv"
        val saved_model_path = "/savedmodels/WineQuality"
        val label = "serve"
        val model = WineModelServingBundle(saved_model_path, label)
        val inputs = getListOfRecords(data_path)
        inputs.foreach(record => println(
          s"result ${model.score(record._1)} expected ${record._2}"))
        model.cleanup()
      }
      ...
    }

    class WineModelServingBundle(path : String, label : String) {
      val bundle = SavedModelBundle.load(path, label)
      val ls: Session = bundle.session
      val metaGraphDef = MetaGraphDef.parseFrom(bundle.metaGraphDef())
      val signatures = parseSignature(metaGraphDef.getSignatureDefMap.asScala)

      def score(record : Array[Float]) : Double = {
        val input = Tensor.create(Array(record))
        // Feed and fetch names come from the parsed signature rather than
        // being hardcoded as in Example 2-2.
        val result = ls.runner.feed(signatures(0).inputs(0).name, input)
          .fetch(signatures(0).outputs(0).name).run().get(0)
        ...
      }
      ...

      def convertParameters(tensorInfo: Map[String, TensorInfo]): Seq[Parameter] = {
        var parameters = Seq.empty[Parameter]
        tensorInfo.foreach(input => {
          var name = ""
          var dtype = ""
          var shape = Seq.empty[Int]
          input._2.getAllFields.asScala.foreach(descriptor => {
            if (descriptor._1.getName.contains("shape")) {
              // Collect the tensor's dimension sizes
              // (reconstructed; this fragment was garbled in the transcription)
              descriptor._2.asInstanceOf[TensorShapeProto].getDimList.asScala
                .map(d => d.getSize).foreach(v => shape = shape :+ v.toInt)
            }
            if (descriptor._1.getName.contains("name")) {
              name = descriptor._2.toString.split(":")(0)
            }
            if (descriptor._1.getName.contains("dtype")) {
              dtype = descriptor._2.toString
            }
          })
          parameters = Parameter(name, dtype, shape) +: parameters
        })
        parameters
      }

      def parseSignature(signatureMap: Map[String, SignatureDef]): Seq[Signature] = {
        var signatures = Seq.empty[Signature]
        signatureMap.foreach(definition => {
          val inputDefs = definition._2.getInputsMap.asScala
          val outputDefs = definition._2.getOutputsMap.asScala
          val inputs = convertParameters(inputDefs)
          val outputs = convertParameters(outputDefs)
          signatures = Signature(definition._1, inputs, outputs) +: signatures
        })
        signatures
      }
    }
    ...

Compare this code with the code in Example 2-2. Although the main structure is the same, there are two significant differences:

- Reading the graph is more involved. The saved model contains not just the graph itself but the entire bundle (directory); the code loads the bundle and then obtains the graph from it. Additionally, it is possible to extract the method signature (as a protobuf definition) and parse it to get inputs and outputs.
