
CS229 Lecture Notes
Andrew Ng (updates by Tengyu Ma)

Supervised learning

Let's start by talking about a few examples of supervised learning problems. Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon:

    Living area (feet²)    Price ($1000s)
    2104                   400
    1600                   330
    2400                   369
    1416                   232
    3000                   540
    ...                    ...

We can plot this data:

[Figure: "housing prices": scatter plot of price (in $1000s) against living area (in square feet).]

Given data like this, how can we learn to predict the prices of other houses in Portland, as a function of the size of their living areas?

To establish notation for future use, we'll use $x^{(i)}$ to denote the "input" variables (living area in this example), also called input features, and $y^{(i)}$ to denote the "output" or target variable that we are trying to predict (price). A pair $(x^{(i)}, y^{(i)})$ is called a training example, and the dataset that we'll be using to learn, a list of $n$ training examples $\{(x^{(i)}, y^{(i)});\ i = 1, \ldots, n\}$, is called a training set. Note that the superscript "(i)" in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use $\mathcal{X}$ to denote the space of input values, and $\mathcal{Y}$ the space of output values. In this example, $\mathcal{X} = \mathcal{Y} = \mathbb{R}$.

To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function $h : \mathcal{X} \to \mathcal{Y}$ so that $h(x)$ is a "good" predictor for the corresponding value of $y$. For historical reasons, this function $h$ is called a hypothesis. Seen pictorially, the process is therefore like this:

[Figure: a training set is fed to a learning algorithm, which produces a hypothesis $h$; a new $x$ (living area of house) is fed to $h$, which outputs a predicted $y$ (predicted price of house).]

When the target variable that we're trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When $y$ can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.
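
To make the notation concrete, here is a minimal sketch (not part of the original notes) of how the training set above might be represented with NumPy arrays; the variable names are illustrative.

    import numpy as np

    # Training set from the table above: x[i] is the living area of the
    # i-th house (x^(i)), y[i] is its price in $1000s (y^(i)).
    x = np.array([2104., 1600., 2400., 1416., 3000.])  # living area (feet^2)
    y = np.array([400., 330., 369., 232., 540.])       # price ($1000s)

    n = len(x)  # number of training examples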

Part I
Linear Regression

To make our housing example more interesting, let's consider a slightly richer dataset in which we also know the number of bedrooms in each house:

    Living area (feet²)    #bedrooms    Price ($1000s)
    2104                   3            400
    1600                   3            330
    2400                   3            369
    1416                   2            232
    3000                   4            540
    ...                    ...          ...

Here, the x's are two-dimensional vectors in $\mathbb{R}^2$. For instance, $x_1^{(i)}$ is the living area of the i-th house in the training set, and $x_2^{(i)}$ is its number of bedrooms. (In general, when designing a learning problem, it will be up to you to decide what features to choose, so if you are out in Portland gathering housing data, you might also decide to include other features such as whether each house has a fireplace, the number of bathrooms, and so on. We'll say more about feature selection later, but for now let's take the features as given.)

To perform supervised learning, we must decide how we're going to represent functions/hypotheses h in a computer. As an initial choice, let's say we decide to approximate y as a linear function of x:

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$$

Here, the $\theta_i$'s are the parameters (also called weights) parameterizing the space of linear functions mapping from $\mathcal{X}$ to $\mathcal{Y}$. When there is no risk of confusion, we will drop the θ subscript in $h_\theta(x)$, and write it more simply as $h(x)$. To simplify our notation, we also introduce the convention of letting $x_0 = 1$ (this is the intercept term), so that

$$h(x) = \sum_{i=0}^{d} \theta_i x_i = \theta^T x,$$

where on the right-hand side above we are viewing θ and x both as vectors, and here d is the number of input variables (not counting $x_0$).
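
As an illustration (not code from the notes), the hypothesis $h_\theta(x) = \theta^T x$ with the $x_0 = 1$ convention is a single dot product; the parameter values below are made up.

    import numpy as np

    def h_theta(theta, x):
        """Linear hypothesis h_theta(x) = theta^T x, folding the intercept
        theta_0 into the dot product via the convention x_0 = 1."""
        x = np.concatenate(([1.0], x))  # prepend the intercept term x_0 = 1
        return theta @ x

    theta = np.array([50.0, 0.1, 20.0])   # [theta_0, theta_1, theta_2], made up
    x = np.array([2104.0, 3.0])           # [living area, #bedrooms]
    print(h_theta(theta, x))              # predicted price in $1000s: 320.4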

Now, given a training set, how do we pick, or learn, the parameters θ? One reasonable method seems to be to make h(x) close to y, at least for the training examples we have. To formalize this, we will define a function that measures, for each value of the θ's, how close the $h(x^{(i)})$'s are to the corresponding $y^{(i)}$'s. We define the cost function:

$$J(\theta) = \frac{1}{2} \sum_{i=1}^{n} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2.$$

If you've seen linear regression before, you may recognize this as the familiar least-squares cost function that gives rise to the ordinary least squares regression model. Whether or not you have seen it previously, let's keep going, and we'll eventually show this to be a special case of a much broader family of algorithms.

1 LMS algorithm

We want to choose θ so as to minimize J(θ). To do so, let's use a search algorithm that starts with some "initial guess" for θ, and that repeatedly changes θ to make J(θ) smaller, until hopefully we converge to a value of θ that minimizes J(θ). Specifically, let's consider the gradient descent algorithm, which starts with some initial θ, and repeatedly performs the update:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta).$$

(This update is simultaneously performed for all values of $j = 0, \ldots, d$.) Here, α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J.

In order to implement this algorithm, we have to work out what the partial derivative term on the right hand side is. Let's first work it out for the case where we have only one training example (x, y), so that we can neglect the sum in the definition of J. We have:

$$\begin{aligned}
\frac{\partial}{\partial \theta_j} J(\theta) &= \frac{\partial}{\partial \theta_j} \frac{1}{2} \left( h_\theta(x) - y \right)^2 \\
&= 2 \cdot \frac{1}{2} \left( h_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( h_\theta(x) - y \right) \\
&= \left( h_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( \sum_{i=0}^{d} \theta_i x_i - y \right) \\
&= \left( h_\theta(x) - y \right) x_j
\end{aligned}$$
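
In code, the cost and its partial derivatives take a compact vectorized form. This is a sketch (not from the notes) under the convention that X stores one training example per row with a leading column of ones:

    import numpy as np

    def J(theta, X, y):
        """Least-squares cost J(theta) = (1/2) sum_i (h_theta(x^(i)) - y^(i))^2."""
        residuals = X @ theta - y
        return 0.5 * residuals @ residuals

    def grad_J(theta, X, y):
        """Gradient of J; its j-th entry is sum_i (h_theta(x^(i)) - y^(i)) x_j^(i)."""
        return X.T @ (X @ theta - y)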

For a single training example, this gives the update rule:¹

$$\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}.$$

The rule is called the LMS update rule (LMS stands for "least mean squares"), and is also known as the Widrow-Hoff learning rule. This rule has several properties that seem natural and intuitive. For instance, the magnitude of the update is proportional to the error term $(y^{(i)} - h_\theta(x^{(i)}))$; thus, for instance, if we are encountering a training example on which our prediction nearly matches the actual value of $y^{(i)}$, then we find that there is little need to change the parameters; in contrast, a larger change to the parameters will be made if our prediction $h_\theta(x^{(i)})$ has a large error (i.e., if it is very far from $y^{(i)}$).

We'd derived the LMS rule for when there was only a single training example. There are two ways to modify this method for a training set of more than one example. The first is to replace it with the following algorithm:

Repeat until convergence {

$$\theta_j := \theta_j + \alpha \sum_{i=1}^{n} \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}, \quad \text{(for every } j) \tag{1}$$

}

By grouping the updates of the coordinates into an update of the vector θ, we can rewrite update (1) in a slightly more succinct way:

$$\theta := \theta + \alpha \sum_{i=1}^{n} \left( y^{(i)} - h_\theta(x^{(i)}) \right) x^{(i)}$$

The reader can easily verify that the quantity in the summation in the update rule above is just $\partial J(\theta) / \partial \theta_j$ (for the original definition of J). So, this is simply gradient descent on the original cost function J. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global, and no other local, optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function.

¹We use the notation "a := b" to denote an operation (in a computer program) in which we set the value of a variable a to be equal to the value of b. In other words, this operation overwrites a with the value of b. In contrast, we will write "a = b" when we are asserting a statement of fact, that the value of a is equal to the value of b.
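
As an illustration (not code from the notes), update (1) can be vectorized as θ := θ + α Xᵀ(y − Xθ). The learning rate and the fixed iteration count below are illustrative choices; in practice, features like raw living area are often rescaled so that a reasonable α exists.

    import numpy as np

    def batch_gradient_descent(X, y, alpha=1e-2, num_iters=1000):
        """Batch gradient descent for update (1). X is (n, d+1) with a leading
        column of ones; y is (n,). Each step uses the entire training set."""
        theta = np.zeros(X.shape[1])
        for _ in range(num_iters):
            errors = y - X @ theta                  # y^(i) - h_theta(x^(i)), all i
            theta = theta + alpha * (X.T @ errors)  # simultaneous update of all theta_j
        return theta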

Here is an example of gradient descent as it is run to minimize a quadratic function:

[Figure: contours of a quadratic function, with the trajectory taken by gradient descent overlaid.]

The ellipses shown above are the contours of a quadratic function. Also shown is the trajectory taken by gradient descent, which was initialized at (48, 30). The x's in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through.

When we run batch gradient descent to fit θ on our previous dataset, to learn to predict housing price as a function of living area, we obtain $\theta_0 = 71.27$, $\theta_1 = 0.1345$. If we plot $h_\theta(x)$ as a function of x (area), along with the training data, we obtain the following figure:

[Figure: "housing prices": the training data with the fitted line $h_\theta(x)$ overlaid; price (in $1000s) against living area (in square feet).]

If the number of bedrooms were included as one of the input features as well, we get $\theta_0 = 89.60$, $\theta_1 = 0.1392$, $\theta_2 = -8.738$.

The above results were obtained with batch gradient descent. There is an alternative to batch gradient descent that also works very well. Consider the following algorithm:

Loop {

    for i = 1 to n, {

$$\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}, \quad \text{(for every } j) \tag{2}$$

    }

}

By grouping the updates of the coordinates into an update of the vector θ, we can rewrite update (2) in a slightly more succinct way:

$$\theta := \theta + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x^{(i)}$$

In this algorithm, we repeatedly run through the training set, and each time we encounter a training example, we update the parameters according to the gradient of the error with respect to that single training example only. This algorithm is called stochastic gradient descent (also incremental gradient descent). Whereas batch gradient descent has to scan through the entire training set before taking a single step (a costly operation if n is large), stochastic gradient descent can start making progress right away, and continues to make progress with each example it looks at. Often, stochastic gradient descent gets θ "close" to the minimum much faster than batch gradient descent. (Note however that it may never "converge" to the minimum, and the parameters θ will keep oscillating around the minimum of J(θ); but in practice most of the values near the minimum will be reasonably good approximations to the true minimum.²) For these reasons, particularly when the training set is large, stochastic gradient descent is often preferred over batch gradient descent.

²By slowly letting the learning rate α decrease to zero as the algorithm runs, it is also possible to ensure that the parameters will converge to the global minimum rather than merely oscillate around the minimum.
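
A sketch of update (2) follows (again not code from the notes). Visiting the examples in a shuffled order each pass is a common choice, not something the notes prescribe here.

    import numpy as np

    def stochastic_gradient_descent(X, y, alpha=1e-2, num_passes=10, seed=0):
        """Stochastic (incremental) gradient descent: one parameter update
        per training example. X is (n, d+1) with a leading ones column."""
        rng = np.random.default_rng(seed)
        theta = np.zeros(X.shape[1])
        for _ in range(num_passes):
            for i in rng.permutation(len(y)):     # shuffled pass over the data
                error = y[i] - X[i] @ theta       # y^(i) - h_theta(x^(i))
                theta = theta + alpha * error * X[i]
        return theta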

2 The normal equations

Gradient descent gives one way of minimizing J. Let's discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In this method, we will minimize J by explicitly taking its derivatives with respect to the $\theta_j$'s, and setting them to zero. To enable us to do this without having to write reams of algebra and pages full of matrices of derivatives, let's introduce some notation for doing calculus with matrices.

2.1 Matrix derivatives

For a function $f : \mathbb{R}^{n \times d} \to \mathbb{R}$ mapping from n-by-d matrices to the real numbers, we define the derivative of f with respect to A to be:

$$\nabla_A f(A) = \begin{bmatrix} \frac{\partial f}{\partial A_{11}} & \cdots & \frac{\partial f}{\partial A_{1d}} \\ \vdots & \ddots & \vdots \\ \frac{\partial f}{\partial A_{n1}} & \cdots & \frac{\partial f}{\partial A_{nd}} \end{bmatrix}$$

Thus, the gradient $\nabla_A f(A)$ is itself an n-by-d matrix, whose (i, j)-element is $\partial f / \partial A_{ij}$. For example, suppose $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$ is a 2-by-2 matrix, and the function $f : \mathbb{R}^{2 \times 2} \to \mathbb{R}$ is given by

$$f(A) = \frac{3}{2} A_{11} + 5 A_{12}^2 + A_{21} A_{22}.$$

Here, $A_{ij}$ denotes the (i, j) entry of the matrix A. We then have

$$\nabla_A f(A) = \begin{bmatrix} \frac{3}{2} & 10 A_{12} \\ A_{22} & A_{21} \end{bmatrix}.$$
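
As a quick sanity check (mine, not part of the notes), this worked example can be verified numerically with central finite differences:

    import numpy as np

    def f(A):
        """f(A) = (3/2) A_11 + 5 A_12^2 + A_21 A_22 from the example above."""
        return 1.5 * A[0, 0] + 5 * A[0, 1] ** 2 + A[1, 0] * A[1, 1]

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    analytic = np.array([[1.5, 10 * A[0, 1]], [A[1, 1], A[1, 0]]])

    eps = 1e-6
    numeric = np.zeros_like(A)
    for i in range(2):
        for j in range(2):
            E = np.zeros_like(A)
            E[i, j] = eps
            numeric[i, j] = (f(A + E) - f(A - E)) / (2 * eps)

    print(np.allclose(analytic, numeric))  # True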

2.2 Least squares revisited

Armed with the tools of matrix derivatives, let us now proceed to find in closed-form the value of θ that minimizes J(θ). We begin by re-writing J in matrix-vectorial notation.

Given a training set, define the design matrix X to be the n-by-d matrix (actually n-by-(d+1), if we include the intercept term) that contains the training examples' input values in its rows:

$$X = \begin{bmatrix} (x^{(1)})^T \\ (x^{(2)})^T \\ \vdots \\ (x^{(n)})^T \end{bmatrix}$$

Also, let $\vec{y}$ be the n-dimensional vector containing all the target values from the training set:

$$\vec{y} = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(n)} \end{bmatrix}$$

Now, since $h_\theta(x^{(i)}) = (x^{(i)})^T \theta$, we can easily verify that

$$X\theta - \vec{y} = \begin{bmatrix} (x^{(1)})^T \theta \\ \vdots \\ (x^{(n)})^T \theta \end{bmatrix} - \begin{bmatrix} y^{(1)} \\ \vdots \\ y^{(n)} \end{bmatrix} = \begin{bmatrix} h_\theta(x^{(1)}) - y^{(1)} \\ \vdots \\ h_\theta(x^{(n)}) - y^{(n)} \end{bmatrix}.$$

Thus, using the fact that for a vector z, we have that $z^T z = \sum_i z_i^2$:

$$\frac{1}{2} (X\theta - \vec{y})^T (X\theta - \vec{y}) = \frac{1}{2} \sum_{i=1}^{n} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 = J(\theta)$$

Finally, to minimize J, let's find its derivatives with respect to θ. Hence,

$$\begin{aligned}
\nabla_\theta J(\theta) &= \nabla_\theta \frac{1}{2} (X\theta - \vec{y})^T (X\theta - \vec{y}) \\
&= \frac{1}{2} \nabla_\theta \left( (X\theta)^T X\theta - (X\theta)^T \vec{y} - \vec{y}^T (X\theta) + \vec{y}^T \vec{y} \right) \\
&= \frac{1}{2} \nabla_\theta \left( \theta^T (X^T X) \theta - \vec{y}^T (X\theta) - \vec{y}^T (X\theta) \right) \\
&= \frac{1}{2} \nabla_\theta \left( \theta^T (X^T X) \theta - 2 (X^T \vec{y})^T \theta \right) \\
&= \frac{1}{2} \left( 2 X^T X \theta - 2 X^T \vec{y} \right) \\
&= X^T X \theta - X^T \vec{y}
\end{aligned}$$

In the third step, we used the fact that $a^T b = b^T a$, and in the fifth step used the facts $\nabla_x b^T x = b$ and $\nabla_x x^T A x = 2Ax$ for symmetric matrix A (for more details, see Section 4.3 of "Linear Algebra Review and Reference"). To minimize J, we set its derivatives to zero, and obtain the normal equations:

$$X^T X \theta = X^T \vec{y}$$

Thus, the value of θ that minimizes J(θ) is given in closed form by the equation

$$\theta = (X^T X)^{-1} X^T \vec{y}.^3$$

³Note that in the above step, we are implicitly assuming that $X^T X$ is an invertible matrix. This can be checked before calculating the inverse. If either the number of linearly independent examples is fewer than the number of features, or if the features are not linearly independent, then $X^T X$ will not be invertible. Even in such cases, it is possible to "fix" the situation with additional techniques, which we skip here for the sake of simplicity.
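
In code, the closed-form solution is a single linear solve. The sketch below (not from the notes) solves the system rather than forming the inverse explicitly, which is the numerically safer choice, and assumes X carries the intercept column of ones:

    import numpy as np

    def normal_equation(X, y):
        """Solve the normal equations X^T X theta = X^T y for theta
        (assumes X^T X is invertible)."""
        return np.linalg.solve(X.T @ X, X.T @ y)

    # A more robust alternative when X^T X is singular or ill-conditioned:
    # theta, *_ = np.linalg.lstsq(X, y, rcond=None)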

3 Probabilistic interpretation

When faced with a regression problem, why might linear regression, and specifically why might the least-squares cost function J, be a reasonable choice? In this section, we will give a set of probabilistic assumptions, under which least-squares regression is derived as a very natural algorithm.

Let us assume that the target variables and the inputs are related via the equation

$$y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)},$$

where $\epsilon^{(i)}$ is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price, but that we'd left out of the regression), or random noise. Let us further assume that the $\epsilon^{(i)}$ are distributed IID (independently and identically distributed) according to a Gaussian distribution (also called a Normal distribution) with mean zero and some variance $\sigma^2$. We can write this assumption as "$\epsilon^{(i)} \sim \mathcal{N}(0, \sigma^2)$." I.e., the density of $\epsilon^{(i)}$ is given by

$$p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(\epsilon^{(i)})^2}{2\sigma^2} \right).$$

This implies that

$$p(y^{(i)} \mid x^{(i)}; \theta) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right).$$

The notation "$p(y^{(i)} \mid x^{(i)}; \theta)$" indicates that this is the distribution of $y^{(i)}$ given $x^{(i)}$ and parameterized by θ. Note that we should not condition on θ ("$p(y^{(i)} \mid x^{(i)}, \theta)$"), since θ is not a random variable. We can also write the distribution of $y^{(i)}$ as $y^{(i)} \mid x^{(i)}; \theta \sim \mathcal{N}(\theta^T x^{(i)}, \sigma^2)$.

Given X (the design matrix, which contains all the $x^{(i)}$'s) and θ, what is the distribution of the $y^{(i)}$'s? The probability of the data is given by $p(\vec{y} \mid X; \theta)$. This quantity is typically viewed as a function of $\vec{y}$ (and perhaps X), for a fixed value of θ. When we wish to explicitly view this as a function of θ, we will instead call it the likelihood function:

$$L(\theta) = L(\theta; X, \vec{y}) = p(\vec{y} \mid X; \theta).$$

Note that by the independence assumption on the $\epsilon^{(i)}$'s (and hence also the $y^{(i)}$'s given the $x^{(i)}$'s), this can also be written

$$L(\theta) = \prod_{i=1}^{n} p(y^{(i)} \mid x^{(i)}; \theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right).$$

Now, given this probabilistic model relating the $y^{(i)}$'s and the $x^{(i)}$'s, what is a reasonable way of choosing our best guess of the parameters θ? The principle of maximum likelihood says that we should choose θ so as to make the data as high probability as possible. I.e., we should choose θ to maximize L(θ).

Instead of maximizing L(θ), we can also maximize any strictly increasing function of L(θ). In particular, the derivations will be a bit simpler if we instead maximize the log likelihood ℓ(θ):

$$\begin{aligned}
\ell(\theta) &= \log L(\theta) \\
&= \log \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right) \\
&= \sum_{i=1}^{n} \log \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right) \\
&= n \log \frac{1}{\sqrt{2\pi}\,\sigma} - \frac{1}{\sigma^2} \cdot \frac{1}{2} \sum_{i=1}^{n} \left( y^{(i)} - \theta^T x^{(i)} \right)^2.
\end{aligned}$$

Hence, maximizing ℓ(θ) gives the same answer as minimizing

$$\frac{1}{2} \sum_{i=1}^{n} \left( y^{(i)} - \theta^T x^{(i)} \right)^2,$$

which we recognize to be J(θ), our original least-squares cost function.
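
To make the equivalence concrete, here is a small sketch (mine, not from the notes) of ℓ(θ): the first term is constant in θ, so maximizing ℓ over θ is exactly minimizing the least-squares sum.

    import numpy as np

    def log_likelihood(theta, X, y, sigma):
        """ell(theta) = n log(1 / (sqrt(2 pi) sigma))
                        - (1/sigma^2) * (1/2) * sum_i (y^(i) - theta^T x^(i))^2"""
        n = len(y)
        residuals = y - X @ theta
        const = n * np.log(1.0 / (np.sqrt(2 * np.pi) * sigma))  # independent of theta
        return const - (residuals @ residuals) / (2 * sigma ** 2)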

To summarize: Under the previous probabilistic assumptions on the data, least-squares regression corresponds to finding the maximum likelihood estimate of θ. This is thus one set of assumptions under which least-squares regression can be justified as a very natural method that's just doing maximum likelihood estimation. (Note however that the probabilistic assumptions are by no means necessary for least-squares to be a perfectly good and rational procedure, and there may be, and indeed there are, other natural assumptions that can also be used to justify it.)

Note also that, in our previous discussion, our final choice of θ did not depend on what $\sigma^2$ was, and indeed we'd have arrived at the same result even if $\sigma^2$ were unknown. We will use this fact again later, when we talk about the exponential family and generalized linear models.

4 Locally weighted linear regression

Consider the problem of predicting y from $x \in \mathbb{R}$. The leftmost figure below shows the result of fitting a $y = \theta_0 + \theta_1 x$ to a dataset. We see that the data doesn't really lie on a straight line, and so the fit is not very good. Instead, if we had added an extra feature $x^2$, and fit $y = \theta_0 + \theta_1 x + \theta_2 x^2$, then we obtain a slightly better fit to the data. (See middle figure.) Naively, it might seem that the more features we add, the better. However, there is also a danger in adding too many features: The rightmost figure is the result of fitting a 5-th order polynomial $y = \sum_{j=0}^{5} \theta_j x^j$. We see that even though the fitted curve passes through the data perfectly, we would not expect this to be a very good predictor of, say, housing prices (y) for different living areas (x). Without formally defining what these terms mean, we'll say the figure on the left shows an instance of underfitting, in which the data clearly shows structure not captured by the model, and the figure on the right is an example of overfitting. (Later in this class, when we talk about learning theory we'll formalize some of these notions, and also define more carefully just what it means for a hypothesis to be good or bad.)

[Figure: three fits to the same dataset: a linear fit (left), a quadratic fit (middle), and a 5th-order polynomial fit (right).]
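
This contrast is easy to reproduce. Below is a sketch (not from the notes) using np.polyfit on a small made-up dataset: training error keeps shrinking as the degree grows, which is exactly why training error alone is a poor guide to predictive quality.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # made-up inputs
    y = np.array([1.2, 2.6, 3.1, 3.4, 3.9, 4.0])   # made-up targets

    for degree in (1, 2, 5):
        coeffs = np.polyfit(x, y, deg=degree)       # fit y = sum_j theta_j x^j
        mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        print(degree, mse)  # training error shrinks to ~0 at degree 5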

As discussed previously, and as shown in the example above, the choice of features is important to ensuring good performance of a learning algorithm. (When we talk about model selection, we'll also see algorithms for automatically choosing a good set of features.) In this section, let us briefly talk about the locally weighted linear regression (LWR) algorithm which, assuming there is sufficient training data, makes the choice of features less critical. This treatment will be brief, since you'll get a chance to explore some of the properties of the LWR algorithm yourself in the homework.

In the original linear regression algorithm, to make a prediction at a query point x (i.e., to evaluate h(x)), we would:

1. Fit θ to minimize $\sum_i (y^{(i)} - \theta^T x^{(i)})^2$.

2. Output $\theta^T x$.

In contrast, the locally weighted linear regression algorithm does the following:

1. Fit θ to minimize $\sum_i w^{(i)} (y^{(i)} - \theta^T x^{(i)})^2$.

2. Output $\theta^T x$.

Here, the $w^{(i)}$'s are non-negative valued weights. Intuitively, if $w^{(i)}$ is large for a particular value of i, then in picking θ, we'll try hard to make $(y^{(i)} - \theta^T x^{(i)})^2$ small. If $w^{(i)}$ is small, then the $(y^{(i)} - \theta^T x^{(i)})^2$ error term will be pretty much ignored in the fit.

A fairly standard choice for the weights is⁴

$$w^{(i)} = \exp\left( -\frac{(x^{(i)} - x)^2}{2\tau^2} \right)$$

Note that the weights depend on the particular point x at which we're trying to evaluate h(x). Moreover, if $|x^{(i)} - x|$ is small, then $w^{(i)}$ is close to 1; and if $|x^{(i)} - x|$ is large, then $w^{(i)}$ is small. Hence, θ is chosen giving a much higher "weight" to the (errors on) training examples close to the query point x. (Note also that while the formula for the weights takes a form that is cosmetically similar to the density of a Gaussian distribution, the $w^{(i)}$'s do not directly have anything to do with Gaussians, and in particular the $w^{(i)}$ are not random variables, normally distributed or otherwise.) The parameter τ controls how quickly the weight of a training example falls off with distance of its $x^{(i)}$ from the query point x.

⁴If x is vector-valued, this is generalized to be $w^{(i)} = \exp\left( -(x^{(i)} - x)^T (x^{(i)} - x) / (2\tau^2) \right)$.
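
Putting the two steps together, a sketch of LWR for a single query point might look like the following (mine, not the notes'); it carries out step 1 by solving the weighted normal equations, one standard way to fit the weighted least-squares objective.

    import numpy as np

    def lwr_predict(X, y, x_query, tau):
        """Locally weighted linear regression prediction at x_query.
        X: (n, d+1) design matrix with a leading column of ones;
        x_query: (d+1,) query point, also with a leading 1."""
        # Weights w^(i) = exp(-|x^(i) - x|^2 / (2 tau^2)); the intercept
        # coordinates cancel, so they do not affect the distances.
        w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
        W = np.diag(w)
        # Step 1: fit theta to minimize sum_i w^(i) (y^(i) - theta^T x^(i))^2 ...
        theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        # Step 2: ... and output theta^T x.
        return theta @ x_query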

