Deep Learning - Courses.cs.duke.edu


Deep Learning
Ronald Parr, CompSci 370
With thanks to Kris Hauser for some content

Late 1990s: Neural Networks Hit the Wall
• Recall that a 3-layer network can approximate any function arbitrarily closely (caveat: might require many, many hidden nodes)
• Q: Why not use big networks for hard problems?
• A: It didn't work in practice!
  – Vanishing gradients
  – Not enough training data (local optima, variance)
  – Not enough training time (computers too slow to handle huge data sets, even if they were available)

Why Deep?
• Deep learning is a family of techniques for building and training large neural networks
• Why deep and not wide?
  – Deep sounds better than wide :)
  – While wide is always possible, deep may require fewer nodes to achieve the same result
  – May be easier to structure with human intuition: think about layers of computation vs. one flat, wide computation

Examples of Deep Learning Today
• Object/face recognition in your phone, your browser, autonomous vehicles, etc.
• Natural language processing (speech to text, parsing, information extraction, machine translation)
• Product recommendations (Netflix, Amazon)
• Fraud detection
• Medical imaging
• Image enhancement or restoration (e.g., Adobe Super Resolution) the-acrteam-super-resolution.html
• Quick Draw: https://quickdraw.withgoogle.com

Vanishing Gradients
• Recall backprop derivation:
  $\delta_j = \frac{\partial E}{\partial a_j} = h'(a_j) \sum_k w_{kj}\,\delta_k$, where $\delta_k = \frac{\partial E}{\partial a_k}$
• Activation functions often between -1 and 1
• The further you get from the output layer, the smaller the gradient gets (see the sketch below)
• Hard to learn when gradients are noisy and small

Related Problem: Saturation
[Plot: sigmoid activation function; its gradient flattens out at the tails]
• Sigmoid gradient goes to 0 at tails
• Extreme values (saturation) anywhere along the backprop path cause the gradient to vanish
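A minimal numerical sketch (not from the slides, assuming a chain of sigmoid units with modest, made-up weights) of why the gradient shrinks with distance from the output layer: each layer multiplies in a factor $h'(a)\,w$, and the sigmoid derivative never exceeds 0.25.

```python
# Illustrative only: backprop through a chain of sigmoid units multiplies one
# h'(a) * w factor per layer, so the gradient reaching early layers shrinks
# roughly geometrically with depth.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sigmoid_grad(a):
    s = sigmoid(a)
    return s * (1.0 - s)              # maximum value is 0.25, at a = 0

rng = np.random.default_rng(0)
depth = 20
grad = 1.0                            # pretend the output-layer delta is 1
for layer in range(depth):
    a = rng.normal()                  # a pre-activation on the backprop path
    w = rng.normal(scale=0.5)         # a typical, smallish weight
    grad *= sigmoid_grad(a) * w       # one h'(a) * w factor per layer
    print(f"layer {layer:2d}: |gradient| ~ {abs(grad):.2e}")
```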

Summary of the Challenges
• Not enough training data in the 90s to justify the complexity of big networks (recall the bias-variance trade-off)
• Slow to train big networks
• Vanishing gradients, saturation

Summary of Changes
• Massive data available
• Massive computation available
• Faster training methods
• Different training methods
• Different network structures
• Different activation functions

Estimating the Gradient Efficiently
• Recall: Backpropagation is gradient descent
• Computing the exact gradient of the loss function requires summing over all training samples
• Thought experiment: What if you randomly sample one (or more) data point(s) and compute the gradient?
  – Called online or stochastic gradient
  – Expected value of sampled gradient = true value of gradient
  – Sampled gradient = true gradient + noise
  – As sample size increases, noise decreases, sampled gradient → true gradient
• Practical idea: For massive data sets, estimate the gradient using sampled training points to trade off computation vs. accuracy in the gradient calculation
• Possible pitfalls:
  – What is the right sampling strategy?
  – Does the noise prevent convergence or lead to slower convergence?

Batch/Minibatch Methods
• Find a sweet spot by estimating the gradient using a subset of the samples
• Randomly sample subsets of the training data and sum gradient computations over all samples in the subset (see the sketch below)
• Take advantage of parallel architectures (multicore/GPU)
• Still requires careful selection of step size and step size adjustment schedule – art vs. science
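A minimal minibatch-SGD sketch, not from the slides: it assumes a simple least-squares model with made-up data `X`, `y` so the gradient has a closed form, and estimates that gradient from a random subset of samples at each step.

```python
# Illustrative only: minibatch SGD on a least-squares problem with made-up data.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.1 * rng.normal(size=n)

w = np.zeros(d)
step_size = 0.1
batch_size = 64

for t in range(500):
    idx = rng.choice(n, size=batch_size, replace=False)    # random minibatch
    Xb, yb = X[idx], y[idx]
    grad = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)          # noisy estimate of the full gradient
    w -= step_size * grad                                   # gradient step
print("error in learned w:", np.linalg.norm(w - true_w))    # should be small
```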

Other Tricks for Speeding Things Up
• Second order methods, e.g., Newton's method – may be computationally intensive in high dimensions
• Conjugate gradient is more computationally efficient, though not yet widely used
• Momentum: Use a combination of previous gradients to smooth out oscillations (see the sketch below)
• Line search: (Binary) search in the gradient direction to find the biggest worthwhile step size
• Some methods try to get the benefits of second order methods without the cost (without computing the full Hessian), e.g., ADMM

Tricks for Breaking Down Problems
• Build up deep networks by training shallow networks, then feeding their output into new layers (may help with vanishing gradients and other problems) – a form of "pretraining"
• Train the network to solve "easier" problems first, then train on harder problems – curriculum learning, a form of "shaping"
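A minimal momentum sketch, not from the slides: the update direction is an exponentially weighted combination of past gradients (the decay `beta` and the quadratic objective are illustrative choices), which damps the oscillations plain gradient descent shows on a badly scaled problem.

```python
# Illustrative only: gradient descent with momentum on a badly scaled quadratic.
import numpy as np

def grad(w):
    # gradient of f(w) = 0.5 * (w[0]**2 + 25 * w[1]**2)
    return np.array([w[0], 25.0 * w[1]])

w = np.array([5.0, 1.0])
velocity = np.zeros(2)
step_size, beta = 0.03, 0.9                # beta controls how much past gradient is kept

for t in range(300):
    velocity = beta * velocity + grad(w)   # exponentially weighted sum of gradients
    w = w - step_size * velocity           # step along the smoothed direction
print("final w:", w)                       # close to the minimum at [0, 0]
```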

Convolutional Neural Networks (CNNs)
• Championed by LeCun (1998)
• Originally used for handwriting recognition
• Now used in state of the art systems in many computer vision applications
• Well-suited to data with a grid-like structure

Convolutions
• What is a convolution? A way to combine two functions, e.g., x and w:
  $s(t) = \int x(a)\,w(t-a)\,da$  (integral over the entire domain)
• Discrete version:
  $s(t) = \sum_a x(a)\,w(t-a)$
• Example: Suppose s(t) is a decaying average of values of x around t, with w decreasing as a gets further from t (see the sketch below)
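A sketch of the discrete version $s(t) = \sum_a x(a)\,w(t-a)$, not from the slides; the signal and the decaying-average kernel are made-up examples.

```python
# Illustrative only: direct implementation of s(t) = sum_a x(a) * w(t - a).
import numpy as np

x = np.array([0.0, 0.0, 1.0, 4.0, 2.0, 1.0, 0.0, 0.0])   # a made-up signal
w = np.array([0.25, 0.5, 0.25])                          # decaying-average kernel

def convolve(x, w):
    """'Full' discrete convolution: s(t) = sum over a of x(a) * w(t - a)."""
    s = np.zeros(len(x) + len(w) - 1)
    for t in range(len(s)):
        for a in range(len(x)):
            if 0 <= t - a < len(w):
                s[t] += x[a] * w[t - a]
    return s

print(convolve(x, w))
print(np.convolve(x, w))   # NumPy's built-in convolution gives the same result
```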

Convolution on Grid Example
[Figure 9.1 from Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville: a 3x4 input grid (a..l), a 2x2 kernel (w, x, y, z), and the resulting 2x3 output.]
Figure 9.1 caption: An example of 2-D convolution without kernel-flipping. In this case we restrict the output to only positions where the kernel lies entirely within the image, called "valid" convolution in some contexts. We draw boxes with arrows to indicate how the upper-left element of the output tensor is formed by applying the kernel to the corresponding upper-left region of the input tensor.

Convolutions on Grids
• For image I and convolution "kernel" K:
  $S(i,j) = \sum_m \sum_n I(m,n)\,K(i-m,\,j-n) = \sum_m \sum_n I(i-m,\,j-n)\,K(m,n)$  (see the sketch below)
• Examples:
  – A convolution can blur/smooth/noise-filter an image by averaging neighboring pixels.
  – A convolution can also serve as an edge detector.
  https://en.wikipedia.org/wiki/Kernel (image processing)
  (Figure 9.6 from Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville)
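A sketch (not from the slides) of the "valid" 2-D convolution without kernel-flipping shown in the figure: each output cell is the sum of the elementwise product of the kernel with the matching input window. The example image and kernel values are arbitrary stand-ins.

```python
# Illustrative only: "valid" 2-D convolution without kernel-flipping.
import numpy as np

def valid_conv2d(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sum of the elementwise product of the kernel and the input window
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(12, dtype=float).reshape(3, 4)   # stands in for the a..l grid
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])                   # stands in for w, x, y, z
print(valid_conv2d(image, kernel))                 # 2x3 output, as in the figure
```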

Application to Images & Nets
• Images have a huge input space: 1000x1000 → 1M pixels
• Fully connected layers → huge number of weights, slow training
• Convolutional layers reduce connectivity by connecting only an m x n window around each pixel
• Can use weight sharing to learn a common set of weights so that the same convolution is applied everywhere (or in multiple places) – see the parameter-count sketch below

Advantages of Convolutions with Weight Sharing
• Reduces the number of weights that must be learned
  – Speeds up learning
  – Fewer local optima
  – Less risk of overfitting
• Enforces uniformity in what is learned
• Enforces translation invariance – learns the same thing for all positions in the image
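A back-of-the-envelope sketch comparing parameter counts; the hidden-layer width, kernel size, and number of feature maps are illustrative assumptions, not numbers from the slides.

```python
# Back-of-the-envelope parameter counts (illustrative numbers).
height, width = 1000, 1000        # 1M-pixel input, as on the slide
hidden_units = 100                # assumed width of a fully connected layer

fully_connected_weights = (height * width) * hidden_units        # one weight per pixel per unit
print(f"fully connected: {fully_connected_weights:,} weights")   # 100,000,000

kernel_h, kernel_w = 5, 5         # assumed window size
num_kernels = 100                 # assumed number of feature maps
conv_weights = kernel_h * kernel_w * num_kernels                 # shared across all positions
print(f"convolutional:   {conv_weights:,} weights")              # 2,500
```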

Additional Stages & Different Activation Functions
• Convolutional stages (may) feed to intermediate stages
• Detectors are nonlinear, e.g., ReLU
• Pooling stages summarize upstream nodes, e.g., average (shrinking the image), max (thresholding)
[Plot: ReLU in blue vs. sigmoid. Source: Wikipedia]

ReLU vs. Sigmoid
• ReLU is faster to compute
• Derivative is trivial
• Only saturates on one side
• Worry about non-differentiability at 0? Can use a sub-gradient (see the sketch below)
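A minimal sketch (not from the slides) of ReLU, one valid sub-gradient at 0, and the contrast with the sigmoid derivative, which is tiny at both tails.

```python
# Illustrative only: ReLU, a sub-gradient at 0, and the sigmoid derivative.
import numpy as np

def relu(a):
    return np.maximum(0.0, a)

def relu_subgrad(a):
    # derivative is 0 for a < 0 and 1 for a > 0; at a == 0 any value in [0, 1]
    # is a valid sub-gradient -- choosing 0 is a common convention
    return (a > 0).astype(float)

def sigmoid_grad(a):
    s = 1.0 / (1.0 + np.exp(-a))
    return s * (1.0 - s)

a = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print("relu values: ", relu(a))
print("relu grad:   ", relu_subgrad(a))   # saturates only on the negative side
print("sigmoid grad:", sigmoid_grad(a))   # tiny at both tails
```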

Example Convolutional Network
[Figure: INPUT 28x28 → convolution → feature maps 4@24x24 → subsampling → feature maps 4@12x12 → convolution → feature maps 12@8x8 → ... (see the shape check below)]
From: Convolutional Networks for Images, Speech, and Time-Series, LeCun & Bengio
N.B.: Subsampling = averaging
Weight sharing results in 2600 weights shared over 100,000 connections.

Why This Works
• ConvNets can use weight sharing to reduce the number of parameters learned – mitigates problems with big networks
• Combination of convolutions with shared weights and subsampling can be interpreted as learning position and scale invariant features
• Final layers combine features to learn the target function
• Can be viewed as doing simultaneous feature discovery and classification
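A small sanity check of the feature-map sizes in the figure; the 5x5 kernel width and factor-2 subsampling are assumptions inferred from the map sizes, not stated on the slide.

```python
# Sanity check of the feature-map sizes shown in the figure (kernel size and
# subsampling factor are assumptions, not stated on the slide).
def after_valid_conv(size, kernel=5):
    return size - kernel + 1      # valid convolution shrinks the map by kernel - 1

def after_subsample(size, factor=2):
    return size // factor         # subsampling (averaging) halves the map

size = 28                         # INPUT 28x28
size = after_valid_conv(size)     # -> 24  (feature maps 4@24x24)
size = after_subsample(size)      # -> 12  (feature maps 4@12x12)
size = after_valid_conv(size)     # -> 8   (feature maps 12@8x8)
print(size)                       # 8
```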

ConvNets in Practice
• Work surprisingly well in many examples, even those that aren't images
• Number of convolutional layers, form of pooling and detecting units may be application specific – art & science here

Other Tricks
• ConvNets and ReLUs can help with the vanishing gradient problem, but don't eliminate it
• Residual nets introduce connections across layers, which tends to mitigate the vanishing gradient problem (see the sketch below)
• Techniques such as image perturbation and dropout reduce overfitting and produce more robust solutions
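A minimal sketch (not from the slides) of a residual connection: adding the block's input back to its output gives the signal, and in backprop the gradient, a direct additive path across the layer. The layer widths and weight scale are made-up.

```python
# Illustrative only: with a skip connection, the signal (and the gradient in
# backprop) has a direct additive path across each layer.
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(0.0, a)

def plain_block(x, W):
    return relu(W @ x)

def residual_block(x, W):
    return x + relu(W @ x)        # identity "skip" connection added back in

x_plain = rng.normal(size=16)
x_res = x_plain.copy()
for _ in range(20):
    W = 0.1 * rng.normal(size=(16, 16))   # small weights shrink the plain path
    x_plain = plain_block(x_plain, W)
    x_res = residual_block(x_res, W)
print("plain:   ", np.linalg.norm(x_plain))   # collapses toward 0
print("residual:", np.linalg.norm(x_res))     # stays at a healthy magnitude
```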

Putting It All Together
• Why is deep learning succeeding now when neural nets lost momentum in the 90s?
• New architectures (e.g., ConvNets) are better suited to (some) learning tasks, reduce the # of weights
• Smarter algorithms make better use of data, handle noisy gradients better
• Massive amounts of data make overfitting less of a concern (but still always a concern)
• Massive amounts of computation make handling massive amounts of data possible
• Large and growing bag of tricks for mitigating overfitting and vanishing gradient issues

Superficial(?) Limitations
• Deep learning results are not easily human interpretable
• Computationally intensive
• Combination of art, science, rules of thumb
• Can be tricked:
  – "Intriguing properties of neural networks", Szegedy et al. [2013]

Beyond Classification
• Deep networks (and other techniques) can be used for unsupervised learning
• Example: An autoencoder tries to compress inputs to a lower dimensional representation (see the sketch below)

Recurrent Networks
• Recurrent networks feed (part of) the output of the network back to the input
• Why?
  – Can learn (hidden) state, e.g., as in a hidden Markov model
  – Useful for parsing language
  – Can learn a program
• LSTM: Variation on an RNN that handles long term memories better
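A minimal linear-autoencoder sketch, not from the slides: the data, the 2-unit bottleneck, and the plain-gradient-descent training loop are all illustrative assumptions. The network is trained to reproduce its input through a narrower hidden layer, so that hidden layer becomes a compressed representation.

```python
# Illustrative only: a linear autoencoder trained by plain gradient descent to
# reconstruct 10-D inputs through a 2-D bottleneck (the code layer).
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 2                        # 10-D data that really lives in 2-D
Z = rng.normal(size=(n, k))
X = Z @ rng.normal(size=(k, d))             # made-up data with low-dimensional structure

W_enc = 0.1 * rng.normal(size=(d, k))       # encoder weights: d -> k
W_dec = 0.1 * rng.normal(size=(k, d))       # decoder weights: k -> d
step_size = 0.01

for epoch in range(2000):
    H = X @ W_enc                           # code: the compressed representation
    X_hat = H @ W_dec                       # reconstruction of the input
    err = X_hat - X
    grad_dec = H.T @ err / n                # gradients of 0.5 * mean squared error
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= step_size * grad_dec
    W_enc -= step_size * grad_enc

print("data variance:     ", np.mean(X ** 2))
print("reconstruction MSE:", np.mean((X @ W_enc @ W_dec - X) ** 2))   # should be much smaller
```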

Deeper Limitations
• We get impressive results but we don't always understand why or whether we really need all of the data and computation used
• Hard to explain results and hard to guard against adversarial special cases ("Intriguing properties of neural networks" and "Universal adversarial perturbations")
• Not clear how logic and high level reasoning could be incorporated
• Not clear how to incorporate prior knowledge in a principled way
