Digital Image Processing Chapter 2: Digital Image Fundamentals


Human and Computer Vision

We cannot think of image processing without considering the human vision system: we observe and evaluate the images that we process with our visual system.

Simple Questions

- What intensity differences can we distinguish?
- What is the spatial resolution of our eye?
- How accurately can we estimate and compare distances and areas?
- How do we sense colors?
- By which features can we detect and distinguish objects?

Test Images

Test images for distance and area estimation:

a) Parallel lines with up to 5% difference in length.
b) Circles with up to 10% difference in radius.
c) The vertical line appears longer but actually has the same length as the horizontal line.
d) Deception by perspective: the upper line appears longer than the lower one, but they actually have the same length.

Structure of the Human Eye

- The shape is nearly a sphere, with an average diameter of 20 mm.
- Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

Lens and Retina

Lens: both infrared and ultraviolet light are absorbed; in excessive amounts they can cause damage to the eye.

Retina: the innermost membrane of the eye. When the eye is properly focused, light from an object outside the eye is imaged on the retina.

Receptors

Receptors are divided into two classes: cones and rods.

Cones

- 6-7 million, located primarily in the central portion of the retina (muscles controlling the eye rotate the eyeball until the image falls on the fovea).
- Highly sensitive to color.
- Each is connected to its own nerve end, so humans can resolve fine details.
- Cone vision is called photopic or bright-light vision.

Rods

- 75-150 million, distributed over the retina surface.
- Several rods are connected to a single nerve end, which reduces the amount of detail discernible.
- Serve to give a general, overall picture of the field of view.
- Sensitive to low levels of illumination.
- Rod vision is called scotopic or dim-light vision.

Cross Section of the Eye

- Blind spot: the area of the retina with no receptors.
- Cones are most dense in the center of the retina (in the area of the fovea).

Brightness Adaptation and Discrimination

The total range of intensity levels the eye can discriminate simultaneously is rather small compared with its total adaptation range.

Simultaneous Contrast

Which small square is the darkest one? All the small squares have exactly the same intensity, but they appear to the eye progressively darker as the background becomes brighter. This is one of the human perception phenomena.

Signals

- A signal is a function that carries information.
- Usually the content of the signal changes over some set of spatiotemporal dimensions.

Vocabulary: spatiotemporal means existing in both space and time, having both spatial extension and temporal duration.

Time-Varying Signals

Some signals vary over time: f(t). For example, an audio signal may be thought of, at one level, as a collection of various tones of differing audible frequencies that vary over time.

Spatially Varying Signals

Signals can vary over space as well. An image can be thought of as a function of two spatial dimensions: f(x,y). For monochromatic images, the value of the function is the amount of light at that point. Medical CAT and MRI scanners produce images that are functions of three spatial dimensions: f(x,y,z).

Spatiotemporal Signals

What do you think a signal of the form f(x,y,t) is? Here x and y are spatial dimensions and t is time. Perhaps it is a video signal, an animation, or some other time-varying picture sequence.

Types of Signals

Most naturally occurring signals are functions having a continuous domain. However, signals in a computer are discrete samples of the continuous domain; in other words, signals manipulated by computer have discrete domains.

Analog and Digital

Most naturally occurring signals also have a real-valued range in which values occur with infinite precision. To store and manipulate signals by computer, we need to store these numbers with finite precision; thus, these signals have a discrete range.

- A signal with a continuous domain and range is analog.
- A signal with a discrete domain and range is digital.

Sampling

Sampling is the spacing of discrete values in the domain of a signal. The sampling rate is how many samples are taken per unit of each dimension, e.g., samples per second, frames per second, etc.

Quantization

Quantization is the spacing of discrete values in the range of a signal. It is usually thought of as the number of bits per sample of the signal, e.g., 1 bit per pixel (black-and-white images), 16-bit audio, 24-bit color images, etc.
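The two steps can be illustrated with a short sketch (all names here are illustrative, not from the notes): a continuous 1 Hz sine is sampled at a fixed rate, and each sample in [-1, 1] is then quantized to 2^bits uniform levels.

```python
import math

def sample_and_quantize(f, duration, rate, bits):
    """Sample a continuous-time signal f(t) at `rate` samples per unit time,
    then quantize each sample in [-1, 1] to 2**bits uniform levels."""
    levels = 2 ** bits
    n = int(duration * rate)
    samples = []
    for i in range(n):
        t = i / rate                          # sampling: discretize the domain
        v = f(t)
        q = round((v + 1) / 2 * (levels - 1))  # quantization: discretize the range
        samples.append(q)
    return samples

# a 1 Hz sine sampled at 8 samples per second with 3-bit quantization
sig = sample_and_quantize(lambda t: math.sin(2 * math.pi * t), 1.0, 8, 3)
```

Raising the rate refines the domain grid; raising the bit count refines the range grid, exactly the two independent choices described above.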

Digital Image Representation

CAMERA -> DIGITIZER -> a set of numbers in a 2D grid. The digitizer samples the analog data and digitizes it; pixel values can then be read off in any highlighted region of the grid.

Example of a Digital Image

A continuous image is projected onto a sensor array; the digital image is the result of image sampling and quantization.

Digital Image Processing, 2nd ed., Chapter 2: Digital Image Fundamentals. www.imageprocessingbook.com. (c) 2002 R. C. Gonzalez & R. E. Woods

Light-Intensity Function

An image refers to a 2D light-intensity function f(x,y). The amplitude of f at spatial coordinates (x,y) gives the intensity (brightness) of the image at that point. Light is a form of energy, so f(x,y) must be nonzero and finite:

0 < f(x,y) < infinity

Gray Level

We call the intensity of a monochrome image f at coordinates (x,y) the gray level l of the image at that point. l lies in the range Lmin <= l <= Lmax, where Lmin is positive and Lmax is finite. The interval [Lmin, Lmax] is called the gray scale; it is commonly shifted to [0, L-1], where l = 0 is black and l = L-1 is white.

Number of Bits

The number of gray levels is typically an integer power of 2:

L = 2^k

The number of bits required to store a digitized image of size M x N is:

b = M x N x k

Resolution

Resolution (how much detail you can see in the image) depends on sampling and on the number of gray levels: the bigger the sampling rate and the gray scale, the better the digitized image approximates the original. However, the finer the sampling and quantization become, the bigger the size of the digitized image.
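The storage formula b = M x N x k can be checked in a couple of lines (the helper name is ours, not from the notes):

```python
def image_storage_bits(M, N, k):
    """Bits needed for an M x N image with 2**k gray levels: b = M * N * k."""
    return M * N * k

# a 1024 x 1024 image with 256 gray levels (k = 8)
bits = image_storage_bits(1024, 1024, 8)
bytes_needed = bits // 8   # 8 bits per byte -> exactly 1 MiB here
```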

Checkerboard Effect

(a) 1024x1024, (b) 512x512, (c) 256x256, (d) 128x128, (e) 64x64, (f) 32x32. If the spatial resolution is decreased too much, the checkerboard effect can occur.

False Contouring

(a) 16 gray levels, (b) 8, (c) 4, (d) 2. If the number of gray levels is not sufficient, smooth areas are affected: false contouring can occur in smooth areas that have fine gray-scale gradations.
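The gray-level reduction that produces false contouring can be sketched as a requantization step (a hypothetical helper, assuming 8-bit input pixels):

```python
def reduce_gray_levels(pixels, k):
    """Requantize 8-bit pixel values (0..255) to 2**k levels,
    mapping each value back to the bottom of its bin."""
    levels = 2 ** k
    step = 256 // levels
    return [(p // step) * step for p in pixels]

ramp = list(range(0, 256, 16))           # a smooth 8-bit intensity ramp
coarse = reduce_gray_levels(ramp, 2)     # only 4 levels: visible banding
```

On a smooth ramp, the output jumps in large steps instead of changing gradually; those jumps are the contours the slide describes.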

Nonuniform Sampling

For a fixed value of spatial resolution, the appearance of the image can be improved by using adaptive sampling rates:

- Fine sampling: required in the neighborhood of sharp gray-level transitions.
- Coarse sampling: utilized in relatively smooth regions.

Example

Consider an image with a face superimposed on a uniform background. The background carries little detailed information, so coarse sampling is enough; the face has more detail and needs fine sampling. If we can use adaptive sampling, the quality of the image is improved. Moreover, we should take extra care around the boundary of the object, where there is a sharp gray-level transition from object to background.

Nonuniform Quantization

Unequally spaced levels in the quantization process can offset the effect of decreasing the number of gray levels:

- Use few gray levels in the neighborhood of boundaries. Why? The eye is relatively poor at estimating shades of gray near abrupt level changes.
- Use more gray levels in smooth areas, in order to avoid false contouring.

Definitions

Monochromatic (achromatic) light: light that is void of color. Its only attribute is intensity (amount); gray level is used to describe monochromatic intensity.

Chromatic light: three quantities are used to describe it:

- Radiance: the total amount of energy that flows from the light source (measured in watts).
- Luminance: the amount of energy an observer perceives from a light source (measured in lumens).
- Brightness: a subjective descriptor of light perception that is impossible to measure (a key factor in describing color sensation).

In image sensing, incoming energy (reflected from, or transmitted through, the scene) is transformed into a voltage. The sensor material is responsive to the particular type of energy being detected.

Single Sensor

The film rotates while the sensor moves horizontally.

Spatial and Gray-Level Resolution

For an L-level image of size M x N:

- Spatial resolution: the number of samples per unit length or area. Dots per inch (DPI) specifies the size of an individual pixel.
- Gray-level resolution: the number of bits per pixel, usually 8. A color image has 3 image planes, yielding 8 x 3 = 24 bits/pixel.
- Too few levels may cause false contouring.

Basic Relationships Between Pixels

- Neighbors of a pixel
- Connectivity
- Labeling of connected components
- Relations, equivalences, and transitive closure
- Distance measures
- Arithmetic/logic operations

Neighbors of a Pixel

A pixel p at coordinates (x,y) has:

- N4(p), the 4-neighbors of p: (x+1,y), (x-1,y), (x,y+1), (x,y-1).
- ND(p), the four diagonal neighbors of p: (x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1).
- N8(p), the 8-neighbors of p: the combination of N4(p) and ND(p).
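The three neighborhood sets translate directly into code (a minimal sketch; function names are ours, and coordinates are taken as (x, y) tuples without bounds checking):

```python
def n4(p):
    """4-neighbors of pixel p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors of p = (x, y)."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors of p: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)
```

N4 and ND are disjoint by construction, so N8 always has exactly eight members (some of which may fall outside the image for border pixels).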

Connectivity

Let V be the set of gray-level values used to define connectivity.

- 4-connectivity: two pixels p and q with values from V are 4-connected if q is in the set N4(p).
- 8-connectivity: two pixels p and q with values from V are 8-connected if q is in the set N8(p).
- m-connectivity (mixed connectivity): two pixels p and q with values from V are m-connected if q is in the set N4(p), or q is in the set ND(p) and the set N4(p) intersected with N4(q) contains no pixels with values from V.

Example: consider the arrangement of pixels

  0 1 1
  0 1 0
  0 0 1

[Figure: the same arrangement shown with the 8-neighbor connections and then the m-neighbor connections of the center pixel drawn in.]

m-connectivity eliminates the multiple path connections that arise in 8-connectivity.
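The m-connectivity test for a pair of pixels can be sketched as follows (a self-contained, hypothetical helper; the image is a list of rows indexed as img[y][x], and V defaults to {1} as in the example):

```python
def m_adjacent(img, p, q, V=frozenset({1})):
    """p and q are m-adjacent (for values in V) if q is a 4-neighbor of p,
    or q is a diagonal neighbor of p and the 4-neighbors they share
    contain no pixel with a value in V (this removes double paths)."""
    def n4(pt):
        x, y = pt
        return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    def in_v(pt):
        x, y = pt
        return 0 <= y < len(img) and 0 <= x < len(img[0]) and img[y][x] in V
    if not (in_v(p) and in_v(q)):
        return False
    if q in n4(p):
        return True
    if abs(p[0] - q[0]) == 1 and abs(p[1] - q[1]) == 1:   # diagonal neighbor
        return not any(in_v(s) for s in n4(p) & n4(q))
    return False

# the example arrangement from the slide, as grid[y][x]
grid = [[0, 1, 1],
        [0, 1, 0],
        [0, 0, 1]]
```

For the center pixel (1, 1): the top-right 1 at (2, 0) is not m-adjacent, because the shared 4-neighbor (1, 0) already carries a value in V, while the bottom-right 1 at (2, 2) is m-adjacent, since the shared 4-neighbors are both 0.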

Adjacency

- A pixel p is adjacent to a pixel q if they are connected.
- Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.

Exercise

Consider the two image subsets S1 and S2:

       S1           S2
  0 0 0 0 0    0 0 1 1 0
  1 0 0 1 0    0 1 0 0 1
  1 0 0 1 0    1 1 0 0 0
  0 0 1 1 1    0 0 0 0 0
  0 0 1 1 1    0 0 1 1 1

For V = {1}, determine whether S1 and S2 are (a) 4-connected, (b) 8-connected, (c) m-connected.

Distance Measures

Euclidean distance between p(x,y) and q(s,t):

De(p,q) = [(x - s)^2 + (y - t)^2]^(1/2)

The pixels with De(p,q) <= r lie within a disk of radius r centered at (x,y).

City-block distance (D4 distance):

D4(p,q) = |x - s| + |y - t|

The pixels with D4 <= 2 form a diamond centered at (x,y); the pixels with D4 = 1 are the 4-neighbors of (x,y):

      2
    2 1 2
  2 1 0 1 2
    2 1 2
      2

Chessboard distance (D8 distance):

D8(p,q) = max(|x - s|, |y - t|)

The pixels with D8 <= 2 form a square centered at (x,y):

  2 2 2 2 2
  2 1 1 1 2
  2 1 0 1 2
  2 1 1 1 2
  2 2 2 2 2
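The three distance measures are one-liners in code (function names are ours):

```python
def d_euclidean(p, q):
    """Euclidean distance De(p, q)."""
    (x, y), (s, t) = p, q
    return ((x - s) ** 2 + (y - t) ** 2) ** 0.5

def d4(p, q):
    """City-block distance D4(p, q) = |x - s| + |y - t|."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard distance D8(p, q) = max(|x - s|, |y - t|)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
# d_euclidean(p, q) -> 5.0, d4(p, q) -> 7, d8(p, q) -> 4
```

Note that D4 = 1 picks out exactly the 4-neighbors and D8 = 1 the 8-neighbors, tying the distance measures back to the neighborhood definitions.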

Arithmetic Operations

Arithmetic operations are used extensively in most branches of image processing. For two pixels p and q:

- Addition (p + q): used in image averaging to reduce noise.
- Subtraction (p - q): a basic tool in medical imaging.
- Multiplication (p x q): used to correct gray-level shading resulting from nonuniformities in illumination or in the sensor used to acquire the image.
- Division (p / q).

Arithmetic operations on entire images are carried out pixel by pixel.

Logic Operations

- AND: p AND q
- OR: p OR q
- COMPLEMENT: NOT q

Logic operations apply only to binary images, whereas arithmetic operations apply to multivalued pixels. Logic operations are used for tasks such as masking, feature detection, and shape analysis, and are likewise performed pixel by pixel.
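All of these operations share the same pixel-by-pixel pattern, which a single hypothetical helper can capture (images here are plain lists of rows of equal size):

```python
def pixelwise(op, a, b):
    """Apply a binary operation pixel by pixel to two same-sized images."""
    return [[op(pa, pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

a = [[100, 200], [50, 0]]
b = [[10, 55], [50, 25]]

added = pixelwise(lambda p, q: p + q, a, b)    # step toward image averaging
diff = pixelwise(lambda p, q: p - q, a, b)     # subtraction
masked = pixelwise(lambda p, q: p & q, [[1, 0]], [[1, 1]])  # AND on binary pixels
```

In practice the sum used for averaging would be rescaled (and clipped to the gray-scale range); the sketch only shows the pixelwise structure the slide describes.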

Mask Operation

Besides pixel-by-pixel processing on entire images, arithmetic and logic operations are used in neighborhood-oriented operations. Consider a 3x3 neighborhood:

  Z1 Z2 Z3
  Z4 Z5 Z6
  Z7 Z8 Z9

Let the value assigned to a pixel be a function of its gray level and the gray levels of its neighbors. For example, replace the gray value of pixel Z5 with the average gray value of its 3x3 neighborhood:

Z = (1/9)(Z1 + Z2 + Z3 + ... + Z9)

Mask Operator

In general terms:

Z = (1/9)Z1 + (1/9)Z2 + ... + (1/9)Z9
  = w1*Z1 + w2*Z2 + ... + w9*Z9
  = sum of wi*Zi for i = 1..9

with the mask of coefficients

  W = w1 w2 w3      1/9 1/9 1/9
      w4 w5 w6  =   1/9 1/9 1/9
      w7 w8 w9      1/9 1/9 1/9

Mask Coefficients

Proper selection of the coefficients and application of the mask at each pixel position in an image makes possible a variety of useful image operations:

- noise reduction
- region thinning
- edge detection

Applying a mask at each pixel location in an image is a computationally expensive task.
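Applying a 3x3 mask over a whole image can be sketched as two nested loops (a minimal, hypothetical implementation; border pixels are simply left unchanged rather than handled with any particular padding scheme):

```python
def apply_mask(img, w):
    """For each interior pixel, replace it with the weighted sum of its
    3x3 neighborhood using the mask w (a 3x3 list of coefficients).
    Border pixels are left unchanged."""
    h, wd = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, wd - 1):
            out[y][x] = sum(w[j][i] * img[y - 1 + j][x - 1 + i]
                            for j in range(3) for i in range(3))
    return out

avg = [[1 / 9] * 3 for _ in range(3)]    # the 3x3 averaging mask above
img = [[9, 9, 9],
       [9, 0, 9],
       [9, 9, 9]]
smoothed = apply_mask(img, avg)          # center becomes the neighborhood mean
```

The nested loops make the cost visible: roughly 9 multiplications and additions per pixel, which is why mask application is described as computationally expensive.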

