Digital Image Processing - MECHATRONICS ENGINEERING DEPARTMENT

Introduction Lecturer: Dr. Hossam Hassan Email: hossameldin.hassan@eng.asu.edu.eg Computers and Systems Engineering

Essential Books
1. Digital Image Processing – Rafael Gonzalez and Richard Woods, Third Edition, Prentice Hall, 2008
2. Digital Image Processing Using MATLAB – Rafael Gonzalez, Richard Woods and Steven Eddins, Prentice Hall, 2008
3. Image Processing, Analysis and Machine Vision – Milan Sonka, Vaclav Hlavac and Roger Boyle, Third Edition, Thomson Learning, London, 2008

Course Contents
Introduction
Digital Image Fundamentals
Image Enhancement in the Spatial Domain
Image Enhancement in the Frequency Domain
Edge Detection
Image Segmentation
Representation and Description
Introduction to Object Recognition

Grading System
Final examination: 70%
Midterm examination, Assignment/Quiz/Report: 30%
Warnings:
– A quiz may be given without prior notice.
– Copying assignments is prohibited.
– Late submission reduces marks.

Overview In the early days of computing, data was mostly numerical. Later, textual data became more common. Today, many other forms of data are processed: voice, music, speech, images, computer graphics, etc. Each of these types of data is a signal. Loosely defined, a signal is a function that conveys information.

Relationship of Signal Processing to other fields For as long as people have tried to send or receive signals through electronic media (telegraphs, telephones, television, radar, etc.), there has been the realization that these signals may be affected by the systems used to acquire, transmit, or process them. Sometimes these systems are imperfect and introduce noise, distortion, or other artifacts.

Understanding the effects these systems have, and finding ways to correct them, is the foundation of signal processing. Sometimes these signals are specific messages that we create and send to someone else (e.g., telegraph, telephone, television, digital networking, etc.). That is, we deliberately put the information content into the signal and hope to extract it later.

Sender: acquire natural image → enhance picture → compress for transmission → encode and transmit over digital network. Recipient: receive transmitted codes of image → decode → decompress → display.
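A minimal sketch of this chain is shown below; the stage names are placeholders of my choosing (not from the lecture), and each stage simply passes data through so the chain runs end to end. A real system would plug in actual enhancement, compression and channel-coding algorithms.

```python
# Placeholder stages for the sender/recipient pipeline above.
def enhance(image):      return image   # e.g. contrast stretching
def compress(image):     return image   # e.g. JPEG encoding
def encode(data):        return data    # e.g. channel coding for the network
def transmit(codes):     return codes   # transfer over the digital network
def decode(codes):       return codes
def decompress(data):    return data
def display(image):      return image

def sender(natural_image):
    return transmit(encode(compress(enhance(natural_image))))

def recipient(received_codes):
    return display(decompress(decode(received_codes)))
```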

Concerned fields: Digital Communication, Compression, Speech Synthesis and Recognition, Computer Graphics, Image Processing, Computer Vision.

What is Image Processing? Image processing is a subclass of signal processing concerned specifically with pictures. Improve image quality for human perception and/or computer interpretation.

Several fields deal with images Computer Graphics: the creation of images. Image Processing: the enhancement or other manipulation of the image, the result of which is usually another image. Computer Vision: the analysis of image content.

Several fields deal with images

2 Principal application areas 1. Improvement of pictorial information for human interpretation. 2. Processing of image data for storage, transmission, and representation for autonomous machine perception. Pictorial: of or expressed in pictures; illustrated

Examples of fields that use DIP, categorized by image source – Radiation from the electromagnetic spectrum – Acoustic – Ultrasonic – Electronic (in the form of electron beams used in electron microscopy) – Computer (synthetic images used for modeling and visualization)

Gamma-Ray Imaging Nuclear Image – (a) Bone scan – (b) PET (Positron emission tomography) image Astronomical Observations. – (c) Cygnus Loop – Nuclear Reaction – (d) Gamma radiation from a reactor valve

X-ray Imaging Medical diagnostics – (a) chest X-ray (familiar) – (b) aortic angiogram – (c) head CT Industrial imaging – (d) Circuit board Astronomy – (e) Cygnus Loop

Imaging in Visible and Infrared Bands Astronomy Light microscopy Pharmaceuticals – (a) taxol (anticancer agent) – (b) Cholesterol Micro-inspection to materials characterization – (c) Microprocessor – (d) Nickel oxide thin film – (e) Surface of audio CD – (f) Organic superconductor

Remote sensing

Remote Sensing: Weather Observations

Imaging in Radio Band

Ultrasound Imaging

Generated images by computer

3 types of computerized processes Image Analysis Examples range from as simple as reading bar-coded tags to as sophisticated as identifying a person from his/her face.

Fundamental steps

Image Acquisition:

Camera

Frame Grabber

Image Enhancement

Image Restoration

Color Image Processing

Wavelets

Compression

Morphological processing

Image Segmentation

Representation & Description

Representation & Description

Recognition & Interpretation

Knowledge base

Human and Computer Vision

Simple questions

What is Vision? Recognize objects – people we know – things we own Locate objects in space – to pick them up Track objects in motion – catching a baseball – avoiding collisions with cars on the road Recognize actions – walking, running, pushing

Vision is: deceivingly easy, deceptive, computationally demanding, critical to many applications.

Vision is Deceivingly Easy We see effortlessly – seeing seems simpler than “thinking” – we can all “see,” but only a select few gifted people can solve “hard” problems like chess – we use nearly 70% of our brains for visual perception! All “creatures” see – frogs “see” – birds “see” – snakes “see” – but they do not see alike.

Vision is Deceptive Vision is an exceptionally strong sensation – vision is immediate – we perceive the visual world as external to ourselves, but it is a reconstruction within our brains – we regard how we see as reflecting the world “as it is,” but human vision is: subject to illusions, quantitatively imprecise, limited to a narrow range of frequencies of radiation, and passive.

Some Illusions

Some Illusions

Some Illusions

Some Illusions

Human Vision is Passive It relies on external energy sources (sunlight, light bulbs, fires) providing light that reflects off objects to our eyes. Vision systems can be “active” and carry their own energy sources – Radars – Bat acoustic imaging systems

Spectral Limitation of Human Vision We “see” only a small part of the energy spectrum of sunlight – we don’t see ultraviolet or higher frequencies of radiation – we don’t see infrared or lower frequencies of radiation – we see less than 0.1% of the energy that reaches our eyes. But objects in the world reflect and emit energy in these and other parts of the spectrum.

Structure of the Human Eye

Structure of the Human Eye

Lens & Retina

Receptors

Cones

Rods

Contrast sensitivity

Weber ratio
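The figure for this slide is not transcribed. For reference (a standard definition, not taken from the slide itself), the Weber ratio measures contrast sensitivity as the just-noticeable intensity increment relative to the background intensity:

$$\text{Weber ratio} = \frac{\Delta I_c}{I}$$

A small ratio means good brightness discrimination at that intensity level.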

Simultaneous contrast Which small square is the darkest one?

Signals

Time-Varying Signals

Spatially-Varying Signals

Spatiotemporal Signals Video Signal!

Types of Signals

Analog & Digital

Sampling

Quantization
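Only the titles of the sampling and quantization slides are transcribed. The sketch below is an illustrative assumption (using NumPy) of the two steps that digitize a signal: sampling picks values at discrete instants, and quantization rounds each sample to one of 2^k allowed levels.

```python
import numpy as np

def sample(f, duration, fs):
    """Sample the continuous signal f(t) at fs samples per second."""
    t = np.arange(0.0, duration, 1.0 / fs)
    return t, f(t)

def quantize(x, k, x_min=-1.0, x_max=1.0):
    """Uniformly quantize samples x to 2**k levels spanning [x_min, x_max]."""
    step = (x_max - x_min) / (2 ** k - 1)
    return np.round((x - x_min) / step) * step + x_min

# 5 Hz sine wave, sampled at 100 Hz for one second, quantized to 3 bits (8 levels)
t, x = sample(lambda t: np.sin(2 * np.pi * 5 * t), duration=1.0, fs=100)
x_q = quantize(x, k=3)
```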

Digital Image Representation

Digital Image Representation

Digital Image Representation

Example of Digital Image

Light-intensity function

Illumination and Reflectance

Illumination and Reflectance
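The details of these slides are not transcribed. In the standard model used by Gonzalez and Woods, the intensity recorded at a point is the product of the illumination falling on the scene and the reflectance of the object:

$$f(x, y) = i(x, y)\, r(x, y), \qquad 0 < i(x, y) < \infty, \quad 0 < r(x, y) < 1$$

where r(x, y) = 0 corresponds to total absorption and r(x, y) = 1 to total reflectance.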

Gray level

Color Perception Color is an important part of our visual experience. We distinguish only about 100 levels of gray but hundreds of thousands of colors. Color detection is important to computer vision: it makes object recognition easier. It is underutilized because more processing is required and it is hard to publish.

Color Perception of Reflection

Color Models Color models are useful for driving hardware that generates or captures images: monitors, TVs, video cameras, color printers. Since a color sensation can be reproduced by a combination of pure colors, it is simpler to use phosphors and CCD (charge-coupled device) elements that have sharp and narrow spectra rather than combine overlapping spectra. Color models describe in what proportions to combine these spectra to produce different color impressions.

Additive Color Models In monitors, 3 electron beams illuminate phosphors of 3 colors that act as additive light sources. The powers of these beams are controlled by the color components described by the RGB model.
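A small illustrative sketch of this additive mixing (my own example, not from the slides, using NumPy): each color is an (R, G, B) triple of light intensities, mixing is component-wise addition, clipped to the displayable range [0, 1].

```python
import numpy as np

def add_light(*colors):
    """Additively mix RGB colors given as (r, g, b) floats in [0, 1]."""
    mixed = np.sum(np.array(colors, dtype=float), axis=0)
    return np.clip(mixed, 0.0, 1.0)

red, green, blue = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
print(add_light(red, green))         # [1. 1. 0.]  yellow
print(add_light(red, green, blue))   # [1. 1. 1.]  white
```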

Color Models (RGB Cube)

Number of bits
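Only the title of this slide is transcribed. In the usual formulation, an M × N image with k bits per pixel requires

$$b = M \times N \times k$$

bits of storage; for example, a 1024 × 1024 image at 8 bits per pixel needs 1024 × 1024 × 8 = 8,388,608 bits, i.e. 1 MB.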

Resolution

Checkerboard effect

False contouring

Nonuniform sampling

Example

Example

Nonuniform quantization

Image Formation

Lens-less Imaging Systems - Pinhole Optics Projects images – without a lens – with infinite depth of field. The smaller the pinhole – the better the focus – the less light energy from any single point. Good for tracking solar eclipses.

Pinhole Camera (Cont.) Distant objects appear smaller.
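A minimal sketch of why distant objects project smaller, assuming the standard pinhole model (the notation is mine, not from the slides): a scene point (X, Y, Z) in camera coordinates maps to image coordinates (f·X/Z, f·Y/Z), so the projected size shrinks as the depth Z grows.

```python
def project(point, f):
    """Pinhole projection of a 3-D point (X, Y, Z) onto the image plane."""
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

f = 0.05                               # 50 mm pinhole-to-image-plane distance
print(project((1.0, 0.5, 2.0), f))     # (0.025, 0.0125)
print(project((1.0, 0.5, 20.0), f))    # (0.0025, 0.00125): 10x farther, 10x smaller
```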

Pinhole Camera (Cont.) A bigger hole gives more blurred images.

Lenses Collect More Light With a lens, diverging rays from a scene point are converged back to an image point.

Lens Equation n: lens refractive index; h_I: image height; h_O: object height. A negative value for the image height indicates that the image is inverted.
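The equations on this slide are not transcribed. Two standard relations involving the listed quantities, given here as a best guess at what the slide shows, are the lateral magnification and the lensmaker's equation:

$$m = \frac{h_I}{h_O} = -\frac{q}{p}, \qquad \frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)$$

where p and q are the object and image distances and R_1, R_2 are the radii of curvature of the lens surfaces; a negative h_I corresponds to the inverted image mentioned above.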

Thin Lens relates the distance between the scene point being viewed and the lens to the distance between the lens and the point’s image (where the rays from that point are brought into focus by the lens) Let M be a point being viewed, p is the distance of M from the lens along the optical axis. The thin lens focuses all the rays from M onto the same point, the image point m at distance q from the lens.

Thin Lens Equation m can be determined by intersecting two known rays – MQ is parallel to the optical axis, so it must be refracted to pass through F. – MO passes through the lens center, so it is not bent. Note two pairs of similar triangles – MSO and Osm (yellow) – OQF and Fsm (green)
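The resulting equation is not transcribed; completing the similar-triangles argument gives the standard thin-lens equation. From the yellow pair, h_I/h_O = q/p; from the green pair, h_I/h_O = (q − f)/f. Equating and rearranging:

$$\frac{q}{p} = \frac{q - f}{f} \;\Longrightarrow\; \frac{1}{p} + \frac{1}{q} = \frac{1}{f}$$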

Thin Lens Equation As p gets large, q approaches f As q approaches f, p approaches infinity

Field of View As f gets smaller, image becomes more wide angle (more world points project onto the finite image plane). As f gets larger, image becomes more telescopic (smaller part of the world projects onto the finite image plane)
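A way to quantify this (an assumed standard formula, not from the slides): for an image plane of width w at focal length f, the horizontal field of view is

$$\theta = 2\arctan\!\left(\frac{w}{2f}\right)$$

so decreasing f widens the angle captured and increasing f narrows it toward a telescopic view, matching the statement above.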

According to that MODEL?

Vanishing Point?

Vanishing Point – Two-Point Perspective (vanishing points vx and vy)

Optical Power and Accommodation Optical power of a lens - how strongly the lens bends the incoming rays – Short focal length lens bends rays significantly – It images a point source at infinity (large p) at distance f behind the lens. The smaller f, the more the rays must be bent to bring them into focus sooner. – Optical power is 1/f, with f measured in meters. The unit is called the diopter – Human vision: when viewing faraway objects the distance from the lens to the retina is 0.017m. So the optical power of the eye is 58.8 diopters
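The arithmetic behind the figure quoted above:

$$P = \frac{1}{f} = \frac{1}{0.017\ \text{m}} \approx 58.8\ \text{diopters}$$

i.e. with the eye focused at infinity, q = f ≈ 0.017 m (the lens-to-retina distance), giving an optical power of about 58.8 D.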

Accommodation How does the human eye bring nearby points into focus on the retina? – by increasing the power of the lens – muscles attached to the lens change its shape to change the lens power – accommodation: adjusting the focal length of the lens – bringing points that are nearby into focus causes faraway points to go out of focus – depth-of-field: range of distances in focus

Accommodation Physical cameras: mechanically change the distance between the lens and the image plane

