School: School of Electrical and Information Engineering    Major: Computer Science and Technology    Student: XXX    Class / Student No.: 10455321 1 1    Supervisor: XXX
June 2014
Foreign Literature Translation
Digital Image Processing and Edge Detection
Turgay Celik*,Hasan Demirel,Huseyin Ozkaramanli,Mustafa Uyguroglu
1. Digital Image Processing
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.
An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.
There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image
processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.
There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.
Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be
viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.
The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.
Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.
Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because "it looks better." It is
important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
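As a minimal sketch of one familiar enhancement technique, a linear contrast stretch (a common example, not a method prescribed by this text) maps the observed gray-level range of an image onto the full 0-255 range:

```python
def contrast_stretch(pixels, out_min=0, out_max=255):
    """Linearly map the observed gray-level range onto [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A low-contrast row of pixels confined to [100, 140]:
row = [100, 110, 120, 130, 140]
print(contrast_stretch(row))  # -> [0, 64, 128, 191, 255]
```

Whether the stretched result "looks better" is, as noted above, a subjective judgment.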
Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.
Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.
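A hedged sketch of pyramidal representation: each level halves the image by averaging 2x2 blocks (a simple box-filter pyramid, not the wavelet analysis the text refers to):

```python
def pyramid_level(img):
    """Halve a 2-D gray image by averaging each 2x2 block (box-filter pyramid)."""
    h, w = len(img), len(img[0])
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) // 4
             for c in range(w // 2)]
            for r in range(h // 2)]

base = [[10, 10, 50, 50],
        [10, 10, 50, 50],
        [90, 90, 30, 30],
        [90, 90, 30, 30]]
print(pyramid_level(base))                  # -> [[10, 50], [90, 30]]
print(pyramid_level(pyramid_level(base)))   # -> [[45]]
```

Applying the function repeatedly yields the successively smaller levels of the pyramid.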
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.
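A minimal sketch of two basic morphological tools, binary erosion and dilation, using a 3x3 square structuring element (pixels beyond the image border are treated as background here; this boundary choice is an assumption of the sketch):

```python
def erode(img):
    """Binary erosion: a pixel survives only if its full 3x3 neighborhood is 1."""
    h, w = len(img), len(img[0])
    def at(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[int(all(at(r+dr, c+dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
             for c in range(w)] for r in range(h)]

def dilate(img):
    """Binary dilation: a pixel is set if any pixel in its 3x3 neighborhood is 1."""
    h, w = len(img), len(img[0])
    def at(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[int(any(at(r+dr, c+dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
             for c in range(w)] for r in range(h)]

square = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
print(erode(square))   # only the centre pixel of the 3x3 square survives
print(dilate(square))  # the square grows to fill the whole 5x5 grid
```

Erosion shrinks shapes and dilation grows them; combined, they yield the opening and closing operations used to describe shape.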
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to
succeed.
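The simplest segmentation procedure is global thresholding; a minimal sketch (the threshold here is chosen by hand for the toy image, not by an automatic method such as Otsu's):

```python
def threshold(img, t):
    """Partition a gray image into object (1) and background (0) pixels."""
    return [[int(p > t) for p in row] for row in img]

# A toy scene: dark background on the left, bright object on the right.
scene = [[20, 25, 200, 210],
         [30, 22, 190, 205],
         [18, 24, 195, 198]]
print(threshold(scene, 128))
# -> [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
```

Real scenes rarely separate this cleanly, which is why autonomous segmentation remains difficult.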
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.
So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of
double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.
2. Edge Detection
Edges are among the most basic characteristics of an image and often carry most of its information. Edges occur at irregular structures and unsteady phenomena in the image, that is, at points of discontinuity in the signal. These points mark the positions of image contours, which are often the important representative features we need in image processing; thus we must detect and extract the edges of an image. Edge detection, however, is one of the classical difficult problems in image processing; its solution has a significant influence on higher-level feature description, recognition, and understanding. Because edge detection has extremely important practical value in many fields, researchers continue to work on constructing edge detection operators with good properties and good performance. In the usual situation, singular points and discontinuities of the signal may be regarded as edge points of the image; the gray-level variation near such a point is reflected by the gradient of the gray-level distribution of its neighboring pixels.
Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms that aim to identify points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.
Most of the main information in an image exists at its edges, which manifest as discontinuities of local image characteristics: places where the gray level changes sharply, or, as we usually say, where the signal varies singularly. Edges are usually divided into two types, step edges and roof edges (as shown in Figure 1-1). At a step edge, the gray levels on the two sides differ markedly, while a roof edge lies at the turning point where the gray level stops increasing and begins to decrease. Mathematically, edge points can be characterized by derivatives of the gray level: for a step edge we compute the first derivative, and for a roof edge the second derivative.
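The derivative characterization above can be illustrated on two toy 1-D signals, with finite differences standing in for the derivatives (the signals and values are illustrative, not from the text):

```python
def first_diff(s):
    """Discrete first derivative: s[i+1] - s[i]."""
    return [s[i+1] - s[i] for i in range(len(s) - 1)]

def second_diff(s):
    """Discrete second derivative: s[i+1] - 2*s[i] + s[i-1]."""
    return [s[i+1] - 2*s[i] + s[i-1] for i in range(1, len(s) - 1)]

step = [0, 0, 0, 9, 9, 9]      # step edge: gray level jumps
roof = [0, 3, 6, 9, 6, 3, 0]   # roof edge: gray level rises then falls

print(first_diff(step))   # -> [0, 0, 9, 0, 0]   peak marks the step edge
print(second_diff(roof))  # -> [0, 0, -6, 0, 0]  response at the roof top
```

The first derivative peaks at the step, while the second derivative responds at the turning point of the roof, matching the characterization above.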
A single edge may exhibit both step and line edge characteristics. For example, on a surface, a transition from one plane to another plane with a different normal direction produces a step edge. If the surface is specularly reflective and the corner formed by the two planes is fairly smooth, then when the surface normal at the corner sweeps through the specular reflection angle, the specular reflection component produces a bright strip along the smooth corner; such an edge looks as if a line edge were superimposed on the step edge. Because edges may correspond to important characteristics of objects in the scene, they are very important image features. For instance, the outline of an object usually produces a step edge, because the image intensity of the object differs from that of the background.
We know that the essence of edge detection is to use some algorithm to extract the boundary between objects and background in an image. We define an edge as the boundary of a region in which the gray level changes rapidly. The variation of the image gray level can be reflected by the gradient of the gray-level distribution, so we can use local image differentiation techniques to obtain edge detection operators. An edge detection algorithm has the following four steps:
Filtering: Edge detection algorithms are mainly based on the first and second derivatives of image intensity, but derivative computation is very sensitive to noise, so a filter must be used to improve the performance of the edge detector in the presence of noise. It should be pointed out that most filters also cause a loss of edge strength while reducing noise; therefore, a compromise is needed between edge enhancement and noise reduction.
Enhancement: The foundation of edge enhancement is determining the change in intensity within the neighborhood of each point of the image. The enhancement algorithm makes points whose neighborhood intensity values change significantly (globally or locally) stand out. Edge enhancement is generally accomplished by computing the gradient magnitude.
7
外文文献翻译
Figure P1-1: Processing result.
Detection: Many points in an image have a fairly large gradient magnitude, but in a specific application not all of them are edges, so some method should be used to determine which points are edge points. The simplest edge detection criterion is a threshold on the gradient magnitude.
Localization: If an application requires the edge position to be determined, the edge position can be estimated at sub-pixel resolution, and the edge orientation can also be estimated. In edge detection algorithms, the first three steps are used almost universally. This is because in most situations the edge detector need only indicate that an edge appears near some pixel of the image; it is not necessary to indicate the exact position or orientation of the edge.
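The filtering, enhancement, and detection steps above can be sketched with a small gradient-plus-threshold pipeline. The kernels are the standard Sobel masks, the |gx| + |gy| magnitude approximation is a common choice, and the threshold value is illustrative; smoothing is omitted because the toy image is noise-free:

```python
def conv3(img, k):
    """Valid 3x3 correlation of a 2-D list-of-lists image with kernel k."""
    h, w = len(img), len(img[0])
    return [[sum(img[r+i][c+j] * k[i][j] for i in range(3) for j in range(3))
             for c in range(w - 2)] for r in range(h - 2)]

SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # Sobel horizontal-gradient mask
SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # Sobel vertical-gradient mask

def detect_edges(img, t):
    gx, gy = conv3(img, SX), conv3(img, SY)
    # Enhancement: gradient magnitude, approximated as |gx| + |gy|.
    mag = [[abs(a) + abs(b) for a, b in zip(ra, rb)] for ra, rb in zip(gx, gy)]
    # Detection: threshold the gradient magnitude.
    return [[int(m > t) for m in row] for row in mag]

# A vertical step edge between two flat regions.
img = [[10, 10, 10, 90, 90, 90] for _ in range(5)]
print(detect_edges(img, 100))  # edge pixels marked in the middle columns
```

The output marks the columns straddling the gray-level jump, exactly where the gradient magnitude peaks.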
Edge detection error usually refers to edge misclassification error: false edges are retained while true edges are removed. Edge estimation error describes the errors in edge position and orientation with a probabilistic statistical model. We distinguish edge detection error from edge estimation error because they are computed in completely different ways and their error models are completely different.
Edge detection is the most fundamental operation for detecting significant local changes in an image. In the one-dimensional case, a step edge corresponds to a local peak of the first derivative of the image. The gradient is a measure of change of a function, and an image can be regarded as an array of samples of a continuous function of image intensity. Therefore, by analogy with the one-dimensional case, significant changes in the image gray level