Clever Geek Handbook

Edge Detection

Figure: an example of edge detection used to locate license plates (original image and detection result).

Edge detection (also called edge highlighting) is a term in image processing and computer vision, partly within the fields of object detection and feature extraction, covering algorithms that identify points in a digital image at which the brightness changes sharply or other kinds of discontinuities occur.


Purpose

The main purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. Under fairly general assumptions about the image formation model, discontinuities in image brightness may indicate:

  • depth discontinuities;
  • discontinuities in surface orientation;
  • changes in material properties;
  • variations in scene illumination.

In the ideal case, the result of edge detection is a set of connected curves indicating the boundaries of objects and of surface markings, as well as curves corresponding to discontinuities in surface orientation. Applying an edge detection filter to an image can thus significantly reduce the amount of data to be processed, since the filtered-out part of the image is considered less significant while the most important structural properties of the image are preserved. However, it is not always possible to extract edges from real-world pictures of moderate complexity. Edges extracted from such images often suffer from fragmentation (the edge curves are not connected), missing edge segments, or false edges that do not correspond to the object under study.

Edge Properties

The edges extracted from a two-dimensional image of a three-dimensional scene can be divided into viewpoint-dependent and viewpoint-independent ones. Viewpoint-independent edges usually reflect properties inherent to the three-dimensional objects themselves, such as surface color and surface shape. Viewpoint-dependent edges can change as the viewpoint changes and reflect the geometry of the scene, such as objects occluding one another.

A typical edge might be, for example, the border between a block of red and a block of yellow. A line, by contrast, can be a small set of pixels of a different color against an otherwise constant background. There is therefore usually one edge on each side of a line.

Edges play an important role in many image processing applications, especially in machine vision systems that analyze scenes of man-made objects under fixed illumination. In recent years, however, substantial (and successful) research has also been carried out on computer vision methods that do not rely on edge detection as a preprocessing step.

A simple edge model

Although some of the literature considers the detection of ideal step edges, edges in natural images are usually not ideal steps. They are normally affected by one or more of the following effects:

  • focal blur caused by a finite depth of field;
  • penumbral blur cast by shadows of non-point light sources;
  • shading of smooth objects.

Many researchers therefore use a step edge smoothed by a Gaussian (an error-function profile) as the simplest approximation of the ideal edge model for modeling blurred edges in applied problems. Thus, a one-dimensional image f that has exactly one edge at the point x = 0 can be modeled as:

f(x) = \frac{I_r - I_l}{2} \left( \operatorname{erf}\left( \frac{x}{\sqrt{2}\,\sigma} \right) + 1 \right) + I_l.

Here

\operatorname{erf}(u) = \frac{2}{\sqrt{\pi}} \int_0^u e^{-t^2} \, dt.

To the left of the edge the brightness is I_l = \lim_{x \to -\infty} f(x), and to the right it is I_r = \lim_{x \to \infty} f(x). The parameter \sigma is called the blur scale of the edge.
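As an illustration, the smoothed step model above can be evaluated directly (a minimal sketch; the function name and the sample brightness values are ours):

```python
import math

def step_edge(x, I_l, I_r, sigma):
    """Gaussian-smoothed ideal step edge: brightness I_l on the left,
    I_r on the right, with blur scale sigma (an error-function profile)."""
    return (I_r - I_l) / 2 * (math.erf(x / (math.sqrt(2) * sigma)) + 1) + I_l

# Far to the left the model tends to I_l, far to the right to I_r,
# and at the edge location x = 0 it is exactly halfway between them.
print(step_edge(-100, 10, 200, 2))  # ≈ 10
print(step_edge(0, 10, 200, 2))     # 105.0
print(step_edge(100, 10, 200, 2))   # ≈ 200
```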

Why edge detection is not a trivial task

To illustrate why edge detection is a non-trivial task, consider the problem of detecting an edge in the following one-dimensional signal. Here intuition immediately says that the edge should lie between the 4th and 5th pixels.

5   7   6   4   152   148   149

If the brightness change between the 4th and 5th pixels were smaller, and the brightness changes between their neighbors were larger, it would no longer be so easy to say that the edge should be in this place. Indeed, one could argue that there should be several edges:

5   7   6   41   113   148   149

Hence, fixing a threshold on how large the brightness change between two neighboring pixels must be before we declare an edge is not always easy. This is one of the reasons why edge detection is a non-trivial problem unless the objects in the scene are quite simple and the illumination conditions are well controlled.
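The two example signals can be compared through their first differences (a small sketch; variable and function names are ours): in the first row a single large jump dominates, while in the second the change is spread across several pixels, so no single threshold cleanly separates "edge" from "no edge".

```python
# Pixel rows from the example above.
row_sharp   = [5, 7, 6, 4, 152, 148, 149]
row_gradual = [5, 7, 6, 41, 113, 148, 149]

def diffs(row):
    """Brightness change between each pair of neighboring pixels."""
    return [row[i + 1] - row[i] for i in range(len(row) - 1)]

print(diffs(row_sharp))    # [2, -1, -2, 148, -4, 1]  -- one dominant jump
print(diffs(row_gradual))  # [2, -1, 35, 72, 35, 1]   -- change is spread out
```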

Approaches to edge detection

There are many approaches to edge detection, but almost all of them fall into two categories: search-based methods and zero-crossing-based methods. Search-based methods detect edges by first computing a measure of edge strength, usually a first-derivative expression such as the gradient magnitude, and then searching for local maxima of the edge strength along the presumed edge direction, usually perpendicular to the gradient vector. Zero-crossing-based methods search for zero crossings of a second-derivative expression, usually the zeros of the Laplacian or of a nonlinear differential expression, as described below. As a preprocessing step for edge detection, image smoothing is almost always applied, usually with a Gaussian filter.

The published edge detection methods differ in the smoothing filters used and in the way edge strength is computed. Although many methods rely on calculating the image gradient, they differ in the types of filters used to estimate the gradients in the x and y directions.

Canny edge detection

John Canny studied the mathematical problem of deriving a filter that is optimal with respect to the criteria of detection, localization, and minimizing multiple responses to a single edge. He showed that the desired filter is a sum of four exponential terms, and that it can be well approximated by the first derivative of a Gaussian. Canny also introduced the notion of non-maximum suppression: edge pixels are those at which the gradient magnitude reaches a local maximum in the direction of the gradient vector.

Although his work dates from the early days of computer vision, the Canny edge detector remains one of the best detectors in use. Apart from particular special cases, it is difficult to find a detector that performs significantly better.

The Canny-Deriche detector was derived from a mathematical criterion similar to Canny's, although, starting from a different point of view, it leads to a set of recursive filters for image smoothing instead of exponential or Gaussian filters.

Other first-order methods

To estimate the magnitude of the gradient of the image (or of a smoothed version of it), various gradient operators can be applied. The simplest approach is to use central differences:

L_x(x, y) = -\tfrac{1}{2}\, L(x-1, y) + 0 \cdot L(x, y) + \tfrac{1}{2}\, L(x+1, y),
L_y(x, y) = -\tfrac{1}{2}\, L(x, y-1) + 0 \cdot L(x, y) + \tfrac{1}{2}\, L(x, y+1),

corresponding to the application of the following filters to the image:

L_x = \begin{bmatrix} -1/2 & 0 & +1/2 \end{bmatrix} * L \quad \text{and} \quad L_y = \begin{bmatrix} +1/2 \\ 0 \\ -1/2 \end{bmatrix} * L

The well-known Sobel operator is based on the following filters:

L_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * L \quad \text{and} \quad L_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * L

Given these estimates, the gradient magnitude can be computed as:

|\nabla L| = \sqrt{L_x^2 + L_y^2}

and the direction of the gradient is calculated as follows:

\theta = \operatorname{atan2}(L_y, L_x)
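A minimal NumPy sketch of these formulas, using the central-difference filters above (function and variable names are ours):

```python
import numpy as np

def gradient_magnitude_direction(L):
    """Image gradient via the central differences [-1/2, 0, +1/2],
    returning the gradient magnitude and direction."""
    L = L.astype(float)
    Lx = np.zeros_like(L)
    Ly = np.zeros_like(L)
    Lx[:, 1:-1] = (L[:, 2:] - L[:, :-2]) / 2   # horizontal central difference
    Ly[1:-1, :] = (L[2:, :] - L[:-2, :]) / 2   # vertical central difference
    magnitude = np.hypot(Lx, Ly)               # sqrt(Lx^2 + Ly^2)
    direction = np.arctan2(Ly, Lx)             # atan2(Ly, Lx)
    return magnitude, direction

# A vertical step edge: the magnitude peaks on the two columns next to the jump.
img = np.array([[10, 10, 10, 200, 200, 200]] * 5)
mag, theta = gradient_magnitude_direction(img)
print(mag[2])  # [ 0.  0. 95. 95.  0.  0.]
```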

Other operators for estimating the image gradient were proposed by Judith Prewitt and Lawrence Roberts, and are known as the Prewitt operator and the Roberts cross operator, respectively.

Thresholding

Once the edge strength has been computed (usually the gradient magnitude), the next step is to apply a threshold to decide whether or not an edge is present at a given image point. The lower the threshold, the more edges are detected, but the more susceptible the result is to noise and to picking out irrelevant image detail. Conversely, a high threshold may miss weak edges or yield fragmented edges.

If the threshold is applied directly to the gradient magnitude image, the resulting edges are thick, and some post-processing is required to make them thin and precise. If edges are detected with non-maximum suppression, they are thin by construction and can be linked into edge polygons by an edge-linking (edge-tracing) procedure. On a discrete grid, non-maximum suppression can be implemented by estimating the gradient direction from the first derivatives, rounding that direction to multiples of 45 degrees, and finally comparing the gradient magnitudes along the resulting direction.
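The discrete non-maximum suppression step just described (round the gradient direction to 45-degree increments, then compare the two neighbors along that direction) might be sketched as follows; this is an illustrative implementation with names of our choosing, not Canny's original code:

```python
import numpy as np

def non_maximum_suppression(mag, theta):
    """Keep a pixel only if its gradient magnitude is a local maximum
    along the gradient direction, rounded to the nearest 45 degrees."""
    out = np.zeros_like(mag)
    angle = (np.rad2deg(theta) + 180) % 180    # fold direction into [0, 180)
    H, W = mag.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:         # ~0 deg: compare left/right
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                     # ~45 deg: compare diagonals
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                    # ~90 deg: compare up/down
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                              # ~135 deg: other diagonals
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out
```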

The traditional approach to choosing a suitable threshold is thresholding with hysteresis. The method uses two thresholds. The upper threshold is used to find the starting point of an edge; once a starting point is found, the edge is traced point by point for as long as the edge strength stays above the lower threshold. This algorithm assumes that edges are likely to be continuous curves, and it allows weak sections of an edge to be traced without marking every noisy pixel in the image as an edge. However, the problem of choosing appropriate threshold values remains, since the optimal parameters can vary from image to image.
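A sketch of this two-threshold scheme, implemented as an 8-connected trace outward from pixels above the upper threshold (implementation details and names are our choices):

```python
import numpy as np
from collections import deque

def hysteresis_threshold(strength, low, high):
    """Two-threshold edge tracking: seed at pixels above `high`, then grow
    each edge through 8-connected neighbors that stay above `low`."""
    strong = strength >= high
    weak = strength >= low
    edges = np.zeros(strength.shape, dtype=bool)
    queue = deque(zip(*np.nonzero(strong)))
    for i, j in queue:
        edges[i, j] = True
    while queue:                               # breadth-first edge tracing
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < strength.shape[0] and 0 <= nj < strength.shape[1]
                        and weak[ni, nj] and not edges[ni, nj]):
                    edges[ni, nj] = True
                    queue.append((ni, nj))
    return edges
```

Note that a weak pixel (between the two thresholds) is kept only when it is connected to a strong one, which is exactly what suppresses isolated noise responses.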

Edge thinning

Edge thinning is a process that makes edges thinner by removing unwanted spurious points that appear on them. This technique is applied after the image has been smoothed (with a median or Gaussian filter), an edge operator (such as one of those described above) has been applied to compute edge strength, and the edges have been cleaned up with a suitable threshold. It removes all unwanted points and, when applied carefully, produces edges one pixel thick.

Pros:

  • sharp, thin edges can improve object recognition
  • when the Hough transform is used to detect lines or ellipses, thin edges give significantly better results
  • if the edge is the boundary of a region, thin edges make it possible to compute parameters such as the perimeter without complicated arithmetic

There are many popular methods for solving this problem. One of them is described below:

  1. Choose a connectivity type: 8, 6 or 4.
    • 8-connectivity is preferred, in which all pixels immediately surrounding the current pixel are considered.
  2. Remove points above, below, to the left and to the right of a point.
    • This should be done in several passes: first remove points in one direction, then, on the processed image, remove points in the next direction.
    • A point is removed in the following case:
      1. it has no neighbor above (when processing the "up" direction; otherwise, in the corresponding direction);
      2. it is not the end of a line;
      3. removing it does not affect the connectivity of its neighbors;
      4. OR it is an isolated point.
    • Otherwise, the point is kept.
  3. The previous step can be repeated several times, depending on the desired "accuracy" of the edges.
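The per-pixel removal test in step 2 can be sketched as follows. Note one assumption on our part: the condition "removal does not affect the connectedness of the neighbors" is approximated here by the standard crossing-number test (exactly one 0-to-1 transition around the 8-neighborhood), which the text does not specify explicitly:

```python
def neighbours8(img, i, j):
    """8-neighbors of (i, j), clockwise starting from the pixel above."""
    return [img[i-1][j], img[i-1][j+1], img[i][j+1], img[i+1][j+1],
            img[i+1][j], img[i+1][j-1], img[i][j-1], img[i-1][j-1]]

def can_delete(img, i, j):
    """Deletion test for a thinning pass: a point may be removed if it is
    isolated, or if it is not the end of a line and its removal keeps the
    neighbors connected (one 0->1 transition around the neighborhood)."""
    n = neighbours8(img, i, j)
    if sum(n) == 0:
        return True                      # isolated point
    if sum(n) == 1:
        return False                     # end of a line: keep it
    transitions = sum(n[k] == 0 and n[(k + 1) % 8] == 1 for k in range(8))
    return transitions == 1              # removal preserves connectivity
```

For example, the corner pixel of a solid 2x2 block is deletable, while the endpoint of a one-pixel-wide line is not.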

Second-order approaches

Instead of working with the gradient, some edge detection operators use second derivatives of the image brightness, which naturally measure the rate of change of the gradient. Thus, in the ideal case, detecting the zeros of the second derivative locates the local maxima of the gradient.

The Marr-Hildreth operator is based on finding the zero crossings of the Laplacian applied to a Gaussian-smoothed image. It was shown, however, that this operator produces false edges in homogeneous regions of the image where the gradient has a local minimum. Moreover, it localizes curved edges poorly. The operator is therefore now mainly of historical interest.
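For reference, a Marr-Hildreth-style sketch using SciPy's Laplacian-of-Gaussian filter; the simple neighbor sign-change test for zero crossings is our simplification, not part of the original formulation:

```python
import numpy as np
from scipy import ndimage

def marr_hildreth(image, sigma=2.0):
    """Laplacian of a Gaussian-smoothed image, followed by a simple
    zero-crossing test between horizontal and vertical neighbors."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma)
    edges = np.zeros(image.shape, dtype=bool)
    # Zero crossing: the LoG response changes sign between a pixel
    # and its right or lower neighbor.
    edges[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
    edges[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
    return edges
```

On a vertical step image the sign change lands on the columns adjacent to the step, since the LoG response is antisymmetric about the edge.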

Differential edge detection

A more advanced second-order edge detection method, which also locates edges with pixel accuracy, is the following differential approach based on detecting zero crossings of the second derivative in the direction of the gradient vector.

We introduce at each image point a local coordinate system (u, v), with the v axis parallel to the gradient direction. Assuming that the image has been smoothed with a Gaussian filter and that a scale-space representation L(x, y; t) at scale t has been computed, we require that the gradient magnitude of the scale-space representation, which equals the first directional derivative L_v in the v direction, have a zero first derivative in the v direction:

\partial_v (L_v) = 0,

while the second derivative of L_v in the v direction should be negative, since only maxima are of interest, that is:

\partial_{vv} (L_v) \leq 0.

Written as an explicit expression in the partial derivatives L_x, L_y, …, L_{yyy}, this edge definition corresponds to the zero-crossing curves of the differential invariant

L_v^2 L_{vv} = L_x^2 L_{xx} + 2 L_x L_y L_{xy} + L_y^2 L_{yy} = 0,

that satisfy the sign condition

L_v^3 L_{vvv} = L_x^3 L_{xxx} + 3 L_x^2 L_y L_{xxy} + 3 L_x L_y^2 L_{xyy} + L_y^3 L_{yyy} \leq 0,

where L_x, L_y, …, L_{yyy} denote partial derivatives computed from the scale-space representation L obtained by filtering the original image with a Gaussian filter.

In this way, the edges are obtained automatically as continuous curves with pixel precision. Hysteresis thresholding can additionally be applied to the resulting edge segments.

In practice, the first derivatives can be computed as described above, while the second derivatives can be computed from the scale-space representation L as follows:

L_{xx}(x, y) = L(x-1, y) - 2L(x, y) + L(x+1, y),
L_{xy}(x, y) = \bigl( L(x-1, y-1) - L(x-1, y+1) - L(x+1, y-1) + L(x+1, y+1) \bigr) / 4,
L_{yy}(x, y) = L(x, y-1) - 2L(x, y) + L(x, y+1),

corresponding to the following operators:

L_{xx} = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} * L \quad \text{and} \quad L_{xy} = \begin{bmatrix} -1/4 & 0 & +1/4 \\ 0 & 0 & 0 \\ +1/4 & 0 & -1/4 \end{bmatrix} * L \quad \text{and} \quad L_{yy} = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} * L

Higher-order derivatives can be computed similarly.
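Combining the stencils above, the differential invariant L_v^2 L_{vv} can be evaluated on the interior pixels of an image (a sketch; the function name is ours, and derivatives are taken directly on the image rather than on a Gaussian-smoothed scale-space representation):

```python
import numpy as np

def differential_invariant(L):
    """First and second partial derivatives via the stencils above, and the
    differential invariant Lv^2*Lvv = Lx^2*Lxx + 2*Lx*Ly*Lxy + Ly^2*Lyy,
    whose zero crossings mark edge points (interior pixels only)."""
    L = L.astype(float)
    Lx  = (L[1:-1, 2:] - L[1:-1, :-2]) / 2                    # [-1/2, 0, 1/2]
    Ly  = (L[2:, 1:-1] - L[:-2, 1:-1]) / 2
    Lxx = L[1:-1, 2:] - 2 * L[1:-1, 1:-1] + L[1:-1, :-2]      # [1, -2, 1]
    Lyy = L[2:, 1:-1] - 2 * L[1:-1, 1:-1] + L[:-2, 1:-1]
    Lxy = (L[:-2, :-2] - L[:-2, 2:] - L[2:, :-2] + L[2:, 2:]) / 4
    return Lx**2 * Lxx + 2 * Lx * Ly * Lxy + Ly**2 * Lyy
```

As a sanity check, for the test image L(x, y) = x*y the stencils give Lx = y, Ly = x, Lxx = Lyy = 0 and Lxy = 1, so the invariant reduces to 2xy.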

Phase-based approaches

More recent developments in edge detection take a frequency-domain approach. Phase congruency (phase coherence) methods look for points in the image where all sinusoids in the frequency domain are in phase. Such points usually correspond to perceived edges, regardless of how large the brightness change is. A key advantage of this technique is that it responds strongly to Mach bands and avoids the false edges typically found around roof edges.

Links

  • JOHN CANNY, A Computational Approach to Edge Detection
  • Canny edge detector in OpenCV

See also

  • Digital image processing
  • Computer vision
  • Canny operator
  • Sobel operator
  • Prewitt operator
  • Roberts Cross Operator
  • Iverson operator
Source - https://ru.wikipedia.org/w/index.php?title= Border_ highlighting&oldid = 100023850

