Image Processing

Previously, we learned what visual inspection is and how it supports inspection checks and quality assurance of manufactured products. Vision-based inspection relies on a specific technology known as image processing.

Image processing is a technique for carrying out a set of operations on an image, either to obtain an enhanced image or to extract useful information from it.

It is a form of signal processing in which the input is an image and the output is either an improved image or a set of characteristics/features associated with it. Over the years, image processing has become one of the most rapidly growing technologies in both engineering and computer science.

Image processing consists of these three following steps:

  • Importing the image via image capturing tools;
  • Manipulating and analyzing the image;
  • Producing a result where the output can be an altered image or report that is based on image analysis.
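
These three steps can be sketched in plain Python. The hard-coded pixel grid and helper names below are illustrative stand-ins (not from any particular library), with the image represented as nested lists of 0-255 grayscale intensities:

```python
def load_image():
    # Step 1: "import" the image (a hard-coded stand-in for a capture tool).
    return [[0, 64, 128],
            [192, 255, 32]]

def invert(image):
    # Step 2: manipulate the image -- here, invert each pixel's intensity.
    return [[255 - p for p in row] for row in image]

def report(image):
    # Step 3: produce a result -- a small report based on image analysis.
    pixels = [p for row in image for p in row]
    return {"min": min(pixels), "max": max(pixels),
            "mean": sum(pixels) / len(pixels)}

image = load_image()
altered = invert(image)    # the output can be an altered image...
summary = report(altered)  # ...or a report derived from image analysis
```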

Image processing methods fall into two types:

Analogue image processing: Generally, analogue image processing is used for hard copies such as photographs and printouts. Image analysts apply various facets of interpretation while using these visual techniques.

Digital image processing: Digital image processing methods help in the manipulation and analysis of digital images. The three general steps that all types of data undergo in digital image processing are pre-processing, enhancement, and information extraction.

This article focuses primarily on digital image processing techniques and their various phases.

Digital Image Processing and Its Phases

Digital image processing uses digital computers to convert images into digital form and then process them. It subjects numerical representations of images to a series of operations to obtain a desired result. The primary advantages of digital image processing are its versatility, repeatability, and the preservation of the original data.

The main techniques of digital image processing are as follows:

  • Image Editing: Changing or altering digital images using graphic software tools.
  • Image Restoration: Processing a corrupted image to recover a clean original and restore lost information.
  • Independent Component Analysis: Computationally separates a multivariate signal into additive subcomponents.
  • Anisotropic Diffusion: Reduces image noise without removing essential portions of the image, such as edges.
  • Linear Filtering: Produces output images in which each pixel is a linear combination (convolution) of neighbouring input pixels, used, for example, for blurring or sharpening.
  • Neural Networks: Computational models used in machine learning to solve various image-analysis tasks.
  • Pixelation: Rendering an image at a resolution low enough that its individual pixel blocks become visible.
  • Principal Components Analysis: A digital image processing technique used for feature extraction.
  • Partial Differential Equations: PDE-based methods used chiefly for de-noising images.
  • Hidden Markov Models: A technique used for two-dimensional (2D) image analysis.
  • Wavelets: Mathematical functions used in image compression.
  • Self-organizing Maps: A digital image processing technique that classifies images into several classes.
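
As a small illustration of linear filtering, here is a 3×3 mean ("box blur") filter in plain Python; the nested-list image representation and clamped border handling are simplifying assumptions, not part of any standard API:

```python
def box_blur(image):
    """Apply a 3x3 mean filter; indices are clamped at the borders."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            # Sum the 3x3 neighbourhood around (y, x), clamping at edges.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += image[yy][xx]
            out[y][x] = total // 9  # mean of the 9 sampled pixels
    return out
```

Because the output is a weighted sum of input pixels, this is a linear operation: blurring the sum of two images gives the same result as summing the two blurred images.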

Image recognition technology has shown great potential for wide adoption across industries. Its usage has grown significantly with each passing year, as enterprises become more time-efficient and productive by incorporating better manufacturing, inspection, and quality assurance tools and processes. Large corporations and start-ups such as Tesla, Google, Uber, and Adobe Systems make heavy use of image processing techniques in their day-to-day operations. With advances in artificial intelligence (AI), this technology will see significant upgrades in the coming years.


Image Processing Algorithms

So far, we have seen that image processing is a technique for carrying out a set of operations on an image, either to obtain an enhanced image or to extract useful information from it. The input is an image, and the output is either an improved image or characteristics/features associated with it.

It is essential to know that computer algorithms play the most significant role in digital image processing. Developers implement multiple algorithms to solve various tasks, including digital image detection, image analysis, image reconstruction, image restoration, image enhancement, image data compression, spectral image estimation, and image estimation. Sometimes the algorithms are used straight out of the book; at other times they are customized combinations of several algorithmic functions.

Image processing algorithms commonly used for complete image capture can be categorized into:

  • Low-level techniques, such as color enhancement and noise removal;
  • Medium-level techniques, such as compression and binarization;
  • High-level techniques, involving segmentation, detection, and recognition algorithms that extract semantic information from the captured data.
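
To make the medium-level category concrete, here is a minimal binarization (thresholding) sketch in plain Python; the nested-list image representation and the default threshold of 128 are illustrative choices:

```python
def binarize(image, threshold=128):
    # Map each 0-255 pixel to pure black (0) or pure white (255).
    return [[255 if p >= threshold else 0 for p in row] for row in image]
```

Binarization like this often serves as a precursor to higher-level steps such as connected-component labeling or recognition.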


Types of Image Processing Algorithms

Some of the conventional image processing algorithms are as follows:

Contrast enhancement algorithms: Contrast enhancement is further subdivided into -

  • Histogram equalization algorithm: Uses the image histogram to improve contrast
  • Adaptive histogram equalization algorithm: A variant of histogram equalization that adapts to local changes in contrast
  • Connected-component labeling algorithm: Finds and labels disjoint regions of an image
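
A minimal sketch of histogram equalization in plain Python, assuming an 8-bit grayscale image stored as nested lists; it uses the standard CDF-based remapping:

```python
def equalize(image, levels=256):
    """Spread pixel intensities over the full range via the histogram CDF."""
    pixels = [p for row in image for p in row]
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the histogram.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    n = len(pixels)

    def remap(p):
        if n == cdf_min:  # flat image: nothing to spread
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in image]
```

A low-contrast input with intensities clustered around one value comes out stretched across the full 0-255 range, which is exactly the contrast improvement the technique targets.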

Dithering and half-toning algorithms: Dithering and half-toning include the following -

  • Error diffusion algorithm
  • Floyd–Steinberg dithering algorithm
  • Ordered dithering algorithm
  • Riemersma dithering algorithm
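
As an example, Floyd–Steinberg dithering reduces an 8-bit grayscale image to 1 bit while diffusing each pixel's quantization error to its unprocessed neighbours with the standard 7/16, 3/16, 5/16, and 1/16 weights. A simplified plain-Python sketch (the nested-list image representation is an illustrative assumption):

```python
def floyd_steinberg(image, threshold=128):
    """1-bit Floyd-Steinberg dithering; input is rows of 0-255 values."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]  # work on a copy that can hold floats
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= threshold else 0  # quantize to black/white
            img[y][x] = new
            err = old - new
            # Distribute the quantization error to unprocessed neighbours.
            for dx, dy, weight in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                   (0, 1, 5 / 16), (1, 1, 1 / 16)):
                xx, yy = x + dx, y + dy
                if 0 <= xx < w and 0 <= yy < h:
                    img[yy][xx] += err * weight
    return img
```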

Elser difference-map algorithm: A search algorithm for general constraint satisfaction problems. It was initially used for X-ray diffraction microscopy.

Feature detection algorithm: Feature detection consists of -

  • Marr–Hildreth algorithm: It is an early edge detection algorithm
  • Canny edge detector algorithm: Canny edge detector is used for detecting a wide range of edges in images.
  • Generalized Hough transform algorithm
  • Hough transform algorithm
  • SIFT (Scale-invariant feature transform) algorithm: SIFT is an algorithm to identify and define local features in images.
  • SURF (Speeded Up Robust Features) algorithm: SURF is a robust local feature detector.
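
The gradient stage that underlies edge detectors such as Marr–Hildreth and Canny can be illustrated with Sobel kernels. This simplified plain-Python sketch computes gradient magnitude for interior pixels only and omits Canny's smoothing, non-maximum suppression, and hysteresis steps:

```python
def sobel_magnitude(image):
    """Gradient magnitude via 3x3 Sobel kernels (interior pixels only)."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Flat regions produce zero magnitude, while sharp intensity steps produce large values, which is the signal an edge detector thresholds.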

Richardson–Lucy deconvolution algorithm: This is an image deblurring algorithm.

Blind deconvolution algorithm: Much like Richardson–Lucy deconvolution, this is an image de-blurring algorithm, used when the point spread function is unknown.

Seam carving algorithm: Seam carving is a content-aware image resizing algorithm.

Segmentation algorithms: These algorithms partition a digital image into two or more regions. Examples include -

  • GrowCut algorithm: an interactive segmentation algorithm
  • Random walker algorithm
  • Region growing algorithm
  • Watershed transformation algorithm: A class of algorithms based on the watershed analogy
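
Region growing, one of the segmentation algorithms above, can be sketched as a breadth-first flood fill from a seed pixel; the 4-connectivity and fixed intensity tolerance used here are simplifying choices for illustration:

```python
from collections import deque

def region_grow(image, seed, tolerance=10):
    """Grow a region from `seed` (y, x): absorb 4-connected neighbours
    whose intensity is within `tolerance` of the seed pixel."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yy, xx = y + dy, x + dx
            if (0 <= yy < h and 0 <= xx < w and (yy, xx) not in region
                    and abs(image[yy][xx] - base) <= tolerance):
                region.add((yy, xx))
                queue.append((yy, xx))
    return region
```

The returned set of coordinates is one segmented region; running the procedure from several seeds partitions the image.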

Note that, apart from the algorithms mentioned above, industries also create customized algorithms to address their needs. These can be written from scratch or built as a combination of several algorithmic functions. It is safe to say that with the evolution of computer technology, image processing algorithms have given researchers and developers ample opportunities to investigate, classify, characterize, and analyze many classes of images.


The Reverse Engineering process

Sometimes, situations arise where you don't have access to a part's original design documentation from its original production. This might be because the original manufacturer no longer exists or because production of the part has stopped.

Reverse engineering enables us to analyze a physical part and explore how it was originally built, in order to replicate, create variations of, or improve on the design. The ultimate goal is to create a new CAD model for use in manufacturing.

Let us take a look at the steps involved in reverse engineering. Commonly, it involves the careful execution of the following steps:

  • Scanning

The first step uses a 3D scanner to collect the geometric measurements and dimensions of the existing part quickly and accurately, using projected light patterns and a camera system. The types of scanners generally used are blue-light scanners, white-light scanners, CT scanners, and/or laser scanners. The first two capture the outward dimensions and measurements, while a CT scanner can also capture the part's internal geometry.

  • Point Cloud

Once a part is scanned, the data is transformed into a point cloud: a 3D visualization consisting of thousands or even millions of points that together define the shape of the physical object.
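
In code, a point cloud is often just a list of (x, y, z) coordinates. As a toy illustration (not tied to any particular scanner format), the following computes two basic properties that are useful when working with scans: the axis-aligned bounding box and the centroid.

```python
def bounding_box(points):
    # Axis-aligned bounding box of a cloud of (x, y, z) tuples.
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def centroid(points):
    # Mean position of all points in the cloud.
    xs, ys, zs = zip(*points)
    n = len(points)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)
```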

  • Meshing/Triangulation

This stage involves converting the point cloud to a mesh (STL, or stereolithography, format). Mesh generation is the practice of converting a given set of points into a consistent polygonal model of vertices, edges, and faces that meet only at shared edges. Common software tools used to mesh point clouds are PolyWorks, Geomagic, Imageware, and MeshLab. The meshed part is then aligned in these tools.

  • Parametric/Non-parametric Modeling

After the meshed part is aligned, it goes through one of two workflows. The first option applies surface modeling to the meshed part in tools such as PolyWorks, which generates a non-parametric model (IGES or STEP format). The alternative is to create a sketch of the meshed part instead of surfacing it; this workflow is known as parametric modeling (.PRT format). For a non-parametric model, predicting future data depends not just on the parameters but also on the current state of the observed data; for a parametric model, knowing the parameters alone is enough to predict new data.

  • CAD Modeling

The next stage consists of transferring the data into CAD software tools such as NX, CATIA, SolidWorks, or Creo and applying functions such as ‘stitch’, ‘sew’, ‘knit’, ‘trim’, ‘extrude’, and ‘revolve’ to create the 3D CAD model.

  • Inspection

This stage includes visual computer-model inspection and alignment of the merged models against the actual scanned parts (STL) to catch any discrepancies in geometry or dimensions. Inspection is generally carried out using tools such as PolyWorks or Geomagic. Reverse engineering inspection provides sufficient information to check tolerances, dimensions, and other information relevant to the project.

  • Documentation

Documentation of the 3D model depends solely on one's technical/business requirements. This step converts the 3D model into a 2D drawing, usually with tools such as Inventor or IsoDraw/CorelDRAW, citing measurements that can be used for future reference.
