
This guide gives a step-by-step introduction to industrial hyperspectral imaging.


This document guides the reader through the complete hyperspectral imaging workflow, beginning with the setup of the components and ending with the inspection of a color stream that discriminates plastics.

The differentiation of three plastic plates by means of their molecular fingerprints is shown. In this example, hyperspectral line-scan technology (VNIR range) is used to differentiate white plastic plates.

Step 1 - Set up a hyperspectral system

Before starting, check if you have all components available:

  • CCI-compliant hyperspectral camera
  • Illumination suited for hyperspectral measurements
  • Calibration target
  • Perception Studio program
  • When real-time processing is needed, a CCI-compliant streaming adapter (e.g. an industrial PC running the Perception Core program)
  • A black marker, or any other thin, straight object (needed for setting up the sharpness)
  • A well-absorbing background material is often helpful (e.g. a black foam)
  • Reference material for first measurements (e.g. plastic parts)

Set up hardware and software

In this section the setup of hardware and software components for industrial hyperspectral imaging is described.


Deciding on the system's purpose

With Perception Studio and Perception Core, two general use cases are possible:

  1. Perception Studio allows you to record data from supported hyperspectral cameras, to store and manipulate this data, and finally to create models which can be applied to it. These models convert hyperspectral data into image data that makes application-relevant information visible. For instance, such a model might produce an image in which good materials are colored green and contaminants are colored red. 
  2. Perception Core runs the models generated by Perception Studio at high performance using GPU acceleration. A deployed system may consist of powerful hardware (with a GPU) on which the Perception Core software is installed, plus one or more models that define the system's behavior. All running Perception Core instances in a computer network can be controlled and set up by a single Perception Studio application in the same network. 


Set up the hardware components

Mount the camera and the lighting in such a way that the camera's field of view is illuminated. It is good practice to use a black background material (e.g. a black foam).

Hint: Make sure the illumination does not cast light onto the line of inspection from one side only. Shadows will harm the measurement quality. It is best to use a diffuse illumination or to illuminate from two sides.
Be aware of the influence of an alternating-current power supply on the measurement results. The alternation can cause a ripple in the measured intensity over time. Use an illumination powered by direct current to avoid this disturbance in the data.

Keep the minimum measurement distance of your camera in mind - ask your camera provider for this value.
Hint: A smaller distance will result in blurred camera images.

If streaming of molecular information is the goal:
Connect the camera to a CCI-compliant streaming adapter (with Perception Core running on it), then connect your PC to the streaming adapter.

If streaming is not needed:
Connect the camera to your PC.


Measurement setup
Specim FX10 hyperspectral camera, 2-sided halogen illumination and a white tile in the camera's field of view.

Set up the camera

Depending on the type of camera, several steps might be needed before a connection to the camera is possible and all functions of the camera can be used.

For GenICam-compliant cameras such as the Specim FX10e, FX17e and AlliedVision G008, see this manual: How to configure automatic detection for GenICam cameras

For CameraLink cameras such as the Specim FX10, FX17 and AlliedVision CL032, install the grabber drivers and software and verify that you have full access to the camera before proceeding.


Set up other devices

If you also use other devices, such as Specim's linear table, you also need to install Specim's LUMO software (Installation of LUMO). However, LUMO is not needed if you want to work with the camera only.


Set up the software components

Make sure the camera is connected properly (see previous section).

→ Install the Perception Studio program on your PC. 

→ If you also want to set up a Perception Core program, you can install it on the same PC as Perception Studio or on another PC in the same network (e.g. on the industrial PC). Note that this PC needs a supported GPU.

→ Start the Perception Studio program and make sure the hardware was detected properly.

Hint: If you are connected to the streaming adapter, the Perception Core icon is shown in the device section of the user interface. In case of a direct connection to the camera, the camera vendor's icon is shown.


Connection to a device
Connected to FX10 33417 - the camera FX10 with serial number 33417 is connected.
Not connected - no device found.

Set up the acquisition

Set up the optics and acquisition parameters.

Set up the camera's acquisition parameters

→ Open Perception Studio's Setup perspective and start the live visualization of camera data.

→ Select the data dimension "spatial" and inspect the camera live data in the view.

→ Adjust the camera's acquisition parameters to find an appropriate saturation of the sensor.

Inspect the sensor's saturation for the "white" situation (light is reflected from the calibration target) vs. the "dark" situation (e.g. the cap is on the lens and blocks the optical path).
Make sure the signal dynamic is reasonably high. Avoid under- or over-saturated sensor regions.
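
The saturation check can also be done numerically. A minimal numpy sketch (the frame layout and the 12-bit full-scale value are assumptions to adapt to your camera; this is not a Perception Studio function):

```python
import numpy as np

def saturation_report(frame: np.ndarray, max_value: int = 4095) -> dict:
    """Report under-/over-saturation and dynamic of a raw sensor frame.

    frame     -- 2D raw frame (spatial x spectral), integer counts
    max_value -- full-scale sensor value (4095 assumes a 12-bit ADC)
    """
    over = np.mean(frame >= max_value)          # clipped at full scale
    under = np.mean(frame <= 0.02 * max_value)  # almost no signal (< 2 % of range)
    return {
        "over_saturated": float(over),
        "under_saturated": float(under),
        # usable dynamic: ratio of the observed range to full scale
        "dynamic": float((frame.max() - frame.min()) / max_value),
    }

# Example: a synthetic "white" frame with good dynamic and no clipping
white = np.random.randint(1500, 3800, size=(512, 224))
print(saturation_report(white))
```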


White and dark situation
White situation - the camera's shutter is open.
Dark situation - the camera's shutter is closed.

Set up the camera's image sharpness

→ Open Perception Studio's Setup perspective and start the live visualization of camera data.

→ Select the data dimension "spatial" and inspect the camera live data in the view.

→ Put a black marker (or any thin straight object) into the camera's field of view (e.g. onto the calibration target) and inspect the obtained live data.

→ Adjust the optical setup (e.g. the lens) to achieve the maximum spatial sharpness.

Try to get an image of the marker with the best possible contrast - the marker then appears sharpest in the camera's image.
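
If desired, the focusing can be judged by a simple numeric score: the sharper the marker edge, the steeper the intensity step along the spatial axis. An illustrative sketch (not a Perception Studio function):

```python
import numpy as np

def edge_contrast(line_profile: np.ndarray) -> float:
    """Score the spatial sharpness of a single line-scan profile.

    A thin dark marker on a white target produces a step in the profile;
    the steepest local gradient grows as the focus improves.
    """
    profile = line_profile.astype(float)
    profile /= profile.max()                       # normalize to [0, 1]
    return float(np.max(np.abs(np.diff(profile))))

# Adjust the fore-lens and re-evaluate until the score is maximal.
```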


Sharpness ok vs. not ok
A black marker was put on the calibration target (line of inspection).
Sharpness not ok - the fore-lens needs to be adjusted.
Sharpness ok - the fore-lens was set up properly.

Set up the measurement

Light interacts with objects in several ways: one part is reflected from the object, another part is absorbed by it, and, in the case of a thin object, yet another part is transmitted through it.
In industrial applications, molecular properties of objects are very often measured by studying the reflected light relative to a known target at multiple wavelengths.

Set up the system to measure relative to a target.
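
Conceptually, the white and dark references recorded in the following steps are used to convert raw sensor counts into relative reflectance. A minimal numpy sketch of this standard correction (Perception Studio performs this internally; the function shown is illustrative):

```python
import numpy as np

def balance(raw: np.ndarray, white: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Standard white/dark balancing of hyperspectral line-scan data.

    raw, white, dark -- frames of identical shape (spatial x spectral)
    Returns reflectance relative to the calibration target, roughly in [0, 1].
    """
    # Guard against division by zero in dead or under-illuminated pixels.
    denom = np.clip(white.astype(float) - dark, 1e-6, None)
    return (raw.astype(float) - dark) / denom
```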

Set up the balancing

→ Open Perception Studio's Setup perspective and start the live visualization of camera data.

→ Put the calibration target into the camera's field of view and inspect the obtained camera data.

→ Perform white balancing

→ Perform dark balancing


When inspecting the "white" image, make sure the camera's sensor is saturated properly. Under- or over-saturated pixel regions will cause measurement errors.

Make sure the camera sensor is properly saturated, then click the Record White Image button.

Block the optical path of the camera (e.g. put the cap onto the fore-lens) and click the Record Dark Image button.

When done, check the "balanced" option at the top of the graph and inspect the balanced live data of the camera.


Not balanced vs. balanced camera data
The calibration target was put in the line of inspection. The shutter of the FX10 camera was used to darken the optical path.
Not balanced - raw (not balanced) live data.
Balanced - balanced live data.

Influence of temperature
Balanced data is compared: captured directly after balancing vs. captured after the illumination had cooled down.
Note: The illumination temperature has a great influence on the measurement quality.
Directly after balancing - balanced camera live data captured directly after balancing.
Cooled-down illumination (5 min.) - balanced camera live data captured after a 5-minute cooling-down period of the illumination.

Step 2 - Acquire hyperspectral data

This chapter guides you through the acquisition process of hyperspectral data.

Make sure your hyperspectral system was set up properly (see the previous chapter).

→ Open Perception Studio's Acquire perspective

→ Set the duration of the planned acquisition process (number of frames to be captured over time)

→ Click the Record button and wait until the system is ready for acquisition

→ Confirm the start of the acquisition and move the measurement objects through the line of inspection

→ Crop the record to ensure reasonable data sizes

→ Provide documentation for this measurement

→ Save the record


The acquisition process
Acquisition of three plastic plates: an acquired hyperspectral data scene of a hand holding three plastic plates.
The boundaries of the scene were restricted to cover only the objects of interest.
On the left, some descriptions were attached to the data set.

Three plastics (VNIR).hsd

Step 3 - Explore hyperspectral data

Explore the acquired data to get an idea about its quality and measurement errors.

→ Open Perception Studio's Explore perspective

→ Select the data on which the exploratory analysis should be performed.

→ Inspect the scene

→ Select objects of interest and inspect their spectral information

→ Learn about the influence of different preprocessing configurations


The selection process
Selection of objects: the plastic plates as well as the hand and the background were selected by means of the Select Spectra tool.

The influence of preprocessing
Without preprocessing - reflectance spectra of the scanned objects.
1st derivative of data - the spectral slope per wavelength of the scanned objects is shown.
Note: Bias information in the spectra is gone; instead, changes in the spectra (slope information) are visualized.
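
The effect of the 1st-derivative preprocessing can be reproduced outside Perception Studio; a smoothed derivative such as Savitzky-Golay is a common choice. A sketch (window length and polynomial order are illustrative choices, not the product's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

def first_derivative(spectra: np.ndarray) -> np.ndarray:
    """Savitzky-Golay 1st derivative along the wavelength axis.

    spectra -- array of shape (n_pixels, n_bands), reflectance values
    Removes bias (offset) information and emphasizes spectral slope.
    """
    return savgol_filter(spectra, window_length=7, polyorder=2,
                         deriv=1, axis=-1)
```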

Step 4 - Model the application-relevant information

→ Open Perception Studio's Model perspective

→ Select the data for the modelling process

→ Choose a modelling method available in the ribbon

→ Develop a model and save it for later usage


Depending on the application, different approaches to modelling the information are available:

Model a colored perception (CCI)

Applying a CCI method results in a chemical color image which expresses chemical information of the observed objects through color information.


 CCI-Preview method

The CCI-Preview method is an unsupervised approach to information extraction from hyperspectral imaging data.
The 1st-derivative preprocessing was applied to the data beforehand. The spectra of the plastic plates differ from each other (different colors).

 CCI-Extract method

The CCI-Extract method is an unsupervised approach. It is based on unscrambling principal spectral components from the selected spectra.
The 1st-derivative preprocessing was applied to the data beforehand. The components PC1, PC2 and PC3 distinguish the spectra of the plastics well.
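
To illustrate the principle (not Perception Park's actual implementation), a principal component analysis can map the dominant spectral components of a scene to color channels:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_color_image(cube: np.ndarray) -> np.ndarray:
    """Map the first three principal components of a hyperspectral cube to RGB.

    cube -- array of shape (height, width, n_bands), preprocessed spectra
    """
    h, w, n_bands = cube.shape
    scores = PCA(n_components=3).fit_transform(cube.reshape(-1, n_bands))
    # Rescale each component to [0, 1] so it can serve as a color channel.
    scores -= scores.min(axis=0)
    scores /= scores.max(axis=0)
    return scores.reshape(h, w, 3)
```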

 CCI-Correlate method

The CCI-Correlate method correlates the selected spectra with the pixels in the scene and shows the correlation result per spectra set.
The 1st-derivative preprocessing was applied to the data beforehand. The correlation with the selected spectra discriminates the plastics well.
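
The principle can be illustrated by computing, per pixel, the Pearson correlation with the mean spectrum of each selected object. A sketch with illustrative names (not the product's algorithm):

```python
import numpy as np

def correlate_references(cube: np.ndarray, refs: np.ndarray) -> np.ndarray:
    """Pearson correlation of every pixel spectrum with each reference spectrum.

    cube -- (height, width, n_bands) preprocessed data
    refs -- (n_refs, n_bands) mean spectra of the selected objects
    Returns (height, width, n_refs) correlation maps in [-1, 1].
    """
    h, w, n_bands = cube.shape
    pixels = cube.reshape(-1, n_bands)
    # Center both pixels and references along the wavelength axis.
    px = pixels - pixels.mean(axis=1, keepdims=True)
    rf = refs - refs.mean(axis=1, keepdims=True)
    num = px @ rf.T
    denom = np.linalg.norm(px, axis=1)[:, None] * np.linalg.norm(rf, axis=1)[None, :]
    return (num / np.clip(denom, 1e-12, None)).reshape(h, w, -1)
```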

 CCI-Constrain method

The CCI-Constrain method lets users constrain colors and spectra according to their expectations about the chemical scene.
The 1st-derivative preprocessing was applied to the data beforehand. The result matches the user's constraints.

Model a classification

Applying a classification method results in a classification image which expresses a classification ID through color information.


 Case Discrimination method

The Case Discrimination method allows a discrimination based on selected spectra sets. The 1st-derivative preprocessing was applied to the data beforehand.
Contrary to the CCI methods, the Case Discrimination method results in a class ID per taught material (i.e. per spectra set).
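
The behaviour can be imitated with a simple nearest-mean classifier over the taught spectra sets; a sketch of the principle (not the actual method):

```python
import numpy as np

def classify_nearest_mean(cube: np.ndarray, class_means: np.ndarray) -> np.ndarray:
    """Assign each pixel the ID of the closest taught mean spectrum.

    cube        -- (height, width, n_bands) preprocessed data
    class_means -- (n_classes, n_bands) mean spectrum per taught material
    Returns an integer class-ID image of shape (height, width).
    """
    h, w, n_bands = cube.shape
    pixels = cube.reshape(-1, 1, n_bands)
    dist = np.linalg.norm(pixels - class_means[None, :, :], axis=2)
    return dist.argmin(axis=1).reshape(h, w)
```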

Model a statistical feature

Applying one of these methods results in a gray value image which expresses a statistical property by its pixel value.


 Statistical feature method - dynamic

The statistical feature methods allow an extraction of information based on statistical methodology. The 1st-derivative preprocessing was applied to the data beforehand.
The spectral dynamic is extracted (difference of the maximum and minimum values in the spectrum).
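
The "dynamic" feature shown here corresponds to the following per-pixel computation:

```python
import numpy as np

def spectral_dynamic(cube: np.ndarray) -> np.ndarray:
    """Gray-value image of the spectral dynamic (max minus min per pixel)."""
    return cube.max(axis=-1) - cube.min(axis=-1)
```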

Step 5 - Set up live streaming

In this step a CCI-compliant hardware adapter is set up to perform a modelled feature extraction in real-time.

→ Open Perception Studio's Setup perspective and select the Perception Core to be the target device.

→ Make sure the hyperspectral system is set up properly.

→ Make sure the spectral range in your model fits the set spectral region of interest.

→ Make sure the measurement system is calibrated, i.e. set up the balancing.

→ Scroll down the parameter list to the systems configuration panel and click the Add button.

→ Give the new configuration a meaningful name, select the models to be applied and configure streaming options.

→ Activate the new configuration by clicking on the Activate button.


 Add a configuration

The name was set to "Three plastics" and "Correlate_1stDerivative" was selected as the model to be applied in real time.
The stream is configured to be GigEVision compliant. The image is configured to be upsampled to a height of 1000 pixels prior to transmission to the client application.

 Systems configuration panel

The configuration "Three plastics" is available and activated.

Correlate_1stDerivative.psb

Step 6 - View live data

In this step the output of a CCI-compliant streaming adapter is inspected.

→ Open Perception Studio's View perspective and select the Perception Core to be the target device.

→ Activate the configuration of interest.

→ Click the Play button and inspect the live stream.


 Live-view of a feature stream

The configuration "Three plastics" is activated. The model "Correlate_1stDerivative" gets applied in real-time and results in a color stream shown in the view.
Left, the three plastic plates got scanned in a crossed configuration (the plates overlap partly) - this results in a mixture of colors.
Right, the three plastic plates got scanned without overlap.

Step 7 - Connect your machine vision application

This step summarizes the communication of your machine vision application with the streaming adapter (Perception Core).

→ Set up the Perception Core for live streaming (see Step 5).

  • Streaming via UDP:
    Make sure to opt for the stream type "UDP" when adding a new configuration to the Perception Core.
    Specify the IP and port your application is listening to.
  • Streaming via GigEVision:
    Make sure to opt for the stream type "GEV" when adding a new configuration to the Perception Core.
    Specify the NIC (streaming adapter) which should receive the streamed data.

→ Prepare your application to receive live data
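
For the UDP case, reception can be verified with a minimal listener before the full machine vision application is wired up. A Python sketch (the port is an assumption taken from your own configuration; the datagram layout depends on the configured stream):

```python
import socket

# Minimal UDP receiver for a Perception Core feature stream.
# IP and port are assumptions - use the values set in your configuration.
LISTEN_ADDR = ("0.0.0.0", 5000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)
print(f"listening on {LISTEN_ADDR} ...")

while True:
    datagram, sender = sock.recvfrom(65535)  # one datagram per call
    # The payload layout (header, line data, encoding) is defined by the
    # Perception Core stream configuration - parse it accordingly.
    print(f"received {len(datagram)} bytes from {sender}")
```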

→ Prepare your application to control the Perception Core programmatically
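
For the GigEVision case, any GenICam/GenTL-compliant client can receive the stream (the GenICamBrowser shown below is one example). A hedged Python sketch using the open-source harvesters library (API as of harvesters 1.x; the path to the GenTL producer .cti file is an assumption you must adapt):

```python
from harvesters.core import Harvester

h = Harvester()
# Path to a GenTL producer (.cti) - adapt to your installation (assumption).
h.add_file("/opt/genicam/producer.cti")
h.update()

# Connect to the first discovered device (e.g. the "Perception System").
ia = h.create_image_acquirer(0)
ia.start_acquisition()

with ia.fetch_buffer() as buffer:
    component = buffer.payload.components[0]
    print("received frame:", component.width, "x", component.height)

ia.stop_acquisition()
ia.destroy()
h.reset()
```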


 GenICamBrowser program

The Perception Core is configured with the run job "Three plastics" (stream type = GigEVision).
Stemmer Imaging's GenICamBrowser program is used as client.

On start of the GenICamBrowser, the device "Perception System" was selected. After pressing Play, molecular information is extracted live from the camera raw data and shown in the program.

 

