This guide gives a step-by-step introduction to industrial hyperspectral imaging.
This document guides the reader through the complete hyperspectral imaging workflow, beginning with the setup of the components and ending with the inspection of a color stream to discriminate plastics.
The differentiation of three plastic plates by means of their molecular fingerprints is shown. In this example, hyperspectral line-scan technology (VNIR range) is used to differentiate white plastic plates.
Step 1 - Set up a hyperspectral system
Before starting, check if you have all components available:
- CCI-compliant hyperspectral camera
- Illumination suited for hyperspectral measurements
- Calibration target
- Perception Studio program
- When real-time processing is needed, a CCI-compliant streaming adapter (e.g. an industrial PC running the Perception Core program)
- A black marker, or any thin straight object (needed for setting up the sharpness)
- A well-absorbing background material is often helpful (e.g. a black foam)
- Reference material for first measurements (e.g. plastic parts)
Set up hardware and software
This section describes the setup of the hardware and software components for industrial hyperspectral imaging.
Deciding on the system's purpose
With Perception Studio and Perception Core, two general use cases are possible:
- Perception Studio allows you to record data from supported hyperspectral cameras, to store and manipulate this data, and finally to create models that can be applied to it. These models convert hyperspectral data into image data, making application-relevant information visible. For instance, such a model might produce an image in which good material is colored green and contaminants are colored red.
- Perception Core runs the models generated by Perception Studio at high performance using GPU acceleration. A deployed system may consist of powerful hardware (with a GPU) on which the Perception Core software is installed, plus one or more models that define the system's behavior. All running Perception Core instances in a computer network can be controlled and set up by a single Perception Studio application in the same network.
Set up the hardware components
Mount the camera and lighting so that the camera's field of view is illuminated. It is good practice to use a black background material (e.g. a black foam).
Hint: Make sure the illumination does not cast light onto the line of inspection from one side only. Shadows degrade the measurement quality. It is best to use diffuse illumination or to illuminate from two sides.
Be aware of the influence of an alternating-current power supply on the measurement results: the alternation can cause a ripple in the measured intensity over time. Use illumination powered by direct current to avoid this disturbance in the data.
Keep the minimum measurement distance of your camera in mind - ask your camera provider for this value.
Hint: A distance smaller than this will result in blurred camera images.
If streaming of molecular information is the goal:
Connect the camera to a CCI-compliant streaming adapter (with Perception Core running on it). Then connect your PC to the streaming adapter.
If streaming is not needed:
Connect the camera to your PC.
Set up the camera
Depending on the type of camera, several steps might be needed before a connection to the camera is possible and all functions of the camera can be used.
For GenICam compliant cameras such as Specim FX10e, FX17e and AlliedVision G008, see this manual: How to configure automatic detection for GenICam cameras
For CameraLink cameras such as Specim FX10, FX17 and AlliedVision CL032, install the grabber drivers and software and verify that you have full access to the camera before proceeding
Set up other devices
If you also have other devices, such as Specim's linear table, you need to install the LUMO software by Specim (Installation of LUMO). However, LUMO is not needed if you only want to work with the camera.
Set up the software components
Make sure the camera is connected properly (see previous section).
→ Install the Perception Studio program on your PC.
→ If you also want to set up a Perception Core program, you can install it on the same PC as Perception Studio or on another PC in the same network (e.g. on the industrial PC). However, that PC needs a supported GPU.
→ Start the Perception Studio program and make sure the hardware was detected properly.
Hint: If you are connected to the streaming adapter, the Perception Core icon will be shown in the device section of the user interface. In case of a direct connection to the camera, the camera vendor's icon will be shown instead.
Set up the acquisition
Set up the optics and acquisition parameters.
Set up the camera's acquisition parameters
→ Open Perception Studio's Setup perspective and start the live visualization of camera data.
→ Select the data dimension "spatial" and inspect the camera live data in the view.
→ Adjust the camera's acquisition parameters to achieve an appropriate saturation of the sensor.
Inspect the sensor's saturation for the "white" situation (light is reflected from the calibration target) vs. the "dark" situation (e.g. the cap is on the lens and blocks the optical path).
Make sure the signal dynamic range is reasonably high. Avoid under- or over-saturated sensor regions.
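The saturation check above can be made measurable with a small helper that compares the brightest pixel of a frame against the sensor's full-scale value. This is an illustrative sketch, not part of Perception Studio; the 12-bit full scale (4095) and the 5%/95% thresholds are assumptions you should adapt to your camera:

```python
import numpy as np

def check_saturation(frame, max_value=4095, low_frac=0.05, high_frac=0.95):
    """Classify a raw sensor frame as under-, over-, or well-saturated.

    frame     -- raw intensity values from the camera
    max_value -- full-scale sensor value (4095 assumes a 12-bit sensor)
    """
    peak = np.asarray(frame).max() / max_value
    if peak >= 1.0:
        return "over-saturated"    # clipped pixels lose spectral information
    if peak < low_frac:
        return "under-saturated"   # poor dynamic range and signal-to-noise
    if peak > high_frac:
        return "well-saturated"    # close to full scale without clipping
    return "usable, but dynamic range could be higher"

# A "white" frame whose brightest pixel reaches ~96% of full scale:
white_frame = np.full((1024, 224), 3600)
white_frame[0, 0] = 3950
print(check_saturation(white_frame))  # well-saturated
```

In practice you would run such a check on the live "white" frame (calibration target in view) and on the "dark" frame (lens capped) while tuning the acquisition parameters.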
Set up the camera's image sharpness
→ Open Perception Studio's Setup perspective and start the live visualization of camera data.
→ Select the data dimension "spatial" and inspect the camera live data in the view.
→ Put a black marker (or any thin straight object) into the camera's field of view (e.g. onto the calibration target) and inspect the obtained live data.
→ Adjust the optical setup (e.g. the lens) to achieve the maximum spatial sharpness.
Try to get an image of the marker with the best possible contrast - the marker then appears sharpest in the camera's image.
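One way to make "best contrast" objective is to score the edge steepness along the spatial line that crosses the marker and adjust the lens until the score peaks. A minimal sketch (the intensity profiles are made-up example data):

```python
import numpy as np

def sharpness_score(spatial_line):
    """Maximum absolute difference between neighboring pixels.
    Higher values mean a steeper edge at the marker, i.e. a sharper image."""
    line = np.asarray(spatial_line, dtype=float)
    return float(np.abs(np.diff(line)).max())

# Made-up intensity profiles across the marker:
blurred = [100, 90, 70, 50, 30, 20, 10]    # out of focus: the edge is smeared
sharp   = [100, 100, 100, 10, 10, 10, 10]  # in focus: the edge is abrupt
print(sharpness_score(blurred), sharpness_score(sharp))  # 20.0 90.0
```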
Set up the measurement
Light interacts with objects in several ways: one part is reflected from the object, another is absorbed by it, and in the case of a thin object, yet another part is transmitted through it.
Very often in industrial applications, molecular properties of objects are measured by studying the light reflected at multiple wavelengths relative to a known target.
Set up the system to measure relative to a target.
Set up the balancing
→ Open Perception Studio's Setup perspective and start the live visualization of camera data.
→ Put the calibration target into the camera's field of view and inspect the obtained camera data.
→ Perform white balancing
→ Perform dark balancing
When inspecting the "white" image, check that the camera's sensor is saturated properly. Under- or over-saturated pixel regions will cause measurement errors.
Once the camera sensor is properly saturated, click the Record White Image button.
Block the optical path of the camera (e.g. put the cap onto the fore lens) and click the Record Dark Image button.
When done, check the "balanced" option at the top of the graph and inspect the balanced live data of the camera.
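Perception Studio performs this balancing internally; the underlying computation is the standard white/dark reference correction, sketched here for illustration only:

```python
import numpy as np

def balance(raw, white, dark):
    """Standard white/dark reference correction:
        reflectance = (raw - dark) / (white - dark)
    """
    raw, white, dark = (np.asarray(a, dtype=float) for a in (raw, white, dark))
    denom = white - dark
    denom[denom == 0] = np.nan  # guard against dead sensor pixels
    return (raw - dark) / denom

# Two pixels with different white levels; both reflect 50% relative to the target:
dark  = [100.0, 100.0]
white = [3900.0, 3700.0]
raw   = [2000.0, 1900.0]
print(balance(raw, white, dark))  # [0.5 0.5]
```

Dividing by the white reference is what makes the measurement independent of the illumination profile across the line of inspection; subtracting the dark reference removes the sensor's offset and dark current.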
Step 2 - Acquire hyperspectral data
This chapter guides you through the acquisition process of hyperspectral data.
Make sure your hyperspectral system was set up properly (see the previous chapter).
→ Open Perception Studio's Acquire perspective
→ Set the duration of the planned acquisition process (number of frames to be captured over time)
→ Click the Record button and wait until the system is ready for acquisition
→ Confirm the start of the acquisition and move the measurement objects through the line of inspection
→ Crop the record to ensure reasonable data sizes
→ Provide documentation for this measurement
→ Save the record
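The number of frames to set for the acquisition can be estimated from the scan length, the belt speed, and the camera's frame rate. A back-of-the-envelope sketch (all values are illustrative, not defaults of any camera):

```python
def frames_for_scan(scan_length_mm, belt_speed_mm_s, frame_rate_hz):
    """Number of line-scan frames needed to cover the full scan length:
    duration = scan_length / belt_speed, frames = duration * frame_rate."""
    duration_s = scan_length_mm / belt_speed_mm_s
    return round(duration_s * frame_rate_hz)

# 500 mm of material at 100 mm/s takes 5 s; at 220 fps that is 1100 frames.
print(frames_for_scan(500, 100, 220))  # 1100
```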
Step 3 - Explore hyperspectral data
Explore the acquired data to get an idea about its quality and measurement errors.
→ Open Perception Studio's Explore perspective
→ Select the data on which the exploratory analysis should be performed.
→ Inspect the scene
→ Select objects of interest and inspect their spectral information
→ Learn about the influence of different preprocessing configurations
Step 4 - Model the application-relevant information
→ Open Perception Studio's Model perspective
→ Select the data for the modelling process
→ Choose a modelling method available in the ribbon
→ Develop a model and save it for later usage
Depending on the application, different approaches to modelling the information are available:
Model a colored perception (CCI)
Applying a CCI method results in a chemical color image which expresses chemical information of the observed objects through color information.
Model a classification
Applying a classification method results in a classification image which expresses a classification ID through color information.
Model a statistical feature
Applying one of these methods results in a gray value image which expresses a statistical property by its pixel value.
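How a classification ID maps to a display color can be sketched with a simple lookup table. The IDs and colors below are hypothetical examples for illustration, not Perception Studio's actual palette:

```python
import numpy as np

# Hypothetical palette: class ID -> RGB display color
PALETTE = {
    0: (0, 0, 0),      # background      -> black
    1: (0, 255, 0),    # plastic type A  -> green
    2: (255, 0, 0),    # plastic type B  -> red
    3: (0, 0, 255),    # plastic type C  -> blue
}

def classification_to_rgb(class_image):
    """Expand a 2-D image of class IDs into an RGB image for display."""
    lut = np.zeros((max(PALETTE) + 1, 3), dtype=np.uint8)
    for class_id, rgb in PALETTE.items():
        lut[class_id] = rgb
    return lut[np.asarray(class_image)]

ids = np.array([[0, 1], [2, 3]])
rgb = classification_to_rgb(ids)
print(rgb.shape)  # (2, 2, 3)
```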
Step 5 - Set up live streaming
In this step, a CCI-compliant hardware adapter is set up to perform the modelled feature extraction in real time.
→ Open Perception Studio's Setup perspective and select the Perception Core to be the target device.
→ Make sure the hyperspectral system is set up properly.
→ Make sure the spectral range of your model fits the configured spectral region of interest.
→ Make sure the measurement system is calibrated, i.e. set up the balancing.
→ Scroll down the parameter list to the system's configuration panel and click the Add button.
→ Give the new configuration a meaningful name, select the models to be applied and configure streaming options.
→ Activate the new configuration by clicking on the Activate button.
Step 6 - View live data:
In this step the output of a CCI-compliant streaming adapter is looked at.
→ Open Perception Studio's View perspective and select the Perception Core to be the target device.
→ Activate the configuration of interest.
→ Click the Play button and inspect the live stream.
Step 7 - Connect your machine vision application
This step summarizes how your machine vision application communicates with the streaming adapter (Perception Core).
→ Set up the Perception Core for live streaming (see Step 5).
- Streaming via UDP:
Make sure to opt for the stream type "UDP" when adding a new configuration to the Perception Core.
Specify the IP address and port your application is listening on.
- Streaming via GigEVision:
Make sure to opt for the stream type "GEV" when adding a new configuration to the Perception Core.
Specify the NIC (streaming adapter) which should receive the streamed data.
→ Prepare your application to receive live data
- Streaming via UDP:
Read the UDP Streaming Protocol Definition
- Streaming via GigEVision:
Make sure your application can receive data from a GigEVision device.
→ Prepare your application to control the Perception Core programmatically
- Read the UDP Control Protocol Definition
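On the receiving side, a minimal UDP listener might look like the sketch below. The port is an example only and must match the streaming configuration set in Perception Studio; the datagram layout is defined by the UDP Streaming Protocol Definition and is not parsed here:

```python
import socket

LISTEN_ADDR = ("0.0.0.0", 5000)  # example port; must match the Core's config

def receive_datagrams(max_datagrams=10, timeout_s=2.0):
    """Collect up to max_datagrams raw UDP payloads from the stream.
    Decoding them is the job of the UDP Streaming Protocol Definition."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    sock.settimeout(timeout_s)
    payloads = []
    try:
        for _ in range(max_datagrams):
            data, _sender = sock.recvfrom(65535)
            payloads.append(data)
    except socket.timeout:
        pass  # stream idle: return what arrived so far
    finally:
        sock.close()
    return payloads
```

For GigEVision streaming, use a GigEVision-capable receiver library in your application instead of a raw socket.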
© 2019 by Perception Park GmbH
The content of this page and any attached files are confidential and intended solely for the addressee(s). Any publication, transmission or other use of the information by a person or entity other than the intended addressee is prohibited. If you receive this in error please contact Perception Park and delete copied material. Perception Park GmbH, Wartingergasse 42, A-8010 Graz; Austria; FN 400381x