capture

The bridge between camera implementations and the rest of openhsi land.

Capture Support and Simulated Cameras

Tip

This module can be imported using from openhsi.capture import *

The OpenHSI class defines the interface between custom camera implementations and all the processing and calibration needed to run a pushbroom hyperspectral imager.


source

OpenHSI

 OpenHSI (n_lines:int=16, processing_lvl:int=-1, warn_mem_use:bool=True,
          json_path:str=None, pkl_path:str=None,
          print_settings:bool=False)

Base Class for the OpenHSI Camera.


source

OpenHSI.collect

 OpenHSI.collect ()

Collect the hyperspectral datacube.


source

OpenHSI.avgNimgs

 OpenHSI.avgNimgs (n:int)

Take n images and find the average

Type Details
n int number of images to average
Returns ndarray averaged image
Warning

Running in a notebook slows down the camera more than running in a script.

To add a custom camera, five methods (plus one optional) need to be defined in a class:
1. Initialise the camera __init__,
2. Open the camera start_cam,
3. Close the camera stop_cam,
4. Capture a picture as a numpy array get_img,
5. Update the exposure settings set_exposure, and
6. [Optional] Poll the camera temperature get_temp.

By inheriting from the OpenHSI class, all the methods to load settings/calibration files, collect datacube, saving data to NetCDF, and viewing as RGB are integrated. Furthermore, the custom camera class can be passed to a SettingsBuilder class for calibration.
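A minimal skeleton of such a class might look like the following. The class name, image size, and return values are made up for illustration; a real implementation would inherit from OpenHSI and talk to actual hardware:

```python
import numpy as np

class DummyCamera:  # in practice: class DummyCamera(OpenHSI)
    """Minimal sketch of the interface a custom camera needs.
    The method names match those listed above; everything else
    (resolution, exposure handling) is made up for illustration."""

    def __init__(self, n_rows: int = 1080, n_cols: int = 1920, **kwargs):
        # super().__init__(**kwargs)  # let OpenHSI load settings/calibration files
        self.n_rows, self.n_cols = n_rows, n_cols
        self.exposure_ms = 10.0
        self.start_cam()

    def start_cam(self):
        """Open the connection to the camera hardware."""
        self.is_open = True

    def stop_cam(self):
        """Release the camera hardware."""
        self.is_open = False

    def get_img(self) -> np.ndarray:
        """Capture one raw frame as a numpy array (random noise here)."""
        return np.random.randint(0, 255, (self.n_rows, self.n_cols), dtype=np.uint8)

    def set_exposure(self, exposure_ms: float):
        """Push a new exposure time to the hardware."""
        self.exposure_ms = exposure_ms

    def get_temp(self) -> float:
        """[Optional] Poll the sensor temperature in degrees Celsius."""
        return 25.0
```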

For example, we implement a simulated camera below.


source

SimulatedCamera

 SimulatedCamera (img_path:str=None, mode:str=None, n_lines:int=16,
                  processing_lvl:int=-1, warn_mem_use:bool=True,
                  json_path:str=None, pkl_path:str=None,
                  print_settings:bool=False)

Simulated camera using an RGB image as an input. Hyperspectral data is produced using CIE XYZ matching functions.

Type Default Details
img_path str None Path to an RGB image file
mode str None Default is to generate lines from the RGB image. Other options are HgAr and flat to simulate the HgAr spectrum and a flat field respectively.
n_lines int 16
processing_lvl int -1
warn_mem_use bool True
json_path str None
pkl_path str None
print_settings bool False

source

SimulatedCamera.mode_change

 SimulatedCamera.mode_change (mode:str=None)

Switch between simulating HgAr, flat field, or neither.


source

SimulatedCamera.rgb2xyz_matching_funcs

 SimulatedCamera.rgb2xyz_matching_funcs (rgb:numpy.ndarray)

convert an RGB value to a pseudo-spectrum with the CIE XYZ matching functions.

with SimulatedCamera(img_path="../assets/great_hall_slide.png", n_lines=1024, processing_lvl = 2, 
                     json_path="../assets/cam_settings.json",pkl_path="../assets/cam_calibration.pkl") as cam:
    cam.collect()
    fig = cam.show(plot_lib="matplotlib",hist_eq=True)
Allocated 480.78 MB of RAM. There was 3446.97 MB available.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1024/1024 [00:07<00:00, 137.19it/s]
fig.opts(fig_inches=7,title="simulated hyperspectral datacube")

Each RGB value is converted into a pseudo-spectrum using the CIE XYZ matching functions.

Text(0, 0.5, 'CIE XYZ value')
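As a rough sketch of how such a conversion could work: the Gaussian stand-ins for the x̄, ȳ, z̄ matching functions below are crude single-lobe approximations rather than the tabulated CIE data, and the RGB weighting scheme is an assumption for illustration, not the exact scheme used in openhsi.

```python
import numpy as np

def gaussian(wl, mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def rgb_to_pseudo_spectrum(rgb, wavelengths):
    """Map an RGB triple (values 0-1) to a pseudo-spectrum by weighting
    crude single-Gaussian stand-ins for the CIE x̄, ȳ, z̄ matching functions.
    Peak positions/widths are rough approximations only."""
    r, g, b = rgb
    xbar = gaussian(wavelengths, 599, 37)   # long-wavelength lobe of x̄
    ybar = gaussian(wavelengths, 556, 46)
    zbar = gaussian(wavelengths, 446, 22)
    return r * xbar + g * ybar + b * zbar

wavelengths = np.linspace(400, 700, 301)   # nm, 1 nm sampling
spectrum = rgb_to_pseudo_spectrum((1.0, 0.0, 0.0), wavelengths)
print(wavelengths[np.argmax(spectrum)])    # pure red peaks at 599 nm here
```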

Simulated flat field picture

We will use the Sun’s blackbody radiation for this.


source

SimulatedCamera.gen_flat

 SimulatedCamera.gen_flat ()

simulated blackbody radiation

with SimulatedCamera(mode="flat", n_lines=128, processing_lvl = -1, 
                     json_path="../assets/cam_settings.json",pkl_path="../assets/cam_calibration.pkl",
                     ) as cam:
    cam.collect()
    fig = cam.show(plot_lib="matplotlib")
fig.opts(fig_inches=(1,4))
Allocated 559.45 MB of RAM. There was 3424.47 MB available.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 128/128 [00:00<00:00, 499.85it/s]

If we look at each simulated picture as it goes into the datacube, it looks like a blackbody spectrum along the wavelength axis. There are also black bars at the top and bottom to simulate the rows that would not be illuminated in a real camera.

plt.imshow(cam.dc.data[:,0,:], cmap="gray")
plt.xlabel("wavelength index")
plt.ylabel("cross-track")
Text(0, 0.5, 'cross-track')
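The blackbody curve itself comes straight from Planck's law. A self-contained sketch is below; gen_flat may differ in details such as units and normalisation.

```python
import numpy as np

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck(wl_m, T):
    """Spectral radiance of a blackbody at temperature T (Planck's law),
    in W sr^-1 m^-3, for wavelength in metres."""
    return (2 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

wavelengths_nm = np.linspace(400, 900, 501)
radiance = planck(wavelengths_nm * 1e-9, T=5778)   # ~solar photosphere temperature
radiance /= radiance.max()                         # normalise for display
# Peak lands near 501 nm, consistent with Wien's displacement law
```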

Simulated HgAr lamp picture

For testing wavelength calibration.


source

SimulatedCamera.gen_sim_spectra

 SimulatedCamera.gen_sim_spectra ()

simulated picture of a HgAr lamp

with SimulatedCamera(mode="HgAr", n_lines=128, processing_lvl = -1, 
                     json_path="../assets/cam_settings.json",pkl_path="../assets/cam_calibration.pkl",
                     ) as cam:
    cam.collect()
Allocated 559.45 MB of RAM. There was 3424.47 MB available.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 128/128 [00:00<00:00, 761.87it/s]

We can see the emission lines in roughly the spot where a real HgAr spectral line should fall. The intensity of each emission line is also roughly simulated.

plt.imshow(cam.dc.data[:,0,:], cmap="gray")
plt.xlabel("wavelength index")
plt.ylabel("cross-track")
Text(0, 0.5, 'cross-track')
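One way to sketch such a simulation is to place a broadened Gaussian at each known line position. The line wavelengths below are standard Hg/Ar values; the relative intensities and the fwhm parameter are illustrative assumptions, not the values used by gen_sim_spectra.

```python
import numpy as np

# A few strong Hg and Ar emission lines (nm); relative intensities illustrative only
HGAR_LINES = {404.66: 0.4, 435.83: 1.0, 546.07: 0.9, 576.96: 0.3,
              579.07: 0.3, 696.54: 0.5, 763.51: 0.8, 811.53: 0.7}

def simulate_hgar(wavelengths, fwhm=2.0):
    """Place a Gaussian at each emission line, broadened to mimic the
    spectrograph's resolution (fwhm in nm)."""
    sigma = fwhm / 2.355  # convert FWHM to standard deviation
    spectrum = np.zeros_like(wavelengths)
    for line, intensity in HGAR_LINES.items():
        spectrum += intensity * np.exp(-0.5 * ((wavelengths - line) / sigma) ** 2)
    return spectrum

wavelengths = np.linspace(400, 850, 901)   # 0.5 nm sampling
spectrum = simulate_hgar(wavelengths)      # strongest peak sits at the 435.83 nm Hg line
```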

Loading and processing datacubes further

Tip

ProcessRawDatacube only works for raw data captured using processing_lvl = -1.


source

ProcessRawDatacube

 ProcessRawDatacube (fname:str, processing_lvl:int, json_path:str,
                     pkl_path:str, old_style:bool=False)

Post-process datacubes


source

ProcessRawDatacube.save

 ProcessRawDatacube.save (save_dir:str, preconfig_meta_path:str=None,
                          prefix:str='', suffix:str='',
                          old_style:bool=False)

Saves a NetCDF file (and an RGB representation) to directory save_dir, in a folder named by date, with a file name given by UTC time. Overrides the processing buffer timestamps with the timestamps in the original file; likewise for camera temperatures.

Type Default Details
save_dir str Path to the folder where all datacubes will be saved
preconfig_meta_path str None Path to a .json file that includes metadata fields to be saved inside datacube
prefix str Prepend a custom prefix to your file name
suffix str Append a custom suffix to your file name
old_style bool False Use the old axis ordering
json_path = '../calibration_files/OpenHSI-16_settings_Mono8_bin2.json'
pkl_path  = '../calibration_files/OpenHSI-16_calibration_Mono8_bin2_window.pkl'

proc_dc = ProcessRawDatacube(fname = "../../Downloads/16_pvn1_bin2_10ms2022_01_13-04_22_25.nc", processing_lvl=4,
                             json_path=json_path, pkl_path=pkl_path)
proc_dc.collect()

proc_dc.show(hist_eq=True)

If your saved datacubes have already been processed (for example, binned for a smaller file size), you can post-process them further using ProcessDatacube. A list of callable transforms can be provided to ProcessDatacube.load_next_tfms; the catch is to remember which transforms have already been applied during data collection and what the final desired processing level is (binning, radiance output, …). See the quick start guide for documentation on what is done at each processing level.

Warning

next_tfms needs to be valid. For instance, you cannot bin twice!


source

ProcessDatacube

 ProcessDatacube (fname:str, processing_lvl:int, json_path:str,
                  pkl_path:str, old_style:bool=False)

Post-process datacubes


source

ProcessDatacube.load_next_tfms

 ProcessDatacube.load_next_tfms
                                 (next_tfms:List[Callable[[numpy.ndarray],
                                 numpy.ndarray]]=[])

provide the transforms you want to apply to this dataset

proced_dc = ProcessDatacube(fname = "../calibration_files/2022_01_13/2022_01_13-04_22_25_proc_lvl_2.nc", processing_lvl=4,
                             json_path=json_path, pkl_path=pkl_path)
proced_dc.load_next_tfms([proced_dc.dn2rad])

proced_dc.collect()

proced_dc.show(hist_eq=True)
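A transform here is just a callable taking and returning a numpy.ndarray. For illustration, a hypothetical spectral-smoothing transform (smooth_spectral is not part of openhsi) could be defined and passed alongside the built-in ones:

```python
import numpy as np

def smooth_spectral(x: np.ndarray, width: int = 3) -> np.ndarray:
    """Hypothetical transform: moving-average smoothing along the last
    (wavelength) axis. Any callable with this ndarray -> ndarray signature
    can be given to load_next_tfms."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), -1, x)

# Would be used as, e.g.:
# proced_dc.load_next_tfms([smooth_spectral, proced_dc.dn2rad])
frame = np.random.rand(8, 16)   # (cross-track, wavelength) stand-in for one frame
smoothed = smooth_spectral(frame)
```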

Parallel saving of datacubes while simulated camera is continuously running

Saving datacubes is a blocking operation, but we want our camera to continue capturing while saving takes place. This attempts to place the saving in another multiprocessing.Process, with the underlying datacube implemented as a shared multiprocessing.Array.

Warning

Experimental! However, the example below works! I’m a genius. Well, at the very least, I feel like one for wrestling with the Global Interpreter Lock and coming out on top.
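The core of the pattern can be sketched as follows. Here save_worker and the .npy save are stand-ins for the real NetCDF save, and the actual SharedSimulatedCamera internals may differ:

```python
import os
import tempfile
import numpy as np
import multiprocessing as mp

def save_worker(shared_arr, shape, path):
    """Runs in another process: view the shared buffer as a numpy array
    (no copy) and perform the slow save there."""
    datacube = np.frombuffer(shared_arr.get_obj(), dtype=np.float64).reshape(shape)
    np.save(path, datacube)  # stand-in for the real NetCDF save

shape = (16, 32, 8)                               # (lines, cross-track, bands)
shared = mp.Array("d", int(np.prod(shape)))       # lockable memory shared between processes
datacube = np.frombuffer(shared.get_obj(), dtype=np.float64).reshape(shape)
datacube[:] = np.random.rand(*shape)              # "capture" straight into shared memory

path = os.path.join(tempfile.gettempdir(), "dc_demo.npy")
# On platforms that spawn child processes (Windows, macOS by default),
# wrap the Process creation in an `if __name__ == "__main__":` guard.
p = mp.Process(target=save_worker, args=(shared, shape, path))
p.start()                                         # saving no longer blocks the capture loop
# ... the capture loop could keep writing new lines into `datacube` here ...
p.join()                                          # joined only to make this example deterministic
```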


source

SharedSimulatedCamera

 SharedSimulatedCamera (img_path:str=None, mode:str=None, n_lines:int=16,
                        processing_lvl:int=-1, json_path:str=None,
                        pkl_path:str=None, print_settings:bool=False)

Simulated camera using an RGB image as an input. Hyperspectral data is produced using CIE XYZ matching functions.

num_saved = 0
num2save  = 3

with SharedSimulatedCamera(img_path="../assets/great_hall_slide.png", n_lines=128, processing_lvl = 2, 
                     json_path="../assets/cam_settings.json",pkl_path="../assets/cam_calibration.pkl") as cam:
    
    for i in range(num2save):
        if num_saved > 0:
            #p.join() # waiting for the last process to finish will make this slow. 
            pass
            
        cam.collect()
        print(f"collected from time: {cam.timestamps.data[0]} to {cam.timestamps.data[-1]}")
        p = cam.save("../hyperspectral_snr/temp")
        num_saved += 1
    
    print(f"finished saving {num2save} datacubes")
Allocated 120.20 MB of RAM.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 128/128 [00:00<00:00, 128.94it/s]
collected from time: 2023-03-20 04:09:31.589557+00:00 to 2023-03-20 04:09:32.573887+00:00
Saving ../hyperspectral_snr/temp/2023_03_20/2023_03_20-04_09_31 in another process.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 128/128 [00:00<00:00, 137.65it/s]
collected from time: 2023-03-20 04:09:32.595094+00:00 to 2023-03-20 04:09:33.514739+00:00
Saving ../hyperspectral_snr/temp/2023_03_20/2023_03_20-04_09_32 in another process.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 128/128 [00:00<00:00, 140.65it/s]
collected from time: 2023-03-20 04:09:33.532609+00:00 to 2023-03-20 04:09:34.432315+00:00
Saving ../hyperspectral_snr/temp/2023_03_20/2023_03_20-04_09_33 in another process.
finished saving 3 datacubes

Because saving in a separate process requires at least double the memory, make sure your datacubes can fit in RAM. This has not been tested thoroughly, but I would suggest choosing n_lines <= 1/3 of what you would use with the regular OpenHSI class.