The bridge between camera implementations and the rest of openhsi land.
Capture Support and Simulated Cameras
Tip
This module can be imported using `from openhsi.capture import *`
The OpenHSI class defines the interface between custom camera implementations and all the processing and calibration needed to run a pushbroom hyperspectral imager.
Running in a notebook slows down the camera more than running in a script.
To add a custom camera, five methods need to be defined in a class (plus one optional):
1. Initialise the camera: `__init__`
2. Open the camera: `start_cam`
3. Close the camera: `stop_cam`
4. Capture a picture as a numpy array: `get_img`
5. Update the exposure settings: `set_exposure`
6. [Optional] Poll the camera temperature: `get_temp`
By inheriting from the OpenHSI class, all the methods to load settings/calibration files, collect datacubes, save data to NetCDF, and view the result as RGB are integrated. Furthermore, the custom camera class can be passed to a SettingsBuilder class for calibration. A minimal skeleton is sketched below.
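As a rough sketch (not part of the library), a toy camera that returns random frames shows where each required method fits; a real driver would replace these method bodies with calls into the camera vendor's SDK:

```python
import numpy as np
from openhsi.capture import OpenHSI

class ToyCamera(OpenHSI):
    """Minimal sketch of a custom camera; the frame size and values are made up."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)   # loads settings/calibration files and buffers
        self.exposure_ms = 10.0

    def start_cam(self):
        pass                                # open the connection to the physical device here

    def stop_cam(self):
        pass                                # release the device here

    def get_img(self) -> np.ndarray:
        # Return one raw frame (cross-track rows x spectral columns) as a numpy array.
        # The shape is hard-coded for illustration; a real driver queries the sensor.
        return np.random.randint(0, 2**12, size=(772, 330), dtype=np.uint16)

    def set_exposure(self, exposure_ms: float):
        self.exposure_ms = exposure_ms      # forward the new exposure to the device here

    def get_temp(self) -> float:            # optional
        return 25.0
```

Since `ToyCamera` inherits from OpenHSI, it can then be used with the same `with ... as cam:` pattern and `cam.collect()`/`cam.save()` calls shown in the rest of this page.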
For example, we implement a simulated camera below.
If we look at each simulated picture as it goes into the datacube, it looks like a blackbody spectrum along the wavelength axis. There are also top and bottom black bars to simulate the rows that would not be illuminated in a real camera.
```python
with SimulatedCamera(mode="HgAr", n_lines=128, processing_lvl=-1,
                     json_path="../assets/cam_settings.json",
                     pkl_path="../assets/cam_calibration.pkl") as cam:
    cam.collect()
```
Allocated 559.45 MB of RAM. There was 3424.47 MB available.
We can see the emission lines in roughly the spot where a real HgAr spectral line should fall. The intensity of each emission line is also roughly simulated.
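To view the collected datacube as an RGB composite, the show method can be used. A brief sketch, assuming the `robust` keyword option is available in your installed openhsi version:

```python
# Render the datacube collected above as an RGB image.
# `robust=True` (assumed keyword) clips extreme values for a nicer display.
fig = cam.show(robust=True)
fig
```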
Saves the datacube (and an RGB representation) to a NetCDF file in the directory `save_dir`, in a folder named by the date, with a file name given by the UTC time. The processing buffer timestamps are overridden with the timestamps in the original file, and likewise for camera temperatures.
|  | Type | Default | Details |
|---|---|---|---|
| save_dir | str |  | Path to the folder where all datacubes will be saved |
| preconfig_meta_path | str | None | Path to a .json file that includes metadata fields to be saved inside the datacube |
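For example (the destination directory and metadata file below are placeholder paths):

```python
# Save the collected datacube as NetCDF along with an RGB preview.
# "../datacubes" and the metadata .json path are illustrative only.
cam.save(save_dir="../datacubes",
         preconfig_meta_path="../assets/metadata.json")
```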
If your saved datacubes have already been processed (for example, binned for a smaller file size), you can further post-process them using ProcessDatacube. A list of callable transforms can be provided to ProcessDatacube.load_next_tfms; the catch is to remember which transforms have already been applied during data collection and what the final desired processing level is (binning, radiance output, …). See the quick start guide for documentation on what is done at each processing level. A sketch is shown after the warning below.
Warning
next_tfms needs to be valid. For instance, you cannot bin twice!
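A rough sketch of this workflow follows; the file path, the processing_lvl value, and the transform names (dn2rad, rad2ref) are assumptions, so check the openhsi.data documentation for the ones matching your installed version:

```python
from openhsi.data import ProcessDatacube

# Load a previously saved (already partly processed) datacube; the path is a placeholder.
proc_dc = ProcessDatacube(fname="../datacubes/2023_03_20/2023_03_20-04_09_33.nc",
                          processing_lvl=4,
                          json_path="../assets/cam_settings.json",
                          pkl_path="../assets/cam_calibration.pkl")

# Queue only the transforms that have NOT already been applied during capture,
# e.g. digital numbers -> radiance, then radiance -> reflectance (assumed names).
proc_dc.load_next_tfms([proc_dc.dn2rad, proc_dc.rad2ref])
proc_dc.collect()   # apply the queued transforms across the datacube
```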
Parallel saving of datacubes while the simulated camera is continuously running
Saving datacubes is a blocking operation but we want our camera to continue capturing while saving is taking place. This attempts to place the saving in another multiprocessing.Process and the underlying datacube is implemented as a shared multiprocessing.Array.
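The underlying idea can be boiled down to the following stand-alone sketch (not the library's actual implementation): a numpy view over a shared multiprocessing.Array lets a child process write the data to disk while the parent keeps going.

```python
import multiprocessing as mp
import numpy as np

def save_worker(shared_arr, shape, path):
    # Re-wrap the shared buffer as a numpy array in the child process (no copy is made).
    cube = np.frombuffer(shared_arr.get_obj(), dtype=np.int32).reshape(shape)
    np.save(path, cube)   # stand-in for the real NetCDF write

if __name__ == "__main__":
    shape = (128, 512, 64)                                # (lines, cross-track, bands), illustrative
    shared_arr = mp.Array("i", int(np.prod(shape)))       # shared int32 buffer
    cube = np.frombuffer(shared_arr.get_obj(), dtype=np.int32).reshape(shape)

    cube[:] = np.random.randint(0, 2**12, size=shape)     # pretend this is a collected datacube
    p = mp.Process(target=save_worker, args=(shared_arr, shape, "cube.npy"))
    p.start()                                             # saving now runs in another process
    # ...the parent could keep capturing into a fresh buffer here...
    p.join()
```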
Warning
Experimental! However, the example below works! I’m a genius. Well, at the very least, I feel like one for wrestling with the Global Interpreter Lock and coming out on top.
Simulated camera using an RGB image as an input. Hyperspectral data is produced using CIE XYZ matching functions.
```python
num_saved = 0
num2save  = 3

with SharedSimulatedCamera(img_path="../assets/great_hall_slide.png", n_lines=128, processing_lvl=2,
                           json_path="../assets/cam_settings.json",
                           pkl_path="../assets/cam_calibration.pkl") as cam:
    for i in range(num2save):
        if num_saved > 0:
            #p.join() # waiting for the last process to finish will make this slow.
            pass
        cam.collect()
        print(f"collected from time: {cam.timestamps.data[0]} to {cam.timestamps.data[-1]}")
        p = cam.save("../hyperspectral_snr/temp")
        num_saved += 1

print(f"finished saving {num2save} datacubes")
```
collected from time: 2023-03-20 04:09:33.532609+00:00 to 2023-03-20 04:09:34.432315+00:00
Saving ../hyperspectral_snr/temp/2023_03_20/2023_03_20-04_09_33 in another process.
finished saving 3 datacubes
Because this requires roughly double the amount of memory (and more) to facilitate saving in a separate process, make sure your datacubes can fit in your RAM. I have not tested this, but I would suggest choosing n_lines <= 1/3 of the amount you would use with the regular OpenHSI.