Quick Start Guide
To use an OpenHSI camera, you will need a settings .json file that describes how the camera is initialised, along with other details you can edit to suit your use case. You will also need a .pkl file containing arrays produced during calibration that allow the OpenHSI camera to do smile corrections and conversions to radiance and reflectance.
For example, this is how you would use an OpenHSI camera (packaged with a Lucid sensor) and collect a hyperspectral datacube. The context manager automatically handles the initialisation and closing of the camera.
```python
from openhsi.cameras import *

json_path = "path_to_settings_file.json"
pkl_path  = "path_to_calibration_file.pkl"

with LucidCamera(n_lines        = 1_000,
                 processing_lvl = 2,
                 pkl_path       = pkl_path,
                 json_path      = json_path,
                 exposure_ms    = 10
                ) as cam:
    cam.collect()
    fig = cam.show(plot_lib="matplotlib", robust=True)

fig
```
Since we have a pushbroom sensor, we capture one line of spatial information at a time. Motion is required to obtain 2D spatial information, and the number of lines we collect is specified by `n_lines`. After `LucidCamera.collect` is run, the data is stored in a 3D numpy array `LucidCamera.dc.data`, which is implemented as a circular buffer.
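As a quick sanity check after a collect, you can inspect this buffer directly. The sketch below assumes `cam.dc.data` exposes the numpy array interface described above, and the axis order in the comment is an assumption; check your own datacube's dimensions.

```python
import numpy as np

# Assumes cam.dc.data behaves like the 3D numpy array described above.
datacube = np.asarray(cam.dc.data)
print(datacube.shape, datacube.dtype)  # e.g. (n_lines, cross_track, n_bands); order may differ
```

The next section explains the `processing_lvl` parameter.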
Processing levels
The library comes with some predefined recipes you can use to output a datacube with the desired level of processing. Depending on your use case, you may want to save raw data or choose a faster binning scheme. The available options are listed below.
| processing_lvl | Description |
|---|---|
| -1 | do not apply any transforms (default) |
| 0 | raw digital numbers cropped to useable sensor area |
| 1 | crop + fast smile |
| 2 | crop + fast smile + fast binning |
| 3 | crop + fast smile + slow binning |
| 4 | crop + fast smile + fast binning + conversion to radiance in units of uW/cm^2/sr/nm |
| 5 | crop + fast smile + radiance + fast binning |
| 6 | crop + fast smile + fast binning + radiance + reflectance |
| 7 | crop + fast smile + radiance + slow binning |
| 8 | crop + fast smile + radiance + slow binning + reflectance |
The main difference between these is the order in which the transforms are applied in the pipeline. This table summarises the binning procedure and output:
| processing_lvl | Binning | Output |
|---|---|---|
| -1, 0, 1 | None | Digital Numbers |
| 2 | Fast | Digital Numbers |
| 3 | Slow | Digital Numbers |
| 4, 5 | Fast | Radiance (uW/cm^2/sr/nm) |
| 6 | Fast | Reflectance |
| 7 | Slow | Radiance (uW/cm^2/sr/nm) |
| 8 | Slow | Reflectance |
Alternatively, you can supply a custom pipeline of transforms `custom_tfms:List[Callable[[np.ndarray],np.ndarray]]` to `LucidCamera.set_processing_lvl(custom_tfms)`.
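As a minimal sketch, a custom transform is just a function mapping one frame to another; the `flip_across_track` function here is a hypothetical example, not part of the library:

```python
import numpy as np

def flip_across_track(frame: np.ndarray) -> np.ndarray:
    """Hypothetical transform: mirror each frame along the spatial axis."""
    return frame[::-1, :]

# Replaces the predefined recipe with your own pipeline.
cam.set_processing_lvl([flip_across_track])
```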
A note on binning schemes
We provide a fast binning scheme that only involves one memory allocation to speed things up; it assumes that the wavelength profile along the spectral axis is linear. In practice, it is not exactly linear, so we also provide a slow binning scheme that does it properly at the cost of more memory allocations. We found that the extra time needed was around 2 ms on a Jetson Xavier board.
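As a rough illustration of the difference (not the library's implementation), fast binning can be done with a single reshape and sum over fixed-width groups, while slow binning groups pixels by their actual calibrated wavelengths:

```python
import numpy as np

frame = np.random.rand(500, 1024)          # (cross-track, spectral) frame
wavelengths = 400 + 0.3 * np.arange(1024)  # calibrated; not exactly linear in practice

# Fast: one allocation, assumes equal-width bins along the spectral axis.
fast = frame.reshape(500, -1, 4).sum(axis=2)

# Slow: bin by actual wavelength, at the cost of extra allocations.
edges = np.arange(400, 708, 4.0)
idx = np.digitize(wavelengths, edges)
slow = np.stack([frame[:, idx == i].sum(axis=1) for i in range(1, len(edges))], axis=1)
```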
Post-processing Datacubes
If you are collecting raw data and want to post-process it into radiance or reflectance, you can use `ProcessDatacube` from the capture module. For example, suppose we have a datacube of digital numbers and we want to convert it to radiance. We need to pass a `processing_lvl` that includes the radiance conversion (so that the `dn2rad` method is initialised). Then we can pass a list of transforms to `load_next_tfms`, which will be applied to the whole datacube.
```python
from openhsi.capture import ProcessDatacube

dc2process = ProcessDatacube(fname = "path_to_datacube_file.nc", processing_lvl = 4,
                             json_path = json_path, pkl_path = pkl_path)
dc2process.load_next_tfms([dc2process.dn2rad])
dc2process.collect()
```
Just like the `SimulatedCamera`, you can then view your post-processed datacube using `dc2process.show(hist_eq=True)` or similar. More on visualisation in the next section.
Visualisation
After collection, the datacube can be visualised as an RGB image using `LucidCamera.show`, which returns a figure object created with your chosen plotting backend `plot_lib`. The red, green, and blue wavelengths can be specified, and the RGB channels will be chosen from the nearest wavelength bands.
You may find that the contrast is low because of some outlier pixels from, for instance, specular reflection. To increase the contrast, we provide two options:

- `robust`: saturated linear stretch. For example, `robust=True` will rescale colours to the 2nd–98th percentile. Alternatively, you can specify the percentage, so `robust=5` will rescale to the 5th–95th percentile.
- `hist_eq`: apply histogram equalisation.

The default behaviour is no contrast adjustment.
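For example (the `red_nm`/`green_nm`/`blue_nm` parameter names are an assumption here; check `show`'s signature in your version):

```python
# red_nm/green_nm/blue_nm are assumed parameter names for the RGB wavelengths.
fig = cam.show(plot_lib="matplotlib", red_nm=640, green_nm=550, blue_nm=470, robust=True)
fig = cam.show(hist_eq=True)  # histogram equalisation instead of a robust stretch
```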
If you just want to view a datacube without any camera attached, you can do so using:
```python
from openhsi.data import *

dc = DataCube()
dc.load_nc("path_to_datacube_file.nc")
dc.show(robust=True)
```
If you want to interactively view your datacubes (tap and see spectra), you can do so using:
```python
from openhsi.atmos import *

dcv = DataCubeViewer("path_to_datacube_file.nc")
dcv()
```
Saving datacubes
To save the datacube to NetCDF format (alongside an RGB picture), use `LucidCamera.save`. For example:
= "beach_data" ) cam.save( save_dir
will save a NetCDF file as `f"beach_data/{current_date}/{current_datetime}.nc"` along with an RGB image. The save function also allows you to customise the file prefix and suffix, and preconfigured metadata can be saved into the NetCDF file. The camera temperature (in Celsius) and datetime for each camera frame are automatically included.
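A hedged sketch of those extra options; the `prefix`, `suffix`, and `preconfig_meta_path` parameter names are assumptions based on the description above, so check the save method's signature:

```python
# Hypothetical parameter names -- verify against your version's save() signature.
cam.save(save_dir = "beach_data", prefix = "flight1_", suffix = "_raw",
         preconfig_meta_path = "path_to_metadata.json")
```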
Getting surface reflectance
Generally, processing to radiance is recommended. To process to reflectance in real time, one requires knowledge of the atmospheric conditions at the time of collection. While this is facilitated by setting the `processing_lvl` to 6, internally, the algorithm relies on the pre-computed at-sensor radiance saved in the calibration .pkl file (under the `rad_fit` key of the Python dictionary). The other option is to use Empirical Line Calibration (see the `atmos` module in the sidebar).
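Since `rad_fit` is stored as a scipy `interp1d` (see how it is built in the next section), you can evaluate the pre-computed at-sensor radiance directly:

```python
import numpy as np

# cam.calibration["rad_fit"] maps wavelength (nm) to predicted at-sensor radiance.
rad_fit = cam.calibration["rad_fit"]
predicted_radiance = rad_fit(np.linspace(400, 850, 100))  # uW/cm^2/sr/nm
```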
Updating the radiative transfer model
A radiative transfer model predicts the behaviour of sunlight as it enters the Earth's atmosphere. Some of the light will be absorbed, re-emitted, scattered, etc., by oxygen, nitrogen, carbon dioxide, methane, and aerosols, to name a few. You don't want any clouds obstructing the sunlight.
Since the atmospheric conditions will change, you may need to recompute this every so often. Here is how to do it, assuming your camera object is called `cam` from the example above:
If you don't have a physical camera initialised, you can use the base `CameraProperties` class from `openhsi.data` to load, modify, and dump these files.
```python
cam = CameraProperties(json_path="path_to_settings_file.json", pkl_path="path_to_calibration_file.pkl")
```
```python
# camera initialised...
from datetime import datetime
import numpy as np
from openhsi.atmos import *
from Py6S import *
from scipy.interpolate import interp1d

model = Model6SV(lat = cam.settings["latitude"], lon = cam.settings["longitude"],
                 z_time = datetime.strptime(cam.settings["datetime_str"], "%Y-%m-%d %H:%M"),
                 station_num = cam.settings["radiosonde_station_num"], region = cam.settings["radiosonde_region"],
                 alt = cam.settings["altitude"], zen = 0., azi = 0., # viewing zenith and azimuth angles
                 aero_profile = AeroProfile.Maritime,
                 wavelength_array = np.linspace(350, 900, num=2000), # choose larger range than sensor range
                 sixs_path = cam.settings["sixs_path"])

cam.calibration["rad_fit"] = interp1d(np.linspace(350, 900, num=2000), model.radiance/10, kind='cubic')

#cam.dump(json_path, pkl_path) # update the settings and calibration files
```
You will need the 6SV executable somewhere on your system. You can specify the path to the executable with `sixs_path`. If you installed via `conda`, you should be fine without specifying the path.
Use a `wavelength_array` that extends beyond the sensor range on both sides. The 6SV model calculates radiance in units of W/m^2/sr/μm, whereas the integrating sphere calibration is in μW/cm^2/sr/nm, hence the extra division by 10 in `model.radiance/10`.
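Explicitly, the factor of 10 follows from the unit conversion: 1 W/m^2/sr/μm = 10^6 μW / (10^4 cm^2 · sr · 10^3 nm) = 0.1 μW/cm^2/sr/nm.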
Empirical Line Calibration
The ELC widget is defined in the `openhsi.atmos` module. This method uses known spectral targets (typically one dark and one light) to extrapolate the reflectance for the other pixels. Users can draw several bounding boxes telling the ELC algorithm to use those pixels as the reference targets. Automatically identifying the spectral targets, and thus keeping the widget interactive, requires a spectral matching technique; we use the Spectral Angle Mapper, implemented efficiently enough to be used interactively.
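For reference, the Spectral Angle Mapper treats each spectrum as a vector and scores similarity by the angle between it and a reference spectrum. A minimal sketch (not the library's implementation):

```python
import numpy as np

def spectral_angle(pixels: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Angle (radians) between each pixel spectrum and a reference spectrum.

    pixels: (rows, cols, bands) datacube; ref: (bands,) reference spectrum.
    Smaller angles mean closer spectral matches.
    """
    dots = pixels @ ref                                            # (rows, cols)
    norms = np.linalg.norm(pixels, axis=-1) * np.linalg.norm(ref)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))
```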
ELC only ingests radiance datacubes. To view a digital number or reflectance datacube interactively, use `DataCubeViewer` in the `openhsi.atmos` module.
```python
from openhsi.atmos import *

elc = ELC(nc_path = "path_to_radiance_datacube.nc",
          speclib_path = "path_to_spectral_library.pkl",
          pkl_path = "path_to_camera_calibration_file.pkl")
elc()
```
The `speclib_path` parameter identifies the lab-measured spectra of a few calibration tarps. This method also requires a radiance estimate `model_6SV` so we can spectrally match radiance derived from lab-based reflectance; close enough is good enough.
Usage Tips
Here are some tips from those who have used this library and camera in the field.
- Running the camera collection software from Jupyter Notebooks will impose some delays and slow down the frame rate. For best performance, run your collection from a script.
- The interactive nature of Jupyter Notebooks means memory usage can grow with successive datacube allocations. Restarting the kernel when memory is getting full helps.