data

Hyperspectral datacubes have three axes (cross-track, along-track, and wavelength) and are stored in 3D numpy ndarrays. When using a pushbroom scanner, the datacube is filled gradually, one frame at a time. The implementation here is based on a circular buffer, and we provide additional methods to apply a pipeline of transforms which, for example, can be used for smile correction and radiance conversion.
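As a quick sketch of this layout, a datacube can be preallocated and filled one frame at a time (the array sizes here are illustrative, not tied to any particular sensor):

```python
import numpy as np

# Illustrative axis sizes only
cross_track, along_track, n_bands = 905, 16, 136

# Preallocate an empty datacube of uint16 digital numbers
datacube = np.zeros((cross_track, along_track, n_bands), dtype=np.uint16)

# A pushbroom scanner captures one (cross-track, wavelength) frame per along-track step
frame = np.random.randint(0, 2**12, size=(cross_track, n_bands), dtype=np.uint16)
datacube[:, 0, :] = frame

print(datacube.shape)  # (905, 16, 136)
```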
Tip

This module can be imported using from openhsi.data import *

Generic Circular Buffer on numpy.ndarrays

The base functionality is implemented as a generic circular buffer. The datatype dtype can be modified as desired; the default is uint8, and for OpenHSI cameras it is typically set to uint16 to store digital numbers.


source

CircArrayBuffer

 CircArrayBuffer (size:tuple=(100, 100), axis:int=0,
                  dtype:type=<class 'numpy.uint8'>,
                  show_func:Callable[[numpy.ndarray],ForwardRef('plot')]=None)

Circular FIFO Buffer implementation on ndarrays. Each put/get is a (n-1)darray.

Type Default Details
size tuple (100, 100) Shape of n-dim circular buffer to preallocate
axis int 0 Which axis to traverse when filling the buffer
dtype type uint8 Buffer numpy data type
show_func typing.Callable[[numpy.ndarray], ForwardRef(‘plot’)] None Custom plotting function if desired
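To make the FIFO and wrap-around semantics concrete, here is a minimal stand-alone sketch of the same idea (this is not the library's CircArrayBuffer, just an illustration of the mechanics):

```python
import numpy as np

class MiniCircBuffer:
    """Minimal sketch of FIFO circular-buffer semantics on an ndarray.
    Illustrative only -- not the library's CircArrayBuffer."""
    def __init__(self, size=(100,), axis=0, dtype=np.uint8):
        self.data = np.zeros(size, dtype=dtype)
        self.slots = size[axis]
        self.write_pos = 0   # next slot to overwrite
        self.count = 0       # unread items, capped at the buffer length

    def put(self, line):
        self.data[self.write_pos] = line
        self.write_pos = (self.write_pos + 1) % self.slots
        self.count = min(self.count + 1, self.slots)

    def get(self):
        if self.count == 0:
            return None      # buffer exhausted
        oldest = (self.write_pos - self.count) % self.slots
        self.count -= 1
        return self.data[oldest]

buf = MiniCircBuffer(size=(7,), dtype=np.uint16)
for i in range(9):
    buf.put(i)               # 0 and 1 get overwritten by 7 and 8
print(buf.get())             # oldest surviving item is 2
```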

source

CircArrayBuffer.put

 CircArrayBuffer.put (line:numpy.ndarray)

Writes a (n-1)darray into the buffer


source

CircArrayBuffer.get

 CircArrayBuffer.get ()

Reads the oldest (n-1)darray from the buffer


source

CircArrayBuffer.show

 CircArrayBuffer.show ()

Display the data

For example, we can write to a 1D array

cib = CircArrayBuffer(size=(7,),axis=0)
for i in range(9):
    cib.put(i)
    cib.show()

for i in range(9):
    print(i,cib.get())
#(7) [0 0 0 0 0 0 0]
#(7) [0 1 0 0 0 0 0]
#(7) [0 1 2 0 0 0 0]
#(7) [0 1 2 3 0 0 0]
#(7) [0 1 2 3 4 0 0]
#(7) [0 1 2 3 4 5 0]
#(7) [0 1 2 3 4 5 6]
#(7) [7 1 2 3 4 5 6]
#(7) [7 8 2 3 4 5 6]
0 2
1 3
2 4
3 5
4 6
5 7
6 8
7 None
8 None

Or a 2D array

plots_list = []

cib = CircArrayBuffer(size=(4,4),axis=0)
cib.put(1) # scalars are broadcast to a 1D array
for i in range(5):
    cib.put(cib.get()+1)
    plots_list.append( cib.show().opts(colorbar=True,title=f"i={i}") )

hv.Layout(plots_list).cols(3)

Loading Camera Settings and Calibration Files

The OpenHSI camera has a settings dictionary which contains these fields:

- camera_id is your camera name,
- row_slice indicates which rows are illuminated; the rest are cropped out,
- resolution is the full pixel resolution given by the camera without cropping,
- fwhm_nm specifies the size of the spectral bands in nanometers,
- exposure_ms is the camera exposure time last used,
- luminance is the reference luminance used to convert digital numbers to radiance,
- longitude is the longitude in degrees east,
- latitude is the latitude in degrees north,
- datetime_str is the UTC time at the time of data collection,
- altitude is the altitude above sea level (assuming the target is at sea level) measured in km,
- radiosonde_station_num is the station number from http://weather.uwyo.edu/upperair/sounding.html,
- radiosonde_region is the region code from http://weather.uwyo.edu/upperair/sounding.html,
- sixs_path is the path to the 6SV executable,
- binxy is the number of pixels to bin in the (x, y) direction,
- win_offset is the (x, y) offset from the edge of the detector for a selective readout window (used in combination with a win_resolution smaller than the full detector size),
- win_resolution is the size of the detector area to read out as (width, height), and
- pixel_format is the format of the pixels read out from the sensor, i.e. 8bit, 10bit, or 12bit.
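As an illustration, a settings file along these lines could be written with the json module (all field values below are hypothetical examples, not defaults for any real camera):

```python
import json

# Hypothetical values for illustration only; real files are generated
# per camera by the calibration process.
settings = {
    "camera_id": "my_openhsi_cam",
    "row_slice": [8, 913],        # illuminated rows to keep
    "resolution": [924, 1240],    # full sensor resolution before cropping
    "fwhm_nm": 4,                 # spectral band size in nm
    "exposure_ms": 10,
    "luminance": 10000,
    "binxy": [1, 1],
    "win_offset": [0, 0],
    "win_resolution": [924, 1240],
    "pixel_format": "12bit",
}

# Serialise exactly as a cam_settings.json file would be
print(json.dumps(settings, indent=2)[:60])
```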

The settings dictionary may also contain additional camera-specific fields:

- mac_addr is the GigE camera MAC address (used by Lucid Vision sensors), and
- serial_num is the serial number of the detector (used by Ximea and FLIR sensors).

The pickle file is a dictionary with these fields:

- camera_id is your camera name,
- HgAr_pic is a picture of a mercury argon lamp's spectral lines for wavelength calibration,
- flat_field_pic is a picture of a well-lit scene used to determine the illuminated area,
- smile_shifts is an array of pixel shifts needed to correct for smile error,
- wavelengths_linear is an array of wavelengths after linear interpolation,
- wavelengths is an array of wavelengths after cubic interpolation,
- rad_ref is a 4D datacube with coordinates of cross-track, wavelength, exposure, and luminance,
- sfit is the spline fit function from the integrating sphere calibration, and
- rad_fit is the interpolated function of the expected radiance at the sensor computed using 6SV.

These files are unique to each OpenHSI camera.
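The calibration file is an ordinary pickle of a dictionary. For illustration, a toy stand-in (with made-up arrays and only a few of the fields above) can be round-tripped like so:

```python
import os
import pickle
import tempfile
import numpy as np

# Toy stand-in for a calibration dictionary; real arrays come from the
# calibration module and shapes vary per camera.
calibration = {
    "camera_id": "my_openhsi_cam",
    "smile_shifts": np.zeros(905, dtype=np.int16),
    "wavelengths": np.linspace(418.4, 894.2, 1240),
}

path = os.path.join(tempfile.mkdtemp(), "cam_calibration.pkl")
with open(path, "wb") as f:
    pickle.dump(calibration, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, "rb") as f:
    loaded = pickle.load(f)

print(sorted(loaded.keys()))
```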


source

CameraProperties

 CameraProperties (json_path:str=None, pkl_path:str=None,
                   print_settings:bool=False, **kwargs)

Save and load OpenHSI camera settings and calibration

Type Default Details
json_path str None Path to settings file
pkl_path str None Path to calibration file
print_settings bool False Print out settings file contents
kwargs

source

CameraProperties.dump

 CameraProperties.dump (json_path:str=None, pkl_path:str=None)

Save the settings and calibration files

For example, the contents of CameraProperties consist of two dictionaries. To produce the files cam_settings.json and cam_calibration.pkl, follow the steps outlined in the calibration module.

#collapse_output

cam_prop = CameraProperties(pkl_path="../assets/cam_calibration.pkl")
cam_prop
settings = 
{}

calibration = 
{'HgAr_pic': array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]]), 'smile_shifts': array([9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
       9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
       9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
       9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
       9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
       9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 8, 8,
       8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
       8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
       8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
       8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 7, 7, 7,
       7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
       7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
       7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
       7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6,
       6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
       6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
       6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
       6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
       6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
       5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
       5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
       5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
       5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4,
       4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
       4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
       4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
       4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
       4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
       4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
       4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3,
       3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
       3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
       3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
       3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
       3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0], dtype=int16), 'wavelengths': array([418.37123297, 418.74477592, 419.11877271, ..., 893.95644796,
       894.09194015, 894.22658974]), 'wavelengths_linear': array([419.81906422, 420.25228866, 420.68551311, ..., 951.81868076,
       952.2519052 , 952.68512965]), 'flat_field_pic': array([[ 9,  7,  9, ...,  2,  3,  1],
       [ 8,  7,  8, ...,  3,  3,  4],
       [ 8,  7, 10, ...,  6,  7,  6],
       ...,
       [ 8,  6,  7, ...,  2,  3,  1],
       [ 3,  3,  3, ...,  1,  2,  2],
       [ 1,  1,  1, ...,  1,  1,  1]], dtype=uint8), 'rad_ref': <xarray.DataArray (variable: 1, cross_track: 905, wavelength_index: 1240,
                   exposure: 2, luminance: 2)>
array([[[[[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         ...,

         [[ 0,  7],
          [ 0, 15]],

         [[ 0,  7],
          [ 0, 15]],

         [[ 0,  7],
          [ 0, 15]]],

...

        [[[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         ...,

         [[ 0,  7],
          [ 0, 14]],

         [[ 0,  7],
          [ 0, 14]],

         [[ 0,  7],
          [ 0, 14]]]]], dtype=int32)
Coordinates:
  * cross_track       (cross_track) int32 0 1 2 3 4 5 ... 900 901 902 903 904
  * wavelength_index  (wavelength_index) int32 0 1 2 3 4 ... 1236 1237 1238 1239
  * exposure          (exposure) int32 10 20
  * luminance         (luminance) int32 0 10000
  * variable          (variable) <U8 'datacube'
, 'spec_rad_ref_luminance': 52020, 'sfit': <scipy.interpolate._interpolate.interp1d object>, 'rad_fit': <scipy.interpolate._interpolate.interp1d object>}
# Show the integrating sphere calibration references
cam_prop.calibration["rad_ref"]
<xarray.DataArray (variable: 1, cross_track: 905, wavelength_index: 1240,
                   exposure: 2, luminance: 2)>
array([[[[[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         ...,

         [[ 0,  7],
          [ 0, 15]],

         [[ 0,  7],
          [ 0, 15]],

         [[ 0,  7],
          [ 0, 15]]],

...

        [[[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         [[ 0,  2],
          [ 0,  5]],

         ...,

         [[ 0,  7],
          [ 0, 14]],

         [[ 0,  7],
          [ 0, 14]],

         [[ 0,  7],
          [ 0, 14]]]]], dtype=int32)
Coordinates:
  * cross_track       (cross_track) int32 0 1 2 3 4 5 ... 900 901 902 903 904
  * wavelength_index  (wavelength_index) int32 0 1 2 3 4 ... 1236 1237 1238 1239
  * exposure          (exposure) int32 10 20
  * luminance         (luminance) int32 0 10000
  * variable          (variable) <U8 'datacube'

Transforms

We can apply a number of transforms to the camera's raw data, and these transforms are used to modify the processing level during data collection. For example, we can perform a fast smile correction and wavelength binning during operation. With more processing, this is easily extended to obtain radiance and reflectance.

Some transforms require setup, which is done using CameraProperties.tfm_setup. This method also lets you tack on an additional setup function via the more_setup argument, which takes any callable that can mutate the CameraProperties class.
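A sketch of such a more_setup callable is below. A simple stand-in object is used here so the snippet is self-contained, and the dark_current field is hypothetical; with the real class you would pass my_setup to CameraProperties.tfm_setup(more_setup=my_setup).

```python
import types

# Stand-in for a CameraProperties instance, for illustration only
cam = types.SimpleNamespace(settings={"exposure_ms": 10}, calibration={})

def my_setup(cam) -> None:
    """Extra setup: mutate the object in place and return nothing."""
    cam.settings["dark_current"] = 0.5   # hypothetical field

my_setup(cam)
print(cam.settings["dark_current"])  # 0.5
```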


source

CameraProperties.tfm_setup

 CameraProperties.tfm_setup (more_setup:Callable[[__main__.CameraProperties],NoneType]=None,
                             dtype:Union[numpy.uint8,numpy.uint16,numpy.float32]=<class 'numpy.uint16'>,
                             lvl:int=0)

Setup for transforms


source

CameraProperties.crop

 CameraProperties.crop (x:numpy.ndarray)

Crops to illuminated area


source

CameraProperties.fast_smile

 CameraProperties.fast_smile (x:numpy.ndarray)

Apply the fast smile correction procedure


source

CameraProperties.fast_bin

 CameraProperties.fast_bin (x:numpy.ndarray)

Changes the view of the datacube so that everything that needs to be binned is in the last axis. The last axis is then binned.


source

CameraProperties.slow_bin

 CameraProperties.slow_bin (x:numpy.ndarray)

Bins spectral bands accounting for the slight nonlinearity in the index-wavelength map


source

CameraProperties.dn2rad

Converts digital numbers to radiance (uW/cm^2/sr/nm). Use after cropping to useable area.


source

CameraProperties.rad2ref_6SV


source

CameraProperties.set_processing_lvl

 CameraProperties.set_processing_lvl (lvl:int=-1,
                                      custom_tfms:List[Callable[[numpy.ndarray],numpy.ndarray]]=None)

Define the output lvl of the transform pipeline. Predefined recipes include:

- -1: do not apply any transforms (default),
- 0: raw digital numbers cropped to the usable sensor area,
- 1: crop + fast smile,
- 2: crop + fast smile + fast binning,
- 3: crop + fast smile + slow binning,
- 4: crop + fast smile + fast binning + conversion to radiance in units of uW/cm^2/sr/nm,
- 5: crop + fast smile + radiance + fast binning,
- 6: crop + fast smile + fast binning + radiance + reflectance,
- 7: crop + fast smile + radiance + slow binning,
- 8: crop + fast smile + radiance + slow binning + reflectance.

You can add your own transform by monkey patching the CameraProperties class.

import numpy as np
from fastcore.basics import patch  # provides the @patch decorator

@patch
def identity(self:CameraProperties, x:np.ndarray) -> np.ndarray:
    """The identity transform"""
    return x

If you don’t require any camera settings or calibration files, a valid transform can be any Callable that takes in a 2D np.ndarray and returns a 2D np.ndarray.
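For instance, a standalone transform needs no camera state at all. The one below simply flips the cross-track axis, and could in principle be supplied via the custom_tfms argument of set_processing_lvl:

```python
import numpy as np

def flip_cross_track(x: np.ndarray) -> np.ndarray:
    """A valid standalone transform: 2D array in, 2D array out."""
    return x[::-1, :]

frame = np.arange(12, dtype=np.uint16).reshape(3, 4)
out = flip_cross_track(frame)
print(out[0])  # the last row of the input comes first
```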

Pipeline for Composing Transforms

Depending on the level of processing that one wants to do real-time, a number of transforms need to be composed in sequential order. To make this easy to customise, you can use the pipeline method and pass in a raw camera frame and an ordered list of transforms.

To make the transforms pipeline easy to use and customise, you can use the CameraProperties.set_processing_lvl method.


source

CameraProperties.pipeline

 CameraProperties.pipeline (x:numpy.ndarray)

Compose a list of transforms and apply to x.
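Conceptually, the pipeline is just sequential function composition. A sketch with stand-in transforms (not the library's implementations) makes this explicit:

```python
from functools import reduce
import numpy as np

def compose_pipeline(x, tfms):
    """Apply each transform in order, feeding each output into the next."""
    return reduce(lambda acc, tfm: tfm(acc), tfms, x)

# Stand-ins for crop / smile-shift / bin steps
crop   = lambda x: x[1:-1, :]                           # drop edge rows
shift  = lambda x: np.roll(x, 1, axis=1)                # crude pixel shift
binned = lambda x: x.reshape(x.shape[0], -1, 2).sum(axis=2)  # pairwise bin

frame = np.ones((6, 8), dtype=np.uint16)
out = compose_pipeline(frame, [crop, shift, binned])
print(out.shape)  # (4, 4)
```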

# if wavelength calibration is changed, this needs to be updated

cam_prop = CameraProperties("../assets/cam_settings.json","../assets/cam_calibration.pkl")

cam_prop.set_processing_lvl(-1) # raw digital numbers
test_eq( (924,1240), np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

cam_prop.set_processing_lvl(0) # cropped
test_eq( (905, 1240), np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

cam_prop.set_processing_lvl(1) # fast smile corrected
test_eq( (905, 1231), np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

cam_prop.set_processing_lvl(2) # fast binned
test_eq( (905, 136),  np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

cam_prop.set_processing_lvl(4) # radiance
test_eq( (905, 136),  np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

# cam_prop.set_processing_lvl(6) # reflectance
# test_eq( (452,108),  np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

# cam_prop.set_processing_lvl(5) # radiance conversion moved earlier in pipeline
# test_eq( (452,108),  np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

cam_prop = CameraProperties("../assets/cam_settings.json","../assets/cam_calibration.pkl")  
cam_prop.set_processing_lvl(3) # slow binned
test_eq( (905, 118),  np.shape( cam_prop.pipeline(cam_prop.calibration["HgAr_pic"])) )

Buffer for Data Collection

DataCube takes a line with coordinates of wavelength (x-axis) against cross-track (y-axis), and stores the smile corrected version in its CircArrayBuffer.

To facilitate near real-time processing, a fast smile correction procedure is used. An option to use a fast binning procedure is also available. When using these two procedures, the overhead is roughly 2 ms on a Jetson board.

Instead of preallocating another buffer for another data collect, one can use the circular nature of the DataCube and use the internal buffer again without modification - just use DataCube.put like normal.

Storage Allocation

All data buffers are preallocated, and it's no secret that hyperspectral datacubes are memory hungry. For reference:

| along-track pixels | wavelength binning | RAM needed | time to collect at 10 ms exposure | time to save to SSD |
|---|---|---|---|---|
| 4096 | 4 nm | ≈ 800 MB | ≈ 55 s | ≈ 3 s |
| 1024 | no binning | ≈ 4 GB | ≈ 14 s | ≈ 15 s |
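As a rough sanity check, RAM use can be estimated from the buffer shape and dtype. The 905 cross-track pixels, 108-band count, and uint16 dtype below are illustrative assumptions, not fixed properties of any camera:

```python
import numpy as np

def estimate_ram_bytes(n_lines, cross_track, n_bands, dtype=np.uint16):
    """Bytes needed to preallocate an (n_lines, cross_track, n_bands) buffer."""
    return n_lines * cross_track * n_bands * np.dtype(dtype).itemsize

# Hypothetical shape: 4096 along-track lines, 905 cross-track pixels,
# 108 bands after binning
gb = estimate_ram_bytes(4096, 905, 108) / 1e9
print(f"{gb:.2f} GB")  # on the order of the 800 MB quoted above
```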

In reality, it is very difficult to work with raw data without binning due to the massive RAM usage and the extended time to save the NetCDF file to disk, which hinders real-time analysis. Binning drops the frame rate (at 10 ms exposure) from 90 fps to 75 fps. In our experimentation, an SSD mounted in an M.2 slot on a Jetson board provided the fastest experience. When using other development boards such as a Raspberry Pi 4, the USB 3.0 port is recommended over the USB 2.0 port.


source

DateTimeBuffer

 DateTimeBuffer (n:int=16)

Records timestamps in UTC time.


source

DateTimeBuffer.update

 DateTimeBuffer.update ()

Stores current UTC time in an internal buffer when this method is called.

timestamps = DateTimeBuffer(n=8)
for i in range(8):
    timestamps.update()
    
timestamps.data
array([datetime.datetime(2023, 5, 31, 3, 5, 3, 781193, tzinfo=datetime.timezone.utc),
       datetime.datetime(2023, 5, 31, 3, 5, 3, 781204, tzinfo=datetime.timezone.utc),
       datetime.datetime(2023, 5, 31, 3, 5, 3, 781206, tzinfo=datetime.timezone.utc),
       datetime.datetime(2023, 5, 31, 3, 5, 3, 781208, tzinfo=datetime.timezone.utc),
       datetime.datetime(2023, 5, 31, 3, 5, 3, 781209, tzinfo=datetime.timezone.utc),
       datetime.datetime(2023, 5, 31, 3, 5, 3, 781210, tzinfo=datetime.timezone.utc),
       datetime.datetime(2023, 5, 31, 3, 5, 3, 781212, tzinfo=datetime.timezone.utc),
       datetime.datetime(2023, 5, 31, 3, 5, 3, 781213, tzinfo=datetime.timezone.utc)],
      dtype=object)

Since datacubes can be incredibly demanding on RAM, our implementation includes a safety check so it's not possible to accidentally allocate more memory than is available. You can bypass this check with warn_mem_use=False, although this is not recommended. On systems with adequate swap management this can work well, but on development boards such as the Raspberry Pi 4, allocating more than the available memory will hang the operating system and you can't do anything but forcibly power off the board. (Learned from experience…)

Warning

Allocating more RAM than there is available is not recommended.

There are a few options to decrease your RAM usage:

1. decrease n_lines, and
2. use a processing_lvl >= 2, which includes real-time binning.

Tip

You can also close other programs running in the background which take up memory. For example, running the code in Jupyter Notebooks requires an open browser, which uses a significant chunk of RAM. You can experiment with smaller datacubes in a notebook, but run production code from a script if you do not require interactive widgets.

If you are trying to allocate more than 80% of your available RAM, there will be a prompt to confirm if you want to continue. Respond with y to continue or n to stop.


source

DataCube

 DataCube (n_lines:int=16, processing_lvl:int=-1, warn_mem_use:bool=True,
           json_path:str=None, pkl_path:str=None,
           print_settings:bool=False)

Facilitates the collection, viewing, and saving of hyperspectral datacubes.

Type Default Details
n_lines int 16 How many along-track pixels desired
processing_lvl int -1 Desired real time processing level
warn_mem_use bool True Raise error if trying to allocate too much memory (> 80% of available RAM)
json_path str None
pkl_path str None
print_settings bool False

source

DataCube.put

 DataCube.put (x:numpy.ndarray)

Applies the composed transforms and writes the 2D array into the data cube. Stores a timestamp for each push.


source

DataCube.save

 DataCube.save (save_dir:str, preconfig_meta_path:str=None, prefix:str='',
                suffix:str='', old_style:bool=False)

Saves a NetCDF file (and an RGB representation) to the directory save_dir, in a folder given by the date, with a file name given by the UTC time.

Type Default Details
save_dir str Path to folder where all datacubes will be saved at
preconfig_meta_path str None Path to a .json file that includes metadata fields to be saved inside datacube
prefix str Prepend a custom prefix to your file name
suffix str Append a custom suffix to your file name
old_style bool False Order of axis

source

DataCube.load_nc

 DataCube.load_nc (nc_path:str, old_style:bool=False,
                   warn_mem_use:bool=True)

Lazy load a NetCDF datacube into the DataCube buffer.

Type Default Details
nc_path str Path to a NetCDF4 file
old_style bool False Only for backwards compatibility for datacubes created before first release
warn_mem_use bool True Raise error if trying to allocate too much memory (> 80% of available RAM)

source

DataCube.show

 DataCube.show (plot_lib:str='bokeh', red_nm:float=640.0,
                green_nm:float=550.0, blue_nm:float=470.0,
                robust:Union[bool,int]=False, hist_eq:bool=False,
                quick_imshow:bool=False)

Generate an RGB image from chosen RGB wavelengths with histogram equalisation or percentile options. The plotting backend can be specified by plot_lib and can be “bokeh” or “matplotlib”. quick_imshow is used for saving figures quickly but cannot be used to make interactive plots.

Type Default Details
plot_lib str bokeh Plotting backend. This can be ‘bokeh’ or ‘matplotlib’
red_nm float 640.0 Wavelength in nm to use as the red
green_nm float 550.0 Wavelength in nm to use as the green
blue_nm float 470.0 Wavelength in nm to use as the blue
robust typing.Union[bool, int] False Saturated linear stretch, robust to outliers. E.g. setting robust to 2 will show the 2-98% percentile range; setting it to True defaults to robust=2
hist_eq bool False Choose to plot using histogram equalisation
quick_imshow bool False Used to skip holoviews and use matplotlib for a static plot
Returns Image a bokeh or matplotlib plot

load_nc expects the datacube to have coordinates (wavelength, cross-track, along-track). This is a format that can be viewed in ENVI and QGIS. If you have a datacube with coordinates (cross-track, along-track, wavelength), then set the parameter old_style=True.
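With plain numpy, an old-style cube can be reordered into the axis order load_nc expects using np.moveaxis (the axis sizes here are illustrative):

```python
import numpy as np

# Old-style order: (cross_track, along_track, wavelength)
old_cube = np.zeros((905, 256, 136), dtype=np.uint16)

# Move the wavelength axis to the front: (wavelength, cross_track, along_track)
new_cube = np.moveaxis(old_cube, -1, 0)
print(new_cube.shape)  # (136, 905, 256)
```

Note that np.moveaxis returns a view, so this reordering costs no extra memory until the array is written out.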

The plot_lib argument selects the plotting backend. Choose matplotlib if you want static figures; choose bokeh if you want to compose plots together and use interactive tools.

n = 256

dc = DataCube(n_lines=n,processing_lvl=2,json_path="../assets/cam_settings.json",pkl_path="../assets/cam_calibration.pkl")

np.random.seed(0)  # keeps the notebook data from changing every time it is run

for i in range(200):
    dc.put( np.random.randint(0,255,dc.settings["resolution"]) )

dc.show("bokeh")
Allocated 120.20 MB of RAM. There was 3048.16 MB available.