Utilities Subpackage
- class cv2_group.utils.api_models.SaveImageRequest(*, image_data: str, username: str, image_type: str)
Model for save image endpoint request
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- cv2_group.utils.azure_integration.get_azure_datastore() Datastore | None
Retrieve the currently initialized Azure ML Datastore instance.
- Returns:
The default Azure ML Datastore object if initialized, else None.
- Return type:
Optional[Datastore]
- cv2_group.utils.azure_integration.get_azure_workspace() Workspace | None
Retrieve the currently initialized Azure ML Workspace instance.
- Returns:
The Azure ML Workspace object if initialized, else None.
- Return type:
Optional[Workspace]
- cv2_group.utils.azure_integration.initialize_azure_workspace() bool
Initialize Azure ML Workspace connection using Service Principal credentials.
This function attempts to authenticate to the Azure ML Workspace using service principal credentials provided via environment variables. It sets up global variables for the workspace and its default datastore upon successful connection.
- Returns:
- True if the workspace and datastore are successfully initialized,
False otherwise.
- Return type:
bool
- cv2_group.utils.azure_integration.is_azure_available() bool
Check if Azure ML Workspace and Datastore are available and properly configured.
- Returns:
True if both workspace and datastore are initialized, False otherwise.
- Return type:
bool
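The three accessors above follow a module-level singleton pattern: initialize once, then query the cached handles. A minimal sketch of that pattern with plain-Python stand-ins for the Azure objects (names here are illustrative, not the actual module internals):

```python
from typing import Optional

# Module-level cache, mirroring the documented initialize/get/is_available trio.
_workspace: Optional[object] = None
_datastore: Optional[object] = None

def initialize(workspace: object, datastore: object) -> bool:
    """Store the handles; return True on success, False otherwise."""
    global _workspace, _datastore
    if workspace is None or datastore is None:
        return False
    _workspace, _datastore = workspace, datastore
    return True

def get_workspace() -> Optional[object]:
    """Return the cached workspace, or None if not yet initialized."""
    return _workspace

def is_available() -> bool:
    """True only when both handles have been initialized."""
    return _workspace is not None and _datastore is not None
```

In the real module, `initialize_azure_workspace()` performs the service-principal authentication itself rather than taking the handles as arguments.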
- cv2_group.utils.azure_model_loader.load_model_from_azure_registry(model_name=None, model_label=None)
Load a trained model from the Azure ML Model Registry.
This function retrieves a registered model using the Azure ML SDK, downloads its artifacts locally, and then attempts to load the model using MLflow's Keras flavor, falling back to the generic PyFunc flavor.
Configuration parameters such as model name, model label, Azure subscription, resource group, workspace name, and credentials are read from environment variables if not provided as arguments.
- Parameters:
model_name (str, optional) – Name of the model to load. If None, reads from ‘AZURE_MODEL_NAME’ env var.
model_label (str, optional) – Label/version of the model to load (e.g., ‘latest’). Defaults to ‘latest’.
- Raises:
EnvironmentError – If required Azure environment variables are missing.
FileNotFoundError – If downloaded model artifacts are not found in expected paths.
Exception – If loading the model fails at any stage.
- Returns:
The loaded model object (Keras model or MLflow PyFunc model).
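The argument/environment-variable fallback described above can be sketched as follows. Only `AZURE_MODEL_NAME` and the `'latest'` default are documented; the helper name is hypothetical:

```python
import os
from typing import Optional, Tuple

def resolve_model_config(model_name: Optional[str] = None,
                         model_label: Optional[str] = None) -> Tuple[str, str]:
    """Resolve model name/label from arguments, falling back to env vars.

    Hypothetical helper illustrating the documented fallback behavior.
    """
    name = model_name or os.environ.get("AZURE_MODEL_NAME")
    if name is None:
        # Mirrors the documented EnvironmentError for missing configuration.
        raise EnvironmentError("AZURE_MODEL_NAME is not set")
    label = model_label or "latest"  # documented default
    return name, label
```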
- cv2_group.utils.binary.ensure_binary_mask(mask: ndarray, threshold: float = 0.3) ndarray
Converts a mask to binary format (values 0 or 255).
- Parameters:
mask (np.ndarray) – Input mask, expected to be float (0-1) or int.
threshold (float) – Threshold to binarize the mask if it’s in float format.
- Returns:
Binary mask with dtype uint8 and values 0 or 255.
- Return type:
np.ndarray
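A NumPy sketch of the documented behavior (not the package's actual implementation): float masks are thresholded, integer masks are treated as nonzero-vs-zero, and the result is always uint8 with values 0 or 255.

```python
import numpy as np

def ensure_binary_mask_sketch(mask: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Return a uint8 mask with values 0 or 255."""
    if np.issubdtype(mask.dtype, np.floating):
        binary = mask > threshold  # threshold float masks in [0, 1]
    else:
        binary = mask > 0          # any nonzero int counts as foreground
    return binary.astype(np.uint8) * 255
```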
- cv2_group.utils.dashboard_stats.get_dashboard_stats() Dict[str, Any]
Retrieve the current dashboard statistics including counts and processing times.
- Returns:
- Dictionary containing images processed, masks reanalyzed,
files downloaded, average processing time (seconds), and total processing time (seconds).
- Return type:
Dict[str, Any]
- cv2_group.utils.dashboard_stats.reset_dashboard_stats() None
Reset all dashboard statistics including counters and timing data, and log the reset.
- cv2_group.utils.dashboard_stats.track_download() None
Increment the count of files downloaded and log the event.
- cv2_group.utils.dashboard_stats.track_image_processed(processing_time: float) None
Track a completed image processing event by incrementing the count, updating timing stats, and logging.
- Parameters:
processing_time (float) – Time taken to process the image in seconds.
- cv2_group.utils.dashboard_stats.track_mask_reanalyzed(processing_time: float) None
Track a completed mask reanalysis event by incrementing the count, updating timing stats, and logging.
- Parameters:
processing_time (float) – Time taken to reanalyze the mask in seconds.
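The counter-and-timing bookkeeping behind these tracking functions can be sketched as below. The stat keys and internal state layout are assumptions for illustration:

```python
from typing import Any, Dict

# Assumed internal state; the real module keeps its own counters.
_stats = {"images_processed": 0, "total_time": 0.0}

def track_image_processed_sketch(processing_time: float) -> None:
    """Increment the processed count and accumulate processing time."""
    _stats["images_processed"] += 1
    _stats["total_time"] += processing_time

def get_stats_sketch() -> Dict[str, Any]:
    """Report counts plus total and average processing time in seconds."""
    n = _stats["images_processed"]
    return {
        "images_processed": n,
        "total_processing_time": _stats["total_time"],
        "average_processing_time": _stats["total_time"] / n if n else 0.0,
    }
```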
- class cv2_group.utils.feedback_system.FeedbackAnalysisData(*, roi_data: List[Dict[str, Any]], image_filename: str)
Model for storing analysis table data in feedback
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class cv2_group.utils.feedback_system.FeedbackEntry(*, id: str, image_data: FeedbackImageData, analysis_data: FeedbackAnalysisData, message: str, user_identifier: str | None = None, timestamp: datetime, image_filename: str, model_source: str)
Complete feedback entry with metadata
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class cv2_group.utils.feedback_system.FeedbackImageData(*, original_image: str, overlay_image: str, mask_image: str, drawing_image: str)
Model for storing image data in feedback (now stores blob paths or URLs)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class cv2_group.utils.feedback_system.FeedbackSubmission(*, image_data: FeedbackImageData, analysis_data: FeedbackAnalysisData, message: Annotated[str, MaxLen(max_length=500)], user_identifier: str | None = None, model_source: str)
Model for feedback submission
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- cv2_group.utils.feedback_system.add_feedback_entry(ml_client: azure.ai.ml.MLClient, feedback: FeedbackSubmission, image_filename: str) str
Add a new feedback entry, saving associated images and managing storage limits.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
feedback (FeedbackSubmission) – Feedback data including images and analysis.
image_filename (str) – Name of the image file related to this feedback.
- Returns:
Unique identifier of the new feedback entry.
- Return type:
str
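The "managing storage limits" behavior can be illustrated with a keep-newest-N policy, consistent with entries being returned newest first. The cap value and helper are hypothetical; the real limit is internal to the package:

```python
import uuid
from typing import Any, Dict, List

MAX_ENTRIES = 50  # hypothetical cap for illustration

def add_entry_sketch(entries: List[Dict[str, Any]], payload: Dict[str, Any]) -> str:
    """Prepend a new entry (newest first), trim to MAX_ENTRIES, return its id."""
    entry_id = str(uuid.uuid4())
    entries.insert(0, {"id": entry_id, **payload})
    del entries[MAX_ENTRIES:]  # drop the oldest entries beyond the cap
    return entry_id
```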
- cv2_group.utils.feedback_system.delete_feedback_entry(ml_client: azure.ai.ml.MLClient, feedback_id: str) bool
Delete a specific feedback entry and its associated images from Azure blob storage.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
feedback_id (str) – Unique identifier of the feedback entry to delete.
- Returns:
True if deletion was successful, False if entry was not found.
- Return type:
bool
- cv2_group.utils.feedback_system.get_blob_client(ml_client: azure.ai.ml.MLClient, blob_name: str) BlobClient
Create and return a BlobClient for a specific blob in the default datastore.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
blob_name (str) – Name of the blob to access.
- Returns:
Client to interact with the specified blob.
- Return type:
BlobClient
- cv2_group.utils.feedback_system.get_blob_url(ml_client: azure.ai.ml.MLClient, blob_name: str) str
Construct and return the URL for a blob in the default datastore.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
blob_name (str) – Name of the blob.
- Returns:
URL to access the specified blob.
- Return type:
str
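A sketch of the URL construction, assuming the standard Azure Blob Storage URL scheme (the real function derives the account and container from the default datastore):

```python
from urllib.parse import quote

def blob_url_sketch(account: str, container: str, blob_name: str) -> str:
    """Build the standard Azure Blob Storage URL for a blob."""
    return (
        f"https://{account}.blob.core.windows.net/"
        f"{container}/{quote(blob_name)}"  # percent-encode, keeping '/' separators
    )
```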
- cv2_group.utils.feedback_system.get_feedback_entries(ml_client: azure.ai.ml.MLClient) List[Dict[str, Any]]
Retrieve all feedback entries (newest first) from Azure blob storage.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
- Returns:
List of all feedback entries.
- Return type:
List[Dict[str, Any]]
- cv2_group.utils.feedback_system.get_feedback_entries_for_image(ml_client: azure.ai.ml.MLClient, image_filename: str) List[Dict[str, Any]]
Retrieve feedback entries for a specific image (newest first).
- Parameters:
ml_client (MLClient) – Azure ML client instance.
image_filename (str) – Filename of the image to filter feedback entries.
- Returns:
List of feedback entries related to the specified image.
- Return type:
List[Dict[str, Any]]
- cv2_group.utils.feedback_system.initialize_feedback_storage(ml_client: azure.ai.ml.MLClient)
Initialize feedback storage by creating an empty JSON file in Azure blob storage.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
- cv2_group.utils.feedback_system.load_feedback_data(ml_client: azure.ai.ml.MLClient) List[Dict[str, Any]]
Load feedback data from a JSON file stored in Azure blob storage.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
- Returns:
List of feedback entries, with timestamps parsed as datetime objects. Returns an empty list if loading fails.
- Return type:
List[Dict[str, Any]]
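The documented parse-timestamps-and-fall-back-to-empty behavior can be sketched with the standard library (the real function also handles the blob download):

```python
import json
from datetime import datetime
from typing import Any, Dict, List

def parse_feedback_json_sketch(raw: str) -> List[Dict[str, Any]]:
    """Parse feedback JSON, converting ISO timestamps to datetime objects.

    Returns an empty list if parsing fails, mirroring the documented fallback.
    """
    try:
        entries = json.loads(raw)
        for entry in entries:
            if "timestamp" in entry:
                entry["timestamp"] = datetime.fromisoformat(entry["timestamp"])
        return entries
    except (ValueError, TypeError):
        return []
```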
- cv2_group.utils.feedback_system.save_base64_image_to_blob(ml_client: azure.ai.ml.MLClient, base64_str: str, feedback_id: str, image_type: str) str
Save a base64-encoded image to Azure blob storage and return its URL.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
base64_str (str) – Base64-encoded PNG image string.
feedback_id (str) – Unique identifier for the feedback entry.
image_type (str) – Type/category of the image (e.g., original_image).
- Returns:
URL of the saved image blob.
- Return type:
str
- cv2_group.utils.feedback_system.save_feedback_data(ml_client: azure.ai.ml.MLClient, data: List[Dict[str, Any]]) None
Save feedback data to a JSON file in Azure blob storage.
- Parameters:
ml_client (MLClient) – Azure ML client instance.
data (List[Dict[str, Any]]) – Feedback data to save. Timestamps will be converted to ISO strings.
- cv2_group.utils.image_helpers.decode_b64_png_to_ndarray(b64_string: str) ndarray
Decode a base64-encoded PNG image string into a NumPy ndarray.
- Parameters:
b64_string (str) – Base64 encoded PNG image string.
- Returns:
- Decoded image as a NumPy array with original image channels
and depth preserved.
- Return type:
np.ndarray
- cv2_group.utils.image_helpers.unpack_model_response(base64_encoded_compressed_response_string: str) Tuple
Unpack and decode a model response from a base64-encoded, compressed JSON string.
- Parameters:
base64_encoded_compressed_response_string (str) – Base64 encoded, zlib-compressed JSON string containing the model’s response data.
- Returns:
- cropped_image_for_prediction (np.ndarray):
Cropped image for prediction.
- uncropped_binary_mask (np.ndarray):
Full-size binary mask (uint8).
- original_bbox (Any):
Original bounding box coordinates from the response.
- square_offsets (Any):
Offset values for cropping/square adjustments.
- binary_mask_cropped_square (np.ndarray):
Binary mask cropped to a square (uint8).
- Return type:
Tuple
- Raises:
Exception – If decompression, decoding, or parsing fails.
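The base64 → zlib → JSON pipeline can be sketched as a round trip; the payload keys below follow the documented return tuple, but the exact wire format is an assumption:

```python
import base64
import json
import zlib

import numpy as np

def pack_sketch(mask: np.ndarray, bbox) -> str:
    """Encode a payload as a base64 string of zlib-compressed JSON."""
    payload = {"uncropped_binary_mask": mask.tolist(), "original_bbox": bbox}
    return base64.b64encode(zlib.compress(json.dumps(payload).encode())).decode()

def unpack_sketch(encoded: str):
    """Reverse the encoding: base64 decode -> zlib decompress -> JSON -> arrays."""
    payload = json.loads(zlib.decompress(base64.b64decode(encoded)))
    mask = np.asarray(payload["uncropped_binary_mask"], dtype=np.uint8)
    return mask, payload["original_bbox"]
```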
- class cv2_group.utils.llama_service.LlamaRequest(*, prompt: str, system_prompt: str | None = None, temperature: float | None = 0.7)
Model for Llama API request.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class cv2_group.utils.llama_service.LlamaResponse(*, response: str, error: str | None = None)
Model for Llama API response.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class cv2_group.utils.llama_service.LlamaService
Service class for interacting with Llama model.
- generate_response(request: LlamaRequest) LlamaResponse
Generate a response using the Llama model via Open Web UI API.
- set_system_prompt(prompt: str) None
Update the default system prompt.
- cv2_group.utils.predicting.predict_from_array(image: ndarray, original_image_shape: Tuple[int, int], model: Any) Tuple[ndarray, ndarray, Tuple[int, int, int, int], Tuple[int, int], ndarray]
Encapsulates the prediction logic for a single image. Returns the cropped image used for prediction, the binary mask uncropped back to the original image's dimensions, the cropping parameters, and the binary mask before uncropping (for cropped visualizations).
- Parameters:
image (np.ndarray) – The input image as a NumPy array (can be grayscale or BGR).
original_image_shape (Tuple[int, int]) – The (height, width) of the image before any cropping by crop_image.
model (Any) – The globally loaded Keras/TensorFlow model.
- Returns:
cropped_image_for_prediction: The square image actually fed into the model (before padding).
uncropped_binary_mask: The predicted binary mask, uncropped and resized to the original_image_shape.
original_bbox: Bounding box (x, y, width, height) of the largest component in the original input image.
square_offsets: Offsets (x_offset, y_offset) used to place the component within the square_image.
binary_mask_cropped_square: The binary mask before uncropping (i.e., matching the cropped_image_for_prediction size).
- Return type:
Tuple[np.ndarray, np.ndarray, Tuple[int, int, int, int], Tuple[int, int], np.ndarray]
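The bounding-box and square-offset bookkeeping returned above can be sketched as follows. For simplicity this uses the full nonzero extent of the mask, whereas the real function uses the largest connected component:

```python
from typing import Tuple

import numpy as np

def bbox_and_square_offsets(mask: np.ndarray) -> Tuple[Tuple[int, int, int, int],
                                                       Tuple[int, int]]:
    """Bounding box (x, y, w, h) of nonzero pixels, plus the offsets that
    centre that box inside a square of side max(w, h)."""
    ys, xs = np.nonzero(mask)
    x, y = int(xs.min()), int(ys.min())
    w, h = int(xs.max()) - x + 1, int(ys.max()) - y + 1
    side = max(w, h)
    x_off, y_off = (side - w) // 2, (side - h) // 2
    return (x, y, w, h), (x_off, y_off)
```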
- cv2_group.utils.predicting.predict_root(patches: ndarray, model: Any) ndarray
Performs root segmentation prediction using the loaded model.
- Parameters:
patches (np.ndarray) – Array of image patches (pre-processed for the model).
model (Any) – The loaded Keras/TensorFlow model.
- Returns:
The prediction output from the model.
- Return type:
np.ndarray
- cv2_group.utils.visualization.create_side_by_side_visualization(original_image: ndarray, blue_mask: ndarray, green_mask: ndarray) bytes
Creates a single image showing two versions of the original image side-by-side, one with the blue mask overlay and one with the green mask overlay.
- Parameters:
original_image (np.ndarray) – The original BGR image.
blue_mask (np.ndarray) – The binary mask from the ‘blue’ model.
green_mask (np.ndarray) – The binary mask from the ‘green’ model.
- Returns:
The resulting image encoded as PNG bytes.
- Return type:
bytes
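The overlay-and-stack step can be sketched in pure NumPy (the real function also encodes the result as PNG bytes; the blend factor here is an assumption):

```python
import numpy as np

def side_by_side_sketch(image: np.ndarray, blue_mask: np.ndarray,
                        green_mask: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend each mask over a copy of the BGR image, stack copies horizontally."""
    def overlay(img, mask, color):
        out = img.astype(np.float32)
        sel = mask > 0
        out[sel] = (1 - alpha) * out[sel] + alpha * np.array(color, np.float32)
        return out.astype(np.uint8)

    left = overlay(image, blue_mask, (255, 0, 0))    # blue in BGR order
    right = overlay(image, green_mask, (0, 255, 0))  # green in BGR order
    return np.hstack([left, right])
```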
- cv2_group.utils.visualization.draw_all_root_skeletons_on_image(image: ndarray, binary_mask: ndarray, color: Tuple[int, int, int] = (255, 0, 255), thickness: int = 1) ndarray
Overlays all root skeletons (from connected components) on the image.
- Parameters:
image (np.ndarray) – The original image (BGR).
binary_mask (np.ndarray) – The binary mask (0/255 or 0/1).
color (Tuple[int, int, int]) – BGR color for skeletons (default magenta).
thickness (int) – Line thickness.
- Returns:
Image with all root skeletons overlaid.