Sonador AI

Sonador AI provides an end-to-end suite of tools and best-practice guidance for machine learning in medical imaging. It includes a framework for implementing machine learning models on top of PyTorch or TensorFlow, along with tools for creating annotated datasets, tracking experiments and model versions, deploying to production, and integrating AI into clinical workflows.

Manage the Machine Learning Lifecycle

Sonador AI is a collection of integrated tools and best practices for managing the complete machine learning lifecycle. It provides software libraries for working with imaging metadata, pixel data, and three-dimensional representations of image stacks; interfacing with popular deep learning libraries such as PyTorch/MONAI and TensorFlow to create models; and integrating with systems such as MLflow that help with deployment and monitoring.

Explore and Visualize

Due to its power and flexibility, Jupyter has become the de facto standard for data science development. It allows developers and researchers to prototype complex workflows, validate results, and document design decisions in a single document.

Through its integration with Jupyter, Sonador provides an ideal environment for rapidly prototyping AI models and preparing 2D and 3D medical imaging data. Data can be retrieved from Orthanc using the Python client library, converted to NumPy arrays using PyDICOM, analyzed using SimpleITK, and visualized with ITK Widgets.
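
The snippet below is a minimal sketch of that exploration loop inside a notebook. It assumes a DICOM series has already been pulled from Orthanc into a local directory (the Sonador client call is omitted), and the directory path is a hypothetical placeholder; pydicom, SimpleITK, and itkwidgets are the standard packages of the same names.

```python
from pathlib import Path

import numpy as np
import pydicom
import SimpleITK as sitk
from itkwidgets import view

series_dir = Path("data/knee_ct_series")  # hypothetical download location

# Read individual slices with pydicom and stack them into a NumPy volume.
slices = [pydicom.dcmread(p) for p in sorted(series_dir.glob("*.dcm"))]
slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
volume = np.stack([ds.pixel_array for ds in slices])
print("Volume shape (slices, rows, cols):", volume.shape)

# Alternatively, let SimpleITK assemble the series with correct spacing,
# which is convenient for filtering, resampling, and other analysis.
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames(str(series_dir)))
image = reader.Execute()
print("Voxel spacing (mm):", image.GetSpacing())

# Interactive 2D/3D rendering inside JupyterLab via ITK Widgets.
view(volume)
```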

Sonador 3D: Brain Volume in JupyterLab
Sonador integrates with JupyterLab to enable interactive development and visualization. It's possible to download studies stored in Orthanc, convert them to NumPy arrays via PyDICOM, and visualize the resulting image volumes using ITK Widgets.
PyKNEEr: Assessing Cartilage Thickness
Jupyter integrates with a broad ecosystem of tools for summarizing and visualizing data, such as Pandas, scikit-learn, Seaborn, and more.
Sonador 3D: CT Volume of Knee in JupyterLab
ITK Widgets, based on the powerful Visualization Toolkit (VTK), provides state-of-the-art visualization of 2D and 3D data. Sonador 3D provides tools to manage the conversion of information in Orthanc to formats compatible with VTK.
Sonador 2D: Knee CT Slices in JupyterLab
Sonador AI provides connectors which allow data stored in Orthanc to be consumed by MONAI's pre-built network architectures. This allows for the rapid prototyping and assessment of new models.
Sonador AI (Workflow Step 1): Visualizing raw pixel data arrays in JupyterLab
Jupyter is a powerful tool for implementing and documenting complex workflows. This example shows how it can be used to prepare image masks from a 3D shape to train an AI segmentation model. Step 1: Visualize slices of the image volume.
Sonador AI (Workflow Step 2): Extracting image masks from contours obtained from a 3D model
Step 2: Slice 3D bone shape and project into MRI volume.
Sonador AI (Workflow Step 3): Visualizing Image Masks in JupyterLab
Step 3: Capture the intersection of the shape contours with the slice planes (sketched in code below).
Sonador AI (Workflow Step 4): Jupyter integration allows complex 3D transforms to be verified
Step 4: Verify image mask volume prior to saving as DICOM-SEG and uploading to Orthanc.
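
The mask-building step of this workflow can be sketched in a few lines. The snippet below is a minimal illustration of Step 3, assuming the per-slice contours have already been obtained by slicing the 3D bone shape (Step 2); the input names and shapes are hypothetical, scikit-image provides the polygon rasterizer, and the DICOM-SEG export and Orthanc upload from Step 4 are omitted.

```python
import numpy as np
from skimage.draw import polygon

def contours_to_mask(contours_per_slice, volume_shape):
    """Fill each slice's contour polygon(s) to build a 3D binary mask.

    contours_per_slice: dict mapping slice index -> list of (N, 2) arrays
                        of (row, col) vertices in pixel coordinates.
    volume_shape:       (num_slices, rows, cols) of the source MRI volume.
    """
    mask = np.zeros(volume_shape, dtype=np.uint8)
    for z, contours in contours_per_slice.items():
        for verts in contours:
            # Rasterize the contour polygon onto the slice plane.
            rr, cc = polygon(verts[:, 0], verts[:, 1], shape=volume_shape[1:])
            mask[z, rr, cc] = 1
    return mask

# Step 4 (verification): overlay a few mask slices on the MRI volume before
# encoding the result as DICOM-SEG and uploading it back to Orthanc.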

Create and Assess AI Models

Sonador is compatible with the Medical Open Network for Artificial Intelligence (MONAI) framework. MONAI is a collection of components and programs designed to build end-to-end AI systems based on medical imaging standards and best practices. Built on top of PyTorch, MONAI provides a framework for multi-dimensional medical imaging pipelines; implementations of networks, losses, and evaluation metrics; and support for multi-GPU training.
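
As a rough illustration of those building blocks, the sketch below wires a MONAI 3D UNet, Dice loss, and Dice metric into a minimal training and validation step. The channel, stride, and learning-rate settings are illustrative placeholders rather than recommended values.

```python
import torch
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.nets import UNet
from monai.networks.utils import one_hot

model = UNet(
    spatial_dims=3,              # volumetric medical imaging data
    in_channels=1,               # e.g. a single MRI/CT channel
    out_channels=2,              # background + structure of interest
    channels=(16, 32, 64, 128),  # illustrative feature sizes
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
metric = DiceMetric(include_background=False)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of (image, label) volumes."""
    optimizer.zero_grad()
    outputs = model(images)
    loss = loss_fn(outputs, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def validation_dice(images, labels):
    """Dice score on a validation batch; labels are (B, 1, D, H, W) class maps."""
    with torch.no_grad():
        preds = model(images).argmax(dim=1, keepdim=True)
    metric(y_pred=one_hot(preds, num_classes=2), y=one_hot(labels, num_classes=2))
    return metric.aggregate().item()
```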

In addition to the core modules, MONAI also includes MONAI Label, a tool that enables researchers to build AI annotation models quickly. Available within 3D Slicer as a plugin, MONAI Label is able to observe as users segment or align structures of interest and then update a background model so that it becomes more accurate based on user feedback.

Sonador AI implements connectors that allow data stored in Orthanc to be loaded into PyTorch/MONAI directly from the Sonador Python client, and results to be written back to DICOM. These components allow models to be created without extensive pre-processing and DICOM data to be used directly, simplifying the challenge of integrating AI into clinical environments.
Sonador/MONAI Label: MONAI Label is a framework which facilitates interactive medical image annotation
MONAI Label is a framework that helps facilitate interactive medical image annotation. It is able to observe users as they segment or align structures of interest and iteratively update the model so that it becomes more accurate.

Verify and Validate

While a challenge in any industry, labeling medical imaging data for training and then verifying and validating model outputs is particularly difficult and time-consuming. Compared to other industries, where a layperson might label an image as "car" or "street" without much trouble, reviewing medical imaging data requires a professional's opinion.

To streamline labeling and review tasks, OHIF provides a powerful set of annotation tools to capture measurements, findings, or other features of interest. The annotations can be paired with the original study, saved as DICOM-SR documents, and sent back to Orthanc, enabling them to be transferred along with the source images for downstream processing.
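
For downstream processing, annotation documents can be pulled back out of Orthanc programmatically. The snippet below is a minimal sketch using Orthanc's standard REST API and pydicom; the server URL and study identifier are placeholders, and authentication is omitted.

```python
import io

import pydicom
import requests

ORTHANC = "http://localhost:8042"    # placeholder Orthanc endpoint
study_id = "<orthanc-study-id>"      # placeholder study identifier

# List the instances in the study and keep the structured reports (SR).
instances = requests.get(f"{ORTHANC}/studies/{study_id}/instances").json()
for inst in instances:
    raw = requests.get(f"{ORTHANC}/instances/{inst['ID']}/file").content
    ds = pydicom.dcmread(io.BytesIO(raw))
    if ds.get("Modality") == "SR":
        print("Found DICOM-SR:", ds.get("SeriesDescription", "<no description>"))
```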

Sonador/OHIF: OHIF supports a broad set of annotation tools for acquiring measurements or specifying findings
OHIF supports a broad set of annotation tools for acquiring measurements or specifying findings. Annotations and labels are captured as DICOM-SR documents, which are paired with the DICOM study and can be transferred along with the source images.
Sonador Viewer: Multi-modality MRI/CT/SEG
DICOM-SEG and DICOM-RT data can also be viewed in OHIF, allowing for review and approval of segmentation and contour lines.
Sonador Viewer: Multi-planar reconstruction (MPR) with segmentations

Publish and Deploy

Sonador AI integrates with MLflow to help manage the machine learning lifecycle. MLflow provides a central system for tracking parameters during training to ensure reproducibility, hosting binary artifacts centrally to ease deployment, and recording which models are deployed in which environments.

MLflow is designed to work with any machine learning library, algorithm, deployment tool, or language. It provides a set of REST APIs and simple data formats that allow it to be integrated into a variety of tools, including REST services, streaming data consumers, and serverless functions.
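
The snippet below is a minimal sketch of that tracking workflow: recording hyperparameters, per-epoch metrics, and an output artifact for a training run from inside Jupyter. The tracking URI, experiment name, and training loop are placeholders; if set_tracking_uri is omitted, MLflow logs to a local ./mlruns directory.

```python
import mlflow

def train_one_epoch(epoch):
    """Stand-in for a real training loop; returns a fake validation score."""
    return 0.70 + 0.02 * epoch

mlflow.set_tracking_uri("http://localhost:5000")   # hypothetical MLflow server
mlflow.set_experiment("knee-segmentation")         # hypothetical experiment

with mlflow.start_run(run_name="unet-baseline"):
    # Parameters: captured once per run for reproducibility.
    mlflow.log_param("learning_rate", 1e-4)
    mlflow.log_param("batch_size", 4)

    for epoch in range(10):
        val_dice = train_one_epoch(epoch)
        # Metrics: logged per step to follow convergence over time.
        mlflow.log_metric("val_dice", val_dice, step=epoch)

    # Artifacts: any output file, such as a report or exported weights.
    with open("validation_report.txt", "w") as fh:
        fh.write(f"final val_dice: {val_dice:.3f}\n")
    mlflow.log_artifact("validation_report.txt")
```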

MLflow: MLflow can be used from within Jupyter to track training runs and log their parameters
MLflow can be used from within Jupyter to track training runs and log their parameters. This can be used to create a database providing insight into what factors create the best models.
MLflow: Central Model Repository
MLflow provides a central model registry, allowing you to save model instances for later deployment.
MLflow: Once registered, it is possible to compare runs against one another and find which parameters produced the best models.
Once registered in MLflow, it is possible to compare runs against one another and find which parameters produced the best models. Runs can be viewed from the web interface or filtered via the API, as sketched below.
MLflow: MLflow lets you track parameters, metrics (potentially providing information about the model's convergence), and output files or data (artifacts)
MLflow lets you track parameters, metrics (potentially providing information about the model's convergence), and output files or data (artifacts). It is also possible to save a model to the registry in a framework-agnostic format that can be used by either scikit-learn or Spark.
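
Following on from the tracking sketch earlier in this section, the snippet below shows one way to filter runs and promote the best one into the model registry via the API. The experiment name, metric threshold, and registered-model name are placeholders, and it assumes the chosen run logged a model under the artifact path "model".

```python
import mlflow

mlflow.set_experiment("knee-segmentation")         # hypothetical experiment

# Filter and rank runs via the API instead of the web interface.
best = mlflow.search_runs(
    filter_string="metrics.val_dice > 0.8",
    order_by=["metrics.val_dice DESC"],
    max_results=1,
)

if not best.empty:
    run_id = best.iloc[0]["run_id"]
    # Promote the model logged under this run into the central registry.
    mlflow.register_model(f"runs:/{run_id}/model", "knee-segmentation-unet")
```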
