
To every medical professional in the world:
harnessing the power of technology
to create a more comfortable healthcare environment.

Medmain provides a total solution for pathological diagnosis, from the digitization of
pathology specimens to diagnostic support driven by our proprietary software.
By applying cutting-edge technology, we reduce the growing burden on clinical sites
amid a chronic shortage of pathologists.

  • An environment for requesting and supporting
    pathological diagnoses and sharing cases,
    optimally and in a timely manner

  • Reduced effort and time lost in managing
    and transporting physical pathology specimens

  • Promotion of medical DX through the
    construction of a large-scale database

The two services
Medmain provides

Medmain offers two services that support digital pathology.
We can provide an end-to-end service, from building a digital pathology environment
to supporting the use of virtual slide images in diagnosis.
Individual features can also be offered separately, tailored to each facility.

PidPort

Supporting the front lines of
pathological diagnosis with technology

PidPort combines three capabilities: (1) cloud storage for storing, managing, viewing, and making use of digitized pathology image data; (2) remote diagnosis and consultation, which lets you request a diagnosis from a pathologist online and share cases quickly; and (3) AI analysis (*), in which AI screens and double-checks pathology image data. As long as you have an internet connection, PidPort can be used at any time, with no equipment installation or other up-front costs. It also comes standard with our proprietary high-speed viewer, designed for visibility and comfort.

* In Japan, the AI analysis features are planned for future release.

Imaging Center

Supporting the construction of
a "digital pathology" environment

Imaging Center offers fast, affordable contract creation of whole slide images (WSIs). We digitize the glass tissue and cytology slides (pathology specimens) you send us and deliver them as high-resolution virtual slides. Digitized slides can be used to archive and reuse past specimens, to hold in-house and departmental conferences and case reviews, to build quality-control programs across affiliated facilities, and to enable remote pathology consultation with other institutions, as well as to prepare for the coming era of AI-assisted workflows and diagnosis. By digitizing your specimens and proposing new ways to use them, Imaging Center brings new value to the many tasks surrounding pathological diagnosis and provides full support for building a digital pathology environment.

Updates

2021/08/02

PidPort now hides the left-hand menu on small screens such as smartphones (version 0.25.0).

2021/06/30

PidPort can now open images in a new tab (version 0.24.0).

2021/06/29

PidPort can now hide the toolbar (version 0.23.0).

Medmain's digital pathology solution:
how the service works

We can provide an end-to-end service, from digitizing pathology specimens
to storing image data and remote pathological diagnosis. Adding the AI analysis
features planned for future release will let us support efficient, rapid
pathological diagnosis from start to finish.

Introducing Medmain AI

Through joint research with multiple medical institutions, Medmain has created whole slide images (WSIs) of several hundred thousand pathology specimens, had pathologists build training data using our in-house annotation tool, and trained deep learning models on these data to develop AI models for pathology image analysis. AI analysis is currently practical for histological assessment of the most common case types: stomach, colon, and breast (malignant epithelial tumors, benign epithelial tumors, and non-neoplastic lesions), lung (malignant epithelial tumors and non-neoplastic lesions), and pancreas (detection of adenocarcinoma in endoscopic ultrasound-guided fine-needle aspiration specimens), as well as cytological assessment of uterine cervix and urine samples (presence or absence of neoplasia). We plan to continue research, development, and deployment for additional organs and case types.
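The pipeline described above (WSI creation, pathologist annotation, then deep learning) typically begins by cutting each gigapixel slide into fixed-size tiles and deriving a tile label from the annotation mask. The sketch below illustrates that generic first step with NumPy; the tile size, stride, and positive-pixel threshold are illustrative assumptions, not Medmain's actual parameters.

```python
import numpy as np

def extract_training_tiles(wsi, mask, tile=512, stride=512, min_pos=0.05):
    """Cut a whole slide image array (H, W, 3) into fixed-size tiles and
    label each tile from a pathologist's annotation mask (H, W).
    A tile is labelled positive when at least `min_pos` of its pixels
    fall inside an annotated (mask > 0) region."""
    tiles, labels = [], []
    h, w = mask.shape
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch_mask = mask[y:y + tile, x:x + tile]
            tiles.append(wsi[y:y + tile, x:x + tile])
            labels.append(int(patch_mask.mean() >= min_pos))
    return np.stack(tiles), np.array(labels)
```

In production pipelines the slide would be read region by region from a pyramidal file format rather than held in memory as one array, but the tiling and labelling logic is the same.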

Research publications

Published: 14 October 2021, Scientific Reports

A deep learning model for gastric diffuse-type adenocarcinoma classification in whole slide images

Gastric diffuse-type adenocarcinoma represents a disproportionately high percentage of cases of gastric cancers occurring in the young, and its relative incidence seems to be on the rise. Usually it affects the body of the stomach, and it presents shorter duration and worse prognosis compared with the differentiated (intestinal) type adenocarcinoma. The main difficulty encountered in the differential diagnosis of gastric adenocarcinomas occurs with the diffuse-type. As the cancer cells of diffuse-type adenocarcinoma are often single and inconspicuous in a background of desmoplasia and inflammation, it can often be mistaken for a wide variety of non-neoplastic lesions including gastritis or reactive endothelial cells seen in granulation tissue. In this study we trained deep learning models to classify gastric diffuse-type adenocarcinoma from WSIs. We evaluated the models on five test sets obtained from distinct sources, achieving receiver operator curve (ROC) area under the curves (AUCs) in the range of 0.95–0.99. The highly promising results demonstrate the potential of AI-based computational pathology for aiding pathologists in their diagnostic workflow. Paper details

Modified: 27 Aug 2021, Proceedings of Machine Learning Research

Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning

Transfer learning from ImageNet is the go-to approach when applying deep learning to medical images. The approach is either to fine-tune a pre-trained model or use it as a feature extractor. Most modern architectures contain batch normalisation layers, and fine-tuning a model with such layers requires taking a few precautions as they consist of trainable and non-trainable weights and have two operating modes: training and inference. Attention is primarily given to the non-trainable weights used during inference, as they are the primary source of unexpected behaviour or degradation in performance during transfer learning. It is typically recommended to fine-tune the model with the batch normalisation layers kept in inference mode during both training and inference. In this paper, we pay closer attention instead to the trainable weights of the batch normalisation layers, and we explore their expressive influence in the context of transfer learning. We find that only fine-tuning the trainable weights (scale and centre) of the batch normalisation layers leads to similar performance as fine-tuning all of the weights, with the added benefit of faster convergence. We demonstrate this on seven publicly available medical imaging datasets, using four different model architectures. Paper details
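The recipe the abstract describes (keep batch normalisation in inference mode with frozen running statistics, and train only the scale γ and centre β) can be illustrated with a minimal NumPy batch-norm layer. The shapes, learning rate, and update rule here are illustrative, not the paper's actual training setup.

```python
import numpy as np

class FrozenBatchNorm:
    """Batch normalisation kept in inference mode: inputs are normalised
    with *frozen* running statistics, while the scale (gamma) and centre
    (beta) remain the only trainable weights, as in the paper's recipe."""

    def __init__(self, running_mean, running_var, eps=1e-5):
        self.mean, self.var, self.eps = running_mean, running_var, eps
        self.gamma = np.ones_like(running_mean)   # trainable scale
        self.beta = np.zeros_like(running_mean)   # trainable centre

    def forward(self, x):
        # Normalise with the frozen statistics (inference mode).
        self.x_hat = (x - self.mean) / np.sqrt(self.var + self.eps)
        return self.gamma * self.x_hat + self.beta

    def backward(self, grad_out, lr=0.1):
        # Gradients flow only into gamma and beta; the running
        # statistics are never updated during fine-tuning.
        self.gamma -= lr * (grad_out * self.x_hat).mean(axis=0)
        self.beta -= lr * grad_out.mean(axis=0)
```

In a real framework the equivalent is to freeze every layer, keep batch-norm layers in inference mode, and mark only their affine (scale and centre) parameters as trainable.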


Published: 30 June 2021, Technology in Cancer Research & Treatment

Deep Learning Models for Gastric Signet Ring Cell Carcinoma Classification in Whole Slide Images

Signet ring cell carcinoma (SRCC) of the stomach is a rare type of cancer with a slowly rising incidence. It tends to be more difficult to detect by pathologists, mainly due to its cellular morphology and diffuse invasion manner, and it has poor prognosis when detected at an advanced stage. Computational pathology tools that can assist pathologists in detecting SRCC would be of a massive benefit. In this paper, we trained deep learning models using transfer learning, fully-supervised learning, and weakly-supervised learning to predict SRCC in Whole Slide Images (WSIs) using a training set of 1,765 WSIs. We evaluated the models on two different test sets (n = 999, n = 455). The best model achieved a ROC-AUC of at least 0.99 on both test sets, setting a top baseline performance for SRCC WSI classification. Paper details

Published: 19 April 2021, Scientific Reports

A deep learning model to detect pancreatic ductal adenocarcinoma on endoscopic ultrasound-guided fine-needle biopsy

Histopathological diagnosis of pancreatic ductal adenocarcinoma (PDAC) on endoscopic ultrasonography-guided fine-needle biopsy (EUS-FNB) specimens has become the mainstay of preoperative pathological diagnosis. However, on EUS-FNB specimens, accurate histopathological evaluation is difficult due to low specimen volume with isolated cancer cells and high contamination of blood, inflammatory and digestive tract cells. In this study, we performed annotations for training sets by expert pancreatic pathologists and trained a deep learning model to assess PDAC on EUS-FNB of the pancreas in histopathological whole-slide images. We obtained a high receiver operator curve area under the curve of 0.984, accuracy of 0.9417, sensitivity of 0.9302 and specificity of 0.9706. Our model was able to accurately detect difficult cases of isolated and low volume cancer cells. If adopted as a supportive system in routine diagnosis of pancreatic EUS-FNB specimens, our model has the potential to help pathologists diagnose difficult cases. Paper details

Published: 14 April 2021, Scientific Reports

A deep learning model for the classification of indeterminate lung carcinoma in biopsy whole slide images

The differentiation between major histological types of lung cancer, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC), and small-cell lung cancer (SCLC) is of crucial importance for determining optimum cancer treatment. Hematoxylin and Eosin (H&E)-stained slides of small transbronchial lung biopsy (TBLB) are one of the primary sources for making a diagnosis; however, a subset of cases present a challenge for pathologists to diagnose from H&E-stained slides alone, and these either require further immunohistochemistry or are deferred to surgical resection for definitive diagnosis. We trained a deep learning model to classify H&E-stained Whole Slide Images of TBLB specimens into ADC, SCC, SCLC, and non-neoplastic using a training set of 579 WSIs. The trained model was capable of classifying an independent test set of 83 challenging indeterminate cases with a receiver operator curve area under the curve (AUC) of 0.99. We further evaluated the model on four independent test sets—one TBLB and three surgical, with a combined total of 2407 WSIs—demonstrating highly promising results with AUCs ranging from 0.94 to 0.99. Paper details

Published: 09 June 2020, Scientific Reports

Weakly-supervised learning for lung carcinoma classification using deep learning

Lung cancer is one of the major causes of cancer-related deaths in many countries around the world, and its histopathological diagnosis is crucial for deciding on optimum treatment strategies. Recently, Artificial Intelligence (AI) deep learning models have been widely shown to be useful in various medical fields, particularly image and pathological diagnoses; however, AI models for the pathological diagnosis of pulmonary lesions that have been validated on large-scale test sets are yet to be seen. We trained a Convolutional Neural Network (CNN) based on the EfficientNet-B3 architecture, using transfer learning and weakly-supervised learning, to predict carcinoma in Whole Slide Images (WSIs) using a training dataset of 3,554 WSIs. We obtained highly promising results for differentiating between lung carcinoma and non-neoplastic tissue with high Receiver Operator Curve (ROC) area under the curves (AUCs) on four independent test sets (ROC AUCs of 0.975, 0.974, 0.988, and 0.981, respectively). Development and validation of algorithms such as ours are important initial steps in the development of software suites that could be adopted in routine pathological practices and potentially help reduce the burden on pathologists. Paper details
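Weak supervision of the kind described above is commonly framed as multiple-instance learning: the network scores individual tiles, and the slide-level prediction is an aggregate of the tile probabilities, so only a slide-level label is needed for training. The sketch below shows one common aggregation (max-pooling over tiles) as a generic illustration of the idea, not the paper's published implementation.

```python
import numpy as np

def slide_prediction(tile_probs, threshold=0.5):
    """Aggregate per-tile carcinoma probabilities into one slide-level
    call, max-pooling style: the slide is flagged positive when its most
    suspicious tile crosses the threshold. This mirrors the common
    multiple-instance formulation used in weakly-supervised WSI work."""
    slide_prob = float(np.max(tile_probs))
    return slide_prob, slide_prob >= threshold
```

During training, the loss is computed on the aggregated slide score against the slide label, so gradients concentrate on the tiles driving the prediction even though no tile was ever labelled individually.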

Published: 30 January 2020, Scientific Reports

Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours

Histopathological classification of gastric and colonic epithelial tumours is one of the routine pathological diagnosis tasks for pathologists. Computational pathology techniques based on Artificial Intelligence (AI) would be of high benefit in easing the ever-increasing workloads on pathologists, especially in regions that have shortages in access to pathological diagnosis services. In this study, we trained convolutional neural networks (CNNs) and recurrent neural networks (RNNs) on biopsy histopathology whole-slide images (WSIs) of stomach and colon. The models were trained to classify WSIs into adenocarcinoma, adenoma, and non-neoplastic. We evaluated our models on three independent test sets each, achieving areas under the curve (AUCs) up to 0.97 and 0.99 for gastric adenocarcinoma and adenoma, respectively, and 0.96 and 0.99 for colonic adenocarcinoma and adenoma, respectively. The results demonstrate the generalisation ability of our models and their strong potential for deployment in a practical histopathological diagnostic workflow. Paper details

Special Interview

A deployment at the International University of Health and Welfare
— using PidPort for student lectures
