Colloquium B Presentations

Date: Monday, June 7, Period 3 (13:30-15:00)


Venue: L3

Chair: Doudou Fall
上野 友梨 M, 2nd presentation, Optical Media Interface: 向川 康博, 加藤 博一, 舩冨 卓哉, 田中 賢一郎, 久保 尋之
Title: Detection of Frontside Inked Area with Optical Model and Images Taken on Dark and Bright Mounts
Abstract: Ancient paper documents in Japan need to be made more readable; in particular, some were written or painted on both sides for reuse, so the content of one side shows through on the other, a phenomenon called show-through. Our final goal is to separate the frontside and backside images of such documents from photographs that include backside show-through. The backside image can be inferred where the frontside is not inked, but it is hard to determine whether ink lies on the frontside only or on both sides. We expect deep-learning-based inpainting to help infer the backside image in those areas. In this presentation, we set a sub-goal of detecting the frontside inked area to be inferred. We exploit the phenomenon that the brightness of the mount paper affects the show-through. We derive a physically based model from the paper structure, composed of two filtering layers and a blur layer. The frontside reflectance can be obtained by solving the simultaneous equations of two images taken on two types of mounts, dark and bright. We experimented on real objects and succeeded in detecting the frontside inked area.
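The two-mount idea can be sketched numerically. This is only a toy additive model, not the presented two-filter-plus-blur model; the equation I(R) = F + tB * R and all names below are assumptions for illustration:

```python
import numpy as np

# Toy additive show-through model (an assumption, not the presented model):
#   I(R) = F + tB * R,  with mount reflectance R, frontside reflectance F,
#   and tB a lumped paper/backside-ink transmission term.
def separate_front(i_dark, i_bright, r_dark, r_bright):
    """Solve the per-pixel 2x2 linear system for F and tB from two captures."""
    tB = (i_bright - i_dark) / (r_bright - r_dark)
    F = i_dark - tB * r_dark
    return F, tB

# Synthetic check: build the two captures from known F and tB, then recover them.
F_true = np.array([[0.1, 0.8], [0.8, 0.3]])   # dark pixels = frontside ink
tB_true = np.array([[0.5, 0.0], [0.2, 0.4]])
r_dark, r_bright = 0.05, 0.95                  # mount reflectances
i_dark = F_true + tB_true * r_dark
i_bright = F_true + tB_true * r_bright

F_est, tB_est = separate_front(i_dark, i_bright, r_dark, r_bright)
front_inked = F_est < 0.5                      # threshold to detect inked area
```

Because the two captures differ only in the mount reflectance, subtracting them cancels F and isolates the transmitted term, which is the essence of the two-mount trick.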
Language of the Presentation: Japanese
 
LI GANPING M, 1st presentation, Biomedical Imaging: 佐藤 嘉伸, 加藤 博一, 大竹 義人, Soufi Mazen, 上村 圭亮

Title: Cross-modality segmentation by CycleGAN and Bayesian U-net for muscle volumetry in MRI using CT training data set

Abstract: Generative methods based on neural networks have proved efficient in cross-modality medical image translation and segmentation tasks. Meanwhile, the scarcity of medical datasets with high-quality annotations has become the bottleneck for the segmentation performance of neural networks: the complex anatomical structures make the annotation task extremely time-consuming. In this work, we investigate the feasibility of image translation from one modality (e.g., MRI) to another (e.g., CT) with CycleGAN for cross-modality segmentation, where annotations performed on one modality are used to train the segmentation model on another. We first trained a CycleGAN model using a training data set of 462 CTs and 136 MRIs, and a Bayesian U-net segmentation model using a training data set of 20 manually annotated CTs. The CycleGAN translated the target MRIs into CT-like images, which were automatically segmented by the Bayesian U-net. Experiments on quadriceps muscles using fully manually segmented MRIs of 57 subjects showed a Dice coefficient of 0.723 ± 0.128 (mean ± std).
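The reported figure is a per-subject Dice coefficient summarized as mean ± std. A minimal sketch of that evaluation (the masks here are toy stand-ins, not the study's data):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Toy masks standing in for a translated-then-segmented MRI and its manual label.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
scores = [dice(pred, gt)]                     # one score per subject in practice
print(f"{np.mean(scores):.3f} ± {np.std(scores):.3f}")  # → 0.667 ± 0.000
```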

Language of the Presentation: English

 
CHENG ZHUO M, 2nd presentation, Biomedical Imaging: 佐藤 嘉伸, 加藤 博一, 大竹 義人, Soufi Mazen, 上村 圭亮

Title: Uncertainty Prediction of Vertebrae Segmentation Using Bayesian U-Net: Towards Age- and Gender-dependent Statistical Modeling in a Large-scale CT Database

Abstract: Quantifying segmentation uncertainty has become an important task due to the large diversity in anatomical structures, such as vertebrae. A previously proposed Bayesian U-Net demonstrated a correlation between Monte Carlo (MC) dropout sampling-based predictive uncertainty and segmentation accuracy in a muscle segmentation application. However, the effectiveness of this approach in vertebrae segmentation has not been validated. In this work, we integrate MC dropout sampling into a framework that achieved high vertebrae segmentation accuracy and landmark detection rate in the MICCAI 2019 challenge. Furthermore, we validate our approach on 30 CT volumes from a large-scale CT database collected independently of the training dataset. The results suggest the feasibility of the uncertainty estimated by the Bayesian U-Net as a predictive measure of vertebrae segmentation accuracy, which would be helpful for the age- and gender-dependent statistical modeling in a large-scale CT database in our future work.
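MC dropout predictive uncertainty amounts to keeping dropout active at inference, running T stochastic forward passes, and taking the per-pixel spread of the predictions as the uncertainty. A schematic sketch with a stand-in one-layer "network" (the forward function and logits are hypothetical, not the Bayesian U-Net):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, drop_p=0.5):
    """Stand-in for a network forward pass with dropout kept active."""
    mask = rng.random(x.shape) >= drop_p                         # Bernoulli mask
    return 1.0 / (1.0 + np.exp(-(x * mask) / (1.0 - drop_p)))   # sigmoid output

def mc_dropout_predict(x, T=100):
    """Mean prediction and per-pixel predictive variance over T MC samples."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

logits = np.array([4.0, 0.1, -4.0])   # example pre-activation values
mean, var = mc_dropout_predict(logits)
```

In this toy, the variance is largest where dropping units changes the output most; in the real model the variance map is compared against per-case segmentation accuracy.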

Language of the Presentation: English

 
GARCIA FELAN CARLO D, interim presentation, Mathematical Informatics: 池田 和司, 中村 哲, 吉本 潤一郎, 久保 孝富 (Specially Appointed Associate Professor), 福嶋 誠, 日永田 智絵
Title: Leveraging Longitudinal Lifelog Data of Patients in Remission to Estimate Their Risk of Depression Relapse
Abstract: Managing depression relapse is a challenge: healthcare factors such as inconsistent follow-up and cumbersome psychological distress evaluation methods leave patients at high risk of relapse with untreated symptoms. To bridge this gap, we propose an approach that uses personal longitudinal lifelog activity data, gathered from the smartphones of patients in remission and maintenance therapy (N=87), to predict their risk of depression relapse. Using survival analysis, we model the activity data as covariates and predict survival curves that indicate whether patients are at risk of relapse. Furthermore, we discuss ongoing work on inferring a depression severity score from the data, as an alternative to pen-and-paper or online mental health surveys for monitoring the mental state of patients recently in remission.
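The survival curves described can be illustrated with a Kaplan-Eier-style estimator over relapse events; a minimal pure-Python sketch (the follow-up data below are hypothetical, and the study additionally conditions such curves on lifelog covariates):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve; events[i] = 1 if relapse observed, 0 if censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                 # patients still in remission at t
        d = np.sum((times == t) & (events == 1))     # relapses observed at t
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

# Hypothetical follow-up durations (weeks) for six patients; 1 = relapse, 0 = censored.
curve = kaplan_meier([5, 8, 8, 12, 16, 20], [1, 1, 0, 1, 0, 0])
```

Each step of the curve multiplies in the fraction of at-risk patients who did not relapse at that time, so censored patients contribute to the risk set without forcing a drop.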
Language of the presentation: English