中島 新菜 | M, 2回目発表 | 数理情報学 | 池田 和司, | 清川 清, | 久保 孝富, | 日永田 智絵, | LI YUZHE, | 藤原 幸一 |
title: Developing a System for Tracking Cats' Liquid Motion
abstract: The liquid-like flexibility of cats poses a significant challenge in pose tracking. Existing methods, such as DeepLabCut (DLC), require a large and diverse set of annotated images to learn various postures, which is practically difficult to obtain. To address this issue, we propose a novel approach that integrates DLC, Tracking Any Point with per-frame Initialization and temporal Refinement (TAPIR), and the Segment Anything Model (SAM). DLC provides initial predictions of standard poses, while TAPIR progressively refines the tracking from coarse to fine levels with uncertainty estimation, allowing for better handling of flexible postures. Additionally, SAM segments the body regions to filter out invalid points, enabling more robust tracking. Experimental results on domestic cat videos show that the proposed method improves tracking performance compared to DLC. This method proves effective in tracking the complex movements of cats in natural environments, showing potential as a valuable tool for animal behavior analysis and biological research. language of the presentation: Japanese 発表題目: ネコの流れるような動きを捉える追跡システムの開発 発表概要: ネコの液体のような柔軟性は、姿勢追跡において大きな課題である。DeepLabCut(DLC)などの既存の手法では、多様な姿勢の画像およびそのアノテーションが必要となり、現実的には困難である。この課題を解決するために、本研究ではDLC、Tracking Any Point with per-frame Initialization and temporal Refinement(TAPIR)、およびSegment Anything Model(SAM)を統合した新しい手法を提案する。DLCは標準的な姿勢の初期予測を提供し、TAPIRは時間的に粗い追跡から細やかな追跡へと段階的に洗練させ、不確実性の推定により柔軟な姿勢を補正する。さらに、SAMによって身体領域をセグメント化し、無効なポイントをフィルタリングすることで、より頑健な追跡を実現する。飼い猫の動画を用いた実験の結果は、提案手法がDLCと比較してトラッキング性能を向上させることを示している。本手法は、自然環境におけるネコの複雑な動きを追跡する上で有効であり、動物行動解析や生物学的研究において有用なツールとなる可能性を示している。 | ||||||||
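As a rough illustration of the filtering step described above, the sketch below keeps only tracked points that fall inside a segmented body mask. It is a minimal sketch with assumed inputs: the DLC keypoints, TAPIR tracks, and SAM mask are placeholders, and `filter_points_with_mask` is a hypothetical helper, not part of any of those libraries.

```python
import numpy as np

def filter_points_with_mask(points, mask):
    """Keep only tracked points that fall inside a binary body mask.

    points : (N, 2) array of (x, y) pixel coordinates for one frame
    mask   : (H, W) boolean array, e.g. a SAM segmentation of the cat's body
    Returns the subset of points lying on the segmented body region.
    """
    h, w = mask.shape
    xy = np.round(points).astype(int)
    # Discard points outside the image bounds first.
    inside = (xy[:, 0] >= 0) & (xy[:, 0] < w) & (xy[:, 1] >= 0) & (xy[:, 1] < h)
    valid = np.zeros(len(points), dtype=bool)
    valid[inside] = mask[xy[inside, 1], xy[inside, 0]]
    return points[valid]

# Toy check: one point on the body region is kept, one off-body point is dropped.
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True
pts = np.array([[250.0, 150.0], [10.0, 10.0]])
print(filter_points_with_mask(pts, mask))
```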
宮井 菜名子 | M, 2回目発表 | 数理情報学 | 池田 和司, | 清川 清, | 久保 孝富, | 日永田 智絵, | LI YUZHE, | 藤原 幸一 |
title: Hierarchical Representation Learning of Dog Behavior Using DeepLabCut and h/BehaveMAE
abstract: Dogs and humans have developed a mutually beneficial relationship through cohabitation, during which dogs have acquired social cognitive abilities to understand human instructions and gaze. Such dog behaviors have attracted attention in scientific research, training, and veterinary contexts, and the demand for video-based behavior analysis has been increasing. However, conventional behavior analysis relies on pre-labeled videos, which incur high annotation costs and make it difficult to analyze undefined behaviors. In this study, we focus on h/BehaveMAE, a self-supervised learning method for learning hierarchical representations of behavior from pose sequences. This method leverages spatio-temporal masking to effectively learn both the structural features of posture and their temporal context. As a result, it demonstrates robustness to missing data and the ability to capture the multi-scale structure of behavior. We input 2D pose data of dogs, obtained using DeepLabCut, into h/BehaveMAE and use the resulting frame-wise behavior representations to automatically detect similar behaviors. This approach aims to establish a flexible framework for dog behavior analysis without requiring predefined labels. language of the presentation: Japanese 発表題目: DeepLabCut と h/BehaveMAE を用いたイヌ行動の階層的表現学習 発表概要: イヌとヒトは共に生活する中で互恵関係を築き、その過程でイヌはヒトの指示や視線を理解する社会的認知能力を発達させてきた。このようなイヌの行動は科学研究やトレーニング、獣医学の文脈で注目されており、近年では動画解析の需要が高まっている。しかし従来の行動分析は、事前にラベル付けされた動画に依存するためアノテーションコストが高く、未定義の行動の分析が困難である。 本研究では、姿勢時系列から行動の階層的表現を自己教師ありで学習する手法である h/BehaveMAE に注目する。 この手法は、空間的および時間的マスクを用いた自己教師あり学習により、姿勢構造とその時間的文脈の両方を効果的に学習する。 これによりデータ欠損に対して頑健であり、行動のマルチスケールな構造を捉えることが可能となる。DeepLabCut により取得したイヌの 2 次元姿勢データを h/BehaveMAE に入力し、フレーム単位で得られる行動表現を用いて、類似行動の自動検出を試みる。これにより、事前のラベル定義を必要としない柔軟なイヌ行動分析の実現を目指す。 | ||||||||
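The sketch below illustrates, under simplifying assumptions, how frame-wise behavior embeddings could be clustered to surface candidate "similar behaviors". `encode_frames` is only a stand-in for the h/BehaveMAE encoder (here it just flattens the pose), and k-means is used purely for illustration rather than as the method of the study.

```python
import numpy as np
from sklearn.cluster import KMeans

def encode_frames(pose_seq):
    # Placeholder for the h/BehaveMAE encoder: one embedding per frame.
    return pose_seq.reshape(len(pose_seq), -1)

def detect_similar_behaviors(pose_seq, n_clusters=8):
    """Cluster frame-wise behavior embeddings so that frames sharing a cluster
    can be reviewed as candidate instances of the same behavior."""
    emb = encode_frames(pose_seq)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)

# Synthetic example: 1000 frames, 20 DeepLabCut keypoints with (x, y) each.
pose_seq = np.random.rand(1000, 20, 2)
labels = detect_similar_behaviors(pose_seq)
```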
重藤 瞭介 | M, 2回目発表 | インタラクティブメディア設計学 | 加藤 博一, | 清川 清, | 澤邊 太志, | Isidro Butaslac | ||
title: Investigation of an AR-Based Information Presentation Method for Reducing Autonomous Vehicle Stress
abstract: Autonomous Vehicle Stress refers to the anxiety or discomfort that arises when the cognitive and decision-making outcomes of an autonomous driving system are not shared with passengers, making it difficult for them to predict the vehicle’s behavior. Reducing such Autonomous Vehicle Stress is important for achieving a comfortable autonomous vehicle experience in the future. In this study, we propose a method of presenting information using Augmented Reality (AR) as a way to share the cognitive and decision-making processes of the autonomous driving system. Information related to "cognition" includes external obstacles that may influence the vehicle’s behavior, while information related to "decision-making" includes the system’s intended actions and predicted route. By sharing this information, passengers are expected to be able to intuitively and clearly understand the vehicle’s behavior, thereby reducing Autonomous Vehicle Stress. To enable such AR-based in-vehicle information presentation during autonomous driving, it is important to manage the spatial alignment and relationships among the moving vehicle, the passenger, and the external environment. Therefore, this presentation describes the development of an AR information presentation system through preliminary experiments and its validation using an actual autonomous vehicle. language of the presentation: Japanese 発表題目: 自動走行ストレス軽減のためのAR情報提示手法の検討 発表概要: 自動走行ストレスとは,自動走行システムによる認知・判断の結果が搭乗者に共有されないことにより,車両挙動等の予測が困難となることで生じる不安や不快感のことである.このような自動走行ストレスを軽減することは,今後快適な自動走行車を実現するためには重要である.そこで本研究では,自動走行システムの認知・判断に関する情報の共有方法として,拡張現実感(AR)を用いた情報提示の手法を提案する.「認知」に関する情報には,自車挙動に影響を及ぼす可能性のある外部障害物が含まれ,「判断」に関する情報には,自動走行システムが意図している行動や予測経路が含まれる.これらの情報を共有することにより,搭乗者が車両挙動を直感的かつ明瞭に理解でき,自動走行ストレスの軽減が期待できる.上記のような自動走行中の車内でのAR情報提示を行うためには,走行中の車両,搭乗者,外部環境のそれぞれの関係性や位置合わせが重要となることより,本発表では,予備実験を通したAR情報提示システムの構築と実自動走行車両を用いた検証を行う. | ||||||||
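As a minimal illustration of the spatial-alignment problem mentioned above, the following sketch chains assumed world-to-vehicle and vehicle-to-headset poses to express a detected obstacle in the passenger's AR display frame. All poses, frame names, and coordinates are hypothetical stand-ins, not the system described in the abstract.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: world->vehicle (from localization) and vehicle->headset
# (from inside-out tracking). An obstacle detected in world coordinates is
# mapped into the passenger's headset frame by chaining the inverse transform.
T_world_vehicle = make_pose(np.eye(3), np.array([10.0, 2.0, 0.0]))
T_vehicle_headset = make_pose(np.eye(3), np.array([0.5, 0.0, 1.2]))
T_world_headset = T_world_vehicle @ T_vehicle_headset

obstacle_world = np.array([15.0, 3.0, 0.0, 1.0])  # homogeneous world point
obstacle_in_headset = np.linalg.inv(T_world_headset) @ obstacle_world
print(obstacle_in_headset[:3])  # where to draw the AR cue relative to the headset
```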
野口 翔平 | M, 2回目発表 | インタラクティブメディア設計学 | 加藤 博一, | 清川 清, | 澤邊 太志, | Isidro Butaslac | ||
title: Investigation of the Effects of Changes in the Appearance and Behavior of a Stroked Object on Stress Reduction abstract: It is known that stroking a human or animal has the effect of reducing stress. However, when the target of stroking is a human, a close relationship is required, and building such a relationship takes time and effort. In addition, when the target is an animal, there are various constraints, such as the person stroking may have allergies or the animal itself may feel stress from being stroked. As a way to avoid these constraints, attention has been increasing toward animal robots and VR pets. However, the former tends to become structurally complex or large when trying to implement diverse behaviors, and the latter has the problem that it is difficult to provide tactile sensations, which are important when stroking. In this study, I aim to enhance the stress-reducing effect of stroking by using VR technology to change the appearance and behavior of a physical object that can be touched. To that end, I set the research question: "What kinds of appearances and behaviors of a stroked object can enhance the stress-reducing effect?" In this presentation, I introduce the results of experiments I have conducted so far in relation to this question, and also describe the experimental plans I am going to implement in the future. language of the presentation: Japanese 発表題目: 撫でる対象の外見と振る舞いの変化がストレス軽減に与える影響の調査 発表概要: 人や動物を撫でる行為には、ストレスを軽減する効果があることが知られている。しかし、撫でる対象が人である場合には親密な関係性が求められ、そのような関係を築くには時間や労力がかかる。また、撫でる対象が動物である場合には、撫でる側がアレルギーを持っていたり、動物自身が撫でられることにストレスを感じる可能性があるなど、実施にはさまざまな制約が伴う。 こうした制約を回避する手段として、動物ロボットやVRペットへの関心が高まっている。しかし、前者は多様な振る舞いを実現しようとすると構造が複雑化・大型化しやすく、後者は撫でる際に重要となる触覚提示が困難であるという課題がある。 本研究では、VR技術を用いて物理的に触れる実物体の外見や振る舞いを変化させることで、撫でた際のストレス軽減効果を高めることを目指す。そのために、「撫でる対象のどのような外見や振る舞いが、ストレス軽減効果を高めるのか」というリサーチクエスチョンを設定した。 本発表では、この問いに対してこれまでに実施してきた実験結果を紹介するとともに、今後予定している実験計画についても述べる。 | ||||||||
菊池 尊勝 | M, 2回目発表 | ユビキタスコンピューティングシステム | 安本 慶一, | 岡田 実, | 諏訪 博彦, | 松井 智一 | |
title: Cross-modal Daily Activity Recognition Based on Fixed Sensors
abstract: Privacy concerns have recently heightened interest in recognizing daily activities without cameras or microphones, relying solely on fixed sensors deployed in the environment. Although multimodal learning that integrates fixed sensor readings with other modalities through contrastive learning has shown promise, the information provided by fixed sensors alone remains limited, leaving challenges in accuracy and generalization. This study proposes an approach that semantically fuses fixed‑sensor data with complementary modalities to obtain richer feature representations, enabling reliable activity recognition even when only fixed sensors are available. In addition, we construct a representation space by performing text‑based macro‑activity classification on the Ego4D dataset, demonstrating the practicality of our method. language of the presentation: Japanese 発表題目:環境設置型センサによる生活行動認識手法の検討 発表概要: 近年、プライバシー保護の観点から、カメラやマイクを使用せずに環境設置型センサのみで生活行動を認識する手法への関心が高まっている。特に、対照学習を活用し、環境設置型センサと他モーダル情報を統合するマルチモーダル学習は有効性が期待されているが、環境設置型センサ単体では情報量が限定的であり、精度や汎化性に課題が残る。本研究では、環境設置型センサデータと他モーダル情報との意味的な統合により特徴表現を強化し、環境設置型センサのみでも有用な行動認識を可能にするアプローチを提案する。また、Ego4Dデータセットを活用し、テキストベースでのマクロ行動分類を実施し、その表現空間を構築した。 | |||||||
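A minimal sketch of the kind of contrastive alignment referred to above: a symmetric InfoNCE loss that pulls paired fixed-sensor and text embeddings together while pushing apart non-matching pairs in the batch. The modality encoders themselves are assumed and not shown; this is an illustration, not the proposed method's exact loss.

```python
import torch
import torch.nn.functional as F

def info_nce(sensor_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning fixed-sensor and text embeddings.

    sensor_emb, text_emb : (B, D) tensors for B paired windows/descriptions.
    Matching pairs share a batch index; all other pairs act as negatives.
    """
    s = F.normalize(sensor_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = s @ t.T / temperature
    targets = torch.arange(len(s))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Hypothetical usage: a sensor encoder and a text encoder (placeholders) produce
# embeddings for paired sensor windows and activity descriptions, and info_nce
# is minimized to build the shared representation space.
```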
小坂 修平 | M, 2回目発表 | ユビキタスコンピューティングシステム | 安本 慶一, | 岡田 実, | 諏訪 博彦, | 松井 智一, | 佐々木 航 |
title: Proposal of a Position Estimation Method for Formation Flying Satellites Using LiDAR
abstract: A project of the Ministry of Internal Affairs and Communications (MIC), in which the authors are participating, is studying the possibility of direct communication between satellites and cell phone terminals by using more than 10,000 very small satellites in formation flight (FF) in low earth orbit to function as a phased array antenna (PAA). One of the goals of this project is to precisely control the angle of the entire PAA relative to the ground footprint and the position and attitude of each satellite in the FF. In this study, we propose a position estimation method that calculates the PAA plane and measures the attitude and distance of each satellite in the FF from the anchor/overhead satellites with high accuracy and low cost by placing multiple anchor and overhead satellites equipped with millimeter wave radars and LiDAR in the FF. The proposed method estimates the position of a satellite in the plane by box fitting from the point clouds of neighboring satellites acquired by an overhead satellite using LiDAR. We implemented the proposed method in MATLAB and performed 3D simulations. As a result, we confirmed that the proposed method can estimate the position of each satellite with an average distance error within 0.8 cm when the FF consists of 10,000 satellites, 15 LiDAR-equipped overhead satellites, and 4 anchor satellites. language of the presentation: Japanese 発表題目: LiDARを用いた編隊飛行衛星の位置推定手法の提案 発表概要: 著者らが参画する総務省のプロジェクトでは,低軌道をフォーメーションフライト(FF)する1万個以上の超々小型衛星をフェーズドアレイアンテナ(PAA)として機能させ,衛星と携帯電話端末との直接通信を実現する検討を行っている. このプロジェクトの目標の1つは,PAA全体の地上フットプリントに対する角度とFF内の各衛星の位置と姿勢を正確に制御することである.本研究では,ミリ波レーダーとLiDARを搭載した複数のアンカー衛星と俯瞰衛星をFF内に配置することで,PAA平面を計算し,アンカー/俯瞰衛星からFF内の各衛星の姿勢と距離を高精度かつ低コストで計測する位置推定手法を提案する.提案手法では,俯瞰衛星がLiDARにより取得した近隣の衛星の点群からボックスフィッティングを行うことで,衛星の平面内での位置を推定する. 提案手法をMATLAB内に実装し3Dシミュレーションを行った結果,FFを構成する衛星数が1万機,LiDARを搭載した俯瞰衛星15機,アンカー衛星4機を用いた場合に,平均0.8 cm以内の距離誤差での各衛星の位置推定を達成できることを確認した. | |||||||
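The sketch below illustrates the box-fitting idea in simplified form: it fits an axis-aligned bounding box to a point cluster and takes the box center as the position estimate. This is an assumption-laden toy (axis-aligned rather than oriented fitting, and the cluster is assumed to be already segmented per neighboring satellite), not the MATLAB implementation used in the study.

```python
import numpy as np

def estimate_position_by_box_fitting(points):
    """Fit an axis-aligned bounding box to one satellite's LiDAR point cloud
    and return the box center as the position estimate.

    points : (N, 3) array of LiDAR returns assumed to belong to a single
             neighboring satellite (already clustered from the full scan).
    """
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    return (lo + hi) / 2.0  # box center

# Synthetic check: points sampled on a 10 cm cube centered at (1.0, 2.0, 0.5) m.
rng = np.random.default_rng(0)
cube = rng.uniform(-0.05, 0.05, size=(500, 3)) + np.array([1.0, 2.0, 0.5])
print(estimate_position_by_box_fitting(cube))
```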
仲田 深紅 | D, 中間発表 | サイバネティクス・リアリティ工学 | 清川 清, | 安本 慶一, | 内山 英昭, | Perusquia Hernandez Monica, | 平尾 悠太朗 |
title: A Vibrotactile Device for Enabling Sound Localization and Identification for Deaf and Hard of Hearing Individuals
abstract: Deaf and hard-of-hearing individuals often face difficulties in identifying the direction and type of sound sources. The ability to localize sound sources can enhance their safety, while recognizing sound types can improve environmental awareness. However, there has been limited research on presenting both sound direction and type in a manner suitable for this population. In this study, we designed a wearable assistive device using vibrotactile actuators to support sound source localization and identification in daily life. We investigated tactile perception through changes in vibration frequency to convey sound types. A prototype of the localization device was developed in the form of a hat, and we explored methods of converting sound frequency into corresponding vibration cues. By integrating these two systems—directional feedback and sound-type encoding—we created a unified assistive device and conducted evaluation experiments. The results showed no significant improvement in sound localization performance, but a trend toward significance in sound identification. Participants were able to distinguish between different types of sound based on vibrotactile feedback. language of the presentation: Japanese 発表題目: ろう・難聴者のための音源定位と識別を可能にする振動触覚デバイス 発表概要:ろう・難聴者は音源方向を特定したり音源種類を感じ取ったりすることが困難である.音源定位が可能となることで,ろう・難聴者は安全に生活でき,音源種類を把握できることで周辺環境の理解を促進できる.しかし,ろう・難聴者を対象とした音源の方向や種類の提示方法に関する研究は少ない.そこで,本研究では振動子を用いた音源定位補助デバイスの日常生活使用に適したデザインを検討し,音源種類を提示するために振動周波数の変化による触覚知覚について調査する.音源定位補助デバイスの試作機(帽子型)を製作した.音源種類の提示については,音から振動への周波数変換方法について検討した.これら二つのシステムを統合させ,新たな補助デバイスを製作し,評価実験を通じて有用性を検証した.その結果,音源定位問題では有意差が見られなかったが,音源識別問題では有意傾向が見られた.振動の違いによって異なる音源種類が知覚できた. | |||||||
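As an illustration of the sound-to-vibration conversion explored here, the sketch below estimates the dominant frequency of an audio frame and maps it to a discrete vibration frequency. The band boundaries and vibration values are illustrative assumptions, not the settings evaluated in the study.

```python
import numpy as np

def dominant_frequency(frame, sample_rate):
    """Return the dominant frequency (Hz) of one mono audio frame via an FFT."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def to_vibration_cue(freq_hz):
    """Map a sound-frequency band to a vibration frequency (Hz) for the actuators.
    The bands and values below are placeholders for illustration only."""
    if freq_hz < 300:
        return 80    # low-pitched sources -> slow vibration
    elif freq_hz < 2000:
        return 160
    else:
        return 250   # high-pitched sources -> fast vibration

# Example: a 1 kHz tone sampled at 16 kHz falls into the middle vibration band.
sr = 16000
t = np.arange(1024) / sr
print(to_vibration_cue(dominant_frequency(np.sin(2 * np.pi * 1000 * t), sr)))
```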
藤岡 空夢 | M, 2回目発表 | ソフトウェア設計学 | 飯田 元, | 松本 健一, | 柏 祐太郎, | Reid Brittany | |
title: An Empirical Analysis of the Occurrence and Lifespan of Inconsistencies between Code and Comments
abstract: This study conducts a large-scale quantitative analysis of code-comment inconsistencies, investigating their frequency of occurrence, the duration for which they are left unaddressed, their underlying causes, and the methods by which they are eventually resolved. We analyzed the commit histories of multiple open-source Java projects and utilized a fine-tuned GPT model (gpt-3.5-turbo) to evaluate the consistency of code-comment pairs from a dataset constructed with the SZZ algorithm, performing a multi-faceted analysis by tracking Git history, examining refactoring patterns, and manually inspecting individual commits. Our key findings are as follows: (1) Inconsistencies occurred in approximately 0.85% of all commits and were associated with a higher short-term bug-introduction rate; (2) a vast majority were long-lived, with 92% persisting for over a month and 48% for over a year; (3) the primary triggers were specific refactoring operations with limited IDE support, such as annotation changes, as well as general bug fixes and logic changes; and (4) only 38% were actively resolved through modification or deletion of the comment, with the rest being neglected or disappearing only when the entire file was deleted. This research demonstrates that code-comment inconsistency is a non-trivial and frequent problem in development practice, often accumulating as long-term technical debt, and our findings help identify the specific development contexts most susceptible to this issue, highlighting the critical importance of its prevention and early detection. language of the presentation: Japanese 発表題目: コードとコメント間の不整合発生及び放置状況における大規模調査 発表概要: 本研究は、コードとコメントの不整合について、その発生頻度・期間・原因を定量的に分析した。オープンソースJavaプロジェクトを対象に、GPTモデルとSZZアルゴリズムを用いてコミット履歴から不整合を検出・分析した結果、不整合は全コミットの約0.85%で発生し、その半数近く(48%)が1年以上放置され、能動的な修正は38%に留まることが判明した。この結果は、コードとコメントの不整合が技術的負債として頻繁に蓄積する問題であることを示しており、本研究の知見はリスクが高い開発状況の特定と予防に貢献する。 | |||||||
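A hedged sketch of how a GPT model can be asked to judge code-comment consistency through the OpenAI chat API. The prompt wording and the model identifier are placeholders rather than the exact fine-tuned configuration used in the study.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_consistency(code: str, comment: str) -> str:
    """Ask a GPT model whether a comment still matches the code it documents.
    Prompt and model name are illustrative, not the study's exact setup."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer CONSISTENT or INCONSISTENT for the given code-comment pair."},
            {"role": "user",
             "content": f"Comment:\n{comment}\n\nCode:\n{code}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```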
藤田 駿 | M, 2回目発表 | ソフトウェア設計学 | 飯田 元, | 松本 健一, | 柏 祐太郎, | Reid Brittany | |
title: The Effectiveness of Snapshot Testing
abstract: Software testing is an important process for ensuring quality. Recently, test code has also been written for UIs in front-end development, but the test targets are often complex, making it difficult to test them sufficiently. In particular, testing such complex targets requires a significant amount of developer time, which is why snapshot testing, which is easy to implement, has begun to be used. Snapshot testing can detect unexpected changes by examining differences in program output before and after modifications. However, it is not yet clear to what extent snapshot testing is effective at detecting defects. In this study, we collect JavaScript projects on GitHub that use the snapshot testing provided by the Jest test framework. We then investigate to what extent coverage and mutation scores, widely used as indicators of test effectiveness, improve with snapshot testing. In addition, we will explore the benefits and challenges of snapshot testing through a survey of developers. language of the presentation: Japanese | |||||||
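To make the mechanism concrete, here is a language-agnostic sketch of the snapshot-testing idea written in Python; Jest applies the same record-then-compare logic to rendered UI output in JavaScript projects. The helper name and snapshot directory are illustrative, not any framework's API.

```python
import json
from pathlib import Path

def assert_matches_snapshot(value, name, snapshot_dir="__snapshots__"):
    """Minimal illustration of the snapshot-testing mechanism: the first run
    stores the serialized output; later runs fail if the output has changed."""
    path = Path(snapshot_dir) / f"{name}.json"
    current = json.dumps(value, indent=2, sort_keys=True)
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(current)  # first run: record the snapshot
        return
    assert path.read_text() == current, f"Output no longer matches snapshot {name}"

# Example: any unexpected change to this structure makes the assertion fail.
assert_matches_snapshot({"label": "Submit", "disabled": False}, "submit_button")
```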
渡邉 未来 | M, 2回目発表 | ソフトウェア設計学 | 飯田 元, | 松本 健一, | 柏 祐太郎, | Reid Brittany | |
title: Investigating the Impact of Shortening Release Cycles on Self-Admitted Technical Debt
abstract: Technical debt refers to the extra work that will be incurred in the future as a result of choosing an imperfect solution to solve a problem quickly, rather than opting for an ideal implementation that takes more time. Developers often intentionally introduce imperfect implementations and then leave notes in comments so that other developers can recognize them. Such intentional technical debt is specifically called Self-Admitted Technical Debt (SATD). Previous studies have reported that software release pressure may drive developers to introduce SATD. Recently, due to the fierce competition in software development, many projects have shortened release cycles to boost the frequency of major releases, which might amplify release pressure on developers and result in an increase in introduced SATD. On the other hand, the increased frequency of major releases may allow developers to make changes that break backward compatibility, which implies that developers can more easily address SATD. Thus, shortening release cycles may work positively by facilitating the resolution of SATD and negatively by encouraging its introduction. However, it is unclear how a shortened release cycle impacts dealing with SATD. This study investigates the impact of adopting shorter release cycles on the introduction and resolution of SATD. Specifically, we compare the introduction, resolution, lifetime, and types of SATD before and after the change to the release cycle. language of the presentation: Japanese | |||||||
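For illustration only, a simple keyword heuristic of the kind often used to flag candidate SATD comments; the marker list is an assumption for the example and is not the detection method of this study.

```python
import re

# Illustrative markers that frequently accompany self-admitted technical debt.
SATD_MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX|workaround|temporary)\b",
                          re.IGNORECASE)

def is_satd_comment(comment: str) -> bool:
    """Return True if a source comment looks like self-admitted technical debt."""
    return bool(SATD_MARKERS.search(comment))

print(is_satd_comment("// TODO: replace this quick hack before the next release"))  # True
print(is_satd_comment("// Computes the checksum of the payload"))                   # False
```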
NERIT LEWEL LENZ | D, 中間発表 | ソフトウェア工学 | 松本 健一, | 飯田 元, | Raula Gaikovina Kula, | 嶋利 一真, | Fan Youmei |
title: *** Should Maintainers Deploy to Cross-Ecosystems? An Analysis of Packages from PyPI and NPM ***
abstract: Developers rely on third-party open-source libraries to save time and reuse well-tested code. As technology stacks diversify, libraries are deployed across multiple ecosystems to reach broader audiences and accommodate different user needs. However, maintainers may hesitate due to concerns about increased maintenance effort and uncertain adoption outcomes. This study investigates the impact of cross-ecosystem deployments on maintenance effort and project adoption. Analyzing 972,592 NPM and PyPI packages, we focused on 420 actively maintained libraries that exist in both ecosystems. Of these, 184 were initially deployed to NPM, 148 to PyPI, and 88 were synchronized releases. We collected GitHub metrics—including issues, pull requests, contributors, forks, and commits—over a three-month period before and after deployment. Results show that 80–85% of packages saw no major maintenance activity. However, synchronized releases led to a 15.91% rise in issue reporting and an 11.49% increase in pull requests (PyPI→NPM), indicating higher initial maintenance effort. Popularity remained stable for 87% of packages, though synchronized releases saw an 11.36% increase in forks. While contributions increased in some cases (e.g., 13.59% in NPM → PyPI), others saw a decline in commit activity. Overall, cross-ecosystem deployment does not significantly raise maintenance effort but also does not guarantee increased adoption. Our results show insights towards understanding how deploying to multiple ecosystems may have some benefits. language of the presentation: *** English *** | |||||||