Keynote Speakers - Details


Real-time Dynamic Imaging Methods for Deformable Boundary Shape in Flexible Electrical Impedance Tomography (fEIT)



Prof. Masahiro Takei

Chiba University

Japan

Dr. Panji Nursetia

Chiba University

Japan



The aim of this lecture is to demonstrate real-time dynamic imaging methods for deformable boundary shape in flexible Electrical Impedance Tomography (fEIT), which overcome the technical issues of conventional EIT for human body imaging and meat composition visualization. This aim is attained through the systematic achievement of three main objectives: (1) establishment of a real-time dynamic imaging method for a flexible boundary sensor in wearable EIT (wEIT), in order to estimate the deformed boundary shape of the human body and compute the Jacobian matrix in real time; (2) development of high-speed, high-accuracy meat composition imaging by mechanically flexible EIT (mech-fEIT) with k-Nearest Neighbor and k-Means machine learning approaches for approximation of the Jacobian matrix and quantification of meat composition and meat edges; and (3) improvement of image reconstruction by a sectorial Jacobian matrix constructed with the k-Means clustering algorithm, eliminating the drawback of the non-linearity of the conventional Jacobian matrix. The real-time dynamic imaging methods for deformable boundary shape in fEIT described in this lecture demonstrate the potential of flexible EIT: they open opportunities for applying EIT to "Internet of Things (IoT)" healthcare monitoring, such as lymphedema diagnosis and muscle imaging, and expand the possibilities of meat composition imaging in the meat industry.
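
As a rough illustration of objective (3), the sketch below clusters the columns (pixels) of a precomputed EIT Jacobian with k-Means and aggregates them into sectors to form a reduced, sectorial Jacobian. The matrix shapes, the aggregation by summation, and the function names are assumptions made for illustration; this is a minimal sketch, not the speakers' exact formulation.

# Minimal illustrative sketch: grouping EIT pixels into "sectors" by k-Means
# clustering of their Jacobian sensitivity profiles, then summing columns per
# sector to form a reduced (sectorial) Jacobian. Shapes and names are assumed.
import numpy as np
from sklearn.cluster import KMeans

def sectorial_jacobian(J, n_sectors=16, random_state=0):
    """J: (n_measurements, n_pixels) sensitivity matrix from an EIT forward model."""
    # Each pixel is described by its sensitivity profile (a column of J).
    km = KMeans(n_clusters=n_sectors, n_init=10, random_state=random_state)
    labels = km.fit_predict(J.T)          # cluster pixels with similar sensitivity
    J_sect = np.zeros((J.shape[0], n_sectors))
    for s in range(n_sectors):
        J_sect[:, s] = J[:, labels == s].sum(axis=1)  # aggregate each sector
    return J_sect, labels

# Example with synthetic data: 208 measurements (16-electrode EIT), 1024 pixels.
rng = np.random.default_rng(0)
J = rng.normal(size=(208, 1024))
J_sect, labels = sectorial_jacobian(J, n_sectors=16)
print(J_sect.shape)                        # (208, 16)

In practice such a sectorial Jacobian would replace the full Jacobian inside the reconstruction step, trading spatial resolution for a smaller, better-conditioned inverse problem.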



Machine Vision for Biomedical Imaging and Diagnostics Applications



Prof. Michalis Zervakis

Technical University of Crete

Greece

Dr. Marios Antonakakis

Technical University of Crete

Greece



The human body is one of the most complicated biological systems. It can be regarded as a highly multi-parametric system whose electromagnetic activity can be captured with cutting-edge technologies such as electro- and magneto-encephalography (EEG/MEG), magnetic resonance imaging (MRI), and other medical imaging modalities (e.g. mammography, functional/diffusion MRI). In this regard, machine vision can be used to quantify and analyze the captured spatiotemporal activity for precise, non-invasive diagnosis of body and brain diseases. The primary goal of this lecture is to present the basics of machine vision and machine learning for analyzing and processing biomedical signals and images, towards biomedical applications in non-invasive diagnosis and treatment.
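
As a generic, minimal illustration of the kind of pipeline discussed, the sketch below extracts simple band-power features from synthetic EEG-like epochs and classifies them with a support vector machine. The feature choice, classifier, and all names are illustrative assumptions, not the specific methods covered in the lecture.

# Minimal, generic sketch of a machine-learning pipeline for biomedical signals:
# band-power features from synthetic EEG-like epochs fed to an SVM classifier.
# Data, features, and classifier are placeholders for illustration only.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_power_features(epochs, fs=250, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels * n_bands)."""
    feats = []
    for ep in epochs:
        f, pxx = welch(ep, fs=fs, nperseg=fs)           # per-channel power spectra
        bp = [pxx[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in bands]
        feats.append(np.concatenate(bp))
    return np.array(feats)

rng = np.random.default_rng(0)
X_epochs = rng.normal(size=(80, 8, 500))                # 80 epochs, 8 channels, 2 s @ 250 Hz
y = rng.integers(0, 2, size=80)                         # binary labels (e.g. patient vs. control)
X = band_power_features(X_epochs)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())          # chance level on random data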



Recent Advancement in Ear Recognition Techniques



Dr. Akbar Sheikh Akbari

Leeds Beckett University

United Kingdom

Dr. Matthew Martin Zarachoff

Leeds Beckett University

United Kingdom



This talk presents a series of multi-band ear recognition methods inspired by Principal Component Analysis (PCA) based techniques for hyperspectral images. 2D Multi-Band Principal Component Analysis (2D-MBPCA) divides the grayscale input image into a number of frames based on pixel intensity, in a process called multi-banding. Conventional PCA is then applied to the resulting frames, extracting their eigenvectors, which are used as features for matching. An extension of 2D-MBPCA to the wavelet domain, called 2D Wavelet based Multi-Band Principal Component Analysis (2D-WMBPCA), performs a non-decimated wavelet transform on the input image, dividing it into its subbands. Each subband's coefficients are then multi-banded and conventional PCA is used to generate eigenvectors for matching. The performance of these techniques is assessed on two benchmark ear image datasets, IITD II and USTB I. Experimental results show that the proposed techniques outperform single-image PCA as well as the "eigenfaces" technique. Moreover, the proposed algorithms generate results highly competitive with those of learning-based methods. A technique called Chainlet based Ear Recognition using Multi-Banding and Support Vector Machine (CERMB-SVM), which combines multi-banding with the chainlet descriptor, is also introduced. This method first applies multi-banding to the input image and then applies Canny edge detection to each resulting normalized band. The resulting binary edge maps are combined, and a tiling process is used to calculate the chainlet histograms using the Freeman chain code. The resulting chainlets are then used for training and testing a pairwise Support Vector Machine (SVM). Experimental results on the two aforementioned benchmark ear image datasets show that the proposed CERMB-SVM technique outperforms 2D-MBPCA and 2D-WMBPCA. In addition, CERMB-SVM also outperforms its anchor chainlet method and state-of-the-art learning-based ear recognition techniques.
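
For illustration, the sketch below follows the spirit of 2D-MBPCA: the grayscale image is split into bands by pixel-intensity range, and the top eigenvectors of each band serve as features. The band count, normalization, eigenvector count, and matching step are assumptions made for illustration, not the authors' exact algorithm.

# Minimal illustrative sketch of intensity-based multi-banding followed by PCA,
# in the spirit of 2D-MBPCA. All parameters below are illustrative assumptions.
import numpy as np

def multi_band(image, n_bands=4):
    """Split a grayscale image (2D array) into intensity bands."""
    edges = np.linspace(image.min(), image.max() + 1e-6, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        bands.append(np.where((image >= lo) & (image < hi), image, 0.0))  # keep pixels in range
    return bands

def band_eigenvectors(band, n_vectors=8):
    """Top eigenvectors of one band's column covariance (conventional PCA step)."""
    centered = band - band.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(band.shape[0] - 1, 1)
    vals, vecs = np.linalg.eigh(cov)                    # eigenvalues in ascending order
    return vecs[:, ::-1][:, :n_vectors]                 # keep the leading eigenvectors

def mbpca_features(image, n_bands=4, n_vectors=8):
    return np.hstack([band_eigenvectors(b, n_vectors) for b in multi_band(image, n_bands)])

# Matching could then compare the feature matrices of two ear images, e.g. by
# Euclidean or cosine distance between corresponding eigenvectors.
rng = np.random.default_rng(0)
ear = rng.integers(0, 256, size=(64, 48)).astype(float)
print(mbpca_features(ear).shape)                        # (48, n_bands * n_vectors)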