Sitemap
A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
Pages
Posts
Leetcode: High-Frequency Meta Interview Questions
[Programming]: Solution Notes for the Problems in 剑指offer (Coding Interviews)
[Machine Learning Notes]: Model Performance Metrics
[Machine Learning Notes]: What Is Machine Learning?
[Data Structures]: Stack-Based Evaluation of Arithmetic Expressions on Positive Integers
Harris Corner Detection in Python
2023 Summer Internship Application Summary
projects
Multi-View Fusion-Based 3D Object Detection for Robot Indoor Scene Perception
I worked on a deep-learning-based indoor 3D object detection project under the guidance and supervision of Prof. Hock Soon Seah and Dr. Li Wang.
Deep Learning Based Fluorescence-to-Color Image Registration
In this project, we built a fluorescence imaging system to capture both color and fluorescence images. We achieved fluorescence-to-color image registration using image features extracted by VGG-16.
Deep Learning Based Spine MRI Segmentation
In this project, we proposed a deep network named Res50_UNet for spine MRI segmentation. Res50_UNet combines the architectural characteristics of UNet and FPN and achieves accurate segmentation on spine MRI images.
publications
Application of Hybrid Network of UNet and Feature Pyramid Network in Spine Segmentation
Published in 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 2021
Spine segmentation plays an important role in spinal disease diagnosis and treatment; it is also a fundamental procedure in some spine surgical navigation systems. In this paper, we proposed a hybrid network of Feature Pyramid Network (FPN) and UNet and used it for vertebral body segmentation. Experiments were conducted on a T2-weighted lower-spine MRI dataset. Experimental results show that our proposed network outperforms UNet and several other UNet-based networks in spine segmentation. Quantitative analysis shows that a segmentation accuracy of 99.5% can be achieved with this network.
Recommended citation: X. Liu, W. Deng and Y. Liu, "Application of Hybrid Network of UNet and Feature Pyramid Network in Spine Segmentation," 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Lausanne, Switzerland, 2021, pp. 1-6, doi: 10.1109/MeMeA52024.2021.9478765.
Download Paper | Download Slides
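As a rough illustration of how the segmentation accuracy reported above could be computed, here is a minimal sketch of pixel accuracy and the Dice coefficient for binary masks. The function names and toy masks are invented for this sketch and are not taken from the Res50_UNet implementation:

```python
def pixel_accuracy(pred, target):
    """Fraction of pixels where the predicted and ground-truth masks agree."""
    flat_pred = [p for row in pred for p in row]
    flat_target = [t for row in target for t in row]
    correct = sum(p == t for p, t in zip(flat_pred, flat_target))
    return correct / len(flat_pred)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; eps avoids 0/0."""
    flat_pred = [p for row in pred for p in row]
    flat_target = [t for row in target for t in row]
    intersection = sum(p * t for p, t in zip(flat_pred, flat_target))
    return (2.0 * intersection + eps) / (sum(flat_pred) + sum(flat_target) + eps)

# Toy 2x3 binary masks: they disagree on exactly one pixel.
pred   = [[1, 1, 0], [0, 1, 0]]
target = [[1, 0, 0], [0, 1, 0]]
```

Pixel accuracy alone can look high on images dominated by background, which is why overlap measures like Dice are usually reported alongside it for medical segmentation.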
Deep Convolutional Feature-Based Fluorescence-to-Color Image Registration
Published in 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 2021
Fluorescence-guided surgery (FGS) combines functional information (fluorescence imaging) and structural information (color imaging) to improve surgical performance. Fluorescence-to-color image registration plays a fundamental role in FGS. In this paper, we used VGG-16 to extract image features from color and fluorescence images and built feature descriptors from these features. Then, keypoint matching was conducted to build correspondences between the color and fluorescence images. Finally, fluorescence-to-color image registration was achieved based on the matched keypoint pairs. Experimental results show that our method outperforms conventional feature-based image registration algorithms such as SIFT, BRISK, SURF, and ORB.
Recommended citation: X. Liu, T. Quang, W. Deng and Y. Liu, "Deep Convolutional Feature-Based Fluorescence-to-Color Image Registration," 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Lausanne, Switzerland, 2021, pp. 1-6, doi: 10.1109/MeMeA52024.2021.9478607.
Download Paper | Download Slides
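The keypoint-matching step described above can be sketched as nearest-neighbour descriptor matching with a ratio test. In the paper the descriptors come from VGG-16 feature maps; the 3-D toy vectors, the 0.75 ratio, and the function names below are assumptions made for this sketch only:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return (i, j) pairs where desc_a[i]'s nearest neighbour in desc_b
    is clearly closer than its second-nearest (Lowe-style ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy descriptors standing in for VGG-16 features at detected keypoints.
fluorescence_desc = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
color_desc = [[0.9, 0.1, 0.0], [0.0, 0.0, 5.0], [0.1, 0.9, 0.0]]
matches = match_descriptors(fluorescence_desc, color_desc)
```

The ratio test discards ambiguous correspondences, leaving only matched keypoint pairs from which a registration transform can be estimated.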
Multi-View Fusion-Based 3D Object Detection for Robot Indoor Scene Perception
Published in Sensors, 2019
Accurate 3D object detection enables service robots to perceive 3D scenes in cluttered indoor environments, but it is usually a challenging task. In this paper, we proposed a two-stage 3D object detection algorithm that fuses multiple views of 3D object point clouds in the first stage and eliminates unreasonable and intersecting detections in the second stage. For each view, 3D object bounding box estimation has four steps: (1) 2D object semantic segmentation and 3D object point cloud reconstruction; (2) segmenting the object from the background with the Locally Convex Connected Patches (LCCP) method; (3) calculating the main object orientation with the Manhattan Frame estimation method; (4) constructing the 3D object bounding box. An object database is created and refined as more multi-view point clouds of the same object are fused. Incorrect and intersecting objects are removed from the object database based on prior knowledge. Experiments on both the SceneNN dataset and a real indoor environment show the high accuracy and stability of our proposed method.
Recommended citation: Wang, L.; Li, R.; Sun, J.; Liu, X.; Zhao, L.; Seah, H.S.; Quah, C.K.; Tandianus, B. Multi-View Fusion-Based 3D Object Detection for Robot Indoor Scene Perception. Sensors 2019, 19, 4092. https://doi.org/10.3390/s19194092
Download Paper
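The second-stage removal of intersecting detections can be sketched as overlap filtering on 3D bounding boxes. This sketch simplifies to axis-aligned boxes with an IoU threshold; the threshold value, box coordinates, and function names are illustrative assumptions, not details from the paper (which uses oriented boxes and prior knowledge):

```python
def box_volume(b):
    """Volume of an axis-aligned box (xmin, ymin, zmin, xmax, ymax, zmax)."""
    return max(0.0, b[3] - b[0]) * max(0.0, b[4] - b[1]) * max(0.0, b[5] - b[2])

def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes."""
    inter = (
        max(0.0, min(a[3], b[3]) - max(a[0], b[0]))
        * max(0.0, min(a[4], b[4]) - max(a[1], b[1]))
        * max(0.0, min(a[5], b[5]) - max(a[2], b[2]))
    )
    union = box_volume(a) + box_volume(b) - inter
    return inter / union if union > 0 else 0.0

def prune_intersections(boxes, iou_thresh=0.3):
    """Greedily keep boxes, skipping any that overlap a kept box too much."""
    kept = []
    for box in boxes:
        if all(iou_3d(box, k) <= iou_thresh for k in kept):
            kept.append(box)
    return kept

detections = [
    (0.0, 0.0, 0.0, 2.0, 2.0, 2.0),      # kept
    (0.5, 0.5, 0.5, 2.0, 2.0, 2.0),      # heavily overlaps the first
    (5.0, 5.0, 5.0, 6.0, 6.0, 6.0),      # disjoint, kept
]
kept = prune_intersections(detections)
```

Physically, two solid objects cannot occupy the same space, so a high mutual overlap between detections signals that at least one of them is spurious.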
talks
Talk 1 on Relevant Topic in Your Field
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.