Biorobotics Laboratory BioRob

Project Database

This page contains the database of possible research projects for master's and bachelor's students in the Biorobotics Laboratory (BioRob). Visiting students are also welcome to join BioRob, but note that no funding is offered for these projects. To enroll in a project, please contact one of the assistants directly (in person, by phone, or by email). Spontaneous project proposals are also welcome if they are related to the research topics of BioRob; see the BioRob Research pages and the results of previous student projects.

Search filter: only projects matching the keyword Vision are shown here.

Amphibious robotics
Computational Neuroscience
Dynamical systems
Human-exoskeleton dynamics and control
Humanoid robotics
Miscellaneous
Mobile robotics
Modular robotics
Neuro-muscular modelling
Quadruped robotics


Miscellaneous

742 – Create synthetic salamander dataset with domain randomization and unsupervised generative attentional networks
Category:semester project, master project (full-time)
Keywords:3D, C++, Computer Science, Data Processing, Machine learning, Programming, Vision
Type:20% theory, 80% software
Responsible: (MED 1 1611, phone: 36620)
Description:

Powerful deep-learning-based methods for tracking animal behavior require large-scale curated and annotated data. Several recent papers [1,2] have shown that this data requirement can be alleviated by rendering animated synthetic animals such as mice and ants.

In this project, the student will work on an existing biomechanical model of the salamander Pleurodeles waltl to create a synthetic dataset for markerless keypoint tracking. The dataset should help improve the performance of a salamander tracking network, which would ultimately provide invaluable kinematics data for designing muscle models and neural controllers, and for validating neuroscience hypotheses.

For a master project (PdM), this project involves:

  • Improve the realism of the current salamander model in Blender and add diversity with procedurally generated noise and domain randomization (a minimal Blender sketch is given after this list).
  • Generate a synthetic image dataset for markerless tracking tasks.
  • Train an image domain translator (e.g. U-GAT-IT [3]) to increase the dataset fidelity and reduce the reality gap.
  • Bonus: evaluate the dataset's effectiveness on a markerless tracking network (e.g. DeepLabCut [4]).
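
As a flavor of the domain-randomization step, the sketch below randomizes lighting, camera pose, and skin color before rendering a frame with Blender's Python API (bpy). It is a minimal illustration, not the project's actual pipeline: the object names ("Salamander", "Camera", "Sun") and all value ranges are assumptions.

```python
# Minimal domain-randomization sketch for Blender's Python API (bpy).
# Run inside Blender. Object names and value ranges are hypothetical
# placeholders, not the lab's actual scene setup.
import math
import random

import bpy

def randomize_and_render(seed: int, out_path: str) -> None:
    random.seed(seed)
    scene = bpy.context.scene

    # Vary illumination: random sun intensity and direction.
    sun = bpy.data.objects["Sun"]
    sun.data.energy = random.uniform(1.0, 10.0)
    sun.rotation_euler = (random.uniform(0.0, math.pi / 3), 0.0,
                          random.uniform(0.0, 2.0 * math.pi))

    # Jitter the camera position so viewpoints differ between frames.
    cam = bpy.data.objects["Camera"]
    cam.location = (random.gauss(0.0, 0.05),
                    random.gauss(-1.0, 0.05),
                    random.gauss(0.5, 0.05))

    # Randomize the salamander's base color through its material node.
    mat = bpy.data.objects["Salamander"].active_material
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = (random.uniform(0.1, 0.5),
                                               random.uniform(0.2, 0.6),
                                               random.uniform(0.0, 0.2),
                                               1.0)

    # Render one still image to disk ("//" is the .blend file's directory).
    scene.render.filepath = out_path
    bpy.ops.render.render(write_still=True)

# Example: render ten randomized frames.
for i in range(10):
    randomize_and_render(seed=i, out_path=f"//renders/frame_{i:04d}.png")
```

In a full pipeline, the same loop would also export the keypoint annotations (e.g. projected joint positions) that the tracking network is trained on.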

For a semester project, work packages will be dropped or tailored according to the student's skills and interests.

The student is expected to have good programming skills and prior experience with deep learning. Knowledge of 3D modeling and computer graphics is a plus but not required. If interested, please send an email to Chuanfang Ning with your motivation, CV, transcripts, and most relevant experience.

[1] Bolaños, Luis A., et al. "A three-dimensional virtual mouse generates synthetic training data for behavioral analysis."

[2] Plum, Fabian, et al. "replicAnt: a pipeline for generating annotated images of animals in complex environments using Unreal Engine." 

[3] Kim, Junho, et al. "U-GAT-IT: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation."

[4] Mathis, Alexander, et al. "DeepLabCut: markerless pose estimation of user-defined body parts with deep learning." 



Last edited: 27/06/2024
737 – Development of a treadmill with closed-loop control of speed for recording optical and X-ray videos
Category:semester project, master project (full-time)
Keywords:Control, Electronics, Embedded Systems, Experiments, Firmware, Image Processing, Mechanical Construction, Motion Capture, Prototyping, Treadmill, Vision
Type:60% hardware, 40% software
Responsibles: (MED 1 1611, phone: 36620)
(MED 1 1626, phone: 38676)
Description:

When recording animal behaviors with optical or X-ray video, there is a tradeoff between having a large field of view and having a high resolution of the animal body. This limits the ability to obtain animal kinematics over long durations and with high accuracy at the same time. One solution is to let the animal run on a treadmill so that it stays inside the field of view. However, animals often vary their speed during movement, and the radio-opaque components of common treadmills make it difficult to place X-ray cameras.

In this project, the student will develop a treadmill to be used with an optical and X-ray tracking setup. The treadmill is expected to have the following features:

  (1) The major components should be constructed from radio-transparent materials such as plastics.
  (2) The slope of the treadmill can be adjusted.
  (3) The speed of the treadmill can be controlled in closed loop to keep the animal in the center of the view; a camera may be used to track the animal (a minimal control sketch is given after this description). See this video for an example: https://www.youtube.com/watch?v=0GyovqfQj2g&ab_channel=TerradynamicsLab (note that the treadmill in this project does not need to move in two dimensions).

If there is sufficient time, the following features would also be desirable:

  (4) The ability to move in two dimensions (omnidirectional treadmill).
  (5) Integration with force/torque sensors below the surface.

Students with knowledge of mechanical design, embedded systems, computer vision, and feedback control are preferred. Interested students can send their resumes, transcripts, and materials demonstrating their project experience to the assistants.
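
To make feature (3) concrete, the sketch below shows one way the closed-loop speed control could look. The tracker and motor interfaces (animal_offset(), set_belt_speed()) are hypothetical, and the gains and limits are placeholders to be tuned on the real hardware.

```python
# Illustrative closed-loop belt-speed control (not the lab's actual code).
# tracker.animal_offset() is assumed to return the animal's position along
# the belt axis relative to the center of the field of view (meters,
# positive = ahead of center); motor.set_belt_speed() takes a speed in m/s.
import time

KP = 2.0          # proportional gain (to be tuned)
KI = 0.5          # integral gain (to be tuned)
DT = 0.02         # control period in seconds (50 Hz loop)
MAX_SPEED = 0.5   # belt speed limit in m/s (placeholder)

def control_loop(tracker, motor) -> None:
    integral = 0.0
    while True:
        # When the animal drifts ahead of center, speed the belt up to
        # carry it back; when it falls behind, slow the belt down.
        error = tracker.animal_offset()
        integral += error * DT
        command = KP * error + KI * integral
        command = max(0.0, min(MAX_SPEED, command))  # clamp to belt limits
        motor.set_belt_speed(command)
        time.sleep(DT)
```

In practice a derivative term or a low-pass filter on the tracked position may be needed, since vision-based position estimates are noisy.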

Last edited: 20/06/2024

Mobile robotics

732 – Mobile furniture motion control using human body language
Category:semester project, master project (full-time)
Keywords:C++, Control, Machine learning, Python, Vision
Type:35% theory, 10% hardware, 55% software
Responsible: (phone: 37432)
Description:

Furniture is evolving: from static objects in the home, it is becoming active and mobile. These new capabilities open novel interaction opportunities and raise questions about how furniture can communicate with its users. Together with Prof. Emmanuel Senft from the Human-centered Robotics and AI group at EPFL IDIAP, and building on recent developments in mobile furniture at BioRob, this project will explore how mobile furniture can communicate with its user by adapting its motions to achieve defined communication goals. This work will follow exploration studies from the human-robot interaction field, which mostly use Wizard-of-Oz paradigms (a human actually controls the "robot"), and will add autonomy to these systems.

This is a follow-up project based on existing systems. A human pose is detected as a 3D joint skeleton using a Kinect camera (RGB-depth camera) and OpenPifPaf (a learning-based human pose detection algorithm). Human motions, i.e. sequences of human poses, can be categorized into different meanings based on current studies of human body language, and can further be classified by the provided visual perception system using either geometric rules or a learning-based motion recognition algorithm (for example, a spatio-temporal graph neural network). Once the user's commands are correctly identified, they can be sent to the mobile furniture robot via the Robot Operating System (ROS) for execution, so that the user's requirements in the assistive environment are met (a minimal sketch of this pipeline is given after the description). Further real-world experiments will be needed to verify the functionality and performance of the system.
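
As a rough illustration of this pipeline, the sketch below feeds camera frames through OpenPifPaf, applies a toy body-language rule (a raised right hand means "come closer"), and publishes a ROS velocity command. The topic name, camera source, gesture rule, and speed value are assumptions, not the existing system's interfaces.

```python
# Minimal pose-to-command sketch (illustration only; interfaces assumed).
import cv2                      # OpenCV, stand-in for the Kinect color stream
import openpifpaf
import rospy
from geometry_msgs.msg import Twist

NOSE, RIGHT_WRIST = 0, 10       # COCO keypoint indices used by OpenPifPaf

def main() -> None:
    rospy.init_node("furniture_gesture_control")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # assumed topic
    predictor = openpifpaf.Predictor(checkpoint="shufflenetv2k16")
    cap = cv2.VideoCapture(0)   # placeholder for the Kinect driver
    rate = rospy.Rate(10)       # 10 Hz command loop

    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if not ok:
            continue
        # OpenPifPaf expects an RGB image; OpenCV delivers BGR.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        predictions, _, _ = predictor.numpy_image(rgb)
        cmd = Twist()           # zero velocities by default ("stop")
        if predictions:
            kps = predictions[0].data          # array (17, 3): x, y, confidence
            nose, wrist = kps[NOSE], kps[RIGHT_WRIST]
            # Toy rule: right wrist above the nose (smaller image y) with
            # confident detections is read as "come closer".
            if wrist[2] > 0.5 and nose[2] > 0.5 and wrist[1] < nose[1]:
                cmd.linear.x = 0.1             # creep forward at 0.1 m/s
        cmd_pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```

A real system would replace the single hand-written rule with the motion classifier described above (geometric rules or a spatio-temporal graph network over pose sequences).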

Last edited: 25/06/2024

3 projects found.