Biorobotics Laboratory BioRob

Project Database

This page contains the database of possible research projects for master and bachelor students in the Biorobotics Laboratory (BioRob). Visiting students are also welcome to join BioRob, but note that no funding is offered for those projects. To enroll in a project, please contact one of the assistants directly (in his/her office, by phone, or by mail). Spontaneous project proposals are also welcome, provided they are related to the research topics of BioRob; see the BioRob Research pages and the results of previous student projects.

Search filter: only projects matching the keyword Learning are shown here.

Amphibious robotics
Computational Neuroscience
Dynamical systems
Human-exoskeleton dynamics and control
Humanoid robotics
Miscellaneous
Mobile robotics
Modular robotics
Neuro-muscular modelling
Quadruped robotics


Quadruped robotics

A small selection of possible projects is listed here. Highly interested students may also propose their own projects or continue an existing topic.

652 – Integrating Learning-Based Control with MPC and CPGs
Category: semester project, master project (full-time), internship
Keywords: Bio-inspiration, Control, Learning, Locomotion, Optimization, Robotics
Type: 40% theory, 60% software
Responsible: (MED 1 1024, phone: 37506)
Description: Recent years have shown impressive locomotion control of dynamic systems through a variety of methods, for example optimal control (MPC), machine learning (deep reinforcement learning), and bio-inspired approaches (CPGs). Given a system for which two or more of these methods exist, how should we choose which one to use at run time? Should this depend on environmental factors, e.g. the expected value of a given state? Can this help explain what exactly our deep reinforcement learning policy has learned? In this project, the student will use machine learning to answer these questions, and will integrate CPGs and MPC into the deep reinforcement learning framework (a minimal illustrative sketch of such a CPG layer follows this entry). The methods will be validated on systems including quadrupeds and model cars, first in simulation, with the goal of transferring the method to hardware. To apply, please email Guillaume with your motivation, your CV, and a brief description of your relevant experience (e.g. with machine learning, software engineering, etc.).

Last edited: 09/01/2024
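
As context for how a CPG layer might plug into such a learning framework, the following is a minimal, hypothetical sketch (not BioRob's actual codebase): four coupled phase oscillators, one per leg, whose frequency and amplitude could be modulated by a learned policy. All parameter names, the trot-gait phase offsets, and the foot-height mapping are illustrative assumptions.

```python
# Minimal CPG sketch: four coupled phase oscillators (one per leg) whose
# frequency and amplitude could be modulated by a learned policy.
# Parameter values and the trot-gait phase offsets are illustrative assumptions.
import numpy as np

class SimpleCPG:
    def __init__(self, dt=0.01):
        self.dt = dt
        self.phase = np.zeros(4)                     # oscillator phases [rad]
        self.amp = np.full(4, 0.05)                  # output amplitudes [m]
        # Desired phase offsets for a trot gait (diagonal legs in phase).
        self.offsets = np.array([0.0, np.pi, np.pi, 0.0])
        self.coupling = 2.0                          # coupling strength

    def step(self, freq, amp_cmd):
        """Advance the oscillators one time step.

        freq:    commanded frequency [Hz], e.g. one component of a policy action
        amp_cmd: commanded amplitudes, shape (4,)
        """
        self.amp = np.asarray(amp_cmd)
        dphase = 2.0 * np.pi * freq * np.ones_like(self.phase)
        # Diffusive coupling pulls the oscillators toward the desired gait offsets.
        for i in range(4):
            for j in range(4):
                dphase[i] += self.coupling * np.sin(
                    self.phase[j] - self.phase[i] - (self.offsets[j] - self.offsets[i])
                )
        self.phase = (self.phase + dphase * self.dt) % (2.0 * np.pi)
        # Map phases to vertical foot-height targets (swing above 0, stance at 0).
        return self.amp * np.maximum(0.0, np.sin(self.phase))

# Example: a fixed command standing in for a policy output.
cpg = SimpleCPG()
foot_heights = cpg.step(freq=2.0, amp_cmd=np.full(4, 0.05))
```

In such a setup, the policy would output (freq, amp_cmd) at every control step, and the resulting foot or joint targets would be tracked by low-level controllers; this is one common way of combining RL with CPGs, not a description of the specific framework used in this project.
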
697 – Teaching a Robot Dog New Tricks
Category: semester project, master project (full-time), internship
Keywords: C++, Computer Science, Control, Learning, Programming, Python, Quadruped Locomotion, Vision
Type: 20% theory, 20% hardware, 60% software
Responsible: (MED 1 1024, phone: 37506)
Description: As robots become more prevalent in human society, the number of interactions will increase, and good communication will be critical for successful human-machine collaboration. In this project, the student will develop a framework for human-robot interaction using both visual and audio feedback. Given a set of user-defined "tricks" (e.g. lie down, turn around, move left), how can we instruct the robot to perform a particular task? Can we also teach the robot a new task it does not yet know how to do? Communication will be done using both a camera mounted on the robot and a microphone. The three main tasks are 1) developing the motion library, 2) developing the visual interface, which uses human activity recognition software to map observed activity to the motion library, and 3) developing the voice-command interface (a minimal sketch of such a command-to-motion mapping follows this entry). To apply, please email Guillaume with your motivation, your CV, and a brief description of your relevant experience (e.g. with machine learning, software engineering, etc.).

Last edited: 04/12/2023
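
To illustrate what the motion-library interface of task 1 might look like, here is a small hypothetical sketch: a registry that maps recognized voice or gesture commands to trick behaviors. The class name, command strings, and method signatures are assumptions made for illustration, not part of the project specification.

```python
# Hypothetical motion-library sketch: recognized commands (from speech or
# gesture recognition) are looked up and dispatched to robot behaviors.
from typing import Callable, Dict

class MotionLibrary:
    def __init__(self):
        self._tricks: Dict[str, Callable[[], None]] = {}

    def register(self, name: str, behavior: Callable[[], None]) -> None:
        """Add or overwrite a trick, e.g. when teaching the robot a new one."""
        self._tricks[name.lower()] = behavior

    def execute(self, command: str) -> bool:
        """Run the trick matching a recognized command; return False if unknown."""
        behavior = self._tricks.get(command.lower())
        if behavior is None:
            return False
        behavior()
        return True

# Example usage with placeholder behaviors standing in for real controllers.
library = MotionLibrary()
library.register("lie down", lambda: print("executing: lie down"))
library.register("turn around", lambda: print("executing: turn around"))
library.execute("lie down")     # known trick -> behavior runs
library.execute("roll over")    # unknown trick -> returns False
```
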

Mobile robotics

651 – Autonomous Drifting on Scaled Vehicle Hardware
Category: semester project, master project (full-time), internship
Keywords: C++, Control, Electronics, Embedded Systems, Experiments, Learning, Optimization
Type: 10% theory, 60% hardware, 30% software
Responsible: (MED 1 1024, phone: 37506)
Description: Controlling vehicles at their limits of handling has significant implications for both safety and autonomous racing. For example, in icy conditions, skidding may occur unintentionally, making it desirable to safely steer the vehicle back to its nominal operating conditions. From a racing perspective, rally drivers drift around turns while maintaining high speeds on loose gravel or dirt tracks. In this project, the student will compare several approaches to high-speed, dynamic vehicle maneuvers: NMPC with a standard dynamic bicycle model, NMPC with a dynamic bicycle model plus GP residuals, NMPC with learned dynamics (e.g. a neural network), and a pure model-free reinforcement learning approach (a generic sketch of the dynamic bicycle model follows this entry). All approaches will be tested both in simulation and on a scaled vehicle hardware platform. To apply, please email Guillaume with your motivation, your CV, and a brief description of your relevant experience (e.g. with machine learning, software engineering, etc.).

Last edited: 09/01/2024
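
For reference, the dynamic bicycle model mentioned above is commonly written with state (x, y, yaw, vx, vy, yaw rate) and a linear tire model. The sketch below is a generic textbook formulation with illustrative parameter values for a scaled car; it is not the specific model or parameter set used on the lab's platform.

```python
# Generic dynamic bicycle model with a linear tire model.
# Mass, inertia, geometry, and cornering stiffness values are illustrative only.
import numpy as np

def bicycle_dynamics(state, delta, Fx,
                     m=3.0, Iz=0.05, lf=0.15, lr=0.15, Cf=30.0, Cr=30.0):
    """Continuous-time dynamics of a small car.

    state: [x, y, yaw, vx, vy, r] -- position, heading, body-frame velocities, yaw rate
    delta: front steering angle [rad]
    Fx:    longitudinal force at the wheels [N]
    """
    x, y, yaw, vx, vy, r = state
    # Tire slip angles (front axle includes the steering angle).
    alpha_f = np.arctan2(vy + lf * r, vx) - delta
    alpha_r = np.arctan2(vy - lr * r, vx)
    # Linear tire model for lateral forces.
    Fyf = -Cf * alpha_f
    Fyr = -Cr * alpha_r
    return np.array([
        vx * np.cos(yaw) - vy * np.sin(yaw),          # x_dot
        vx * np.sin(yaw) + vy * np.cos(yaw),          # y_dot
        r,                                            # yaw_dot
        (Fx - Fyf * np.sin(delta)) / m + vy * r,      # vx_dot
        (Fyf * np.cos(delta) + Fyr) / m - vx * r,     # vy_dot
        (lf * Fyf * np.cos(delta) - lr * Fyr) / Iz,   # r_dot
    ])

# Example: one Euler integration step from a forward-driving state.
state = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 0.0])
state = state + 0.01 * bicycle_dynamics(state, delta=0.1, Fx=1.0)
```

The GP-residual and learned-dynamics variants mentioned in the description would typically add a correction term to (or replace) these nominal dynamics inside the NMPC prediction model.
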

3 projects found.