Who am I?
My name is Clément Macadré and I am a student at Polytech Angers, in Angers, France.
This page provides an overview of my school projects and professional experiences in, hopefully, an entertaining way.
For a faster approach take a look at my resume.
First of all, I graduated in 2017 from Brequigny high school, obtaining a high school diploma in engineering science with a computer science option.
Then, I completed two years of preparatory courses (2017-2019) at the engineering school Polytech Angers, where I am now pursuing an engineering degree (2019-2022).
Given my background and my interests, I chose the specialization "Automation and computer engineering" available at Polytech Angers. I will present in the following sections what projects and professional experiences I was able to complete during this course.
Note that in 2014, after obtaining my "Certificate of general education", I took a 10-month sabbatical in the United States to learn English. As a result, when I took the TOEIC exam in 2020, I obtained a score of 990/990, which translates into a "C1" level of the CEFR ("Proficient user - Effective Operational Proficiency").
Finally, in my final year at Polytech Angers, I succeeded in joining a second program, allowing me to obtain a master's degree in dynamic systems and signals (DSS) in parallel with my engineering degree.
The objective of this master's degree is to provide a concrete introduction to research work in one of the research teams associated with the DSS master's degree: writing bibliographic studies, image processing, machine learning.
LARIS (2022) 5 months Internship
This internship is a continuation of the previous one (i.e. CEREMA 2021), during which I worked on a research topic proposed by Dr. Guyonneau, a member of the LARIS dynamic systems and optimization team.
Let's start by contextualizing the project: mobile robots use SLAM (Simultaneous Localization and Mapping) algorithms to estimate their pose in an unknown environment. Evaluating the performance of these algorithms is difficult, because it requires comparing their pose predictions with a ground truth, which is simply a more accurate measurement of the trajectory performed by the robot in space. Classically, this ground truth is obtained with pose-tracking cameras. But to cover a complete building, we would have to equip it with a multitude of cameras, which can be tedious.
What we propose in this research project is to perform pose tracking with the help of a landmark map: detecting these landmarks (cones, in this project) in the environment allows us to localize the robot.
In the figures on the left, we can observe the detection of cones inside a point cloud generated by a lidar. The clusters associated with the landmarks are shown in color, and all the eliminated points in white.
The green boxes are the estimated positions of the landmarks. The blue box is guaranteed to contain the pose of the robot at that moment.
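The cone-detection step described above can be sketched in Python. This is only an illustration, assuming a simple Euclidean clustering of a 2-D point cloud; the radius and cluster-size parameters are made up and the actual internship code is not reproduced here.

```python
import numpy as np

def euclidean_clusters(points, radius=0.3, min_size=5):
    """Group 2-D lidar points into clusters: points closer than
    `radius` to a cluster member join that cluster; clusters smaller
    than `min_size` are discarded as noise (the "white" points)."""
    remaining = set(range(len(points)))
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in remaining
                    if np.linalg.norm(points[i] - points[j]) < radius]
            for j in near:
                remaining.discard(j)
                cluster.add(j)
                frontier.append(j)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

# Two tight groups of points (two "cones") plus one isolated outlier:
pts = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [0.05, 0.05],
                [5, 5], [5.1, 5], [5, 5.1], [5.1, 5.1], [5.05, 5.05],
                [10, 10]])
print(euclidean_clusters(pts))  # two clusters; the outlier is dropped
```

Each surviving cluster can then be enclosed in a box to estimate a landmark position.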
Prior to this project, we carried out a state-of-the-art review that allowed us to obtain the result prediction shown in the image on the left. On the right, we observe the results at the end of this internship: in blue, the boxes guaranteed to contain the robot; in red, an estimation of the poses produced by a SLAM algorithm; and finally, in green, the pose of the robot recorded in the Gazebo simulation software.
In conclusion, this research topic aimed to provide a guaranteed ground truth based on interval analysis in a bounded-error context, using only lidar data.
CEREMA (2021) 4 months Internship
This internship took place at the Cerema, which joined forces with the Laris to design a robot capable of automatically taking localized lighting measurements in its environment. In summary, the RoMuLux project led by the Cerema aims to facilitate lighting measurements in public buildings subject to normative thresholds. My mission was to deliver a robot able to locate itself and move automatically through a building to carry out luminosity measurements. The final result is a map of the building interpolated with the measurements taken.
The first step of this project was to simulate the RoMuLux robot by taking into account the dimensions and properties of the robot, the method of displacement as well as the simulation of the sensors present on the robot: a lidar, an inertial unit and a light meter.
The second step of the project was to implement SLAM (Simultaneous Localization and Mapping) algorithms to build a map of the environment in which the robot evolves and to obtain its spatial coordinates simultaneously.
The images on the left show a mapping of the environment and a pose tracking of the robot using point clouds obtained with a lidar sensor and an inertial unit.
On the image above, we can see the final step, which consisted of developing an interface to plan the automatic trajectory of the robot. In red, we can see the path that the robot will automatically follow.
It was also possible to run SLAM algorithms and use our automated path planning in real conditions in several environments: the Iceparc of Angers and the SUAPS of Angers.
In conclusion, the RoMuLux project allowed me to use the ROS middleware under Linux and to experiment with SLAM algorithms. I was also able to create a simulation of the RoMuLux project with the Gazebo simulation software. Finally, I developed a feature-rich controller that lets the user plan a zigzag path in the Rviz interface and have the RoMuLux follow it.
Polytech Angers (2020) 3 months At-home Internship
In the context of the Covid-19 health crisis, internships abroad were cancelled for the year 2020. As a replacement, I was able to carry out a bibliographical study on the simulation of bird flocks. This study deals with the simulation of these systems and their applications: Boids, cellular automata, particle systems and neural network weight optimization. You can check out more here.
Warner Electric (2019) 1 month Internship
Warner Electric specializes in the design and production of brakes and clutches. It manufactures high-quality brakes for elevators and escalators, pallet trucks and hydraulic scaffolding. Smaller brake models are needed in the aviation industry for aircraft pilot seats and in the medical sector for mobile beds, MRIs and scanners. Finally, clutch-brakes can be found in the letter-sorting machines of post offices; these devices have the particularity of performing several dozen stops and restarts per second.
During this internship I had the opportunity to assemble an electromagnetic brake. Let's analyze the operation of this device with the help of the diagram below:
When the coil is energized, a magnetic field disengages the friction disk from the drive shaft assembly, allowing the motor to rotate freely. In the event of a power loss, or when the coil power supply is switched off, the field weakens and the springs between the friction disk and the armature press the friction material together, thus performing the braking function. The armature serves to dissipate the kinetic energy converted into thermal energy.
Another of my missions was to sandblast the outer surface of disks. The asperities created by the sandblasting allowed me to apply glue and fix friction strips on the disk. It is this disk that the electromagnetic field holds away from the armature connected to the rotating shaft. As explained above, when this magnetic field disappears, either intentionally or through a failure, the disk is pressed against the armature, which brakes it by friction.
Bibliographic report (2022)
This bibliographic report was intended to prepare the research project carried out in 2022. This research project, entitled "Set-membership approach for ground truth estimation: application to localization and mapping problems in mobile robotics", aims to provide a set-membership ground truth used for evaluating the performance of SLAM algorithms.
The goal of this study was to discover the usual methods for evaluating SLAM algorithms. We found that the classical method is to compare the pose estimates from these algorithms to a more accurate measurement of these poses: the ground truth. We also found that such a measurement is difficult to obtain, especially when exploring complex environments. It would therefore be interesting to develop a pose-tracking algorithm that uses a landmark map to locate the robot, since this method seems easier to implement. We then learned about interval analysis and its mechanisms. Finally, we defined the Constraint Satisfaction Problem (CSP) to solve when implementing the interval analysis pose tracking algorithm (IAPT).
On the image to the left, we can see the results expected at the end of the internship: a series of boxes (in blue), each guaranteed to contain the pose of the robot at a given moment. These boxes were computed using interval analysis, the landmark map and the analysis of a point cloud obtained from a lidar sensor. The same point clouds were then passed to several SLAM algorithms, which estimated the path of the robot (red and green). Finally, we can use the computed guaranteed ground truth to evaluate the performance of the SLAM algorithms.
On the image to the right, we can see the type of constraint satisfaction problem that the IAPT algorithm will try to solve. The robot detects an obstacle contained in the box (W), and the lidar measurements of angle (gamma) and distance (y) are affected by uncertainties. Thanks to interval analysis, we can propagate all these uncertainties in order to compute the box (q) guaranteed to contain the pose of the robot at this moment.
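To illustrate how interval analysis propagates bounded errors, here is a minimal Python sketch of the constraint q = W - y·(cos gamma, sin gamma). All numbers are made up, and the hand-rolled interval operations are purely illustrative: a real implementation would use an interval library (e.g. pyIbex/Codac), whose enclosures of cos and sin are rigorous, unlike the sampled approximation below.

```python
import math

# Minimal interval arithmetic; intervals are (lower, upper) tuples.
def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_neg(a): return (-a[1], -a[0])
def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))
def i_cos(a):
    # crude enclosure by sampling; NOT guaranteed, unlike a real
    # interval library, but adequate for this narrow illustration
    s = [math.cos(a[0] + t * (a[1] - a[0]) / 100) for t in range(101)]
    return (min(s), max(s))
def i_sin(a):
    s = [math.sin(a[0] + t * (a[1] - a[0]) / 100) for t in range(101)]
    return (min(s), max(s))

# Landmark box W, range y and bearing gamma with bounded errors:
W = ((4.9, 5.1), (2.9, 3.1))   # landmark position box [m]
y = (3.95, 4.05)               # measured distance [m]
gamma = (0.78, 0.80)           # measured bearing [rad]

# Constraint: q = W - y * (cos gamma, sin gamma)
qx = i_add(W[0], i_neg(i_mul(y, i_cos(gamma))))
qy = i_add(W[1], i_neg(i_mul(y, i_sin(gamma))))
print(qx, qy)  # box containing the robot position
```

Every measurement uncertainty widens the resulting box (q), which is exactly the guaranteed enclosure the IAPT algorithm computes.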
This preparatory work greatly accelerated the development of the research project carried out during the 2022 internship period. This discovery of the state of the art involved reading numerous scientific articles in order to be up to date on the latest practices. Finally, this project allowed me to familiarize myself with interval analysis and the formalization of constraint satisfaction problems.
Machine Learning (2022)
This machine learning project focuses on the prediction of the evolution of ischemic lesions caused by a stroke. A stroke is caused by the obstruction of an artery in the brain. Perfusion-weighted magnetic resonance imaging (MRI) is typically used to obtain information about blood micro-circulation in the tissue. The area affected by the stroke (the ischemic lesion) is hypoperfused because it is blocked by a blood clot. The objective of this project is thus to analyze the perfusion MRI images in order to judge the severity of the stroke and to help a doctor decide whether a surgical operation is necessary.
The database is therefore populated with scans composed of "healthy" and "infarcted" pixels. We chose an SVM algorithm to solve this classification problem. Note that the data were rebalanced to guarantee equal proportions of healthy and infarcted tissue, since the SVM model is sensitive to class imbalance.
The SVM algorithm allows us to deduce the probability that a patch belongs to a certain class, and consequently to create a probabilistic map of infarctions, as can be seen on the image on the left. The colors assigned to the heatmap are arbitrary. We can see from the probabilistic maps that the scans have been pre-processed, since the density of the "healthy" areas is lower than that of the "infarcted" areas. Finally, with the necessary medical knowledge, such a heatmap could be an important tool to help the surgeon choose which type of treatment to apply to patients.
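As an illustration of this setup, here is a small scikit-learn sketch on synthetic, already-balanced data; the real features came from perfusion MRI patches and are not reproduced here, so the feature values and dimensions below are made up.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the perfusion features: two balanced
# classes of 4-dimensional "patches".
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(200, 4))
infarcted = rng.normal(2.5, 1.0, size=(200, 4))
X = np.vstack([healthy, infarcted])
y = np.array([0] * 200 + [1] * 200)

# probability=True enables probability estimates, which is what
# lets us build a probabilistic map from per-patch P(infarcted).
model = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
proba = model.predict_proba(infarcted[:5])[:, 1]  # P(infarcted)
print(proba.round(2))
```

Coloring each patch of the scan by its predicted probability yields the heatmap described above.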
Since for this project we have access to a ground truth (the actual classification of each patch), it is interesting to evaluate the type of errors made by our model, and in particular to estimate the rate of false negatives. We therefore create a confusion map, observable on the two images below.
The black and green dots represent a correct classification by the model. On the other hand, we notice the presence of false positives on the outside of the brain, which aggravate the diagnosis, since the model suggests that the extent of the damaged tissue is greater than it really is. This type of error is nevertheless less serious than a false negative.
Finally, to validate the model, we use several validation criteria. The average rate of correct predictions is monitored while training the model. The ROC curve measures the performance of a binary classifier (which is the case here, since we try to associate each patch with the Healthy or Infarcted class). The confusion matrix informs us about the type of errors made: in a screening test, it is important to estimate the false negative rate (Type 2), a much more serious error than a false positive (Type 1). Another diagnostic tool is the learning curve, used to detect learning problems such as an underfit or overfit model, and to determine whether the training and validation data sets are sufficiently representative.
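These criteria can be computed in a few lines; the labels and scores below are made up for illustration, not results from the project.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Illustrative ground truth and classifier scores for 8 patches:
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.8, 0.3, 0.9, 0.7, 0.4, 0.95])
y_pred = (y_score >= 0.5).astype(int)

# The confusion matrix separates the two error types:
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives (Type 1): {fp}")  # over-diagnosis
print(f"false negatives (Type 2): {fn}")  # the costly error in screening

# ROC AUC summarizes the ROC curve in one number (1.0 = perfect):
print(f"ROC AUC: {roc_auc_score(y_true, y_score):.2f}")
```

In a screening context one would typically tune the decision threshold to drive the false negative count down, even at the cost of more false positives.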
To conclude, this project allowed me to realize the importance of the dataset quality in a machine learning application and the choice of appropriate criteria to evaluate the performance of our model.
RoMuLux Project (2021)
This project was an introduction to the ROS middleware as well as to the Gazebo simulation software and the Rviz visualization software, in order to prepare for the 2021 internship at the Cerema. The goal was then to experiment with the RoMuLux robot and its lidar sensor to map several environments using SLAM algorithms.
Bibliographic report (2020)
In 2020, as a replacement for an internship abroad cancelled due to the coronavirus outbreak, I carried out a literature search instead.
This bibliographical report is a popularization and synthesis exercise which aims to provide a first experience of scientific research on a subject of my choice. Thus, based on the knowledge acquired during my first year of engineering studies in terms of simulation of complex systems and a certain curiosity for biomimicry, I decided to present and then carry out several approaches to the simulation of a bird flock as well as an application in neural networks.
I dedicated a whole section of this website to this report, which you can find here. In this exercise, I focused on the simulation of flocks of birds, schools of fish, or any system capable of demonstrating emergent behaviors, using theoretical approaches such as Reynolds's Boid model, cellular automata and Kennedy's particle systems to create autonomous simulations. In fact, I discovered that a 3D Unity simulation of a flock of birds can be achieved with only three rules and some environmental pressure. I also learned about cellular automata, starting with Conway's Game of Life and progressing to the creation of my own cellular automaton simulating flock behaviors. Next, I experimented with an optimization technique using particle systems that was originally based on the behavior of a flock foraging for food. Finally, I visualized this optimization technique through a simulation and then used it to train a neural network.
Throughout the creation of this report, I had the opportunity to use different programming languages:
C#: I used C# to create a flocking simulation that you can check out on this page. (All the code is available by clicking the link under the simulations.)
The creation of the particle system optimization was inspired by the behavior of a flock foraging for food.
Particle system optimization can also be used in the learning phase of a neural network, replacing the usual technique of backpropagation of the error gradient. Indeed, the particles can move in an N-dimensional space, allowing the optimization of N parameters. In the image above, the XOR logic is solved using a neural network trained with the particle system technique; more on that here.
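The particle system optimization described above can be sketched as follows. This is a generic, minimal particle swarm optimizer minimizing a toy function, not the exact code used to train the XOR network; the weights and swarm size are arbitrary.

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5, 5)):
    """Minimal particle swarm optimization: each particle is pulled
    toward its own best position and the swarm's global best,
    mimicking a flock converging on a food source."""
    random.seed(0)
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function; the optimum is at the origin.
best, val = pso(lambda p: sum(x * x for x in p), dim=3)
print(val)  # close to 0
```

To train a neural network this way, `f` would instead be the network's error on the training set, with each particle encoding one full set of weights.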
C: When experimenting with cellular automata, I created a version of Conway's Game of Life as a console application in C. You can find the source and the executable here.
The Game of Life was created by John Horton Conway in 1970. It takes place on a two-dimensional grid similar to a chessboard and progresses through generations. At each iteration, a cell counts how many of its 8 neighboring cells are alive: a live cell survives if it has 2 or 3 live neighbors, and a dead cell comes to life if it has exactly 3. Each cell thus evolves according to a limited "sphere of perception", and most importantly, the rules governing its evolution are remarkably simple.
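These rules fit in a few lines; here is a minimal Python sketch (my original project was in C) representing the world as a set of live cells.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count, for every cell adjacent to a live cell, its live neighbors.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth with exactly 3 neighbors; survival with 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                    # {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)   # True
```

Using a set of live cells instead of a fixed grid makes the world effectively unbounded.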
Interactive Map (2020)
I had the opportunity to take part in the creation of a website for Angers' international week. The goal was to create an interactive map of Angers' landmarks: restaurants, bars and places of interest. We embedded a map using the OpenStreetMap initiative and created profiles for each landmark using information stored in an SQL table. My contribution consisted of setting up the SQL database by creating an online form that you can check out here. Unfortunately, the event was cancelled due to the coronavirus outbreak and the project stopped there, but it was still a great opportunity to put into practice what I had learned in programming languages such as PHP, CSS and SQL.
PHP: I used PHP to create this form using prepared statements and regular expressions to ensure the integrity of the database and to make sure it was filled out correctly.
CSS: The front-end of the website was designed using CSS.
SQL: Using SQL, I set up and managed the database containing the information visible on the website.
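The prepared-statement idea used in the PHP form can be illustrated in Python with the standard sqlite3 module; the table name, columns and data below are made up, not the project's real schema.

```python
import sqlite3

# In-memory database standing in for the site's landmark table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE landmark (name TEXT, category TEXT, address TEXT)")

# A parameterized query plays the same role as a PHP prepared
# statement: user input is bound as data, never spliced into the
# SQL string, which blocks SQL injection.
user_input = ("Example Bistro", "restaurant", "1 Example Street")
con.execute("INSERT INTO landmark VALUES (?, ?, ?)", user_input)

rows = con.execute("SELECT name FROM landmark WHERE category = ?",
                   ("restaurant",)).fetchall()
print(rows)  # [('Example Bistro',)]
```

Regular-expression validation (as in the PHP form) would happen before the insert, rejecting malformed names or addresses early.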
This project required the coordination of different actors to progress, which was a valuable experience in the development of my teamwork skills.
Image Processing (2019)
This project aimed at carrying out image analysis for the control of a conveyor belt. The objective was to take pictures of the objects moving on the conveyor and determine their shape. Then, depending on whether an object had a circular or rectangular shape, it would be ejected from the conveyor by actuators or left on the belt.
We had at our disposal a Langlois PSY4001 conveyor model connected to a Siemens S300 PLC and an Optris PI infrared camera.
LIST: For the programming of the PLC, we used the Simatic Manager software, in which we had to use the LIST language (the Siemens instruction list).
LADDER: I also used the LADDER language in practical work at school for the control of carriages, elevators, barriers, gates, etc.
The infrared camera was placed above the conveyor to take pictures of the objects it transported. Note that we heated the objects so that they stood out from their environment.
Matlab: We used Matlab for our image processing, arranging the pixels of the image in a matrix to which we applied different filters. Indeed, Matlab is particularly efficient for matrix operations.
Our goal was to clear the image leaving only its outlines, as you can see on the images below:
The first step of the process is to convert the image to grayscale so that each pixel is encoded as a gray level. Next, we apply a low-pass filter several times to reduce the noise in the image.
After this filtering, we can finally clean the image by comparing each pixel value to a given threshold: pixels below the threshold are set to 0, and all values above it are set to 1.
Using these normalized values, we can empty the inside of the object to leave only its contours. Finally, we compute the center of gravity of the image, and the ratio between the minimum and maximum distances from a contour pixel to this center. The closer this ratio is to 1, the more the pixels are clustered on a circle around the center of gravity.
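The distance-ratio criterion can be sketched in Python with NumPy (the project used Matlab); the binary disk and rectangle below are synthetic stand-ins for the thresholded camera images, and the decision thresholds are illustrative.

```python
import numpy as np

def circularity(binary):
    """Ratio of min to max distance from contour pixels to the
    centroid: close to 1 for a circle, clearly lower for a rectangle."""
    ys, xs = np.nonzero(binary)
    cy, cx = ys.mean(), xs.mean()
    # Keep only contour pixels: those with at least one zero neighbor.
    contour = []
    for y, x in zip(ys, xs):
        patch = binary[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if patch.size < 9 or (patch == 0).any():
            contour.append((y, x))
    d = [np.hypot(y - cy, x - cx) for y, x in contour]
    return min(d) / max(d)

n = 101
yy, xx = np.mgrid[0:n, 0:n]
disk = ((yy - 50) ** 2 + (xx - 50) ** 2 <= 40 ** 2).astype(int)
rect = np.zeros((n, n), int)
rect[30:71, 10:91] = 1

print(round(circularity(disk), 2))   # close to 1 -> circular
print(round(circularity(rect), 2))   # well below 1 -> rectangular
```

A simple threshold on this ratio then decides whether the actuator ejects the object.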
I was then able to efficiently identify the shapes of the objects on the conveyor. Note that the image processing could also have been done with machine learning, using pre-trained models or by building a convolutional neural network from scratch. A convolutional network works by processing an image multiple times to extract features, applying self-learned filters and downsampling the image into smaller blocks.
Below is an application of these techniques using Lenna as a test image. Look at the controls below the image to see the effects of different filters and two ways to see what max-pooling does to an image.
Simulation controls:
Press 's' to go through different filters
'm' to apply the same filter multiple times to the filtered image
'r' to apply a random filter
'p' to apply a maxPooling step to the image with the resulting size change
'q' to apply a maxPooling step to the image with the resulting resolution change
'e' to reset to the original image
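The max-pooling step behind the 'p' control can be sketched as follows; this is a generic 2x2 max pooling in Python with NumPy, not the simulation's actual code.

```python
import numpy as np

def max_pool(img, k=2):
    """k*k max pooling: each k*k block is replaced by its maximum,
    shrinking the image while keeping the strongest responses."""
    h, w = img.shape
    h, w = h - h % k, w - w % k          # crop to a multiple of k
    blocks = img[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.max(axis=(1, 3))

img = np.array([[1, 2, 0, 1],
                [3, 4, 1, 0],
                [0, 1, 5, 6],
                [1, 0, 7, 8]])
print(max_pool(img))
# [[4 1]
#  [1 8]]
```

The 'q' control's variant, which keeps the original resolution, can be reproduced by expanding each pooled value back over its block, e.g. `np.kron(max_pool(img), np.ones((2, 2), int))`.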
Laminar Flow Wind Tunnel (2018)
The aim of this project was to investigate in depth a phenomenon discovered while studying air flow in a fluid mechanics course. I was asked to measure the aerodynamic lift and drag of a wing profile for different angles of attack. I found that beyond a certain angle of incidence between the wing and the airflow generated by the wind tunnel, the wing suddenly lost most of its aerodynamic lift, i.e. its ability to "fly". This corresponds in aviation to the stall phenomenon, which is the main cause of fatal accidents in commercial aviation.
To explain simply how a wing stalls, I built a wind tunnel and injected thick steam into it to visualize the action of the air on the wing. Indeed, during the stall, one can visually notice the turbulence in the air:
Observe on the drawing above the airflow over the entire surface of the wing.
As the angle of attack increases, the upper airflow begins to separate from the trailing edge of the wing, creating turbulence in its wake. The lift still increases with the angle of attack.
Finally, the aircraft stalls when the critical angle of attack, specific to the lifting surface, is exceeded: the upper airflow suddenly separates from the wing, dangerously reducing the lift.
The wing (4) was 3D-printed to match the shape of a real wing profile: the NACA 4418, used on real aircraft. I needed to bring steam into the tunnel, so I placed a fan (5), mounted in reverse, at its exit; this sucks the steam inside with little disturbance. I placed a 3D-printed honeycomb structure (1) at the steam inlet in the hope of obtaining a laminar flow, critical to distinguishing the layers of airflow. In order to observe the steam flows more easily, a row of LEDs (2) was placed on top of the tunnel. To measure the wind speed, I placed a Pitot tube (3) on the bottom of the tunnel, where it generates the least turbulence.
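A Pitot tube measures wind speed through the dynamic pressure difference between its two ports. A small sketch of the standard formula (the pressure value below is illustrative, not a measurement from my tunnel):

```python
import math

def pitot_airspeed(delta_p, rho=1.225):
    """Airspeed from the Pitot tube's dynamic pressure:
    delta_p = 1/2 * rho * v**2  =>  v = sqrt(2 * delta_p / rho),
    with rho the air density in kg/m^3 (1.225 at sea level, 15 degC)."""
    return math.sqrt(2.0 * delta_p / rho)

# e.g. a 60 Pa pressure difference corresponds to roughly 10 m/s:
print(round(pitot_airspeed(60.0), 1))
```

In practice the pressure difference is read by a differential pressure sensor, so this conversion is all that is needed to display the tunnel's wind speed.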
I could not create a perfect laminar flow, which would have allowed us to see the disturbances gradually forming on the upper surface of the wing rather than vortices forming further down, past the wing's trailing edge.
Look at the image on the right side of the video to see what I would have hoped to observe.
1- Unsticking of the airflow from the wing surface
2- Formation of turbulence and significant loss of lift
This project was controlled with an Arduino board: the fan speed varied according to a potentiometer value, the LEDs were turned on and off with a switch and their color was chosen with another potentiometer, and the angle of attack of the wing was controlled by a servomotor driven by yet another potentiometer.
This project involved a lot of DIY, which was a challenge but made the whole project fun to pursue.