This is the first unit where I don’t have any programming examples of my own; it covers three chapters from the textbook and a section in the workbook.
Table of Contents
- Reading – Chapter 7: What’s Going On
- Reading – Chapter 8: Switch on the Light
- Reading – Chapter 9: Sonars, Lasers and Cameras
- The Robotics Primer Workbook – Sensors
- Conclusion
Reading – Chapter 7: What’s Going On
This chapter explored the importance and variety of sensors used in robotics, with emphasis on the distinction between proprioception and exteroception. Sensors are how a robot perceives both the world and itself, and they are vital components of the robot. However, sensors in and of themselves are very limited in the information they can provide. Perception is an equally important problem, and indeed may be one of the greatest challenges in robotics.
Food For Thought 7-1
Uncertainty is not much of a problem in computer simulations, which is why simulated robots are not very close to the real, physical ones. Can you figure out why?
In simulations, queries can be made between objects directly using programming methods. To build a system that required processing and perception in a way similar to the real world would be very challenging: you would have to create a fully realized world with complex interactions. It is the processing of sensor input that is the big challenge for robots, and it is very hard to build a simulation that accurately reflects the difficulty of not only sensing but also perceiving such an environment.
Food For Thought 7-2
Some robotics engineers have argued that sensors are the main limiting factor in robot intelligence: if only we had more, smaller, and better sensors, we could have all kinds of amazing robots. Do you believe that is all that is missing? (Hint: If that were so, wouldn’t this book be much thinner?)
If sensors included perception modules, smaller and better sensors would indeed simplify robotics. I still hold that the biggest ongoing challenge in robotics is perception. So if you had sensors with built-in AI perception modules that could identify and provide details about objects, many robotics tasks would become much easier to accomplish. But this leads to the next issue, sensor fusion: perception can be more accurate and meaningful when multiple inputs are combined, so even with perception-capable sensors we would face a new challenge when attempting to fuse input from multiple sensors. At the end of the day, I believe robotics books would be different, but not thinner, than they currently are.
Food For Thought 7-3
Being able to sense the self, being self-aware, is the foundation for consciousness. Scientists today still argue about what animals are conscious, and how that relates to their intelligence, because consciousness is a necessary part of higher intelligence of the kind people have. What do you think it will take to get robots to be self-aware and highly intelligent? And if someday they are both, what will their intelligence be like, similar to ours or completely different?
Self-awareness is an interesting philosophical concept. I would point out that there have been ongoing questions about whether we as humans actually have self-awareness or just a concept of self-awareness, and whether it even matters which it is. This seems related to the question of free will: are humans actually free in their decision making, or are we just responding to our environment in ways determined by that environment, our genetics, and our culture? The question of free will has followed us for centuries, and while modern thinking leans towards the importance of volition and free will in humans, the question is not settled. So the way I see this question is more akin to: will robots reach a point where they ask philosophical questions about their existence? And I would say that yes, a robot could be programmed and built to be complex enough to ask such questions, but does that mean the questions are actually of value? Will those questions be as valid as humans’ philosophical questions? I don’t know.
I do know that in AI there is a concept known as the Turing Test, under which a computer (robots included) can be said to possess artificial intelligence if it can convincingly mimic human responses under specific conditions. This could be extended to include philosophical questions and questions of self-awareness. It does suggest to me, though, that while the underlying code and processes will be different, digital versus biological, the resulting intelligence would be recognizable and a reflection of our own.
Reading – Chapter 8: Switch on the Light
This chapter focused on the differences between passive and active sensors, and explored some of the ways that light sensors can be utilized in robotics. Photocells are a type of resistive sensor that alter their resistance in response to the amount of available light; other resistive sensors were also discussed. Beyond active versus passive sensors, the chapter also considered the difference between simple and complex sensors.
Food For Thought 8-1
Why might you prefer a passive to active sensor?
Typically a passive sensor will be simpler: cheaper, easier to use, and requiring less processing. This might not always be the case, but it is the general rule. Passive sensors are also less likely to cause interference in other devices, though they might receive interference from active sensors in the environment.
Food For Thought 8-2
Are potentiometers active or passive sensors?
I would consider potentiometers to be passive sensors. They can be set by interaction from their environment and can provide a range of return values, but they do not actively affect their environment.
Food For Thought 8-3
Our stomach muscles have stretch receptors, which let our brains know how stretched our stomachs are, and keep us from eating endlessly. What robot sensors would you say are most similar to such stretch receptors? Are they similar in form (mechanism of how they detect) or function (what they detect)? Why might stretch receptors be useful to robots, even without stomachs and eating?
I think stretch receptors would be most similar to a resistive sensor: as the stomach stretches, the returned value would shift. There is a range of returned values (hungry to full), which is best reflected by a resistive sensor. The mechanism of how they work is going to be very different between a stomach and a robot, and for the robot the use will be different than tracking food. The closest robotic example of a stretch receptor I can think of is an old piece of VR tech called the P5 Glove, which used resistive bend sensors to detect how bent a user’s finger was. That same type of sensor technology could be adapted to detect how full a bag is.
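To make that concrete, here is a minimal Arduino sketch of how a resistive stretch or bend sensor might be read. The pin choice and the map() endpoints are assumptions; the endpoints would need calibrating against the relaxed and fully stretched states of the actual sensor.

```cpp
// Minimal sketch: reading a resistive stretch/flex sensor wired as one half
// of a voltage divider. Pin and calibration endpoints are assumptions.
const int FLEX_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(FLEX_PIN);           // 0-1023 on a 10-bit ADC
  // Map the raw reading onto a 0-100 "how stretched" scale; 300 and 700
  // stand in for the calibrated relaxed and fully-stretched readings.
  int stretchPct = map(raw, 300, 700, 0, 100);
  stretchPct = constrain(stretchPct, 0, 100);
  Serial.println(stretchPct);
  delay(100);
}
```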
Reading – Chapter 9: Sonars, Lasers and Cameras
This was an interesting chapter that dealt with complex sensors such as sonar and vision. The short version is that vision is a very complex problem, whether we are using lasers, optical or audible information to “see” our environment. There are some ways we can sidestep parts of the problem, and there are many strategies used to simplify the challenge so that desired tasks can be achieved by robots.
Food For Thought 9-1
What is the speed of sound in metric units?
At 20 degrees Celsius, the speed of sound is about 343 m/s or 1,235 km/h.
Food For Thought 9-2
How much greater is the speed of light than the speed of sound? What does this tell you about sensors that use one or the other?
The speed of light is about 3.0 × 10^8 m/s and is not meaningfully affected by the temperature of the environment. That is nearly six orders of magnitude greater than the speed of sound (a factor of roughly 875,000). This tells me that the response from an optical sensor is going to be far faster than a response from an audible sensor. In robotics and modern electronics this amounts to feedback from optical sensors being effectively instantaneous, while an audible response may be measured in milliseconds.
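As a quick worked example, a time-of-flight sensor measures the round trip, so t = 2d/v. For an object 1 m away, sound takes roughly 2/343 ≈ 5.8 ms to return, while light takes about 2/(3.0 × 10^8) ≈ 6.7 ns; that nanosecond timescale is why laser rangefinders need far more specialized timing hardware than sonars do.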
Food For Thought 9-3
What happens when multiple robots need to work together and all have sonar sensors? How might you deal with their sensor interference? In chapter 20 we will learn about coordinating teams of robots.
Multiple sonar sensors active at the same time will be prone to interference. To deal with this, the robots would need some way to communicate how many of them are in the area, as well as reasonably synchronized clocks, so that they can stagger their sonar activations and keep from interfering with each other.
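Here is a minimal sketch of that staggering as a time-slot (TDMA-style) scheme, where each robot only fires its sonar during its own slot. ROBOT_ID, NUM_ROBOTS, the slot length, and the trigger pin are all assumptions, and this assumes the robots’ clocks have already been synchronized by some outside mechanism.

```cpp
// Time-slot staggering for multiple sonars: each robot pings only in its
// own slot. All constants are assumptions; clocks are assumed synchronized.
const unsigned long ROBOT_ID = 0;    // unique per robot, 0..NUM_ROBOTS-1
const unsigned long NUM_ROBOTS = 3;
const unsigned long SLOT_MS = 50;    // one ping window per robot
const int TRIG_PIN = 9;              // assumed wiring

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
}

void loop() {
  // Whose turn is it right now?
  unsigned long slot = (millis() / SLOT_MS) % NUM_ROBOTS;
  if (slot == ROBOT_ID) {
    // Fire one 10 microsecond trigger pulse inside our slot.
    digitalWrite(TRIG_PIN, HIGH);
    delayMicroseconds(10);
    digitalWrite(TRIG_PIN, LOW);
    delay(SLOT_MS);                  // at most one ping per slot
  }
}
```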
Food For Thought 9-4
Besides using time-of-flight, the other way to use sonars is to employ the Doppler shift. This involves examining the shift in frequency between the sent and reflected waves. By examining this shift, one can very accurately estimate the velocity of an object. In medical applications, sonars are used in this way to measure blood flow, among other things. Why don’t we use this in robotics?
There are two primary reasons I can think of. Firstly, the calculation and processing needed to interpret sonar in this manner is likely to be much greater than just timing a ping. Secondly, I believe it would be difficult to use the Doppler effect for measurement at angles approaching 90 degrees; it is most useful for detecting motion directly toward or away from the receiver. In robotics we will often encounter environments of unknown layout and dimension, and this makes it much more difficult to interpret the returned Doppler shift.
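For reference, the shift for a wave reflected off a moving object is approximately Δf = 2·f·v·cos(θ)/c, where f is the emitted frequency, v the object’s speed, θ the angle between its motion and the beam, and c the speed of sound. A 40 kHz transducer watching an object approach head-on at 1 m/s would see a shift of about 2 × 40,000 × 1 / 343 ≈ 233 Hz, and the cos(θ) term is exactly why the measurement collapses as the angle approaches 90 degrees.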
Food For Thought 9-5
Since two eyes are much better than one, are three eyes much better, or even any better than two?
The big advantage of two eyes over one is that they enable stereo vision, which effectively adds depth information. Three or more eyes would not add a new kind of vision beyond stereo, but they may increase the field of view or allow for more accurate depth sensing.
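The depth information comes from disparity: for two cameras with focal length f and baseline B, a feature shifted by disparity d between the two images lies at depth Z = f·B/d. A third camera adds a second, differently sized baseline, which mostly helps with accuracy and with resolving matches that are ambiguous for a single pair, rather than adding fundamentally new information.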
The Robotics Primer Workbook – Sensors
In this section, we get to spend some time working with a variety of sensors to learn more about their use, and what type of information they produce.
Exercise 1: Sensors & Levels of Processing
Here we are asked to describe the difference between an exteroceptive and proprioceptive sensor. We are then asked to list the sensors available in our kit and determine if they are exteroceptive or proprioceptive and to organize them according to increasing level of complexity.
Exteroceptive sensors collect information from their environment and provide information about the external world. Proprioceptive sensors are used to collect and track internal information, such as battery level and position of joints or limbs.
The SIK includes the following sensors, with a simple polling sketch after the list:
- 4 x Tactile buttons – Exteroceptor; indicates whether the button has been pushed, with an open or closed output.
- Potentiometer – Exteroceptor; indicates a value in a range, depending on how it has been set.
- Photocell – Exteroceptor; indicates the light level as a value in a range.
- TMP36 Temp Sensor – Exteroceptor; indicates the temperature of the environment in a range.
- Ultrasonic Distance Sensor – Exteroceptor; indicates the distance to an object in its line of sight, output as a digital pulse whose timing corresponds to distance.
- SparkFun Motor Driver – Proprioceptor; I need to look at the manual for this driver to be sure, but many motor setups have the ability to track how much rotation has occurred and what the current position of the motor is (at least on 3D printers).
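Since this unit is otherwise light on code, here is a minimal Arduino sketch of how several of these could be polled. All pin assignments are assumptions and would need to match the actual wiring.

```cpp
// Polling a few of the SIK sensors listed above. Pins are assumptions.
const int BUTTON_PIN = 2;   // tactile button (digital, open/closed)
const int POT_PIN    = A0;  // potentiometer (analog range)
const int PHOTO_PIN  = A1;  // photocell in a voltage divider (analog range)
const int TEMP_PIN   = A2;  // TMP36 (analog)

void setup() {
  Serial.begin(9600);
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // button reads LOW when pressed
}

void loop() {
  bool pressed = (digitalRead(BUTTON_PIN) == LOW);
  int pot   = analogRead(POT_PIN);    // 0-1023
  int light = analogRead(PHOTO_PIN);  // 0-1023, depends on divider resistor
  // TMP36: 10 mV per degree C with a 500 mV offset, assuming a 5 V reference.
  float tempC = (analogRead(TEMP_PIN) * 5.0 / 1023.0 - 0.5) * 100.0;

  Serial.print(pressed); Serial.print('\t');
  Serial.print(pot);     Serial.print('\t');
  Serial.print(light);   Serial.print('\t');
  Serial.println(tempC);
  delay(250);
}
```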
Exercise 2: Infra-red Sensor
This exercise uses an infra-red sensor, which we unfortunately do not have access to. The next section will deal with the ultrasonic sensor that we do have, but for this section I can only reflect on what is presented in the textbook and workbook. In this exercise, the infra-red sensor is explored, calibration is completed, and then the sensor is used for obstacle avoidance. The calibration process seems fairly straightforward, with the focus on calibrating the returned voltage to determine the range reading. Unfortunately the returned voltage is not linear with range, but there is a bit of mathematical trickery that can convert this output to a linear result the robot can use. Implementing and tuning this for the individual sensor is the calibration process.
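As a sketch of what that trickery might look like: Sharp-style IR rangers have a roughly inverse voltage-to-distance curve, so range can be approximated as k/(reading + b). The constants below are placeholders to be found by calibrating against known distances, not values from any datasheet.

```cpp
// Inverse-curve linearization for a Sharp-style IR ranger.
// Pin and all constants are placeholder assumptions pending calibration.
const int IR_PIN = A0;
const float K = 6787.0;   // scale constant (placeholder)
const float B = -3.0;     // reading offset (placeholder)
const float D = 4.0;      // distance offset in cm (placeholder)

float irDistanceCm() {
  int raw = analogRead(IR_PIN);
  return K / (raw + B) - D;   // approximately linear in distance
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(irDistanceCm());
  delay(100);
}
```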
Exercise 3: Sonar Sensor
This is again an exercise in obstacle avoidance. There is once more a calibration process for determining the range read by the sensor, though in this case the returned value is not a voltage but two pulses, where the timing between them (the pulse width) correlates with distance. To calibrate this sensor, the pulse width reading must be tuned to accurately reflect the distance measurement. In this exercise and the previous one, the robot is to turn 90 degrees when it “bumps” or gets too close to the wall.
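A minimal version of that timing read, assuming an HC-SR04-style sensor and made-up pin numbers, might look like this:

```cpp
// Time-of-flight read for an HC-SR04-style ultrasonic ranger.
// Distance = (echo time / 2) * speed of sound, about 0.0343 cm per
// microsecond at room temperature. Pin numbers are assumptions.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Fire a 10 microsecond trigger pulse.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // The echo pin goes HIGH for a time proportional to the round trip.
  unsigned long us = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout
  float cm = (us / 2.0) * 0.0343;
  Serial.println(cm);
  delay(100);
}
```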
Exercise 4: Infra-red and Sonar Wall Following
In this exercise we are asked to use both sensors to create a robot that can follow a wall. To do this, I would expect we need two sensors pointed 90 degrees from each other. The robot would proceed in a straight line until it found a wall. On finding the wall, the robot would orient itself so that one sensor continues to track the wall while the other points along it. At this point the first sensor, which we will call the wall sensor, should be reading a value X, where X is the minimum distance between the sensor and the wall (say, 1 cm), while the other, the path sensor, reads greater than X. The robot can then move along the wall, making slight adjustments to keep the wall sensor reading approximately X. If the path sensor’s reading approaches X while the wall sensor is still at X, the robot has reached a corner and will need to turn to track along the new wall. These parameters will need tuning, and the response around X should be graded so that the magnitude of the correction can increase or decrease depending on the situation. A sketch of this control step follows.
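Here is a minimal sketch of that adjustment logic as a proportional controller. The setpoint X, the corner threshold, and the gain are tuning assumptions, and the sensor reads and motor outputs are left as parameters so only the decision logic is shown.

```cpp
// One wall-following control step as a proportional controller.
// All constants are tuning assumptions; speeds are arbitrary units.
struct MotorCmd { int left; int right; };

const float X = 10.0;        // target wall distance in cm (assumed)
const float CORNER = 12.0;   // path-sensor threshold for "corner ahead"
const float KP = 2.0;        // proportional gain, to be tuned

MotorCmd wallFollowStep(float wallCm, float pathCm) {
  if (pathCm < CORNER) {
    return { -80, 80 };      // corner reached: spin away from the wall
  }
  float error = wallCm - X;  // > 0 means drifting away from the wall
  int correction = (int)(KP * error);
  return { 100 + correction, 100 - correction };  // differential steering
}
```

Grading the correction by the error, rather than applying a fixed turn, is what lets the same code make both gentle and aggressive adjustments.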
Exercise 5: Laser Sensor
In this exercise, we are considering the use of a laser sensor. One of the big differences from the previous sensors is its wide 180 degree field of view. This sensor can be set up for collision avoidance as before: if an object is detected as being too close, the robot can turn to avoid it. The second part of this exercise asks how we might use a laser sensor to create a robot that follows another robot carrying a special reflective marker. Firstly, the collision avoidance logic will need to be implemented and given priority. Then the marker will need to be used to attract the robot, so that while avoiding collisions the robot moves towards the marker. My expectation is that the special marker will return a nearly perfect signal, meaning the laser sensor might detect it as a point at essentially zero distance. There will be an offset between the sensor and the base or edge of the robot, so the robot should treat ordinary objects approaching some threshold X as potential collisions and behave as necessary to avoid them, while seeking to move towards the marker, the near-zero point, as best it can.
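Here is a rough sketch of that avoid-then-seek decision over a 180 degree scan. The workbook doesn’t specify how the marker shows up in the data, so following my guess above, the marker is assumed to appear as an anomalous near-zero reading; the thresholds and the one-reading-per-degree layout are also assumptions.

```cpp
// Pick a heading from a 180 degree laser scan: steer away from the nearest
// ordinary obstacle if anything is too close, otherwise toward the marker.
const int BEAMS = 181;          // one reading per degree (assumed)
const float MARKER_CM = 2.0;    // readings below this count as the marker
const float TOO_CLOSE = 30.0;   // collision threshold in cm (assumed)

// Returns the beam index (0..180) to steer toward, or -1 if the marker
// is not visible and nothing is too close.
int chooseHeading(const float dist[BEAMS]) {
  int nearestReal = -1;
  int markerBeam = -1;
  for (int i = 0; i < BEAMS; i++) {
    if (dist[i] < MARKER_CM) {
      markerBeam = i;                      // marker-like return
    } else if (nearestReal < 0 || dist[i] < dist[nearestReal]) {
      nearestReal = i;                     // nearest ordinary obstacle
    }
  }
  if (nearestReal >= 0 && dist[nearestReal] < TOO_CLOSE) {
    return (nearestReal + 90) % BEAMS;     // avoidance wins: steer away
  }
  return markerBeam;                       // otherwise seek the marker
}
```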
Exercise 6: Color blob tracking
In this exercise we are asked to use a camera to allow our robot to track and follow a coloured blob. Unfortunately coloured paper will show up as a different colour under different lighting conditions, but as long as we can hold the lighting steady, this should be doable. We would need to parse the input from the camera and find a blob of pixels matching our desired colour; these pixels could then be compared across frames to look for movement, and the robot could move to keep them in frame. If the coloured blob moves off the side of the frame, the camera or robot could be rotated towards whichever edge the colour was last closest to in order to find the object again.
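A minimal sketch of the per-frame step might look like the following: scan for pixels near the target colour and compute the blob’s centroid, whose horizontal offset from frame centre tells the robot which way to turn. The frame size, pixel struct, and colour tolerance are all assumptions.

```cpp
// Find the horizontal centroid of a colour blob in one frame.
// Frame dimensions, RGB layout, and tolerance are assumptions.
struct RGB { unsigned char r, g, b; };

const int W = 160, H = 120;   // assumed frame size
const int TOL = 30;           // per-channel colour tolerance

int chanDiff(int a, int b) { return a > b ? a - b : b - a; }

bool matches(RGB p, RGB t) {
  return chanDiff(p.r, t.r) < TOL &&
         chanDiff(p.g, t.g) < TOL &&
         chanDiff(p.b, t.b) < TOL;
}

// Returns the centroid x of matching pixels, or -1 if the blob is lost.
// x < W/2 means the blob is to the left; x > W/2 means to the right.
int blobCentroidX(const RGB frame[H][W], RGB target) {
  long sumX = 0, count = 0;
  for (int y = 0; y < H; y++)
    for (int x = 0; x < W; x++)
      if (matches(frame[y][x], target)) { sumX += x; count++; }
  return count > 0 ? (int)(sumX / count) : -1;
}
```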
Exercise 7: Person Following
In this last exercise, a robot is set up to follow a person. To simplify the task, a coloured blob is used to mark the person, which means we are essentially rebuilding our laser sensor project from earlier. We will need a sonar or infra-red sensor for range finding and obstacle avoidance, as in the earlier exercises, and then the colour blob tracking logic so that the robot is attracted to the blob and moves towards it whenever collision avoidance allows. It seems like an interesting little exercise, and I am a little disappointed we don’t have the necessary parts in our kit to do something similar. I thought about using the photocell instead of a coloured blob, but there would be far too much interference in my environment for that to be a viable option.
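A rough sketch of that arbitration, reusing ideas from the sonar and blob-tracking sketches above; the stand-off distance, frame width, and motor speeds are assumptions.

```cpp
// Person-following arbitration: collision avoidance always wins, and blob
// seeking only runs when the path is clear. All constants are assumptions.
struct MotorCmd { int left; int right; };

const float SAFE_CM = 25.0;  // assumed stand-off distance from the person
const int FRAME_W = 160;     // assumed camera frame width

MotorCmd followStep(float rangeCm, int blobX) {
  if (rangeCm < SAFE_CM) return { 0, 0 };      // close enough: stop
  if (blobX < 0)         return { -60, 60 };   // blob lost: spin and search
  int error = blobX - FRAME_W / 2;             // > 0: blob is to the right
  return { 80 + error / 2, 80 - error / 2 };   // steer toward the blob
}
```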
Conclusion
And this concludes the content for Unit 4. I am a little sad at the lack of embedded videos though, so I wanted to include this link to how VR headsets and controllers are tracked in the lighthouse configuration. I was reminded of this technology by the sections discussing laser sensors and blob tracking.
That’s all for now,
Shawn Ritter
November 12th, 2021