Unit 11: Navigation and Group Robotics

This unit covers two primary topics: navigation and group dynamics. In Chapter 19 we will look at how robots might navigate a space. I expect that we will examine the problem of navigation in light of the control schemes we have discussed previously (reactive, deliberative, hybrid, and behavior-based) and the advantages of each particular approach. In the following chapter I expect that we will look into group communication and behavior. In the last unit we were introduced to the idea of emergent behavior such as flocking; I expect that this chapter will use similar principles to create dynamic systems capable of far more complex behaviors, such as group problem solving and the pursuit of group goals.

Reading – Chapter 19: Going Places

As I suspected, this chapter was indeed about navigation; however, it was less about the applicability of the control methods we have been discussing and more about the problems of localization, mapping, and searching. An emphasis was placed on the SLAM problem, in which robots must perform localization and mapping simultaneously as part of navigation. The chapter was interesting, and given that the text was written back in 2007, I found myself wondering how self-driving cars such as a Tesla approach the challenge of navigation now. I expect that there has been significant progress in many areas of robotics over the past decade, with self-driving cars being one of the most publicized robotic revolutions occurring at the moment. I think online shopping warehouses, such as Amazon’s fulfillment centers, would also be an interesting application of the navigation problem, as I have heard that their warehouses are full of robots whizzing around preparing deliveries.
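The localization half of SLAM can be sketched with a tiny Bayes filter. This is my own toy example, not from the book: I assume a five-cell circular corridor whose cells are labelled "door" or "wall", and the robot sharpens its belief about where it is by alternating sensor updates with (here, perfect) odometry shifts.

```python
# Toy 1-D Bayes-filter localization (my own sketch, not from the textbook).
# The robot lives on a five-cell circular corridor of "door" and "wall" cells.
world = ["door", "wall", "wall", "wall", "door"]

def sense(belief, measurement, p_hit=0.9, p_miss=0.1):
    """Weight each cell by how well it matches the sensor reading, then normalize."""
    weighted = [b * (p_hit if cell == measurement else p_miss)
                for b, cell in zip(belief, world)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, step):
    """Shift the belief to account for (perfect) odometry on the circular corridor."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

# Start totally uncertain, then see a door, move one cell, see a wall.
belief = [1 / len(world)] * len(world)
belief = sense(belief, "door")
belief = move(belief, 1)
belief = sense(belief, "wall")
print(max(range(len(belief)), key=lambda i: belief[i]))  # most likely cell → 1
```

Two sightings plus one odometry step are already enough to pin the robot to cell 1: it must have started at a door, and only the door at cell 0 has a wall one step ahead of it.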

Food For Thought 19-1

Ants are tremendously effective at dead reckoning, finding their way in the vast open spaces of the desert through odometry. They seem to be much better at it than people, based on some studies that have been done. Of course no studies have actually pitted a human against an ant in the desert, but comparable experiments find people lost and confused, and ants on nearly a straight path back home.

There isn’t much of a question here, but it is an interesting tidbit to think about. I was under the impression that ants navigate by following pheromone trails, which would be similar to a human leaving a trail behind them. However, this example suggests that even without a pheromone trail, ants are so good at localization that they can find the direct path back without any landmarks or trails to follow. Another interesting navigation example is that of migratory birds, which can accurately travel great distances to reach a specific destination. I have heard that part of this is due to a sensitivity to the Earth’s magnetic field, so in essence they have an internal compass sensor they can use for navigation.
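The ant strategy above, path integration, is simple enough to sketch in a few lines. This is my own illustration, not from the book: the assumption is that odometry gives the ant a stream of (heading, distance) pairs, and the vector home is just the negative of their running sum.

```python
import math

# A minimal path-integration (dead-reckoning) sketch: keep a running sum of
# every displacement, and the vector home is simply the negative of that sum.
def home_vector(steps):
    """steps: list of (heading_radians, distance) pairs from odometry."""
    dx = sum(d * math.cos(h) for h, d in steps)
    dy = sum(d * math.sin(h) for h, d in steps)
    return -dx, -dy  # direct line back to the nest

# Wander out: 3 units east, then 4 units north (a 3-4-5 triangle).
outbound = [(0.0, 3.0), (math.pi / 2, 4.0)]
hx, hy = home_vector(outbound)
print(round(math.hypot(hx, hy), 2))  # straight-line distance home → 5.0
```

The catch, of course, is that real odometry is noisy, and the error in the summed vector grows with every step, which is presumably why this works better for an ant on a short foraging trip than for a person wandering all day.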

Food For Thought 19-2

A great deal of research has gone into understanding how rats navigate, since they are very good at it (so are ants, as we noted above, but their brains are much harder to study being so small). Researchers have put rats into regular mazes, and even mazes filled with milk. One thing that is very interesting about rats that learn a maze well is that they may run into a wall (if a new wall is stuck in by the experimenter) even if they can see it, seeming to run based on their stored map rather than their sensed data. What type of robot control system would produce such behavior? Rats only do it once though.

This sounds like a deliberative control system. However, it is interesting to note that the rats only hit the wall once. It seems that after hitting the wall they switch from a deliberative approach to something more reactive, which to me sounds like a hybrid approach: the middle layer kicks in and reprioritizes after an issue is encountered.
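A toy version of that "hit the wall only once" behavior can be sketched as a hybrid controller. This is my own sketch under simplifying assumptions: the deliberative layer plans over the robot's stored map with breadth-first search and follows the plan open-loop, while a reactive bump check interrupts it, patches the map, and triggers a replan.

```python
from collections import deque

# Grid cells: 0 = free, 1 = wall. The internal map starts out missing the
# wall the experimenter added, just like the rat's stored map of the maze.
def plan(grid, start, goal):
    """Breadth-first search over the robot's *internal* map."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))

def run(internal_map, real_maze, start, goal):
    pos, bump_count = start, 0
    while pos != goal:
        path = plan(internal_map, pos, goal)      # deliberative layer
        for step in path[1:]:
            if real_maze[step[0]][step[1]] == 1:  # reactive layer: bump!
                bump_count += 1
                internal_map[step[0]][step[1]] = 1  # learn the wall, replan
                break
            pos = step                            # otherwise follow the plan
    return bump_count

internal = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
real =     [[0, 0, 0], [1, 0, 0], [0, 0, 0]]  # experimenter's new wall at (1, 0)
bumps = run(internal, real, (0, 0), (2, 0))
print(bumps)  # the new wall is hit exactly once → 1
```

The robot bumps the new wall exactly once, records it, and every later plan routes around it, which matches the rat behavior described in the prompt.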

Reading – Chapter 20: Go Team!

Controlling a group of robots presents additional challenges compared to controlling a single robot. Many of the same control schemes can be extended for use with groups, but considerable interference concerns arise when dealing with a group. Communication is not necessary for all groups, but it does enable more complex interactions at an additional processing cost. Groups of robots can be controlled in either a centralized or distributed fashion, and both approaches can be built on the four control schemes we discussed previously. In the last unit I included a video about the Amazon robots; it was an interesting look at the use of robots to improve efficiency and optimize warehouse operations. I think the Amazon robots use a centralized control scheme with extensive communications; however, I also think they form a hybrid system. The controller tells each robot what to grab and where to deliver it, but the robots have their own sensors that they use to avoid obstacles and report problems they encounter. It is a complex system with thousands of robots moving tons of goods and working with people to allow Amazon to deliver packages far faster than any other company.

Food For Thought 20-1

The ways in which robots in a group interfere is not very different from the way people in a group interfere with each other. Some roboticists are interested in studying how people resolve conflict and collaborate, to see if some of those methods can be used to make robots better team players as well. Unlike people, robots don’t tend to be self-interested or to show a vast variety of personal differences, unless they are specifically programmed that way.

Ultimately, I think the biggest limitation on robots right now is not the hardware so much as the programming and logic that controls them. Robots can only do what they have been programmed to do, so interference is a natural consequence of how we have implemented them. If we want robots to be able to work together, we will have to specifically implement that behaviour and functionality. In robotics, and many other fields, we continually look to nature to see what solutions have been found and used to successfully accomplish something; we then take inspiration from those solutions and adapt them to the challenge we are currently facing. So the idea of studying conflict resolution in people seems like a good place to start when designing a solution for robots.

Food For Thought 20-2

It is theoretically impossible to produce totally predictable group behavior in multi-robot systems. In fact, it is a lost cause to attempt to prove or guarantee precisely where each robot will be and what it will do after the system is running. Fortunately, that does not mean that multi-robot system behavior is random. Far from it; we can program our robots so that it is possible to characterize, even prove, what the behaviour of the group will be. The important fact of life for multi-robot systems is that we can know a great deal about what the group as a whole will do, but we cannot know exactly what each individual in such a group will do.

Reading this I was reminded of atomic physics. We cannot know the exact location of an electron in an atom, but we can refer to electron clouds, which show the probability that an electron is in a certain region. When we look at group robotic systems, we are examining a complex system, and because of the sheer number of possible interactions the variables become too great for us to predict exactly how the system will operate and where each part will be. It reminds me of the chaos theory scene in Jurassic Park, where Dr. Malcolm explains that even though a drop of water might start at the same place, its path and destination can vary wildly between samplings. However, even with such variability and a wide range of possibilities, we can still make approximations and have expectations about the general behavior of such systems.
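The point about knowing the group without knowing the individual can be demonstrated with a small simulation of my own (not from the book): many independent random walkers. Any one walker's final position is anyone's guess, but the group's average is reliably close to zero.

```python
import random

# Simulate many independent one-dimensional random walkers. An individual's
# final position is unpredictable, but the group average is tightly bounded.
def final_position(steps, rng):
    return sum(rng.choice((-1, 1)) for _ in range(steps))

rng = random.Random(42)  # seeded so the run is repeatable
walkers = [final_position(100, rng) for _ in range(2000)]

one = walkers[0]                    # an individual: could be anywhere in [-100, 100]
mean = sum(walkers) / len(walkers)  # the group: reliably near 0
print(one, round(mean, 2))
```

This is the electron-cloud idea in miniature: we cannot say where any one walker ends up, but we can characterize, even bound, what the population as a whole will do.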

Ian Malcolm explains chaos theory

Food For Thought 20-3

Imagine that the robots in the group can change their behavior over time, by adapting and learning. In chapter 21 we will study what and how robots can learn. Learning in a team of robots makes coordination even more complex, but it also makes the system more interesting and potentially more robust and useful.

The idea of robots or machines learning is an interesting one with some very exciting applications. I remember seeing videos in which a machine learns to play a game. In machine learning, techniques such as reinforcement learning or genetic algorithms can be used to let a machine learn and improve its performance. This approach has been used to successfully teach an “AI” how to drive a race track, and it would be interesting to see how such “evolution” might be used in a group of robots.
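To make the genetic-algorithm idea concrete, here is a minimal sketch of my own (not from the book) on a toy problem: evolving bitstrings toward all ones, where fitness is simply the count of ones. Each generation keeps the best performers and refills the population with mutated copies of them.

```python
import random

def mutate(genome, rng, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if rng.random() < rate else g for g in genome]

def evolve(length=20, pop_size=30, generations=60, rng=None):
    """Tiny genetic algorithm: fitness = number of ones in the genome."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)           # rank by fitness
        parents = pop[:pop_size // 4]             # selection: keep the top quarter
        pop = [mutate(rng.choice(parents), rng)   # next generation: mutated copies
               for _ in range(pop_size)]
    return max(pop, key=sum)

best = evolve()
print(sum(best))  # fitness of the best genome; typically at or near 20
```

The "generations" here are just repeated trials in software, which is exactly why a single robot could use the same loop: it does not need offspring, only repeated attempts with the best-scoring behaviors carried forward.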

AI learning how to drive a race in Trackmania.

Food For Thought 20-4

The ideas of positive and negative feedback are used in common English terminology to mean giving praise/reward and punishment. Actually this is a loose use of terms; we’ll talk about how positive and negative feedback relate to reward and punishment in Chapter 21, which deals with robot learning.

My intro psychology course covered many topics, including behaviorism (which we previously discussed) and the use of rewards or punishment to affect and shape behavior. The everyday use of positive and negative reinforcement differs slightly from the proper use of these terms. Specifically, we covered four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment. In proper usage, positive indicates that something is added and negative indicates that something is removed, while reinforcement indicates an increase in a behaviour and punishment a decrease.

This creates situations that run counter to everyday language. First is the concept of negative reinforcement. In daily use this may sound as though we are discouraging a behaviour, when in truth we are removing something unpleasant or undesired in order to reinforce, or encourage, a behaviour. On the other hand, positive punishment sounds like an oxymoron: how could a punishment be good? These terms do not indicate the desirability of an action, but rather what is actually occurring. For example, if you have ever touched a hot stove, you quickly learned not to touch hot stoves. This is positive punishment: you gained the unpleasant experience of receiving a burn, and this resulted in you being much more careful around stoves and hot items in the future.
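The four terms form a literal 2x2 grid, which can be written out as a tiny lookup of my own devising to make the logic explicit: one axis is whether a stimulus is added or removed, the other is whether the behavior increases or decreases.

```python
# The four operant-conditioning terms as a 2x2 lookup: "positive"/"negative"
# describes the stimulus change, "reinforcement"/"punishment" the behavior change.
def operant_term(stimulus, behavior):
    """stimulus: 'added' or 'removed'; behavior: 'increases' or 'decreases'."""
    kind = "positive" if stimulus == "added" else "negative"
    effect = "reinforcement" if behavior == "increases" else "punishment"
    return f"{kind} {effect}"

# Touching a hot stove: a burn is added, and stove-touching decreases.
print(operant_term("added", "decreases"))    # → positive punishment
# Taking a painkiller: a headache is removed, and pill-taking increases.
print(operant_term("removed", "increases"))  # → negative reinforcement
```

Writing it this way makes the "unexpected" cases unsurprising: the words never judge whether the outcome is good, they only name what was done and what happened.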


I am looking forward to the next chapter, where we will have a chance to learn more about how robots learn, whether in some of the ways I mentioned above, such as genetic algorithms and reinforcement learning, or by some other preferred method. From my understanding, genetic-algorithm learning happens over several generations facing a problem, with the best performers selected to form the basis of the next generation. Robots don’t have babies, so they are unlikely to learn in generations, but they might learn through repeated attempts to complete a task, with better performances used as the foundation for future ones.

Shawn Ritter

December 9th, 2021

Feature Image: https://www.cinema52.com/2013/10/02/ian-malcolm-sucks-at-chaos/
