Tensegrity’s Next Step: Detecting Direction

One of the major limitations of the current tensegrity setup is that its distance calculations use only a start position and an end position. Although that leads to some very useful results, the data is crude. For example, there is currently no way for our evolutionary algorithms to evolve turning behavior, because we can only measure changes in position, not changes in direction. To solve this problem, we will begin implementing direction detection for the tensegrity.

When we began brainstorming how to determine direction, two approaches immediately came to mind. The first was using a Wiimote to track points on the tensegrity, much like tracking fingers. However, this has the distinct problem that we cannot distinguish one point from another: the tracked points have no IDs associated with them, so checking whether a point has rotated would be quite difficult.

The second option was to color portions of the tensegrity with different colors and use OpenCV to track the sections, extrapolating direction from their positions. This is a problem that has been solved many times before, so there is plenty of good documentation and many good ideas online.

Because it seemed much easier to implement, we are going with OpenCV, tracking colored sections of the tensegrity to determine its direction. The first step is to detect the colored markers and differentiate them from the background. This is demonstrated in the video below.

Please note that the tracking was slowed down so that the image viewer could keep up. Without that additional overhead, tracking runs in near real time.

There is still some tweaking to be done, but the next step is to use this data to find a direction vector for the tensegrity. Once that is established, we will begin counting rotations during tests.
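One plausible way to get from marker centroids to a heading, and from headings to a rotation count, is sketched below. The centroid coordinates and the per-frame heading sequence are made up for illustration; the angle-unwrapping step is a standard technique, not necessarily what we will end up using.

```python
import math

# Hypothetical pixel centroids of two colored markers on the tensegrity,
# e.g. a "front" marker and a "back" marker.
front = (320.0, 180.0)
back = (250.0, 240.0)

# Direction vector from back to front. Image y grows downward, so we
# negate dy to get a conventional counterclockwise angle.
dx = front[0] - back[0]
dy = front[1] - back[1]
heading = math.degrees(math.atan2(-dy, dx))
print(round(heading, 1))  # 40.6

# Hypothetical headings over a test run; the jump from +170 to -170 is
# the same physical motion crossing the +/-180 degree wrap.
headings = [0.0, 90.0, 170.0, -170.0, -90.0, 0.0]

# Unwrap so consecutive samples never differ by more than 180 degrees;
# the total accumulated angle then gives the rotation count.
unwrapped = [headings[0]]
for h in headings[1:]:
    prev = unwrapped[-1]
    delta = (h - prev + 180.0) % 360.0 - 180.0
    unwrapped.append(prev + delta)

rotations = (unwrapped[-1] - unwrapped[0]) / 360.0
print(rotations)  # 1.0
```

Unwrapping matters because a naive difference of raw headings would count the wrap at 180 degrees as a sudden 340-degree spin in the opposite direction.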
