Digital Ecologies 2013, UCL MSc AAC, Stamatios Psarras. Blog for current progress: http://psarras.wordpress.com/
This term I tried to take an old Delta Robot and animate it. The objective is to make the robot behave in a way that seems "alive", but most importantly to simulate, to a certain degree, where people pay attention or what seems most important to them. This involves using computer vision to pick out important data from the real world, such as movement or people, which are then mapped onto the behaviour of the robot. Besides developing a series of behaviours driven by computer vision, effort also went into how the robot could better adapt to the world by gathering data and rethinking its behaviour.
Possible ideas: face recognition so that people can become its centre of interest, recognizing sounds, cute dog-like movements that return to the previous location, default scanning movements, some form of initialization.
A classic example; the important things to notice are the large scale and the interaction with multiple targets.
- At 7:48 and onward: an algorithm for establishing the best area of focus:
- Humans (HSB skin-tone colours)
- Toys (highly saturated colours); maybe use edges to distinguish objects with high complexity?
- Things that move around
- Habituation Gaussian (in order not to get stuck on one point; Steven Spielberg reference)
- Top-down procedures may change the weights of those criteria (see the sketch after this list).
- 3-dimensional emotional state: able to differentiate between different tones of voice and characterize their emotion.
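As a rough illustration of how such criteria might be combined, here is a minimal Processing sketch of a weighted saliency score with a Gaussian habituation penalty. The weight names and values, the sigma and the exact habituation formula are my own assumptions, not the algorithm from the video.

// Weighted saliency score for a candidate point of interest.
// The criteria weights and the habituation term are illustrative assumptions.
float wSkin = 1.0, wColour = 0.8, wMotion = 1.2;   // top-down weights, re-tunable

float saliency(float skinScore, float colourScore, float motionScore,
               PVector candidate, PVector currentFocus, float timeOnFocus) {
  float score = wSkin * skinScore + wColour * colourScore + wMotion * motionScore;
  // Habituation: the longer we stare at roughly the same spot, the less attractive
  // it gets, modelled here as a Gaussian penalty centred on the current focus.
  float d = PVector.dist(candidate, currentFocus);
  float sigma = 50;                                  // pixels, assumed
  float habituation = timeOnFocus * exp(-(d * d) / (2 * sigma * sigma));
  return score - habituation;
}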
A very good example of "machine learning" using computer vision and a genetic algorithm. This is a series of robots, fairly simple in their construction, but the different parameters make them very diverse. Their genes include: speed, rotation angle, blinking rate and light colour.
The scary speed of this robot shows the capabilities of the Delta Robot design in Fabrication.
Using two Delta Robots, this is a good example of how a Delta Robot might be used in art. More:
Darwin, C. (1877). A Biographical Sketch of an Infant. Mind, 2, 285-294.
Ideas for expression and how these expressions could develop. This assumes that the robot will have different stages of development, not unlike a child.
- More involuntary movements than voluntary (the repertoire of possible movements gradually grows)
- More Voluntary
- Imitating others and treating certain objects in a particular way.
- More Voluntary
- Able to focus on strong light sources (same) / colourful and gracefully moving items keep its attention, although it fails to actually follow them very well (same)
- Becoming better at following and recognizing colours
- Close objects attract it (it moves towards them and shines brighter)
- Detecting something too close triggers an "attack"
- Sudden objects appearing / loud noises cause a scare (detecting these kinds of objects triggers the "scare" animation: curl up, close/dim the light)
- Able to recognize the direction of a sound and focus on it, as well as follow light sources
- Starting to recognize certain people and feeling more comfortable with them (recognizing people and remembering them)
- Forming trust only with people it already trusts, or with people who look similar to those it trusts.
How this might be translated into an evolving code after gaining some experience with the limitations of the Delta Robot and the Kinect:
- Larger Steps of movement
- Following a Vector
- Adjusting Speed
- Brightest Point
- Following Moving Objects
- Following closest Person
- Following two closest People
- Reacting to a group
Things to consider for later:
- Using a GA to consider benefits/consumption
- Changing the Light Source
- Adding sound
- Adding Sound Direction Recognition
Project Development (Technical)
Here are some photos from the fabrication of this project. Most of the work had already been done, so I basically had to do some maintenance, assemble the parts of the robot and fix some of the legs.
Also, in an attempt to give people a centre to focus on when looking at the robot (although the robot is very impressive by itself!), I added a hanging lamp that was able to keep moving even after the Delta Robot stopped.
The Delta Robot is a fascinating device and, unlike a 3D printer, each motor does not control one axis. This specific model uses two motors for each leg, and in order to move the actuator (the head) from (0, 0, 0) to (1, 0, 0), essentially moving it along one axis only, you need all three legs to work together! In order to calculate how much you need to rotate the motors you have to use kinematics. These equations, derived through geometric calculations, are described in the book Arduino and Kinect Projects, and these guys also provide code for implementing them in Processing. Besides them, I would also like to thank the engineer Vahid Aminzadeh, who helped me understand the mechanics and the equations of the Delta Robot.
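For reference, here is a minimal Processing sketch of the widely used inverse-kinematics routine for a standard rotary delta (one motor per arm). It only illustrates the kind of geometry involved; this robot uses two motors per leg, so the actual equations from the book differ, and all dimensions below (f, e, rf, re) are assumed values.

// Inverse kinematics for a standard rotary delta - one motor per arm.
float f  = 200;  // base triangle side (mm), assumed
float e  = 60;   // end-effector triangle side (mm), assumed
float rf = 100;  // upper arm length (mm), assumed
float re = 300;  // lower arm (parallelogram) length (mm), assumed

// Returns the shoulder angle (radians) for the arm lying in the YZ plane,
// or Float.NaN if the point is unreachable. z0 is the effector height
// below the base and must be non-zero.
float calcAngleYZ(float x0, float y0, float z0) {
  float y1 = -0.5 * tan(radians(30)) * f;   // base joint position
  y0 -= 0.5 * tan(radians(30)) * e;         // shift centre to the effector joint
  // The elbow joint lies on the line z = a + b*y ...
  float a = (x0*x0 + y0*y0 + z0*z0 + rf*rf - re*re - y1*y1) / (2 * z0);
  float b = (y1 - y0) / z0;
  // ... intersected with the circle of radius rf around the base joint.
  float d = -(a + b * y1) * (a + b * y1) + rf * (b * b * rf + rf);
  if (d < 0) return Float.NaN;              // point outside the workspace
  float yj = (y1 - a * b - sqrt(d)) / (b * b + 1);
  float zj = a + b * yj;
  return atan(-zj / (y1 - yj)) + ((yj > y1) ? PI : 0);
}
// The other two arms are handled by rotating (x0, y0) by +/-120 degrees
// and calling the same function.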
The Delta Robot uses 6 Dynamixel motors that are controlled over serial communication from Processing. I was struggling with serial communication almost until the end of the project. The problem was that the code was based on Processing's serial communication examples, which were both limited in speed and buggy, for these motors at least. Nevertheless I managed to make it work with the kinematics, although it was obviously slower than I would have liked.
After the mid crits I was lucky enough to learn from George Forenza that there is actually a library, SimpleDynamixel, from the creator of SimpleOpenNI for the Kinect, that handles the serial communication for you. After that things became much more fluent and I was able to send signals much more quickly. This solved a series of problems I had before, problems that had snowballed into poor motor performance (serial communication delays the sketch, which lowers the frame-rate, which sends serial even more slowly). However, I had to rewrite my whole code to meet the library's requirements. In this part I also found two algorithms that helped me a lot with the performance of the sketch and with smoothing the robot's movements.
Algorithm for frame-rate
The first algorithm is fairly simple but very powerful. It fixes a problem that relates to serial communication in general. The main rule is: "if you don't need to send a serial signal, do not send it!" This is an easy mistake to make in Processing: you have a function that sends a message for a position, the position doesn't actually change, but you keep sending it and the sketch becomes slower than it needs to be. The algorithm that I used stores the current position of the motors or the head and updates to a new position only if there is a big enough change.
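A minimal sketch of that check, assuming the target lives in a PVector and an arbitrary threshold in millimetres (the value is illustrative, not the one from my code):

// Only push a new target to the motors when it has actually moved enough;
// lastSent caches the last position that went over the serial line.
PVector lastSent = new PVector(0, 0, 0);
float threshold = 2.0;   // minimum change (mm) worth a new serial message - assumed value

void sendIfNeeded(PVector target) {
  if (PVector.dist(target, lastSent) > threshold) {
    // here the actual SimpleDynamixel / serial call would go
    lastSent.set(target);
  }
}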
Algorithm for smoothing
The second algorithm is again simple but fairly important. Its job is to manipulate the point that the robot wants to reach by incrementing it a little bit each time towards the target position. This creates a smooth movement from one point to the next. Of course one might say that this makes the sketch computationally heavier and that it cancels out the work of the previous algorithm, but what I found was that you can balance them out and still get a very good frame-rate above 30 fps.
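A sketch of the incremental approach, assuming a simple easing factor (the starting point and the factor are illustrative):

// Instead of jumping to the target, move the commanded point a small fraction
// of the remaining distance each frame; smaller easing = slower, smoother motion.
PVector current = new PVector(0, 0, -250);   // point actually commanded to the robot, assumed start
float easing = 0.1;                          // fraction of the remaining distance per frame - assumed

void updateTowards(PVector target) {
  current.x += (target.x - current.x) * easing;
  current.y += (target.y - current.y) * easing;
  current.z += (target.z - current.z) * easing;
  // 'current' then goes through the send-if-changed check from the sketch above
}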
For the interaction I used the Kinect. However, the main computation was done using the RGB camera, so the whole sketch could potentially be changed to use only a plain camera (although you would lose some of the depth interactions). I don't know if there are better ways to achieve both speed and control, but I used the following two methods because I found the way they work interesting and because they use rawer data than the processed skeleton (which also seems to take some time to calibrate)! Related blog post: []
This class uses only the scene image from the Kinect and calculates the centre of mass by counting all the pixels that belong to the same person. The Kinect is nice because it gives you a pixel array containing values equal to the ID of each user. In that way you can also derive other values, like the maximum/minimum X/Y of a user, as well as what percentage of each person is inside the scene!
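Roughly, the centre-of-mass calculation looks like this, assuming a per-pixel array of user IDs like the one the Kinect scene image provides (the dimensions and the way the array is obtained depend on the library calls you use):

// Centre of mass of one user from a per-pixel user-ID map.
PVector centreOfMass(int[] userMap, int w, int h, int userId) {
  long sumX = 0, sumY = 0;
  int count = 0;
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      if (userMap[y * w + x] == userId) {   // pixel belongs to this user
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count == 0) return null;              // user not visible
  return new PVector((float) sumX / count, (float) sumY / count);
}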
This class is much more interesting since it digs deeper into computer vision! I made lots of different variations of this class because it has so much potential. The algorithm behind it uses frame differencing and counts the pixels that changed. Frame differencing removes the inactive pixels, essentially letting you remove the background, but not only that: you can also get the position where movement occurred, or how much movement there is in each place, and with a bit of work, which place has the largest movement. Using this I also created a grid that displayed the history of all past movements, slowly decaying over time. A next step was to gather the pixels that moved and, through thresholding, isolate colours that match skin, which I finally used for the robot to follow.
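A stripped-down sketch of the frame-differencing part, with an assumed brightness threshold; the decaying history grid and the skin thresholding would be built on top of this:

// Count and accumulate the pixels that changed between the previous and the
// current camera frame, and return the centre of the movement.
int[] previous;            // previous frame (starts empty, so the first frame counts as all-moved)
float diffThreshold = 40;  // minimum brightness change to count as movement - assumed

PVector movementCentre(PImage frame) {
  frame.loadPixels();
  if (previous == null) previous = new int[frame.pixels.length];
  long sumX = 0, sumY = 0;
  int moved = 0;
  for (int i = 0; i < frame.pixels.length; i++) {
    float diff = abs(brightness(frame.pixels[i]) - brightness(previous[i]));
    previous[i] = frame.pixels[i];
    if (diff > diffThreshold) {
      sumX += i % frame.width;
      sumY += i / frame.width;
      moved++;
    }
  }
  if (moved == 0) return null;
  return new PVector((float) sumX / moved, (float) sumY / moved);
}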
To make the robot more life-like it was important to analyse real-world data. What I found important, deriving from the ideas of Rodney Brooks, is where the robot will focus. This led me to create a series of different "targets" that the robot could look at, and another set of predefined "moves". Also, independently of these moves, the robot could change from an aggressive mode to a calmer one by adjusting, among other parameters, its speed. The targets derive from computer vision (see Kinect above) and include: tracking moving objects, tracking skin colour and tracking the centre of mass of people. The predetermined moves were made either from equations or by manually moving the Delta Robot and replaying the recording later.
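A rough sketch of how such a behaviour layer might be structured: one weight per target mode, and an aggression level that scales the movement speed. The mode names, the roulette-wheel selection and the numbers are my own assumptions, not the project code.

// Behaviour layer sketch: weighted target modes plus an aggression level.
String[] modes = { "MOVEMENT", "SKIN", "CENTRE_OF_MASS" };
float[] weights = { 1.0, 1.0, 1.0 };   // adjusted later from the logged data
float aggression = 0.5;                // 0 = calm, 1 = aggressive

int pickMode() {
  // roulette-wheel selection over the weights
  float total = 0;
  for (float w : weights) total += w;
  float r = random(total);
  for (int i = 0; i < weights.length; i++) {
    r -= weights[i];
    if (r <= 0) return i;
  }
  return weights.length - 1;
}

float currentEasing() {
  return map(aggression, 0, 1, 0.05, 0.3);   // calmer = smaller steps per frame
}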
Data were collected from the interaction of the robot with the crowd. After analyzing those data the robot would change its behaviour by adjusting some limits. You can see the procedure in more detail in the diagrams below.
The data were processed by calculating in which mode people stayed longer or travelled more distance, and the weights were adjusted accordingly. A similar calculation was made for aggression. Every time new data are received, the new instructions are saved to a file. Every now and then the robot reads the new instructions and re-adjusts its weights.
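A minimal sketch of that save/read cycle using Processing's saveStrings()/loadStrings(); the file name, the one-value-per-line format and the weights array (continuing the sketch above) are assumptions:

// Persist the adapted behaviour to a file and read it back later.
float[] weights = { 1.0, 1.0, 1.0 };   // one weight per target mode

void saveWeights() {
  String[] lines = new String[weights.length];
  for (int i = 0; i < weights.length; i++) lines[i] = str(weights[i]);
  saveStrings("weights.txt", lines);    // file name is an assumption
}

void loadWeights() {
  String[] lines = loadStrings("weights.txt");
  if (lines == null) return;            // nothing saved yet
  for (int i = 0; i < weights.length && i < lines.length; i++) {
    weights[i] = float(lines[i]);
  }
}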
Finally, I used an algorithm to adjust the mapping from the point of interest to the Delta Robot's position. This was done in order to better relate to the target position and always offer the full capabilities of the robot regardless of the target's position or range of movement. To achieve that, a number of recent points of interest were stored, and by calculating the limits of that array I constantly changed the mapping. This had a very noticeable effect. Say you move between -1 m and +1 m and the robot moves between -1/4 m and +1/4 m: the delta moves 1 cm for every 4 cm you move. But if you move between -2 m and +2 m, the Delta Robot moves 1 cm for every 8 cm. This makes the Delta Robot vary its behaviour in a more dynamic way.
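A sketch of the idea, assuming a fixed-length history of recent interest positions and an assumed reachable range for the robot (both values are illustrative):

// Dynamic mapping: stretch the range the point of interest has actually
// covered recently over the robot's full reach.
float[] history = new float[120];        // ~4 seconds at 30 fps; starts filled with zeros
int histIndex = 0;
float robotMin = -250, robotMax = 250;   // the delta's reach (mm), assumed

float mapToRobot(float interestX) {
  history[histIndex] = interestX;
  histIndex = (histIndex + 1) % history.length;
  float lo = history[0], hi = history[0];
  for (float v : history) { lo = min(lo, v); hi = max(hi, v); }
  if (hi - lo < 1) return (robotMin + robotMax) / 2;   // avoid a degenerate range
  return map(interestX, lo, hi, robotMin, robotMax);
}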
Some blog posts that I think add to my whole experience with the Delta Robot:
Puppeteering & head