Abstract of a translated foreign-language article on robot DSP control:

…hands together and then sets himself to removing the lid from the jar. He grasps the glass jar in one hand and the lid in the other and begins to unscrew the lid by turning it counterclockwise. While he is opening the jar, he pauses to wipe his brow and glances at the robot to see what it is doing. He then resumes opening the jar. The robot then attempts to imitate the action.

Although classical machine learning addresses some issues this situation raises, building a system that can learn from this type of interaction requires a focus on additional research questions. Which parts of the action to be imitated are important (such as turning the lid counterclockwise), and which are not (such as wiping your brow)? Once the action has been performed, how does the robot evaluate the performance? How can the robot abstract the knowledge gained from this experience and apply it to a similar situation? These questions require knowledge about not only the physical but also the social environment.

Constructing and testing human-intelligence theories

In our research, not only do we draw inspiration from biological models for our mechanical designs and software architectures, we also attempt to use our implementations of these models to test and validate the original hypotheses. Just as computer simulations of neural networks have been used to explore and refine models from neuroscience, we can use humanoid robots to investigate and validate models from cognitive science and behavioral science. We have used the following four examples of biological models in our research.

Development of reaching and grasping. Infants pass through a sequence of stages in learning hand-eye coordination. We have implemented a system for reaching to a visual target that follows this biological model. Unlike standard kinematic manipulation techniques, this system is completely self-trained and uses no fixed model of either the robot or the environment. Similar to the progression observed in infants, we first trained Cog to orient visually to an interesting object. The robot moved its eyes to acquire the target and then oriented its head and neck to face the target. We then trained the robot to reach for the target by interpolating between a set of postural primitives that mimic the responses of spinal neurons identified in frogs and rats. After a few hours of unsupervised training, the robot executed an effective reach to the visual target.

Several interesting outcomes resulted from this implementation. From a computer science perspective, the two-step training process was computationally simpler. Rather than attempting to map the visual-stimulus location's two dimensions to the nine DOF necessary to orient and reach for an object, the training focused on learning two simpler mappings that could be chained together to produce the desired behavior. Furthermore, Cog learned the second mapping (between eye position and the postural primitives) without supervision. This was possible because the mapping between stimulus location and eye position provided a reliable error signal. From a biological standpoint, this implementation uncovered a limitation in the postural primitive theory: although the model described how to interpolate between postures in the initial workspace, it provided no mechanism for extrapolating to postures outside the initial workspace.
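To make the chaining concrete, here is a minimal numerical sketch of the two-step strategy: a learned saccade map from stimulus location to eye position, chained to a learned mapping from eye position onto interpolation weights over a small set of postural primitives. The toy linear plant, the softmax interpolation, and the finite-difference training rule are all illustrative assumptions, not the actual Cog implementation.

```python
"""Sketch of the two-step reaching strategy (assumptions throughout)."""
import numpy as np

rng = np.random.default_rng(42)

# --- Mapping 1: saccade map (stimulus location -> eye position) ------
# Trained from self-generated (stimulus, eye) pairs; here a linear fit
# against a toy "plant" standing in for the real oculomotor system.
stimuli = rng.uniform(-1, 1, size=(500, 2))
plant = np.array([[0.9, 0.1], [0.0, 0.8]])           # unknown to learner
eyes = stimuli @ plant.T
W_saccade, *_ = np.linalg.lstsq(stimuli, eyes, rcond=None)

def eye_position(target):
    return W_saccade.T @ target

# --- Mapping 2: eye position -> postural-primitive weights -----------
N_JOINTS, N_PRIM = 9, 4
primitives = rng.uniform(-1, 1, size=(N_PRIM, N_JOINTS))  # fixed postures

def arm_posture(eye, W):
    """Convex combination of primitives via softmax-scored weights."""
    s = W @ eye
    w = np.exp(s - s.max())
    return (w / w.sum()) @ primitives

# Toy forward model standing in for the arm: hand position for a posture.
# On the robot this is unknown; vision supplies the error signal by
# foveating the hand and measuring the residual eye movement.
hand_map = rng.normal(size=(2, N_JOINTS)) * 0.3

def loss(W, target):
    hand = hand_map @ arm_posture(eye_position(target), W)
    return float(np.sum((hand - target) ** 2))

# Train mapping 2 with a crude finite-difference gradient step per sample.
W_reach = np.zeros((N_PRIM, 2))
eval_targets = rng.uniform(-1, 1, size=(50, 2))
before = np.mean([loss(W_reach, t) for t in eval_targets])
eps, lr = 1e-4, 0.2
for _ in range(2000):
    target = rng.uniform(-1, 1, size=2)
    base = loss(W_reach, target)
    grad = np.zeros_like(W_reach)
    for i in range(N_PRIM):
        for j in range(2):
            Wp = W_reach.copy()
            Wp[i, j] += eps
            grad[i, j] = (loss(Wp, target) - base) / eps
    W_reach -= lr * grad
after = np.mean([loss(W_reach, t) for t in eval_targets])
print(f"mean squared reach error: {before:.3f} -> {after:.3f}")
```

Note that in this sketch, targets outside the convex hull of the primitives' hand positions remain unreachable under pure interpolation, which mirrors the extrapolation limitation mentioned above.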
Rhythmic movements. Kiyotoshi Matsuoka describes a model of spinal cord neurons that produce rhythmic motion. We have implemented this model to generate repetitive arm motions, such as turning a crank. Two simulated neurons with mutually inhibitory connections drive each arm joint. The oscillators take proprioceptive input from the joint and continuously modulate the equilibrium point of that joint's virtual spring. The interaction of the oscillator dynamics at each joint and the arm's physical dynamics determines the overall arm motion.

This implementation validated Matsuoka's model on various real-world tasks and provided some engineering benefits. First, the oscillators require no kinematic model of the arm or dynamic model of the system; no a priori knowledge was required about either the arm or the environment. Second, the oscillators were able to tune to a wide range of tasks, such as turning a crank, playing with a Slinky, sawing a wood block, and swinging a pendulum, all without any change in the control-system configuration. Third, the system was extremely tolerant to perturbation: not only could we stop and start it with a very short transient period (usually less than one cycle), but we could also attach large masses to the arm, and the system would quickly compensate for the change. Finally, the input to the oscillators could come from other modalities; one example was using an auditory input that let the robot drum along with a human drummer.
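As a concrete illustration of this half-center arrangement, the sketch below integrates Matsuoka's two-neuron oscillator (mutual inhibition plus adaptation) coupled to a single joint under virtual-spring control, with the joint angle fed back into the neurons. The parameter values, the Euler integration, and the one-joint toy model are assumptions chosen for illustration, not the gains used on Cog's arms.

```python
"""Matsuoka half-center oscillator driving one virtual-spring joint."""
import numpy as np

# Oscillator parameters: a commonly used stable set (assumptions).
TAU_U, TAU_V = 0.25, 0.5   # membrane and adaptation time constants (s)
BETA, W_INH = 2.5, 2.5     # self-adaptation and mutual-inhibition gains
TONIC = 1.5                # constant excitation; scales amplitude
H_FB = 1.0                 # proprioceptive feedback gain

# Toy joint under virtual-spring control: theta'' = K*(eq - theta) - B*theta'
K_SPRING, B_DAMP, DT = 25.0, 3.0, 0.002

u = np.array([0.1, 0.0])   # membrane states; small asymmetry starts it
v = np.zeros(2)            # adaptation (fatigue) states
theta, theta_dot = 0.0, 0.0
angles = []

for _ in range(int(10.0 / DT)):              # simulate ten seconds
    y = np.maximum(u, 0.0)                   # half-wave rectified outputs
    # Proprioceptive feedback: each neuron is inhibited when the joint
    # moves in the direction it drives, entraining the oscillator to the
    # arm's physical dynamics.
    fb = -H_FB * np.maximum(np.array([theta, -theta]), 0.0)
    # Matsuoka dynamics: each neuron inhibits the other and self-adapts.
    du = (-u - BETA * v - W_INH * y[::-1] + TONIC + fb) / TAU_U
    dv = (-v + y) / TAU_V
    u += du * DT
    v += dv * DT
    # The oscillator output shifts the joint's equilibrium point.
    eq = y[0] - y[1]
    theta_ddot = K_SPRING * (eq - theta) - B_DAMP * theta_dot
    theta_dot += theta_ddot * DT
    theta += theta_dot * DT
    angles.append(theta)

print(f"joint angle range: {min(angles):+.2f} to {max(angles):+.2f} rad")
```

Because the oscillator only sees the joint angle and a tonic drive, nothing in this loop encodes the arm's kinematics or the task, which is the property the text credits for the controller transferring across crank turning, sawing, and drumming unchanged.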
Visual search and attention. We have implemented Jeremy Wolfe's model of human…