ENGINE. "We introduce a knowledge engine, which learns and shares knowledge representations, for robots to carry out a variety of tasks." https://lnkd.in/d9dr5SN
POTENT PROSPECT. "The fact that robots of the future will be capable of shared and distributed learning has profound implications and is scaring some, while exciting others." http://www.lifehacker.com.au/2015/12/how-do-robots-see-the-world/
MACHINE LEARNING. "In our lab, instead of manually 'programming' our robots, we take a machine learning approach where we use a variety of data and learning methods to train our robots. Our robots learn from watching (3D) images on the Internet, from observing people via RGB-D cameras, from observing users playing video games, and from humans giving feedback to the robot." http://pr.cs.cornell.edu/
AUTOSUFFICIENCY. "Will robots soon be able to teach themselves ... everything? There's a robot in California teaching itself to walk. Its name is Darwin, and like a toddler, it teeters back and forth in a UC Berkeley lab, trying and falling, and then trying again before getting it right. But it's not actually Darwin doing all this. It's a neural network designed to mimic the human brain." http://www.cnbc.com/2015/12/04/a-baby-step-on-way-to-robots-learning-every-human-thing.html
SILICO SALTATIONS. "Darwin's baby steps speak to what many researchers believe will be the greatest leap in robotics — a kind of general machine learning that allows robots to adapt to new situations rather than respond to narrow programming." http://www.cnbc.com/2015/12/04/a-baby-step-on-way-to-robots-learning-every-human-thing.html
TRIAL-AND-ERROR LEARNING. "Developed by Pieter Abbeel and his team at UC Berkeley's Robot Learning Lab, the neural network that allows Darwin to learn is not programmed to perform any specific functions, like walking or climbing stairs. The team is using what's called 'reinforcement learning' to try and make the robots adapt to situations as a human child would. Like a child's brain, reinforcement technology invokes the trial-and-error process." http://www.cnbc.com/2015/12/04/a-baby-step-on-way-to-robots-learning-every-human-thing.html
HUMAN-LIKE. "Using a deep learning technique similar to those that have been successfully advancing speech and image recognition, University of California, Berkeley researchers are developing robots that learn motor tasks by trial and error. They say their approach more closely approximates human learning." https://lnkd.in/e6W7yP4
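To make the trial-and-error idea in the two items above concrete, here is a minimal sketch of reinforcement learning on a toy "walking" task. The task, reward values, and tabular Q-learning method are illustrative assumptions only; the Berkeley work described above trains deep neural networks on real robot hardware, not a lookup table.

```python
# A minimal sketch of trial-and-error (reinforcement) learning, assuming a toy
# 1-D "walking" task: the agent starts at position 0 and must reach position 5.
# This tabular Q-learning loop is illustrative only; the Berkeley work uses
# deep neural networks over raw sensor input, not a lookup table.
import random

N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                      # step back or step forward
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise exploit the best action found so far.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        nxt = min(max(state + ACTIONS[a], 0), GOAL)
        reward = 1.0 if nxt == GOAL else -0.01   # each wasted step costs a little
        # Update the value estimate from this trial's outcome (the error signal).
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

print("Learned policy:", ["back" if q[0] > q[1] else "forward" for q in Q[:GOAL]])
```

After enough trials the agent settles on "forward" in every state, having discovered the behaviour purely from rewards rather than from explicit programming, which is the essence of the approach the articles describe.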
COST-EFFECTIVE. "If a robot could learn a task by itself by watching experts, the ability to deploy robots quickly into a task at a low cost becomes more realistic." https://lnkd.in/e3DKPE3
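As a rough illustration of learning from expert demonstrations, the sketch below fits a policy to logged expert state-action pairs (behavioural cloning). The data, dimensions, and linear model are hypothetical assumptions, not the method used in the linked article.

```python
# A minimal sketch of learning by watching experts (behavioural cloning),
# assuming we already have logged (state, action) pairs from an expert
# teleoperating a robot. All data and names here are hypothetical.
import numpy as np

# Hypothetical demonstrations: 2-D state (joint angle, target offset) -> torque.
expert_states = np.array([[0.0, 1.0], [0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
expert_actions = np.array([1.0, 0.8, 0.5, 0.1])

# Fit a linear policy action = state @ w by least squares; a real system would
# use a neural network and far more demonstration data.
w, *_ = np.linalg.lstsq(expert_states, expert_actions, rcond=None)

def policy(state):
    """Imitate the expert: predict the action the expert would have taken."""
    return float(np.asarray(state) @ w)

print(policy([0.3, 0.7]))   # novel state -> cloned behaviour
```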
METABOLIC PATHWAYS are highly complex and interconnected. But like all other elements of living things, they were not built from scratch: they evolved by bolting on molecules that had originally evolved for other purposes, creating a patchwork of inefficient, maladapted works in progress, much as the human foot was cobbled together, as if by a drunken committee, from the foot of tree-living ancestors.
BOLTED-ON PATCHWORK. Studies of exactly how metabolic pathways may have evolved are very instructive for this bolted-on, patchwork view of the biology of living creatures. The following study by Daniel J. Kliebenstein of the outcomes of gene duplication in Arabidopsis plants is a highly illustrative case study of these principles.