A Self-Learning, Brain-Inspired Computer - A New Era of Computing
2014.01.01
Abstract: Computing has entered a new era. Brain-like computers can not only automate tasks that once required painstaking programming; they are also fault-tolerant. How is this achieved? Chiefly because the new machines are based on biological designs: their algorithms are ever-changing, which lets the system adapt and work around problems.
The New York Times reports from Palo Alto, Calif., that computers have entered an age in which they can learn from their own mistakes. Machines built on the new chips can not only automate tasks that now require painstaking programming (for example, moving a robot's arm smoothly and efficiently), they can also sidestep and even tolerate errors, potentially making the "computer crash" obsolete.
The new computing approach, already in use at some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and cooperate with other neurons to interpret information. Because of this mechanism, a computer can absorb new information while carrying out a task and adjust its behavior accordingly.
In the coming years, the approach is likely to yield a new generation of artificial intelligence systems that perform functions humans do with ease: seeing, speaking, listening, navigating, manipulating and controlling. Such systems can already process huge volumes of data in tasks like facial and speech recognition, but for now, despite some fault tolerance, they remain at an elementary stage and still rely heavily on human programming.
Brain-like computers: a small step forward, but a giant leap in practice
Designers say that although a thinking or conscious computer remains a staple of science fiction and is still far off, this style of computing clears the way for robots that can safely walk and drive in the physical world. Larry Smarr, an astrophysicist involved in research on the new circuits, said the effort is shifting from engineering computing systems toward biological computing.
Conventional computers are limited by what they have been programmed to do. In a computer vision system, for example, the only objects "recognized" are those identifiable by the statistics-oriented algorithms programmed into it; an algorithm is like a recipe, a set of step-by-step instructions for performing a calculation. Last year, however, Google researchers got a machine-learning algorithm known as a neural network to carry out an identification task without any supervision: the network scanned a database of 10 million images and, in doing so, trained itself to recognize cats.
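To make the contrast concrete, here is a minimal, hypothetical sketch in Python: a hand-written "recipe" that recognizes only what it was explicitly told to, next to a single learned weight adjusted from labeled examples. The data, names and thresholds are invented for illustration and bear no relation to Google's actual system.

```python
# Conventional style: a hand-written "recipe" that only recognizes
# what it was explicitly programmed to recognize.
def recognize_by_rule(pixel_sum: float) -> str:
    return "cat" if pixel_sum > 0.5 else "not cat"

# Learning style: a single weight adjusted from examples, so the
# decision rule emerges from the data rather than from the programmer.
def train_weight(examples: list[tuple[float, int]], lr: float = 0.1) -> float:
    w = 0.0
    for x, label in examples:
        prediction = 1 if w * x > 0.5 else 0
        w += lr * (label - prediction) * x  # nudge weight toward correct answers
    return w

examples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]  # (feature, label) pairs
print("learned weight:", train_weight(examples))
```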
In June, Google said it had used those neural network techniques to develop a new search service that helps customers find specific photos more accurately. The approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain.
Kwabena Boahen, the computer scientist who leads Stanford's Brains in Silicon research program, said this is also one of the approach's limitations, since scientists are still far from fully understanding how the brain works. Even so, he said, the existing highfalutin theories give him enough inspiration to build things.
How brain-like computers differ from conventional ones
Until now, computer design has followed ideas laid out about 65 years ago by the mathematician John von Neumann. Microprocessors execute long strings of programmed 0s and 1s at lightning speed, and that information is generally stored separately in what is known as memory, whether in the processor itself, in adjacent storage chips or on higher-capacity disk drives. The data, say temperatures for a climate model or letters for word processing, are shuttled in and out of the processor's short-term memory while the machine carries out the programmed operation, and the result is then moved to main memory.
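That shuttling pattern can be pictured with a toy sketch; the dictionary standing in for main memory and a bare function standing in for the processor are, of course, illustrative simplifications.

```python
# A toy illustration of the von Neumann pattern described above:
# data lives in one place (memory), computation happens in another
# (the processor), and values shuttle back and forth between them.
memory = {"temps": [21.0, 22.5, 19.8], "result": None}  # "main memory"

def processor_add(a: float, b: float) -> float:
    # The processor only sees the operands it is handed, one step at a time.
    return a + b

accumulator = 0.0                       # processor's short-term state
for value in memory["temps"]:           # fetch from memory
    accumulator = processor_add(accumulator, value)  # compute
memory["result"] = accumulator / len(memory["temps"])  # store result back

print("mean temperature:", memory["result"])
```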
The new processors, by contrast, consist of interconnected electronic components that mimic biological synapses. Because they are built from large groups of neuron-like elements, they are called neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.
Because the connections between the data are "weighted" according to correlations the processor has already found, the processor is no longer executing "programmed" commands; it is learning. On the chip, incoming data alter those weights, causing them to change their values and to "spike"; that generates signals which travel to other components and, in reaction, change the neural network. In essence, the design works much the way new information alters human thoughts and actions.
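A rough sketch of that spike-and-adapt loop follows. All constants (threshold, leak, learning rate) are arbitrary illustrative assumptions, not values taken from any real neuromorphic chip.

```python
# Weighted inputs accumulate on a neuron; crossing a threshold produces
# a "spike" that both signals downstream components and nudges the weights.
def run_neuron(inputs: list[float], weights: list[float],
               threshold: float = 1.0, leak: float = 0.9, lr: float = 0.05):
    potential = 0.0
    for t, x in enumerate(inputs):
        # Leaky accumulation of the weighted input.
        potential = potential * leak + weights[t % len(weights)] * x
        if potential >= threshold:
            print(f"t={t}: spike")       # signal travels to other components
            # Hebbian-style update: strengthen the weight that caused the
            # spike, so the network itself changes in reaction to the data.
            weights[t % len(weights)] += lr * x
            potential = 0.0              # reset after firing
    return weights

weights = run_neuron([0.6, 0.7, 0.8, 0.9], [0.8, 0.8])
print("adapted weights:", weights)
```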
Dharmendra Modha, the IBM computer scientist who leads the company's cognitive computing research, put it this way: instead of bringing data to computation, as we do today, we can bring computation to the data. Sensors will become the computer, he added, and that opens up a new way to use computer chips, which can then be everywhere.
Brain-like computers have clear advantages, but will not replace today's machines
These brain-like computers, which are still based on silicon chips, will not replace today's computers but will augment their capabilities, at least for now. Many computer designers expect them to serve as coprocessors, working in tandem with other circuits embedded in smartphones and in the giant centralized computers that make up the cloud. Modern computers already contain a variety of coprocessors for specialized tasks, such as producing graphics on a cellphone or converting visual, audio and other data on a laptop.
One of the greatest advantages of the new approach is its ability to tolerate glitches. Traditional computers are precise but rigid: they cannot work around even a single failure and simply crash. The new computers are different. Because the design is biological, the algorithms are ever-changing, which lets the system continuously adapt, work around failures, and complete its task.
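One toy way to picture this graceful degradation: an ensemble whose surviving weights are renormalized when a unit dies, so the output degrades instead of the program crashing. This is an illustrative sketch only, not how any of the chips described here actually route around faults.

```python
# When a unit fails, rescale the surviving weights so the ensemble
# still produces a usable answer rather than crashing.
def ensemble_output(inputs, weights, alive):
    total_w = sum(w for w, ok in zip(weights, alive) if ok)
    if total_w == 0:
        raise RuntimeError("all units failed")
    # Surviving weights are renormalized to cover for the dead unit.
    return sum(w * x for w, x, ok in zip(weights, inputs, alive) if ok) / total_w

inputs = [1.0, 1.2, 0.9]
weights = [0.5, 0.3, 0.2]

print(ensemble_output(inputs, weights, [True, True, True]))   # all healthy
print(ensemble_output(inputs, weights, [True, False, True]))  # one unit dies
```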
Compared with the human brain, brain-like computers still have a long way to go
Today's computers are also remarkably energy inefficient, especially when compared with actual brains. IBM announced last year that it had built a supercomputer simulation of the brain encompassing roughly 10 billion neurons, more than 10 percent of a human brain, yet it ran about 1,500 times more slowly than a real brain. It also required several megawatts of power, whereas a working biological brain draws only about 20 watts.
Dr. Modha said that running this simulation, known as Compass, at the speed of a human brain would require an amount of electricity equivalent to powering both San Francisco and New York.
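A back-of-the-envelope check of those figures, under the loudly stated (and admittedly naive) assumption that power scales linearly with speed, and with "several megawatts" taken as 3 MW:

```python
# Naive linear-scaling estimate; the real scaling would likely be worse.
simulation_power_w = 3e6      # assumed "several megawatts" for Compass
slowdown = 1500               # simulation runs ~1,500x slower than a brain
brain_power_w = 20            # a biological brain runs on ~20 watts

at_brain_speed_w = simulation_power_w * slowdown
print(f"naive power at brain speed: {at_brain_speed_w / 1e9:.1f} GW")
print(f"efficiency gap vs. brain:  {at_brain_speed_w / brain_power_w:.0e}x")
```

The naive estimate lands in the gigawatt range, which is indeed the scale of electricity demand for two large cities.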
IBM and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said the first commercial version will come out in 2014, expected to be used largely for further development. Many universities are also focusing on the new style of computing: this fall the National Science Foundation financed the Center for Brains, Minds and Machines, a research center based at MIT together with Harvard and Cornell.
At Stanford, the largest class this fall was a graduate-level machine-learning course taught by the computer scientist Andrew Ng. "That reflects the zeitgeist," said Terry Sejnowski, a computational neuroscientist at the Salk Institute. "Everyone knows there is something big happening, and they're trying to find out what it is."
Brainlike Computers, Learning From Experience - NYTimes.com
[Photo: Kwabena Boahen holding a biologically inspired processor attached to a robotic arm in a laboratory at Stanford University. Credit: Erin Lubin/The New York Times]
By JOHN MARKOFF
Published: December 28, 2013
PALO ALTO, Calif. — Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.
The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.
“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.
Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.
“We have no clue,” he said. “I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.”
Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.
The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.
They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows in to the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”
The new computers, which are still based on silicon chips, will not replace today’s computers, but will augment them, at least for now. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the giant centralized computers that make up the cloud. Modern computers already consist of a variety of coprocessors that perform specialized tasks, like producing graphics on your cellphone and converting visual, audio and other data for your laptop.
One great advantage of the new approach is its ability to tolerate glitches. Traditional computers are precise, but they cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks.
Traditional computers are also remarkably energy inefficient, especially when compared to actual brains, which the new neurons are built to mimic.
I.B.M. announced last year that it had built a supercomputer simulation of the brain that encompassed roughly 10 billion neurons — more than 10 percent of a human brain. It ran about 1,500 times more slowly than an actual brain. Further, it required several megawatts of power, compared with just 20 watts of power used by the biological brain.
Running the program, known as Compass, which attempts to simulate a brain, at the speed of a human brain would require a flow of electricity in a conventional computer that is equivalent to what is needed to power both San Francisco and New York, Dr. Modha said.
I.B.M. and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said that it is coming out in 2014 with a commercial version, which is expected to be used largely for further development. Moreover, many universities are now focused on this new style of computing. This fall the National Science Foundation financed the Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.
The largest class on campus this fall at Stanford was a graduate level machine-learning course covering both statistical and biological approaches, taught by the computer scientist Andrew Ng. More than 760 students enrolled. “That reflects the zeitgeist,” said Terry Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered early biologically inspired algorithms. “Everyone knows there is something big happening, and they’re trying to find out what it is.”
Related Links:
Brain-Like Computers: Machines That Can Adapt and Learn From Experience
Brainlike Computers, Learning From Experience 