What Is the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning? (Part 2)
Date: 2016-08-01 | Source: Nvidia
This is the first in a multi-part series by long-time tech journalist Michael Copeland explaining the fundamentals of deep learning.
Machine Learning — An Approach to Achieve Artificial Intelligence
Machine learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.
Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.
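To make the contrast with hand-coded rules concrete, here is a minimal sketch of one of the approaches named above, decision tree learning, using scikit-learn. The features, labels, and numbers are invented toy data for illustration, not anything from the article.

```python
# A minimal sketch of "learning from data" with one of the approaches
# named above: decision tree learning. The toy features and labels are
# invented for illustration; any tabular dataset would work the same way.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [number_of_sides, redness (0-1), height_in_m]
X = [
    [8, 0.9, 2.1],   # stop-sign-like
    [8, 0.8, 2.0],   # stop-sign-like
    [4, 0.1, 2.2],   # speed-limit-like
    [4, 0.2, 2.3],   # speed-limit-like
]
y = ["stop", "stop", "speed_limit", "speed_limit"]

# Instead of hand-coding "if it has eight sides and is red ...",
# the algorithm infers those rules from the labeled examples.
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[8, 0.85, 2.0]]))  # -> ['stop']
```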
As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters “S-T-O-P.” From all those hand-coded classifiers they would develop algorithms to make sense of the image and “learn” to determine whether it was a stop sign.
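For a sense of what one of those hand-coded classifiers looks like, below is a sketch of a hand-written Sobel-style edge detection filter in NumPy. This is generic textbook code written under our own assumptions, not the actual pipeline the article describes; the point is that every number in the kernel is chosen by the programmer rather than learned.

```python
# A hand-coded edge detector of the kind described above: a fixed
# Sobel kernel the programmer writes by hand, as opposed to weights
# a network would learn from data.
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Return an edge-strength map for a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
    ky = kx.T                                  # vertical gradient kernel
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[i, j] = np.hypot(gx, gy)       # gradient magnitude
    return out

# Toy image: a bright square on a dark background; its edges light up.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
print(sobel_edges(img).round(1))
```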
Good, but not mind-bendingly great. Especially on a foggy day when the sign isn’t perfectly visible, or a tree obscures part of it. There’s a reason computer vision and image detection didn’t come close to rivaling humans until very recently: it was too brittle and too prone to error.
Time, and the right learning algorithms, made all the difference.
Deep Learning — A Technique for Implementing Machine Learning
Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. Neural networks are inspired by our understanding of the biology of our brains – all those interconnections between the neurons. But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.
You might, for example, take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. In the first layer, individual neurons examine the data, then pass it to a second layer. The second layer of neurons does its task, and so on, until the final layer is reached and the final output is produced.
Each neuron assigns a weighting to its input, scoring how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and “examined” by the neurons: its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network’s task is to conclude whether this is a stop sign or not. It comes up with a “probability vector,” really a highly educated guess, based on the weightings. In our example the system might be 86% confident the image is a stop sign, 7% confident it’s a speed limit sign, 5% confident it’s a kite stuck in a tree, and so on. The network architecture then tells the neural network whether it is right or not.
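To make the layers, weightings, and “probability vector” concrete, the following sketch runs one forward pass through a tiny network in NumPy. The layer sizes, the random (untrained) weights, and the three class names are all invented for illustration; a real network would learn these weights from data, as described below.

```python
# A minimal sketch of the forward pass described above: image tiles go
# into the first layer, each layer applies its weightings and passes the
# result on, and the final layer emits a "probability vector". All sizes
# and weights here are invented; real weights are learned, not random.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                      # a flattened 8x8 tile of pixel values

# Two hidden layers, then an output layer over three hypothetical classes.
W1, b1 = rng.standard_normal((32, 64)), np.zeros(32)
W2, b2 = rng.standard_normal((16, 32)), np.zeros(16)
W3, b3 = rng.standard_normal((3, 16)),  np.zeros(3)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())             # subtract max for numerical stability
    return e / e.sum()

h1 = relu(W1 @ x + b1)                  # first layer does its task...
h2 = relu(W2 @ h1 + b2)                 # ...and passes results to the second
probs = softmax(W3 @ h2 + b3)           # final output: a probability vector

for name, p in zip(["stop sign", "speed limit", "kite"], probs):
    print(f"{name}: {p:.0%}")           # e.g. "stop sign: 86%" once trained
```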
Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of “intelligence.” The problem was that even the most basic neural networks were very computationally intensive; it just wasn’t a practical approach. Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn’t until GPUs were deployed in the effort that the promise was realized.
If we go back again to our stop sign example, chances are very good that as the network is getting tuned or “trained” it’s coming up with wrong answers — a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time — fog or no fog, sun or rain. It’s at that point that the neural network has taught itself what a stop sign looks like; or your mother’s face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.
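The article does not name the tuning algorithm, but in practice that repeated adjustment is usually gradient descent: show an example, measure the error, nudge every weighting to shrink it, and repeat many thousands of times. Below is a minimal sketch of that loop for a one-layer classifier on invented data; the data, labels, and learning rate are all hypothetical.

```python
# A sketch of the training loop described above: show examples, compare
# the model's guess to the right answer, and nudge the weightings to
# reduce the error. This is standard logistic-regression gradient
# descent on invented data, not code from the article.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.standard_normal((n, d))        # 200 invented training examples
true_w = rng.standard_normal(d)
y = (X @ true_w > 0).astype(float)     # invented "stop sign / not" labels

w = np.zeros(d)                        # start with untuned weightings
lr = 0.1                               # learning rate: size of each nudge
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w))) # current guesses (probabilities)
    grad = X.T @ (p - y) / n           # how wrong, and in which direction
    w -= lr * grad                     # nudge weights toward right answers

accuracy = ((p > 0.5) == y).mean()
print(f"after training: {accuracy:.0%} correct")  # close to 100%
```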
Ng’s breakthrough was to take these neural networks, and essentially make them huge, increase the layers and the neurons, and then run massive amounts of data through the system to train it. In Ng’s case it was images from 10 million YouTube videos. Ng put the “deep” in deep learning, which describes all the layers in these neural networks.
Today, image recognition by machines trained via deep learning is in some scenarios better than that of humans, and its uses range from identifying cats to identifying indicators for cancer in blood and tumors in MRI scans. Google’s AlphaGo learned the game and trained for its Go match by playing against itself over and over and over, tuning its neural network as it went.
Thanks to Deep Learning, AI Has a Bright Future
Deep learning has enabled many practical applications of machine learning and, by extension, the overall field of AI. Deep learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations are all here today or on the horizon. AI is the present and the future. With deep learning’s help, AI may even get to that science fiction state we’ve so long imagined.