{"id":6536,"date":"2018-06-08T23:05:46","date_gmt":"2018-06-08T23:05:46","guid":{"rendered":"http:\/\/dounyazad.com\/sciences\/?p=6536"},"modified":"2020-07-31T14:28:32","modified_gmt":"2020-07-31T14:28:32","slug":"whats-the-difference-between-artificial-intelligence-machine-learning-and-deep-learning","status":"publish","type":"post","link":"https:\/\/algerienetwork.com\/sciences-tec\/whats-the-difference-between-artificial-intelligence-machine-learning-and-deep-learning\/","title":{"rendered":"What\u2019s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?"},"content":{"rendered":"<p><span class=\"meta-sep\">by<\/span> <span class=\"author vcard\"><a class=\"url fn n\" title=\"View all posts by Michael Copeland\" href=\"https:\/\/blogs.nvidia.com\/blog\/author\/michaelcopeland\/\">Micha<\/a><\/span><span class=\"author vcard\"><a class=\"url fn n\" title=\"View all posts by Michael Copeland\" href=\"https:\/\/blogs.nvidia.com\/blog\/author\/michaelcopeland\/\">el Copeland&nbsp;<\/a><\/span><\/p>\n<p>Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All those statements are true, it just depends on what flavor of AI you are referring to.<\/p>\n<p>For example, when Google DeepMind\u2019s AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and <a href=\"http:\/\/www.nvidia.com\/object\/deep-learning.html\">deep learning<\/a> were used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-Dol. 
But they are not the same things.<\/p>\n<p>The easiest way to think of their relationship is to visualize them as concentric circles with AI \u2014 the idea that came first \u2014 the largest, then machine learning \u2014 which blossomed later \u2014 and finally deep learning \u2014 which is driving today\u2019s AI explosion \u2014 fitting inside both.<\/p>\n<h2><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-34236 size-full\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg.png\" sizes=\"auto, (max-width: 1080px) 100vw, 1080px\" srcset=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg.png 1080w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg-300x191.png 300w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg-768x489.png 768w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg-672x427.png 672w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg-676x430.png 676w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/Deep_Learning_Icons_R5_PNG.jpg-328x209.png 328w\" alt=\"What's the difference between Artificial Intelligence (AI), Machine Learning, and Deep Learning? \" width=\"1080\" height=\"687\"><\/a><\/h2>\n<h2><b>From Bust to Boom<\/b><\/h2>\n<p>AI has been part of our imaginations and simmering in research labs since a handful of computer scientists rallied around the term at the Dartmouth Conferences in 1956 and birthed the field of AI. In the decades since, AI has alternately been heralded as the key to our civilization\u2019s brightest future, and tossed on technology\u2019s trash heap as a harebrained notion of over-reaching propellerheads. 
Frankly, until 2012, it was a bit of both.<\/p>\n<p>Over the past few years <a href=\"https:\/\/blogs.nvidia.com\/blog\/2016\/01\/12\/accelerating-ai-artificial-intelligence-gpus\/\">AI has exploded<\/a>, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) \u2013 images, text, transactions, mapping data, you name it.<\/p>\n<p>Let\u2019s walk through how computer scientists have moved from something of a bust \u2014 until 2012 \u2014 to a boom that has unleashed applications used by hundreds of millions of people every day.<\/p>\n<h2><b>Artificial Intelligence <\/b>\u2014 <b>Human Intelligence Exhibited by Machines<\/b><\/h2>\n<figure id=\"attachment_34224\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34224\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1.jpg\" sizes=\"auto, (max-width: 1080px) 100vw, 1080px\" srcset=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1.jpg 1024w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1-300x199.jpg 300w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1-768x509.jpg 768w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1-672x445.jpg 672w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1-676x448.jpg 676w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/checkers_philip-taylor-1-328x217.jpg 328w\" alt=\"King me: computer programs that played checkers were among the earliest examples of artificial 
intelligence (AI), stirring an early wave of excitement&nbsp;in the 1950s.\" width=\"1080\" height=\"715\"><\/a><figcaption class=\"wp-caption-text\">King me: computer programs that played checkers were among the earliest examples of artificial intelligence, stirring an early wave of excitement in the 1950s.<\/figcaption><\/figure>\n<p>Back at that summer of \u201956 conference, the dream of those AI pioneers was to construct complex machines \u2014 enabled by emerging computers \u2014 that possessed the same characteristics as human intelligence. This is the concept we think of as \u201cGeneral AI\u201d \u2014 fabulous machines that have all our senses (maybe even more), all our reason, and think just like we do. You\u2019ve seen these machines endlessly in movies as friend \u2014 C-3PO \u2014 and foe \u2014 The Terminator. General AI machines have remained in the movies and science fiction novels for good reason; we can\u2019t pull it off, at least not yet.<\/p>\n<p>What we can do falls into the concept of \u201cNarrow AI\u201d: technologies that are able to perform specific tasks as well as, or better than, we humans can. Examples of Narrow AI are things such as image classification on a service like Pinterest and face recognition on Facebook.<\/p>\n<p>These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? 
That gets us to the next circle, Machine Learning.<\/p>\n<h2><b>Machine Learning <\/b>\u2014 <b>An Approach to Achieve Artificial Intelligence<\/b><\/h2>\n<figure id=\"attachment_34225\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34225\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit.jpg\" sizes=\"auto, (max-width: 1080px) 100vw, 1080px\" srcset=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit.jpg 1180w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit-300x175.jpg 300w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit-768x447.jpg 768w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit-672x391.jpg 672w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit-676x394.jpg 676w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/spam_fit-328x191.jpg 328w\" alt=\"Spam free diet: machine learning, a subset of AI (Artificial Intelligence), helps keep your inbox (relatively) free of spam.\" width=\"1080\" height=\"629\"><\/a><figcaption class=\"wp-caption-text\">Spam free diet: machine learning helps keep your inbox (relatively) free of spam.<\/figcaption><\/figure>\n<p><a href=\"http:\/\/www.nvidia.com\/object\/machine-learning.html\">Machine Learning<\/a> at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. 
So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is \u201ctrained\u201d using large amounts of data and algorithms that give it the ability to learn how to perform the task.<\/p>\n<p>Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.<\/p>\n<p><em><strong>To learn more about deep learning, listen to our Deep Learning 101 podcast with NVIDIA\u2019s own Will Ramey.<\/strong><\/em><\/p>\n<p><iframe loading=\"lazy\" src=\"https:\/\/w.soundcloud.com\/player\/?url=https%3A\/\/api.soundcloud.com\/tracks\/295616968&amp;color=%23ff5500&amp;inverse=false&amp;auto_play=false&amp;show_user=true\" scrolling=\"no\" data-mce-fragment=\"1\" width=\"100%\" height=\"20\" frameborder=\"no\"><\/iframe><\/p>\n<p>As it turned out, one of the very best application areas for machine learning for many years was <a href=\"http:\/\/www.nvidia.com\/object\/imaging_comp_vision.html\">computer vision<\/a>, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters \u201cS-T-O-P.\u201d From all those hand-coded classifiers they would develop algorithms to make sense of the image and \u201clearn\u201d to determine whether it was a stop sign.<\/p>\n<p>Good, but not mind-bendingly great. Especially on a foggy day when the sign isn\u2019t perfectly visible, or a tree obscures part of it. 
There\u2019s a reason computer vision and image detection didn\u2019t come close to rivaling humans until very recently: it was too brittle and too prone to error.<\/p>\n<p>Time, and the right learning algorithms, made all the difference.<\/p>\n<h2><b>Deep Learning <\/b>\u2014 <b>A Technique for Implementing Machine Learning<\/b><\/h2>\n<figure id=\"attachment_34226\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34226\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1.jpg\" sizes=\"auto, (max-width: 1080px) 100vw, 1080px\" srcset=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1.jpg 960w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1-300x200.jpg 300w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1-768x512.jpg 768w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1-672x448.jpg 672w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1-676x451.jpg 676w, https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2016\/07\/orange_cat-1-328x219.jpg 328w\" alt=\"Herding cats: Picking images of cats out of YouTube videos was one of the first breakthrough demonstrations of deep learning, a subset of AI and machine learning. \" width=\"1080\" height=\"720\"><\/a><figcaption class=\"wp-caption-text\">Herding cats: Picking images of cats out of YouTube videos was one of the first breakthrough demonstrations of deep learning.<\/figcaption><\/figure>\n<p>Another algorithmic approach from the early machine-learning crowd, Artificial Neural Networks, came and mostly went over the decades. Neural Networks are inspired by our understanding of the biology of our brains \u2013 all those interconnections between the neurons. 
But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.<\/p>\n<p>You might, for example, take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. In the first layer, individual neurons do their work, then pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer is reached and the final output is produced.<\/p>\n<p>Each neuron assigns a weighting to its input \u2014 how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and \u201cexamined\u201d by the neurons \u2014 its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network\u2019s task is to conclude whether this is a stop sign or not. It comes up with a \u201cprobability vector,\u201d really a highly educated guess, based on the weightings. In our example the system might be 86% confident the image is a stop sign, 7% confident it\u2019s a speed limit sign, 5% confident it\u2019s a kite stuck in a tree, and so on \u2014 and the network architecture then tells the neural network whether it is right or not.<\/p>\n<p>Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of \u201cintelligence.\u201d The problem was that even the most basic neural networks were very computationally intensive; it just wasn\u2019t a practical approach. 
Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn\u2019t until <a href=\"http:\/\/www.nvidia.com\/object\/what-is-gpu-computing.html\">GPUs <\/a>were deployed in the effort that the promise was realized.<\/p>\n<p>If we go back again to our stop sign example, chances are very good that as the network is getting tuned or \u201ctrained\u201d it\u2019s coming up with wrong answers \u2014 a lot. What it needs is more training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time \u2014 fog or no fog, sun or rain. It\u2019s at that point that the neural network has taught itself what a stop sign looks like; or your mother\u2019s face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.<\/p>\n<p>Ng\u2019s breakthrough was to take these neural networks and essentially make them huge: increase the layers and the neurons, and then run massive amounts of data through the system to train it. In Ng\u2019s case it was images from 10 million YouTube videos. Ng put the \u201cdeep\u201d in deep learning, which describes all the layers in these neural networks.<\/p>\n<p>Today, image recognition by machines trained via deep learning is in some scenarios better than that of humans, and that ranges from identifying cats to identifying indicators for cancer in blood and tumors in MRI scans. 
Google\u2019s AlphaGo learned the game, and trained for its Go match \u2014 it tuned its neural network \u2014 by playing against itself over and over and over.<\/p>\n<h2><b>Thanks to Deep Learning, AI Has a Bright Future<\/b><\/h2>\n<p><a href=\"https:\/\/developer.nvidia.com\/deep-learning\">Deep Learning<\/a> has enabled many practical applications of Machine Learning and, by extension, the overall field of AI. Deep Learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. <a href=\"http:\/\/www.nvidia.com\/object\/drive-px.html\">Driverless cars<\/a>, better preventive healthcare, even better movie recommendations are all here today or on the horizon. AI is the present and the future. With Deep Learning\u2019s help, AI may even get to that science fiction state we\u2019ve so long imagined. If you have a C-3PO, I\u2019ll take it. You can keep your Terminator.<\/p>\n<div class=\"fluid-width-video-wrapper\"><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/TLw_I1ghvLM\" name=\"fitvid0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\" width=\"300\" height=\"150\" frameborder=\"0\"><\/iframe><\/div>\n","protected":false},"excerpt":{"rendered":"<p>by Michael Copeland&nbsp; Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All those statements are true; it just depends on what flavor of AI you are referring to. 
For example, when Google DeepMind\u2019s AlphaGo program defeated South Korean Master Lee Se-dol in the board [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":6822,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"tdm_status":"","tdm_grid_status":"","footnotes":""},"categories":[39,17,25],"tags":[],"class_list":{"0":"post-6536","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ia","8":"category-informatique","9":"category-technologie"},"_links":{"self":[{"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/posts\/6536","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/comments?post=6536"}],"version-history":[{"count":1,"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/posts\/6536\/revisions"}],"predecessor-version":[{"id":6823,"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/posts\/6536\/revisions\/6823"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/media\/6822"}],"wp:attachment":[{"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/media?parent=6536"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/categories?post=6536"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/algerienetwork.com\/sciences-tec\/wp-json\/wp\/v2\/tags?post=6536"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}