Artificial Intelligence: The Big List of Things You Should Know

“Artificial intelligence is one of the hottest subjects these days, and recent advances in technology make AI even closer to reality than most of us can imagine. Robots are no longer limited to traditional blue-collar jobs, fully automated assembly lines and high-frequency trading algorithms. White-collar jobs are ripe for automation, and robots are replacing bank tellers, mortgage brokers and loan officers in the financial industry.” – TechCrunch

Yup… That’s #4IR (Fourth Industrial Revolution)

And I can speak from personal experience on the matter: my background is in financial planning, private banking, and portfolio strategy. I saw the Robos coming, saw my skills quickly losing relevance, and left the industry as soon as I had figured out where I could have a future impact.

I’m comfortable with disruption, though. It creates opportunity. As always, my focus is on investing, so in this case: opportunity = money. I’ve invested in companies such as Google, IBM, Microsoft, and more. They all have unique areas of focus in the world of AI and there really isn’t any one-size-fits-all approach to AI investing. Find great companies, hold on for the ride, and buy more when they drop 25% (which they will, at some point: #GreatRecession).

Below is a list of things you should know about AI, Machine Learning, Deep Learning, Neural Networks, Robot Ethics, and more. Enjoy!

  1. Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industry-wide machine-learning delirium.
  2. The idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE).
  3. With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers.
  4. The field of AI research was founded at a conference at Dartmouth College in 1956.
  5. Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent.
  6. John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert Simon were the first leaders of AI research.
  7. The expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986 and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.
  8. Attempts to create artificial intelligence have experienced many setbacks, including the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973, the second AI winter 1987–1993 and the collapse of the Lisp machine market in 1987.
  9. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.
  10. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.
  11. The AI Effect: “AI is whatever hasn’t been done yet.”
  12. Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy was deeply misguided, suggesting that AI research devalues human life.
  13. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.
  14. ArkInvest predicts that 76 million U.S. jobs will disappear in the next two decades due to AI & Robotics — almost 10 times the number of jobs created during the Obama years.
  15. High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions and targeting online advertisements.
  16. While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data – over and over, faster and faster – is a recent development.
  17. Visual Deep Learning technology will save lives by precisely identifying suspected terrorists and hidden ordnance in remotely recorded video feeds.
  18. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu.
  19. What’s required to create good machine learning systems? Data preparation capabilities; algorithms, both basic and advanced; automation and iterative processes; scalability; and ensemble modeling.
  20. Banks and other businesses in the financial industry use machine learning technology for two key purposes: to identify important insights in data, and to prevent fraud.
  21. Machine learning is a fast-growing trend in the health care industry, thanks to the advent of wearable devices and sensors that can use data to assess a patient’s health in real time.
  22. A new mobile app called FaceApp uses neural networks to edit your selfie via photo-realistic filters – letting you add a smile, swap genders, and take years off your age.
  23. A company in Japan has made the first big steps toward a robot companion—one who can understand and feel emotions. Introduced in 2014, “Pepper” the companion robot went on sale in 2015, with all 1,000 initial units selling out within a minute. The robot was programmed to read human emotions, develop its own emotions, and help its human friends stay happy.
  24. Deep learning combines advances in computing power and special types of neural networks to learn complicated patterns in large amounts of data.
  25. The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.
  26. In his book Moral Machines, Wendell Wallach introduced the concept of artificial moral agents (AMAs): the question of how machines should behave ethically towards both humans and other AI agents.
  27. As of 2016, there are over 30 companies utilizing AI in the creation of driverless cars.
  28. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology.
  29. Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu have touched off an arms race for A.I. talent, particularly within universities.
  30. Around the 1940s, Alan Turing’s theory of computation suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction.
  31. The hard problem of consciousness: If an AI system replicates all key aspects of human intelligence, will that system also be sentient – will it have a mind which has conscious experiences?
  32. The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.
  33. In a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
  34. According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from a “sporadic usage” in 2012 to more than 2,700 projects.
  35. The Bank of England estimates that 48% of human workers will eventually be replaced by robotics and software automation.
  36. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
  37. Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects.
  38. There is no established unifying theory or paradigm that guides AI research.
  39. Using AI, medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, to diagnose cancer earlier and less invasively, and to accelerate the search for life-saving pharmaceuticals.
  40. AI could double annual economic growth rates by 2035 by changing the nature of work and creating a new relationship between man and machine. —Accenture
  41. Invented by Ian Goodfellow, now a research scientist at OpenAI, generative adversarial networks, or GANs, are systems consisting of one network that generates new data after learning from a training set, and another that tries to discriminate between real and fake data.
  42. Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science.
  43. Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science.
  44. In his book Superintelligence, Nick Bostrom provides an argument that artificial intelligence will pose a threat to mankind. He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down.
  45. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.
  46. The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.
  47. “The singularity” is the hypothesized moment when superintelligent machines start improving themselves without human involvement.
  48. AI researchers have developed several specialized languages for AI research, including Lisp and Prolog.
  49. Computers can now teach themselves. “You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia.
  50. The study of non-learning artificial neural networks began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch.
  51. The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning.
  52. Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.
  53. The main categories of neural networks are feedforward neural networks and recurrent neural networks.
  54. Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, argues that specialized artificial intelligence applications, robotics and other forms of automation will ultimately result in significant unemployment as machines begin to match and exceed the capability of workers to perform most routine and repetitive jobs.
  55. Among the most popular feedforward neural networks are perceptrons, multi-layer perceptrons and radial basis networks.
  56. Peter Lee, cohead of Microsoft Research: “Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”
  57. Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.
  58. In the 1980s, artist Hajime Sorayama’s Sexy Robots series was painted and published in Japan, depicting the organic human form with lifelike muscular metallic skins.
  59. Today, neural networks are often trained by the backpropagation algorithm, which has existed since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa, and was introduced to neural networks by Paul Werbos.
  60. Neural Networks: Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.
  61. Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies.
  62. Deep learning in artificial neural networks with many layers has transformed important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.
  63. AI for robotics will allow us to address the challenges in taking care of an aging population and allow much longer independence.
  64. Google states that, in collaboration with NASA, it has a quantum computer that is 100 million times faster than a traditional computer.
  65. Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.
  66. Quantum computing is one of the newest and most hyped frontiers in AI development.
  67. In a famous 1950 essay, Alan Turing proposed a test for an artificial general intelligence: a computer that could, over the course of five minutes of text exchange, successfully deceive a real human interlocutor.
  68. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced a new way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning.
  69. Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.
  70. AI Philosophy: Can a machine have a mind, consciousness and mental state in exactly the same sense that human beings do? Can a machine be sentient, and thus deserve certain rights? Can a machine intentionally cause harm?
  71. “The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” — Stephen Hawking
  72. Leading AI researcher Rodney Brooks: “I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years.”
  73. Mary Shelley’s Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel?
  74. Edward Fredkin argues that “artificial intelligence is the next stage in evolution”, an idea first proposed by Samuel Butler’s “Darwin among the Machines” (1863), and expanded upon by George Dyson in his book of the same name in 1998.
  75. Ray Kurzweil has used Moore’s law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that “singularity” will occur in 2045.
  76. Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean. Andrew Ng, chief scientist at Baidu Research: “AI is the new electricity. Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”
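If you want to see what items 13, 50, and 55 actually mean in practice, here is a minimal sketch of a perceptron — a single artificial neuron that "learns from data without being explicitly programmed where to look." This is an illustrative toy in Python/NumPy, not production code; the function names and hyperparameters are my own choices.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights w and bias b so that sign(X @ w + b) matches labels y (0/1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred      # the prediction error drives the update
            w += lr * err * xi
            b += lr * err
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Example: learn the logical AND function, which is linearly separable,
# so the perceptron convergence theorem guarantees it will be learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(predict(X, w, b))  # → [0 0 0 1]
```

Notice that nobody tells the neuron the rule for AND — it discovers a separating line purely from labeled examples, which is the core idea behind all of the machine learning in this list.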
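And to connect items 53, 55, and 59: a single perceptron famously cannot learn XOR, but a feedforward network with one hidden layer, trained by the backpropagation algorithm credited to Linnainmaa and Werbos, can. Below is a hedged sketch in Python/NumPy — the layer sizes, learning rate, and random seed are arbitrary choices of mine, and this is a teaching toy rather than how modern libraries implement training.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR data: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0

losses = []
for _ in range(2000):
    # forward pass through the two layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: propagate the error gradient layer by layer
    # (this is reverse-mode automatic differentiation by hand)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The "backward pass" is the whole trick: each layer's gradient is computed from the layer above it, which is exactly the reverse-mode differentiation idea in item 59, and it scales to the many-layered deep networks in items 62 and 68.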
