History of AI
Artificial intelligence as a concept began to take hold in the 1950s, when computer scientist Alan Turing published the paper “Computing Machinery and Intelligence,” which asked whether machines could think and how one would test a machine’s intelligence. This paper marked the beginning of AI research and development and contained the first proposal of the Turing test, a method used to assess machine intelligence. The term “artificial intelligence” was coined in 1956 by computer scientist John McCarthy at an academic conference at Dartmouth College.
After McCarthy’s conference and throughout the 1970s, interest in AI research grew at academic institutions, supported by US government funding. Innovations in computing allowed several foundations of AI to be established during this time, including machine learning, neural networks, and natural language processing. Despite these advances, AI technologies proved more difficult to scale than expected, and interest and funding declined, leading to the first AI winter, which lasted until the 1980s.
In the mid-1980s, AI interest reawakened as computers became more powerful, deep learning became popular, and AI-powered “expert systems” were introduced. However, the complexity of these new systems and the inability of existing hardware to keep up brought on the second AI winter, which lasted until the mid-1990s.
By the mid-2000s, innovations in processing power, big data, and advanced deep learning techniques resolved AI’s previous roadblocks, enabling further AI breakthroughs. Modern AI technology such as virtual assistants, driverless cars and generative AI began to enter the mainstream in the 2010s, making AI what it is today.
Artificial Intelligence Timeline
(1943) Warren McCulloch and Walter Pitts publish the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which proposes the first mathematical model for building a neural network.
(1949) In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more often they are used. Hebbian learning is still an important model in AI.
(1950) Alan Turing publishes the paper “Computing Machinery and Intelligence,” proposing what is now known as the Turing Test, a method for determining whether a machine is intelligent.
(1950) Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
(1956) The phrase “artificial intelligence” is coined at the Dartmouth Summer Research Project on Artificial Intelligence. Led by John McCarthy, the conference is widely regarded as the birthplace of AI.
(1958) John McCarthy develops the AI programming language Lisp and publishes “Programs with Common Sense”, a paper proposing the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans.
(1959) Arthur Samuel coins the term “machine learning” while at IBM.
(1964) Daniel Bobrow develops STUDENT, an early natural language processing program designed to solve algebra word problems, as a doctoral candidate at MIT.
(1966) MIT professor Joseph Weizenbaum creates ELIZA, one of the first chatbots to successfully mimic the conversational patterns of users, creating the illusion that it understands more than it does. This introduces the ELIZA effect, a common phenomenon in which people falsely attribute humanlike thought processes and emotions to AI systems.
(1969) The first successful expert systems, DENDRAL and MYCIN, are created at the AI Lab at Stanford University.
(1972) The logic programming language PROLOG is created.
(1973) The Lighthill Report, outlining the disappointments in AI research, is released by the British government and leads to severe cuts in funding for AI projects.
(1974-1980) Frustration with the slow progress of AI development leads to major DARPA cuts in academic grants. Together with the earlier ALPAC report and the previous year’s Lighthill Report, these cuts dry up AI funding and stall research. This period is known as the “First AI Winter.”
(1980) Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, it begins an investment boom in expert systems that will last for much of the decade, effectively ending the first AI winter.
(1985) Companies spend over a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies such as Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
(1987-1993) As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the “Second AI Winter.” During this period, expert systems prove too expensive to maintain and update, and eventually fall out of favor.
(1997) IBM’s Deep Blue beats world chess champion Garry Kasparov.
(2006) Fei-Fei Li begins work on the ImageNet visual database, which launches in 2009. It becomes a catalyst for the AI boom and the basis on which image recognition grows.
(2008) Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
(2011) IBM’s Watson handily beats the competition on Jeopardy!
(2011) Apple releases Siri, an AI-powered virtual assistant, through its iOS operating system.
(2012) Andrew Ng, founder of the Google Brain Deep Learning project, trains a neural network using deep learning algorithms on 10 million images taken from YouTube videos. The network learns to recognize a cat without being told what a cat is, ushering in the breakthrough era for neural networks and deep learning funding.
(2014) Amazon releases Alexa, an AI-powered virtual assistant for its smart home devices.
(2016) Google DeepMind’s AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese board game had been seen as a major obstacle for AI to overcome.
(2018) Google releases BERT, a natural language processing model that reduces barriers in translation and comprehension for machine learning applications.
(2020) Baidu releases its LinearFold AI algorithm to scientific and medical teams working to develop a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm is able to predict the secondary structure of the virus’s RNA sequence in just 27 seconds, 120 times faster than other methods.
(2020) OpenAI releases natural language processing model GPT-3, capable of producing text modeled after the way humans speak and write.
(2021) OpenAI builds on GPT-3 to develop DALL-E, capable of creating images from text prompts.
(2022) The National Institute of Standards and Technology releases the first draft of its AI risk management framework, voluntary US guidance “to better manage risks to individuals, organizations and society related to artificial intelligence.”
(2022) OpenAI introduces ChatGPT, a chatbot powered by a large language model that gains over 100 million users in a few months.
(2022) The White House unveils its Blueprint for an AI Bill of Rights, which outlines principles for the responsible development and use of AI.
(2023) Microsoft launches an AI-powered version of Bing, its search engine, built on the same technology that powers ChatGPT.
(2023) Google announces Bard, a rival conversational AI that is later rebranded as Gemini.
(2023) OpenAI introduces GPT-4, its most sophisticated language model yet.
(2023) The Biden-Harris administration issues the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which calls for safety testing, labeling of AI-generated content, and increased efforts to create international standards for the development and use of AI. The order also emphasizes the importance of ensuring that artificial intelligence is not used to circumvent privacy protections, exacerbate discrimination, or violate civil rights or the rights of consumers.
(2023) The chatbot Grok is released by Elon Musk’s AI company xAI.
(2024) The European Union adopts the Artificial Intelligence Act, which aims to ensure that AI systems deployed within the EU are “secure, transparent, traceable, non-discriminatory and environmentally friendly.”
(2024) Claude 3 Opus, a large language model developed by AI company Anthropic, outperforms GPT-4, making it the first LLM to do so.