The evolution of machine learning

Omar Dajani. 02/19/2021


This illustration showcases the variety of topics and functions related to machine learning and demonstrates how expansive this field has become. (Dataversity)


Being one of the most popular computer science fields, machine learning (ML) has come a long way since its inception back in the 1940s. It has been expanding ever since and will likely continue to do so. But wait, what is ML? According to IBM, it “focuses on applications that learn from experience and improve their decision-making or predictive accuracy over time.” ML is a branch of the much bigger artificial intelligence (AI) field, within computer science and technology in general, and its central focus is on algorithms: the better the algorithm, the more accurately and efficiently the resulting system can learn and predict.
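To make that definition a little more concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library (my own choice of tools, not something IBM or this article specifies). It shows the same algorithm “learning from experience”: its test accuracy generally improves as it is trained on more examples.

```python
# A minimal sketch of "learning from experience", assuming scikit-learn is
# installed. The same algorithm is trained on growing amounts of data and
# usually predicts better as its experience grows.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic two-class data standing in for real-world experience.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

for n in (20, 200, 1000):  # growing amounts of training experience
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```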

Despite the field’s inception technically being in the 1940s, it did not truly start becoming the ML we know today until the 1950s. To be more specific, 1956 marked the year of the Dartmouth Workshop, which has been credited countless times over the years as “the birthplace of AI.” During this workshop, 20 of the most intelligent minds in computer and cognitive science were brought together to work on a project exploring how far computers and other machines could be pushed toward intelligent behavior. The event lasted approximately seven weeks and acted as one giant brainstorming session on ways to bring computers closer to their full potential.

Some outcomes of this workshop included John McCarthy (the event’s main organizer) coining the term ‘artificial intelligence’ during the conference and Arthur Samuel (one of the intelligent minds) coining the term ‘machine learning’ three years later, in 1959. Samuel also created the Samuel Checkers-Playing Program, one of the world’s first self-learning programs. Other outcomes involved Herbert Simon and Allen Newell (more of the intelligent minds) showing off Logic Theorist, widely regarded as the first program deliberately designed to mimic the problem-solving skills of a human being.

Even though the Dartmouth Workshop was centered around the advancement of AI, it had numerous outcomes, one of them being the creation of ML. Thus, the event was not only the birthplace of AI but technically the birthplace of ML as well. From there, the ML field continued its ever-growing expansion, eventually reaching another milestone during the 1980s and 1990s. This milestone was a shift of focus away from knowledge (hand-coded rules and facts, which had been ML’s main concern before) and toward data. Under this new paradigm, computers would learn patterns from batches of data rather than simply storing them.

This new approach in the ML field was mainly centered around neural networks and support vector machines (SVMs), with the former being the bigger focus. While the term ‘neural networks’ may seem daunting at first, it actually is not too hard to understand. Essentially, a neural network is a collection of simple computing units, called neurons, connected together in layers and working toward a specific goal: each neuron takes in numbers, weighs them, and passes its result on to the next layer. When it comes to ML, that ‘specific goal’ is making sure computers and other machinery can take various groups of data and learn patterns from them.
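To see what that looks like in practice, here is a minimal sketch of a tiny neural network, written in Python with NumPy (my own choice of tools, not anything named in the history above). It learns the classic XOR problem by repeatedly adjusting the weights that connect its neurons so that its predictions move closer to the correct answers.

```python
import numpy as np

# A tiny two-layer neural network trained on XOR, sketched in plain NumPy.
# This is an illustrative toy, not any specific historical system: each
# "neuron" computes a weighted sum of its inputs followed by a sigmoid.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Four input patterns and the XOR label for each one.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input layer  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden layer -> output layer

lr = 1.0
for _ in range(10000):
    # Forward pass: data flows through the layers of neurons.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should end up close to [0, 1, 1, 0]
```

The key point is the same one behind the data-centered paradigm described above: the network is never told the rule for XOR; it infers the rule from examples.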


This illustrates a simple neural network: layers of interconnected neurons that data flows through on its way to a prediction. (Medium - VIASAT)

As for SVMs, the ‘support’ in their name refers to support vectors: the data points that sit closest to the decision boundary and effectively hold it in place. SVMs classify data by finding a hyperplane (a flat boundary used for data separation) and maximizing the margin, the gap between that hyperplane and the nearest points on either side. SVMs are also big on math. A wide variety of functions (mainly ones relating to calculus and linear algebra) go into solving the optimization problem behind them, and in that sense they can be even more sophisticated than neural networks.
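Here is a small sketch of that idea, again assuming Python and the scikit-learn library rather than anything the article names: a linear SVM fits a separating hyperplane to two clusters of points, and the training points nearest the boundary become the support vectors.

```python
# A hedged sketch of the hyperplane-and-margin idea using scikit-learn's SVC.
# A linear SVM looks for the separating hyperplane w.x + b = 0 with the
# widest possible margin; the training points on the margin are the
# "support vectors" that give the method its name.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters of points, one per class.
X, y = make_blobs(n_samples=100, centers=2, random_state=6)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

print("hyperplane weights w:", clf.coef_[0])
print("hyperplane bias b:  ", clf.intercept_[0])
print("number of support vectors:", len(clf.support_vectors_))
print("class of a new point:", clf.predict([[0.0, 0.0]]))
```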

Ever since this new focus on neural networks and SVMs, ML has found itself being used for a wide variety of applications. For example, in 2014 Facebook created its DeepFace technology, which uses neural networks to recognize users’ facial features in photos. In 2017, Waymo began testing fully autonomous cars, which use ML to learn from driving experience and user feedback.

ML is impressive. Even though it started out as an offshoot of the already popular AI field, it has now become just as popular in its own right. That popularity has produced an offshoot of ML itself in the form of deep learning, which builds on the neural networks described above. Many discoveries were made within a short period of the field’s inception, and many more are surely still on the horizon. Because of those constant discoveries, ML is always changing, never staying the same for long, which only makes the field more popular as its applications continue to grow.

Cover Photo: (Becoming Human)


Omar Dajani
Omar Dajani is an international student from Jerusalem, Palestine. He is currently a sophomore at Fullerton College and is majoring in English with a minor in Computer Science. He enjoys gaming, blogging, journaling, meditating, and going on walks. He intends to transfer to UC Berkeley in Fall 2021.