
No Pause Button: From Vacuum Tubes to Learning Models

“In retrospect, we can see that the progression from computers to the Internet to machine learning was inevitable: computers enable the Internet, which creates a flood of data and the problem of limitless choice; and machine learning uses the flood of data to help solve the limitless choice problem.” – Pedro Domingos

Big Thought: In 2016, a friend gave me Pedro Domingos’ “The Master Algorithm,” a best-selling book that Bill Gates described as an essential guide to machine learning. It is. The book explores the “five tribes” of machine learning and argues that by combining their strengths, we can develop a “master algorithm” capable of learning and solving any problem. It focuses largely on the positives. I couldn’t help but think back to the first time I read it and compare that with what the world has become now.

In the early 19th century, the world woke up to what was arguably the first mechanical computer: the Difference Engine, designed by Charles Babbage. Some argue that the slide rule, invented by William Oughtred in 1622, was the first computer. Regardless of whether you are on Team Babbage or Team Oughtred, we can agree on one thing: the journey through computing history has been remarkable.

The first generation of computers used vacuum tubes, built into bulky machines like the ENIAC; the second generation used transistors; the third introduced integrated circuits; and the fourth brought in microprocessors and the internet. Today, we are in the fifth generation, characterized by ultra-large-scale integration (ULSI) technology, parallel processing, and artificial intelligence (AI).

Two important moments stand out for me in this timeline: the internet revolution and the rise of AI. The former showed us just how much we could bring the world closer together. The internet gave us so much. I remember a comment under an MIT lecture I was watching on YouTube: “Me, a poor inner city kid sitting through an MIT lecture like this. Unreal.” I thought there was no better way to express what the internet brought: a more connected world, access to information, access to opportunity. But it also brought disinformation, sophisticated sex-trafficking networks, social media addiction that is ruining our attention, disrupted elections, and much more. We were so excited about this new connectedness that we only started thinking about the repercussions later.

Artificial intelligence has immensely improved our lives through helpful technologies like Google Maps, translation tools, spell-check, and many other applications built on the subfield of machine learning. However, the advancement of these technologies and the push toward artificial general intelligence (AGI) are cause for alarm. The danger of this sprint, according to experts like Professor Geoffrey Hinton, is that if these systems ever write and modify their own code autonomously, we could reach a point where they cannot simply be turned off; they would, quite literally, have minds of their own.

In a recent interview with The Economist, OpenAI’s CEO Sam Altman echoed Professor Hinton’s fears. When asked about pausing AI development, he said there was no magic red button to stop AI. If these systems ever get out of hand, their developers will struggle to control them.

So perhaps it is good that, two years after ChatGPT went mainstream, researchers are continuously expanding the conversation on the social and economic downsides of AI’s advancement. At the center of our experiment with these algorithms must be responsible development, ethics, and fairness. Regulators have to ensure the transparency and explainability of these systems, human oversight of critical decisions, and strong safeguards against misuse. We must keep weighing the implications of AI development for the human race and its continuity, for social interaction, for jobs, and for the environment.