In today’s Finshots, we explore why two pioneers of AI—John Hopfield and Geoffrey Hinton—were awarded the 2024 Nobel Prize in Physics.
But before we begin, if you’re someone who loves to keep tabs on what’s happening in the world of business and finance, then hit subscribe if you haven’t already. If you’re already a subscriber or you’re reading this on the app, you can just go ahead and read the story.
The Story
In a surprising turn of events, two pioneers of Artificial Intelligence (AI), John Hopfield and Geoffrey Hinton, were awarded the 2024 Nobel Prize in Physics.
Yep, AI and Physics! Not something you’d expect, right?
But before we dive into the AI-Physics connection, let’s take a little detour to a tech giant you’ve probably heard of, Nvidia.
Nvidia recently overtook Microsoft to become the world’s second-largest company by market capitalisation, right behind Apple. And if you guessed that AI had a role to play in this, well, you’re spot on.
Nvidia’s rise? [1]
It’s all thanks to their GPUs (graphics processing units). Simply put, a GPU is a chip or electronic circuit capable of rendering graphics for display on an electronic device. It was initially designed for gamers craving super-smooth, high-quality graphics.
But here’s the twist: those same GPUs are perfect for AI.
How, you ask?
Imagine you’re teaching an AI model to recognise dog breeds. It needs to process thousands of online images in a blink. And that’s where Nvidia’s GPUs come in. They handle complex calculations at lightning speed, making them the go-to hardware for AI developers.
But it’s not just about hardware. Nvidia has also built software tools, like CUDA, to optimise AI applications for their GPUs.
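If you’re curious what that looks like in practice, here’s a tiny, purely illustrative sketch in Python. It uses PyTorch (a popular AI library that runs on Nvidia’s CUDA under the hood) to time the same large matrix multiplication on the CPU and then on the GPU. The matrix sizes are made-up numbers for the demo, and it assumes you have PyTorch installed and an Nvidia GPU handy:

```python
# Why GPUs matter for AI, in miniature: the same matrix multiplication,
# timed on the CPU and then on an Nvidia GPU via CUDA.
# Assumes PyTorch is installed and a CUDA-capable GPU is available.
import time
import torch

a = torch.randn(4096, 4096)          # a big grid of random numbers
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b                            # matrix multiplication on the CPU
print(f"CPU: {time.time() - start:.3f} s")

if torch.cuda.is_available():        # only runs if an Nvidia GPU is present
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()         # wait for the data to finish moving over
    start = time.time()
    _ = a_gpu @ b_gpu                # the exact same multiplication, now on the GPU
    torch.cuda.synchronize()         # wait for the GPU to actually finish
    print(f"GPU: {time.time() - start:.3f} s")
```

Training an AI model boils down to millions of multiplications like this one, which is why the GPU’s head start adds up so quickly.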
And guess how all of this ties to the Nobel Prize?
Well, one of the winners this year, Geoffrey Hinton, figured all of this early on, even before Nvidia.
Back in 2009, Hinton tried to get a free GPU from Nvidia for his AI experiments. They turned him down, but Hinton used their GPUs anyway. In 2012, he and his students developed an artificial neural network that could teach itself to recognise images, thanks to Nvidia’s CUDA platform.
This was a watershed moment.
Hinton proved that GPUs could dramatically accelerate AI training, something Nvidia itself hadn’t fully appreciated until then. His breakthrough showed the company that its GPUs had massive potential beyond gaming, and that was the moment Nvidia pivoted to AI. Before this, Nvidia’s CUDA was mainly used for high-performance computing tasks such as CT scans, financial modelling and animation.
So, in a way, Hinton’s experiments didn’t just revolutionise AI. They also helped Nvidia understand the full power of its own technology.
And the rest, as they say, is history.
Now, back to our main story—why did this AI breakthrough win a Physics Nobel Prize?
See, the two Nobel Prize winners, Hopfield and Hinton, laid the groundwork for something called artificial neural networks [2]. These networks are crucial to the story and are, basically, the building blocks of modern AI.
Think of a neural network as a system inspired by how the human brain works. Just like your brain learns to recognise faces or words, a neural network does the same with data. It takes in information, processes it, and makes decisions, learning from experience!
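To make that concrete, here’s a deliberately tiny sketch in Python: a single artificial “neuron” that learns, from experience, to separate two kinds of points by nudging its internal numbers (its “weights”) every time it gets one wrong. The data and settings below are made up purely for illustration:

```python
# A toy version of "learning from experience": one artificial neuron
# adjusts its weights whenever it misclassifies a point.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # 200 points, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # label: 1 if the features sum to > 0

w, b = np.zeros(2), 0.0                        # the neuron's adjustable "knobs"
for _ in range(20):                            # go over the data 20 times
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0  # the neuron's current guess
        w += 0.1 * (yi - pred) * xi            # nudge the weights when it's wrong
        b += 0.1 * (yi - pred)

correct = sum((1.0 if xi @ w + b > 0 else 0.0) == yi for xi, yi in zip(X, y))
print(f"got {correct} of {len(X)} points right after training")
```

By the end it should get nearly every point right. Stack millions of these neurons in layers and you have the neural networks behind modern AI.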
Also, the idea of neural networks isn’t new; it dates back to the 1940s, when scientists like Warren McCulloch and Walter Pitts first proposed simple models of neural activity [3].
But yes, practical applications were limited until the 1980s, when John Hopfield developed the Hopfield Network, which uses principles of physics to model how neural networks can learn from incomplete data. For instance, if a picture of a cat is blurry, his model helped the computer guess what it should look like. This was a huge step forward, but it still had its limitations.
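Here’s roughly what that looks like in code: a bare-bones Hopfield network in Python (NumPy), sketched purely for illustration. It stores one pattern of +1/−1 “pixels”, we corrupt a few of them, and the network settles back to the stored memory:

```python
# A minimal Hopfield network: store one pattern, corrupt it,
# and let the network recall the original.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # the "memory" (+1/-1 pixels)
n = len(pattern)

# Hebbian learning: pixels that appear together get a stronger connection.
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0)                             # no self-connections

noisy = pattern.copy()
noisy[[1, 4, 6]] *= -1                             # flip a few pixels, like a blurry photo

state = noisy.copy()
for _ in range(5):                                 # update each pixel based on its neighbours
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("original :", pattern)
print("corrupted:", noisy)
print("recalled :", state)                         # should match the original
```

Each update nudges the network towards a lower “energy”, much like the mathematics physicists use to describe magnetic materials, and that’s a big part of the physics connection here.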
It was great at recognising patterns but wasn’t enough to build AI systems that could predict or generate new information. That’s where Geoffrey Hinton came in.
Hinton took Hopfield’s ideas to the next level by introducing something called the Boltzmann Machine. He added hidden layers to neural networks, allowing machines to analyse data in more sophisticated ways. These hidden layers act like a “subconscious,” helping computers not just recognise things but also make predictions. For example, instead of just identifying the cat in a photo, a computer could now guess what the cat might look like in a completely different scene.
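For the curious, here’s a heavily simplified sketch of that idea in Python: a tiny Boltzmann-style network (a restricted Boltzmann machine) with two hidden units that learns which pixels tend to light up together, and then fills in a pattern from a single lit pixel. The data, sizes and training settings are all made up for the demo:

```python
# A tiny restricted Boltzmann machine: two hidden units learn which
# pixels co-occur, then complete a partially lit pattern.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Six-pixel "images": the first three pixels tend to light up together,
# and so do the last three.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 0, 0, 1, 1]], dtype=float)

n_vis, n_hid = 6, 2
W = 0.1 * rng.normal(size=(n_vis, n_hid))   # visible-to-hidden connection strengths
a = np.zeros(n_vis)                         # visible biases
b = np.zeros(n_hid)                         # hidden biases
lr = 0.05

for _ in range(5000):                       # contrastive divergence (CD-1), simplified
    v0 = data
    h0 = sigmoid(v0 @ W + b)                # what the hidden units "see" in the data
    v1 = sigmoid(h0 @ W.T + a)              # the network's own reconstruction
    h1 = sigmoid(v1 @ W + b)
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (h0 - h1).mean(axis=0)

# "Imagine the rest": only the first pixel is on; the network fills in a guess.
partial = np.array([[1, 0, 0, 0, 0, 0]], dtype=float)
guess = sigmoid(sigmoid(partial @ W + b) @ W.T + a)
print(guess.round(2))                       # should lean towards the first group of pixels
```

It’s a long way from ChatGPT, but the core idea is the same: hidden units learn the structure in data well enough to guess at things they haven’t explicitly seen.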
And this idea of hidden layers became the foundation of today’s AI, whether it’s ChatGPT generating coherent text or DALL-E creating original artwork.
So why is all of this worthy of a Physics Nobel Prize?
Well, these neural networks we just spoke about are based on three principles from physics—how the brain works (biophysics), how data is processed (statistical physics), and how computers solve complex problems (computational physics).
Hopfield’s neural networks were inspired by biophysics, which models how the brain works using math. Then there’s statistical physics, which helps AI process huge amounts of data and find patterns. And finally, computational physics is the driving force behind the complex AI models we use today.
So, without the work of Hopfield and Hinton, AI wouldn’t be what it is today.
Their contributions laid the groundwork for neural networks, the brains behind Siri, Alexa, and even medical imaging systems that detect cancer faster than doctors. Take AlphaFold, for instance, an AI that predicts protein structures. It’s revolutionising drug discovery and biochemistry, and it’s all built on Hinton’s breakthroughs.
AI is also solving problems in fields like astronomy, particle physics, and climate science, working with data at a speed and scale we never thought possible.
Even self-driving cars? Yep, they rely on this tech, too.
And according to a 2023 McKinsey report, Generative AI could inject up to $4.4 trillion into the global economy every year [4]. That’s staggering.
Of course, there’s a flip side to this, as the IMF warns that nearly 40% of global jobs could be impacted by AI [5].
Yet, all this reminds us of a timeless truth. Breakthroughs in one field often spark revolutions across industries, and they could reshape how the world works. It’s a ripple effect we’re seeing unfold right now. And as Steve Jobs once said, “You can’t connect the dots looking forward; you can only connect them looking backward.”
The next time you chat with a virtual assistant or see AI in action, remember that those dots all trace back to physics.
Of course, some argue that Hopfield’s and Hinton’s work is more about mathematics and computer science.
Maybe they’re right. Maybe not. In the end, only time will tell which dots will connect next.
Until then…
Don’t forget to share this story on WhatsApp, LinkedIn and X.
📢 Ready for even more simplified updates? Dive into Finshots TV, our YouTube channel, where we break down the latest in business and finance into easy-to-understand videos — just like our newsletter, but with visuals!
Don’t miss out. Click here to hit that subscribe button and join the Finshots community today!
Story Sources: Times of India [1]; The Conversation [2]; Big Think [3]; McKinsey [4]; IMF Blog [5]
A message from one of our customers
Nearly 83% of Indian millennials don’t have term life insurance!!!
The reason?
Well, some think it’s too expensive. Others haven’t even heard of it. And the rest fear spam calls and the misselling of insurance products.
But a term policy is crucial for nearly every Indian household. When you buy a term insurance product, you pay a small fee every year to protect your downside.
And in the event of your passing, the insurance company pays out a large sum of money to your family or your loved ones. In fact, if you’re young, you can get a policy with 1 Cr+ cover at a nominal premium of just 10k a year.
But who can you trust with buying a term plan?
Well, Shamsher - the gentleman who left the above review - spoke to Ditto.
Ditto offered him:
- Spam-free advice
- 100% Free consultation
- Direct WhatsApp support for any urgent requirements
You too can talk to Ditto’s advisors now by clicking the link here.