In today's Finshots, we look at the possible challenges associated with reckless AI usage and a few regulators who are stepping up to the plate.
Before we begin, if you're someone who loves to keep tabs on what's happening in the world of business and finance, then hit subscribe if you haven't already. We strip the jargon from our stories and deliver crisp financial insights straight to your inbox. Just one mail every morning. Promise!
If you’re already a subscriber or you’re reading this on the app, you can just go ahead and read the story.
The Story
Picture this.
A father drops his son off at an exam centre. He wishes him luck, bids goodbye and goes about his day. Soon, he gets a call from a certain Vinod Kumar, who identifies himself as a police officer. He tells him, “Your son has been apprehended alongside a gang of rapists”, and demands Rs 30,000 on Paytm to clear his son’s name. He then hands the phone over to the son. Trembling, the boy pleads, “Papa, please pay him. They are real policemen. Please save me.” The father instantly recognises his son’s voice, his mannerisms and his desperation. He hangs up and contacts the local police station, only to find out that there is no Vinod Kumar. The real police soon send him a picture of his son from inside the exam centre. He is safe.
So, who was on the call earlier?
Well, it was an AI-generated clone mimicking the son’s voice.
This story appeared in the Indian Express earlier this year, and it isn’t an isolated incident. In some cases, AI scammers have mimicked the voices of company executives, manipulating employees into divulging sensitive insider information or processing financial transactions. And this massive, unchecked usage of AI is fuelling a major uptick in online scams.
Now, you’d probably look at this and say — “Hey, I am not falling for this!”
And while that may be true, even a simple slip-up could have catastrophic consequences. According to a recent report by McAfee, around 80% of Indian AI fraud victims said that they lost money, and half of them lost more than Rs 50,000.
So, by now, it should be clear that there’s an urgent need to regulate AI. And thankfully, some countries are rising to the challenge. Take, for instance, the European Union. It has taken a giant leap forward in addressing some of these concerns with the EU Artificial Intelligence Act. The European Parliament approved the bill on March 13, 2024, and though it’s pending review in the EU Council, it’ll apply across EU nations starting in 2026 if everything goes well.
But what does this Act entail?
Well, the idea is that AI systems should be trustworthy and must not infringe on fundamental rights, safety, or ethical principles. And the law attempts to mitigate the risks associated with powerful AI models.
They’ve categorised these risks into levels, and three of them stand out.
First, you have the unacceptable risk category. These are AI systems that pose a threat to safety, fundamental rights, or public order. For instance, an AI-powered autonomous weapon that can independently select targets and engage in combat. Such a system could endanger human lives, violate international law, and potentially even trigger wars. The AI Act aims to prevent these systems from ever being deployed.
Next, there’s the high-risk category. These are AI systems used in critical areas where a failure could lead to significant consequences. Consider an AI system that assists radiologists in detecting early signs of cancer from medical images. Patients’ lives could be at risk if it produces inaccurate results. Sure, doctors make mistakes as well. But they’re regulated too, and in the same vein, the AI Act wants to make sure that such systems undergo rigorous testing and certification and involve human oversight.
Then, there’s the limited risk category. Think chatbots or virtual assistants that help customers navigate e-commerce websites. While this isn’t exactly life-or-death business, you still want to make sure that these AI systems are transparent and fair in their interactions with users.
Also, the European Union isn’t the only one that’s worried right now.
In the US, President Joe Biden’s Executive Order (E.O.) on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence aims to guide responsible AI development and deployment. And America’s NIST (National Institute of Standards and Technology) is developing standards for “red-teaming” AI systems, stress-testing their defences and identifying potential problems.
Let us explain.
For instance, consider facial recognition software that uses AI. The red team’s objective could be to determine if the AI can be fooled into misidentifying a person or object. The team might use masks, artificial lighting or even digital photos/videos to trick the AI into granting access.
NIST will function as the principal authority here, establishing the guidelines for these ‘red-teaming’ exercises. In other words, NIST sets the rules for testing AI, and big companies have to prove that their systems pass those tests. If a system fails, NIST will report on the vulnerabilities and it will be tested again. So NIST is the guardian here.
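If you’re curious what such a red-team probe could look like under the hood, here’s a minimal, hypothetical sketch in Python. It imagines a toy face matcher that grants access when an input’s “embedding” is close enough to an enrolled one, and a red team that keeps nudging an impostor’s input towards acceptance. Every function, name, and number here is an illustrative assumption; this is not how NIST or any real vendor actually tests these systems.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: the system "enrols" one authorised face as a
# 128-dimensional embedding and grants access when a new input's
# embedding is similar enough to it. All numbers here are made up.
ENROLLED = rng.normal(size=128)   # embedding of the authorised person
THRESHOLD = 0.8                   # cosine-similarity cutoff for access

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def grants_access(embedding):
    """The system under test: does this 'face' match the enrolled one?"""
    return cosine(embedding, ENROLLED) >= THRESHOLD

def red_team_attack(impostor, steps=2000, step_size=0.05):
    """Repeatedly tweak an impostor's input, keeping any tweak that
    brings it closer to acceptance: the digital analogue of trying
    masks, lighting tricks and photos until something fools the camera."""
    candidate = impostor.copy()
    for _ in range(steps):
        tweak = rng.normal(size=candidate.shape) * step_size
        if cosine(candidate + tweak, ENROLLED) > cosine(candidate, ENROLLED):
            candidate = candidate + tweak   # keep helpful tweaks
        if grants_access(candidate):
            return candidate                # vulnerability found
    return None                             # the matcher held up

impostor = rng.normal(size=128)   # someone who should be rejected
breach = red_team_attack(impostor)
print("Red team broke in!" if breach is not None else "Matcher held up.")
```

If the attack succeeds, that’s exactly the kind of vulnerability a red-team report would flag so the system can be fixed and tested again.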
The UK, on the other hand, is keen on a pro-innovation approach to AI regulation. It’s eager to keep an eye on the wrongful use of AI. But it doesn’t want to take an overbearing approach that curbs innovation.
In India, the Ministry of Electronics and Information Technology (MeitY) recently issued a revised AI advisory. The advisory highlights intermediaries’ responsibility to prevent AI models from propagating unlawful content or perpetuating bias and discrimination, among other things. But it’s sparse on any concrete details.
Finally, country-specific laws may not be effective when an AI crime is committed in one country but has to be tried in another altogether. This is why OpenAI’s CEO Sam Altman has called for an international agency to monitor the ‘most powerful’ AI and ensure ‘reasonable safety’. He is wary of a not-so-distant future in which frontier AI systems are capable of causing enormous global harm.
So yeah, while the European Union and the US may have some ideas on regulating AI, maybe what we truly need is an agency that can work on global regulations.
Until then…
Don't forget to share this story on WhatsApp, LinkedIn and X.
📢Finshots is also on WhatsApp Channels. Click here to follow us and get your daily financial fix in just 3 minutes.
Why you MUST buy a term plan in your 20s 👇🏽
The biggest mistake you could make in your 20s is not buying term insurance early. Here's why:
1) Low premiums, forever
The same Rs 1 crore term insurance cover will cost you far less at 25 than at 35. And once these premiums are locked in, they remain the same throughout the term!
So if you’re planning on building a robust financial plan, consider buying term insurance as early as you can.
2) You might not realise that you still have dependents in your 20s
Maybe your parents are about to retire in the next few years, and funding your studies didn’t leave them room to grow their investments, making you their sole breadwinner once they stop working.
And although no amount of money can replace you, it can certainly give your family that added financial support in your absence.
3) Tax saver benefit
Section 80C of the Income Tax Act lets you deduct the premiums you pay from your taxable income. And what's better than saving taxes early on in your career?
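To see what that deduction means in rupee terms, here’s a small, hypothetical Python sketch. The income, premium, and 30% slab rate are made-up numbers, it assumes the old tax regime (where 80C applies), and note that the Rs 1.5 lakh 80C cap is shared across all your 80C investments, not just insurance premiums.

```python
# A toy illustration (not tax advice) of how a Section 80C deduction works,
# assuming the old tax regime and a hypothetical 30% slab rate.
SECTION_80C_CAP = 150_000        # Rs 1.5 lakh annual cap on 80C deductions

def taxable_income_after_80c(income, term_premium):
    """Deduct the premium (up to the 80C cap) from taxable income."""
    deduction = min(term_premium, SECTION_80C_CAP)
    return income - deduction

income = 1_200_000               # hypothetical annual taxable income (Rs)
premium = 15_000                 # hypothetical annual term plan premium (Rs)

new_income = taxable_income_after_80c(income, premium)
print(f"Taxable income drops from Rs {income:,} to Rs {new_income:,}")
print(f"At a 30% slab, that's roughly Rs {0.30 * (income - new_income):,.0f} saved")
```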
So maybe, it's time for you to buy yourself a term plan. And if you need any help on that front, just talk to our IRDAI-certified advisors at Ditto.
With Ditto, you get access to:
- Spam-free advice guarantee
- 100% free consultation from the industry's top insurance experts
- 24/7 assistance when filing a claim from our support team
Speak to Ditto's advisors now by clicking the link here.