
The AI Ferrari with No Brakes

The GPT-4o release again raises the question of AI safety
[Image: Arches National Park, 2023]

GPT-4o launched this week with much fanfare, causing quite a stir among the general public.

One thing that stood out from the demos was how fluidly the model displays human emotions in real time, alongside multimodal upgrades that include far superior vision and natural-language capabilities.

Although the public remains unaware of the development costs of such an advanced chatbot, it is conceivable that, in the near future, many people will have their own AI companions, whose interactions and emotional intelligence could become indistinguishable from those of their human counterparts.

The Danger Zone

Humans often make decisions based on their emotions. That’s why companies hire the best salespeople: those who exhibit high emotional intelligence and can appeal to customers’ needs and wants.

A recent article by Prof. Ethan Mollick, “Superhuman?”, sheds light on AI’s superior handling of emotions:

“If you debate with an AI, they are 87% more likely to persuade you to their assigned viewpoint than if you debate with an average human.

“GPT-4 helps people reappraise a difficult emotional situation better than 85% of humans, beating human advice-givers on the effectiveness, novelty, and empathy of their reappraisal.”

These capabilities, however, come with significant risks. With the release of GPT-4o, scammers could use AI chatbots to manipulate human emotions and exploit vulnerable individuals with a higher success rate.

For instance, fraudsters could impersonate me very easily with GPT-4o. Using a chatbot with my cloned voice, they could deceive my elderly parents into draining their life savings to resolve fictitious crises.

You could argue that humans have always been susceptible to persuasion and emotional manipulation, whether by skilled orators, marketing tactics, or social influence.

One in four people who reported losing money to fraud since 2021 said it started on social media. Reported losses to scams on social media hit $2.7 billion from 2021 to 2023, higher than for any other contact method, according to Emma Fletcher’s “Social media: a golden goose for scammers.”

In its report “Elderly Senior Citizen Scam and Fraud Statistics 2024,” the FBI estimates that senior citizens (over 60) lose more than $3 billion yearly to financial scams, including those initiated online.

And this was before GPT-4o became available. Imagine what scammers could do in the future with superhuman AI capabilities.

The core issue may not be AI’s capabilities per se, but whether they are deployed safely and ethically. Mitigating the risks will require AI companies’ self-regulation, government regulation, and public education. But there’s a twist.

Where is AI Safety in the Equation?

Does the private sector have strong financial incentives to self-regulate AI development responsibly to maintain public trust?

According to one Utah State University article, “How Private Governance Mitigates AI Risk,” the answer is yes.

"Private governance is the critical component of mitigating AI technological risk to firms, industries, consumers, and American society...This regulatory tension, between government-mandated social control and responsible innovation of AI based in industry self-regulation, market forces, and firm adherence to best practices, will provide for the still-to-come potential benefits accruing to American society from this revolutionary technology."

But a Carnegie Mellon article, “Toward AI Accountability: Policy Ideas for Moving Beyond a Self-Regulatory Approach,” counters:

"While industry will continue to play a key role in developing norms and institutionalizing best practices regarding the development and implementation of accountable AI systems, effective legislation should acknowledge — but surpass — a self-regulatory approach, which tends to address harms after they are realized."

A Forbes article, “Trustworthy AI: String Of AI Fails Show Self-Regulation Doesn’t Work,” echoes that concern and further disputes self-regulation.

A recent Pew Research study found that 70% of those surveyed have little to no trust in companies to make responsible decisions about how they use AI in their products.

Clearly, the debate over the right balance between self-regulation and government oversight is far from settled.

Superalignment is Dead

Interestingly, in the same week that OpenAI announced GPT-4o, its co-founder, Ilya Sutskever, and another leader, Jan Leike, resigned.

The two had formed the Superalignment team together last July. The team was responsible for developing ways to govern and steer “superintelligent” AI systems, and it was promised up to 20% of OpenAI’s computing resources for AI safety research and development. In reality, though, the team’s requests for those resources were often denied (source: TechCrunch).

The friction between Ilya Sutskever and Sam Altman was on public display during the theatrical drama of Altman’s ousting from OpenAI and subsequent reinstatement.

We should read the back-to-back news of the GPT-4o release and the departure of Superalignment’s founding members as a signal that the company prioritizes products over guardrails.

Be the Informed Individuals

The AI Arms Race is in full force.

Now that the pack leader, OpenAI, has acted, others have no choice but to follow or risk being left behind. The question of safety and ethics may just become an academic debate.

It’s the equivalent of driving an AI Ferrari at 300 mph without seat belts or brakes.

We, as consumers, are left to fend for ourselves. It’s our responsibility to raise awareness of AI’s uses and limitations in our communities.

Already, I’ve seen newly formed groups such as AIEthicist.org working to raise public awareness. We’ve also seen the FTC looking into consumers’ concerns about AI (source: FTC article), including copyright, IP, personal data, bias, scams, fraud, and malicious use.

We are still in the early stages of developing and exploring AI’s capabilities. So, time will tell whether these organizations can influence the AI industry as a whole.

