Tech Talk: Generative AI's biggest ethical concern – Mint

Generative AI’s biggest ethical concern
When writing about Generative artificial intelligence (AI), a recurrent theme is the fear that these models will think and act like humans, plagiarize artists’ work, and replace thousands of routine jobs.
The reason is that, unlike traditional machine learning models, which typically make predictions based on historical data, generative AI models can create entirely new content, including audio, code, images, text, simulations, and videos—a trend that appears to have excited and alarmed many ever since OpenAI’s ChatGPT garnered over 100 million users in just the first two months of its launch in December 2022.
I have written about these concerns earlier in ‘We must rein in the precocious Generative AI children. But how?’ and ‘Will an LLM take away your job?’.
In a white paper released jointly with Accenture, the World Economic Forum (WEF) now talks specifically about the impact of large language models (LLMs) on jobs. It posits that with 62% of total work time involving language-based tasks, the widespread adoption of LLMs such as ChatGPT could significantly impact a broad spectrum of job roles. The study analyzed over 19,000 individual tasks across 867 occupations, assessing the potential exposure of each task to LLM adoption. You can read the article here.
Source: WEF, Accenture white paper
However, while individuals may fret over the loss of jobs, data privacy appears to be the primary concern of companies using Generative AI, followed by transparency, data poisoning (datasets can be deliberately “poisoned” or “polluted” by hackers, resulting in “inaccurate” results), and intellectual property and copyright violation, according to a new study.
Nearly three-quarters of individuals surveyed by consultancy firm Deloitte say their companies have begun testing Generative AI, with 65% already using it inside their businesses and 31% using this technology for external consumption. Yet, more than half (56%) of respondents don’t know or are unsure whether their organizations have ethical standards guiding the use of Generative AI. The findings are part of Deloitte’s second annual report on the ‘State of Ethics and Trust in Technology’, which underscores that companies are wrestling with many issues that Generative AI models raise, including data privacy, exacerbating the digital divide, potential for plagiarism, distribution of harmful content and misinformation, and worker displacement.
The study surveyed more than 1,700 business and technical professionals across industry sectors to assess if and how ethical standards are applied to emerging technology in their organizations. According to Deloitte, emerging technology comprises cognitive technologies, digital reality, ambient experiences, autonomous vehicles, quantum computing, distributed ledger technology and robotics. Cognitive technologies include AI, generative AI, machine learning (ML), neural networks, bots, natural language processing, and neural nets.
The study notes that the perception of cognitive technologies’ potential for social good is increasing even as the perception of their potential for harm “is rising even faster”. In this year’s survey, 39% of respondents indicated cognitive technologies have the most potential for good among all emerging technologies, up from 33% last year. Cognitive technologies were also ranked as most likely to pose a serious ethical risk by 57% of respondents, compared with 41% in 2022.
Further, nearly three-quarters (73%) of respondents said their organizations are shifting some workers’ tasks due to adopting new technologies. Among these organizations, 85% retain individuals whose roles are affected, and more than two-thirds (67%) additionally retrain or upskill those employees for new positions, countering common perceptions that emerging technology will eliminate jobs. When asked to rank top ethical concerns about Generative AI’s use more broadly across business, only 7% of respondents cited job displacement where Generative AI replaces human jobs.
Collaboration with other businesses on ethical tech standards remains unchanged while expectations of the government increase. Despite the increased attention on emerging technologies in the wake of Generative AI, only 27% of survey respondents reported that their companies collaborate with commercial entities (down from 31% last year), and only 23% reported partnering with government organizations to review potential ethics concerns (flat relative to 22% last year). The percentage of respondents who believe the government should have a bigger role in setting ethical standards rose to 71% this year from 61% last year.
Respondents said their organizations are supportive of the government playing a role in technology regulation, specifically in fostering cross-business collaboration to define standards (69%), setting regulations (59%), incentivizing adoption of standards (50%), and imposing financial penalties (37%).
The report recommends, among other things, that “formulating and abiding by robust standards and protocols can help forestall potential risks and harms of Generative AI”. It also suggests that employees should be trained to help them feel “more invested and empowered”. According to the study, companies should also consider proof of concepts and pilot programmes. Pilots that fail to meet requirements or are deemed too high-risk can be junked at this stage, preventing companies from taking big risks.
What’s stopping India from becoming an AI superpower?
In Mint’s AI Rising podcast, Debjani Ghosh, president of Nasscom, joins the hosts to decode the next era of IT and tech in India. Together, they address questions such as why India is not yet on the global AI map, where India sits in the global AI value chain and who controls it, and how India’s AI talent pool is its biggest competitive advantage in the IT industry.
Further, they discuss why India lags in AI research and development, investments, and infrastructure, and what can be done to address these gaps. They also explore how India can reskill its workforce to boost talent and productivity and emerge as a global AI talent hub.
OpenAI is making its own chips?
Reuters recently reported that OpenAI is exploring manufacturing its own AI chips. Apple already produces its own processors for the iPhone and Mac, Google has developed its tensor processing units (TPUs), and Amazon has its Trainium chips. While OpenAI declined to comment, the Reuters report underscored that with each ChatGPT query costing roughly 4 cents, according to an analysis by Bernstein analyst Stacy Rasgon, ChatGPT would require about $48.1 billion worth of graphics processing units (GPUs) initially and about $16 billion worth of chips a year to “grow to a tenth the scale of Google search…(and) keep operational”.
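To see where numbers of that magnitude come from, here is a minimal back-of-envelope sketch in Python. The per-query cost is from the Bernstein analysis cited above; the daily Google search volume is a hypothetical assumption of mine (a commonly cited public estimate, not a figure from the report), so treat the output as illustrative only.

```python
# Back-of-envelope on the Bernstein figures cited above.
# ASSUMPTION: Google serves ~9 billion searches a day (a public
# estimate, not from the report); "a tenth the scale" is then ~900M/day.
COST_PER_QUERY = 0.04           # ~4 cents per ChatGPT query (Rasgon)
QUERIES_PER_DAY = 9e9 / 10      # hypothetical tenth-of-Google volume

daily_cost = QUERIES_PER_DAY * COST_PER_QUERY    # dollars per day
annual_cost = daily_cost * 365                   # dollars per year

print(f"~${daily_cost / 1e6:.0f}M/day, ~${annual_cost / 1e9:.1f}B/year in serving cost")
```

Under these assumed inputs, serving costs alone land in the low tens of billions of dollars a year, the same order of magnitude as the ~$16 billion annual chip spend the report mentions.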
You may liken central processing units (CPUs) to the brains of computing devices such as mobiles, desktops, laptops, and servers. But unlike CPUs, GPUs are specifically designed to handle many tasks simultaneously, a feature known as parallel computing that makes them better suited to high-performance workloads such as supercomputing and training LLMs. The explosive growth of AI has now made Nvidia, known primarily for selling processors to the gaming industry as recently as five years ago, one of the hottest technology assets in the world.
That said, Nvidia only designs its chips; they are manufactured by semiconductor foundries such as Taiwan Semiconductor Manufacturing Company and Samsung Electronics Co. Rivals such as Advanced Micro Devices and Intel Corp., along with Alibaba Group, Alphabet, Amazon, Meta, Qualcomm, Broadcom, and Baidu, also design their own chips. Microsoft, too, is reportedly working on its own AI chips for training large language models, which could reduce its reliance on Nvidia.
For now, though, Nvidia has stolen a march in this space. According to a 27 May report by investment bank JPMorgan, Nvidia could garner about 60% of all AI deals this year on the back of hardware such as its GPUs and networking products. You may read the article here.
Can humans oversee AI decisions?
The ministry of corporate affairs (MCA) is automating its compliance system with artificial intelligence (AI) and machine learning (ML) tools. But decisions such as serving notices on companies will be left to (human) officials to make. Mint explains the hybrid approach.
Artificial Empathy: Are we ready for emotions from AI?
Iron Dome gets foxed by low-tech paragliders
The Iron Dome is Israel’s air defence system, developed to counter short-range rockets by intercepting them mid-air. According to Israel’s Ministry of Defense, it can successfully handle multiple rockets at a time. Developed by Rafael Advanced Defense Systems and Israel Aerospace Industries, the system became operational in March 2011 and made its first interception of a rocket from Gaza in April 2011. Since then, it has intercepted thousands of rockets.
The Iron Dome, according to Rafael Systems’ website, “has become the only multi-mission system in the world that provides a combat-proven solution to defeat Very Short Range (VSHORAD), as well as rocket, artillery and mortar (C-RAM) threats, aircraft, helicopters, UAVs, PGMs, and cruise missiles, for land and naval air defence, operating 24/7, in all weather conditions”. The system was designed to respond to threats from Gaza and southern Lebanon. The Iron Dome uses a multi-mission radar to detect an incoming rocket’s route. The system’s command and control centre analyzes the rocket’s trajectory and estimated landing area, according to Israel’s Ministry of Defense. But during its recent attack on Israel, Hamas used many low-tech hang-gliders and paragliders to evade radar detection while launching rockets. This foxed the Iron Dome system.
Hope you folks have a great weekend, and your feedback will be much appreciated.
Were you forwarded this email? Did you stumble upon it online? Sign up here.
Written by Leslie D’Monte. Edited by Saikat Chatterjee. Produced by Shad Hasnain.

