Daily News

04-04-2023



Table of Contents



  • GS-1 Indian Society
    • What technology leaders asking for a six-month halt on AI don’t want you to know
  • Fact File
    • ISRO’s Reusable Launch Vehicle Mission RLV LEX



What technology leaders asking for a six-month halt on AI don’t want you to know

GS-1: Indian Society (Effects of Globalization on Indian Society)

 

On March 28, the Future of Life Institute published a letter calling for a six-month halt on “training AI systems more powerful than GPT-4”, signed by more than 2,900 people. Some of the signatories are famous in the worlds of AI, computer science, economics, and policy, such as Apple co-founder Steve Wozniak, Turing Award winner Yoshua Bengio, and MIT economics professor Daron Acemoglu. Any intelligent observer of the field will agree with the core of the letter, which calls for caution about AI, but the devil is in the details. By hyping the non-existent and improbable red herring of fantastical AI technology, in tune with the ideology of “Longtermism”, the letter misdirects attention from the actual dangers of the AI industry. A shallow reading reveals only the warning, not the clever but cynical misdirection.

 

The Demands of the Tech Leaders

  • The group of tech leaders, which includes Elon Musk and Demis Hassabis, has called for a six-month halt on the training of AI systems more powerful than GPT-4.
  • They argue that there is a risk of unintended consequences if AI is allowed to develop unchecked.
  • The group also calls for a broader conversation about the ethical implications of AI, and for a regulatory framework to guide its development.

 

Arguments in favour

  • AI is an umbrella term for a group of technologies that rely on machine learning to identify patterns in large amounts of data for decision-making, clustering, or generation.
  • These technologies have replaced certain kinds of cognitive labour and can be economically lucrative, but they are statistical by design, which means errors are baked into the system no matter how much training data is used.
  • No AI system should be used in fields where errors or blind replication of the past could cause harm, such as medicine, law enforcement, and the justice system.
  • However, the government's policy myopia and the profit motive of private companies have pushed for harmful use cases such as facial recognition technology in law enforcement.
  • AI technologies are useful, but their statistical nature must not be mistaken for real intelligence or knowledge generation, and regulatory red lines must be set to prevent harm to individual and social rights.
  • The AI industry is data-hungry, and this hunger violates privacy and other constitutional rights, leading to a surveillance state and economic exploitation.
  • Low-paid and exploited workers called "ghost workers" from economically weak countries curate and clean large quantities of data for the AI industry.
  • Social media platforms sell user data via "data brokers" in an ill-regulated grey market, creating an ecosystem in which the AI industry and platform economy feed into each other and contribute to exploitation.
  • The AI industry is part of the larger market economy, and pressures for profit can ignore the necessary precautions for AI research, design, and deployment, leading to real dangers and harms.
  • The potential risks of AI are significant, and could include unintended harm to humans, as well as economic disruption and geopolitical instability.
  • A moratorium on AI development would provide an opportunity for reflection and for the development of a responsible approach to AI.
  • A regulatory framework is necessary to ensure that AI is developed in a way that is aligned with human values and priorities.
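The point above about errors being baked into statistical systems can be sketched with a small simulation. This is an illustration of the general argument, not anything from the article: when the data-generating process itself is noisy, even the best possible predictor keeps erring at the label-noise rate, no matter how many training samples it sees.

```python
# Illustrative sketch (assumed setup, not from the article): labels equal
# the feature but are flipped with some probability. The Bayes-optimal
# predictor simply outputs the feature, yet its error rate never falls
# below the noise level, regardless of how much data is available.
import random

random.seed(42)

def error_rate(n_samples, noise=0.1):
    """Error rate of the optimal predictor on noisy-label data."""
    errors = 0
    for _ in range(n_samples):
        x = random.randint(0, 1)                       # the feature
        y = x if random.random() > noise else 1 - x    # noisy label
        errors += (x != y)                             # best guess is x
    return errors / n_samples

for n in (1_000, 100_000):
    print(f"n={n}: error rate ≈ {error_rate(n):.3f}")  # hovers near 0.10
```

More data tightens the estimate of the error rate but cannot drive it below the noise floor, which is the sense in which errors are "baked in".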

 

Arguments against

  • It may not be feasible or desirable to halt the development of AI for a period of six months.
  • AI is already being developed by a wide range of actors, so a moratorium would be unlikely to achieve its intended goals.
  • The tech leaders may have ulterior motives for making the demand, such as protecting their own business interests or limiting competition.

 

Challenges and Risks of AI in India

  • Lack of data protection law in India: Currently, India lacks a substantial data protection law to ensure the fundamental right to privacy, leading to the proliferation of harmful facial recognition technology projects in law enforcement and other areas.
  • Platform/Gig work not recognized as employment: India's laws do not recognize platform/gig work as employment, leaving gig workers without the protections afforded to ordinary workers.
  • AI systems in delicate processes: The deployment of AI in sensitive processes such as telemedicine and the justice system is a cause for concern.
  • The Future of Life Institute letter ignores primary harm: The letter ignores the actual harm caused by the AI industry, such as the use of error-prone non-explainable artifacts, dangers of replicating past societal problems, continuous erosion of privacy, and expanding platformisation and workers’ exploitation.
  • Dangers ignored: The letter uses dystopian fantasies to distract from the actual harm caused by the AI industry.
  • AI safety can't be a red herring: "AI safety" cannot become a red herring that distracts from much-needed regulation.
  • Ownership and use are the central issues: The central issue is not the technology itself, but who owns AI and how society uses it.

 

Conclusion

  • The debate over the tech leaders' demand reflects broader tensions and disagreements within the tech industry about the role of AI and its ethical implications.
  • The development of AI will continue to be a contentious issue, and it will require ongoing dialogue and collaboration among a range of stakeholders to ensure that it proceeds in a responsible and ethical manner.

 

[Ref- IE] 


Fact File


ISRO’s Reusable Launch Vehicle Mission RLV LEX

  • The Indian Space Research Organisation (ISRO) and its collaborators successfully conducted a precise landing trial for a Reusable Launch Vehicle at the Aeronautical Test Range in Chitradurga, Karnataka.
  • This trial, known as the Reusable Launch Vehicle Autonomous Landing Mission (RLV LEX), marked the second of five tests in ISRO's plan to develop space planes/shuttles capable of transporting payloads to low Earth orbit and returning to Earth for reuse.


Reusable Launch Vehicle-Technology Demonstrator (RLV-TD)

  • The Reusable Launch Vehicle-Technology Demonstrator (RLV-TD) programme aims to develop essential technologies for a fully reusable launch vehicle that enables low-cost access to space.
  • It will be used to develop technologies like hypersonic flight (HEX), autonomous landing (LEX), return flight experiment (REX), powered cruise flight, and Scramjet Propulsion Experiment (SPEX).
  • It looks like an aircraft and consists of a fuselage, a nose cap, double delta wings, and twin vertical tails.
  • ISRO plans to scale up the RLV-TD to become the first stage of India’s reusable two-stage-to-orbit (TSTO) launch vehicle in the future.


Advantage

  • A reusable launch vehicle is considered a low-cost, reliable, and on-demand mode of accessing space.
  • The structure of a space launch vehicle accounts for approximately 80 to 87 percent of the launch cost, while the cost of propellants is comparatively minimal.
  • Since the costly structure is recovered and flown again, RLVs can reduce launch costs by nearly 80 percent.
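The arithmetic behind that cost-reduction figure can be sketched as follows. The numbers are purely illustrative assumptions, not ISRO data: if the structure is ~80% of the cost of an expendable launch and only the remainder (propellant, operations) recurs, then amortising the structure over many flights approaches an 80% saving.

```python
# Hedged sketch with assumed figures (not ISRO data): amortise a
# one-time structure cost over the number of flights it serves.
def cost_per_launch(total_cost, structure_fraction, flights):
    structure = total_cost * structure_fraction        # paid once, reused
    recurring = total_cost * (1 - structure_fraction)  # paid every flight
    return structure / flights + recurring

base = cost_per_launch(100.0, 0.80, 1)     # expendable baseline: 100.0
reused = cost_per_launch(100.0, 0.80, 10)  # amortised over 10 flights: 28.0
saving = 1 - reused / base                 # 0.72, trending toward 0.80
```

With ten reuses the per-launch cost already drops by about 72%, and the saving asymptotically approaches the structure's share of the cost (80% here) as the number of reuses grows.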


Global advancements in RLV technologies

  • NASA's Space Shuttle fleet carried out human spaceflight missions using partially reusable vehicles for three decades before its retirement in 2011.
  • The use of reusable space launch vehicles has regained interest in recent years, with SpaceX showcasing partially reusable launch systems using its Falcon 9 and Falcon Heavy rockets since 2017.
  • SpaceX is also working on a fully reusable launch vehicle system called Starship.
  • Several private launch service providers and government space agencies around the world, including ISRO, are working on developing reusable launch systems.

