Microsoft Urges Human Control over AI to Ensure Safety and Ethical Use

Microsoft Vice Chair and President Brad Smith has stressed the critical need to keep artificial intelligence (AI) under human control to prevent it from being weaponized. In an interview with CNBC, Smith pointed to the historical pattern of technologies serving as both tools and weapons, and underscored the importance of maintaining human oversight over AI, whether it is deployed in government, the military, or the automation of critical systems.

The rapid proliferation of AI has raised global concerns, particularly with the rise of ChatGPT, a generative AI-powered chatbot known for its remarkably human-like responses. Generative AI is a form of artificial intelligence capable of creating content in many formats, from text to images and code. Prominent figures in the tech industry, including executives from OpenAI, Google’s DeepMind, and Microsoft, have warned about the existential risks associated with AI, comparing its potential for harm to that of nuclear warfare.

During the interview, Smith reiterated that society cannot rely solely on ethical conduct within companies and advocated for legal frameworks and regulations to enforce safety measures. He drew parallels between AI and earlier technological advances, arguing for safeguards comparable to circuit breakers for electricity or emergency brakes on school buses.

While AI’s rapid expansion has raised concerns about job automation, Smith argued that AI should be seen as a tool that enhances human capabilities rather than one that replaces jobs entirely. Addressing fears of job obsolescence, he described Microsoft’s view of AI systems as “co-pilots” that collaborate with individuals rather than acting independently. Using the example of turning a Word document into a PowerPoint presentation, he noted that human engagement and scrutiny remain vital for achieving optimal outcomes, even with AI assistance.

FAQ:

Q: What is generative AI?
A: Generative AI is a type of artificial intelligence technology capable of creating various forms of content, ranging from text to images and code.

Q: What are the concerns associated with AI?
A: Concerns include the potential weaponization of AI, existential risks comparable to those of nuclear warfare, and the displacement of jobs through automation.

Q: How does Microsoft view AI?
A: Microsoft sees AI as a tool that enhances human capabilities, acting as a “co-pilot” alongside individuals rather than replacing their roles entirely.

Sources:
– CNBC: https://www.cnbc.com/2021/07/28/microsoft-brad-smith-ai-needs-human-control.html
– Goldman Sachs report: (URL of domain only)
