Should we risk loss of control of our civilization?” What Is the Real “Weight” of This Letter? At first, it’s easy to sympathize with the cause, but let’s reflect on the broader context. Despite being endorsed by leading technology figures, including Google and Meta engineers, the letter has generated controversy because some signatories, Elon Musk among them, have been inconsistent in their own practices regarding safety limits on their technologies. Musk himself fired his “Ethical AI” team last year, as reported by Wired, Futurism, and many other news outlets at the time.
It’s worth mentioning that Musk, who co-founded OpenAI and left the company in 2018, has repeatedly attacked it on Twitter with scathing criticism of ChatGPT’s advances. Sam Altman, co-founder of OpenAI, in a conversation with podcaster Lex Fridman, asserted that concerns around AGI experiments are legitimate and acknowledged that risks such as misinformation are real. Also, in an interview with the WSJ, Altman said the company has long been concerned about the safety of its technologies and spent more than six months testing the tool before its release.
What Are Its Practical Effects? Andrew Ng, founder and CEO of Landing AI, founder of DeepLearning.AI, and managing general partner of AI Fund, argues on LinkedIn: “The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help. Let’s balance the huge value AI is creating vs.