Artificial general bullshit

I recently attended a conference on AI, where experts and representatives from various sectors discussed the technology’s risks, benefits, and governance. Before getting to the serious topics, though, I need to address the current news surrounding OpenAI, a leading generative-AI company. In my opinion, the company’s claims of building “artificial general intelligence” or “artificial superintelligence” are pure nonsense. The board members’ obsession with effective altruism and AI doomerism is equally baseless, and even the output of ChatGPT is little more than a figment of its makers’ imagination. I fear that even discussing AI safety in relation to OpenAI can be misleading.

This is not to say that AI and its capabilities, as practiced by OpenAI and other companies, should not be taken seriously. We must also consider the wonder and potential of the technology, as well as its impact, the speed of its development, and how it should be governed. These were the main topics at the conference I attended, hosted by the World Economic Forum in San Francisco. Whatever criticisms one may level at the Forum, one thing it consistently does well is convene multistakeholder conversations on important topics, because people are eager to accept its invitations. The conference drew representatives from technology companies, governments, and academia. I was pleased to sit next to a philosopher who leads a program on ethical AI. Finally, someone taking the issue seriously.

I knew I was in the right place when the topic of AGI was quickly dismissed. AGI, or artificial general intelligence, is the supposed goal of OpenAI and other AI companies: a machine smarter than all of us, its creators included, and one said to have the potential to destroy humankind unless we listen to those same creators. I find this claim to be utter nonsense. During the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I couldn’t agree more. Andrew Ng, co-founder of Google Brain and Coursera, also expressed skepticism toward claims of imminent AGI doom, calling them “vague and fluffy” and comparing them to trying to prove that radio waves won’t attract aliens that will wipe us out. Gary Marcus, a voice of reason in the AI discourse, mentioned his attempts to get Elon Musk to back his prediction that AGI will arrive by 2029 with a $100,000 bet. It is unclear what Musk means by that prediction, and he has made many others that have not come to fruition. One participant even suggested that large language models like ChatGPT may be little more than a parlor trick.

With the BS out of the way, the conference turned to practical discussions about better understanding and governing AI.
