Big Tech Companies Face Pressure to Release AI Technologies


Big Tech companies are racing to market with their cutting-edge AI technologies, but some AI ethicists warn that the rush could expose billions of people to potential harms before trust and safety experts have had a chance to study the risks. Facebook’s parent company Meta, Google, and Microsoft in particular face pressure to weigh the ethical implications of these technologies even as they hurry to release them.

ChatGPT and BlenderBot

Three months before ChatGPT’s debut, Facebook’s parent company Meta released a similar chatbot called BlenderBot. But Meta’s chief artificial intelligence scientist, Yann LeCun, called BlenderBot boring, blaming the result on the bot being “overly careful about content moderation.” In contrast, ChatGPT converses about controversial subjects and is quickly going mainstream, with Microsoft investing in the company behind it and building the technology into its office software.

As the more versatile ChatGPT gains attention, tech giants like Meta and Google are under pressure to move faster, potentially pushing safety concerns aside.

Meta employees have recently shared internal memos urging the company to speed up its AI approval process to take advantage of the latest technology. Google has issued a “code red” around launching AI products and proposed a “green lane” to shorten the process of assessing and mitigating potential harms.

Generative AI

ChatGPT is part of a new wave of software called generative AI, which creates works of its own by drawing on patterns identified in vast troves of existing, human-created content. The technology was pioneered at big tech companies like Google, which have grown more secretive in recent years, while research labs like OpenAI have raced to launch their latest versions.

Tech giants have been cautious ever since public debacles like Microsoft’s Tay, a chatbot the company took down in less than a day in 2016 after trolls prompted it to call for a race war.

“People feel like OpenAI is newer, fresher, more exciting and has fewer sins to pay for than these incumbent companies, and they can get away with this for now,” said a Google employee who works in AI, referring to the public’s willingness to accept ChatGPT with less scrutiny.

Big Tech and Ethics

Some AI ethicists are concerned that Big Tech’s rush to market could expose people to potential harms, such as sharing inaccurate information, generating fake photos, or giving students tools to cheat on schoolwork.

Joelle Pineau, managing director of Fundamental AI Research at Meta, said: “The pace of progress in AI is incredibly fast, and we are always keeping an eye on making sure we have efficient review processes, but the priority is to make the right decisions, and release AI models and products that best serve our community.”

Under Pressure

Despite the pressure to move faster, it is important that tech giants like Google, Meta, and Microsoft take the steps needed to study and mitigate the safety and ethical risks of their cutting-edge AI technologies, from inaccurate information and fake photos to new tools for cheating on school tests, before releasing them to the public.

OpenAI, the company behind ChatGPT, has gained a competitive advantage over its rivals by releasing its language models for public use, enabling “reinforcement learning from human feedback.” This approach lets the models be fine-tuned and improved through real-world interactions, making them more versatile and better able to converse about controversial subjects.
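To make that feedback loop concrete, here is a minimal, self-contained sketch of learning from human preferences. It is an illustration only: the token-weight reward model, the update rule, and the simulated comparisons below are hypothetical stand-ins, while production RLHF systems train a neural reward model on human preference rankings and then optimize the chatbot against it with reinforcement learning algorithms such as PPO.

```python
import math
import random

# Toy "reward model": one weight per token, learned from pairwise human
# preferences with a Bradley-Terry-style logistic update.
weights = {}

def score(response):
    """Reward-model score for a candidate response."""
    return sum(weights.get(tok, 0.0) for tok in response.split())

def update_reward_model(preferred, rejected, lr=0.1):
    """One gradient step on a single human comparison."""
    # Probability the current model already agrees with the rater.
    p = 1.0 / (1.0 + math.exp(score(rejected) - score(preferred)))
    # Push tokens of the preferred response up, the rejected one down.
    for tok in preferred.split():
        weights[tok] = weights.get(tok, 0.0) + lr * (1.0 - p)
    for tok in rejected.split():
        weights[tok] = weights.get(tok, 0.0) - lr * (1.0 - p)

def respond(candidates):
    """Stand-in for the chatbot: rerank sampled candidates by reward."""
    return max(candidates, key=score)

# Simulated feedback: raters consistently prefer the helpful phrasing.
comparisons = [
    ("happy to help with that", "figure it out yourself"),
    ("sure here is one way to do it", "that is not worth answering"),
]
for _ in range(20):
    for preferred, rejected in random.sample(comparisons, len(comparisons)):
        update_reward_model(preferred, rejected)

print(respond(["figure it out yourself", "happy to help with that"]))
```

Even this toy version shows why public release matters to OpenAI: every user interaction is a potential comparison that sharpens the reward signal guiding the model.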

However, some AI ethicists believe that this approach comes with its own risks: inside big tech companies, the system of checks and balances for vetting the ethical implications of AI is not as established as it is for privacy or data security.

The race to market with AI technologies is a complex issue that requires balancing innovation and safety. Tech giants must make sure their cutting-edge AI technologies are safe and ethical before releasing them to the public. OpenAI’s approach of releasing its language models for public use enables real-world feedback and improvement, but the risks that come with it deserve equal weight. Striking that balance will require collaboration among AI experts, ethicists, and governments to ensure these technologies are used for the betterment of society.