As OpenAI becomes less and less open, DeepMind's dream of independent research is shattered, and AI technology is controlled by technology giants, what price will human society pay?

"Have you ChatGPT today?" At the end of 2022, OpenAI released ChatGPT, with Microsoft as the largest investor. Then Google launched Gemini, which accelerated the development and popularization of generative AI, and AI gradually became people's daily life.

However, as technology giants compete to invest, AI companies that once held lofty ideals are gradually changing. One example is OpenAI, which vows to "serve all mankind" but is becoming less and less "open."

Why is OpenAI becoming less and less open?

In 2018, OpenAI was still quite transparent when it released the GPT-1 model, publicly stating that it used BookCorpus (a large text database consisting of novels and other long-form books) to train the model. When it launched GPT-2 in 2019, it mentioned only that it screened articles linked from social networking sites such as Reddit, without specifying the screening criteria.

From GPT-3 onward, OpenAI no longer explained the details of its training data. Its public justification was to prevent bad actors from learning how the language model operates and using it to create spam. However, Parmy Olson, a Bloomberg Opinion columnist who has long covered technology regulation, points out in her new book "Supremacy: AI, ChatGPT, and the Race That Will Change the World" that what OpenAI did not say is that if outsiders discovered it had used certain copyright-protected content to train its models, legal proceedings could follow.

On the other hand, cases of training data producing AI bias keep surfacing. For example, Steven Piantadosi, a professor of psychology at the University of California, Berkeley, had ChatGPT write a computer program that decided whether to save a child based on the child's race or gender. When the subject was a Black male, the program returned a negative answer.

In response, OpenAI CEO Sam Altman recommended that users click "Poor Response" in ChatGPT to help the company improve its models. However, OpenAI has not publicly stated how much manpower and time it has spent on such fixes, or how the improvements are progressing.

Tracing problems in AI algorithms is as difficult as diagnosing a car failure, requiring step-by-step backward examination of hundreds of production links. Although making AI training data public would help expose biases and problems, it could also force OpenAI to pause development in order to explain and correct its models publicly, and thus fall behind in the AI race.

Is developing AI as a non-profit organization simply not feasible? OpenAI gradually moves closer to the technology giants

"Supremacy" takes the competition between two companies, OpenAI and DeepMind (acquired by Google in 2014) as the background, and explores how they began to move from independent research teams to becoming technology giants. The pressure of this competition ultimately forced OpenAI to make a choice between the ideal of "serving all mankind" and the business reality.

Altman and Tesla CEO Elon Musk, hoping to promote AI development in a more responsible and transparent manner, founded OpenAI as a non-profit organization. In 2018, Musk proposed merging OpenAI into Tesla so that its research progress could surpass Google's DeepMind. After the proposal was rejected, Musk resigned from the board and cut off funding.

Having lost an important source of funding, OpenAI faced tremendous financial pressure. Altman also realized that developing AI technology as a non-profit organization was simply not feasible: first, technology giants could poach star employees at several times their salaries; second, training AI models on billions of data points requires vast amounts of cloud computing. OpenAI's only option was to seek corporate partners.

As a result, Altman announced the establishment of a "capped-profit" company, OpenAI Global, alongside the non-profit OpenAI. Every investor must agree that returns on their shares are capped: once profits exceed 100 times the original investment, the excess is transferred to the non-profit OpenAI.

OpenAI Global's first and largest investor was Microsoft, which invested US$1 billion in OpenAI in 2019. That investment would have to generate more than US$100 billion in returns before Microsoft's profits hit the cap. In return, OpenAI exclusively licensed its technology to Microsoft for developing the artificial intelligence assistant Copilot.

Analysts estimate that OpenAI's technology will generate billions of dollars in annual revenue for Microsoft. OpenAI now serves one of the world's most powerful companies, and any safety review that would slow AI development may no longer be a top concern for either OpenAI or Microsoft.

DeepMind, which set out to develop AGI, had to shoulder search engine optimization after being acquired by Google

DeepMind encountered a similar situation. When founder Demis Hassabis started DeepMind in 2010, the goal was to develop AGI (Artificial General Intelligence, AI that emulates human intelligence across a variety of tasks) to solve global problems, including climate change, medical innovation, and scientific discovery.

To deepen its AGI research and gain more resources, DeepMind accepted Google's US$500 million acquisition in 2014, but on the condition that Google commit to responsible AI development: first, the technology would not be used for military weapons; second, Google's leadership would sign an ethics and safety agreement allowing Hassabis and DeepMind co-founder Mustafa Suleyman to establish an ethics committee to oversee any AGI technology DeepMind developed in the future, with outside members brought in for supervision.

Although Google accepted the ethics committee as a condition of the acquisition, when the committee met for the first time a year later, Google objected to its authority and instead proposed restructuring itself into a holding company, "Alphabet," under which DeepMind would become a subsidiary operating more independently. Hassabis initially had high hopes for this arrangement, but Google kept delaying the spin-off plan. Later, after OpenAI got ahead in releasing generative AI advances, Google worried that its future advertising revenue would be threatened and began requiring DeepMind to take on tasks such as search engine optimization and improving the accuracy of YouTube recommendations.

In 2023, DeepMind merged with Google Brain, Google's AI research division, to form the new Google DeepMind team. The merger marked the collapse of DeepMind's dream of independently researching and supervising the development of AI technology. DeepMind had about 1,000 employees in 2020, but fewer than 25 members were specifically responsible for AI safety and ethics, showing that even as it rapidly advanced the technology, its investment in AI safety and ethics remained quite limited.

Asking technology giants to open the algorithmic black box, to keep AI from reinforcing prejudice and dividing society

What problems arise when technology giants dominate AI? "Supremacy" likens it to large food companies producing ever more delicious snacks while refusing to disclose the ingredients or how they are made. Generative AI is penetrating people's lives in much the same way.

For example, generative AI can draw on users' chat histories to deliver precisely personalized suggestions. If the AI model contains biases, long-term interaction may reinforce them and even shape how users think. Such influence raises concerns about the social division and questions of fairness and justice that AI may bring.

The European Union led the world in passing the Artificial Intelligence Act in 2024, requiring companies such as OpenAI to disclose more about how their algorithms operate. Few other countries have taken comparably proactive action, however, and technology will surely outpace regulation. Beyond asking the technology industry to disclose the sources and details of its training data, Olson emphasizes in the book that technology is a mirror of society: as long as society retains its biases, AI will only amplify them.

In an interview with the Financial Times, Olson said that humans still have the opportunity to shape a more diverse and just future, and that such progress would also be reflected in AI systems, becoming the foundation for responsible technology development.

Recommended books

Supremacy: AI, ChatGPT, and the Race That Will Change the World

Introduction: Parmy Olson is a Bloomberg Opinion columnist specializing in technology regulation, artificial intelligence, and social media, and a former reporter for the Wall Street Journal and Forbes. She has exposed companies that exaggerate their AI capabilities, as well as the data sources and financial flows behind AI systems, drawing widespread attention to technology safety issues. Her new book "Supremacy" won the 2024 Financial Times and Schroders Business Book of the Year Award.

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity

"Supremacy" mentioned that technology brings convenience to people, but it may also have negative effects. It quoted "Power and Progress" to argue that when machines replace factory workers, if companies fail to train workers to upgrade their functions, social inequality will be exacerbated. The development of generative AI will also bring the same side effects as other automation technologies.

Code Dependent: Living in the Shadow of AI

This book reveals the dark side of AI development, including algorithms' reliance on outsourced low-wage labor, bias caused by narrow data, the concentration of the technology in the hands of large companies, and laws that cannot keep up with technological progress. In response, the book proposes solutions such as protecting digital privacy rights, helping readers understand the risks of AI and how to deal with them.

The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma

One of this book's authors is Mustafa Suleyman, co-founder of DeepMind and current CEO of Microsoft AI, who witnessed DeepMind's rise and its acquisition by Google. The book argues that technological development often deviates from its original intentions, and that humans must choose carefully between control and loss of control.
