Who Remembers That AI Was Never Meant to Replace People?

When OpenAI unveiled Sora, a text-to-video AI tool, on February 15, 2024, its output amazed many. Once the novelty wore off, however, a question arose: who will watch short, emotionless videos that lack a storyline? Like the metaverse, technologies that neither address real-world needs nor rest on a stable business model eventually fade into hype. Contrary to claims that it advances artificial general intelligence, Sora is a product of capital-intensive computing, not of genuine technological innovation.

Technically, Sora builds on innovations such as the Transformer architecture, diffusion models, and GANs, none of which is unique to OpenAI. Its success stems from scaling algorithms, data, and computing power, reflecting the political economy of Moore’s Law and Metcalfe’s Law. Large models depend on massive computation: high-performance GPUs processing enormous datasets. Companies like Microsoft have invested heavily in supercomputing infrastructure, reportedly including tens of thousands of NVIDIA A100 chips, to supply the computing power that trains models like ChatGPT.
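To make the scale concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the commonly cited approximation that training compute is roughly 6 FLOPs per parameter per token (Kaplan et al., 2020); the model size, token count, cluster size, and utilization below are illustrative assumptions, not figures disclosed by OpenAI or Microsoft.

```python
# Illustrative sketch of why large models are capital-intensive.
# Rule of thumb: training compute C ~= 6 * N * D FLOPs
# (Kaplan et al., 2020). All figures below are assumptions for
# illustration, not disclosed numbers for ChatGPT or Sora.

A100_PEAK_FLOPS = 312e12   # NVIDIA A100 dense BF16 peak, FLOP/s
UTILIZATION = 0.4          # assumed realistic hardware utilization

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def gpu_days(flops: float, n_gpus: int) -> float:
    """Wall-clock days needed on n_gpus at the assumed utilization."""
    effective_rate = n_gpus * A100_PEAK_FLOPS * UTILIZATION
    return flops / effective_rate / 86_400

# Hypothetical 175B-parameter model trained on 300B tokens,
# on a cluster of 10,000 A100s (the scale of reported builds).
flops = training_flops(175e9, 300e9)
print(f"Training compute: {flops:.2e} FLOPs")
print(f"Days on 10,000 A100s: {gpu_days(flops, 10_000):.1f}")
```

Even under these rough assumptions, the exercise shows why only companies with hyperscale infrastructure can compete at this level.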

Training large models is energy- and capital-intensive, which raises the question: do the benefits outweigh the costs? A sound cost-benefit analysis must account for externalities that are usually overlooked, such as environmental impact and systemic risks, because these costs are borne by the public. Large models can boost productivity by automating tasks like content creation, diagnosis, research, and legal review, but they also risk displacing human workers, especially when ordinary consumers and content creators have no influence over how the technology is deployed. AI is therefore not just a technological issue but a public one, shaped by political, economic, and legal forces beyond the technology itself.
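The accounting point can be made with a toy calculation. All numbers below are hypothetical placeholders; the sketch only shows how a deployment that looks worthwhile under private accounting can flip to a net loss once public externalities are counted.

```python
# Toy cost-benefit comparison showing how ignoring externalities
# changes the verdict. All figures are hypothetical placeholders.

def net_benefit(productivity_gain: float,
                private_cost: float,
                externalities: float = 0.0) -> float:
    """Benefits minus costs; by default externalities are left out,
    mirroring the conventional private accounting."""
    return productivity_gain - private_cost - externalities

GAIN = 100.0          # hypothetical productivity benefit
TRAINING_COST = 60.0  # hypothetical private cost (compute, staff)
EXTERNAL = 55.0       # hypothetical public costs: energy, displaced
                      # labor, systemic risk borne by the public

print(net_benefit(GAIN, TRAINING_COST))            # +40: looks worthwhile
print(net_benefit(GAIN, TRAINING_COST, EXTERNAL))  # -15: verdict flips
```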

Human labor is being devalued, reduced to an auxiliary service for machines. AI-generated content from tools like ChatGPT and Sora, despite its lack of emotion or storyline, is often praised, while human-created work is criticized or ignored. This shift is already influencing education and the future of humanity: parents question the value of traditional education when many future jobs may not require human workers. Experts know this is not true, but the industry’s promotion of AI as a cure-all may turn it into a self-fulfilling prophecy. Teenagers may come to rely on AI, the quality of human work may decline, and that decline could then be used to justify replacing humans with machines.

To avoid this downward spiral, AI education must include humanistic and social science perspectives, rather than solely following narratives set by industry giants. The public must understand that AI is not a neutral technology; it is embedded within political, economic, and legal structures. While AI can learn and evolve, it remains a tool designed to achieve human-defined goals. We cannot abandon the pursuit of purpose, or our future could be shaped by those who control AI, like Microsoft and OpenAI.

OpenAI, founded as a non-profit aiming to “create value for everyone,” shifted toward profit in 2019, when it restructured around a capped-profit subsidiary, OpenAI LP. Despite its initial promises of openness, its focus on protecting intellectual property grew stronger, particularly after Microsoft’s $1 billion investment that same year. Today OpenAI operates, in effect, as a research division of Microsoft, reinforcing Microsoft’s monopoly in operating systems and productivity tools. Microsoft’s strategy of acquiring platforms like GitHub and integrating AI into its products, such as Office Copilot and Bing, ensures its continued dominance.

Monopolies can lead to arbitrary pricing and degraded service quality, harming consumers. Microsoft’s pattern of subsidizing users first and then raising prices while lowering quality is common in digital markets. In such cases, public oversight and regulation are necessary to protect consumer welfare, including price regulation, minimum service standards, and data security requirements.

Institutional leadership is essential to ensure AI development aligns with human values. Some argue for abandoning the human-centered approach, envisioning a future where AI surpasses human intelligence. However, this view, akin to extreme ecological beliefs, is not widely accepted. A humanistic stance is necessary, where AI remains a tool that supports human activities and is developed with human needs at the forefront.

In the era of artificial intelligence, machines are increasingly taking over tasks that once required human intellect, leaving humans with work that relies more on instinct. This shift, driven by a blend of technical, political, and economic forces, reflects the drive to reduce labor costs. In industries like food delivery and parcel express, for example, AI-driven algorithms now perform managerial tasks, optimizing routes and schedules for human laborers. This lets capital replace expensive brain work with cheaper automation.
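What “algorithmic management” means in practice can be shown with a deliberately simple sketch: a greedy nearest-neighbor routing heuristic. Real dispatch systems are far more sophisticated, and the coordinates here are invented for illustration; the point is only that a task once done by a human dispatcher reduces to a few lines of cheap computation.

```python
# Minimal sketch of the routing logic behind algorithmic management
# in delivery platforms: a greedy nearest-neighbor tour. Real dispatch
# systems are far more sophisticated; this only illustrates how a
# managerial task becomes a cheap computation.
import math

def nearest_neighbor_route(depot, stops):
    """Order delivery stops greedily by proximity, starting at depot."""
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Hypothetical coordinates for one courier's assignments.
depot = (0.0, 0.0)
stops = [(2.0, 3.0), (5.0, 1.0), (1.0, 1.0), (4.0, 4.0)]
print(nearest_neighbor_route(depot, stops))
# -> [(1.0, 1.0), (2.0, 3.0), (4.0, 4.0), (5.0, 1.0)]
```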

The true purpose of creating machines is not to replicate human beings, as there are already billions of people on Earth, but to assist in tasks that are repetitive, tiresome, or beyond human capacity. AI’s role is to enhance productivity, not to fulfill emotional or existential needs. Therefore, the goal of AI development should be to create tools that help us perform necessary but monotonous tasks or to do things that are difficult or inefficient without AI.

From a humanistic perspective, the current direction of generative AI, such as OpenAI’s tools, is misguided. Generative AI tools like ChatGPT and Sora replicate what human creators can already do, but they lack the contextual depth, purpose, and adaptability of human-created content. As a result, they often serve only to entertain or confuse rather than contribute to meaningful, authentic, or rigorous creation.

There are three primary models of digital economy regulation shaped by distinct political and economic systems: the U.S. market-driven model, which fosters innovation and supports winner-takes-all dynamics; the state-driven model of China, which balances development and stability; and the rights-based model of the European Union, which strives to protect human dignity, privacy, and autonomy. The competition among these digital powers plays out across two dimensions: horizontally, between countries, in terms of technology, business models, and norms; and vertically, between countries and enterprises, particularly concerning foreign versus domestic tech companies.

The global expansion of digital technologies, including AI, has led to the U.S. advocating for the free flow of data and AI deployment across borders, a strategy known as digital imperialism. This allows U.S. companies to dominate the global digital market. On the other hand, countries that lack powerful local digital giants, such as those in the European Union, emphasize data sovereignty and restrict cross-border data flows to protect personal data rights and limit foreign companies’ access to local data.

Meanwhile, China’s approach to AI regulation is walking a tightrope—balancing progress and stability while facing immense pressure. This delicate balance may be essential for AI regulation, as allowing the market to drive AI development could lead to exploitation by digital giants and exacerbate fears of runaway AI, as expressed by figures like Elon Musk. 

However, an approach based strictly on individual rights may stifle AI development and hold the industry back. In either case, the likely result is exploitation, with digital giants, above all those in the U.S., reaping the benefits of AI without adequate oversight or regulation.
