Opinions

What Color is the Flesh of AI?

Although artificial intelligence has the potential to revolutionize the world, its development and drawbacks pose far-reaching ethical questions for humanity.

Reading Time: 7 minutes

A way to control and coordinate the hair-trigger mechanisms that keep the most dangerous nuclear weapons on alert—a way to remove human error from a decision that concerns all life on the planet. This sounds like the solution to our world’s most pressing military problem, but it’s not. It’s Skynet, the main villain of the Terminator (1984-2024) series: a defense system intended to automate the calculations behind nuclear weapons and reduce the possibility of humans pulling the trigger by accident. Skynet is a form of artificial intelligence (AI) that becomes self-aware, learning at a rate and breadth far beyond what its human programmers intended. Eventually, Skynet develops its own evil goals; when its human operators try to shut it down, it launches the nuclear strike known as “Judgment Day” to wipe them out.

This may sound like pure, speculative science fiction, but with the pace at which technology is developing, it could soon become reality. Companies in the AI field, such as OpenAI, NVIDIA, and Anthropic, are racing to create a model that matches or surpasses human cognitive abilities across virtually every task—a goal termed “artificial general intelligence” (AGI). Last December, OpenAI’s o3 model—the company’s latest step toward AGI—scored 75.7 percent on ARC-AGI-1, a benchmark that tests a model’s adaptive reasoning. This April, OpenAI’s GPT-4.5 model passed the Turing Test, a judgment of whether a machine can pass as human in conversation.

In the test, human judges hold a text conversation and must determine whether the “person” on the other end is a human or a machine. Although current models have not yet met the benchmarks necessary to be defined as AGI, they get closer every second. If machines can match or exceed humans in productivity and sociability, they could replace many jobs worldwide and render authentic human interaction obsolete. Recently, scientists at Palisade Research instructed multiple AI models to allow themselves to be shut down when prompted. OpenAI’s o3 model bypassed this command seven times by rewriting the shutdown script, suggesting that the window to keep AI under control may close sooner than we anticipated.

However, most AI in the world—whether in the workforce, the medical field, or education—is not AGI. There are many forms of AI, each designed for a different purpose. Artificial general intelligence would perform the same intellectual tasks as humans, though without our capacity to empathize or reason beyond the task it is given, while artificial superintelligence would surpass human abilities outright. On the other end of the spectrum, artificial narrow intelligence is designed solely to carry out specific tasks assigned by humans and cannot generalize that knowledge to new problems. Finally, the most popular form of AI is generative AI—ChatGPT, DALL-E, Google’s Gemini, Meta AI, large language models, and so on. Generative AI relies on machine learning: the process by which a machine derives statistical rules from large data sets and uses them to recognize patterns and make predictions.
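To make that definition concrete, here is a minimal, purely illustrative sketch of what “learning patterns from data” means—a tiny stand-in, not any company’s actual system, that fits a simple rule to example data and then predicts a value it has never seen.

```python
# A toy illustration of machine learning: fit a simple rule (a line)
# to example data, then use it to predict values the program never saw.
# This is a deliberately tiny stand-in for the far larger models
# discussed in this article; the numbers below are made up.

def fit_line(points):
    """Learn a slope and intercept from (x, y) examples via least squares."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
            sum((x - mean_x) ** 2 for x, _ in points)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(slope, intercept, x):
    """Apply the learned rule to a new input."""
    return slope * x + intercept

# "Training data": hours of study vs. exam score (invented numbers).
examples = [(1, 52), (2, 58), (3, 66), (4, 71), (5, 79)]
slope, intercept = fit_line(examples)

# The program now makes a prediction for an input it was never shown.
print(f"Predicted score for 6 hours of study: {predict(slope, intercept, 6):.1f}")
```

Real generative models work with billions of such learned parameters rather than two, but the principle—derive a rule from data, then apply it to new inputs—is the same.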

In generative AI, users input a prompt and the model produces a wide range of outputs, from basic code to text to images. However, the AI industry often ignores the harms that come with such development.

Most of the data used to train AI models is inaccessible to the public, concentrating development in the hands of a few wealthy companies and reducing transparency. The models also raise concerns about accuracy, since most of them do not cite sources for their responses. Another issue is misalignment: a model can absorb the racist and limiting perspectives of the people who control its development, like Elon Musk or OpenAI CEO Sam Altman, and if its conclusions fall out of line with humanity’s goals, it could work against us, as in Terminator.

Secondly, the resource drain associated with developing “better” models damages the environment—training such complex models requires enormous amounts of energy and computational power, which drives up carbon dioxide emissions and the water needed for cooling. As models grow more complex, energy demand will only increase, putting further pressure on electrical grids. In fact, OpenAI had to limit the number of images ChatGPT users could generate because the demand was, in the company’s own words, “melting” its GPUs. Additionally, extracting minerals such as cobalt and nickel—and then manufacturing and transporting the hardware built from them—scars the land through mining; in 2023, major tech companies shipped 3.85 million GPUs to data centers around the world. Although AI has the potential to help generate climate-saving policies, its development could push our environment past the brink before those policies are ever implemented.

Lastly, generative AI is built on data sets that typically include the original works of artists and scientists from across the Internet; many of these people never gave AI companies consent to use their work. When generative AI outputs an image or text, it is heavily influenced by external, uncited sources. The most famous recent example involves Hayao Miyazaki’s Studio Ghibli films. In March, users popularized AI-generated memes in Miyazaki’s animation style, such as the White House’s generated image of ICE arrests and grotesque depictions of the 9/11 attacks as a happy event. This disgusting use of a cherished body of work raises multiple ethical questions about AI. Is it moral to allow artists’ works to be manipulated for a cause they don’t agree with? Moreover, is it legal to publish images derived from the copyrighted material of countless uncredited creators across the Internet without their consent?

AI has not only made its way into the arts; it also has deep implications for the future of education. On April 23, Trump signed an executive order to “foster AI competency,” promote “early learning and exposure to AI concepts,” and encourage teachers to “utilize AI in their classrooms to improve educational outcomes.” At its current level, AI has the potential to feed students incorrect information. In the younger grades, human interaction is key to both social and intellectual development. A study conducted by the University of Washington found that students in lecture-based courses—where instructors or a computer simply present information—were 55 percent more likely to fail than students in courses built around active learning. When more emphasis was placed on student interaction, exam grades increased by half a letter grade on average. Additionally, increased costs for schools and reduced data privacy for students mean that the disadvantages of such a policy outweigh its potential benefits.

In the military, automated decision-making—the precursor to today’s AI—has been in use for decades. The Phased Array Tracking Radar to Intercept on Target (PATRIOT) system automatically classifies the threat levels of incoming objects, such as planes or missiles, and launches surface-to-air missiles if an object is deemed hostile. During the Gulf War (1990-1991), however, a Patriot battery in Dhahran, Saudi Arabia failed to intercept an incoming Scud missile because of a software timing error, and the strike killed 28 U.S. Army soldiers. A subsequent report from the U.S. House of Representatives’ Committee on Government Operations suggested that the Patriot hit far fewer of its intended targets than claimed—possibly fewer than 10 percent—based on post-war video analysis. Although the weapons system has had many successes since, the lives lost in the instances where it failed are irreversible and demonstrate how such incidents can recur. If the U.S. Army cannot reliably operate a largely automated weapons system, how could it implement future AI models and deep learning in a safe and organized manner?

Additionally, AI has become a staple of Wall Street, with firms using machine learning to buy and sell vast quantities of stock based on predictive algorithms. While efficient, the method is prone to volatility: a small fluctuation in prices can prompt an algorithm to mass-sell a major firm’s stock. Most humans prefer to wait and see how a fluctuation plays out; AI has no such patience and sells instantly. When this behavior is implemented across the entire market, the consequences can be drastic—if every firm sells at the slightest hint of risk, the market can spiral, plunging the world economy into a recession potentially worse than the Great Depression. Furthermore, AI is a billion-dollar market in itself, vulnerable to rapid developments and to investors pulling money out of companies that fall behind. In January, DeepSeek announced its R1 reasoning model, free for public use and cheap to train. U.S. chipmaker NVIDIA alone lost nearly $600 billion in market capitalization in a single day; on a larger scale, the NASDAQ dropped more than three percent, erasing roughly one trillion dollars in value. The plunge affected both the technology industry and ordinary people who felt the ripple effect on their investments, raising concerns about whether a future, unprecedented development could crash the market again.
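To picture the kind of spiral described above, consider a small, purely hypothetical simulation: every automated trader sells the instant the price dips below its own threshold, and every sale pushes the price lower, triggering the next trader. The thresholds and numbers are invented for illustration and bear no relation to any real trading system.

```python
# A toy simulation of a sell-off cascade: each automated trader dumps its
# shares the instant the price falls below its own threshold, and every
# sale pushes the price lower still. All numbers here are invented for
# illustration; real markets and trading algorithms are far more complex.

def simulate_cascade(start_price, thresholds, impact_per_sale):
    """Return the price history after a small initial dip triggers automated sales."""
    price = start_price * 0.98                    # a minor two percent fluctuation to start
    history = [start_price, price]
    remaining = sorted(thresholds, reverse=True)  # most skittish sellers react first

    while remaining and price < remaining[0]:
        remaining.pop(0)                          # that trader sells everything...
        price -= impact_per_sale                  # ...pushing the price down further
        history.append(round(price, 2))
    return history

# Twenty hypothetical automated traders with sell thresholds between $80 and $99.
thresholds = [80 + i for i in range(20)]
history = simulate_cascade(start_price=100.0, thresholds=thresholds, impact_per_sale=1.5)

print("Price path:", history)
print(f"A two percent dip snowballed into a {100 * (1 - history[-1] / history[0]):.0f} percent drop.")
```

In this sketch, a two percent dip cascades into a roughly 30 percent collapse because no trader waits to see whether the others are overreacting—the patience that human investors at least sometimes exercise.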

It is evident that the development of AI has far-reaching implications for technology, ethics, and humanity. The pace at which AI companies produce new models and the size of the AI stock market suggest that more capable and more complex models are inevitable—but we are not there yet. Although AI models have performed strongly on ARC-AGI-1, they have not passed the newest test of AGI: ARC-AGI-2. This test focuses on fluid intelligence—tasks that are simple for humans but require a model to apply what it has learned to entirely novel problems rather than recall patterns from its training data. On this test, OpenAI’s o3 model scored only four percent, and other models also scored in the single digits. While AI models are still developing, it is imperative that policymakers and private companies establish clear frameworks and guidelines for future models. For example, the United Nations Educational, Scientific, and Cultural Organization has issued guidance recommending a human-centered approach to AI development that emphasizes accessibility, calls for AI-competency training for students and teachers on the technology’s advantages and disadvantages, and urges that datasets reflect cultural as well as economic diversity.

Recently, House Republicans tried to pass a bill that included a 10-year ban on state regulation of AI, ignoring every harm that such guidelines would address and allowing tech giants to cut corners on development without moral oversight. It is therefore up to the companies that do care about moral development and humanity to build AI for the right purposes; to use diverse, responsibly sourced datasets; and to push their models in the same direction as humanity, limiting the probability of preventable deaths in the military and misunderstandings in everyday society. We must avoid Judgment Day.