Artificial Intelligence: General (mis)Information

Applications like ChatGPT and Copilot—whether revered or despised—have dominated artificial intelligence (AI) discussions. People “ask Chat” or “AI” something in the same way that they would “Google” it. Despite the term AI encompassing well-known tech (e.g. the Netflix recommendation system, facial recognition software, Siri and Google Assistant, etc.), Generative AI (GenAI) has become the polarizing “face” of Artificial Intelligence.  

Regardless of the feelings it evokes, it’s essential to have a basic understanding of Artificial Intelligence and how it is going to affect human society.  

Intro to Artificial Intelligence

Our technological goals have always centered on efficiency through automation. Where technology once streamlined simple, monotonous tasks, AI is complex enough to process information and provide feedback under varying conditions.  

IBM defines automation as “the application of technology, programs, robotics[,] or processes to achieve outcomes with minimal human input.” In other words, artificial components complete a task automatically.  

Intelligence is often regarded as an ability to understand, reason, and learn. Artificial Intelligence therefore encompasses automation in the data-interpretation process that seeks to reflect human-like intelligence.  

In short, artificial intelligence uses technology to replicate intellectual capacity.  

Despite conflicting opinions on AI, debate typically narrows down to three main questions:   

  • Can intelligence be simulated?   
  • Is AI more efficient than humans?   
  • Does AI help or hurt humanity?  

With the situation becoming more complicated by the day, these questions surface more and more often in professional and educational settings as well as in daily life.

To have nuanced and productive discussions, we must solidify our understanding of the intellectual capabilities of technology, the purpose and limitations of AI, and whether it’s an aid or a hindrance.  

Intelligent Code or Coded Intelligence?

Intelligence, for humans, is a capacity for complexity: it describes the ability to perceive the world, draw conclusions, and respond meaningfully.  

Our brain works to transform data—raw, unorganized input—into applied knowledge. Capacity for intelligence, therefore, describes how well something is capable of exhibiting a process similar to that of the human brain.

Humans have a unique ability to do the following things instantaneously:  

  1. input visual, auditory, and sensory data
  2. send signals through neurons to different parts of the brain
  3. draw conclusions
  4. apply knowledge to output a reaction

In society, levels of intelligence are often determined by using our own abilities as a frame of reference. Intelligence, as a result, is based on something’s ability to turn data into information, information into knowledge, and knowledge into wisdom for future use.  

Algorithms—which use pre-programmed rules to sort and categorize data—automate the process of turning data into information for processing. Artificial Intelligence—working to contextualize data—automates the process of turning information into knowledge.  
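The rule-based sorting described above can be sketched in a few lines of Python; the thresholds and category names here are invented purely for illustration:

```python
# A minimal sketch of rule-based categorization: pre-programmed rules
# turn raw data (numbers) into information (labeled categories).
# The thresholds and labels are hypothetical examples.

def categorize(reading: float) -> str:
    """Apply fixed, programmer-defined rules to one data point."""
    if reading < 0:
        return "invalid"
    elif reading < 50:
        return "normal"
    elif reading < 80:
        return "elevated"
    return "critical"

raw_data = [12.0, 55.5, 91.3, -4.2]
information = [categorize(r) for r in raw_data]
print(information)  # ['normal', 'elevated', 'critical', 'invalid']
```

The machine isn’t “thinking” here; it is only executing the sorting criteria a programmer laid out in advance.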

Generative AI, the subject of most debates, works to create a unique and contextually appropriate output; people now question whether GenAI could automate the process of turning knowledge into wisdom.

Wisdom applies knowledge through reasoning and problem-solving; it’s an interpretation of patterns and information to create a meaningful inference.  

It was once thought that machines could never achieve the intelligence of a human being. Ever since IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, the line between what is and isn’t possible for machines has been blurred.  

Algorithms bridge the gap from raw data to contextualized information, and AI automates the transformation of that information into knowledge. Does Generative AI take technology from knowledge to wisdom?

Information, to Knowledge, to Wisdom

1. Pre-GenAI

With traditional AI, information becomes knowledge automatically through the programming’s performance of specific tasks: analyzing data, identifying patterns, and making predictions.  

Data would exist in a repository, such as a data center. Originally, this data lived in finite storage that was company-based and often on-site, meaning there was a smaller, physical collection of data for AI to sort through. This could be any kind of data, whether rows and columns of numbers, documents, or images, stored on a large computer.

The data would go through the analytics platform—oftentimes an algorithm or early form of AI. Data would be contextualized based on guidelines (sorting criteria) the programmer laid out for the machine to enact. Programmers would create categories that the information could be automatically sorted into.

Prolog (short for “programming in logic”) was among the first major developments that allowed a machine to run inferences and contextualize information based on programmed “rules.” It introduced the idea of chaining rules recursively, deriving new conclusions from existing facts, so that a system could do more with each piece of information it was given.   
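Prolog has its own syntax, but the rule-chaining idea can be sketched in Python; the facts and the “ancestor” rule below are hypothetical examples:

```python
# A toy, Prolog-flavored inference sketch (hypothetical facts and rules):
# rules derive new facts from known ones, applied repeatedly until
# nothing new can be inferred -- the recursive chaining described above.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def infer_ancestors(facts):
    """Derive ("ancestor", x, y) facts until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        # Rule 1: every parent is an ancestor.
        for rel, x, y in derived:
            if rel == "parent" and ("ancestor", x, y) not in derived:
                new.add(("ancestor", x, y))
        # Rule 2: an ancestor of an ancestor is also an ancestor.
        for r1, x, y in derived:
            for r2, y2, z in derived:
                if r1 == "ancestor" and r2 == "ancestor" and y == y2:
                    if ("ancestor", x, z) not in derived:
                        new.add(("ancestor", x, z))
        if new:
            derived |= new
            changed = True
    return derived

result = infer_ancestors(facts)
print(("ancestor", "alice", "carol") in result)  # True
```

Nothing in the program states that alice is carol’s ancestor; the machine infers it by applying the rules to what it already knows.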

The application layer is the model itself, which would carry out tasks like recognizing images, recommending products, or answering specific queries. Programmers could manually update the code based on new data, making the system smarter. This was an early form of Machine Learning: the programming would follow a set of pre-defined rules that allowed it to use new output to update the code that interprets input.

AI, as a result, learns from previous mistakes and successes. A model would make a prediction (accurate or inaccurate) and “learn” for next time; the more data it has, the more context it has, and the more consistently accurate its answers become.  

This feedback loop, the idea of learning and improvement, is what allowed AI to automate the process.
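That predict-compare-adjust cycle can be illustrated with a toy model; the data and learning rate below are invented for the sketch, but the feedback loop is the core idea:

```python
# A minimal sketch of the machine-learning feedback loop: the model makes
# a prediction, compares it with the true outcome, and adjusts itself.
# Toy example: learn that outputs are roughly 2x the inputs.

weight = 0.0
learning_rate = 0.1
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs map to 2*x

for _ in range(200):                          # repetition = more "experience"
    for x, target in training_data:
        prediction = weight * x               # make a prediction
        error = target - prediction           # compare with reality
        weight += learning_rate * error * x   # "learn" for next time

print(round(weight, 2))  # ~2.0 after enough feedback
```

Each pass shrinks the error a little; the model was never told the rule “multiply by two,” it converged on it from feedback alone.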

2. Post-GenAI

Generative AI takes the traditional AI method a step further by creating entirely new outputs based on patterns in its training data. GenAI is a form of AI that creates original content in response to a prompt or request.

Instead of merely recognizing patterns, Generative AI learns patterns and uses them to update its own behavior. Where there was once simply an input and an output, AI now learns and improves as output becomes new input.   

GenAI uses an entirely different architecture than traditional AI. This is primarily because the way we think about “data” has completely changed. Rather than there being a physical repository, a bulky data center limited to one company, data is now an accumulation of any information from across the internet.

Foundation models, often Large Language Models (LLMs), identify patterns derived from a massive amount of data—no longer confined to one set of “training data.” LLMs lack the nuance to understand specific situations on their own, so queries pass through a prompting and tuning layer—where the inputted search is used to narrow down the data being processed—to specify the use case.

They work by identifying patterns and relationships in huge amounts of data and then using that information to understand conversational requests or questions. LLMs use what they have learned to fulfill new requests. For example, if a student asked an AI application to write an essay about character development in a certain novel, the software would draw on everything its training covered about each term in the request; analyze patterns around character development, the novel, and the writing style of typical student essays; and format the result as an essay.
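A vastly simplified stand-in for this pattern-learning is a bigram model, which only counts which word tends to follow which. Real LLMs are enormously more sophisticated, but the idea of predicting likely continuations from training patterns is the same:

```python
# Toy "language model": count word -> next-word patterns in training text,
# then predict the most likely continuation. The training sentence is an
# invented example.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Count which word follows which (the learned "patterns").
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- seen twice, vs. 'mat'/'fish' once each
```

The model has no idea what a cat is; it only knows that, in its training data, “cat” followed “the” more often than anything else.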

As humans, we express ourselves in various ways, so the software must also pass through a layer of Natural Language Processing (NLP) to understand the nuance of a conversation. Each person expresses themselves and communicates differently based on who they are. Patterns of speech, like vocabulary, phrasing, and even tone, all vary from person to person, which makes it very difficult for a machine to both understand them and respond in kind.

As a result, the software sorts through a massive amount of data—not only on the topic itself, but on how each term exists within the language. The application layer (ChatGPT, Copilot, etc.) is the final step in the process: trained in human-like language and processing, it creates an output.
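A tiny sketch of why such a language-handling step matters: two differently phrased versions of the same question must be reduced to comparable tokens before any pattern-matching can happen. Real NLP pipelines are far more involved than this normalization example:

```python
# Minimal text normalization: lowercase, strip punctuation, tokenize.
# This is only the very first step of a real NLP pipeline.
import string

def normalize(text: str) -> list[str]:
    """Lowercase, strip punctuation, and split into tokens."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return cleaned.split()

a = normalize("What's the weather like today?")
b = normalize("whats the WEATHER like today")
print(a == b)  # True -- surface differences removed, same tokens remain
```

Once surface variation is stripped away, the machine can treat both phrasings as the same underlying request.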

This output isn’t always perfect. Much like a human analysis, the data could have been interpreted incorrectly or from a limited perspective. When there isn’t enough data to bridge a lapse in knowledge, AI assumes a connection and relies on that assumption throughout the process. The more information it has, the more reliable the output will be. Now existing as its own independent data, the output is then compared to new input and taken into account when processing later queries.

Essentially, the software is trying to mimic the way a human brain works; to be capable of “thinking” and “reacting.” Machine learning, as a result, created a feedback loop between the data, the analytics platform, and the application itself.

Chatbots and Agentic AI

AI chatbots use a combination of LLMs, machine learning, and several layers of AI to enact Natural Language Processing, understand queries, and replicate human conversation. They use information found on the internet to compose human-like responses. In doing so, chatbots are able to carry on a full conversation, replying with responses predicted from training data.

Where these applications were once only known for customer service and phone answering systems, Agentic AI—a combination of Generative AI and AI chatbots—has begun to dominate the AI discussion. ChatGPT, Copilot, Gemini, and other big names in the industry are examples of Agentic AI; “agents” that use several layers of AI to (1) understand a conversational query, (2) break it down into micro tasks to perform and analyze, and (3) generate new content in response to the prompt. These are, at the moment, freely accessible for anyone with any purpose at any time.

These “chatbots,” however, do more than just respond to a prompt. In addition to textual responses, they can create content: simulated images, videos, art, and music, all built from existing data. Combining unlimited prompted creativity—where all data can be used in any way—with a conversational, accessible chatbot, AI “agents” have brought ethics to the conversation’s forefront.

The ability to use an unrestrained generator with growing capability and accuracy to, for example, place a simulation of any person in any situation—replicating their voice, look, and mannerisms—has created a fear that GenAI could damage reputations. Deepfakes, in which AI is used to simulate a real, existing person under fake circumstances, are only a glimpse into the potential harm of generative AI.

Consent is at the core of most anti-AI sentiment. Content generation, whether text, art, or music, draws on all available data (including social media, online forums, published work, etc.), often without any form of credit. AI perceives everything as raw data: individual pixels, sounds, and letters that it can combine in different ways for different purposes. The ability to use judgment—to get consent to mimic ideas and generate an ethically produced output—rests first with the programmer and then with the user.

In trying to discover the potential of AI, humanity has let that pursuit overshadow its intended purpose: to aid humanity through technology. We can’t apply concrete, black-and-white ethics to a system that’s incapable of drawing its own conclusions. Though AI has been advancing rapidly, and may eventually be far more capable, it still relies on its programming. For now, AI is simply a tool, not a moral agent. It was made to assist humanity, not replace it.

When a tool causes harm, we must change our usage. With AI seamlessly integrating itself in the job market and in education, we must question what we really seek from it: How much of our contribution to the world needs to be automated? Is meaning lost when something without the depth of the human experience tries to replicate it?
