What is a large language model?
A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and very large data sets to understand, summarize, generate, and predict new content.
What are large language models used for?
LLMs have become increasingly popular because they have broad applicability for a range of NLP tasks, including the following:
Text generation. The ability to generate text on any topic that the LLM has been trained on is a primary use case.
Translation. For LLMs trained on multiple languages, the ability to translate from one language to another is a common feature.
Content summary. Summarizing blocks or multiple pages of text is a useful function of LLMs.
Rewriting content. Rewriting a section of text is another capability.
Classification and categorization. An LLM is able to classify and categorize content.
Sentiment analysis. Most LLMs can be used for sentiment analysis to help users better understand the intent of a piece of content or a particular response.
Conversational AI and chatbots. LLMs can enable a conversation with a user in a way that is typically more natural than older generations of AI technologies.
Writing code. LLMs can assist programmers with code completion and suggestions.
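To make one of these tasks concrete, here is a deliberately simplified sentiment scorer in Python. This is not how an LLM does it (an LLM learns sentiment from context across vast amounts of text), but it illustrates what the sentiment-analysis task asks for: mapping a piece of text to a positive, negative, or neutral label. The word lists here are invented purely for illustration.

```python
# Toy keyword-count sentiment scorer -- NOT how an LLM works, but it
# shows the task: turning text into a sentiment label.
POSITIVE = {"great", "good", "excellent", "love", "helpful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "useless"}

def sentiment(text):
    """Label text positive, negative, or neutral by counting keywords."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The tutorial was excellent and very helpful"))
```

An LLM replaces the hand-made word lists with patterns learned from data, which is why it can handle sarcasm, negation, and context that a simple counter like this cannot.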
ChatGPT is an AI chatbot system that OpenAI released in November 2022 to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions, and sometimes you will get a useful answer.
For example, you can ask it encyclopedia questions like "Explain Newton's laws of motion." You can tell it, "Write me a poem," and when it does, say, "Now make it more exciting." You can ask it to write a computer program that will show you all the different ways you can arrange the letters of a word.
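The letter-arrangement program mentioned above is the kind of request ChatGPT can handle. For comparison, here is one way such a program might look in Python, using the standard library's itertools.permutations:

```python
from itertools import permutations

def arrangements(word):
    """Return every distinct way to arrange the letters of word."""
    # A set removes duplicates that arise when the word has repeated letters.
    return sorted({"".join(p) for p in permutations(word)})

print(arrangements("cat"))  # -> ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```

A chatbot's version of this program may or may not be correct, which is exactly why its output should always be checked.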
Here's the catch: ChatGPT doesn't actually know anything. It's an AI that's trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialogue. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong. More significantly, AI programs like this are not connected to the internet and do not perform searches in the traditional way. They only know what they were trained on, which may be two or more years out of date.
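A toy sketch can make "trained to recognize patterns in text" more concrete. The bigram model below, a drastic simplification far removed from the neural networks real LLMs use, predicts each next word purely from word pairs it has seen before; the corpus and function names are invented for illustration:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Extend start by repeatedly sampling a word that followed the last one."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Like ChatGPT, this generator produces fluent-looking continuations without any notion of truth: it only echoes statistical patterns from its training text, which is why plausible output can still be entirely wrong.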
What are the problems with artificial intelligence in the classroom?
There are several significant concerns with AIs like these. The first, and probably most important, is that an AI only knows what its developers fed it: a snapshot of the general internet at a particular point in time. Occasional updates might occur, but in general the information can be very old. Additionally, because that information comes from the general internet, it is notoriously unreliable. Imagine the damage this can do with bad medical advice. Here at Ranger College we expect you to do enough quality research to find accurate answers to your questions.
In a similar vein, AI is subject to bias. Because it draws on the collective writing of millions of humans, past and present, it picks up their biases and repeats them as fact.
AIs can provide bibliographic information if requested, but it is often entirely made up. AIs generally do not incorporate journals, paywalled articles, or peer-reviewed material.
Invention, or "hallucination," is a common issue with AI programs. Material is routinely fabricated based on the keywords in a prompt. This is a serious problem for the quality of the information.
AIs can struggle to use proper language and grammar. That aspect is improving faster than the other problems, but AI output often sounds as if it were written by a non-native English speaker.
AIs cannot distinguish between blatantly false information and real information. Information literacy skills have always been critical, and when someone uses AI-generated information without verifying its accuracy, false information gains ground even faster.