I'm a Copywriter. I Know What's Going to Happen Online.


If you’ve ever used the internet to plan a trip, chances are you’ve taken advice on what to see and do from someone who’s never been to your destination. In fact, your guide probably has no specific knowledge of—or even personal interest in—sunbathing on the Gulf Coast, rock climbing in Moab, or admiring the architecture of Milan. However, on travel websites across the internet, writers provide jet-setters with very clear guidance: what time of day to go out, what kind of shoes to wear, and where to get money.

In the past, you could buy a travel book written by someone who had actually been to the area (or who, at the very least, did some old-school reporting on it, making phone calls to collect and verify information from people who had been there). Today, the recommendations you get through Google are written by people who, in turn, got them through Google.

This is a problem that affects more than just travel writing. It infects everything recommendation-related. Every day, writers are paid a pittance by ad agencies, big brands, and content mills to churn out pieces that aim to land a spot in your search results and sway your opinion with confident advice. I'm one of those writers, to be upfront about it: In my ten years as a copywriter, I've recommended drinks and dishes at bars and restaurants I've never set foot in, and I've raved about hunting gear even though I've fired a gun exactly once in my life. I've even written product reviews for items that aren't available in my own country. (About half of the long-sleeve styles seem to ship only to the United States, not my home country of England, much to the chagrin of my stiff knee.)

The information in these articles is drawn from many sources. Sometimes those sources are perfectly serviceable, like a venue's own website. But often it's Tripadvisor, Amazon reviews, or even random posts on niche subreddits. And not every writer will be like me, putting the skills from a history degree to good use and taking care to include only information that has been repeated across multiple reputable sites. When deadlines loom and bills pile up, the temptation to cut corners is strong.

Although I do a lot of research and pride myself on accuracy, without direct experience, things go wrong. In the past, I've given incorrect transit directions when writing about how to get to a museum, and listed the wrong dimensions in a tent's product description. Small mistakes, but the kind that don't happen when you've made the trip yourself or held the thing in your hands. Such errors can be corrected, and they aren't always critical. But they can be: Imagine a person with a mobility impairment searching for a step-free route into a museum and finding only stairs, a carefully planned day out ruined simply because someone on a deadline assumed their favorite tourist spot was accessible.

Through search engine optimization and other page-ranking techniques, this unverified content rises to the top of search results and people's attention. Yes, there are great travel tips (and products, and drinks) recommended by people with real experience of them. But well-researched pieces by genuine experts often lose out to SEO tactics, because the people who produce that content don't know the importance of internal linking, keyword repetition, and the other factors that can send a page shooting up the search results.

With the rise of large language models, the problem of poorly informed advice is going to get worse. Hastily written, unverified content will often become what LLMs treat as truth.

LLMs don't seek out information the way we do. Instead, they generate answers by predicting tokens, essentially a sophisticated form of predictive text. (Tokens are numerical values assigned to words, parts of words, and sometimes even individual characters, which is what allows the computer to "read" them.) But those predictions are based on the data fed into the machine, and data deemed "high value" can be weighted more heavily in the model's internals during training. An LLM does not know whether what it says is correct. It is not designed to provide truth, only to provide answers. You can see this clearly in the cases where chatbots have led their users into "AI psychosis." The LLM doesn't care where it takes you. It simply picks the word that makes the most sense given the ones before it, based on its weights and the mass of data it was trained on.
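To make the "sophisticated predictive text" idea concrete, here is a minimal, purely illustrative Python sketch of greedy next-token selection. The lookup table, token strings, and probabilities are all invented for the example; a real model scores tens of thousands of tokens with a neural network rather than a dictionary, but the core move is the same: pick whichever continuation scores highest, with no check on whether it is true.

```python
# Toy illustration of next-token prediction (not any real LLM's code).
# The "model" is just a table of made-up probabilities: given the
# preceding context, it scores candidate next tokens and keeps the
# highest-scoring one, with no notion of factual accuracy.

import random

# Hypothetical probabilities "learned" from hypothetical training data.
NEXT_TOKEN_PROBS = {
    "the hotel is": {"50": 0.46, "500": 0.44, "five": 0.10},
    "meters from the": {"beach": 0.9, "station": 0.1},
}

def predict_next(context: str) -> str:
    """Return the most probable next token for a known context."""
    probs = NEXT_TOKEN_PROBS.get(context)
    if probs is None:
        return random.choice(["the", "a", "is"])  # fall back to filler
    # Greedy decoding: take the single most likely continuation,
    # whether or not it happens to be correct.
    return max(probs, key=probs.get)

if __name__ == "__main__":
    print(predict_next("the hotel is"))     # -> "50", even if the truth is 500
    print(predict_next("meters from the"))  # -> "beach"
```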

Although LLM engineers can tinker with how heavily a given source is weighted, as appears to happen with X's Grok whenever it lands closer to observable reality than Elon Musk would like, the makers of general-purpose large language models such as ChatGPT and Google Gemini say that they prioritize training their models on sources generally seen as authoritative. That does not mean those sources will always tell the truth, or that the chatbot will repeat them faithfully. It means the chatbot will try to gather information from sources that tick the right boxes. Those sources can be wrong, and information can be lost or distorted in the game of telephone. Meanwhile, marketing professionals are already studying how LLMs weight sources, so they can make sure their content gets picked up in AI-generated summaries. In other words, inaccurate copy produced in a hurry, intended, at the end of the day, to capture as many eyeballs as possible rather than to inform, can easily end up being repeated by an LLM.

The stakes aren't very high when the model believes a hotel is 50 meters from the beach when it's really 500, or that a stain remover someone was paid to "review" really is safe on colors. But the number of people turning to AI for things like mental health support and nutrition advice makes these distortions worrying. Leaders in the AI field, such as Nvidia and OpenAI, claim there are strong safeguards against this kind of confident falsehood, but OpenAI researchers have themselves admitted that "hallucinations are inevitable," and industry experts note that such errors remain a real problem across many models.

Consider the following hypothetical: A natural-health brand wants to sell supplements to a mass audience. It might hire a writer to, in yet another piece on its site, extol the virtues of zinc and magnesium, focusing on the supposed immune-boosting properties of taking supplements that combine the two (which, of course, the company sells). This writer, eager to do a good job, reads studies that seem to support the claim, but thanks to a shaky grasp of the science, or of statistics, ends up making a false one. (Few phrases in modern advertising are more misleading than "studies show.") The writer, skilled at boosting page rankings with keywords and section headings, will have created an article that looks like information but is a thinly disguised advertisement. It floats to the top of Google … and is copied over and over again by other vitamin sellers. That claim then ends up in the top AI answers about the benefits of magnesium and zinc supplements, because the LLM considers it the most "probable" answer to, say, common questions about staying healthy during cold and flu season.

The tips and tricks I use to avoid being taken in by AI-generated junk are the same ones that have long been used to combat disinformation, and that I honed during my humanities degree. I double-check facts and figures and make sure they come from reputable sources, with multiple sources to back them up. (Often, articles on the same topic will all cite the same wrong source, so be careful!) Strong emotions are a warning sign: If I read something that makes my blood boil, or that matches my own views a little too neatly, I make sure to check the source. When it comes to your health, experts stress the importance of keeping "someone in the loop," that is, talking to your doctor before taking advice from a machine. And your next vacation? Well, if you use ChatGPT to plan it, you'd probably better bake in some extra time for when things go wrong.

