General Discussion
Borowitz speculates that Orange Julius Caesar is dead
https://open.substack.com/pub/borowitzreport/p/string-of-correctly-spelled-texts
5 replies

Borowitz speculates that Orange Julius Caesar is dead (Original Post)
generalbetrayus
Oct 6
OP
Celerity
(52,502 posts)1. It is satire. nt
generalbetrayus
(1,281 posts)5. Yes, Andy Borowitz is a well-known satirist.


orleans
(36,479 posts)2. the "most troubling sign" was that the douchebag made another appearance today.
so... not dead yet.
usonian
(21,394 posts)3. Borowitz reads my posts!
https://www.democraticunderground.com/100220281483
The Hallucinating ChatGPT Presidency -- Is Tr-mp a chatbot?
copied here
https://www.techdirt.com/2025/04/29/the-hallucinating-chatgpt-presidency/
Judge for yourself.
Tue, Apr 29th 2025 09:34am - Mike Masnick
Great article and hard to summarize, because the author gives so many spot-on examples.
We generally understand how LLM hallucinations work. An AI model tries to generate what seems like a plausible response to whatever you ask it, drawing on its training data to construct something that sounds right. The actual truth of the response is, at best, a secondary consideration.
snip
But over the last few months, it has occurred to me that, for all the hype about generative AI systems hallucinating, we pay much less attention to the fact that the current President does the same thing, nearly every day. The more you look at the way Donald Trump spews utter nonsense answers to questions, the more you begin to recognize a clear pattern: he answers questions in a manner quite similar to early versions of ChatGPT. The facts don't matter, the language choices are a mess, but they are all designed to present a plausible-sounding answer to the question, based on no actual knowledge, nor any concern for whether or not the underlying facts are accurate.
snip
This is not the response of someone working from actual knowledge or policy understanding. Instead, it's precisely how an LLM operates: taking a prompt (the question about job losses) and generating text based on some core parameters (the system prompt that requires deflecting blame and asserting greatness).
The hallmarks of AI generation are all here:
Confident assertions without factual backing
Meandering diversions that maintain loose semantic connection to the topic
Pattern-matching to previous responses ("ripped off," "billions of dollars")
Optimization for what sounds good rather than what's true
snip
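As a rough illustration of the mechanism the excerpt describes (a system prompt steering generation toward confident, plausible-sounding answers with no grounding in facts), here is a minimal Python sketch. It assumes the OpenAI Python client (openai>=1.0) with an API key in the environment; the model name, prompts, and the "job losses" question are illustrative stand-ins, not anything taken from the article.

# Hypothetical sketch of "plausibility over truth" generation.
# Nothing below consults any source of facts; the only inputs are the
# prompts and the model's training patterns, which is the point of the comparison.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Always sound supremely confident. Deflect all blame onto others, "
    "assert greatness, and never admit uncertainty or check any facts."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    temperature=1.2,      # higher temperature adds sampling randomness: more fluent variety, less precision
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why were there job losses last quarter?"},
    ],
)

# The reply will be grammatical and confident; nothing above grounds it in real data.
print(response.choices[0].message.content)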
TexasTowelie
(123,591 posts)4. Orrex will be disappointed about the news. nt