Apple is reportedly planning to launch AI-powered glasses, a pendant, and AirPods
Source: The Verge
Apple is reportedly aiming to start production of its smart glasses in December, ahead of a 2027 launch. The new device will compete directly with Meta's lineup of smart glasses and is rumored to feature speakers, microphones, and a high-resolution camera for taking photos and videos, in addition to another lens designed to enable AI-powered features.
The glasses won't have a built-in display, but they will allow users to make phone calls, interact with Siri, play music, and take actions based on surroundings, such as asking about the ingredients in a meal, according to Bloomberg. Apple's smart glasses could also help users identify what they're seeing, reference landmarks when offering directions, and remind wearers to complete a task in specific situations, Bloomberg reports.
-snip-
Apple's plans for AI hardware don't end there, as the company is expected to build upon its Google Gemini-powered Siri upgrade with an AirTag-sized AI pendant that people can either wear as a necklace or a pin. This device would essentially serve as an always-on camera for the iPhone and has a microphone for prompting Siri, Bloomberg reports. The pendant, which The Information first reported on last month, is rumored to come with a built-in chip, but will mainly rely on the iPhone's processing power. The device could arrive as early as next year, according to Bloomberg.
Apple could also launch upgraded AirPods this year, which Gurman previously said could pair low-resolution cameras with AI to analyze a wearer's surroundings.
Read more: https://www.theverge.com/tech/880293/apple-ai-hardware-smart-glasses-pin-airpods
The Trump regime must love the AI bros' quest to create a society where everyone has themselves and everyone around them under surveillance at all times.
They just need lots of data centers to handle the non-stop data gathering by all the dumb Americans the tech lords can convince to use the devices.
The pendant with the always-on camera and microphone sounds even more ideal for surveillance than a camera in an eyeglass frame that might have to be turned on and would light up to show you're recording everyone and everything around you.
And if you think the Trump regime won't demand that data...
FalloutShelter
(14,350 posts)
Multichromatic
(95 posts)Why should we help corporations set up an unconstitutional surveillance state?
Polybius
(21,697 posts)Pretty cool. You can listen to music, make calls, take pics, etc. If you say "Hey Meta, what am I looking at?", it will tell you what it sees in detail.
Not sure why so many hate this tech, but it's most likely a generation thing.
highplainsdem
(61,119 posts)recording video and/or audio with them, or using them with a facial recognition app to find out who they are and maybe pretend to have met them before. Or try to find out where they live, even before they've given you their name.
They're essentially surveillance devices. What they pick up will be saved by Meta and will become available to the government, if the authorities want it.
People are getting rid of their Ring devices, too.
I wouldn't trust anyone wearing smart glasses.
I saw video on YouTube from someone recording an Oasis concert while wearing smart glasses, and at no point did she tell the people she was talking to that she was recording them from a couple of feet away.
At least if someone is using a smartphone for recording, it's usually pretty obvious.
Polybius
(21,697 posts)When recording or looking, it emits a bright light, letting anyone know that it is being used. Also, Ray-Ban Meta glasses do not have built-in facial recognition technology that identifies people.
I have Ring too. Why are people getting rid of it? What's it gonna see, me leaving for work? Maybe getting the mail? Just wear pajamas and it won't be an issue.
highplainsdem
(61,119 posts)And even without it being built in, there are facial recognition apps that anyone wearing smart glasses can use.
People don't like being surveilled and recorded by just anyone they meet, anywhere.
And people who want to record others without them knowing are creeps most people wouldn't want to know.
Editing to add that in the current political climate, no one should trust a stranger wearing smart glasses.
Polybius
(21,697 posts)From your link:
Also, there are no official outside apps that you can download, at least without rooting your device.
No comment about the light that comes on to alert others that it's recording? I guess you missed it.
highplainsdem
(61,119 posts)they've already tested smart glasses recording strangers and discovered people don't notice the light.
No AI user deserves wearable AI at the expense of the privacy of everyone they meet.
They're a violation of other people's rights.
It's against the law in most states to record a phone conversation without letting the person you're talking to know you're recording.
If anyone has a legitimate reason to need smart glasses, the glasses should be made glaringly obvious to everyone around.
Polybius
(21,697 posts)Meta intentionally built visible safeguards into the design. The recording LED isn't optional: you cannot simply cover it with your hand or tape and keep recording. If you try to block it, the glasses won't record. The only theoretical workaround would involve physically damaging the glasses, like drilling into them to remove the LED, which is complicated, risky, and would almost certainly destroy a very expensive product. That's not something the average person is going to do.
And as for the claim that people won't notice the light: it's extremely bright and obvious when recording. If someone truly wouldn't notice that LED, they probably wouldn't notice someone discreetly taking a photo with a smartphone either. Smartphones are far more common, easy to conceal, and far less regulated in how they signal recording.
This isn't the first time new camera technology has sparked privacy panic. When camera phones first became mainstream in the early 2000s, many people insisted they were unnecessary and invasive. There were serious arguments from old-timers, such as that no one needs a camera on them at all times and that only creeps would want that capability. Fast forward over twenty years, and smartphones with cameras are completely normalized. Society adapted, etiquette evolved, and life went on.
As for practical uses, there are plenty, and most are completely ordinary. The glasses are incredibly convenient for hands-free calls and music while walking. The voice assistant features are genuinely useful: being able to say, "Hey Meta, what am I looking at?" and get real-time context is impressive, helpful, and not to mention fun. And sometimes you just need to capture a quick moment without fumbling for your phone, like a deer crossing your path or something happening in real time that would be gone by the time you unlock your screen.
Like any technology, the glasses can be misused, but so can a phone, a laptop, or practically any recording device. The key difference here is that Meta clearly anticipated privacy concerns and built visible, enforceable safeguards directly into the hardware. That's not negligence; that's responsible design.
Technology evolves. The question isn't whether it exists; it's whether it's built with thoughtful protections. In this case, it clearly is.
highplainsdem
(61,119 posts)or would you wonder if they're a pedo trying to record the kids surreptitiously - and since they like AI, maybe using whatever they record for deepfake porn?
Someone wearing smart glasses does not deserve the benefit of the doubt. No one doing surreptitious surveillance does.
Smart glasses should be banned, and people using them should be shunned or banned from places where people don't want to be surveilled/recorded. Think of creeps wearing smart glasses in men's rooms, for instance. Or in dressing rooms used by multiple people.
Polybius
(21,697 posts)If a relative or close friend that I trusted was watching my kids or nephew and happened to be wearing Meta's, that's fine with me. They have their cell phones on them too.
Banned? That's an insane argument to make. This isn't North Korea. Dressing rooms? Sure, that would bother me. But a creep can sneak a cell pic in there too. Or, worse yet, have a dedicated spy device (which are legal to own, btw). Those are just the risks we have to take in a free society. Freedom comes with the good and bad.
highplainsdem
(61,119 posts)video and audio at any time. Google Glass smart glasses were discontinued because people objected to them. (See the link in reply 11.)
It's legal to own dedicated spy devices, but illegal to put them in bathrooms or changing rooms or other areas where people expect privacy. And creeps taking photos there will get in trouble, too.
You want freedom for smart glasses users, and for everyone else to lose their rights not to be recorded/surveilled.
Neither you nor anyone else deserves that freedom.
They're already having to ban smart glasses for SATs because they can't trust the AI users wearing them not to cheat.
If you think people shouldn't mind being recorded, I suggest you try walking around in public for a while, holding up your smartphone wherever you're facing, telling everyone you look at that you're recording them.
You're not likely to find very many people happy about that. Some might try to drive you away, some might call the police, and if you try this in any business you'll probably be asked to leave.
You want the "freedom" to do that surreptitiously with smart glasses.
That's creepy.
Polybius
(21,697 posts)I think you're framing this as a zero-sum issue: that giving someone the ability to use smart glasses automatically strips everyone else of their rights. That's not how this works.
First, in most public spaces in the United States, there is no legal expectation of privacy. That standard already applies to smartphones, dashcams, security cameras, GoPros, and even doorbell cameras. The Ray-Ban Meta Smart Glasses don't create a new legal reality; they exist within the same one that's been in place for decades.
Second, the idea that this is about surreptitious recording ignores the built-in safeguards. The glasses have a mandatory recording LED that cannot simply be covered to keep filming; the device literally won't record if you block it. That's more transparency than most phones provide. With a phone, someone can appear to be texting while recording. With these glasses, there's a visible signal by design.
On the SAT example: schools banning devices during exams isn't new or unique to smart glasses. Phones, smartwatches, calculators, even certain headphones have all been restricted in testing environments. That's not evidence that the technology itself is unethical; it's just normal policy adapting to new tools. We don't ban smartphones from society because they're banned in testing centers.
Your suggestion about walking around openly holding up a phone and announcing you're recording isn't really comparable. Social norms matter. If someone walks around aggressively filming people at close range, they'll make others uncomfortable whether they're using a phone, a DSLR, or anything else. That's a behavior issue, not a hardware issue. The same social expectations would apply to someone misusing smart glasses.
The reality is that most people using them are doing very ordinary things: hands-free calls, music, quick photos of things happening in real time, or using AI features for accessibility and convenience. The overwhelming majority of users aren't trying to secretly surveil strangers.
Every new recording technology has triggered fear at first. Camera phones did. GoPros did. Even early portable camcorders did. Society adjusted, etiquette developed, and life continued.
You may personally find the concept uncomfortable; that's fair. But discomfort doesn't automatically equal a loss of rights. The legal framework hasn't changed. The social norms haven't collapsed. And the device was intentionally designed with visible safeguards to address exactly the concerns you're raising.
Calling it creepy assumes malicious intent. Most of the time, it's just another evolution of the camera that's already been in everyone's pocket for 20 years.
As for creeps getting in trouble, good. Lock up anyone who takes pics in bathrooms or changing rooms.
This is clearly a generational issue, and we're unlikely to ever see eye to eye on this.
highplainsdem
(61,119 posts)prices typically come down quickly. Privacy concerns don't go away.
Sigh. Just google
can smart glasses led light be disabled
and you'll quickly find lots of results on disabling that light, such as this one:
https://www.404media.co/how-to-disable-meta-rayban-led-light/
See the article with video below, which I saw mentioned on social media by people commenting that those being recorded didn't notice the LED light or know what it meant.
https://euroweeklynews.com/2025/12/06/ai-glasses-spark-rip-privacy-alarm-in-the-netherlands-a-new-era-of-recognition/
See this Reddit thread
https://www.reddit.com/r/RaybanMeta/comments/1n80c1o/is_the_led_light_super_noticeable/
and the comments in the replies about people being recorded not noticing the LED light at all (especially in bright light, like on a sunny day), and people using these surveillance devices buying stickers to block the LED lights without shutting off the camera, or just drilling out the light while keeping the camera functioning.
Your saying you wouldn't record surreptitiously does nothing to assure anyone that others using them won't use them that way.
You're also ignoring the fact that AI companies are data-gathering, using pretty much everything collected for training their AI, and they WILL share that data with authorities including the Trump regime.
So even if you think you're recording just for yourself, you have no way of assuring anyone around you that the Trump regime won't end up with everything your smart glasses capture.
Someone wearing smart glasses and recording at a protest is a threat to everyone at the protest. Someone using a smart phone just to openly record ICE agents is much less likely to end up providing closeup shots of other protesters to federal authorities, or conversations that ICE might think make them a threat to ICE officers who aren't breaking the law, or to Trump. Do you really want to record some angry remark about Trump that could get someone reported to the Secret Service and prosecuted, when they hadn't been serious about what they said and retracted it a second later?
Even if you aren't using facial recognition software, you can't stop others from using that software on what you record.
You're effectively mobile surveillance for the AI bros and government, every time you use smart glasses to record.
And there WILL be creeps using this software to record because they're perverts or hope to record something for blackmail or just to embarrass others.
Your use of AI, all by itself, counts against you being as trustworthy as you would be otherwise, since AI is so often used for fraud of different types, from students cheating, to workers pretending to have done work they had AI do instead, to scams by professional criminals.
Wanting to use AI without others seeing you use a smartphone, and record without an obvious camera including a smartphone camera, does suggest possible deceit even more, and not just convenience for you.
And this could end up making people wearing ordinary glasses appear suspicious at first, which would be really unfair to them.
Polybius
(21,697 posts)There's a lot there, so I'm going to respond point-by-point rather than brushing it off.
1. Prices come down. Privacy concerns don't.
True, privacy concerns don't disappear. But they also aren't static. They get addressed through design changes, policy, norms, and law. That's exactly what happened with smartphones, dashcams, Ring doorbells, and body cams. None of those eliminated privacy concerns; they forced clearer rules and expectations.
The Ray-Ban Meta Smart Glasses are operating inside an already-established legal framework about recording in public. They didn't create that framework.
2. You can disable the LED.
Yes, I'm aware that people online experiment with hardware modifications. But that's not the same as normal use.
If someone:
Drills into a $300-$400 device
Risks breaking it
Voids the warranty
Potentially damages internal components
That's intentional tampering.
You can also:
Jailbreak phones
Disable shutter sounds
Install hidden camera apps
Modify drones
The existence of modding communities doesn't mean the default product is designed for secrecy. It means determined people can modify hardware, which is true of almost any device with a camera.
If someone is willing to physically alter hardware to secretly record others, they were already willing to violate norms. The glasses didn't create that intent.
3. People don't notice the LED.
In bright sunlight, yes, visibility of any small light is reduced. The same is true of:
A phone screen angled downward
A smartwatch recording
A GoPro clipped to clothing
No indicator system is perfect in every lighting condition. The relevant question is: Did the manufacturer attempt visible disclosure? In this case, yes.
And again, a smartphone can record far more discreetly than someone turning their head directly at you with glasses that visibly light up.
4. AI companies gather data and share with authorities.
This is where the argument shifts from device ethics to broader distrust of tech companies and government. That's a separate and legitimate policy debate.
But it applies equally to:
iPhones
Android phones
Social media uploads
Cloud backups
Email providers
If someone records a protest on a smartphone and uploads it to Instagram, that footage is also on corporate servers and accessible via lawful process. That risk isn't unique to smart glasses.
And importantly: users control whether media is uploaded or kept local. Not everyone is live-streaming everything to AI systems.
If the concern is mass surveillance or government overreach, that's about data governance laws, not about whether a camera is mounted on your face or in your hand.
5. Someone recording at a protest is a threat.
Anyone recording at a protest with any device creates that same dynamic. Phones already capture high-resolution, zoomed, stabilized video with far greater detail than smart glasses.
In fact, someone openly holding a phone above a crowd often captures more faces than someone wearing glasses casually looking around.
Again, the risk you're describing is tied to recording in general, not uniquely to this product category.
6. Creeps will use it.
Creeps already:
Use phones
Hide cameras
Install spy devices
Misuse AirTags
Abuse drones
We don't ban all smartphones because some people take upskirt photos. We criminalize the behavior.
Technology doesn't eliminate bad actors. It sets default guardrails and relies on laws for enforcement.
7. Using AI counts against your trustworthiness.
That's a broad generalization.
AI is used for:
Accessibility tools
Navigation assistance
Language translation
Image recognition for the visually impaired
Productivity support
Saying using AI makes you less trustworthy is like saying using a calculator makes you dishonest because some students cheat.
Intent matters. Context matters.
8. Wearing glasses will make everyone suspicious.
We already went through this phase with:
Bluetooth headsets
AirPods
Early smartwatches
Body cameras
At first, people reacted strongly. Over time, norms adjusted. Most people now assume someone wearing AirPods is listening to music, not secretly recording.
If smart glasses ever become widespread, visible indicators and cultural familiarity will normalize their presence the same way smartphones did.
The core disagreement
You're arguing from a worst-case lens:
What if someone disables safeguards?
What if data is misused?
What if the government abuses it?
What if a creep exploits it?
Those are valid concerns, but they apply to nearly all modern recording technology.
I'm arguing from a proportionality lens:
The legal environment hasn't changed.
The default hardware includes visible disclosure.
The vast majority of use cases are mundane.
Bad actors already have more powerful tools in their pockets.
If the issue is broader AI data practices or government overreach, that's a serious civic discussion. But that's not unique to these glasses.
The device itself doesn't automatically convert someone into mobile surveillance for AI bros. It's a camera in a different form factor, operating under the same laws, norms, and risks that already exist.
We can debate regulation and corporate data policy. But treating the hardware category itself as inherently sinister assumes malicious intent by default, and that's a much bigger claim than "this technology has tradeoffs."
highplainsdem
(61,119 posts)You're trivializing how much easier smart glasses make spying.
You're exaggerating legitimate uses for AI, especially needing to access an AI model when you're just walking around.
Your wearing smart glasses is still a good reason for others to be suspicious of you.
At least some CBP agents now wear Meta's smart glasses:
https://www.404media.co/a-cbp-agent-wore-meta-smart-glasses-to-an-immigration-raid-in-los-angeles/
"It's clear that whatever imaginary boundary there was between consumer surveillance tech and government surveillance tech is now completely erased," Chris Gilliard, co-director of The Critical Internet Studies Institute and author of the forthcoming book Luxury Surveillance, told 404 Media.
"The fact is when you bring powerful new surveillance capabilities into the marketplace, they can be used for a range of purposes including abusive ones. And that needs to be thought through before you bring things like that into the marketplace," the ACLU's Stanley said.
-snip-
Update: After this article was published, the independent journalist Mel Buer (who runs the site Words About Work) reposted images she took at a July 7 immigration enforcement raid at MacArthur Park in Los Angeles. In Buer's footage and photos, two additional CBP agents can be seen wearing Meta smart glasses in the back of a truck; a third is holding a camera pointed out of the back of the truck. Buer gave 404 Media permission to republish the photos; you can find her work here.
Polybius
(21,697 posts)Big platforms absolutely want data, and they've earned skepticism over the years. I'm not arguing that Meta or any AI company deserves blind trust.
What I am pushing back on is the idea that smart glasses uniquely transform ordinary people into surveillance agents in a way smartphones, body cams, dashcams, and social media already haven't.
1. You're trivializing how much easier smart glasses make spying.
They change the form factor. They don't change the underlying capability.
A modern smartphone:
Has a higher-resolution camera
Has optical zoom
Has stabilization
Can live-stream instantly
Can upload automatically to cloud storage
If someone wants to secretly record people, a phone is already a far more powerful tool. Smart glasses are actually more limited in angle, battery, and control. They're not some quantum leap in surveillance; they're a hands-free camera.
The difference is subtlety of posture, not power of capture. And subtle recording has existed for years via phones held low, chest-mounted cameras, button cams, etc.
2. You're exaggerating legitimate uses for AI while walking around.
Not really. For some people, especially those with visual impairments, AI description features are genuinely useful. Even for fully sighted people, real-time translation, object recognition, or contextual info can be practical.
Is it essential for survival? No.
But neither is:
AirPods
Smartwatches
Voice assistants
Fitness trackers
Convenience tech doesn't need to be life-or-death to be legitimate.
3. Your wearing smart glasses is still a good reason to be suspicious.
Suspicion isn't a rights framework; it's a social reaction.
People were suspicious of:
Early Bluetooth earpieces
Google Glass users
People filming with GoPros
People flying drones
Over time, norms settle. Suspicion doesn't automatically equal wrongdoing. If someone behaves normally, most of that suspicion fades in context.
4. Law enforcement using them
The 404 Media article you cited is important. If U.S. Customs and Border Protection agents are wearing Ray-Ban Meta Smart Glasses during immigration raids, that absolutely raises civil liberties questions.
But notice something critical:
That concern is about government use, not civilian ownership.
Law enforcement already uses:
Body cameras
Facial recognition databases
Drones
Stingrays
License plate readers
If agencies adopt a consumer product, that's a policy and oversight issue. It doesn't logically follow that ordinary citizens shouldn't own the device.
Otherwise, by that reasoning, once police started using smartphones, civilians should've stopped carrying them too.
5. Meta is guiding privacy norms because it's early.
That's fair: early-stage tech often has company-driven norms before regulation catches up.
But that's not permanent. Smartphones were once dominated by a few players shaping norms. Now privacy law, court rulings, and public pressure heavily influence what companies can and cannot do.
If smart glasses become widespread, they will fall under:
State privacy laws
Federal wiretap laws
Biometric data laws (in some states)
Civil liability
Meta doesn't get to operate outside the legal system just because the form factor is new.
6. The protest scenario
You're worried about:
Facial recognition
Protester identification
Government abuse
Someone saying something angry on camera
Those are serious concerns, but again, smartphones already enable all of that at scale. In fact, most protest footage that ends up online today is captured via phones and posted to social platforms.
The risk you're describing is about:
Data retention
Uploading to corporate servers
Government subpoenas
Facial recognition databases
Those exist independently of smart glasses.
If someone is concerned about surveillance at a protest, the safest approach is digital hygiene, not assuming glasses are uniquely dangerous while phones are somehow benign.
7. AI companies are desperate for training data.
Yes, companies want data. But:
Users can control upload settings.
Not all captured footage is automatically used for training.
Policies around AI training data are under intense regulatory scrutiny globally.
If the issue is AI training practices, that's a broader regulatory debate, not something solved by opposing one wearable device.
The real divide here
You're arguing from systemic distrust:
Corporations will exploit data.
Governments will abuse access.
New tech amplifies surveillance creep.
That's a coherent worldview.
I'm arguing that:
The surveillance ecosystem already exists.
Smart glasses are incremental, not revolutionary.
Misuse is a behavioral and regulatory issue, not an inherent property of the device.
Civilian ownership doesn't equal endorsement of state surveillance.
It's reasonable to demand strong data governance and limits on law enforcement use. I support that.
But equating every civilian wearer with "mobile surveillance for AI bros and the government" assumes malicious intent and inevitability of abuse, and that's a leap.
The conversation we probably should be having isn't "ban smart glasses"; it's:
What are the default upload settings?
What transparency exists around AI training?
What limits exist for government acquisition of consumer-captured data?
Should visible indicators be standardized across all wearable cameras?
That's a policy conversation.
Calling individual users inherently suspicious because they wear a new form factor camera feels less like a privacy argument and more like a presumption of guilt.
highplainsdem
(61,119 posts)an AI device, which means you're fine with technology built on the worldwide theft of intellectual property. It may not be fair to everyone who's a fan of AI to wonder if their responses come from a chatbot, at least at times, but it's always a possibility when people use and defend genAI. We have seen other DUers posting AI responses and not admitting, at least at first, that they're AI.
I don't know for certain whether you are posting replies completely or partially written by AI, but there seemed to be a noticeable shift in your writing style with your first long reply here (reply 16), where you also started using em dashes, and where your replies changed from using straight apostrophes and quotation marks to the curly form. It's possible you were simply composing a long reply elsewhere to copy to this board, instead of just typing one on the board, but I hate that genAI has led to people having to wonder if online posts were written by AI.
And with smart glasses, there's no way to know if what someone says was suggested by AI. Which, as has been pointed out in various articles, is particularly dangerous if someone using facial recognition software as well can get enough instant info from AI to pretend familiarity, shared interests, knowledge, etc.
We're not going to agree - ever - on your praise of AI devices like smart glasses despite their being so useful as surveillance devices, and despite Meta as a corporation and Zuckerberg as a tech lord showing zero concern about ethics unless forced to.
And it's disappointing to see any indications, in any online discussion, that genAI might be being used.
I hate the potential for pretense of all types that generative AI has caused to explode. GenAI has caused tremendous damage to our society already, and smart glasses will worsen that.
A society where people have to wonder at all times if someone they meet might be recording them, or if someone is being coached/helped by genAI in what they say, is a society most people would consider hostile to humanity and real relationships.
Polybius
(21,697 posts)I've been drafting a lot of them offline in Notepad because they're long and I don't want to lose them if the page refreshes (and I do it in bits and pieces on Notepad). When I paste them in, sometimes the formatting shifts. Bullets compress, em dashes change, spacing changes, etc. The tone shifted because I am passionate about the subject and wanted to respond carefully instead of casually firing off short comments.
That's it.
I get why generative AI has made people suspicious. It's created a weird environment where you can't always tell what's human-written. But structured writing, longer paragraphs, or punctuation changes aren't proof of anything. Plenty of people write in a deliberate, organized way when they care about the subject.
And I do care about this subject. That's the real shift. When something feels mischaracterized, I tend to slow down and write more thoroughly. That's passion, not automation.
On the broader point: I understand your distrust of Meta and of generative AI generally. You see it as built on scraped data, corporate power, and potential social harm. I don't dismiss that concern. But defending a product like the Ray-Ban Meta Smart Glasses doesn't automatically mean I endorse every corporate practice behind AI training datasets or that I'm indifferent to intellectual property debates.
You're also right that AI creates ambiguity in social interactions: whether someone is being coached, assisted, or augmented. That's a cultural shift we're still adjusting to. But that ambiguity exists whether I personally use AI tools or not. It's already part of the digital landscape.
We probably won't agree on the larger philosophical divide. You see generative AI and wearable tech as corrosive to authenticity and human trust. I see them as tools with tradeoffs that require norms and guardrails but aren't inherently dehumanizing.
But at least on one point, I can remove the uncertainty: these replies are mine. The formatting quirks are just copy-paste artifacts and me trying to write clearly about something I'm genuinely engaged in, not a chatbot speaking for me.
highplainsdem
(61,119 posts)this board will be honest about AI use - we liberals are supposed to be ethical - but I can't deny that I still feel a little bit of skepticism because you use genAI and defend its use.
And that's a fundamentally unethical choice, because you know how AI models were trained, and you're aware of the harm it does.
To me, the AI companies' theft of the world's intellectual property is an absolute wrong. There was never any excuse for it. And it's continuing, every second of every day.
You can't ignore that and decide it's just fine to use genAI (when you aren't being forced to) without making a decision that the rights of all the people whose work was stolen DON'T MATTER.
And honestly, that leaves me wondering if you'd have been just fine with other great injustices throughout history, including slavery, as long as that injustice created some benefit for you.
I've hated seeing DUers using genAI and pushing its use, because to me that's a betrayal of everything liberals and progressives are supposed to stand for.
It's a selective blindness that permits you to enjoy a technology that would not exist without that theft.
I expect DUers to be better than that. To care about the greatest theft of intellectual property in history.
I expect DUers to care about all the other harm done by genAI and the AI companies and AI bros, who are every bit as much a threat to society and the natural environment as the Trump regime is.
Instead, some people who should be more ethical than that are apparently choosing that selective blindness so they can pretend to have knowledge they don't have and don't want to bother to look up. Or pretend to write better than they can. Or pretend they can create music or images or video, when they never bothered to acquire those skills.
Even if those AI users are still ethical in other areas of their lives, their voluntary use of genAI says they're willing to sell at least part of their soul to enable fraud and support oligarchs out to crush the people who provide the world's knowledge and culture.
There are times I want to chalk up AI-using liberals making that deal with devils to naivete - and there are probably a few AI users who aren't aware that genAI exists only because of theft. But most AI users know about that theft, and I've been posting about it for three years on DU.
So what I'm seeing is a lot of people who consider themselves liberal but are anything but liberal - are fans and supporters of AI robber barons and all the harm they do - when they voluntarily use genAI, circulate AI slop because they like it, and defend its use.
I'd simply find this disgusting if RWers were doing it. The theft of the world's intellectual property is a natural fit with RWers.
But when I see liberals happy to use genAI without being forced to, what I feel instead is almost incredulous disappointment, and heartbreak, and betrayal.
Because your embrace of genAI is a betrayal of everyone whose work was stolen, and a betrayal of humanity in general.
Being pro-genAI is being anti-human.
And I hope everyone who's AI-addled figures that out before they sleepwalk out on that genAI plank into a future controlled by tech oligarchs, frauds using genAI, and hallucinating chatbots.
And btw, I'm not the only person who sees genAI and the tech lords controlling it as just as great a threat as Trump is. Maybe a greater threat.
Will Bunch wrote this yesterday:
Is AI's authoritarianism a bigger threat than Trump's?
https://www.inquirer.com/opinion/ai-advances-anthropic-claude-20260219.html
Polybius
(21,697 posts)Nothing I've ever written to you was personal, as I'm sure that we agree on 9 out of 10 things. I hear how strongly you feel about AI, and I'm not dismissing it. You're coming at this from a place of protecting artists, writers, and ordinary people from corporate exploitation. That's not something I see as crazy or extreme. It's rooted in real concerns about power and accountability.
Where we differ is that I don't see using AI as automatically endorsing theft or devaluing human creators. I think the legal and ethical questions around training data are complicated and still being worked out in courts and legislatures. Reasonable liberals can disagree on whether training on publicly available data constitutes theft in the way you describe. That debate is ongoing, and I support clearer rules, compensation models, and guardrails.
But I don't accept the leap from "this technology has serious unresolved ethical issues" to "anyone who uses it is morally equivalent to someone who would support slavery or authoritarianism." That's a bridge too far for me. It turns a policy disagreement into a character indictment.
I'm not trying to replace human creativity or help oligarchs crush culture. I still value human art, human writing, and human relationships. Using AI for other purposes doesn't negate that.
We may never agree on this. But I hope we can at least keep it in the realm of good-faith disagreement about technology and ethics, and not betrayal, soul-selling, or being anti-human. I don't see myself that way, and I don't see you as necessarily wrong for opposing it. We're just drawing the moral line in different places.
highplainsdem
(61,119 posts)The people who use Meta glasses in public are so creepy
https://www.reddit.com/r/Cameras/comments/1l6v0ib/the_people_who_use_meta_glasses_in_public_are_so/
Are wearing AI-powered Smart Sunglasses actually the creepiest thing in tech right now?
https://www.reddit.com/r/technology/comments/1pkqpo5/are_wearing_aipowered_smart_sunglasses_actually/
Meta raybans are creepy as all fuck
https://www.reddit.com/r/SeriousConversation/comments/1n820wi/meta_raybans_are_creepy_as_all_fuck/
Meta Rayban glasses used to identify folks on the street within seconds. They're also becoming more popular within rave and concert events. Should these devices be banned from all dance music events?
https://www.reddit.com/r/aves/comments/1prnot0/meta_rayban_glasses_used_to_identify_folks_on_the/
I don't want to interact with someone who is wearing smart glasses
https://www.reddit.com/r/unpopularopinion/comments/1hq3kdb/i_dont_want_to_interact_with_someone_who_is/
I Can't Help Feeling Like a Creep Wearing Meta's New Gen 2 Glasses
https://www.reddit.com/r/technology/comments/1omgrb3/i_cant_help_feeling_like_a_creep_wearing_metas/
Smartglasses spark privacy fears as secret filming videos flood social media | Technology News
https://www.reddit.com/r/privacy/comments/1qv7kue/smartglasses_spark_privacy_fears_as_secret/
Gen Z pushes back against smart glasses and cameras over privacy fears
https://www.reddit.com/r/technology/comments/1n5qsr5/gen_z_pushes_back_against_smart_glasses_and/
Polybius
(21,697 posts)Reddit amplifies strong reactions. That's what it does. If you search "smart glasses creepy," you'll obviously find posts calling them creepy. If you search "Ray-Ban Meta awesome" or "Ray-Ban Meta useful," you'll find plenty of people praising them for convenience, accessibility, hands-free video, music, calls, and travel use. Online forums skew toward outrage and hot takes; that's not the same thing as broad societal consensus.
The product we're talking about, the Ray-Ban Meta Smart Glasses, is selling well, being reviewed positively by major tech outlets, and being used by regular people for normal, boring reasons. That matters more to me than emotionally charged thread titles.
You can curate a list of links calling anything creepy:
AirTags were called stalking tools.
Drones were called spy machines.
GoPros were called invasive.
Early camera phones were labeled pervert devices.
Unless you still agree with those early opinions?
Every time, early adopters got side-eyed. Every time, a vocal online minority framed the tech in worst-case terms. And every time, usage normalized once people realized most owners weren't villains.
There are also counterarguments all over Reddit and elsewhere, with users pointing out the LED indicator, people explaining they use them for biking, walking, or accessibility, people preferring them over pulling out a phone constantly, and users saying they feel less awkward recording because the device signals clearly when it's active.
You're presenting Reddit discomfort as proof of inherent wrongdoing. That's just social friction around new tech. Social friction doesn't equal ethical collapse.
If someone personally doesn't want to interact with a person wearing smart glasses, that's their choice. But that's a social preference, not a moral verdict.
The internet will always have threads calling something creepy. That alone doesn't make the technology illegitimate, and it doesn't make every person wearing it suspicious by default.
travelingthrulife
(4,927 posts)Our 'need' for instant gratification will kill us.
Crowman2009
(3,469 posts)....
muriel_volestrangler
(105,917 posts)"which had been specially designed to help people develop a relaxed attitude to danger. At the first hint of trouble they turn totally black and thus prevent you from seeing anything that might alarm you."
Douglas Adams (The Restaurant at the End of the Universe)
FemDemERA
(742 posts)
Skittles
(170,484 posts)all complain about internet tracking being invasive
Red Mountain
(2,291 posts)HELL no!
Society will adjust. Signs on doors, peer pressure, I just don't know what else. For sure, the amount of data that even small-scale adoption of devices like these will generate will mean an enormous need for storage and processing space... which will mean new power sources and new data centers. Kind of like an AI "which came first, the chicken or the egg?" question.
highplainsdem
(61,119 posts)in 2014 because of privacy concerns, and a lot of businesses and facilities banned them.
https://en.wikipedia.org/wiki/Google_Glass
That was before genAI, and unfortunately we have a lot of AI-addicted people now.
Crowman2009
(3,469 posts)
SheltieLover
(78,923 posts)
SheltieLover
(78,923 posts)
highplainsdem
(61,119 posts)especially considering how dangerous the Trump regime is.
But it has also surprised and disappointed me to see some Democrats and liberals apparently don't care that generative AI is based on worldwide theft of intellectual property, as long as any of the genAI tools resulting from that theft are in any way useful or amusing for them.
SheltieLover
(78,923 posts)Very disappointing, indeed.
As with all things, we all need to look at the potential for abuse.
Personally, I would never put some high tech gadget that close to my eyes.
yaesu
(9,172 posts)With lots of ass kissing.
hunter
(40,525 posts)
If she can only cook as well as Honeywell can compute.
Her souffles are supreme, her meal planning a challenge? She's what the Honeywell people had in mind when they devised our Kitchen Computer. She'll learn to program it with a cross-reference to her favorite recipes by N-M's own Helen Corbitt. Then by simply pushing a few buttons obtain a complete menu organized around the entree. And if she pales at reckoning her lunch tabs, she can program it to balance the family checkbook
https://en.wikipedia.org/wiki/Honeywell_316
This Apple thing will be even better! If you leave it on all the time, uploading all your data to the cloud, your family will be able to make a simulation of your best self when you're dead.
highplainsdem
(61,119 posts)And this, from the Wikipedia page...
hunter
(40,525 posts)Buy them the optional teletypewriter and the next thing you know they'll be finding employment and leaving home. And there you will be, alone in your big empty house eating cold cereal and sandwiches with your kitchen computer as your only companion.
chouchou
(3,005 posts)
highplainsdem
(61,119 posts)wonder if students wearing glasses are cheating via AI.
As if smartphones' effect on education wasn't bad enough...
chouchou
(3,005 posts)..text anytime and anywhere to anyone, while supposedly learning about subjects in school.
Some of the students text like a National News Service!
FakeNoose
(41,010 posts)... without having to hire a private detective. No thanks.