Ask HN: Would you trust an AI that sees and hears everything you do?

By: aurintex

I'm currently in a very early phase of my project, so I'm interested in your opinions on this.

Imagine an AI (e.g. a wearable) that is always integrated into your daily life. It sees what you see and hears what you hear.

Do you think this will be the future, with AI becoming ever more integrated into daily life? Or is humanity not yet ready for something like that? VR glasses, for example, are already quite polarizing when it comes to data protection and privacy.

By: JohnFen

2 days ago

I don't know about humanity, but I wouldn't use such a thing. I think it's unconscionable to impose such surveillance on unconsenting others. I would actively avoid being near anyone who did, to the best of my ability, as well.

By: aurintex

2 days ago

That makes me wonder: how do you currently deal with your smartphone, which often has wake-word detection enabled (like “Hey Google”)? And what about visiting friends who have an Alexa device at home? Would that already be a problem for you?

I’m genuinely curious where you draw the line. Because in practice, many of us are already surrounded by passive listening systems, even if we don’t actively use them ourselves.

By: JohnFen

2 days ago

> how do you currently deal with your smartphone, which often has wake-word detection enabled (like “Hey Google”)?

I keep all of that stuff disabled.

> And what about visiting friends who have an Alexa device at home? Would that already be a problem for you?

Yes, it is, although I only have one friend who has such a device. I tend not to spend much time at their place. If someone I knew had a wearable that was analyzing/recording everything, and they refused to remove it, I'd minimize the amount of time I'd spend with them.

> I’m genuinely curious where you draw the line.

"Drawing the line" isn't really the way I'd put it. As you say, we're surrounded by surveillance devices and there's little I can do about it. All I do is avoid them wherever it's possible for me to do so and be sparing/cautious about what I say and do when I'm near surveillance that I can't avoid.

By: aurintex

2 days ago

Totally fair perspective, and I respect how thoughtfully you approach it. It’s helpful to hear where others draw boundaries, even if the whole landscape feels hard to escape. Thx

By: fleeting900

1 day ago

Sometimes I feel like I live in a parallel universe to these guys who see value in things like this.

Where my life is mundane shit that most of the time I don’t even need the current generation of tech anywhere near. Walking the dog. Playing with and looking after my kids. Everyday conversations and intimacy with my wife. Barbecues with friends. Work.

And these guys' lives are just working out, coding, and cooking on-trend dishes with expensive cookware, all to be relentlessly optimised.

By: mikewarot

1 day ago

No. Photography is a hobby of mine, and even with a DSLR not hooked to any network, there are some things that just shouldn't be recorded.

For instance, I've never brought my camera to a funeral. Most daily life deserves the right to be forgotten.

Then there are privacy laws, etc.

By: kylecazar

1 day ago

The invasion of privacy is the obvious problem. But it isn't going to be functionally very useful to me either.

You're going to capture hours of walking and/or seemingly doing nothing, exchanging pleasantries/small-talk/banter. Without access to my thoughts, this is stuck in some superficial layer -- useless other than to maybe surface a reminder of something trivial that I forgot (and that's not worth it). Life happens in the brain, and you won't have access to that (yet).

By: aurintex

1 day ago

Just to clarify, the “reminder” example was just one possible use case. The idea isn’t to capture everything or pretend to read thoughts, but to offer flexible support based on context you control.

Curious though: if there were a way for an AI to understand your thoughts, would that even be something you’d want? Or is the whole concept off-limits for you?

By: kylecazar

1 day ago

Understood!

It's an interesting question -- I've thought about it a lot in the context of some hypothetical brain interface. There are a lot of unknowns but I personally would go for it with the very hard constraint that it be the equivalent of read-only (no agents here) and local (no cloud).

As potentially scary as it seems, I would not be able to fight the temptation to participate under those conditions. It would make elusive thought a thing of the past.

By: aurintex

1 day ago

A very interesting insight. I hadn't expected that, but of course it opens up new possibilities. Sometimes it would be really interesting to crawl back through older thoughts :)

By: Froedlich

1 day ago

If it was local (implanted, or on my body), it might be a useful tool with the correct training.

If it was networked, it would need to have much tighter security than the current internet.

If it was just a terminal to some corporate server running unknown software for purposes I wouldn't necessarily agree to, nope, nope, nopity-nope. Even if it didn't start off as a device for pushing propaganda and advertising, there's no realistic expectation that it wouldn't evolve into that over time.

By: skx001

2 days ago

Absolutely not.

By: whitehexagon

1 day ago

I only just got rid of my spy-phone, so why would I want something that spies on me even more intrusively? I certainly would not inflict that upon others. A sure way to lose friends fast.

BigTech has burned so much goodwill at this point that every new venture just feels like a timer ticking down to a bait and switch for ad revenue, subscriptions, pro features, or just selling our data to the highest bidder.

And what happens to 'local' data when the three-letter agencies want access? No thanks; it sounds completely dystopian. If the data is there, someone will find a way to abuse it.

By: aurintex

1 day ago

You're right: trust is at zero. My entire mission is built around your core premise: "If the data is on a server, someone will find a way to abuse it."

That's why the architecture is being designed differently, based on two principles:

- Fully functional offline: three-letter agencies can't access data that isn't on a server in the first place. The core AI runs on-device.
- Open core: you're right to expect a "bait & switch." That's why the code that guarantees privacy (the OS, the data pipeline) must be open-source, so the trust is verifiable.

My business model is not to sell ads or data. I'm trying to design a system where trust is verifiable through the architecture, not just promised.
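
To make "on-device only" a bit more concrete, here is a rough illustrative sketch (not real product code; the path and names are hypothetical). The point is that the key and the data never leave the device, and there is simply no network code path:

    # Illustrative sketch: device-local, encrypted-at-rest storage.
    # KEY_PATH is a hypothetical location; the key never leaves the device.
    import os
    from cryptography.fernet import Fernet

    KEY_PATH = os.path.expanduser("~/.assistant/key")

    def load_or_create_key() -> bytes:
        os.makedirs(os.path.dirname(KEY_PATH), exist_ok=True)
        if not os.path.exists(KEY_PATH):
            with open(KEY_PATH, "wb") as f:
                f.write(Fernet.generate_key())
        with open(KEY_PATH, "rb") as f:
            return f.read()

    def store_locally(record: bytes, log_path: str) -> None:
        # Append-only log on the device's own disk; nothing is uploaded.
        with open(log_path, "ab") as f:
            f.write(Fernet(load_or_create_key()).encrypt(record) + b"\n")

The open-core part is what would let anyone verify that this is actually all the code does.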

By: nmeagent

1 day ago

Consider a world where a sizeable fraction of the population has and uses such a device, such that its presence is assumed and ultimately mandated by authoritarian law enforcement entities, surveillance capitalist firms, and so on. Can you imagine the inescapable nightmare this would become even with the norms of today? Do you really want to offer fuzzy recall of two of five senses to "legitimate legal process", rapacious marketers, or anyone else who somehow gains access to these data?

Personally I would consider it a moral imperative to refuse to use such a device and to avoid anyone who does otherwise.

So no, please don't create such a thing. Stop now.

By: aurintex

1 day ago

I completely understand your concern, and I agree: the kind of scenario you describe would be morally unacceptable. I wouldn't want to build or be responsible for anything that could enable it.

That said, I often think about how this tension applies to nearly every new technology. Most tools can be used for good or bad, and history shows that progress tends to happen either way. If we had refused to develop technologies simply because they could be misused, we might not have any at all.

I do believe it’s possible to build responsibly through transparency, local-first design, and strong legal safeguards. The EU’s data protection laws, for example, give me some hope that we’re not entirely defenseless.

Do you see this kind of outcome as something we’re tangibly heading toward, or more as a warning of what could happen if we’re not careful?

By: GeoAtreides

1 day ago

The only '''''AI''''' I would trust with that is a Culture Mind. And maybe Earth Bet Dragon.

Definitely won't trust AI shackled to other humans.

By: conartist6

2 days ago

Trust it with what? Trust it to do what?

There's one thing AI can't do, and that's actually care about anyone or anything. It's the rough equivalent of a psychopath. It would push you to a psychotic break with reality with its sycophancy just as happily as it would, say, murder you if given motive, means, and opportunity.

By: aurintex

2 days ago

I imagine it helping me remember things or stick to my habits. For example, if I want to lose weight and have a weak moment, like ordering a burger, it could remind me of my goal.

It could also help me use my time better. If it knows what I’ve been doing lately, it might give me useful tips.

So overall, more like a coach or assistant for everyday life.

By: conartist6

2 days ago

Right, so now you're giving it your brain and body to run, and giving it the kind of trust and faith you had previously reserved for humans, and it's already managed to sever many of the ties to reality that were your layers of protection against a psychotic break. It is emphatically not a person, and if you allow yourself to start treating it like one and build a relationship with it, then you already have one foot over the edge of the precipice.

By: aurintex

2 days ago

Then we're in the Matrix ;) But I think this is already one step ahead of my idea ;)

I think I know what you mean, though: the human aspect should not be lost in the process. Do you see a way to unite the two in the future, i.e. supportive AI without losing the human touch? How could this be ensured?

By: conartist6

2 days ago

It is the universal experience of life and death that is the source of the human touch, as well as the incredible ability for empathic bonds to be formed even between animals of different species.

You want the human touch, make unique individual entities which experience life and death. Brb gotta go play with my cat.

By: aurintex

2 days ago

I agree that real empathy comes from lived experience. But I wonder if there’s room for something softer: systems that don’t pretend to be human, but still support us in deeply personal ways.

I’m building something that tries to stay on the right side of that line: not replacing human touch, but amplifying it. Curious how you'd draw that boundary.

By: conartist6

1 day ago

That was what software was, before AI.

Now nobody wants to do that anymore; they want to do AI.

By: andrei_says_

1 day ago

What percentage of things do you really forget?

Have you read about procrastination / resistance? The issue is not an absence of nagging but unresolved emotions / burnout etc.

By: ponector

1 day ago

For a personal-coach version, there should be a module that can deliver an electric shock to the owner in case rules have been violated.

By: aurintex

1 day ago

Yeah, that's a great idea, I will add this to the feature list ;)

But I think for the V1, I'll stick to positive reinforcement. Not 100% sure "aversive conditioning" builds the long-term trust we're aiming for. ;)

Cheers!

By: conartist6

1 day ago

You're amazing! That's a really good idea!

By: firefax

2 days ago

I don't trust god, I don't trust man, why would I trust a god made by man?

By: aurintex

2 days ago

I wouldn’t trust a “god made by man” either ;) That’s why I’m building something small, local, and transparent—more like a tool than a deity. Curious what kind of system (if any) you would trust?

By: firefax

1 day ago

Not only won't I help you build such a system, I'll hack it if you try :-)

By: aurintex

1 day ago

Good to know, I'll double-check commits if I see your name, haha ;)

By: firefax

1 day ago

The commits will be under your name :D

By: HardwareLust

2 days ago

No.

By: aurintex

2 days ago

Could you give more specific reasons?

By: diamond559

1 day ago

How about I just cut peepholes in all the rooms you live in and videotape you? How would you feel about that, hmm?

By: firefax

2 days ago

No

By: HardwareLust

2 days ago

Because I don't want my life to be training data for some stupid LLM so that you can try to become the next tech oligarch.

By: aurintex

2 days ago

Would it make a difference if the data were only stored locally on your device?

By: baobun

1 day ago

If that is verifiable by the stack being libre software, then perhaps.

By: HardwareLust

2 days ago

It might, depending on how you intend to anonymize the data, and whether you intend to market that data to other companies.

By: aurintex

2 days ago

The core idea is that the AI runs locally on the device, and all data is stored on the device. Therefore, no data will be shared or sold to other companies.

Regarding anonymization: do you mean, what if I pointed the camera at someone else? That would be filtered out.
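
To sketch what "filtered out" could mean technically (a toy example only; is_owner() is a hypothetical placeholder for on-device face matching, and a Haar cascade is just the simplest stand-in for a real detector):

    # Toy sketch: irreversibly blur every detected face that isn't the
    # enrolled owner, before anything is ever written to storage.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def is_owner(face_crop) -> bool:
        # Hypothetical: match against the owner's locally enrolled template.
        return False

    def redact_bystanders(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            if not is_owner(frame[y:y+h, x:x+w]):
                frame[y:y+h, x:x+w] = cv2.GaussianBlur(
                    frame[y:y+h, x:x+w], (51, 51), 0)
        return frame

Whether that is reliable enough in practice is exactly the open question.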

By: diamond559

1 day ago

Oh, so you'll peep on everything we do, but don't worry, only you and your team will be able to be the voyeurs. lol, lmao even. Do you ppl even hear yourselves talk?

By: BolexNOLA

1 day ago

>what if I pointed the camera at someone else? That would be filtered out

I'm no expert at this but that sounds a lot harder to implement than you're implying, especially if it's all locally stored and not checked over by a 3rd party. What's to stop me from just doing it anyway?

By: aurintex

1 day ago

That’s why the plan is to invert the usual logic: instead of capturing everything and trying to filter later, the system would reject everything by default and only respond to what the user explicitly enables, similar to how wake-word detection works.
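
As a rough sketch of that inversion (illustrative only; all names are hypothetical): nothing is ever stored by default, and a chunk of sensor data survives only if a user-enabled trigger explicitly claims it.

    # Minimal sketch of a default-deny gate, in the spirit of wake-word
    # detection: drop everything unless an enabled trigger matches.
    from typing import Callable, Optional

    class DefaultDenyGate:
        def __init__(self) -> None:
            # Empty by default: with no triggers, nothing ever passes.
            self.triggers: list[Callable[[bytes], bool]] = []

        def enable(self, trigger: Callable[[bytes], bool]) -> None:
            self.triggers.append(trigger)

        def process(self, chunk: bytes) -> Optional[bytes]:
            if any(trigger(chunk) for trigger in self.triggers):
                return chunk  # only explicitly claimed data survives
            return None       # default path: discarded, never written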

I’ve also thought a lot about trust. Would you feel differently if the system were open source, with the critical parts auditable by the community?

By: BolexNOLA

1 day ago

I mean, maybe this is just ignorant of me, but can you really build an app where the AI is completely disabled out of the box, is totally locally controlled by the user, who is then able to customize its activation with such granularity/control that it will only film them and not other people? Is that something one can actually build and expect to be reliable? Can this actually work…?

I mean, generally speaking, yes to open source, but the issue is that if it's open source then people can easily disable the safeguards with a fork, so idk, I feel mixed on it. I'm still leaning towards yes, because in general I am for open source. But I'd have to think about it and hear other people's takes.

By: moose_man

2 days ago

no

By: aurintex

2 days ago

Could you give more specific reasons?

By: Froedlich

1 day ago

Sounds like Jeff Duntemann's "jiminy", which he wrote about in PC Techniques magazine back in 1992. A matchbox-sized general-purpose computer and life log, infrared connections to peripherals as needed, with a mesh network to other jiminies and the internet-at-large. Jeff didn't use the term "AI", but that would describe how it worked.

http://www.duntemann.com/End14.htm

Elon Musk's portable-Grok-thing is a long step toward the jiminy idea.

By: dragonwriter

1 day ago

> A matchbox-sized general-purpose computer and life log, infrared connections to peripherals as needed, with a mesh network to other jiminies and the internet-at-large. Jeff didn't use the term "AI", but that would describe how it worked.

Notwithstanding that most of the mobile OS’s are locked down more than some would prefer for a “general purpose computer” (but less than is likely for a porta-Grok), and that most devices are bigger than a matchbook to support UI that wouldn't be available in that form factor (though devices are available in matchbook size with more limited UI), and that it mostly uses RF like Bluetooth instead of IR for peripherals because IR works poorly in most circumstances, isn’t that what a smartphone is?

By: aurintex

1 day ago

Really interesting link to "jiminy"; I hadn't heard of that before. I also didn't know Musk was planning something similar, but I think it's more of a phone than a wearable?

By: diamond559

1 day ago

No.

By: codingdave

2 days ago

No!

By: neximo64

2 days ago

Rewind was quite good

By: aurintex

2 days ago

That's a good point. Did you use it? If so, what did you use it for? Could you imagine something like Rewind, but with a camera, being useful?

By: sph

2 days ago

Long answer: fuck no. Not even if it was free. Not even if it was libre software.

By: aurintex

2 days ago

So even if it were open source, that wouldn’t be a reason for you to trust it?

Just curious: what would have to change for you to even consider it? Is it more about the concept itself, or the way it's implemented?

By: sph

1 day ago

Literally nothing would change my mind: I do not care to have anything that sees and hears everything I do. It's not a problem I will ever have, and the issues and vulnerabilities of such a thing are too many to even consider.

By: Fairburn

1 day ago

Build a POC for yourself and use it. Go from there.

By: paulcole

2 days ago

Yes, I would trust this 100%.

By: aurintex

1 day ago

Is there anything you’re thinking about where this might be useful for you?

By: paulcole

1 day ago

No.

I think that there is some limit to how much additional information is useful to the AI tools that I use. I don’t know where that limit is and I also think that models are getting better all the time, so storing the data now for later use might be useful.

I have no idea how much it would cost to store/analyze 14-18 hours of data a day. I'm assuming it could be post-processed to delete the useless stuff?

Obviously I understand the privacy zealots' issues with this technology. But I'm going to be dead in a couple of decades, and this idea sounds interesting to me. To me, whatever risk there is would be worth the unknown reward.

By: aurintex

1 day ago

Thank you, that's a very valuable and encouraging contribution :)

I've also considered the idea of collecting data now that might be useful later. However, at least within the EU, data storage must be purpose-specific.

That's why I believe everything should be rejected immediately (similar to how wake-word detection works), and only the data the user explicitly enables the AI to store should be retained. That alone reduces the required storage. On post-processing: agreed. I can also imagine something like compressing the information of one hour into a summary, then compressing those into a day-level summary, and so on.
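
Roughly like this (an illustrative sketch only; summarize() stands in for whatever small on-device model does the distillation):

    # Hypothetical hierarchical compression: minute logs -> hour
    # summaries -> one day summary, discarding raw data as it's distilled.
    def summarize(texts: list[str], level: str) -> str:
        # Stand-in for a small local summarization model.
        return f"[{level} summary of {len(texts)} items]"

    def compress_day(minute_logs: list[str]) -> str:
        hours = [summarize(minute_logs[i:i + 60], "hour")
                 for i in range(0, len(minute_logs), 60)]
        return summarize(hours, "day")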
