nerdjon 2 days ago

I can no doubt see the value in this. Even as a big skeptic of AI, the ability for a computer to just pull up details from throughout the day or interject is a genuinely powerful idea (whether it works in practice is another question).

However, I find it very frustrating that all of the tools that just record your mic or sell a pin or whatever never think about privacy. They seem to take the Google approach that the person who owns the device is allowed to consent for everyone else around them (or any data they share with certain people), when that is not ok.

That is before talking about the major concern of people doing this at work without their company knowing, bringing it into a meeting and potentially leaking sensitive information.

I realize that unfortunately there is nothing that can be done at this point to stop this technology from being used by people, but I just wish the people that were making these tools thought about the fact that you are not asking for the consent of people around you.

  • the_snooze 2 days ago

    >They seem to take the Google approach that the person who owns the device is allowed to consent for everyone else around them (or any data they share with certain people) when that is not ok.

    This is what's often overlooked (intentionally so?) in discussions about privacy. Too much of privacy is framed as "I am OK with sharing this data." But too often the data you hold is actually about other people: contact lists, conversations, associations. When you let a third party sift through your information, you're also making that decision for everyone else you interact with. I think the right way to think about privacy is: even if I have nothing to hide, I respect the agency of those I communicate with, so I won't make that decision to disclose unilaterally.

    • nerdjon 2 days ago

      I flat out refuse to let an app go through my contacts; there is just no reason for it. Photo access is always limited to what I select, never full access. Etc.

      So when one of the social media sites recommends someone I may know, when I have no other connection to this person except that I recently shared my number with them, I get angry.

      I will never understand how we got to the point that someone else can consent to a company gathering up my data just because it happens to be on their phone.

      I don't think it's overlooked; some people talk about it. But a lot of companies try to push it away because they claim to take "privacy" seriously while ignoring the consent problem.

      • caseyy 2 days ago

        When I lived in a shared house, Facebook learned about my relation to my housemates without my consent. I didn’t use a single Facebook product at the time and used a VPN on my devices. A few years later, when I signed up to Facebook, my ex-housemates were all “people I may know”.

        I was wondering for a long time how it knew. I think it was because some of my ex-housemates shared their contacts with FB and it discovered us as a social group.

        It is really an eye-opening experience to sign up to Facebook for the first time in recent years. It already knows so much about you and your interests. It’s as if there was already a profile with your name on it that they were building, without even approaching to ask for consent.

        • techjamie 2 days ago

          Facebook has done this for a long time. They're commonly referred to as shadow profiles. Profiles built up purely on information provided by other people's devices/tracking measures where the person in question doesn't directly have a Facebook account themselves.

        • kevin_thibedeau 2 days ago

          I had an Android phone with Facebook integration in the camera app. It would ping Facebook every time you started it up. No doubt with all the permissions of the camera app, including GPS access. From there it's trivial to infer your associations.

          • caseyy 2 days ago

            That is devious. I’m not sure if my phone did anything like that as it was an iPhone 6 at the time.

      • userbinator 2 days ago

        > I will never understand how we got to the point that someone else can consent to a company gathering up my data just because it happens to be on their phone.

        It's no longer "your" data if you gave it to someone else.

  • makeitdouble 2 days ago

    > the tools that just record your mic or sell a pin or whatever never think about privacy

    That's not the tool's job.

    Fundamentally this _has_ to be the operator's job to take consent and deal with the legal repercussions of how the tool they run on their own device for their benefit works.

    This is exactly how a physical microphone works: there are no ethical safeguards that prevent you from pushing the record button if you're not using it according to your state's law.

    We had your approach for phone calls on smartphones, especially on iOS, and as a result the vast majority of people effectively can't record phone calls even when they have every right to do so. In the current situation, companies will record the call while you won't, which sucks.

  • 627467 2 days ago

    What do you think will happen faster: some "social" agreement that technology CANNOT be used to its full extent (i.e. I can't use my tech to its fullest extent, like augmenting my own personal record-keeping), or society learning how to behave in this new environment, where anyone anywhere can be recording?

    Remember when cameras started appearing on phones and supposedly phones had to make noises when photos were taken, or when special phone models were made WITHOUT cameras? What happened to those conservative restrictions?

    Obviously, some personal responsibility for how this personally recorded data is handled, stored and shared must be upheld, but I'm skeptical of attempts at banning these new capabilities, especially since, as you say, organizations have long been able to do this, just not individuals (not easily, at least).

    • dbspin 2 days ago

      I share your skepticism that rules can contain these technologies. But "society to learn how to behave in this new environment" underplays how continuous sousveillance [1] limits our individual liberty. So much of our freedom in public (which has declined enormously in my lifetime) comes from our relative anonymity in public spaces. Sure, we are being monitored by CCTV and so on, but until very recently we had a basic assurance of freedom to speak and act without being individually identified and monitored.

      The normalisation of AI + continual recording from audio or video recording devices (say future smart glasses) would create an entirely different order of self consciousness.

      The chilling effect of a subconscious awareness of being continually observed and recorded precludes many kinds of practical freedom: political protest, 'controversial' speech, spontaneous performance, etc. Paradoxically, being able to share our thoughts at any time also reduces their value and their capacity to amuse, entertain, or instigate change. Have you ever tried to have a conversation with someone who's filming you with their phone? Or tried to completely forget the 'hot' mic in an interview? There's a basic disingenuousness to performance, and if we're all forced to perform all the time, the healthy space for the self is diminished to the interior: a place our technology will no doubt force into the light in due course.

      1 - https://en.wikipedia.org/wiki/Sousveillance

      • oriettaxx a day ago

        super!

        I trust humans and their ability to become more conscious, as you describe: yes, it will take some generations, but the direction is conscience and the struggle for freedom. I've just read an article about slavery in England before and after the Norman Conquest of 1066 (up to 30% of the population were slaves, who were treated brutally), and look at how England is today (and at us writing this in English).

        What I hope, in my lifetime, is to keep trusting humans by seeing trends and tools that go in that direction.

  • deegles 2 days ago

    We just have to look at the recent news about 23andMe planning to sell all of its users' data to whoever buys the company. Sure, they can have a privacy policy and all that, but does it really matter when the company can be sold and the buyer can say "all previous agreements are null and void, your data is ours now"?

  • KolmogorovComp 2 days ago

    > I realize that unfortunately there is nothing that can be done at this point to stop this technology from being used by people

    Seems like you answered your own question? Since there's nothing they can do, what would you want them to do?

    • nerdjon 2 days ago

      Acknowledge the problem?

      Build in functionality so it only recognizes your voice and everything else is removed?

      Don't hide behind "privacy" promises or "your data is secure" when that doesn't fix the issue.

      Just because we can admit that the reality is the problem is not going away doesn't mean that we just give up and not talk about it.
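      The "only recognizes your voice" idea is technically plausible: speaker-verification systems compute a fixed-size voice embedding for each audio segment, compare it against an enrolled embedding of the device owner, and discard anything below a similarity threshold. A minimal sketch of that filtering step, with made-up 3-d vectors standing in for a real speaker-embedding model (no such feature is confirmed in any of these products):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_to_owner(segments, owner_embedding, embed_fn, threshold=0.75):
    """Keep only segments whose speaker embedding matches the owner's."""
    return [s for s in segments
            if cosine_similarity(embed_fn(s), owner_embedding) >= threshold]

# Toy demo: embeddings are 3-d vectors keyed by segment id (a real system
# would compute them from audio with a speaker-embedding model).
fake_embeddings = {
    "seg-owner-1": [1.0, 0.1, 0.0],
    "seg-guest-1": [0.0, 1.0, 0.2],
    "seg-owner-2": [0.9, 0.2, 0.1],
}
owner_voice = [1.0, 0.0, 0.0]
kept = filter_to_owner(list(fake_embeddings), owner_voice, fake_embeddings.get)
print(kept)  # ['seg-owner-1', 'seg-owner-2']
```

Everything that fails the match would be dropped before storage, which is the property being asked for: other people's voices never persist.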

    • mihaaly 2 days ago

      Maybe not arguing for the acceptance of a bad situation just because it is ubiquitous?! I have a strong dislike for people who know perfectly well that something is bad but still raise their voice for the acceptance of that bad thing on the basis of its size, of giving up and doing absolutely nothing against it, not even trying.

      ... or ... if I see it from a different standpoint, then doing nothing is not that bad an idea actually. In my case I make it very hard to share (access) pictures of my kids with relatives. Knowing how big tech hijacks the communication channels used by average folk, and how ignorant most people are of the privacy implications for others, I'd rather not share pictures except with those few who can handle a secure channel and treat private things the way they're supposed to be treated. Or they can come over in person. So I have to agree now: doing nothing, not using contemporary technology, is a great idea! And ban it from (your own) children too.

    • financetechbro 2 days ago

      Come up with a solution, most likely. Since when has this crowd bowed down to a challenge because “there is nothing you can do”?

  • m-louis030195 2 days ago

    To easily take advantage of all these recordings, we're making it easy to extend them, and as LLM context grows every day and video/image/audio LLMs arrive, we'll be able to ingest a week, or a lifetime, of 24/7 human recordings.

  • oriettaxx 2 days ago

    my point of view is different

    1. I would submit data only to a private (in house and even on premises) LLM setup (think of ollama, for example)

    2. By using this (especially experimenting with it in a context as close as possible to your kind of professional 'circle of trust'), users become conscious of the power and the risks

    It's not easy, but take a stupid example I've read about:

    Nothing better than having a color printer in your office to show your workers that any document can be reproduced indefinitely (think of a $100 bill)

    • sdenton4 2 days ago

      One year later:

      "To interact with our support bot, please enable screen recording and keyboard logging."

      "Thanks! Now go to the settings and enter your password."

      "Thanks! All your Bitcoin are belong to us!"

      • oriettaxx a day ago

        yes, example gives the idea

        What scares me most is not "cash", but how culture (and world culture) is affected by this.

        Take HN, for example: yes, it's a great place and we like it, and we know the quality of its content, but the influence of HN on AI chat results is way too high.

        As an example: the other day I wrote a post here where I asked people to post AI answers to a question. A couple of users posted their answers... and then, a few hours later, I re-posted the question myself in Perplexity and Phind...

        well... and the first answer was a link to my post in HN (!)

        English content is then easily translated into other languages... which is great, but it's too much influence... it gives even more "power" to an already powerful "Western" culture.

        Take what Esperanto was vs. the predominant English culture.

        So, I am more scared of what we'll lose.

        Take pellagra, the disease, and how it spread in Europe: a total catastrophe just because we took corn from the Americas while losing, along the way, the knowledge of how to prepare it (nixtamalization), something (another thing) Mesoamerica's natives knew very well. Such "details" can easily be lost, I'm afraid.

  • amelius 2 days ago

    From the readme: "Open. Secure. You own your data."

    • Cheer2171 2 days ago

      That's not what they were saying. What comes across your screen and can be picked up by your device's mic includes other people's data. This is a nightmare for anyone in the EU or two-party consent state. If we are on a Zoom call, this is recording both sides of the conversation without notice or consent. If we are meeting in person, and I leave my device on the table with this installed, same problem.

      • piltdownman 2 days ago

        Why would it be a 'nightmare' for anyone in the EU?

        Only single party consent is required for recording conversations whether on a phone or in person - Ireland, Italy, Czechia, Latvia, Poland, The Netherlands... in fact the only prominent country that comes to mind re: two party consent is Germany.

    • nerdjon 2 days ago

      Correction: you own my data (if we are talking near your device) if you are using this.

      That is the problem here.

      • bbqfog 2 days ago

        That's not really an AI problem, it's an analog hole problem and the very nature of communication. If you give someone your thoughts, they now have them.

        • fwip 2 days ago

          Well, the difference is that solutions like this automate the hoovering-up of everybody's data.

          I know that some things that people share with me are more or less sensitive, and I use my knowledge and empathy to decide what I should and shouldn't repeat. I would tell my friend "Oh yeah, Carl said he can't make it to the party tonight," but I wouldn't tell them "Carl is absent from the party due to an IBS flare-up."

          A well-meaning friend or coworker might never consider telling another person these personal details, but not think twice about it when enabling a service like this.

          • drdaeman 2 days ago

            But a service like this doesn't have the agency to disclose anything to anyone, does it?

            As I get it, it's meant for personal use, not for automatically sharing with anyone, although those Notion plugins are potentially a gray area, simply because Notion is a third party.

            The idea is that if you forgot whether Carl can make it or not, you can look it up on your private computer (which is assumed to be reasonably secure against unauthorized access), not that it should somehow act as a virtual secretary and automatically respond to others about whether Carl can make it to a party. Doing the former does not create any issues wrt privacy (privacy is about not sharing sensitive data, and there is no data sharing); doing the latter is questionable at best.

            Recordings increase risks, and it's concerning that I don't see a single word about even the existence of retention policies (or about whether Screenpipe can "realize" and go off the record when it detects something sensitive, pausing recording until it notices the topic has changed), but besides that I'm not sure how it harms any reasonable notion of privacy. IMHO, trying to prohibit making notes is not really reasonable, but not sharing such notes with anyone without consent is very reasonable (YMMV, of course).
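            A retention policy of the kind this comment asks about doesn't need to be sophisticated. As an illustration only (Screenpipe's actual storage layout is not described in this thread, so the directory-of-files assumption is hypothetical), a periodic job could prune any recording older than N days:

```python
import os
import time

def prune_recordings(directory, max_age_days=30):
    """Delete files older than max_age_days from a (hypothetical)
    flat directory of recording files; return the names removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        # Only plain files are considered; modification time stands in
        # for the recording timestamp in this sketch.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Run from cron or a timer, this enforces "nothing older than a month exists to be breached", which is a weaker but far simpler guarantee than detecting sensitive topics on the fly.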

          • ImPostingOnHN 2 days ago

            Neither I nor you would tell people about Carl's IBS if he told us.

            How would writing down what we are told change things, versus remembering it, considering what we write down wouldn't be shared with anyone but ourselves?

            How about recording, considering what we record wouldn't be shared with anyone but ourselves?

            How would somebody suffering from a disability such as hearing loss, vision loss, memory loss, or IBS change that? Should people with disabilities be forced to disclose them whenever they have a conversation? Does that include Carl with IBS? Or would some people with disabilities have to disclose while others are allowed to keep theirs private?

          • bbqfog 2 days ago

            It is inevitable that this is the direction we're headed. It's already been happening, Google, browser plug-ins... can all read both sides of an email conversation. Your phone can (if it wants) listen to you and those around you. Cameras are everywhere... at least this is self-hosted and open source. Since there will never be an option for pre-electronics privacy again, it seems like this is the direction we should push to balance out the power of corporations and governments.

            • drdaeman 2 days ago

              > Since there will never be an option for pre-electronics privacy again

              Why do you think so?

              You can still have a private conversation to which no third parties are privy; you just have to use the right tools and avoid the tools that don't match your requirements. That is getting harder, as most tools don't seem to have real privacy in mind (everyone loves the word and boasts about it, but few actually mean or practice it), but there are still plenty of tools that do, and no signs this niche is dying or anything.

              Even big tech seems to respect this to some extent, because there is genuine demand for explicitly end-to-end private conversations, and because it allows companies to avoid liability and keep certain cans of worms closed. I'd say the trends aren't looking strictly dystopian in this regard, and it's still uncertain how everything is going to turn out.

          • CaptainFever 2 days ago

            Please don't use words like "hoover" in relation to data, it makes no sense.

            This reasoning goes against the Right to Delegate [1]. It doesn't matter if you're remembering it using your brain, a notebook, an assistive tool, or Screenpipe. You're just remembering it.

            Whether or not you would tell another person these details is another matter altogether, and has no relation to Screenpipe, which does not, from my knowledge, republish your information anywhere else automatically.

            In summary: we're talking about an auto-remembering tool here. Your example talks about repeating certain sensitive details. There is no relation between your example and the topic on hand.

            "But isn't it the same? You're repeating the information to Screenpipe!"

            In the context of your example, I consider "repeating" to be "repeating to a person". See:

            > A well-meaning friend or coworker might never consider telling another person these personal details, but not think twice about it when enabling a service like this.

            "Another person". Just as you would not think twice about repeating such data to a notebook (e.g. maybe you have bad memory), you also would not think twice about repeating such data to Screenpipe.

            There are some exceptions, such as confidential information, where you would not be allowed to write things down on a notebook. In those cases, yes, you should not use Screenpipe. But the IBS example does not cover that; a friend, in my experience, generally does not expect such levels of secrecy unless they explicitly request so (e.g. "Hey, turn off your computer for a while/Hey, don't bring any phones to my house. We need to talk.").

            "What about phone recording laws?"

            Yeah, you're right. That throws a wrench in my argument that automated remembering has no difference to manual remembering. I could say that there ought to be no difference, but you're right that this isn't necessarily the case.

            I could then argue that this only applies to phone calls, and perhaps video calls. For example, there's no need to get the consent of the people around you in public when you record or take photos, as long as you're not harassing someone.

            But then this just becomes a legal question (what is) rather than a moral one (what ought to be).

            [1] https://anewdigitalmanifesto.com/#right-to-delegate

  • goldfeld 2 days ago

    Potentially... I'd say it's absolutely leaking sensitive information.

  • thelastparadise 2 days ago

    > or interject

    Clippy?

    • nerdjon 2 days ago

      TBH I think the problem with Clippy was more a problem with its implementation than the core idea.

      It just wasn't helpful, and we lacked the tech at the time to really make it helpful. My impression was that it was basically a glorified AIM chatbot with some hooks into Office to get some context.

neilv 2 days ago

Before things like this become widespread, can we get some etiquette (and legal awareness)?

On one recent videoconf with a startup founder, it turned out that they were using some random AI product with access to video and audio on the call, without my knowledge or consent.

(Thanks a lot; now some AI gold rush company has video and audio recording of me, as well as ingested the non-public information on the call.)

Their response to me asking about that made me sure I wanted nothing to do with that startup.

But some AI company probably still has that private data, and if so, presumably will go on to leak it various ways.

And it seems the only way to get an AI goldrush company to actually remove all trace of data it's obtained sketchily/illegally, might be to go legal scorched-earth on the company and its executives, as well as to any affiliates to whom they've leaked.

We shouldn't need random people undertaking filing lawsuits and criminal complaints, just because some random other person was oblivious/indifferent about "consenting" on their behalf. Which "consent" I don't think is legal consent (especially not in "two-party" states), but AI goldrush companies don't care, yet.

  • cj 2 days ago

    > Before things like this become widespread, can we get some etiquette (and legal awareness)?

    I was watching CNBC interview last week with the founder of https://mercor.com/ (backed by Benchmark, $250m valuation).

    The founder was pitching that their company would take every employee's employment and payroll history (even from prior roles) and use that to make AI recommendations to employers on things like compensation, employee retention, employee performance, etc.

    The majority of what the founder was describing would clearly be illegal if any human did it by hand. But somehow, because an LLM is doing it, it becomes legal.

    Specific example: in most states it's illegal for a company to ask job candidates what their salary was in prior roles. But suddenly it's no longer illegal if a big company like ADP feeds all the data into an LLM and employers query the LLM instead of the raw dataset.

    Copyright issues weren't enough to regulate LLMs. But I suspect once we start seeing LLMs used in HR, performance reviews, pay raise decisions, hiring decisions, etc., people will start to care.

    [0] https://www.cnbc.com/video/2024/09/27/streamlining-hiring-wi...

    • drdaeman 2 days ago

      > But somehow, because an LLM is doing it, it becomes legal.

      IANAL, but I believe it does not. As it was famously said way back in the day, a computer can never be held accountable, therefore a computer must never make a management decision.

      There is always a human in the loop who makes the actual decision (even if that's just a formality), and if this decision is based on a flawed computer's recommendation, the flaws still apply. I think it was repeatedly proven that "company's computer says so" is not a legal defense.

      • sterlind 2 days ago

        Aren't credit score checks literally this? Credit scores are assigned to you by feeding your personal data into some proprietary model. That score determines whether you get a loan, or a job, or a security clearance. There are policies that use hard cut-offs. How is that not exactly this?

        • consteval 2 days ago

          It is exactly that (basically), and there's numerous ethical arguments around credit score. But credit score was in a somewhat unique position, because what predated it was obvious racism, sexism, and other types of discrimination.

          For both companies and consumers, it was a step up. Now, I'm not sure if that's still the case.

          Today there's still many legal and moral qualms about using credit score for job applicants. It's illegal in many areas and highly scrutinized if a company does this.

        • drdaeman 2 days ago

          Well, I haven't really kept track of this, but I believe some states (at least California and Illinois) prohibit the use of credit scores for employment decisions, and I think there was some legislation banning this that was approved by the House but hasn't passed the Senate, or something like that...

          So, yeah, you're right that it's an issue, and chances are we'll see wider bans on this (governments are extremely slow).

    • kevin_thibedeau 2 days ago

      > take every employee's employment and payroll history (even from prior roles)

      Selling and purchasing employment history is thankfully banned in a growing number of states. Their business prospects in the US will eventually shrink to zero.

      • cj 2 days ago

        But is it illegal to buy an LLM trained on such data?

        No, perfectly legal and acceptable (if we apply the same standards as we did for copyright).

        • drdaeman a day ago

          I'm not an ML expert, but I'm not sure a _language_ model makes sense here.

          A model like that… It's basically going to be a ZIP-code-to-salary-coefficient mapping on steroids (with more parameters). The model by itself is probably (IANAL) legal if it can no longer produce data points for individuals, but whether using it for hiring purposes is legal certainly depends on the inputs: e.g. feed it a protected category (or data that strongly correlates with one, e.g. name -> gender) and it most likely won't fare well in court.

  • grendelt 2 days ago

    A number of states have laws requiring you to disclose that a call is being recorded. I pointed this out to a former supervisor who was using otter.ai to record and transcribe calls he would make with potential leads and conference contacts. He said that law didn't apply in the state of our home office. I told him it applies to calls made to subjects in other states. He disagreed. Not a lawyer, but I don't think that's how it works.

    And something like this... yikes. Sure, boss, you're not doing eyeball tracking, but all they would have to do is install this on a work computer, ask the AI for the percentage of time not directly spent hammering out code, and pay you piecemeal per keystroke or something truly awful. (And the webcam module is coming later, I'm sure.) The future is dystopian.

    • eightysixfour 2 days ago

      These are called one-party vs. two-party (or all-party) consent states; one-party states require only a single party to the conversation to consent to being recorded. When the call is across state lines, the ECPA is the controlling federal law, which requires the consent of only one party, although in theory you could take them to court in the state that has the all-party rule. In general it is considered "nice" to ask, but a court case across state lines probably isn't going to go in favor of the person with the complaint.

      There are far more one-party states (37) than two-party ones.

    • SparkyMcUnicorn 2 days ago

      IANAL. I asked a lawyer about this, and they said that recording a phone call from a one-party consent state to someone in any other state is typically legal without consent, as long as the person recording is on the call.

      Personally, I think it's courteous to at least inform that it's being recorded, legality aside.

      > typically the law applies to the state where the recording is made.

      https://recordinglaw.com/united-states-recording-laws/one-pa...

  • goldfeld 2 days ago

    If this really is to become widespread, it means we as a society have gotten completely lost along the way. It would be an admission of defeat for the human element. But the magnates are gonna love it.

  • simonw 2 days ago

    "But some AI company probably still has that private data, and if so, presumably will go on to leak it various ways."

    Both OpenAI and Anthropic don't train on data sent to them via their API. If they leak that private data it was from a security breach, not because they piped it into their training run.

    (Cue a dozen comments saying "if you believe them about that you're naive", to which I have no useful response.)

    • roywiggins 2 days ago

      I moderately trust them not to egregiously lie, but what about every AI startup that's just a thin layer around an OpenAI API? What are their privacy policies, have they changed in the last 12 hours, and what happens when they go bust and sell their assets off?

      • simonw 2 days ago

        Yeah, I agree - if you're sending data to some OpenAI wrapping company you have much less insight into whether or not they are permanently logging the data that you send them.

Cheer2171 2 days ago

This is a nightmare and illegal for anyone in the EU or two-party consent state OR anyone who interacts with others in the EU or any two-party consent state. What comes across your screen and what can be picked up by your device's mic includes other people's data. If we are on a Zoom call, this is recording both sides of the conversation without notice or consent. If we are meeting in person, and I leave my device on the table with this installed, same problem.

I can't find anything in the docs about how to pause or temporarily stop data collection. People don't like others recording every interaction they have, which is what killed Google Glass.

  • xd1936 2 days ago

    Are you sure Google Glass looking absurd wasn't what killed Google Glass?

    • Cheer2171 2 days ago

      It looking absurd was a key element in the creepiness factor, because it drew attention to how you knew someone else was recording you.

      • spookie 2 days ago

        A good thing! But every day we get nearer to AR glasses that look like ordinary glasses.

  • ls612 2 days ago

    On the other hand it's my system and I don't care what the Euros or anyone else thinks about the software I run on it. Maybe if you are a big business with compliance departments and whatnot this would be an issue but this product seems more geared towards tech savvy individuals.

  • brokencomb 2 days ago

    Not all EU countries are two-party consent.

  • KaoruAoiShiho 2 days ago

    No it's not what killed google glass lol.

vid 2 days ago

I tried it a week or so ago, and it didn't really go well. Transcription was terrible, even though the same model (whisper-large) on its own works well. The UI messages tried to make it sound like a straightforward app; they should have been much more helpful, since many things went wrong with only a "try again later" type message. I also wish they'd bundled more functionality into the service, so the app could be a simple web front end that uses its API.

At a greater level, I've always wanted something like this, but we shouldn't be able to collect info on other people without their permission. The dangers of breaches or abuse are too great.

Since corporations are holding back the greatest benefits, we should be able to remember commercial works we've accessed, and collect and organize info on corporations, and some info on governments, but by their nature corporations will be better at getting the upper hand and the resulting attacks on government might not be for the better for society at large.

Yes, some parts of some governments commit abuses which should be called out, but those are specific departments; other departments work to other principles, sometimes pushing back against the abuse. Otherwise, comparing governments to business or computers is a downward spiral. This[1] is an interesting series on this topic.

1. https://en.wikipedia.org/wiki/All_Watched_Over_by_Machines_o...

  • m-louis030195 2 days ago

    Hey, sorry about this. We've improved voice activity detection since then; you can also use Deepgram if you want higher quality.

    • vid a day ago

      Good to hear you improved it. I just need it to be as good as standalone whisper-large. I certainly don't want to send all my conversations to a third party.

qntmfred 2 days ago

I've been playing around with this concept this year as well. Not surprised to see the negativity in the comments. But there is value in this concept, and some of us are going to explore it and find out how to make it work without the dystopian weirdness some of y'all immediately jump to.

I've recorded roughly ~1000 hours of livestream of myself this year. 90%+ of it is boring nothingness, but it's not intended for public consumption anyway. I have chopped up the interesting moments I do capture and posted them on YouTube/Twitter, and will continue to, but it's mostly for myself.

I haven't quite gotten to the point where I can articulate the benefits of this practice in a way that sufficiently piques the interest, or at least acceptance, of others in the tech community or otherwise, but that's ok. I did make one video where I started to touch on this idea https://www.youtube.com/watch?v=2zqXkNhaJx0 and I will keep working on it, hopefully demonstrating the personal value (and perhaps societal value at scale) as time goes on.

The future is going to get pretty weird. Keep an open mind.

elif 2 days ago

Hmm this might unintentionally be the glue that brings us actually smart Star Trek computer assistants.

Think about it: you can access every single kind of automation through a desktop app or Android emulator. Screenpipe becomes a working memory that the LLM develops to have context of its previous actions AND their results.

"Computer, lights" can not only send out a signal like a TV remote, it can check a camera to make sure that the light for the room the user is looking at actually turned on at an appropriate light level for the situation.
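That act-then-verify loop can be sketched in a few lines. Everything below is invented for illustration (`ROOM_STATE`, `send_lights_on`, `camera_reports_lit` are stand-ins, not part of Screenpipe or any real smart-home API); the point is only the control-loop shape: issue a command, then confirm the result instead of assuming it.

```python
# Illustrative stub of an act-then-verify "Computer, lights" loop.
# ROOM_STATE and these functions are invented for the sketch; a real
# system would talk to a smart-home hub and a vision model instead.
ROOM_STATE: dict[str, bool] = {}

def send_lights_on(room: str) -> None:
    """Fire-and-forget command, like an IR blast or a Zigbee message."""
    ROOM_STATE[room] = True

def camera_reports_lit(room: str) -> bool:
    """Stand-in for asking a camera/vision model if the room looks lit."""
    return ROOM_STATE.get(room, False)

def lights_on_with_verification(room: str, retries: int = 3) -> bool:
    """Closed loop: issue the command, then verify the outcome."""
    for _ in range(retries):
        send_lights_on(room)
        if camera_reports_lit(room):
            return True
    return False

print(lights_on_with_verification("office"))  # True (the stub always succeeds)
```

The retry bound matters: without verification you get the classic "the light didn't actually turn on" failure, and without a bound you get an agent stuck in a loop.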

  • ericd 2 days ago

    I’ve dabbled with building something like this for myself, and I’m guessing it’s not totally unintentional; it’s the first step. After getting it to interpret what it sees on your computer, give it the ability to use the mouse/keyboard, maybe in a sandboxed VM, and try some basic tasks, working up from there.

    No way I’d use something like this that wasn’t local-only, though.

darknavi 2 days ago

How is the data stored? Isn't this the feature Windows got royally roasted for because it stored the data in something like a SQLite db?

How are you protecting the data locally? Sorry if it's in the README, I didn't see it when skimming.

  • botanical76 2 days ago

    What is the recommended approach for this? I feel as though the specific database should be irrelevant. All OSes are equipped with permissions models that can restrict a SQLite file to use by a single application.

    • g_p 2 days ago

      The issue so far seems to be that most OSs don't really have an effective way to restrict that file to a single application. User-oriented filesystem permissions don't work, as all software runs "as" the user.

      If you assume there's a way to restrict permissions by application (a bit like TCC on Mac for certain folders), you need to then go down a rabbit-hole of what matcher you use to decide what is a "single application" - Mac OS can use developer Team ID (i.e. app signature identity), or similar. You wouldn't want to rely on path or binary name, as those could be spoofed or modified by a rogue app.

      So in short, in a multi-user OS, generally the filesystem (asides from Mac OS, under certain circumstances) is fairly widely readable by other software running as the current user. At least in my experience, Mac OS is the desktop OS that is closest to having some level of effective protections against apps accessing "everything" owned by the user (but belonging to other apps).
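The limitation described above is easy to demonstrate with plain POSIX permission bits. This is a minimal sketch, not anything Screenpipe or Windows Recall actually does: even after locking a file to mode 0600, every program launched by the same user can still open it, because classic filesystem permissions distinguish users, not applications.

```python
import os
import stat
import tempfile

# Create a "private" database file and restrict it to the owning user.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)
os.chmod(path, 0o600)  # rw for owner, nothing for group/other

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600

# The catch: every app you launch runs *as you*, so 0o600 does nothing
# to stop another same-user process from opening the file. Per-application
# restriction (like macOS TCC) needs OS support beyond user/group/other bits.
with open(path):  # any same-user process can still do this
    pass
os.remove(path)
```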

  • chankstein38 2 days ago

    Admittedly I have little knowledge of the Windows feature's functionality, but my problem is that I want to choose whether or not something like this happens on my computer, and have control over it. I barely trust Microsoft and Windows anymore as it is, but it's a somewhat-necessary evil in my case. I don't trust them to record my data and actually keep it local to me. I want to actively choose software to do this, not have them auto-install something, and I want full control over the data.

wkat4242 2 days ago

So, basically this is Windows Recall for Mac.

I'm still very iffy about this. It opens a huge can of worms in terms of privacy. However at least in this case it's not managed by a big company but installed and maintained by the users themselves.

jumploops 2 days ago

Love the focus on just recording your system, and then using that as a building block for more advanced apps.

When I prototyped doing this for a sub-problem (terminal agent), it was nice to have a tight feedback loop between read/write.

Curious how difficult it would be to add “actions” on top of this, or if it’s mostly geared towards a read-only/replay mindset.

layoric 2 days ago

What’s the power consumption like for running this 24/7? Is it event based to reduce the need to process data when pc is idle? It’s an interesting idea for sure, but seems like a lot for an alternative of providing specific context.

  • inhumantsar 2 days ago

    screen recording isn't generally that hard on a system, depending on display config of course. Video compression would be piped through the GPU's media encoders, which are extremely efficient these days.

    • layoric 2 days ago

      Yeah, but tokenising all of it regardless of whether it is ever used would be a lot more intensive, yes?
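One common mitigation for exactly this concern is event-based processing: run the expensive OCR/tokenisation step only when the screen content actually changes. The sketch below assumes a simple frame-hash dedup (this is an illustration, not Screenpipe's actual implementation):

```python
import hashlib

def frame_digest(frame: bytes) -> str:
    """Cheap fingerprint of a captured frame."""
    return hashlib.sha256(frame).hexdigest()

def process_changed_frames(frames) -> int:
    """Run expensive work (OCR, tokenisation) only when the screen changes."""
    processed = 0
    last = None
    for frame in frames:
        digest = frame_digest(frame)
        if digest == last:
            continue  # idle screen: skip the expensive pipeline entirely
        last = digest
        processed += 1  # stand-in for OCR + embedding/tokenisation here
    return processed

# Nine identical "idle" frames plus one change -> only 2 frames processed.
frames = [b"desktop"] * 9 + [b"desktop+notification"]
print(process_changed_frames(frames))  # 2
```

A real capture pipeline would use a perceptual or region-level diff rather than an exact hash (so a blinking cursor doesn't defeat the dedup), but the cost profile is the same: hashing is cheap, so an idle PC does almost no work.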

buffer1337 2 days ago

This could turn into a market-changing app. When they finish the desktop interaction API, you can chain it together with babyagi et al., and you get an AI agent that operates on a variety of simple tasks in many different software applications.

spullara 2 days ago

I tried using this during a meeting and my CPU was hitting 700% and my fans started blasting on a MacBook M3 Max... ctrl-c'd and moved on.

maxglute 2 days ago

Undetermined legalities aside, is there any doubt this is how personal digital assistants will trend going forward? It's just like messaging read receipts: I want to be privy to others' info, but not vice versa. Maybe that's the eventual legal compromise. These are going to be networked databases, and if you want to retain others' info, you need to opt in yourself. If one opts out, recordings turn into a placeholder image and written transcripts.

Also, if you transcribe a conversation but do not record the audio, is that still relevant to recording consent laws? Even if the conversation exists temporarily in some hardware buffer?

cloudking 2 days ago

https://www.rewind.ai/ tries to do this also. Ultimately I found that over the course of a work week, there was way too much noise vs. signal for it to be useful.

  • replwoacause 2 days ago

    I heard some not-so-great things about the CEO of that company on Reddit. Not sure how true it was, but I recall it putting me off the product at the time, considering the sensitivity of the data they would have on me and my company. Reputation matters in this space, and his seemed questionable.

musicale 7 hours ago

Is that you Satya Nadella?

Scea91 2 days ago

How do you as a user validate the security of tools like this? I am avoiding many potentially useful tools (e.g. some Obsidian or Chrome plugins) out of paranoia.

Is there a better way than going through the code or running Wireshark myself? Even these are not bulletproof…

For now, I am resorting to building the simple things for my own use myself. The benefit is that it's often quite fun.

elintknower 2 days ago

An intelligence agency's dream :)

  • wkat4242 2 days ago

    True, though those will probably install loggers on their targets' computers anyway.

tofof a day ago

As another user points out, Screenpipe currently allows users to obtain the app for free[1][2] by posting 10 times on social media.

This story is not organic; it is Screenpipe scummily leveraging their client base into sneaking advertising into social media including HN. If their clients are already just leverage at the outset, I have no doubt their clients' data will just be leverage later.

1: https://screenpi.pe/onboarding 2: https://i.imgur.com/UvjXc1I.png

liendolucas 2 days ago

I can't understand why we have to inject AI into every piece of technology, be it hardware or software. A few days ago there was another thread about HP printers having an AI feature (yes, WTF?). It feels like tomorrow we will be taking a s*t on AI-featured toilets. Madness.

vouaobrasil 2 days ago

With such technology, we are becoming less and less like human beings and more like technological beings augmented with a biological base. I think it's a bad thing, because the average human being in modern society is brought up not with wisdom, but only with the drive to advance technology and operate in a highly capitalistic world.

The augmentation of human beings with tech like this is a prototype for a dismal world where wisdom is lacking, and the pure pursuit of knowledge is becoming a more and more seductive path to destruction.

  • esafak 2 days ago

    If it's all done locally, this is just a more efficient way of taking notes.

    • vouaobrasil 2 days ago

      It's an efficient way, but I vehemently disagree with "just". No technology is "just" anything. All of these little "improvements" constitute a very advanced modification of human beings to become more mechanical and less empathetic towards life.

      • ImPostingOnHN 2 days ago

        Cyber augmentation (hearing aids, cochlear implants, etc.) is often a means to become more empathetic, because it allows deeper connections.

        As someone who suffers from ADD, I simply can't recall forever everything someone and I said, so I use technological augmentation in the form of writing down birthdays, for example. When I'm at meetups, when a conversation huddle ends, I'll write down notes, or more likely, send a custom LinkedIn connection request mentioning what we talked about.

        The result is that we have the same, empathetic, human conversation as before, and also next time we talk, I can ask them about their startup or hobby, and also I wish them a happy birthday every year, which I think is a net positive without any downsides.

        • vouaobrasil 2 days ago

          A common rebuttal, but I don't think the tradeoff (on average) is worth it when the technology becomes sufficiently advanced. (Of course, it's worth it for some people, but the resulting technology makes society worse on average.)

          And you are forgetting all the destructive technology required to get to the "benign" ones.

          • ImPostingOnHN 2 days ago

            What do you think the negative trade-offs (the less empathetic you speak of) of the aforementioned examples would be?

            From my experience, society is better on average as a result of my using notes and calendar entries in the ways I described.

            • vouaobrasil a day ago

              Negative trade-offs are not directly related to individual products, but to the technology they depend on and the technology that can follow from them, plus our tendency in capitalistic society to invent whatever can be invented for incremental advantages. For example, AI note taking (benign) requires AI (overall bad) and can imply future technologies (greater surveillance). The bad parts cannot be separated from the good in modern global capitalism because we have no oversight mechanism to do so.

              • ImPostingOnHN a day ago

                Cyber augmentation already exists, and it's going to be used for what it's going to be used for.

                In my case, it's being used in the way I described, increasing the depth of human connections. So the question is, how does my usage result in "human beings to become more mechanical and less empathetic towards life"?

                • vouaobrasil 12 hours ago

                  I'm really not interested in your use case. Only in the effects caused by technology on average, summing all positives and negatives.

    • iterateoften 2 days ago

      It's hard for me to see how you could think it is “just” taking notes.

      Objectively the notes are being constructed and filtered by an external 3rd party (even if it is run on device locally, someone external is still training and choosing the agent).

      It is the homogenization of thought and note taking of everyone using AI to record their lives that is the potential problem.

  • bondarchuk 2 days ago

    >the pure pursuit of knowledge is becoming a more and more seductive path to destruction.

    How so?

    • vouaobrasil 2 days ago

      As knowledge becomes more powerful in the sense of enabling us to do more things, it becomes more tempting to use it for short-term advantages that typically have long-term detrimental consequences. Take AI, for example, which is disrupting employment too quickly, or cheap energy used to mine Bitcoin, which is problematic for local energy grids. The more powerful the knowledge, the easier it is for people to ignore the downsides at the expense of fellow human beings.

      That is especially true because we have an economic system that rewards short-term improvements in the efficiency of the system, regardless of the long-term costs: fossil fuel use, or cutting down local forests (relatively little short-term impact, but it adds up).

      And, as we pursue knowledge and technology more vigorously, we slowly lose other forms of gaining knowledge such as a relationship with nature.

      Human society is advanced with regard to its knowledge capability, but exceptionally primitive with regard to basic wisdom about community, love, nature, and friendship. We continually downgrade these things to make way for new technology, and the prisoner's dilemma (tech gives some people advantages, so everyone is pressured to use it) makes it hard to make decisions for the long run the way the Amish do.

  • cparish 2 days ago

    I resonate with your concerns, but I believe the best way to secure a human-centric future is fully diving in to technology and our constant surveillance realities. My goal is to empower people to collect their own data to help them objectively understand the impact technology has on themselves: https://hindsight.life/

    • vouaobrasil a day ago

      I am absolutely against a human-centric future. I advocate for a biological future where animals have equal rights to exist.

  • XorNot 2 days ago

    What a bizarre attitude to take on a site fundamentally based on the advancement and refinement of information handling technology.

    Like, what do you think everyone who posts here does?

    • sirsinsalot 2 days ago

      Do you want an echo chamber? Perhaps just being fed with opinions you agree with?

      We should celebrate opinions that go against local conventions.

    • 7952 2 days ago

      Arguably this kind of viewpoint can be particularly interesting to this group. People here are well placed to see some of the more worrying aspects of technology.

    • vouaobrasil 2 days ago

      I understand exactly what everyone else does here. And nothing intrinsically wrong with that -- technology is unquestionably fun and interesting. I like programming myself. BUT, and this is a huge BUT, I think we as people who are well versed in technology should take a little more responsibility for what we create.

      So, not bizarre at all.

    • dylan604 2 days ago

      I guarantee you not everyone here has benevolent intent. For example, we know people from FB, TikTok, Snap, Twit...er, X are here.

      what do you think everyone does?

  • CaptainFever 2 days ago

    > we are becoming less and less like human beings and more like technological beings augmented with a biological base

    I would actually love to be a technological being. This is transhumanism, isn't it?

    • vouaobrasil 2 days ago

      Yes and I think it's a horrible thing.

m-louis030195 2 days ago

Thanks for sharing screenpipe :) (author here)

  • tomComb 2 days ago

    I was liking what I was seeing, and I appreciate you making your business model very clear.

    But then I saw that users can get it free by posting "about screenpipe 10 times on social media".

    If you want ads, pay for proper ads! Don’t pay people to turn user content into sneaky ads.

    I understand why people hate regular ads, but IMO affiliate promotion (when done without disclosure) and stuff like what you are doing is much worse.

varispeed 2 days ago

Hopefully one day we will reach the point where such an AI contraption, after watching us for hours, days, and months, will be able to just do the stuff we do, and we can switch off our computers.

Check email, respond to some, see some news, pull up tasks, do some code, ship some code, read some more news, watch a tutorial, do some more code, check mails, respond to some etc etc.

At the end of the day send you a summary, some highlights, occasionally call you for manual intervention.

BrouteMinou 2 days ago

Microsoft is doing Recall, encrypted and what not with the TPM => Microsoft BAD!

A random dude is doing the same thing, but in Rust (super important to mention, right?), storing the data in SQLite => Magnificent project, we want more!

You guys are fucking weird.

  • wkat4242 2 days ago

    > Microsoft is doing Recall, encrypted and what not with the TPM => Microsoft BAD!

    1) It was not in fact encrypted and any user could mess with it. AND the data was stored in SQLite too. Microsoft only started fixing this after their totally negligent security was brought to light.

    2) Recall is maintained and auto updated by Microsoft which can change the rules at any point (e.g. add datamining at a later point). At least with Screenpipe the end user decides. This solution is open-source so you know what's happening.

  • mldbk 2 days ago

    A random dude making the same thing is not the same as what MS does.

    1. You have a choice. With MS you don't. You can't opt out, at least for now.

    2. And as the previous commenter said, you never know what MS will do with your data.

    3. Recall will be heavily targeted, and from day 1 some malware will go after it. A random dude's pet project won't be (even though that is security through obscurity).

  • beefnugs 2 days ago

    There is such a big difference between this being shoved down the throat of every single windows user in the world, vs someone choosing to download and run it themselves

  • ekianjo 2 days ago

    You understand that the data stays on the computer, right? With Microsoft, you never know what they'll do down the road, even if they start local.