> Instead, only a small sample of the data will likely be accessed, based on keywords that OpenAI and news plaintiffs agree on. That data will remain on OpenAI's servers, where it will be anonymized, and it will likely never be directly produced to plaintiffs.
I’m glad about this detail.
What is the NYT's stance here? Is it pure spite? I guess their lawyers told them this is the winning move, and perhaps it is. But it just seems so blatantly wrong.
If you look at Reddit's r/ChatGPT, you'll quickly notice that the median use of ChatGPT is for therapy.
Is the NYT really ok with combing through people's therapy logs?
I find it interesting that you're blaming the NYT for this and not OpenAI for keeping these logs in the first place. If OpenAI didn't keep logs, there would be nothing to search, and a more malicious actor couldn't do something far more nefarious with them. Arguing that the logs might hold confidential information and therefore shouldn't be accessed should also mean the logs shouldn't be kept at all.
As explained in literally the first paragraph of TFA, the court ordered OpenAI to start keeping these logs. They didn't do it by choice.
Did you consider reading the article before writing out this comment? Please do next time.
Is there an expectation of privacy using ChatGPT? Do users think nobody is ever going to be looking at their logs?
If you are a paying member and are not sharing prompts, yes?
Stop having this expectation. It's factually incorrect.
When my expectation of privacy is violated, I'll have learned a little more about such violations, but I won't drop my expectation not to be violated.
What I mean is there are two meanings to "expectation of privacy": the Bayesian prior, and the legal stance. I have an expectation of privacy in my home but I still close the shades.
They don't care. This is purely about gaining a business upper hand.
OpenAI should probably encrypt the chats and lock itself out going forward, collecting whatever metrics it needs on the fly before encrypting.
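One way to do that is user-keyed encryption: derive the key from a secret only the user holds, so the server keeps ciphertext it can't read back. A minimal Python sketch of the idea; `derive_key`, `store_chat`, and the metric choices are hypothetical illustration, not anything OpenAI actually does:

```python
# Minimal sketch of "encrypt and lock yourself out": derive the encryption
# key from a secret only the user holds, record aggregate metrics before
# encrypting, then keep only ciphertext + salt server-side.
import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def derive_key(user_secret: bytes, salt: bytes) -> bytes:
    # Scrypt stretches the user-held secret into a 32-byte key; the server
    # stores the salt but never the secret, so it cannot re-derive the key.
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    return base64.urlsafe_b64encode(kdf.derive(user_secret))


def store_chat(chat: str, user_secret: bytes) -> dict:
    # Collect whatever metrics are needed *before* encrypting; afterwards
    # the plaintext is unrecoverable without the user's secret.
    metrics = {"chars": len(chat), "lines": chat.count("\n") + 1}
    salt = os.urandom(16)
    ciphertext = Fernet(derive_key(user_secret, salt)).encrypt(chat.encode())
    return {"metrics": metrics, "salt": salt.hex(), "ciphertext": ciphertext.decode()}


record = store_chat("user: hello\nassistant: hi", b"secret only the user knows")
print(json.dumps(record["metrics"]))  # the metrics survive; the chat text does not
```

The obvious catch, as the reply below points out, is that ciphertext locked to a user-held key is also useless as training data.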
OpenAI would never lock themselves out of free training data.
Now is the time to go have a chat with ChatGPT about how much the NYT sucks. Maybe it can help come up with insulting things to call their lawyers too.