This is being done by PEOPLE. PEOPLE are using AI to do this.
I’m not defending AI, but we need to focus on the operator as much as the tool.
Nice try, NRA.
I had the same thought after I posted it, lol
Step one for gun control should be a fully functioning mental healthcare system. That’s not the final step by any means, but if people are getting the mental help they need there will be fewer shootings.
Step one for gun control should be gun control.
Sure, a functioning mental healthcare system is important and should be pursued in parallel. But, clearly, there’s a major issue with the availability of powerful guns. That needs to be addressed before, or at least at the same time as, mental healthcare.
Preach! lol
Especially for a tool that’s specifically marketed for people to delegate decision-making to it, we need to seriously question the person-tool separation.
That alleged separation is what lets gig economy apps abuse their workers in ways no flesh-and-blood boss would get away with; it’s also behind RealPage’s decentralized price-fixing cartel and any number of instances of “math-washing” used to justify discrimination.
The entire big tech ethos is basically to do horrible shit in such tiny increments that there is no single instance to meaningfully prosecute. (Edit: As always, Mike Judge is relevant: https://youtu.be/yZjCQ3T5yXo)
We need to take this seriously. Language is perhaps the single most important invention of our species, and we’re at risk of the social equivalent of Kessler Syndrome. And for what? So we can write “thank you” notes quicker?
Respect.
Also: I just realized I need a Mike Judge marathon night.
You bring up a good point. In addition to regulating the tool, we should also punish the people who maliciously abuse it.
Regulate it because it’s being abused, and hold the abusers accountable, yeah.
I always see the names of the models being boogey-manned, but we only ever see the names of the people behind the big, seemingly untouchable ones.
“Look at this scary model” vs “Look at this person being a dick”
We’re being told what to be afraid of and not who is responsible for it, because fear sells and we can’t do anything with it.
Just my perception, of course.
I mean, the tool is also being made by people. And there are people who pointed out that a tool that’s great at spouting plausible-sounding things with no factual bearing could be badly abused for spreading misinformation. There have been ethics boards among the people who make these tools who took these concerns in and raised them in their companies, subsequently getting ousted for putting ethical concerns before short-term profits.
The question is, how much is it just a tool and how much of it is intrinsically linked with the unethical greedy people behind pushing it onto the world?
E.g. a cybertruck is also just a car, and one could say the truck itself is not to blame. But it is the very embodiment of the problems of the people involved.
Corporate ethics only exist in the realm of the theoretical, and in training videos. Ethics will not be tolerated in actuality.
It is all intrinsically linked. But we need to see who the people behind it are or it’s just a boogey-man.
subsequently getting ousted for putting ethical concerns before short term profits.
The irony is that there are no profits. The companies selling generative AI are losing such vast sums of money it’s difficult to wrap your head around.
What they’re focused on isn’t short-term profits, it’s being the biggest, most dominant firm whenever AI does eventually become profitable, which might take decades.
People seem to’ve already forgotten about Transmetropolitan. 🤷🏽‍♂️
I mean, sure, fuck Ellis, but still. Idiocracy came after, and even that’s fading from modern awareness, it seems. 😶‍🌫️
Ellis is like Gaiman; at some point you have to separate the work from the author.
Yep. Machines will only ever do what they’re told to do. This is AI literally doing the job it’s been instructed to do under the rules it has been given.
Machines are not designed by hermits who have no knowledge of the outside world. They’re tools, but they’re tools designed with a purpose and with or without safeties designed to keep them from maiming or killing people. The design of the machine can be used to talk about the responsibility and morals of the machine’s designer. And, certain machines are so unsafe that even if they theoretically can have a useful purpose, the dangers of the machine being misused are so great that the machine shouldn’t be permitted to be sold.
In Arrested Development, George Bluth designs and sells the Cornballer, a machine to deep-fry cornballs. It was made illegal after it caused serious burns to anybody who used it. Part of the purpose of showing this device on the show is to reveal the character of George Bluth. It shows that he’s the kind of guy who doesn’t care enough to design a safe device, and who continues to try to sell it in Mexico even after it’s made illegal in the US because of how unsafe it is.
Yes, in this case it is people who are submitting papers full of fabricated data using ChatGPT as a tool. But, that doesn’t mean that ChatGPT is simply “neutral” in this whole thing. They’ve released a tool that lacks safeties and that is effectively “burning” science. The positive potential uses of ChatGPT are what, writing a dirty limerick in the style of Shakespeare? Meanwhile, the potential pitfalls of using it are things like having it convince a suicidal person to kill themselves, sowing confusion and making it harder to find good science, giving people unsafe medical diagnoses?
Leaving the information age and entering the disinformation age.
A deadly weapon given how much the ruling class is trying to turn a class war into an identity war.
AI content, AI bots in the forums, AI telemarketing, AI answering machines, AI everything. AI will make IRL and stuff like audited national encyclopedias important again. Gone is the promise of the internet. And this is the real reason why anonymity will not be possible online. If we can’t identify the poster as a human, it will mean nothing…
And since we don’t have any of our constitutional rights to privacy online, an internet without anonymity isn’t an acceptable solution either. What a waste. The most exciting and promising creation of the last hundred years, squandered for advertising and selfish means.
I honestly think anonymity is even better now, given the fact that you can self-host LLMs that can change the linguistic style of your writing into something entirely different.
The good news is think of all the possibilities in regards to funding new research to prove AI wrong!
Or think of the millions the rest of the world will have to spend on software engineers fixing their fucked up AI generated code!
It’s like outsourcing to an even worse firm!
Does anyone else hate when words are split with hyphens, especially longer words? Just wrap the whole word to the next line. Makes it easier to read.
It should have said “questionable, gpt fabricated, scientific”
Question: Does Google Scholar only list published papers from reputable journals or does it just grab anything people throw out there? We have already seen that some journals will publish complete nonsense without looking at it. AI or not, there’s a core problem with how academic work gets peer reviewed and published at the moment.
Imagine if peer review actually had to include a reproducible study and reproduce the same result.
That basically doubles the money necessary for everything.
If you run the same prompt through the engine, you ought to get fairly similar results. There’s your reproducibility.
What app is this that justifies the text with hyphens? Is it in a fixed-width display, or does it detect syllables automatically?
Tusky for Android.
There’s no point in existing anymore. ChatGPT can take it from here.
Homie, we’re not out yet. ChatGPT can’t even do basic math.
Me: What’s your source for this?
Them: Google Scholar
Me: fires
Skynet skipped judgement day and chose a different method to ruin humanity