- cross-posted to:
- autism
- [email protected]
- [email protected]
I read the article but I didn’t check out the platform yet. Thought it might be useful for my fellow autistic people.
It absolutely can do a worse job, and be more biased. Not to mention Sam Altman is backing it? Yeesh. I’m good.
Can you somehow prove that? I don’t see how “absolutely” reinforces your claim. If conventional hiring wasn’t a bag of dicks, hiring companies (which are shit as well) wouldn’t make billions in revenue.
But I don’t recognize Altman. The name sounds familiar. I might need to check him out.
AI can absolutely screw up these things as bad or worse than any other program.
AI sucks at nuances it isn’t explicitly trained on. That’s how you get AIs at eating disorder charities recommending things like 500 calorie daily deficits (this actually happened).
AI might be able to get a technically accurate translation, but can’t always tell what’s culturally offensive or colloquially given a new meaning.
For example, in Spanish “Soy” means “I am”, and “Caliente” means “Hot”. What do you think “Soy caliente” means?
Well if you got ‘I am hot’, Google Translate will actually agree with you…but it doesn’t mean that at all. What it actually means is ‘I am horny’.
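If you want to poke at this yourself, here’s a minimal sketch; I’m assuming the third-party deep_translator Python package here (the example above just used the Google Translate website), and the exact wording returned can vary, but you generally get the literal reading back rather than the colloquial one:

```python
# Minimal sketch using the third-party deep_translator package
# (pip install deep-translator), which queries the same public
# Google Translate backend as the website.
from deep_translator import GoogleTranslator

phrase = "Soy caliente"
literal = GoogleTranslator(source="es", target="en").translate(phrase)

# The service usually returns the literal reading ("I am hot"),
# not the colloquial meaning ("I am horny") a native speaker would hear.
print(f"{phrase!r} -> {literal!r}")
```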
Yeah, I get it. Pretty rough around the edges, no doubt. I still don’t think this makes “AI powered” or “assisted” worse than conventional recruiting. That’s all I’m saying. It’s also a buzzword that gets used for a lot more than it’s worth, btw.
The quality of conventional recruiters can vary wildly. I’ve dealt with both actual pieces of shit recruiters (the kind that try outright guilt tripping and manipulation) and some amazing ones.
Sure, that matches my experience, but the argument I was making is that it’s not going to be worse if you train an AI specifically to target ND folks. It will probably be worse than the good recruiters and better than the worst.
You do realise that’s going to be a metric fuckton harder than targeting neurotypicals, right? Like, bordering on impossible.
The clue is in the D of ND. To put it another way, let’s forget the entire spectrum of ND for a second and focus on ASD.
You not only need to train your AI on every possible interaction quirk an ASD person can have, such as trigger phrases to avoid and jobs they absolutely will not be able to do, you need it to be adaptable such that it can be useful to high-functioning ASDers who can mask, to low functioners who may not be able to leave the house but can maybe do some light computing work, and everywhere in between. And you need it to be able to detect which one it is dealing with.
That’s an impossible task, because the exact combination of issues, quirks, triggers, etc, are often very rare, if not completely unique.
But surely the AI can learn what the quirks of an individual are, right? Nope. AI learning relies on large datasets to do its work. Datasets that will not exist for all except the most common of issues and quirks. The most an AI can do is avoid a given topic when asked.
Now extrapolate that to the entire ND community. Good luck.
I understand your point. It’s correct that we do not know (nor do we need to) how their supposed AI works. It could be a mess, it could be no actual AI at all, many different possibilities.
But I think you might be missing my point here: as someone who is gifted and autistic (and traumatized due to both facts), I can absolutely build you a multi-million-dollar company (I have done so in the past), but I can’t deal with bullying. That’s a practical negation of my skills if I don’t design my environment in a very particular way.
This is just an example of a skilled individual who gets disabled by reality and needs nearly lab-like (highly controlled) conditions to work, but who then does tremendously well.
One of my personal problems (and presumably that of others too) starts at recruiting. I can’t tell you what I’m good at if asked, but I can show you. I assume that a lot of people on the spectrum work this way, and a change, albeit with the infamous AI in the mix, is highly welcome.
Focusing completely on the AI component feels a lot like an “I don’t have this problem so I don’t want it fixed” type of thinking.
Does this make sense to you?
I think you might be the one that’s misunderstanding my point. Or the scope of what is being proposed here.
My point is not an objection to other methods being out there but a realistic look at what is being proposed here.
I am also autistic. I have met other autistic people who are high functioning like myself, even in social situations; I have met some who are completely debilitated by noise but are otherwise perfectly capable people; I have met some who are non-verbal but can work a computer like a prodigy; and I’ve met one who will straight up never be able to have even a semi-normal life.
Want to know what the common threads between them are? Beyond an ASD diagnosis, not a lot. One person’s fixation is another person’s trigger; one’s need for white noise is another’s auditory hellscape. AI has difficulty navigating neurotypical behaviour, despite the latter having the most research behind it, never mind neurodiversity. And a misstep can lead to a trigger or a meltdown.
So yeah, there isn’t really a one-size-fits-all approach. When you broaden it beyond ASD, you’re going to be hard pressed to find even a one-size-fits-most approach. You kinda need that if you want to make a scalable online service.
While not Google Translate, it’s a more advanced translation service.
AI is surprisingly advanced and there’s a lot more to translation than you might think. But you’re right: AI absolutely sucks at nuances it isn’t trained on. That’s pretty much the reason ChatGPT and other “general purpose AIs” will always perform (much) worse than specialized ones.
I don’t know if there’s a great way to compare AI vs worthless recruiters, so finding something objective might be difficult. AI is going to pick up on systemic biases in reality and I’m not sure you can sanitize the data enough to avoid that.
I agree that this is unfortunate. I think what I’m trying to say is that we notice these flaws in AI, while recruiting at most companies is trash and most people familiar with AI have no idea how bad recruiting actually is.
Removed by mod