• AutoTL;DR
    English
    8 months ago

    This is the best summary I could come up with:


    Over the past few weeks, it feels like I’ve seen an uptick in Xbox customers reaching out to me via email or direct message, asking for help overcoming unfair bans.

    Content creators are increasingly being targeted in “mass reporting” events that Microsoft’s automated systems take as legit, resulting in permanent account closures.

    For those who don’t know, Free Companies are essentially FFXIV’s clan or guild system, where players can band together as mercs to take down the game’s big scary beasties.

    A viral tweet from Envinyon details how reddit user /u/TGB_B20kEn was handed a two-month account suspension for using the phrase “Free Company” in an LFG posting via Xbox.

    I myself recently got locked out of a second Microsoft 365 Business Account, and it took well over two weeks of constant calls to get it escalated and fixed, being passed around to different departments who didn’t want to take responsibility to help me out.

    It’s increasingly apparent that Microsoft thinks it can have its cake and eat it too from a customer service perspective: the most minimal amount of investment possible, while also having a “safe,” sanitized environment that won’t cause PR headaches.


    The original article contains 1,178 words, the summary contains 194 words. Saved 84%. I’m a bot and I’m open source!

  • sylver_dragon
    English
    8 months ago

    Not too surprising. AI is the latest technology to go through the Gartner Hype Cycle, and we are currently riding high on the “Peak of Inflated Expectations”. I just really cannot wait for the whole thing to crash.

    I’ve not been hit by the Xbox problem, but I work in the cybersecurity space, and you can’t swing a Cat-5 O’ Nine Tails without hitting a security vendor touting the “AI features” in their products. And they all suck, every single one of them. What they do really, really well is generate false positives with exactly zero supporting evidence, artifacts, or documentation. Here’s one of Microsoft’s. That’s the description of a real alert in a real Microsoft security product. The page could have been a full-screen, auto-playing video of Sam Kinison screaming “Fuck You” and it would have improved the usefulness of the page. At least it would have been funny the first time.

    And with AIs being black boxes most of the time, you never have any clue as to why an alert was triggered. Instead of a clear “here’s the log/packet/process that was seen”, it’s basically “we think something bad happened, good luck figuring it out”. It may as well be a random number generator on the back end for all the usefulness it provides.
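
    To make that contrast concrete, here’s a rough sketch of the difference, using made-up field names rather than any real product’s schema: an alert that is just a verdict versus one that carries the artifacts an analyst can actually chase down.

        # Illustrative only: hypothetical alert payloads, not any vendor's real schema.

        # The kind of alert I'm complaining about: a verdict with nothing behind it.
        opaque_alert = {
            "title": "Suspicious activity detected",
            "severity": "High",
            "confidence": 0.92,  # no artifacts, no logs, no hint of why
        }

        # The kind of alert you can actually work: the evidence that fired the detection.
        useful_alert = {
            "title": "Suspicious activity detected",
            "severity": "High",
            "host": "WORKSTATION-42",                              # hypothetical host
            "process": "powershell.exe -EncodedCommand <blob>",    # the process that was seen
            "source_log": "Security.evtx, EventID 4688",           # where the evidence lives
            "network": {"dest_ip": "203.0.113.7", "dest_port": 443},
            "rule": "Encoded PowerShell making an outbound connection",
        }

        # An analyst can pivot from the second one; the first is a coin flip.
        print(sorted(useful_alert.keys() - opaque_alert.keys()))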

    I’m sure AI will one day be useful, and even now it likely has its niches. But the current trend of “sprinkle some AI in it” and hoping for the best isn’t working.