lmao, Zoom is cooked. Their CEO has no idea how LLMs work or why they aren’t fit for purpose, but he’s 100% certain someone else will somehow solve this problem:
So is the AI model hallucination problem down there in the stack, or are you investing in making sure that the rate of hallucinations goes down?
I think solving the AI hallucination problem — I think that’ll be fixed.
But I guess my question is by who? Is it by you, or is it somewhere down the stack?
It’s someone down the stack.
Okay.
I think either from the chip level or from the LLM itself.
Whatever he’s smoking, its strength rating is at least: “make it seem like a good idea to call employees back from remote work despite remote work facilitation being the one thing we sell”.
the important thing about Zoom is that it was the lucky winner of the pandemic. Could have been Google Meet, could have been any of their other competitors, but somehow everyone just converged on Zoom.
Having worked in an IT department in 2020, I can say it wasn’t just random. Zoom was stable for large meetings and scaled pretty smoothly up to a thousand participants. And it was a standalone product with better moderator tools.
MS Teams often ran into problems above around 50 to 80 participants. Google Meet worked better, but its maximum was far lower than Zoom’s (250?). I tried a couple of other competitors, but none matched up (including Jitsi, unfortunately).
So if you were in an IT department at an organization that needed large meetings and were looking for a quick solution that worked for them, Zoom was the best choice in 2020. And big organisations’ choices mean everyone has to learn that software, so soon enough everyone knew how to use Zoom.
They were at the right place, had the better product, gained a dominant position. And now they are tossing all that away. C’est la late stage capitalism!
Also according to my freelance interpreter parents:
Compared to other major tools, Zoom was also one of the few not-too-janky solutions for setting up simultaneous interpreting with a separate audio track for the interpreters’ output.
Other tools would require big kludges (separate meeting rooms, etc…), were unlikely to work for all participants across organizations, or required clunky consecutive translation.
Custom hardware designed with AI pipelines in mind, similar to how GPU architecture solved a lot of rendering problems through how memory is accessed and which operations are prioritized. The idea people have been talking about is basically the LLM on one part of the chip, with other neural networks beside it that can modify its biases: setting the ‘mood’ and focusing things as the answer is generated, which should enable creativity in some areas while locking it out in others. Coding, for example, requires creativity in structure or variable names but needs to be strictly factual about function names and mathematical operations.
I think it’s very unlikely to be the way things go, based on progress with pure LLMs and LLM architecture, but maybe in the future it’ll turn out to be a more efficient way of solving the problem, especially with AI-designed chips.
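The “side networks nudging the LLM’s biases” idea above can be sketched, very loosely, as a second model adding a bias to the next-token logits before sampling. This is purely a toy under my own assumptions: the model stubs, the token index, and the bias magnitudes are all invented for illustration, and real proposals like this would live at the hardware/architecture level, not in a few lines of Python.

```python
import math
import random

# Toy sketch of the idea in the comment above: a main model produces
# next-token logits, and a small side network nudges them up or down
# depending on a "mode" (creative vs. factual). Everything here is a
# made-up stand-in, not how any real chip or model works.

rng = random.Random(0)
VOCAB = 8  # tiny toy vocabulary

def main_model_logits():
    # Stand-in for the big LLM: just random next-token logits.
    return [rng.gauss(0.0, 1.0) for _ in range(VOCAB)]

def steering_bias(mode):
    # Stand-in for the smaller side network. In "factual" mode it
    # sharply boosts one hypothetical "verified" token and suppresses
    # the rest; in "creative" mode it leaves the logits untouched.
    if mode == "factual":
        return [6.0 if i == 2 else -6.0 for i in range(VOCAB)]
    return [0.0] * VOCAB

def softmax(logits):
    # Turn (possibly steered) logits into a sampling distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = main_model_logits()
creative = softmax([l + b for l, b in zip(logits, steering_bias("creative"))])
factual = softmax([l + b for l, b in zip(logits, steering_bias("factual"))])
# In "factual" mode nearly all the probability mass lands on token 2,
# the pretend "checked-correct" answer; in "creative" mode the model's
# own distribution is left alone.
```

The point of the sketch is only the shape of the mechanism: the steering network never generates text itself, it just reshapes the distribution the main model samples from, which is why the comment frames it as locking creativity out of some regions while leaving it free elsewhere.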
Lol I like how they put the author’s note at the beginning of the article, “this was a very special interview” as if it’s special because of the unique insights instead of special because it sounds coked up.
Haha at the chip level? What’s he smoking?
So that’s gotta be some strong stuff.
As an honourable mention, MS Teams is also uncontrollable, overblown jank. Even on its best behaviour it randomly loses messages while eating up far more CPU and RAM than could possibly be justified for a glorified IRC UI.
No wonder Zoom won out over that one; if you tried to use Teams in 2020, you barely could.
Wasn’t this an unsolvable problem?
it’s unsolvable because it’s literally how LLMs work lol.
though to be fair i would indeed love for them to solve the LLMs-outputting-text problem.
Yeah. We need another program to control the LLM tbh.
Sed quis custodiet ipsos custodes? = But who will control the controllers?
Which, in a beautiful twist of irony, is thought to be an interpolation in the texts of Juvenal (in manuscript speak, an insert added by later scribes).