• 1 Post
  • 821 Comments
Joined 2 years ago
Cake day: April 3, 2024



  • Best I can do is mandatory Lumen and Nanite. You can get almost-stable 60 fps on a 5090 with DLSS Performance and 3x frame gen, which should be optimized enough for anyone.

    My game will sell for 80 bucks, 150 if you want the edition with all the preorder-exclusive content.




  • To put in context how much they are driving up demand: OpenAI just bought 40% of the global wafer production from two of the three major RAM manufacturers, Samsung and SK Hynix. The third, Micron (best known for their Crucial brand), decided to drop out of the consumer market entirely.

    Of course the other AI companies are going to try to nail down supply as well. If they get similar deals, 10 € per GB of DDR5 will look cheap.

    This will increase the cost of computers, phones, and laptops, both directly and indirectly (e.g. GPUs will also become more expensive; VRAM doesn’t grow on trees). We’re already at a point where Samsung Semiconductors reportedly refused to sell RAM to Samsung Electronics. I fear we might enter an age of 2000 € basic office PCs and 1000 € mid-tier phones if the AI bubble doesn’t pop first. Even when it does, the repercussions will be felt for some time.


  • Looks simple enough. The choice of Godot for a UI library is an interesting one; how big is the program in the end?

    I would suggest being a bit more explicit about files: which ones are in the working set, which one is currently being processed, that sort of thing. Having a file list (even if it’s hidden in a drawer or something) before starting the conversion helps the user verify that the correct files are being worked on, and seeing which file is currently being processed is useful for troubleshooting, or just to see how far along the process is.



  • In stardate 42761, war was beginning.

    Picard: What happen?

    Worf: Somebody set us up the probe.

    Riker: We get signal.

    Picard: What !

    Riker: Main screen turn on.

    Picard: It’s you !!

    BORG: How are you individuals !!

    BORG: All your distinctiveness are belong to us.

    BORG: You are on the way to assimilation.

    Picard: What you say !!

    BORG: You have no chance to resist make your time.

    BORG: Ha ha ha ha …

    Riker: Captain !!

    Picard: Take off every ‘Voyager’!!

    Picard: You know what you doing.

    Picard: Move ‘Voyager’.

    Picard: For great genocide.


  • Oh, AI can be very useful. Just not the generative stuff that is currently trying to consume all resources of the entire solar system for nebulous potential benefits.

    A good example of AI that just works is document scanning. Get a picture of a document, locate text, OCR it, figure out which parts of the text correspond to entry fields, auto-populate the fields. That works pretty well and can greatly speed up manual data entry. It’s not perfect but the success rate is pretty good due to the constrained problem space and even if you have to check all fields and manually correct 10% of them you still save a lot of time.

    An early example of this is the automated parsing of hand-written postal codes. That iteration of the tech has been in productive use since the 90s! (Yes, that’s just OCR but OCR is considered a field of AI.)

    It’s one of those unexciting applications of tech that don’t make major waves but do work.
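    A toy sketch of the field-mapping step described above (the OCR itself is assumed already done; the field names and patterns are invented for illustration, and a real system would learn them per document type):

```python
import re

# Hypothetical per-document-type patterns mapping text to entry fields.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*:?\s*(\S+)", re.IGNORECASE),
    "date": re.compile(r"Date\s*:?\s*(\d{4}-\d{2}-\d{2})", re.IGNORECASE),
    "total": re.compile(r"Total\s*:?\s*([\d.,]+)", re.IGNORECASE),
}

def extract_fields(ocr_text: str) -> dict[str, str]:
    """Map OCR'd text onto known entry fields; unmatched fields stay empty."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        fields[name] = match.group(1) if match else ""
    return fields
```

    A human would then review the pre-filled fields and correct the (hopefully ~10%) misses, which is still much faster than typing everything in.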


  • And not only is she capable of scaring Picard and Odo, she also becomes one of very few people who truly accept all of Odo’s weirdness and social awkwardness and just let him be himself.

    Sure, Kira also accepted him in a similar way, but that took years and romantic feelings. Lwaxana was just cool with it pretty much immediately. DS9 did a lot to show us that there was more to her than just an incredibly loud personality.






  • I predict incremental quality increases. Qwen4 will probably be a somewhat better Qwen3 (and a dud if we’re unlucky). I do agree that it’ll probably come out; there’s not enough life left in this AI boom for a Qwen5, though.

    The biggest change will probably come from figuring out where LLM use will actually benefit us. Right now the industry’s answer is “everywhere”, and from that it concludes that it’s prudent to spend money equivalent to the GDP of an industrial nation on compute-only data centers.

    For example, I expect the use case for coding to be more like “autocomplete a code block based on known patterns” rather than “build a public-facing web application from a prompt”.



  • Gotta be honest, though, a locally hosted 70B model with basic RAG functionality isn’t exactly playing in the same league as the market leaders, which can be bigger by two to three orders of magnitude. And a model that size is already around the limit of what a beefy gaming PC can do with reasonable performance. We’re unlikely to ever beat the big players on quality with local models.

    What might happen is that the market collapses, the big players all go bankrupt, further LLM development ceases, and locally hosted Qwen3-80B will be the pinnacle of available text generation for the next thirty years.