• @[email protected]
    4
    6 hours ago

    The Nintendo Famicom, renamed the Nintendo Entertainment System (NES) when it was released in America two years later, was a 3rd-generation videogame console released in 1983. It is best known for being the birthplace of the Super Mario Bros. series as well as for having one of the largest libraries of any videogame system ever released. The system was very easy and very cheap to develop for, and since game reviews were not yet widely accessible, publishing games could be extremely lucrative regardless of whether they were any good. To put it politely, games for the system varied wildly in quality, which led Nintendo to step in a year or two into the system’s life and introduce the Nintendo Seal of Quality, bestowed upon games that had passed a rigorous series of quality checks.

    When they redesigned and re-released the console outside of Japan, they made this seal of quality mandatory: the Famicom’s North American and European counterpart, the NES, is widely known for being the first game system to feature digital rights management. It contained a special Nintendo-proprietary chip called the Checking Integrated Circuit (CIC) which would look for an identical chip inside the game cartridge and, if one was not present, cause the game to continually reset itself. Games that earned the Nintendo Seal of Quality were also allotted a limited quantity of CICs to put in their cartridges. Nintendo’s draconian policies around CIC distribution, especially the restriction that a publisher could only release five games per year, led to numerous methods of bypassing the CIC chip: charge pumps that delivered negative voltages to its input pins, causing it to lock up; cartridges with a second cartridge connector on the back where the customer was expected to insert a game with a functional CIC; and, in the case of one particularly ballsy developer, falsely telling the US Copyright Office they were in a lawsuit with Nintendo in order to get a copy of the CIC’s secret program code on file there, reverse engineering it from that, designing their own compatible chip called the Rabbit, and putting that in their cartridges. Nintendo was not thrilled about that one.

    The Famicom/NES uses an architecture almost unique among videogame systems that allows it to never, ever show a loading screen. As with most consoles of the era, the console’s CPU reads game code directly from a ROM chip inside the cartridge as it executes it, rather than copying it to RAM inside the console first as later consoles did. Where the NES was unique was that it did the same thing with the GPU (which in the NES was called the PPU, for Picture Processing Unit). You see, RAM was incredibly expensive in 1983, and if we put in enough video memory to store an entire frame’s worth of textures, we would not be able to sell the thing for $99. It would be really nice if the PPU could load texture data directly from the cartridge as it rendered the frame to the TV screen, just like the CPU did when executing code. Unfortunately, it’s 1983, and the technology for sharing a single data bus and ROM between two chips that need to access it at the same time is kind of expensive too, especially when the PPU needs to access data 3 times as fast as the CPU. (The Commodore 64 could do that, but its CPU and GPU ran at the same speed, plus it cost over five times as much as the NES when it came out the same year – though that’s partially also because it had an eye-popping 64 kilobytes of RAM, unheard of at the time and 16 times as much as the NES.)

    Then an engineer at Nintendo had a brainwave: who said we could only use one data bus? Famicom and NES cartridges have ridiculously wide connectors (60 and 72 pins respectively) compared to their contemporaries such as the Sega Master System and TurboGrafx-16. This is because they contain two completely independent address and data buses, one for the CPU and one for the PPU. Early NES cartridges had two fully independent ROM chips: the CPU reads code from one at the same time the PPU reads pixel data from the other, and they don’t conflict.

    This raised the cost of the cartridges considerably, however, since they now needed two expensive-to-manufacture ROM chips instead of one. Later cartridges moved to special ASIC chips called “mappers”, which took the two buses from the console and multiplexed access to a single, massive ROM chip, often of much higher capacity than what the NES could address directly (48KiB for the CPU and 16KiB for the PPU). The console could send special commands to the mapper chip to switch which blocks of data were exposed to the console at any given time as the player visited different areas of the game (there’s a small code sketch of what that looks like from the game’s side a little further down). Later mapper chips could also give the console additional RAM and VRAM via memory chips inside the cartridge (a RAM chip connected to a battery so it didn’t lose its data when the power was unplugged is how the original Zelda saved your progress). Some even had full additional sound synthesizers connected to a sound input pin on the Famicom’s cartridge connector, which the console mixed with its own sound synthesizer before sending it to the TV, effectively giving the console additional sound channels and much nicer-sounding instruments.

    (Strangely, the NES removed this feature – cartridges could no longer output sound. What it did have were ten lines that went straight from the cartridge connector to a little-known expansion port on the bottom of the console, whose only known use was for plugging in a telephone-line-connected modem through which, using a special game cartridge, adults could play the Minnesota state lottery. Some Japanese games that made use of the sound output pin connected it to one of these lines instead when released in America, and since the sound input is exposed on that expansion port, a common “mod” for old NES consoles is to connect a resistor between two of its pins, once again allowing sound from the cartridge to flow to the TV.)
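
    Here’s a rough idea of what that mapper “command” looks like from the game’s side. This is just a sketch, not code from any real game: it assumes a discrete-logic UNROM-style mapper (where writing the bank number anywhere in the cartridge ROM’s address range latches a new 16 KiB program bank) and a C toolchain in the vein of cc65, and the function and table names are made up – real NES games were almost always written in 6502 assembly.

        /* Table kept in ROM so the value we write matches the value already at
           that address - discrete-logic mappers don't disable the ROM during the
           write, so writing a mismatched value would cause a bus conflict. */
        static const unsigned char bank_table[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };

        void switch_prg_bank(unsigned char bank)
        {
            /* The "command" is nothing more than a CPU write into the $8000-$FFFF
               range. The ROM itself is never modified; the mapper chip latches the
               value off the data bus and swaps which 16 KiB chunk of the big ROM
               appears at $8000-$BFFF. $C000-$FFFF stays fixed on the last bank, so
               this very function never vanishes out from under the CPU. */
            *(volatile unsigned char *)&bank_table[bank] = bank_table[bank];
        }

        /* e.g. switch_prg_bank(2) when the player walks onto the overworld,
           switch_prg_bank(5) when they enter a dungeon. */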

    Is that enough cool trivia or should I keep going? I haven’t even touched on the Famicom Disk System yet.

    I knew 99% of that without googling, despite having been born after the twin towers collapsed. Why yes, I am autistic, why do you ask?

    • @Zangoose
      37 hours ago

      I aspire to know this level of retro trivia and am slowly falling down the rabbit hole.

      • @[email protected]
        3
        4 hours ago

        90% of the knowledge here came from looooooong rabbit holes on https://nesdev.org as well as some YouTube videos by The Gaming Historian. If you’re interested in the inner workings of 8-bit systems that aren’t the NES, The 8-Bit Guy on YouTube has some great visual walkthroughs. Also Connections Museum has some really cool in-depth looks at old-school electromechanical telephone switching equipment and the stuff that engineers in the 1930s cooked up to make telephone switching automatic before the concept of an electronic computer was a twinkle in IBM’s eye. Also also, if you haven’t discovered Technology Connections yet, I cannot recommend it enough.

        Also unrelated, but I mentioned in that massive comment that the dual-bus architecture allows the NES to never show a loading screen but didn’t elaborate on how, and I wanna do that. Loading data is the process of copying it from some slow storage medium, like a cartridge, hard drive, or (horror of horrors) optical media, into RAM, and loading screens arise from the fact that said storage mediums often take quite a while to spit out all that data and the user needs something to let them know the program isn’t just frozen. Since all data on the NES is read directly from the cartridge as it’s needed, it never has to load anything. All the data for the game is always right there.

        Unfortunately, this technique is all but useless today. The NES’s CPU was very slow (1.79 megahertz – for reference, no desktop or phone CPU slower than 1000 megahertz has shipped in over a decade), which let it get away with reading directly from the storage medium. Modern CPUs need to access data orders of magnitude faster than the fastest flash chips available, and actual mask ROM like the NES used is so expensive to manufacture nowadays that it’s basically never seen outside of the firmware baked into chips like your CPU. Besides, never ever ever having even one microsecond of loading time is not really necessary anyway, and there are techniques that work on modern CPUs to get something like it without actually having it.

        As an example, here’s some more modern trivia for you: virtually all modern operating systems use a technique called memory-mapped files to launch their local equivalent of an .exe file. Essentially, all modern CPUs (at least, the ones intended to run an operating system as opposed to a single gigantic firmware program) have a memory management unit (MMU), which lets the operating system pretend the machine has more RAM than it actually does. Whenever a program tries to access a block of memory that isn’t actually there, the CPU triggers what’s called a page fault: it stops that application for a moment, runs a special function in the operating system to figure out what data is supposed to be there, puts that data somewhere in physical memory where the program can access it, and then resumes the program. Memory-mapping a file means connecting a file to this system, essentially asking the operating system to pretend it has the entire contents of the file loaded into an area of RAM – but only when someone actually goes to access that RAM does the operating system pause the application, open the file, load just the portion it needs right at that minute, and resume the application again.

        This can lead to major speedups: most programs don’t need the entire contents of their EXE file right when they start up – they don’t need the code that finds out what text you have highlighted and copies it to the clipboard, for example, until you press Ctrl+C. Memory-mapping the EXE file allows the operating system to wait to load the code that does that off the disk until the program actually needs it, and thus have the whole program running before it’s finished loading the EXE file. If the user closes the application before ever pressing Ctrl+C, that portion of the EXE file might never get loaded! (Which bits get loaded is often not that granular, and the code for Ctrl+C specifically is small enough that it’s likely to get loaded anyway along with something else, but you get the idea.)
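
        Here’s a minimal sketch of that lazy-loading behaviour applied to an ordinary data file, assuming a Linux or macOS box and plain C (Windows exposes the same idea through CreateFileMapping/MapViewOfFile, and “bigfile.bin” is just a made-up name):

            #include <fcntl.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <sys/stat.h>
            #include <unistd.h>

            int main(void)
            {
                int fd = open("bigfile.bin", O_RDONLY);
                if (fd < 0) { perror("open"); return 1; }

                struct stat st;
                if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

                /* Ask the OS to pretend the whole file is sitting in memory.
                   Nothing has actually been read off the disk yet. */
                const unsigned char *data =
                    mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                if (data == MAP_FAILED) { perror("mmap"); return 1; }

                /* Touching one byte in the middle triggers a page fault: the kernel
                   pauses us, reads just that ~4 KiB page from the file, and resumes.
                   The rest of the file is never loaded unless we touch it too. */
                printf("byte in the middle: %d\n", data[st.st_size / 2]);

                munmap((void *)data, (size_t)st.st_size);
                close(fd);
                return 0;
            }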

        Programs can use this technique to “lazy-load” their internal data files (or files you create yourself) too, but it unfortunately doesn’t have as broad an application as you’d think: it only really helps when you don’t need the entire file at once, and when the file is large enough that loading it into RAM all at once isn’t feasible. I’m currently working on one such program: a desktop client for a furry imageboard. I wasn’t satisfied with the search features the site had, so I built my own search algorithm that supports more granular tag constraints and rigged it up to the full database dump available from the site. Loading the entire database eats over a gigabyte of RAM, and that’s not going to fly if I want to port it to mobile devices, so I’m currently working on setting up memory mapping so the database is only paged in while I’m actually doing a search and the OS can swap it back out when my app isn’t using it.
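
        Something like this hypothetical sketch (not my actual code – the file name, tag, and line-oriented format are made up): map the dump, scan it a line at a time, and let the kernel worry about which pages are resident. Because the mapped pages are clean and backed by the file, the OS can drop them the moment memory gets tight, so the gigabyte never stays pinned in RAM:

            #define _GNU_SOURCE   /* for memmem() on glibc; built in on BSD/macOS */
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>
            #include <sys/stat.h>
            #include <unistd.h>

            /* Count how many lines of a memory-mapped dump mention a tag. */
            static size_t count_tag(const char *path, const char *tag)
            {
                int fd = open(path, O_RDONLY);
                if (fd < 0) return 0;

                struct stat st;
                if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 0; }

                const char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                close(fd);                      /* the mapping stays valid after close */
                if (base == MAP_FAILED) return 0;

                size_t hits = 0, taglen = strlen(tag);
                const char *p = base;
                const char *end = base + st.st_size;
                while (p < end) {
                    const char *nl = memchr(p, '\n', (size_t)(end - p));
                    size_t linelen = nl ? (size_t)(nl - p) : (size_t)(end - p);
                    /* memmem() only touches pages the scan has reached; the kernel
                       faults each one in from the file as we go. */
                    if (memmem(p, linelen, tag, taglen))
                        hits++;
                    p = nl ? nl + 1 : end;
                }

                munmap((void *)base, (size_t)st.st_size);
                return hits;
            }

            int main(void)
            {
                /* "posts.csv" and "canine" are made-up stand-ins. */
                printf("%zu posts match\n", count_tag("posts.csv", "canine"));
                return 0;
            }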