• @InfiniteFlow · 11 months ago

    Yeah, I can’t imagine finger being widely deployed nowadays; what a huge security and privacy hole it would be!

    As for NNTP and email… I also remember using email relay proxies for FTP way back when! FTP access to some places was spotty at best, so I would email a GET request to a gateway server that fetched the file, uuencoded it, and sent it back split across multiple email messages. Not that files were big back then, but neither was it possible to attach more than a few hundred KB at once, if that.
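    For anyone who never saw one, the request was itself just an email full of commands. From memory it looked roughly like this (this assumes the DEC ftpmail gateway; the host and path here are placeholders, and the exact command set varied from gateway to gateway):

        To: ftpmail@decwrl.dec.com
        Subject: ftp request

        connect ftp.example.com
        binary
        uuencode
        chdir /pub/papers
        get video.tar
        quit

    The gateway would log in, fetch the file, uuencode it, chop it into mail-sized chunks, and send them back one by one.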

    In fact, I just remembered a funny story from when I was on Usenet. I used a client that ran on our VAX/VMS mainframe, and while browsing the newsgroups I would get a transfer-rate figure at the bottom of the screen. It was usually in the tens of bytes per second, sometimes a few hundred, and it often stalled outright. One day, out of the corner of my eye, I saw it showing “1”. My immediate thought, it being the most plausible interpretation: “Damn, one byte per second. It’s especially slow today!” And then I noticed the units: one KILOBYTE per second. It was the first time I had ever seen such a fast transfer rate!

    A few years later, in the mid-90s, I was trying to download a video that accompanied a conference paper. It was 6 MB in size, if memory serves, and it took me from Friday afternoon to Sunday to manage it: not only was the link slow, it kept getting interrupted and I had to start over numerous times. But I did manage in the end, and walked away with the file split across a few floppy disks 🙂.

    We’ve certainly come a long way since!

    • BoofStroke · 11 months ago

      I remember stitching multipart uuencoded files together by hand, lol. Then when OS/2 2.0 came out, IBM fully embraced the Internet of the time and had the best Usenet client that would gasp do all of that automatically and display the image or save the binary file you were after. WebEx was also the best web browser until Netscape took over.