• 0 Posts
  • 3 Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • I think it depends a lot on context.

    Wiping the dust off an old, low-spec ex-office PC, getting it barely functional, throwing a couple of RGB lights in it and trying to pass it off as a competent gaming rig for a high price would be completely unethical, I agree. But salvaging an old PC, actually refurbishing it into something useful for light day-to-day use, and selling it as such with a small markup to cover parts and labour seems completely fine to me.

    You and I may have the skills needed to take a worn-out old PC and breathe new life into it easily, but not everyone who’d be happy with a modest secondhand system can do that.

    As it happens, until just a few years ago I was running my high-end games on what started as a secondhand commodity PC with an i5-3470, without complaint.


  • Seriously, the best option is whatever matches the brightness of your screen to its surroundings. I read about this decades ago and it eliminated screen fatigue for me.

    If switching to dark mode works for you, great. When I worked on a PC in a well-lit office all day, I would open a program with a white background, hold up a blank sheet of white paper next to the screen, and adjust the screen brightness until the two looked about the same. I did this once or twice a week because I sat near a set of picture windows, so the ambient light shifted with the weather and the seasons, but in a room with more artificial light it would be “set and forget”.

    It seemed very dim at first, and several of my coworkers commented on it. It took a few days of resisting the urge to turn the brightness back up, but I got used to it and never went back.

    My PC at home is currently set up in a partially shaded corner of a well-lit room, so I put a dim little light bar behind the screen to make the wall match the brightness of the screen and the rest of my desk/room.


  • A couple of other commenters have given excellent answers already.

    But on the topic in general, I think that the more you learn about the history of computing hardware and programming, the more you realise that each successive layer added between the relays/tubes/transistors and the programmer existed mostly to reduce boilerplate coding overhead. The microcode in integrated CPUs took care of routing your inputs and outputs to where they needed to be, and of triggering the various arithmetic operations as desired. Assemblers calculated addresses and relative jumps for you, so you could use human-readable labels and worry less that a random edit to your code would break something because it had been moved.
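
    A rough sketch of that convenience, using C’s goto labels as a stand-in for assembler labels (sum_to is a made-up toy, not anything from a real assembler):

    ```c
    #include <stdio.h>

    /* "top" and "done" play the role of assembler labels: insert code
     * anywhere in this function and the jumps still land correctly,
     * because the labels are resolved to addresses for us. In raw
     * machine code, each jump targeted a fixed numeric address that
     * went stale whenever an edit moved the code. */
    int sum_to(int n) {
        int total = 0;
        int i = 1;
    top:                      /* label: the assembler computes this address */
        if (i > n) goto done; /* jump target named, not hand-calculated */
        total += i;
        i++;
        goto top;
    done:
        return total;
    }

    int main(void) {
        printf("%d\n", sum_to(10)); /* prints 55 */
        return 0;
    }
    ```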

    More complex low-level languages took care of the little dances that needed to be performed in order to do more involved operations with the limited number of CPU registers available, such as advanced conditional branching and maintaining the illusion of variables. Higher-level languages freed the programmer from having to keep such careful tabs on their own memory usage, and helped to improve maintainability by managing abstract data and code structures.
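
    For instance, an operation that reads as atomic in a higher-level language expands into one of those little dances underneath; a minimal sketch in C (clamp is my own toy example):

    ```c
    #include <stdio.h>

    /* What reads as three simple lines becomes, at the CPU level, a
     * sequence of compares and conditional jumps, with the compiler
     * deciding which of value/lo/hi occupy the limited registers at
     * any moment -- bookkeeping the programmer once did by hand. */
    int clamp(int value, int lo, int hi) {
        if (value < lo) return lo; /* compare + conditional branch */
        if (value > hi) return hi; /* compare + conditional branch */
        return value;              /* the "variable" was just a register all along */
    }

    int main(void) {
        printf("%d %d %d\n",
               clamp(5, 0, 10), clamp(-3, 0, 10), clamp(42, 0, 10)); /* prints 5 0 10 */
        return 0;
    }
    ```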

    But setting aside the massive improvements in storage capacity and execution speed, today’s programming environments don’t really do anything that couldn’t have been implemented on those ancient systems, given enough effort and patience. It’s all still just moving numbers around with basic arithmetic and logic. Just a whole lot of it, really, really fast.

    The power of modern programming environments lies in how they allow us to properly implement and maintain a staggering amount of complex minutiae with relative ease. Such ease, in fact, that sometimes we even forget that the minutiae are there at all.