When people ask me what artificial intelligence is going to do to jobs, they’re usually hoping for a clean answer: catastrophe or overhype, mass unemployment or business as usual. What I found after months of reporting is that the truth is harder to pin down—and that our difficulty predicting it may be the most important part of the story.

https://web.archive.org/web/20260210152051/www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/

In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting.

The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The new machines weren’t just spinning cotton or shaping steel. They were operating at speeds that the human body—an elegant piece of engineering designed over millions of years for entirely different purposes—simply wasn’t built to match. The owners knew this, just as they knew that there’s a limit to how much misery people are willing to tolerate before they start setting fire to things.

Still, the machines pressed on.

  • ruuster13@lemmy.zip · 1 day ago

    Moore’s Law isn’t quite dead. And quantum computing is a generation away. Computers will continue getting exponentially faster.

    • bunchberry@lemmy.world · 15 hours ago

      Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they found a clever way to use the space more effectively, they would say it was as if they had packed more transistors into the same area, and so they would call it a smaller-nanometer process node, even though, quite literally, they had not shrunk the transistors or fit more of them onto the die.

      This actually started to happen around 2015. These clever tricks were always exaggerated, because there is no objective metric to say that a particular trick on a 20nm node really delivers performance equivalent to a 14nm node, which left huge leeway for exaggeration. In reality, actual performance gains have slowed down drastically since then, and the cracks have really started to show when you look at Nvidia’s 5000-series GPUs.

      The 5090 is only super powerful because the die is larger, so it fits more transistors in total, not because Nvidia actually fit more per square millimeter. If you account for die size, it’s actually less efficient than the 4090 and significantly less efficient than the 3090. To keep up the appearance of upgrades, Nvidia has been releasing AI frame-rendering software for its GPUs and artificially locking it to the newer series. The program Lossless Scaling proves that you can, in principle, run AI frame rendering on any GPU, even ones from over a decade ago, and that Nvidia locking it behind specific cards is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.

      Chip improvements have slowed down drastically for over a decade now, and the industry just keeps trying to paper it over. A rough per-area comparison is sketched below.
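      One rough way to check the die-size part of this claim is to compare reported transistor counts against die area. The sketch below uses approximate, publicly reported figures (die areas and transistor counts may be off by a few percent, and transistor density is only one proxy for “efficiency”), so treat it as a back-of-the-envelope illustration rather than a benchmark.

      ```python
      # Back-of-the-envelope transistor-density comparison for recent Nvidia
      # flagship dies. Figures are approximate, publicly reported values:
      # (die area in mm^2, transistor count in billions).
      gpus = {
          "RTX 3090 (GA102)": (628.4, 28.3),
          "RTX 4090 (AD102)": (608.5, 76.3),
          "RTX 5090 (GB202)": (750.0, 92.2),
      }

      for name, (die_mm2, transistors_b) in gpus.items():
          density = transistors_b * 1000 / die_mm2  # million transistors per mm^2
          print(f"{name}: ~{density:.0f} MTr/mm^2 on a ~{die_mm2:.0f} mm^2 die")

      # The jump from the 3090 to the 4090 reflects a real node change; the
      # 4090-to-5090 step gains transistors mostly from a larger die, not a
      # denser process.
      ```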

      • Holytimes@sh.itjust.works · 6 hours ago

        To be fair, Lossless Scaling’s frame generation has a number of shortcomings, performance issues, and quality problems compared to Nvidia’s offerings.

        While it’s “possible” to run frame gen on any hardware, the quality and performance are definitely a sizeable downgrade.

    • Kairos@lemmy.today · 1 day ago

      No.

      We know how they work. They’re purely statistical models. They don’t create; they recreate training data based on how well it was stored in the model.
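      A deliberately tiny caricature of that idea: the bigram “model” below just counts which word follows which in a two-sentence training corpus and samples continuations in proportion to those stored counts. It is not how modern LLMs are built (they are vastly larger and generalize far better), only an illustration of a purely statistical generator replaying its training data.

      ```python
      import random
      from collections import Counter, defaultdict

      # Tiny training corpus; the "model" is just a table of bigram counts.
      corpus = "the mill ran all night . the mill ran out of cotton .".split()

      bigrams = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          bigrams[prev][nxt] += 1  # count how often nxt follows prev

      def generate(start, length=8):
          word, out = start, [start]
          for _ in range(length):
              options = bigrams[word]
              if not options:
                  break
              # Sample the next word in proportion to its stored frequency.
              word = random.choices(list(options), weights=list(options.values()))[0]
              out.append(word)
          return " ".join(out)

      print(generate("the"))  # e.g. "the mill ran out of cotton . the mill"
      ```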

    • squaresinger@lemmy.world · 1 day ago

      The problem is that hardware requirements scale exponentially with AI performance. Just look at how RAM and compute consumption have increased compared to the performance of the models.

      Anthropic recently announced that, since the performance of a single agent isn’t good enough, it will just run teams of agents in parallel on single queries, thus multiplying the hardware consumption.

      Exponential growth can only continue for so long.
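      As a toy illustration of why that compounding cannot go on indefinitely: the sketch below compounds a hypothetical yearly doubling of per-model hardware demand (multiplied once by a parallel agent team) against a slower assumed growth in available capacity. Every number in it is a made-up assumption chosen to show the shape of the curves, not a measurement.

      ```python
      # Toy model: exponentially growing hardware demand vs. slower-growing supply.
      # All rates below are illustrative assumptions, not measured values.
      agents_per_query = 5          # hypothetical parallel "team" of agents
      demand_growth_per_year = 2.0  # assume per-model hardware demand doubles yearly
      supply_growth_per_year = 1.3  # assume installed capacity grows 30% per year

      demand = 1.0 * agents_per_query  # relative hardware cost of one query today
      supply = 1.0
      for year in range(1, 11):
          demand *= demand_growth_per_year
          supply *= supply_growth_per_year
          print(f"year {year:2d}: demand x{demand:7.1f}  supply x{supply:5.1f}  gap x{demand / supply:6.1f}")

      # Under these assumptions the gap widens by roughly 1.5x every year,
      # which is the sense in which exponential growth cannot continue for long.
      ```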