A Harvard Business Review study set out to answer the question: what will employees do if AI saves them time at work? The answer: more work.

  • cmbabul@lemmy.world · 45 up · 16 hours ago

    I’ve literally never used it for work at all. The C-suite is starting to push it more, but there’s not much use for it. Definitely not working harder.

    • stealth_cookies@lemmy.ca · 20 up · 16 hours ago

      I honestly used AI for something other than summarizing a meeting yesterday. It failed so miserably that I’m really not inclined to use it again. Maybe I was wrong to assume it could summarize a simple graph into a table for me.

      • brsrklf@jlai.lu · 28 up, 1 down · 15 hours ago

        A co-worker not long ago had AI (fucking Copilot, in this case) randomly offer to analyze a spreadsheet report with a list of users.

        There wasn’t any specific need to do this right then, but, curious, he let it do its thing. The AI correctly identified it as a list of user accounts and said it might be able to count them. Which would be ridiculously easy to do, since it’s just a correctly formatted spreadsheet with each row being one user.

        So he says OK, count them for me. The AI apologizes: it can’t process the file because it’s too big to be passed fully as a parameter in a Python script (OK, why and how are you doing that?), but says it might be able to process the list if it’s copy-pasted into a text file.

        My co-worker is like, at that point, why fucking not? and does the thing. The AI still fails anyway and apologizes again.

        We’re paying for that shit. Not for Copilot specifically, but it was part of the package. Laughing at how it fails at simple tasks it set up for itself is slightly entertaining, I guess. Thanks, Microsoft.
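For the record, the count the model couldn’t manage really is a one-liner once the report is exported to CSV. A minimal sketch, with made-up stand-in rows since the real file isn’t available here:

```python
import csv
import io

# Stand-in for the real user report, which we don't have:
# one header row, then one row per user account.
csv_data = """username,email
alice,alice@example.com
bob,bob@example.com
carol,carol@example.com
"""

reader = csv.DictReader(io.StringIO(csv_data))
user_count = sum(1 for _ in reader)  # header row is consumed by DictReader
print(user_count)  # 3
```

No context window, no copy-pasting into text files.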

        • Jesus_666@lemmy.world · 8 up · 13 hours ago

          Oh yeah, same here except with a self-hosted LLM. I had a log file with thousands of warnings and errors coming from several components. Major refactor of a codebase in the cleanup phase. I wanted to have those sorted by severity, component, and exception (if present). Nothing fancy.

          So, hoping I could get a quick solution, I passed it to the LLM. It returned an error. Turns out that a 14 megabyte text file exceeds the context size. That server with several datacenter GPUs sure looks like a great investment now.

          So I just threw together a script that applied a few regexes. That worked, no surprise.
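A minimal sketch of what such a regex script might look like, assuming a simple `LEVEL [component] message` line format (the actual log format isn’t shown, and these sample lines are invented):

```python
import re

# Hypothetical sample lines; the real file was a 14 MB refactoring log.
log = """\
WARN [parser] Deprecated call in tokenize()
ERROR [storage] java.io.IOException: disk full
WARN [storage] Slow flush detected
ERROR [parser] java.lang.NullPointerException in parse()
"""

# Assumed format: LEVEL [component] message, with an optional
# fully-qualified exception name somewhere in the message.
line_re = re.compile(r"^(WARN|ERROR)\s+\[(\w+)\]\s+(.*)$")
exc_re = re.compile(r"\b([\w.]+(?:Exception|Error))\b")

entries = []
for line in log.splitlines():
    m = line_re.match(line)
    if not m:
        continue  # skip lines that don't look like log entries
    level, component, message = m.groups()
    exc = exc_re.search(message)
    entries.append((level, component, exc.group(1) if exc else "", message))

# Sort by severity (ERROR first), then component, then exception.
severity = {"ERROR": 0, "WARN": 1}
entries.sort(key=lambda e: (severity[e[0]], e[1], e[2]))

for level, component, exc, _ in entries:
    print(level, component, exc or "-")
```

A few dozen lines like this handle a 14 MB file without blinking, which is rather the point.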

      • Tollana1234567@lemmy.today · 2 up · 12 hours ago

        I used it for the first time a few weeks ago. I can’t trust the results, since it doesn’t verify the actual sources it gets its numbers/costs from. It was about an ACA plan.

      • ImgurRefugee114@reddthat.com · 5 up, 1 down · edited · 15 hours ago

        AI has a lot of pitfalls. It helps to know how these models work: tokens, context, training, harnesses and tools, and so on. Then nonsense like this makes a lot more sense; same for the “count the R’s in strawberry” type failures. (For the record, I later told it to use JavaScript to manipulate strings to accomplish the task, and it did a much better job. Still needed touch-ups, of course.)

        They work best when you already know how to accomplish whatever you’re asking them to do, and can point them in a direction that leverages their strengths and avoids their weaknesses (often tied to perception and dexterity). Something like ASCII art is nearly a worst-case scenario, aside from maybe asking a general-purpose LLM to do math.
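The strawberry failure mentioned above is a tokenization artifact: the model sees tokens rather than individual characters, so it can’t reliably count letters, while ordinary string code can. The commenter had the model emit JavaScript; the same idea in Python:

```python
# Counting letters is trivial for code, hard for a token-based LLM.
word = "strawberry"
r_count = word.lower().count("r")
print(r_count)  # 3
```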

      • unnamed1@feddit.org · 5 up, 7 down · 15 hours ago

        You must have done something wrong. These cases actually work extremely well, like it or not.

        • Passerby6497@lemmy.world · 4 up · 8 hours ago

          Yeah, after all, LLMs are known for their ability to do things correctly and not make up tons of random bullshit.