Dutch lawyers increasingly have to convince clients that they can’t rely on AI-generated legal advice because chatbots are often inaccurate, the Financieele Dagblad (FD) found when speaking to several law firms. A recent survey by Deloitte showed that 60 percent of law firms see clients trying to perform simple legal tasks with AI tools, hoping to achieve a faster turnaround or lower fees.

  • qwestjest78@lemmy.ca · ↑12 · 2 hours ago

    I find it useless for even basic tasks. The fact that some people follow it blindly like a god is so concerning.

  • r00ty@kbin.life · ↑8 · 2 hours ago

    I think my least favourite thing about AI is when customers tell me something won’t take as long as I say it will, if only I’d use AI. Look, if AI can do it, why do you need me?

    The only reason I’m not out of a job (yet) is that, apparently, AI cannot do everything I can. The very second it can, I’ll be long gone.

    So I am on the side of the lawyers here. For the first and only time.

    • ZeDoTelhado@lemmy.world · ↑3 · 1 hour ago

      Law in particular is such a gnarly subject that you really want someone who knows what they are talking about. And even then, they can be wrong too.

  • melfie@lemy.lol · ↑1 · 1 hour ago

    With a coding agent, it’s wrong a lot and the code is usually terrible, but it can eventually produce working code when proper tests create a feedback loop (sketched below).

    How does that go with legal work? Well, turns out that was mostly made-up bullshit and the judge gave me a jail sentence for contempt of court, but once I get out, I’ll generate some more slop that will hopefully go over better next time.
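
    The feedback loop described above can be made concrete. Here is a minimal sketch in Python: `slugify` and its module `slug_util` are hypothetical stand-ins for agent-generated code, and the pytest suite is the ground truth that either passes or feeds its failures back into the next prompt.

    ```python
    # test_slugify.py - a minimal sketch of the "tests as feedback loop" idea.
    # `slug_util.slugify` is a hypothetical agent-generated function; this
    # suite is the acceptance gate the agent's output must pass.
    import pytest

    from slug_util import slugify  # hypothetical module written by the agent

    @pytest.mark.parametrize("raw,expected", [
        ("Hello World", "hello-world"),
        ("  spaced  out  ", "spaced-out"),
        ("Symbols & Stuff!", "symbols-stuff"),
    ])
    def test_slugify(raw, expected):
        # When the generated code is wrong, the failure output goes back
        # into the agent as the next prompt. That is the feedback loop.
        assert slugify(raw) == expected
    ```

    Legal work has no equivalent of a failing test, which is the point of the joke: nothing short of a judge tells you the output was wrong.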

  • pinball_wizard@lemmy.zip · ↑6 · 3 hours ago

    Yes please. More folks need to go all in on the idiocy of trusting an AI for legal advice. Let’s get this public lesson over with.

    This is one of the cases where they can simply be a hilarious example for the rest of us, rather than getting a bunch of the rest of us killed.

  • Archer@lemmy.world · ↑2 · 2 hours ago

    Seems pretty simple to me: you pay lawyers so that you don’t have to pay even more by getting legally screwed over. Why try to cheap out on the insurance policy against bigger losses and risk it all collapsing?

  • Eternal192@anarchist.nexus · ↑2 ↓1 · 3 hours ago

    Honestly, if you are that dependent on A.I. now, while it’s still in a test phase, then you are already lost. A.I. won’t make us smarter; if anything, it has the opposite effect.

    • kescusay@lemmy.world · ↑2 · 1 hour ago

      I’m watching that happen in my industry (software development). There’s a massive pressure campaign by damn near every employer in software dev to use LLM tools.

      It’s causing developers to churn out terrible, fragile, unmaintainable code at a breakneck pace, while they’re actively forgetting how to code for themselves.

      • bluGill@fedia.io · ↑2 · 1 hour ago

        I find AI can turn out code fast, but then I spend a week or more turning it into good code, so the time saved isn’t nearly as much as it looks. I’d be embarrassed to call it my own, and as a professional I can’t allow garbage.

  • osanna@thebrainbin.org · ↑4 ↓2 · 4 hours ago

    I can’t stand AI, but the few times I’ve used it, I’ve used it as a starting point. Once it gives me advice, I then go and confirm that with other sources. But I don’t use AI much.

    • silverneedle@lemmy.ca · ↑6 ↓3 · edited · 48 minutes ago

      Let’s consider what you are doing on a purely abstract level.

      1. You prompt a generative large language model with what to do.
      2. You receive a set of information whose veracity you cannot count on in any practical sense.
      3. You go and confirm this information. Likely you are typing similar prompts into your search engine of choice, getting answers from experts that are more or less guaranteed to be relevant and useful.
      4. Then you act accordingly.

      We could also do the following:

      1. You have an idea/question that you search. You have keywords to type into forums. You get the relevant information. If need be, you make a post on a questions board.
      2. Then you act accordingly.
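
      The two workflows compared above can be put side by side in code. This is a sketch only: `ask_llm` and `search_web` are hypothetical stand-ins, not real APIs.

      ```python
      # A sketch of the "prompt, then verify" workflow versus searching directly.
      # ask_llm() and search_web() are hypothetical stand-ins, not real APIs.

      def ask_llm(prompt: str) -> str:
          """Return an answer whose veracity cannot be counted on."""
          raise NotImplementedError("stand-in for a language model call")

      def search_web(query: str) -> list[str]:
          """Return snippets from sources written by people."""
          raise NotImplementedError("stand-in for a search engine")

      def llm_then_verify(question: str) -> str | None:
          candidate = ask_llm(question)                     # steps 1-2: unverified output
          snippets = search_web(f"{question} {candidate}")  # step 3: confirm elsewhere
          confirmed = any(candidate.lower() in s.lower() for s in snippets)
          return candidate if confirmed else None           # step 4: act only if confirmed

      def search_directly(question: str) -> list[str]:
          # The alternative workflow: skip the model, go straight to sources.
          return search_web(question)
      ```

      Written out this way, the first workflow is the second workflow plus an extra unverified step in front of it, which is the argument being made.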

      • Null User Object@lemmy.world · ↑1 · 17 minutes ago

        > You have keywords to type into forums.

        That’s great when you do, and you usually do, but sometimes you don’t.

        Case in point: a while back I was creating a 3D model for my 3D printer. It had a part that was essentially identical to a particular unusual pipe fitting that I had seen and knew existed but didn’t know the name of (spoiler: I’m not a plumber), and I wanted to give the sketch in the modeling software a proper name for the thing.

        Just trying keywords that sort of described its shape in search engines was useless. Search engines would focus on the “pipe fitting” part of the keywords and just return links to articles about plumbing. Then I asked an LLM, and it responded with, “That sounds like X.” Then I checked that it wasn’t just making it up by searching for “X” and found online stores selling the very thing I was trying to figure out the name of.
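
        That reverse-lookup pattern (describe the thing, get a candidate name, then verify the name against real sources) is easy to reproduce. A minimal sketch, assuming the `openai` Python SDK with an API key in the environment; the model name and the description are illustrative, not the actual fitting in question.

        ```python
        # Sketch: ask an LLM to name a thing from a description, then verify
        # the candidate name yourself before trusting it.
        # Assumes the `openai` SDK and OPENAI_API_KEY set in the environment;
        # the model name and the description are illustrative.
        from openai import OpenAI

        client = OpenAI()

        description = (
            "What is the name of the pipe fitting where the branch joins "
            "the main run at a shallow angle instead of 90 degrees?"
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": description}],
        )
        candidate = response.choices[0].message.content
        print(f"Candidate name: {candidate}")

        # The crucial second step happens outside the model: search for the
        # candidate name. If stores and catalogs use the same term for the
        # same shape, the model wasn't making it up.
        ```

        The model only supplies a searchable keyword; the actual verification still happens against primary sources, which is exactly the workflow described above.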