Dutch lawyers increasingly have to convince clients that they can’t rely on AI-generated legal advice because chatbots are often inaccurate, the Financieele Dagblad (FD) found after speaking with several law firms. A recent Deloitte survey showed that 60 percent of law firms see clients trying to perform simple legal tasks with AI tools, hoping for a faster turnaround or lower fees.

  • elvis_depresley@sh.itjust.works · 2 points · 18 minutes ago

    If a chatbot gives you bad advice, it’s your own responsibility. If a lawyer gives you bad advice, it’s the lawyer’s responsibility.

  • Phoenixz@lemmy.ca · 2 points · 1 hour ago

    That’s not a new thing; doctors have had this for at least a decade with WebMD.

    No, you don’t have cancer

  • qwestjest78@lemmy.ca · 33 points · 5 hours ago

    I find it useless for even basic tasks. The fact that some people follow it blindly like a god is so concerning.

    • ageedizzle@piefed.ca · 7 points · 3 hours ago

      I work in a health-care-adjacent industry and you’d be surprised how many people blindly follow LLMs for medical advice

    • a4ng3l@lemmy.world · 4 points · 3 hours ago

      It’s been doing wonders helping me improve the materials I produce so that they better fit certain audiences. I can also use it to spot missing points and inconsistencies against the ton of documents we have in my shop when writing something. It’s quite useful as a sparring partner so far.

      • mech@feddit.org · 1 point · 11 minutes ago

        A rule of thumb is that AI can be useful if you use it for things you already know.
        It can save time, and if it produces shit, you’ll notice.
        Don’t use it for things you know nothing about.

        • a4ng3l@lemmy.world · 1 point · 8 minutes ago

          LLMs specifically, because AI as a range of practices encompasses a lot of things where the user can be slightly more dumb.

          You’re spot on in my opinion.

      • The_Almighty_Walrus@lemmy.world · 3 points · 2 hours ago

        It’s great when you have basic critical thinking skills and can use it like a tool.

        Unfortunately, many people don’t have those and just use AI as a substitute for their own brain.

        • a4ng3l@lemmy.world · 2 points · 2 hours ago

          Yeah well, the same applies to a lot of tools… I’m not certified to fly a plane, and look at me not flying one either… but I’m not shitting on planes…

            • a4ng3l@lemmy.world · 1 point · 1 hour ago

              If you can’t fly a plane, chances are you’ll crash it. If you can’t use LLMs, chances are you’ll get shit out of them… the outcome of using a tool is directly correlated with one’s ability?

              Sounds logical enough to me.

              • DrunkenPirate@feddit.org · 1 point · 14 minutes ago

                Sure. However, the output of an LLM always looks plausible. And if you aren’t a subject matter expert, the plausible result looks right. That’s the difference: it’s hard to spot the wrong things (even for experts).

                • a4ng3l@lemmy.world · 1 point · 10 minutes ago

                  So are a speedometer and an altimeter, until you reaaaaaaaaly need to understand them.

                  I mean, it all boils down to the proper tool with the proper knowledge and ability. It’s slightly exacerbated by the apparent simplicity, but if you look at it as a tool, it’s no different.

  • DGen@piefed.zip · 1 point · 3 hours ago

    People don’t make that distinction. You can use it for help, guidance, or whatever.

    But never, especially with law, trust it. Fact-check.

    Well. But look at this cat playing a trombone.

  • r00ty@kbin.life · 14 points · 6 hours ago

    I think my least favourite thing about AI is when customers tell me something won’t take as long as I say, if I use AI. Look, if AI can do it why do you need me?

    The fact I’m not out of a job (yet) is because apparently AI cannot do everything I can. The very second it can I’ll be long gone.

    So I am on the side of the lawyers here. For the first and only time.

    • ZeDoTelhado@lemmy.world · 8 points · 4 hours ago

      Law in particular is such a gnarly subject that you really want someone who knows what they’re talking about. And even then, they can be wrong too.

  • melfie@lemy.lol · 4 points · 4 hours ago

    With a coding agent, it’s wrong a lot and the code is usually terrible, but it can get to working code because proper tests create a feedback loop.
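    That tests-as-feedback-loop idea can be sketched roughly like this; the candidate list is a hypothetical stand-in for an agent’s successive attempts (a real agent would call a model, not a hardcoded list), and `run_tests` is an illustrative spec, not any real API:

```python
# Sketch of a test-driven feedback loop: a test suite scores each
# candidate implementation, and the loop retries until one passes.
# The candidates stand in for successive (mostly wrong) LLM attempts.

def run_tests(func) -> bool:
    """The feedback signal: does the candidate satisfy the spec (addition)?"""
    try:
        return func(2, 3) == 5 and func(-1, 1) == 0
    except Exception:
        return False

candidates = [
    lambda a, b: a * b,  # attempt 1: wrong operation, fails the tests
    lambda a, b: a - b,  # attempt 2: still wrong, fails the tests
    lambda a, b: a + b,  # attempt 3: passes
]

accepted = None
for attempt, candidate in enumerate(candidates, start=1):
    if run_tests(candidate):
        accepted = candidate
        break

print(f"accepted on attempt {attempt}")  # → accepted on attempt 3
```

    Without the tests there is no signal telling the loop when to stop, which is the commenter’s point about why the same trick is harder to pull off in legal work.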

    How does that go with legal work? Well, turns out that was mostly made-up bullshit and the judge gave me a jail sentence for contempt of court, but once I get out, I’ll generate some more slop that will hopefully go over better next time.

  • pinball_wizard@lemmy.zip · 7 points · 7 hours ago

    Yes, please. More folks need to go all in on the idiocy of trusting an AI for legal advice. Let’s get this public lesson over with.

    This is one of the cases where they can simply be a hilarious example for the rest of us, rather than getting a bunch of the rest of us killed.

  • Eternal192@anarchist.nexus · +5/−1 · 6 hours ago

    Honestly, if you are that dependent on A.I. now, while it’s still in a test phase, then you are already lost. A.I. won’t make us smarter; if anything, it has the opposite effect.

    • kescusay@lemmy.world · 7 points · 5 hours ago

      I’m watching that happen in my industry (software development). There’s this massive pressure campaign by damn near everyone’s employers in software dev to use LLM tools.

      It’s causing developers to churn out terrible, fragile, unmaintainable code at a breakneck pace, while they’re actively forgetting how to code for themselves.

      • bluGill@fedia.io · 5 points · 5 hours ago

        I find AI can turn out code fast, but then I spend a week or more turning it into good code, so the time saved isn’t nearly as much. I’d be embarrassed to call it my own, and as a professional I can’t allow garbage.

  • Archer@lemmy.world · 3 points · 6 hours ago

    Seems pretty simple to me, you pay lawyers so that you don’t have to pay even more by getting legally screwed over. Why try and cheap out on the insurance policy against bigger losses and risk it all collapsing?

    • sqgl@sh.itjust.works · 1 point · edited · 1 hour ago

      [edit: sorry I ended up on a tangent]

      Lawyers are no guarantee. They are sloppy because they have no skin in the game, and they usually get paid regardless (although some have “uplift” fees which reward them for winning).

      It is like hiring builders for your renovation. You still have to keep an eye on them and even tell them how to do their job, which of course is always a tense situation. If you develop a good relationship you can work as a team (requires a lawyer who is not insecure).

      Best avoid situations which need a lawyer. Do not litigate lightly. There is no such thing as a watertight case. If you get a corrupt judge they can outright lie, there is no point appealing, and you can be gagged from telling anyone (even your wife, let alone a politician or journalist).

  • osanna@thebrainbin.org · +5/−2 · 7 hours ago

    I can’t stand AI, but the few times I’ve used it, I’ve used it as a starting point. Once it gives me advice, I then go and confirm that with other sources. But I don’t use AI much.

    • sqgl@sh.itjust.works · 1 point · 1 hour ago

      I had it cite a case which didn’t exist. It was perfect for what I was fighting (it tends to figure out what you want to hear then makes up stuff to satisfy you).

      When I tried to search for a phrase from the case (hoping it just gave the wrong citation) it said there was no such case with that phrase.

      I asked why it said there was such a case earlier. It confessed that AI sometimes hallucinates and promised to try better in future.

    • silverneedle@lemmy.ca · +8/−4 · edited · 4 hours ago

      Let’s consider what you are doing on a purely abstract level.

      1. You prompt a generative large language model with what you want.
      2. You receive a set of information whose veracity you cannot count on in any practical sense.
      3. You go and confirm this information, likely by inputting similar prompts into your search engine of choice, giving you answers from experts that are more or less guaranteed to be relevant and useful.
      4. Then you act accordingly.

      We could also do the following:

      1. You have an idea/question that you search. You have keywords to type into forums. You get the relevant information. If need be you make a post on a questions board.
      2. Then you act accordingly.

      • Null User Object@lemmy.world · +4/−1 · 4 hours ago

        You have keywords to type into forums.

        That’s great when you do, and you usually do, but sometimes you don’t.

        Case in point: a while back I was creating a 3D model for my 3D printer. It had a part that was essentially identical to a particular unusual pipe fitting that I had seen and knew existed, but didn’t know the name of (spoiler: I’m not a plumber), and I wanted to give the sketch in the modeling software a proper name for the thing.

        Just trying keywords that sort of described its shape in search engines was useless. Search engines would focus on the “pipe fitting” part of the keywords and just return links to articles about plumbing. Then I asked an LLM, and it responded with, “That sounds like X.” Then I checked that it wasn’t just making it up by searching for “X” and found online stores selling the very thing I was trying to figure out the name of.

        • ageedizzle@piefed.ca · 2 points · 3 hours ago

          Yes, LLMs are good at finding terms and phrases that you can’t remember but are on the tip of your tongue.