

I am not saying this will work for you, but I am leaving it here for others as it does work for me when doing mass edits.
https://rhizomehouse.org/mutualaid/ is a good list of people you can help.
https://nlgmn.org/mass-defense/
https://www.wfmn.org/funds/immigrant-rapid-response/
A high-level overview of organizing a rapid response, with links to more detailed resources. https://southerncoalition.org/resources/rapid-response-101/
Good general advice on organizing, and also a good way to find likely-aligned groups near you. https://www.fiftyfifty.one/organizer-resources
Feel free to reach out for any other resources.


Curious to see if another LeakBase will pop up around this. I’m already hearing rumors that a lot of it was AI training data, but that’s unfounded squiddy speak on social media.


Yeah, I don’t sort or tag with Darktable; I only edit.


Why not just use Darktable?


Darktable is my go to.


“Googling” used to get you to the needed IRS documentation, but now, with Gemini’s help, you’re just being lied to.
If you need tax help, call your local library; many offer free tax assistance. Also, if it seems like a tax dodge, don’t take the deduction. Don’t outsource your brain to an LLM. You’ve done your taxes before without a GPT; you can do it again.


Have fun with that audit.


The problem is message previews, not push notifications. Which is funny, because Meredith addresses that in the thread you posted.


Message preview notifications are handled similarly on iOS and Android. The issue isn’t people seeing the notification; it’s that the content of the message is passed to the phone’s launcher unencrypted.
This skinny bitch …


Thing 0: you can use `qm terminal`
Thing 1: use ZFS, no need for LVM
Thing 2: see above for ZFS, then `zfs set sharenfs`
Thing 3: SPICE/VNC is built in
Thing 4: you should check out QubesOS if you haven’t already…
You should especially look at https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_13_Trixie
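For things 0–2, a minimal command sketch, assuming a Proxmox host with a ZFS pool already set up. The VM ID `100` and the `tank/shared` dataset name are placeholders, not anything from the original thread:

```shell
# Thing 0: open a serial console to VM 100
# (the VM needs a serial device configured for this to attach)
qm terminal 100

# Things 1-2: create a dataset on an existing ZFS pool and export it over NFS,
# no LVM layer needed
zfs create tank/shared
zfs set sharenfs=on tank/shared
# or restrict it, e.g.: zfs set sharenfs="rw=@10.0.0.0/24" tank/shared
```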


The banana factory, and the fish pedicure are my favorites


I use WSL at work; I cap its max RAM and leave only one CPU for the host OS. It’s still a nightmare. This upcoming week I’m finally deploying Red Hat IdM so that I and others can use our smartcards and the ancient AD infra to get Linux workstations and jumpboxes. Microsoft did me a massive favor by raising our licensing prices, so now it’s cheaper to replace Azure AD.
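For anyone wanting to set the same caps, they live in `%UserProfile%\.wslconfig` on the Windows side; the numbers below are illustrative, not my exact values:

```ini
; %UserProfile%\.wslconfig
[wsl2]
memory=8GB      ; hard cap on WSL2's RAM; pick a value that leaves headroom for Windows
processors=7    ; on an 8-core machine, this leaves one core for the host OS
```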


Always buy refurbished laptops, including MacBooks.


You scoff, but this is already being done in China. They desolder good chips from bad cards and add them to a mule card.


Almost like an LLM wrote it…


I mean, what you’re proposing was the initial push of GPT-3. All the experts said these GPTs would only hallucinate more with more resources, and that they’d never do anything more than repeat their training data as word salad posing as novelty. And on a very macro scale, they were correct.
The scaling problem
https://arxiv.org/abs/2001.08361
The scaling hype
https://gwern.net/scaling-hypothesis
Ultimately, hype won out.


“will never achieve AGI or anything like it”
On this we absolutely agree. I’m targeting a more efficient interactive wiki, essentially; something you could package and run on local consumer hardware. Similar to https://codeberg.org/BobbyLLM/llama-conductor, but it would be fully transformer-native, and there would only need to be one LLM for interaction with the end user. Everything else would be done in machine code behind the scenes.
I was unclear, I guess. I was talking about injecting other models, running their prediction pipeline for the specific topic, and then dropping them out of the window to be replaced by another expert, with that swapping handled by a larger model that runs the context window. Not nested models, but interchangeable ones, chosen depending on the vector of the tokens. So a QwQ RAG trained on Python talking to a Qwen3 quant4 RAG trained on Bash, wrapped in DeepSeek-R1 as the natural-language output, to answer the prompt “How do I best package a python app with uv on a linux server to run a backend for a …”
Currently this type of workflow is often handled with MCP servers called from some sort of harness, and as I understand it those still use natural language between steps, since they are all separate models. My proposal takes the stagnation in the field and turns it into interoperability.
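To make the shape of that concrete, here’s a toy sketch of the routing layer: a coordinator owns the context window and swaps topic experts in and out per prompt. The model names come from my comment above; the keyword matching is just a stand-in for a real embedding-vector lookup, and none of this is running code from an actual system:

```python
# Hypothetical expert registry: topic -> model name.
# In a real build the key would be a domain embedding, not a keyword.
EXPERTS = {
    "python": "qwq-rag-python",    # QwQ RAG trained on Python
    "bash": "qwen3-q4-rag-bash",   # Qwen3 quant4 RAG trained on Bash
}
COORDINATOR = "deepseek-r1"        # wraps everything as the natural-language output

def route(prompt: str) -> list[str]:
    """Pick which experts to swap into the context window for this prompt.

    A real implementation would compare the prompt's token vectors against
    each expert's domain vector; keyword matching is a placeholder.
    """
    chosen = [name for key, name in EXPERTS.items() if key in prompt.lower()]
    # The coordinator always runs last to produce the final answer.
    return chosen + [COORDINATOR]

print(route("How do I best package a python app with uv on a linux server?"))
```

The point of the sketch is only that the experts are interchangeable within one pipeline rather than separate models talking over natural language.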
squints at your username