What a trash clickbait headline. That’s not how the phrase “saying the quiet part out loud” works. This isn’t a secret, it’s not unspoken, and it certainly doesn’t reveal some underlying motive.
I actually do care about AI PCs. I care in the sense that it is something I want to actively avoid.

Stolen from BSKY
Weirdly, Dell always seems to understand what normal users want.
The problem is normal users have beyond-low expectations, no standards, and are ignorant of almost everything tech-related.
They want cheap, easy-to-use computers that require no service, and if there is a problem, a simple phone number to call for help.
Dell has optimized for that. So hate ’em or not, while their goods have gone to shit quality-wise, they understand their market and have done extremely well in servicing it.
Thus I am not surprised at all that Dell understood this. If anything, I would have been more surprised if they didn’t.
I think they all understand what we want (broadly), they just don’t care, because what they want is more important, and they know consumers will tolerate it.
They care, they just care differently. What they want is money, so they’re trying to find the maximum price they can charge for the minimum amount of product.
If they can dress that up as “caring for the consumer” it’s a bonus.
You’re not thinking about the bigger picture. They can sell you an irreparable device, design it to fail after a short time so you have to buy another one, upsell you on useless AI shit to pump up investments, and load it with a bunch of invasive software so they can collect and sell information about you. None of this has anything to do with what you, the consumer, want, and they know that, but they don’t care, because it’s not what makes them money.
What companies actually make decent mid-range laptops these days?
Framework makes some very high quality laptops. Have one myself.
I’ve got a framework 13 but I wouldn’t suggest them for casual users. They’re very expensive for the specs.
How is their site (and product) as an option for your non-techy mum? Also does shipping end up being exorbitant if you’re not in the same country they’re based in?
They have a fully prebuilt option for every computer. Which works well for non-techy people.
No clue what shipping is like in your country. Was fine for me in the US.
Seconded on Framework. I’ve got the more performant (but heavier, larger, and more expensive) 16, but for most people the 13 will be perfectly usable. The newer 12 model also seems pretty decent and is a bit cheaper.
They’ve kept their RAM prices relatively stable too, but if you already have other RAM lying around you can just bring your own and save yourself the money. Same for the SSD.
The main downside is they’re gonna be quite expensive upfront compared to alternatives, so I wouldn’t recommend them to someone price-sensitive, especially in the current economy.
The main benefit is that since they’re so modular and upgradable, you’ll save money down the line on repair services, replacement parts, or just the cost of buying a whole new device because one component broke that they don’t sell replacements for.
And yet just before looking at Lemmy I got an ad for the Dell AI laptop on YouTube (on my TV, still need to get a piHole up and running).
on YouTube (on my TV, still need to get a piHole up and running)
Unfortunately that won’t help. The YouTube ads are served from the same domains as the videos, so a DNS-based blocker is inherently powerless.
FWIW, Linux + Firefox + uBlock Origin still blocks 100% of YouTube ads for me.
Can confirm, Firefox with uBlock Origin works. The OS doesn’t seem to matter. I use that combination on Linux (Fedora 43), Windows (10), macOS (15) and Android (16), no YouTube ads anywhere.
Just stop using the TV like that. Hook up a small Linux computer via hdmi and use that instead.
Not good for the WAF
I have an older MacBook with standard hdmi, but there are some creators I really like on YouTube and we have an ancient Roku stick that still works. The remote is convenient and I usually go pee during the ads.
Jesus, is that how long youtube ads are these days?
Yeah but the decent thing is that they show you how long before you can skip. So you know how long you have.
That is gold
This is extra funny to me since I just re-watched this episode the other day
They said they’re still adding all of it. They are adding AI, just not talking about it. Which is probably correct 😂
> be me
> installed VS Code to test whether the language server is just unfriendly with Kate
> get bombarded with "try our AI!" type BS
> vomit.jpg
> managed to test it, but the AI turns me off
> immediately uninstalled this piece of glorified webpage from my ThinkPad

It seems I’m having to do more jobs with Kate. (Does the LSP plugin for Kate handle stuff differently from the standard in some known way?)
It doesn’t confuse us… it annoys us with blatantly wrong information, e.g. glue as a pizza ingredient.
That’s what happens when you use 3-year-old models
Are you trying to make us believe that AI doesn’t hallucinate?
It doesn’t; it generates incorrect information. That’s because AI doesn’t think or dream: it’s a generative technology that outputs information based on whatever went in. It can’t hallucinate because it can’t think or feel.
“Hallucinate” is the word that has been assigned to what you described. When you don’t attach additional emotional baggage to the word, hallucinate is a reasonable word to pick to describe when an LLM follows a chain of words that have internal correlation but no basis in external reality.
Trying to isolate out “emotional baggage” is not how language works. A term means something and applies somewhere. Generative models do not have the capacity to hallucinate. If you need to apply a human term to a non-human technology that pretends to be human, you might want to use the term “confabulate” because hallucination is a response to stimulus while confabulation is, in simple terms, bullshitting.
A term means something and applies somewhere.
Words are redefined all the time. Kilo should mean 1000. It was the international standard definition for 150 years. But now with computers it means 1024.
Confabulation would have been a better choice. But people have chosen hallucinate.
Although I agree with you, you chose a poor example.
Kilo doesn’t mean 1024, that’s kibi. Many of us in tech differentiate because it’s important.
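To make the pedantry concrete, here’s the difference in plain Python (the drive size is just an illustrative number):

```python
KILO = 1000  # SI prefix: kilo = 10^3 (kB)
KIBI = 1024  # IEC binary prefix: kibi = 2^10 (KiB)

size_bytes = 500_107_862_016  # a nominal "500 GB" drive, for illustration

print(size_bytes / KILO**3)  # ~500.1 -> "500 GB" as advertised (decimal)
print(size_bytes / KIBI**3)  # ~465.8 -> "465 GiB" as many OSes report it (binary)
```

Same number of bytes, ~7% apparent difference, which is exactly why mixing up the two prefixes matters.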
No, but I was specifically talking about the glue and pizza example
Yeah, I’m not sure what the point of a cheap NPU is.
If you don’t like AI, you don’t want it.
If you do like AI, you want a big GPU or to run it on somebody else’s much bigger hardware via the internet.
A cheap NPU could have some uses. If you have a background process that runs continuously, offloading the work to a low-cost NPU can save you both power and processing. Camera-based authentication: if you get up, it locks; if you sit down, it unlocks. No reason to burn a CPU core or the GPU for that. Security/nanny camera recognition. Driving systems monitoring a driver losing consciousness and pulling over. We can accomplish all of this now with CPUs/GPUs, but purpose-built systems that don’t drain other resources aren’t a bad thing.
Of course, there’s always the downside that they use that chip for Recall. Or malware gets hold of it for Recall-style snooping or ID theft. There’s a whole lot of bad you can do with a low-cost NPU too :)
What people don’t want is blackbox AI agents installed system-wide that use the carrot of “integration and efficiency” to justify bulk data collection, that the end user implicitly agrees to by logging into the OS.
God forbid people want the compute they are paying for to actually do what they want, and not work at cross purposes for the company and its various data sales clients.
Unveiling: the APU!!! (ad processing unit)
Just there to create ads based on your usage.
But hey, now ads load much faster and are more relevant to you, making everything snappier and slicker! Who wouldn’t pay more for such an upgrade???
I think you’re making the mistake of thinking the general population is as informed or cares as much about AI as people on Lemmy.
God forbid people want the compute they are paying for to actually do what they want, and not work at cross purposes for the company and its various data sales clients.
I think that way of thinking is still pretty niche.
Hope it’s becoming more widespread, but in my experience most people don’t actually concern themselves with “my device does some stuff in the background that goes beyond what I want it for” - in their ignorance of technology, they just assume it’s something that’s necessary.
I think where people have problems is mainly at the level of “this device is slower at doing what I want it to do than the older one” (for example, because AI makes it slower), “this device costs more than the other one without doing what I want it to do any better” (for example, they’re unwilling to pay more for the AI functionality), or “this device does what I want it to do worse than before/that one” (for example, AI is forced on users, actually making the experience of using that device worse, such as with Windows 11).
I want to run LLMs locally, or things like TTS or STT, so for me it’s nice, but there’s no real support rn
Most people won’t care nor use it
LLMs are best used when it’s a user choice, not a platform obligation
I guess an NPU is better off as a PCIe peripheral then?
And it could then have its own specialised RAM too. Sorry, I’m not a hardware expert at all
When you’re talking about the PCIe peripheral, you’re talking about a separate dedicated graphics card or something else?
I guess the main point of NPUs are that they are tiny and built in
When you’re talking about the PCIe peripheral, you’re talking about a separate dedicated graphics card or something else?
Yes, similar to what a PCIe Graphics Card does.
A PCIe slot is the slot in a desktop motherboard that lets you fit various cards: networking (Ethernet, Wi-Fi and even RTC specialised stuff), sound cards, graphics cards, SATA/SAS adapters, USB adapters and all other kinds of stuff.
I guess the main point of NPUs are that they are tiny and built in
GPUs are also available built-in. Some of them are even tiny.
Go 11-12 years back in time and you’ll see video processing units embedded into the Motherboard, instead of in the CPU package.
Eventually some people will want more powerful NPUs with better-suited RAM for neural workloads (GPUs have their own type of RAM too), won’t care about the NPU in the CPU package, and will feel like they are uselessly paying for it. Others will not require an NPU at all and will feel like they are uselessly paying for it. So, much better to have NPUs be made separately in different tiers, similar to what is done with GPUs rn.
And even external (PCIe) Graphics Cards can be thin and light instead of being a fat package. It’s usually just the (i) extra I/O ports and (ii) the cooling fins+fans that make them fat.
Doesn’t confuse me, just pisses me off trying to do things I don’t need or want done. Creates problems to find solutions to
Can the NPU at least stand in as a GPU in case you need it?
No, as it doesn’t compute graphical information and is solely for running computations for “AI stuff”.
GPUs aren’t just for graphics. They speed up vector operations, including those used in “AI stuff”. I’d just never heard of NPUs before, so I imagine they may be hardwired for the graph architecture of neural nets instead of general linear algebra, and that’s why they can’t be used as GPUs.
Initially, x86 CPUs didn’t have an FPU. It cost extra and was delivered as a separate chip.
Later, the GPU came along as essentially an overgrown SIMD FPU.
An NPU is a specialized GPU that operates on low-precision floating-point numbers and mostly does matrix-multiply-and-add operations.
There is zero neural processing going on here; that would mean the chip operates using bursts of encoded analog signals within a power budget of about 20W, and would be able to adjust itself on the fly, online, without a few datacenters spending an excessive amount of energy to update the weights of the model.
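To make that concrete, here’s a toy NumPy sketch of the low-precision multiply-and-add an NPU is built around (purely illustrative; the shapes and the scale value are made up, and no real NPU works exactly like this):

```python
import numpy as np

# A common quantized scheme: int8 inputs and weights, int32 accumulation,
# then a rescale back down to int8. This is the whole "neural" workload.
rng = np.random.default_rng(0)
A = rng.integers(-128, 127, size=(4, 8), dtype=np.int8)      # activations
W = rng.integers(-128, 127, size=(8, 3), dtype=np.int8)      # weights
bias = rng.integers(-1000, 1000, size=3, dtype=np.int32)

# Accumulate in int32 so the int8 products don't overflow.
acc = A.astype(np.int32) @ W.astype(np.int32) + bias

# Requantize with a per-tensor scale (the 0.05 here is arbitrary).
out = np.clip(np.round(acc * 0.05), -128, 127).astype(np.int8)
print(out)
```

An NPU is basically this loop cast into silicon: narrow integer multiply-accumulate units and not much else, which is why it’s so power-efficient and so useless for anything else.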
NPUs do those calculations far more efficiently than a GPU, though, is what I was meaning.
Nope. Don’t need it
“Recall was met with serious backlash”. Meanwhile I’m looking for a simple setting regarding the power button on my wife’s phone and stumble upon a setting that is enabled by default that has Gemini scanning the screen and using it for whatever it is that it does, but my wife doesn’t use any AI features on her device. Correct me if I’m wrong, but isn’t this basically the same as Recall? Google was just smart enough to silently roll this out.
Isn’t this only triggered when the user uses Gemini (and Google Assistant before it), e.g. for something like Circle to Search? I’m rather sure this already existed before the AI craze.
That is the assumption, but unless that is explicitly spelled out somewhere, I’m not sure you can trust it.
Yeah, Google assistant was able to read your screen and take screenshots when asked years ago.
I’d much rather have a more powerful generic CPU than a less powerful generic CPU with an added NPU.
There are very few people who would benefit from an added NPU. OK, I hear you say: what about local AI?
Ok, what about it?
Would you trust a commercial local AI tool to not be sharing data?
Would your grandmother be able to install an open source AI tool?
What about having enough RAM for the AI tool to run?
Look at the average computer user, if you are on lemmy, chances are very high that you are far more advanced than the average computer user.
I am talking about those users who don’t run an ad blocker, don’t notice the YT ad skip button, and who in the past would have installed a minimum of five toolbars in IE, yet wouldn’t have noticed the reduced view of the actual page.
These people are closer to the average users than any of us.
Why do they need local AI?
Just offer NPUs as PCIe extension cards. That’s how computers used to be and should be: modular and versatile.
These have already existed for half a decade.
Google Coral is probably the most famous and is mainly suited for small IoT devices, e.g. speeding up image recognition for security cameras. They come in all shapes and sizes though.
M.2 Accelerator A+E key | Coral - https://www.coral.ai/products/m2-accelerator-ae
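For a sense of what driving one looks like: it’s the standard TensorFlow Lite flow with the Edge TPU delegate loaded, roughly like the sketch below (the model filename is a placeholder; assumes the libedgetpu runtime is installed, with the Linux shared-library name shown):

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Hand execution off to the Coral via the Edge TPU delegate.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder: a model compiled for the Edge TPU
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy frame shaped like the model's input (a camera image in practice).
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # this inference runs on the accelerator, not the CPU
print(interpreter.get_tensor(out["index"]).shape)
```

Which is also the catch: you only get anything out of it if you have a model compiled specifically for that chip.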
The fact that I didn’t know about those means that consumers have zero need for them, and building them into consumer hardware is just an attempt to keep the AI bubble afloat.
I considered getting one of these in the past for Frigate, but I ended up getting Reolink cameras with human detection built-in.
Exactly!
I could even see the cards having RAM slots, so you can add dedicated RAM to the NPU and remove the need for sharing RAM with the system
There’s also the fact that many NPUs are pretty much useless unless used with a very specific model built for the hardware, so there’s no real point in having them
My understanding from a very brief skim of what Microsoft was doing with Copilot is that it takes screenshots constantly, runs image recognition on them, and then makes them searchable as text, with the ability to go back and view those screenshots in a timeline. Basically, adding more search without requiring application-level support.
They may also have other things that they want to do, but that was at least one.
EDIT: They specifically called that feature “Recall”, and it was apparently the “flagship” feature of Copilot.
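The basic mechanism is almost embarrassingly simple to sketch. Here’s a toy version in Python with mss and pytesseract (assuming the Tesseract binary is installed; Recall’s actual pipeline is obviously far more elaborate and runs the recognition on the NPU):

```python
import sqlite3, time

import mss                 # cross-platform screen capture
import pytesseract         # OCR wrapper (needs the tesseract binary installed)
from PIL import Image

db = sqlite3.connect("recall_toy.db")
db.execute("CREATE TABLE IF NOT EXISTS shots (ts REAL, text TEXT)")

with mss.mss() as sct:
    raw = sct.grab(sct.monitors[1])                 # primary monitor
    img = Image.frombytes("RGB", raw.size, raw.rgb)
    text = pytesseract.image_to_string(img)         # screenshot -> text
    db.execute("INSERT INTO shots VALUES (?, ?)", (time.time(), text))
    db.commit()

# Later: find "that document with the giraffes"
for ts, snippet in db.execute(
        "SELECT ts, substr(text, 1, 80) FROM shots WHERE text LIKE ?",
        ("%giraffe%",)):
    print(ts, snippet)
```

Run that on a timer and you have the gist of why people found it creepy: everything that ever crosses your screen ends up in a little searchable database.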
Do you mean Copilot, the local indexer and search tool or do you mean Copilot the web based AI chat bot or do you mean Copilot the rebranded Office suite or do you mean … etc.
Seriously, talk about watering down a brand name. Microsoft marketing team are all massive, massive fuck knuckles.
Hey, the last one is great.
Now when I get asked “what do you think about Copilot,” I can just say, “I prefer LibreOffice”
an added NPU
cmiiw but I don’t think NPUs are meant to be used on general-purpose personal computers. A GPU makes more sense.
NPUs are meant for specialised equipment e.g. object detection in a camera (not the personal-use kind)
They are in general purpose PCs though. Intel has them taking up die space in a bunch of their recent core ultra processors.
That’s stupid.
Probably not even general-purpose GPUs, although we sucked it up when RT and Tensor cores were put on our plate whether we liked it or not. Those at least provided something to the consumer, unlike NPUs.
Holy crap that Recall app that “works by taking screenshots” sounds like such a waste of resources. How often would you even need that?
Virtually everything described in this article already exists in some way…
It’s such a stupid approach to the stated problem that I just assumed it was actually meant for something else and the stated problem was there to justify it. I made the decision to never use Win 11 on a personal machine based on this “feature”.
So, it’s not really a problem I’ve run into, but I’ve met a lot of people who have difficulty on Windows understanding where they’ve saved something, but do remember that they’ve worked on or looked at it at some point in the past.
My own suspicion is that part of this problem stems from the fact that back in the day, DOS had a not-incredibly-aimed-at-non-technical-users filesystem layout, and Windows tried to avoid this by hiding it and stacking an increasing number of “virtual” interfaces on top that didn’t just show one the filesystem, whether it be the Start menu or Windows Explorer and file dialogs having a variety of things other than just the filesystem to navigate around. The result is that Microsoft has been banging away for much of the lifetime of Windows trying to add more ways to access files, most of which increase the difficulty of actually understanding what is going on through the extra layers. But regardless of why, some users do have trouble with it.
So if you can just provide a search that can summon up that document where they were working on that had a picture of giraffes by typing “giraffe” into some search field, maybe that’ll do it.
The world is healing
I’m readying for some new bullshit. I just hope it’s not tech related
Does a third world war count as tech related? It certainly uses a lot of tech!
Not the position Dell is taking, but I’ve been skeptical that building AI hardware directly into laptops specifically is a great idea unless people have a very concrete goal, like text-to-speech, and existing models to run on it, probably specialized ones. This is not to diminish AI compute elsewhere.
Several reasons.
- Models for many useful things have been getting larger, and you have a bounded amount of memory in those laptops, which, at the moment, generally can’t be upgraded (though maybe CAMM2 will improve the situation and move things back away from soldered memory). Historically, most users did not upgrade memory in their laptop, even if they could. Just throwing the compute hardware in there in the expectation that models will come is a bet on the size of the models people might want to use not getting a whole lot larger. This is especially true for the next year or two, since we expect high memory prices, and people will probably be priced out of sticking very large amounts of memory in laptops.
- Heat and power. The laptop form factor exists to be portable. Laptops are not great at dissipating heat, and unless they’re plugged into wall power, they have sharp constraints on how much power they can usefully use.
- The parallel compute field is rapidly evolving. People are probably not going to throw out and replace their laptops on a regular basis to keep up with AI stuff (much as laptop vendors might be enthusiastic about this).
I think that a more-likely outcome, if people want local, generalized AI stuff on laptops, is that someone sells an eGPU-like box that plugs into power and into a USB port or via some wireless protocol to the laptop, and the laptop uses it as an AI accelerator. That box can be replaced or upgraded independently of the laptop itself.
When I do generative AI stuff on my laptop, for the applications I use, the bandwidth that I need to the compute box is very low, and the latency requirements are very relaxed. I presently use a Framework Desktop remotely as a compute box, and can happily generate images or text or whatever over the cell network without problems (a rough sketch of that setup follows below). If I really wanted disconnected operation, I’d haul the box along with me.
EDIT: I’d also add that all of this is also true for smartphones, which have the same constraints, and harder limitations on heat, power, and space. You can hook one up to an AI accelerator box via wired or wireless link if you want local compute, but it’s going to be much more difficult to deal with the limitations inherent to the phone form factor and do a lot of compute on the phone itself.
EDIT2: If you use a high-bandwidth link to such a local, external box, bonus: you also potentially get substantially-increased and upgradeable graphical capabilities on the laptop or smartphone if you can use such a box as an eGPU, something where having low-latency compute available is actually quite useful.
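For what it’s worth, the “compute box” pattern above needs nothing exotic on the laptop side. If the box runs an OpenAI-compatible server (llama.cpp’s llama-server is one option), the client is just an HTTP request; the hostname and model name below are placeholders:

```python
import json
import urllib.request

# Ask the remote box, not the laptop, to do the heavy lifting.
req = urllib.request.Request(
    "http://compute-box.local:8080/v1/chat/completions",   # placeholder host
    data=json.dumps({
        "model": "whatever-the-box-serves",                # placeholder name
        "messages": [{"role": "user", "content": "Say hi in five words."}],
        "max_tokens": 32,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```

The payload is tiny text in both directions, which is why even a cell connection is plenty.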
There are a number of NPUs that plug into an m.2 slot. If those aren’t powerful enough, you can just use an eGPU.
I would rather not have to pay for an NPU that I’m probably not going to use.
I think part of the idea is: build it and they will come… If 10% of users have NPUs, then apps will find ‘useful’ ways to use them.
Part of it is actually battery life - if you assume that in the life of the laptop it will be doing AI tasks (unlikely currently) an NPU will be wayyyy more efficient than running it on a CPU, or even a GPU.
Mostly though, it’s because it’s an excuse to charge more for the laptop. If all the high-end players add NPUs, then customers have no choice but to shell out more. Most customers won’t realise that when they use ChatGPT or Copilot on one of these laptops, it’s still not running on their device.
I’m not that concerned with the hardware limitations. Nobody is going to run a full-blown LLM on their laptop; running one on a desktop would already require building a PC with AI in mind. What you’re going to see being used locally are smaller models (something like 7B using INT8 or INT4). Factor in the efficiency of an NPU and you could get by with 16GB of memory (especially if the models are run in INT4) with little extra power draw and heat. The only hardware concern would be the speed of technological advancement in NPUs, but just don’t be an early adopter and you’ll probably be fine.
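For reference, running a model in that class locally is already pretty mundane. A minimal sketch with llama-cpp-python, assuming you’ve downloaded some 7B GGUF quantized to 4 bits (the filename here is a placeholder):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# A 7B model quantized to ~4 bits fits comfortably in a few GB of RAM.
llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder file
            n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "One sentence: what is an NPU?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

That runs on CPU out of the box; NPU offload is exactly the part where the software support is still thin.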
But this is where Dell’s point comes in. Why should the consumer care? What benefits do consumers get by running a model locally? Outside of privacy and security reasons, you’re simply going to get a better result by using one of the online AI services, because you’d be using a proper model instead of the cheap one that runs on limited hardware. And even the privacy- and security-minded can just build their own AI server (maybe not today, but when hardware prices get back to normal) and expose it to their laptop or smartphone. For consumers to desire running a local model (actually locally, and not in a selfhosting kind of way), there would have to be some problem that the local model solves that the over-the-internet solution can’t. So far such a problem doesn’t exist, and there doesn’t seem to be a suitable one on the horizon either.
Dell is keeping its foot in the door by still putting NPUs in its laptops, so if by some miracle a magical problem is found that local AI solves, they’re ready. But they realize that NPUs are not something they can actually use as a selling point, because as it stands, NPUs solve no problems: there’s no benefit to running small models locally.
More to the point, the casual consumer isn’t going to dig into the nitty gritty of running models locally and not a single major player is eager to help them do it (they all want to lock the users into their datacenters and subscription opportunities).
On Dell keeping NPUs in their laptops: they don’t really have much of a choice if they want modern processors; Intel and AMD are still all-in on it.
Setting up a local model was specifically about people who take privacy and security seriously because that often requires sacrificing convenience, which in this case would be having to build a suitable server and learning the necessary know-how of setting up your own local model. Casual consumers don’t really think about privacy so they’re going to go with the most convenient option, which is whatever service the major players will provide.
As for Dell keeping the NPUs, I forgot they’re going to be bundled with the processors.
My general point is that discussing the intricacies of potential local AI model usage is way over the head of the people that would even in theory care about the facile “AI PC” marketing message. Since no one is making it trivial for the casual user to actually do anything with those NPUs, then it’s all a moot point for this sort of marketing. Even if there were an enthusiast market that would use those embedded NPUs without a distinct more capable infrastructure, they wouldn’t be swayed/satisfied with just ‘AI PC’ or ‘Copilot+’, they’d want to know specs rather than a boolean yes/no for ‘AI’.
Phones have already come with AI processors for a long time, specifically for speech recognition and camera features; it’s not advertised because it predates the bubble.
deleted by creator
Where in their comment does it say “exactly zero users”? Oh right, it doesn’t