It's a shame that this didn't end up going anywhere. When Qualcomm was doing their press stuff prior to the Snapdragon X launch, they said that they'd be putting equal effort into supporting both Windows and Linux. If anyone here is running Linux on a Snapdragon X laptop, I'd be curious to know what the experience is like today.

I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips. They have similar performance/battery life, and run cool (so you can use the laptop on your lap or in bed without it overheating), but have full Linux support today and you don't have to deal with x86 emulation. If anyone needs a thin & light Linux laptop today, they're probably your best option. Personally, I get 10-14 hours of real usage (not manufacturer "offline video playback with the brightness turned all the way down" numbers) on my Vivobook S14 running Fedora KDE. In the future, it'll be interesting to see how Intel's upcoming Panther Lake chips compare to Snapdragon X2.



The iGPU in Panther Lake has me pretty excited about Intel for the first time in a long time. Lunar Lake proved they’re still relevant; Panther Lake will show whether they can actually compete.


Lunar Lake had integrated RAM, right? Given certain market realities right now, it could be a real boon for them if they keep that design.


I'm typing this from a Snapdragon X Elite HP. It's fine really, but my use is fairly basic: I only use it to watch movies, read, browse, draft Word and Excel documents, and do some light coding.

No gaming - and I came in knowing full well that a lot of mainstream programs don't play well with Snapdragon.

What has amazed me the most is the battery life, and the apparent absence of the lag or micro-stuttering that you get on some other laptops.

So, in all, fine for light use. For anything serious, use a desktop.


Running Linux?


WSL or Docker is the only way to run Linux on these, it seems :(

Windows 11 with all the bloatware removed isn't a terrible experience though.


Yeah, w11 unfortunately, with bloatware removed fortunately.


What is it about it that makes it unsuited for anything serious? The way you describe it, the only thing it's not suited for is gaming, which is not generally regarded as serious.

Many people including myself do serious work on a macbook, which is also ARM. What's different about this qualcomm laptop that makes it inappropriate?


> What's different about this qualcomm laptop that makes it inappropriate?

Everything else around the CPU. Apple systems are entirely co-designed (the CPU to work with the rest of the components, and everything together to work with macOS).

While I'd love to see MacBook-level quality from other brands (looking at you, Lenovo), tight hardware+software co-design (and co-development) yields much better results.


Microsoft is pushing hard for UEFI + ACPI support on PC ARM boards. I believe the Snapdragon X2 is supposed to support it.

That still leaves the usual UEFI + ACPI quirks Linux has had to deal with for aeons, but it is much more manageable than (non-firmware) DeviceTree.

The dream of course would be an opensource HAL (which UEFI and ACPI effectively are). I remember that certain Asus laptops had a microstutter due to a non-timed loop doing an insane amount of polling. Someone debugged it with reverse engineering, posted it on GitHub, and it still took Asus more than a year to respond to it and fix it, only after it blew up on social media (including here). With an opensource HAL, the community could have introduced a fix in the HAL overnight.


I get the lacking Linux support, but what about Windows? Most serious work happens on Windows and their SoCs seem to have much better support there.

Apple's hardware+software design combo is nice for things like power efficiency, but in my experience so far, a MacBook and a similarly priced Windows laptop seem to be about equal in terms of weird OS bugs and actually getting work done.


I’m getting about 2 hours with current macOS on an ARM MacBook Pro. I used to get 4-5 last year.

This is out of the box. With obvious fixes like ripping busted background services out, it gets more than a day. There’s no way normal users are going to fire up console.app and start copy pasting “nuke random apple service” commands from “is this a virus?” forums into their terminal.

Apple needs to fix their QA. I’ve never seen power management this bad under Linux.

It’s roughly on par with noughties Windows laptops loaded with corporate crapware.


That's unfortunate; perhaps your particular MacBook is having a hardware problem?

As a point of comparison, I daily two ARM Macs (work M4 14 + personal M3 14), and I get far better battery life than that (at least 8 hours of "normal" active use on both). Also, anecdotally, the legion of engineers at my office with Macs are not seeing battery life issues either.

That said, I have yet to encounter anyone who is in love with macOS Tahoe and its version of Liquid Glass.


The current issue is that iOS 26.1’s wallpaper renderer crashes in a tight loop if the default wallpaper isn’t installed, and it isn’t under Xcode.

I have macOS crash reporting turned off, but crashreport pins the CPU for a few minutes on each iOS wallpaper renderer crash. I always have the iOS simulator open, so two hours of battery, max.

I killed crashreport and it spun the CPU on some other thing.

In macOS 25, there’s no throttle for mds (Spotlight), and running builds at a normal developer pace produces about 10x more indexing churn than the Apple silicon can handle.


On my iPhone, even though I'm not on the latest "upgrade" (I made sure to avoid the Liquid Glass crap), the widgets just refuse to update most of the time. I have to tap them to get an update, which completely defeats the purpose of having widgets in the first place. I am tempted to do a full reinstall from scratch, but I think I'll just wait and bite the bullet for some Android in the near future. Apple software just isn't reliable at all, and it makes the expensive hardware largely pointless.


I run an old T480 with FreeBSD and get about 17 hours of battery out of it. Sure, it’s a bit thicker but gets the job done as a daily driver.


There is literally no way. Spill the beans!


Sorry, thought I had posted, but didn't get through. It's a T480 with the 72Wh and the 24Wh battery running on FreeBSD. Screen has also been replaced with a low power usage screen which helps a lot in saving battery while still giving good brightness.

Most of the time I am running StumpWM with Emacs on one workspace and Nyxt in another. So just browsing and coding mostly.

OpenBSD gets close, but FreeBSD has a slight edge battery-wise. To be fair, that is on an old CPU that still has homogeneous cores; more modern CPUs would probably benefit from a scheduler that is aware of heterogeneous cores.


Probably has the extra big battery. Thinkpads have options for different sized batteries.


Or they just got one of the 'good' models and tuned Linux a bit. I have a couple of Lenovos and it's hit or miss, but my 'good' machine has an AMD chip which, after a bit of tuning, idles with the screen on at 2-3W, and with light editing/browsing/etc. draws about 5W. With the 72Wh battery that is >14h, maybe over 20 if I were just reading documentation. Of course it's only 4-5 hours if I'm running a lot of heavy compiles/VMs, unless I throttle them, in which case it's easily over 8h.

One of my 'bad' machines is more like 10-100W and I'm lucky to get two hours.

Smaller efficient CPU + low power sleep + not a lot of background activity + big battery = very long run times.
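
As a rough illustration of that arithmetic (capacity divided by draw), here is a small sketch that asks Linux itself. It assumes a battery named BAT0 that exposes energy_now/power_now in µWh/µW under /sys/class/power_supply; not every firmware does (some report charge_now/current_now instead), so treat the paths as illustrative:

    /*
     * battery_eta.c -- estimate remaining runtime from Linux sysfs, i.e.
     * the same "72 Wh / 5 W > 14 h" arithmetic done by the machine itself.
     * Hypothetical paths: assumes a battery named BAT0 that reports
     * energy_now (microwatt-hours) and power_now (microwatts).
     */
    #include <stdio.h>

    static double read_sysfs(const char *path)
    {
        FILE *f = fopen(path, "r");
        double v = 0.0;
        if (f) {
            if (fscanf(f, "%lf", &v) != 1)
                v = 0.0;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        double energy_uwh = read_sysfs("/sys/class/power_supply/BAT0/energy_now");
        double power_uw   = read_sysfs("/sys/class/power_supply/BAT0/power_now");

        if (energy_uwh <= 0.0 || power_uw <= 0.0) {
            fprintf(stderr, "battery readings unavailable on this machine\n");
            return 1;
        }

        /* e.g. 72 Wh remaining at a 5 W draw -> ~14.4 h */
        printf("%.1f Wh left, drawing %.1f W -> ~%.1f h remaining\n",
               energy_uwh / 1e6, power_uw / 1e6, energy_uwh / power_uw);
        return 0;
    }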


!!! I can get my laptop to 7.5W under web browsing with powertop tuning, but not 5. What did you do?


72Wh + 24Wh batteries (one swappable, one internal) and running FreeBSD Current.


For this to happen, we would need to see a second company that controls both the hardware and the software, and that's not realistic economically. You can't just jump into that space.


You could argue that is exactly what Tuxedo is doing. In this case, they could not provide the end-user experience they wanted with this hardware so they moved on.

System76 may be an even better example as they now control their software stack more deeply (COSMIC).


when I say "control the software" what i mean is we need another company that can say "hey we are moving to architecture X because we think it's better" and within a year most developers rewrite their apps for the new arch - because it's worth it for them

there needs to be a huge healthy ecosystem/economic incentive.

it's all about the software for end users. I don't care what brand it is or OS and how much it costs. I want to have the most polished software and I want to have it on release day.

Right now, it's Apple.

Microsoft tries to do this but is held back by the need for backward compatibility (enterprise adoption), and Google cannot do this because of Android fragmentation. I don't think anyone is even close to trying this with Linux.


Open Source has a massive advantage here.

Almost everything on regular Fedora works on Asahi Fedora out of the box on Apple Silicon.

You can get a full Ubuntu distribution for RISC-V with tens of thousands of packages working today.

Many Linux users would have little trouble changing architectures. For Linux, the issue is booting and drivers.

What you say is true for proprietary software of course. But there is FEX to run x86 software on ARM and Felix86 to run it on RISC-V. These work like Rosetta. Many Windows games run this way for example.

The majority of Android apps ship as Dalvik bytecode and should not care about the arch. Anything using native code is going to require porting though. That includes many games I imagine.


We are both right in different scopes, but the context of the thread is the cancellation of an ARM notebook.


Microsoft with their Surface line? They don't control every part of the hardware, but neither did Apple control even the majority before the M series.


Forget equal effort: Start off with hardware docs.


Equal effort is far more likely from Qualcomm than hardware docs. They don't even freely share docs with partners, and many important things are restricted even from their own engineers. I've seen military contractors less paranoid than QCOM.


I'd have to say that full hardware documentation, even under NDA, is a prerequisite to claiming equal effort. The expectation on a desktop platform (that is, explicitly not mobile, like phones or tablets) is that development is mostly open for those who want it, and Qualcomm's business is sort of fundamentally counter to that. So either they're going to have to change those expectations (which I would prefer not to happen), provide more to manufacturers, or expect that their market performance will be poor.


If they don't provide hardware documentation for Windows either (a desktop platform), how can it be a prerequisite for equal effort?


Qualcomm could've become "the Intel of the ARM PC" if they wanted to, but I suspect they see no problem with (and perhaps have a vested interest in) proprietary closed systems given how they've been doing with their smartphone SoCs.

Unfortunately, even Intel is moving in that direction whenever they're trying to be "legacy free", but I wonder if that's also because they're trying to emulate the success of smartphone SoC vendors.


I don't know if the prospect of being the "Intel of ARM" is very appealing when you can manufacture high-margin smartphone SoCs instead. The addressable market doesn't seem to be very large; any potential competition is stifled by licensing on both Microsoft's and Softbank's side.

The legend of Windows on ARM is decades old, and people have been seriously trying to make it happen for at least the past two decades. They're all bled dry. Apple is the only one who can turn a profit, courtesy of their sweetheart deal with Masayoshi Son.


Well that would have an obvious solution. Go make RISC-V CPUs for phones etc. until you get good enough at it to be competitive in laptops, at which point Microsoft gets interested in supporting you and you get to be the Intel of RISC-V without dealing with Softbank.


The extent to which PCs are open is a historical accident that most OEMs would rather not repeat, as you can see everywhere from embedded all the way to cloud systems.

If anything, Linux-powered devices are a good example of how all of them end up with OEM-name Linux, with minimal contributions to upstream.

If everyone were to leave Windows in droves, expect regular people to be getting Dell and HP Linux at the local PC store, with the same limitations when going outside their distros: binary blobs and pre-installed stuff.


OEMs don't care about that. It's Qualcomm in particular that sucks. If you buy a Linux PC from System76 it comes with their own flavor of Linux but it's basically Ubuntu and there is nothing stopping you from putting any other version you want on it. The ones from Dell just use common distributions.

Meanwhile Linux is getting a huge popularity boost right now from all the PCs that don't officially support Windows 11 and run Linux fine, and those are distribution-agnostic too because they didn't come with it to begin with.


I would not call 4% market share huge.

Usually what is stopping us are the drivers that don't work in other distro kernels, or small utilities that might not have been provided with source.


> I would not call huge the 4% market share.

4% was last year, it was 5% by this summer (a significant YoY increase and about what macOS had in 2010) and the Windows 10 end of support was only last month so the numbers from that aren't even in yet.

> Usually what is stopping us are the drivers that don't work in other distro kernels, or small utilities that might not have have been provided with source.

A lot of these machines are pure Intel or AMD hardware, or 95% and then have a Realtek network controller etc., and all the drivers are in the kernel tree. Sometimes the laptops that didn't come with Linux to begin with need a blob WiFi driver but plenty of them don't and many of the ones that do will have an M.2 slot and you can install a different one. It's not at all difficult to find one with entirely open source drivers and there is no apparent reason for that to get worse if Linux becomes more popular.


Better do the math: that means 15 years to reach where macOS is nowadays, which is still largely irrelevant outside tier 1 economies, and that assumes nothing else changes in the computing landscape.

I was around when everyone was supposed to switch in droves to Linux back in the Windows XP days, or was it Vista, maybe Windows 7, or Windows 8, eventually 8.1, I guess Windows 10 was the one, or Windows 10 S, nah really Windows RT, actually it was Windows 11, or maybe....

I understand; I used to have M$ in my email signature back in the 1990s, surely to be found in some USENET or mailing list archive, yet we need to face the reality that without Windows, Valve would not have a business.


> Better do the math, which means 15 years to reach where macOS is nowadays

macOS nowadays is closing in on 20%. And you can only buy macOS on premium-priced hardware and by now Linux supports more games than it does. The thing holding either of them back has always been third party software compatibility, which as the web has eroded native apps has been less of a problem, which is why both macOS and Linux have been growing at the expense of Windows.

And these things have tipping points. Can your company ignore Linux when it has 0.5% market share? Sure. Can you ignore it when it has 5% market share? There is less of a case for that, so more things support it, which allows it to get even more market share, which causes even more things to support it. It's non-linear. The market share of macOS would already be significantly higher than it is if a new Mac laptop didn't start at a thousand bucks and charge $200 extra to add 8GB of RAM. Linux isn't going to have that problem.

Now, is it going to jump from 5% to 50% in three days? Of course not. But it's probably going to be more tomorrow than it was yesterday for the foreseeable future.

> we need to face the reality without Windows, Valve would not have a business.

Valve makes money from selling games and Steam. If Linux had 70% desktop market share and Windows had 5%, what would change about how they make money?


I mean, part of that is the difference between how easy it is to build a platform in Linux vs how hard it is to get into the tree. This is actually, in my mind, a major change in the Linux development process.

Nobody expected Intel to provide employees to write support for 80386 pagetables, or Philips to write and maintain support for the I2C bus. The PC keyboard driver was not sponsored and supported by IBM. Getting the code into Linux was really easy (and it shows in a lot of the older code; Linux kernel quality standards have been rising over time), because everyone was mostly cooperating on a cool open-source project.

But at some point, this became apparently unsustainable, and the expectation is now that AMD will maintain their GPU drivers, and Qualcomm (or some other company with substantial resources) will contribute code and employees to deal with Adreno GPUs. This led to a shift in reviewer attitudes: constant back-and-forth about code or design quality is typical on the mailing lists now.

This means contributing code to the kernel is a massive chore, which any person with interest in actually making things work should prefer to avoid. What's left is language lawyers, evangelists and people who get paid to sit straight and treat it as a 9-5 job.


The Asahi and pmOS folks have been quite successful in upstreaming drivers to the kernel (even for non-trivial devices like GPUs) as enthusiast contributors with no real company backing. The whole effort to include Rust in the Linux kernel is largely about making it even easier to write future drivers.


Agreed, and I'm fairly impressed by the GPU effort. That said, it did take a very long time, even with the demonstrably extreme amount of excitement from the Linux community (Linus himself was thrilled to use a MacBook). What do you do for parts that are useful but don't get people this excited?

What really burned me on this kind of stuff was the disappearance of Xeon Phi drivers from the kernel. Intel backed it out after they discontinued the product line, and the kernel people gladly went with it ("who'll maintain this?"). Intel pulled a beautiful piece of process lawyership on it: apparently they could back it out without difficulty, because the product was never released! (Never mind it has been sold, retired and circulated in public.)


> What really burned me on this kind of stuff was the disappearance of Xeon Phi drivers from the kernel

If you depend on that hardware, you can get it to be supported again. It just doesn't seem to be all that popular.


Note that the Rust effort is mostly sponsored by Google and Microsoft, thus the 9-5 example of the OP.


Correct me if I’m wrong but I’m pretty sure the Asahi GPU driver has not been upstreamed.


This is just part of the bureaucratisation of everything. The bureaucracy always tries to extend its power and find ways to self-justify its existence, hoarding resources to extend its control and bring ever more people into the fold. It's an intrinsically parasitic process that ends up killing the host in the long term.

Which is why most communist-like endeavors end up in failure. Without the necessary pruning that comes with competition, you end up in a situation where it's more profitable to gain the power to control the resources and take a fee for each interaction than to actually do anything worthwhile to earn "rights" to resource allocation.


I was incredibly excited when they announced the chip alongside all kinds of promises regarding Linux support, so I pre-ordered a laptop with the intention of installing Linux later on. When reports came out that single core performance could not even match an old iPhone, alongside WSL troubles and disappointing battery life, I sent it back on arrival.

Instead I paid the premium for a nicely specced MacBook Pro, which is honestly everything I wanted, save for Linux support. At least it's proper Unix, so I don't notice much difference in my terminal.


> I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips.

Depends why the Snapdragon chips were relevant in the first place! I got an ARM laptop for work so that I can locally build things for ARM that we want to be able to deploy to ARM servers.


Surprising. Cross compilation too annoying to set up? No CI pipelines for things you're actually deploying?

(I'm keen about ARM and RISC-V systems, but I can never actually justify them given the spotty Linux situation and no actual use case)


Cross compilation is a pain to set up, especially if you're relying on system libraries for anything. Even dynamically linking against glibc is a pain when cross compiling.
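
To make the glibc point concrete, here is a minimal sketch assuming Debian/Ubuntu's gcc-aarch64-linux-gnu cross toolchain is installed; the file name and library examples are illustrative only:

    /*
     * hello_arm.c -- trivial program that still links glibc dynamically.
     *
     * Illustrative cross-compile, assuming the Debian/Ubuntu package
     * gcc-aarch64-linux-gnu is installed:
     *
     *     aarch64-linux-gnu-gcc -o hello_arm hello_arm.c
     *
     * This much works because the cross toolchain ships its own aarch64
     * glibc. The pain starts as soon as you add -lz, -lssl, -lsqlite3,
     * etc.: those have to come from an aarch64 sysroot, not from the
     * x86-64 libraries installed on the build host, so you end up
     * maintaining a parallel set of target libraries and pointing the
     * compiler and pkg-config at it (e.g. --sysroot=/path/to/arm64-rootfs
     * and PKG_CONFIG_SYSROOT_DIR), which is exactly the setup cost
     * described above.
     */
    #include <stdio.h>
    #include <gnu/libc-version.h>   /* glibc-specific header, to make the dependency explicit */

    int main(void)
    {
        printf("built against glibc %s\n", gnu_get_libc_version());
        return 0;
    }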


Linux on ARM is probably the most popular computing device platform in the world.


Which doesn't mean that it's easy to use an ARM device in the way I'd want to (i.e. as a trouble-free laptop or desktop with complete upstream kernel support).


We do have ARM CI pipelines now, but I can only imagine what a nightmare they would have been to set up without any ability to locally debug bits that were broken for architectural reasons.


I guess you must be doing trickier things than I ever have. I've found docker's emulation via qemu pretty reliable, and I'd be pretty surprised if there was a corner case that wouldn't show on it but would show on a native system.


Not really trickier, but a different stack - we’re a .NET stack with a pile of linters, analyzers, tests, etc. No emulation; everything runs natively on both x86-64 and ARM64. (But prior to actually running/debugging it on ARM64, we had various hang-ups.)

Native is also much faster than qemu emulation - I have a personal (non-.NET) project where I moved the CI from docker/qemu for x86+arm builds to separate x86+arm runners, and it cut the runtime from 10 minutes in total to 2 minutes per runner.


It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.

Outside the embedded space, cross-compilation really is a fool's errand: either your software is not portable (which means it's not future-proof), or you are targeting an architecture that is not commercially viable.


> It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.

This is what we largely do - my entire team other than me is on x86, but setting up the ARM pipelines (on GitHub Actions runners) would have been a real pain without being able to debug issues locally.


Do the Lunar Lake chips have the same incredible standby battery times as the Snapdragon X's? That's where the latter really shines in my opinion.


I have an AMD laptop from a couple of generations back that can 'standby' for months... it's called S4 hibernate. At the same time it's set up for S3, and it can sit in S3 for a few days at least and recover in less time than it takes to open the screen. The idea that you need instant wakeup when the screen has been closed for days is sort of a niche case; even Apple's machines hibernate if you leave the lid closed for too long.

That isn't to say that modern standby/s2-idle isn't super useful, because it is, but more for actual use cases where the machine can basically go to sleep with the screen on displaying something the user is interacting with.


Roughly the same on my Intel Lenovo. It’s a great little machine. And Linux runs nicely.


Yeah, Lunar Lake landed a hit on ARM, but Panther Lake should be an even stronger one.


The better efficiency of x86 mobile CPUs does negate much of the advantage of ARM laptops. It's just not worth the trouble of going through a major software transition.

One thing that I find suspicious is the current large delta in single-thread scores between ARM and x86. Real-world performance does not suggest that big of a difference in actual use: the benchmarks suggest a 25% performance delta, but in actual use the delta seems to be less than 10%. Of course, Apple Silicon has the efficiency crown very much locked down.

Since they have become a marketing target, the benchmarks have become much less useful.



