The other major issue with regard to pricing is that Intel needs to pay one way or another to get market penetration; if no one buys their cards at all and they don't establish a beachhead, then it's even more wasted money.
As I see it, AMD gets _potentially_ squeezed between Intel and Nvidia. Nvidia's majority market share seems pretty secure for the foreseeable future, and Intel undercutting AMD, plus their connections to prebuilt system manufacturers, would likely grab them a few more nibbles of AMD territory. If Intel releases a competent B770 against AMD products priced a few hundred dollars more, then even if Arc isn't as mature, I'm not sure AMD has solid answers for why someone should buy Radeon.
In my view, AMD's issue is that they don't have any vision for what their GPUs can offer besides a slightly better version of the previous generation. The RTX launch back in 2018 appears to have blindsided them, and years later they're still not giving us any alternative vision for what comes next in graphics to make Radeon desirable, besides catching up to Nvidia (who, I imagine, will have something new to move the goalposts if anyone gets close). And this is an AMD that is currently well resourced from Zen.
I think this is a bad take because it assumes that Nvidia is making rapid price/performance improvements in the consumer space. The RTX 4060 is roughly equivalent to a 2080 (similar performance, RAM, and transistor count). Intel isn't making much margin, but from what I've seen they're probably roughly breaking even, not taking a huge loss.
Also, a ton of the work for Intel is in drivers, which are (as the A770 showed) very improvable after launch. Based on the hardware, it seems very possible that the B580 could get an extra 10% (especially at 1080p), which would bring it clearly above the 4060 Ti in performance.
Hard to say why the density is that different, if those transistor numbers are accurate. A less dense design would allow for higher clocking, and while the clocks are fairly high, they aren't that far out there; still, that's one factor (I'd hope they wouldn't trade half the area for a few extra MHz, when a GPU with twice the transistors will just be better).
It could also be that the transistor counts each company reports aren't comparable, since they may count them differently (but I'm not convinced of this).
Customers pay by the wafer, so cost scales with mm^2; transistor cost then falls out of die area and density (see the sketch below).
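To make that relationship concrete, here's a minimal back-of-the-envelope sketch in Python. Every number in it (wafer price, die area, transistor count, yield) is an illustrative placeholder, not a real figure for any of these chips:

```python
import math

# All constants below are assumed placeholder values for illustration only.
WAFER_COST = 17_000        # $ per 300 mm leading-edge wafer (assumption)
WAFER_DIAMETER = 300       # mm
DIE_AREA = 272             # mm^2 per die (assumption)
TRANSISTORS = 19.6e9       # transistors per die (assumption)
YIELD_FRACTION = 0.85      # fraction of dies that are usable (assumption)

# Rough gross dies per wafer; ignores edge loss and scribe lines.
wafer_area = math.pi * (WAFER_DIAMETER / 2) ** 2
gross_dies = wafer_area / DIE_AREA
good_dies = gross_dies * YIELD_FRACTION

cost_per_die = WAFER_COST / good_dies
cost_per_billion_tr = cost_per_die / (TRANSISTORS / 1e9)

print(f"dies per wafer (gross):  {gross_dies:.0f}")
print(f"cost per good die:       ${cost_per_die:.2f}")
print(f"cost per 1B transistors: ${cost_per_billion_tr:.2f}")
```

The point is just that a die spending twice the area on the same transistor count takes roughly twice the share of the wafer cost, which is why trading lots of area for a few extra MHz would be a bad deal.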
> I am not too impressed with the "chips and cheese ant's view article" as they don't uncover the reason why performance is SO PATHETIC!
GPU performance has always been about drivers. Chips and Cheese is only here to show the uArch behind it. This isn't even new; we should have learned all about it during the 3dfx Voodoo era. And nine years have passed since a (now retired) Intel engineer said they would be competing against Nvidia by 2020, if not 2021. We are now in 2025 and they are not even close. But somehow Raja Koduri was supposed to save them, and now he's gone.
Intel seems to have deep-seated issues with their PR department writing checks their engineers can't cash on time.
Not that Intel engineers are bad - on the contrary. But as you pointed out, for over five years they've been promising they'd be further along than they are now, and even 10+ years ago when I was working on HPC systems, they kept promising things you should build your systems around that would be "in the next gen" but were not, in fact, there.
It seems much like the BioWare Problem(tm), where BioWare got very comfortable promising the moon in 12 months and assuming 6 months of crunch would Magically produce a good outcome, and then discovered that Results May Vary.