I guess the Arc A750 in my workstation is imaginary?
Am I living in an alternate timeline? They’ve been making GPUs for quite some time, and the B580 was actually pretty good, incredibly good for the price.
The problem with Intel: they never just keep going. They announce some new GPU/graphics product, and when it falls short they don’t or won’t stick with it. They abandon it and use it as a write-off. They have done this multiple times, and I have no reason to believe they will do anything different. The last time was just a few years ago; when sales and performance lagged, they just quit.

Oh no, Nvidia’s pet is rebelling. Maybe they should be reminded of their current status.
I had to check the date on the article. They’ve been making GPUs for 3 years now, but I guess this announcement, although weird, is a sign that Arc is here to stay, which is good news.
This article was based on what the CEO said at the Second Annual AI Summit, following the news of their new head-of-GPU hire, who says he “will lead GPU engineering with a focus on AI at Intel”. The AI pivot is the actual news.
Oh so they will actually not focus on GPUs as end consumer products for you and me. They’re just like Nvidia and AMD. This news really just shows how cooked gaming is.
Just what every consumer needs. More AI focused chips.
Intel is just trying to cash in on the AI hype to buoy the sinking ship, as far as investors are concerned.
Don’t worry, it’s just a relabeling. The stuff is still the same.
It’s not even a pivot. They’ve been focusing on AI already. I’m sure they want it to seem like a pivot (and build up hype); the times before, apparently, just having the hardware and software wasn’t enough. Nobody cared when the Gaudi cards came out, nobody uses SYCL or oneDNN, etc.
It feels like TechCrunch is letting a drunk AI write all its articles now.
Oh great, some wildly overpriced and underperforming GPUs.
Edit: went looking at Intel’s desktop GPUs and found this gem:
Powerful AI Engines
Unlock new AI experiences with up to 233 TOPS of AI engine performance for content creation, real-time AI chat, editing, and upscaled gaming.
And checked out the performance specs of Intel’s top cards (B580/A770) against a basic 3080 card (no OC/Ti, whatever), and the Intel cards ranked well below the older 3080, and weren’t even in the ballpark against upper-tier 40- and 50-series Nvidia cards. Plus missing features like DLSS, etc.
Good enough for non-FPS dependent gaming? Sure. Can’t beat the price, I was wrong about that. Want to play high-FPS demanding twitch gaming? No.
They won’t be for you.
What the fuck? What kind of idiotic article is that? Did TechCrunch go down the drain too?
Good luck fucking things up like you always do
Aren’t TPUs like dramatically better for any AI workload?
Intel’s Gaudi 3 datacenter accelerator from late 2024 advertises about 1800 TOPS in FP8, at 3.1 TOPS/W. Google’s mid-2025 TPU v7 advertises 4600 TOPS FP8, at 4.7 TOPS/W. Which is a difference, but not that dramatic of one. The reason the gap is so small is that GPUs are basically TPUs already; almost as much die space is allocated to matrix accelerators as to the actual shader units, or so I’ve heard anecdotally.
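Rough math with the figures quoted above (a sketch only; these are advertised peak numbers, and the values below just restate them):

```python
# Quick comparison using the advertised figures quoted above.
# These are marketing numbers, not measured throughput.
gaudi3 = {"tops_fp8": 1800, "tops_per_watt": 3.1}
tpu_v7 = {"tops_fp8": 4600, "tops_per_watt": 4.7}

throughput_ratio = tpu_v7["tops_fp8"] / gaudi3["tops_fp8"]            # ~2.6x
efficiency_ratio = tpu_v7["tops_per_watt"] / gaudi3["tops_per_watt"]  # ~1.5x

# Implied board power from the same figures: TOPS / (TOPS/W) = W
gaudi3_watts = gaudi3["tops_fp8"] / gaudi3["tops_per_watt"]  # ~580 W
tpu_v7_watts = tpu_v7["tops_fp8"] / tpu_v7["tops_per_watt"]  # ~980 W

print(f"throughput: {throughput_ratio:.1f}x, efficiency: {efficiency_ratio:.1f}x")
print(f"implied power: Gaudi 3 ~{gaudi3_watts:.0f} W, TPU v7 ~{tpu_v7_watts:.0f} W")
```

So roughly 2.6x the raw throughput and about 1.5x per watt on paper, which fits the “a difference, but not that dramatic of one” read.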
At scale the power efficiency is probably really important though
Yes, it works out to a ton of power and money, but on the other hand, 2x the computation might only be a few percent better in results. So it’s often a matter of orders of magnitude, because that’s what is needed for a sufficiently noticeable difference in use.
Basing things on theoretical TOPS also isn’t particularly equivalent to performance in actual use; it just gives a very general idea of a perfect workload.
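As a minimal sketch of that point, here’s the usual utilization calculation (achieved throughput over advertised peak); every number below is a made-up placeholder, not a measurement of any real chip:

```python
# Hypothetical example: peak (advertised) vs. achieved throughput.
peak_tops = 1800.0            # advertised perfect-workload number

# Suppose a training job averages 6.0e14 ops/s on the chip (placeholder):
achieved_ops_per_s = 6.0e14
achieved_tops = achieved_ops_per_s / 1e12

utilization = achieved_tops / peak_tops
print(f"achieved: {achieved_tops:.0f} TOPS, utilization: {utilization:.0%}")
# -> achieved: 600 TOPS, utilization: 33%
# Memory bandwidth, interconnect, and kernel overhead usually keep real
# utilization well below the advertised peak.
```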
Been looking at their Arc B50/B60 but still too expensive in Canada
You mean non-shit non-Arcs? They already tried and failed with Battlemage.
Not gonna make a lick of difference without the support to run CUDA.
ZLUDA exists.
Intel GPU support?
ZLUDA previously supported Intel GPUs, but not currently. It is possible to revive the Intel backend. The development team is focusing on high-quality AMD GPU support and welcomes contributions.
Anyways, no actual AI company is going to buy $100M of AI cards just to run all of their software through an unfinished, community-made translation layer, no matter how good it becomes.
oneAPI is decent, but apparently fairly cumbersome to work with, and people prefer to write software in CUDA since it’s the industry standard (and the standard in academia).
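For what it’s worth, at the framework level the portability story isn’t terrible; here’s a rough sketch of device selection in PyTorch, assuming a recent build with Intel’s xpu backend available (the device names are PyTorch’s own). The lock-in described above really shows up once you drop down to hand-written CUDA kernels.

```python
import torch

# Pick whichever accelerator backend is present; "xpu" is PyTorch's
# name for Intel GPUs, "cuda" covers Nvidia (and AMD on ROCm builds).
if torch.cuda.is_available():
    device = torch.device("cuda")
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    device = torch.device("xpu")
else:
    device = torch.device("cpu")

# Framework-level code is the same either way; the pain starts with
# custom CUDA kernels, which is where layers like ZLUDA come in.
x = torch.randn(4096, 4096, device=device)
y = x @ x
print(device, y.shape)
```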
Good. So prices might actually be reasonable.
At least they are admitting the Intel Arc was more of a joke than a graphics card.