Apple moving to their own chips

I expect they might get the A14 to 1440p60 natively, but driving 4k/5k/6k displays? I sincerely doubt that’s happening at anything more than slideshow rates. It’ll almost certainly fall to eGPUs in order to have real gaming power, and even over TB3 it isn’t really more than “adequate”. That could improve with USB4, but that draft isn’t even finalized yet, let alone released. IMO Apple should have waited to go ARM after the release of USB4 just to lock in true performance gains.

‘Retina’ is just Apple’s marketing term for full resolution with the SIZE of the UI doubled. It’s not upscaling/pixel doubling. It’s full resolution without a tiny UI.

The chips that go into Macs aren’t going to be the same as the ones that go into iPhones. They’ve been very clear on that.

3 Likes

If it were “just full resolution without a tiny UI”, the performance impacts would be far greater. I guarantee you, it isn’t native resolution. The end result is scaled up to native, but the origins are not native. Otherwise “Retina 2880p” would be undriveable on most of the 5k iMacs.

We’ll see what the results are. I’m not exactly getting my hopes up for an ultra powerful gaming rig here. Apple’s put up impressive fronts, but so far has relied on gimmicks to get there. The proof will be in the pudding. If it tastes great, I’ll eat it. We shall see.

1 Like

This is more what I was referring to.

You haven’t owned a 5k iMac, have you?
‘Retina’ is absolutely full resolution with the UI size doubled. You can read the developer guidelines.

    https://developer.apple.com/design/human-interface-guidelines/ios/icons-and-images/image-size-and-resolution/

You’re in for a surprise.

https://twitter.com/yiningkarlli/status/1279101037693919232?s=21

Indeed, the iMac Pro that is sitting on my desk at work (sadly unused until who knows when because of WFH, thanks covid) is undeniably rendering actual 5120 x 2880. If you get SwitchResX you can even force it to run at 5120 x 2880 @ 1x. It’s pretty unusable that way due to UI elements being tiny, but every pixel is perfectly 1:1.

1 Like

And what settings are you using to get to 60 FPS, because you aren’t maxing them out, that’s for sure. If even a 1080 Ti has to work to maintain 60 FPS, I sincerely doubt AMD’s middle tier is going to fare anywhere near as well.

What gamers are looking for is power, not compromise. What kind of workload are you putting on the machine?

As I said, I’ll believe it when I see it. I trust people from within companies like Disney very little these days, even if they’re in the production department. They aren’t exactly known for letting employees speak about anything under NDA, even as an aside.

This amounts to nothing more than potential hearsay without anything substantial to back it up. How far down did you have to reach to dig that one up?

Oh, and you shouldn’t need SwitchResX to run 5k native. Hold option while you open the Displays prefpane and it should give you all scaled resolutions, including 5k (yes, native is considered a scaled resolution, go figure).
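
FWIW, if you want to see exactly what the display advertises without any third-party tool, a quick CoreGraphics sketch (nothing exotic, just the standard mode-listing calls; the option key only makes it include the HiDPI variants) will print every mode with both its point size and its real framebuffer size:

```swift
import CoreGraphics

// Sketch: list every mode the main display advertises, including the
// HiDPI ("Retina") ones, showing point size vs. actual pixel size.
let opts = [kCGDisplayShowDuplicateLowResolutionModes: kCFBooleanTrue] as CFDictionary
if let modes = CGDisplayCopyAllDisplayModes(CGMainDisplayID(), opts) as? [CGDisplayMode] {
    for mode in modes {
        // width/height are in points; pixelWidth/pixelHeight are the backing pixels.
        print("\(mode.width)x\(mode.height) pt -> \(mode.pixelWidth)x\(mode.pixelHeight) px @ \(Int(mode.refreshRate)) Hz")
    }
}
```

On a 5k panel that should show both the 2560x1440-point HiDPI mode backed by 5120x2880 pixels and the straight 5120x2880 @ 1x mode.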

That has to do with icons and images, not the actual content scaling.

Read the section on ‘point sizes’ for a hint on how it works.

It’s really simple: ‘Retina’ is just a marketing term for high resolution. The entire reason Apple went with 5k for the iMacs is so they could display pixel perfect 4K video with a UI around it for editing.

There is no upscaling of content (if the application is retina aware/has high resolution content). I know because I have a retina iMac as well as a retina MacBook Pro. (On some of the notebooks and iPhones there is downscaling of content from an even higher resolution).
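
To put numbers on “full resolution with the UI size doubled”, here’s a minimal AppKit sketch (just the standard NSScreen properties) of how points relate to backing pixels on a Retina display:

```swift
import AppKit

// Minimal sketch of points vs. pixels on a Retina display. In the default
// mode a 5k iMac reports 2560x1440 points with backingScaleFactor 2.0,
// i.e. a 5120x2880-pixel backing store.
if let screen = NSScreen.main {
    let points = screen.frame.size           // UI coordinate space, in points
    let scale  = screen.backingScaleFactor   // 2.0 on Retina displays
    let pixels = CGSize(width: points.width * scale,
                        height: points.height * scale)
    print("points: \(points), scale: \(scale), backing pixels: \(pixels)")
}
```

Retina-aware apps draw into that full-size backing store; only the coordinate system is expressed in points.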

In WoW, for instance, you can select 100% render scale at 5k and you get 5k; at 50% you get 1440p, etc.
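
The render-scale slider is just straight multiplication of the render target size, roughly like this (panel resolution assumed to be 5120x2880 here):

```swift
// Back-of-the-envelope render-scale math: the output stays at panel
// resolution, only the 3D render target is resized.
let panel = (width: 5120, height: 2880)
for scale in [1.00, 0.75, 0.50] {
    let w = Int(Double(panel.width) * scale)
    let h = Int(Double(panel.height) * scale)
    print("\(Int(scale * 100))% render scale -> \(w)x\(h)")
}
// 100% -> 5120x2880, 75% -> 3840x2160, 50% -> 2560x1440
```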

This explains it better than I can.

    https://www.anandtech.com/show/6023/the-nextgen-macbook-pro-with-retina-display-review/6
1 Like

Well duh, if you select 100% render scale and you’re running at 5k native you’re going to get actual 5k. It isn’t going to run all that well at that render scale and resolution, but yes, it’s native. In other news, water is still wet.

Again, it is still scaling. Whether it’s interpolation, pixel doubling, or point mapping via floats, it’s still scaling. There’s no way the GPUs in Apple’s machines can drive super high resolution displays at any decent framerate otherwise. Even a full-on 1080 Ti like I have in my hackintosh isn’t doing native 5k at 60 FPS in most games. Hell, even the 2080 Ti can’t do it yet, and it’s currently king of the hill and isn’t thermally constrained like Apple’s choice of GPUs (which are also underclocked).

Retina is a scaling gimmick designed to give “close to but not quite 4k/5k appearances” as its means of driving those large displays. It essentially cheats in the same vein as checkerboarding to display a 4k image on a game console. It’s close, but isn’t true native and never will be.

In a game you’re going to find only one backing scale factor, since it can’t realistically apply backing factors to thousands or millions of polygons/triangles. That backing factor will be the UI scaler as seen in WoW and other games.

I, and nearly every other gamer out there, am interested in one thing: native non-scaled gaming performance. Nothing Apple has produced in the last eight years can provide that. It’s highly unlikely their ARM SoCs can provide that either, at least not in the first generation of ARM Macs and certainly not if Apple doesn’t give them enough room for good heat dissipation. They may have hired a fair number of GPU devs and engineers over the years, but physics and thermodynamics are still a thing.

When they can prove they can drive ultra high resolution displays at native resolution at a minimum of the display’s refresh rate (minimum being 60 Hz in most cases), then they’ll have at least a shot at being a true winner. But let’s not forget that there are now gaming monitors that run 120, 144, 165, and even 240 Hz. You think Apple’s SoCs are going to reach those framerates at UHD or better? Didn’t think so. Not even the king of desktop GPUs can do that, which is why most of those displays don’t top 1440p for the >60 Hz refresh rates. They aren’t going to sell if nothing can drive them at native rates now, are they?

HDMI 2.1 is arriving this year. Great for home theater, but only a half-win for the PC gaming crowd since super high refresh rates are still beyond any GPU in the 4k+ range. VRR will be the saving grace there, and I suspect Apple is going to lean heavily on that HDMI 2.1 feature. I know Sony and MS most assuredly will be as well (I’m still laughing at them saying they’ll have 8k60 gaming on those consoles…ever).

If Apple has an ace up its sleeve, I guess we’ll see it when their ARM lineup launches. But Retina can only solve so much of the problem, and it can’t solve the native resolution problem since it’s purely a scaling technology.

Gamers are tired of compromising with Apple. They want true gaming performance that isn’t trickery. That means native resolutions at good refresh rates.

It’s really not. And you don’t seem to understand it. It IS 4K/5K.

Yeah, no. Those GPUs cannot drive native 4k/5k at 60 FPS. That’s a fact. The only way around that is scaling. That’s what Apple’s Retina tech is for. The screenshots come out as native resolution, but the renders do not. If it were actually 4k/5k native there would be no point/float scaling required. That’s what you’re not getting. The final displayed render takes up all pixels, but the originating resolution most assuredly is not 4k/5k native.

In order to reach really playable framerates on modern Macs, you must drop the render scale down. Why? Because those GPUs can’t drive the games at high framerates natively. That’s why. This really isn’t rocket surgery here. Apple’s using a gimmick to maintain higher framerates at high resolutions. It does work, and I daresay it makes Windows’ HiDPI modes look like a poor man’s solution at that. But it is not native 4k/5k.

FYI you two are arguing about different things

2 Likes

The iMac Pro mentioned is a dev machine, so it was ordered with the base spec GPU (which is a workstation GPU anyway). It’s passable in games, but it’s not really the point. It drives normal desktop apps at 5k60 just fine though.

My 6700k + 5700 XT hack tower handles real 5k alright though… I don’t have a 5k display hooked up to it (only 2560x1440), but if I crank WoW’s render scale to 200% with all settings except AA maxed, it holds 60 FPS running around Stormwind just fine.

Granted, the only real Macs with that kind of power right now are 2019 Mac Pros with a Pro W5700X, aftermarket Radeon VII, or aftermarket 5700 XT, but it’s not impossible for a Mac to drive 5k60 at decent settings.

You’re implying they can’t drive the desktop at 5k/60fps?

Of course they can, and they do.

Those are mostly 2D so that’s expected. 2D is easy to push on most GPUs. It’s the high polygon count 3D that doesn’t pass muster there and requires lowering settings or using a “Retina” resolution.

That’s basically equivalent to DSR or SSAA, but not quite equivalent to native (w/o AA). I’m looking forward to what HDMI 2.1 GPUs bring to the table, especially the VRR portion of the spec, as that will eliminate a lot of input lag and tearing associated with variable vsync as it currently exists.
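
For illustration, a toy sketch of what “render at 200%, display at 100%” (i.e. SSAA/DSR-style supersampling) boils down to: render at twice the resolution in each dimension, then average each 2x2 block into one output pixel. This is a hypothetical helper, not code from any real engine:

```swift
// Toy box-filter downsample: roughly what "render at 200%, display at 100%"
// amounts to. `src` is a supersampled single-channel buffer with even
// width and height; each 2x2 block is averaged into one output pixel.
func downsample2x(_ src: [[Float]]) -> [[Float]] {
    let h = src.count / 2
    let w = src[0].count / 2
    var dst = [[Float]](repeating: [Float](repeating: 0, count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            dst[y][x] = (src[2 * y][2 * x]     + src[2 * y][2 * x + 1] +
                         src[2 * y + 1][2 * x] + src[2 * y + 1][2 * x + 1]) / 4
        }
    }
    return dst
}
```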

The dual Pro Vega II cards rock. Kinda telling though that it takes two of them to beat a single Titan GPU. I really wish Apple hadn’t blocked nVidia from releasing new drivers. I’d love to run OS X w/ the upcoming Ampere GPUs.

I’m referring to actual 3D workloads/games. They can’t. Not at native refresh rates. Desktop is easy. That’s mostly 2D. Technically the GPUs with at least 8 GB VRAM in modern Macs can drive an 8k display, though only realistically 2D. Not even a 2080 Ti is going to give you passable 3D on such a display, even in NVLink mode.

For what it’s worth, as much as I like nVidia, I’m not impressed there really either since even their top end GPUs can’t drive a 5k60 setup at native refresh w/ max settings. It comes close with two using NVLink, but that’s kind of cheating since you can’t use that mode in OS X (and sadly can’t use that GPU at all - thanks Apple).

When Apple’s SoC can drive UHD+ at 60 FPS w/ max settings on games, I’ll be impressed enough to plunk down for the machine, potentially even if it’s still all proprietary. But I don’t see that happening for a few years. Thankfully my hackintosh will last me several years, so by the time I’m in the market for a new system, Apple might have what I’m looking for, assuming developers other than iOS devs are making games for the system at that point. Though I have to admit that being able to run iOS games without having to have one of those god awful walled in iDevices would be nice regardless.

BTW, that Disney guy? The workload they use is compute based. Apple’s ARM SoCs are fairly good at OpenCL. But lacking CUDA will keep a lot of pros away from Apple’s ARM machines. A ton of renderers use CUDA workloads and nVidia is still king of the hill on the GPU front, including workstation GPUs.

If Apple came out with an ARM Mac w/ a good nVidia GPU in it, I’d be sold instantly.

Even that’s false. I can run WoW at 60fps on the 570 in the iMac at 5k (if I turn AO off and settings down).

That’s changing though. OTOY, Redshift, etc. have moved or are moving to Metal. The one major thing left is Blender. I really hope Apple throws a few engineers at that. It’s basically CUDA vs Metal now, which is a heck of a lot better than CUDA only… Competition is good.

Based on this slide from WWDC, Apple is at minimum patching Blender to run on Apple Silicon. There’s a fair chance they’ll add Metal support to the Cycles renderer. Based on what I’ve read from the Blender dev forums, the UI+viewport will probably be OpenGL for a while longer — the devs are planning at least a Vulkan port, but apparently that’s at least a couple of years down the road.

You just proved my point. They can’t run with high/max settings in games at native resolution. I’d be ecstatic if they could. If Apple can reach that lofty milestone, I’ll be all in.

It’s actually been CUDA vs. OpenCL. Apple did everything it could to kill OpenCL, despite using AMD GPUs that were often better with OpenCL than all but the top end of nVidia’s lineup with CUDA. AMD excels with OpenCL. The real question here is whether Apple can scale ARM to workstation levels of power. So far they haven’t. The potential is there, ARM being a RISC architecture, but so far ARM scaling has been limited to weird setups like the one in Japan where they have clusters under a master controller in a supercomputer.

The other worry for pros is being locked into Apple’s proprietary walled garden, combined with how Apple treated pros over the last few years, especially with the 2019 Mac Pro debacle (it’s a single-CPU machine, meaning maxing out your DIMM slots = 800 MHz RAM speed vs. the native 2933 MHz; dual CPUs are currently required to run fully slotted machines at native rates with the Xeon Scalable architecture).

Neither Octane nor Redshift supports OpenCL. It really is going to be CUDA vs Metal.