You know, there’s a reason the term bleeding edge was coined: it was meant to describe a product that targets early adopters and could potentially go on to disrupt a market. It also implies that the product is probably a bit different from the mainstream. Morgan Stanley appears to have completely missed the point with NVIDIA (NASDAQ:NVDA)’s bleeding-edge RTX series, treating it with a tone-deaf rigor and an apparent lack of understanding of the underlying tech that is almost impressive.
They reached a “disappointed” conclusion based on the conventional performance of an unconventional product, which wouldn’t in itself be so bad if it weren’t for the fact that two-thirds of the RTX’s value proposition, including features that enhance conventional performance, isn’t even available yet. But then again, these are the same peeps that gave AMD (NASDAQ:AMD) a price target of $11 before drastically revising their estimates – so maybe, by their standards, it’s not that bad an analysis.
Morgan Stanley’s downgrade is misleading at best and factually inaccurate at worst, since most of the NVIDIA RTX series’ final performance and features have not yet been unlocked
Analysts at Morgan Stanley appear to have access to a crystal ball, because while the rest of us are waiting for NVIDIA to get its act together and give us the promised titles with RTX and DLSS support (so we can judge whether said features are worth the money being asked), they have simply consulted this coveted sphere, formed their conclusions already and deemed the lineup unworthy of the market. It’s only a pity the crystal ball didn’t help them with forecasting AMD.
The mistake that MS made with AMD was due to a lack of technical understanding of the subtleties at play in the PC-triumvirate industry – a mistake they have now repeated with NVIDIA. Just as we were talking about EPYC a year or two before the first analyst ever heard of it, and pointed out that it would be utterly disruptive (while mainstream analysts were contemplating whether AMD was overpriced at $6), we would like to tell you that the data needed to definitively call Turing a great success or a great disappointment simply does NOT exist at this point in time.
We would also advise you to run for the hills whenever any more of these “analysts” present a so-called expert opinion. (Ed: Play nice, Usman.)
So let me start by laying out the problem statement. NVIDIA’s Turing GPUs are not meant to replace Pascal (the company has no reason to do this since no competition exists for it, and you don’t need to win the performance crown if you already own it) and are instead supposed to sit on top of the existing lineup at a higher MSRP. The reason NVIDIA thinks it can get away with this much more expensive lineup of cards is that they support two new engine types that older graphics cards did not.
In fact, ever since we jumped from sprite-based graphics to 3D rasterized moving pictures, there hasn’t been a bigger architectural feature upgrade than Turing. While Pascal, like every GPU before it, primarily contains shader engines, Turing adds two new engine types – raytracing (RT) cores and AI (Tensor) cores – which could have absolutely monumental implications for the PC gaming industry. Or they could not. But they probably will, as far as we can tell.
Major unavailable feature right now: RTX
It seems MS didn’t stop to wonder why the NVIDIA RTX series was named “RTX”, because if they had, they might have wondered whether it’s a good idea to pass judgment on a lineup whose namesake feature is, right now, disabled. For those who don’t know: ever wondered why games never look as good as Pixar films from, say, the last decade? The reason has much to do with the fact that those films are raytraced while modern games are rasterized. For the average layman, the difference is the same as hand-drawing a picture versus printing it. A really skilled artist could probably do a fairly good job of recreating it (and that’s what good rasterization aims for), but an actual printer will always be mathematically perfect.
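If you want a feel for what that difference means computationally, here is a deliberately toy sketch in Python – a single sphere, hit-or-miss only, and nothing at all like the actual RTX hardware pipeline – of what a raytracer fundamentally does: fire at least one ray per pixel and ask the scene what it hits. A rasterizer works the other way around, looping over the geometry, projecting it onto the screen and filling in the covered pixels.

```python
# Toy illustration only (not how RTX hardware works): a raytracer asks, for
# every pixel, "what does a ray through this pixel hit?" Single sphere,
# hit/miss only -- no lighting, shadows, reflections or shading.
WIDTH, HEIGHT = 64, 32
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0

def ray_hits_sphere(origin, direction):
    # Standard ray/sphere quadratic: a hit exists if the discriminant is >= 0.
    ox = origin[0] - SPHERE_CENTER[0]
    oy = origin[1] - SPHERE_CENTER[1]
    oz = origin[2] - SPHERE_CENTER[2]
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    return b * b - 4.0 * a * c >= 0.0

def raytrace_frame():
    # One ray per pixel; real renderers cast many more per pixel for shadows,
    # reflections and soft lighting, so the cost scales brutally with resolution.
    rows = []
    for y in range(HEIGHT):
        row = ""
        for x in range(WIDTH):
            u = (x + 0.5) / WIDTH * 2.0 - 1.0            # pixel -> camera plane
            v = 1.0 - (y + 0.5) / HEIGHT * 2.0
            direction = (u, v * HEIGHT / WIDTH, -1.0)    # simple pinhole camera
            row += "#" if ray_hits_sphere((0.0, 0.0, 0.0), direction) else "."
        rows.append(row)
    return rows

if __name__ == "__main__":
    print("\n".join(raytrace_frame()))
    # A rasterizer would instead loop over triangles, project each one onto the
    # screen and fill the covered pixels: per-object work, not per-pixel rays.
```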
To emphasize this fact a bit more, I have attached some renders I did almost seven years back:
The raytraced version of the 3D model is picture-perfect, while the rasterized one looks just like any other game. The raytraced side took 2 hours 47 minutes to render in 2011, while the GPU side (a then-top-of-the-line NVIDIA GTX 580) took 27 seconds to render the rasterized scene. This is exactly why rasterization has been preferred in gaming – you want to achieve a high frames-per-second mark, not seconds per frame. All of this stands to change with what NVIDIA is claiming: the company says it can raytrace games in real time, which would require at least 30 frames per second. It goes without saying that this is an absolutely huge deal; assuming NVIDIA can even partially meet these claims, it will change gaming in a big way. This is one reason why we absolutely have to wait and see how RTX turns out before calling the cards “disappointing”.
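To put some rough numbers on just how bold “real time” is, here is the back-of-the-envelope arithmetic using the figures above – bearing in mind these come from a 2011 CPU renderer and a GTX 580, so treat the result as an order-of-magnitude illustration rather than a statement about Turing:

```python
# Illustrative arithmetic only, not a benchmark of the RTX cards.
raytrace_seconds_per_frame = 2 * 3600 + 47 * 60   # 2 h 47 min = 10,020 s
raster_seconds_per_frame = 27                     # GTX 580, same scene, 2011
realtime_budget_seconds = 1 / 30                  # 30 fps ~= 33 ms per frame

print(f"Raytrace vs. raster gap:    {raytrace_seconds_per_frame / raster_seconds_per_frame:,.0f}x")
print(f"Raytrace vs. 30 fps budget: {raytrace_seconds_per_frame / realtime_budget_seconds:,.0f}x")
# Roughly 371x slower than the 2011 raster path and about 300,000x short of a
# real-time budget -- which is why dedicated RT hardware is such a big claim.
```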
To put this into perspective, this is like declaring the first iPhone unimpressive compared to the wide variety of keyboard-equipped handsets on the market and calling it a disappointment. The first iPhone changed the market, and the third owned it – which is exactly what NVIDIA is trying to achieve here.
For this to happen, game developers need to incorporate the tech into their games, and once that happens – assuming RTX and DLSS perform as advertised (that is the real question you should be asking, by the way) – it will be here to stay. No matter which way you slice it, Turing can only be seen as a good thing, with NVIDIA trying to entrench itself (permanently) in this post-RTX, post-AI GPU market. GPUs that only have shader engines (like the keyboard-based mobiles of olde) will simply stop being relevant or comparable if the company succeeds in what it’s trying to do. Considering the fact that they have a monopoly on the gaming market, this is by no stretch of the imagination an unachievable goal.
Major unavailable feature right now: AI (Tensor)/DLSS
Now let’s talk about those AI cores. NVIDIA’s Turing GPU also contains tensor cores, which are designed to further the other side of graphics. You have quality, which RTX will target, and you have performance, which is where those tensor cores come in. See, tensor cores are really useful things. Ever since someone figured out you can train deep neural networks to enhance image resolution CSI-style (enhance, enhance, enhance!), it was only a matter of time before the same approach was applied to a GPU to lower its workload and achieve, almost magically, higher-than-peak theoretical performance.
The idea is to run two instances of a game on supercomputers at NVIDIA HQ: one at a lower resolution like 1080p and the other at 4K or beyond. A deep neural network is then trained to “learn” how the 1080p graphics map onto 4K. This information can then be downloaded by an NVIDIA RTX owner and used to upscale a game rendered natively at 1080p to 4K almost perfectly. Unlike all other upscaling features out there right now, this would be a true upscale – virtually indistinguishable from a game running natively at 4K.
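NVIDIA hasn’t published how DLSS is actually built, so take the following as a heavily simplified sketch of the general idea only: a small PyTorch super-resolution network trained on (low-res, high-res) pairs, where the layer sizes, the loss and the random stand-in data are all placeholders of my own.

```python
# Conceptual sketch of DLSS-style training (NOT NVIDIA's actual network or
# data): learn a low-res -> high-res mapping offline, then ship only the
# trained weights so the GPU can render low and infer high.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Tiny 2x super-resolution CNN (placeholder for whatever NVIDIA trains)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 2x2 block of output pixels per input pixel
            nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
        )

    def forward(self, low_res):
        return self.body(low_res)

model = ToyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Offline "supercomputer" phase: in reality the targets would be genuinely
# higher-detail renders of the same frames; random crops stand in for them here.
for step in range(20):
    low_res = torch.rand(4, 3, 96, 128)                         # fake low-res crops
    target = F.interpolate(low_res, scale_factor=2,
                           mode="bilinear", align_corners=False)  # fake high-res targets
    loss = F.l1_loss(model(low_res), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# "Shipped" phase on the gamer's PC: render at half resolution, run one forward
# pass per frame, display the upscaled result.
with torch.no_grad():
    frame_540p = torch.rand(1, 3, 540, 960)
    frame_1080p = model(frame_540p)                             # -> (1, 3, 1080, 1920)
```

The real thing obviously involves far bigger networks and real game frames, but the workflow – train offline, ship the weights, run a single inference pass per frame on the tensor cores – is the part that matters here.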
Just like RTX, this feature has not yet been activated in the wild in any gaming title; only some demos exist, and they are pretty meaningless on their own. This is supposed to allow NVIDIA RTX owners to achieve almost 2x the performance that would otherwise be conventionally possible. It goes without saying that this is just a promise from NVIDIA at this point – but it’s a particularly good one, and until they deliver on it (or fail to), it is not advisable to reach any conclusions about the RTX cards. The hardware to achieve this exists inside every RTX GPU, and we are quickly approaching the time when NVIDIA will be able to increase gaming performance significantly over the air! You might not be able to download GPUs over the internet, but this is, almost certainly, the next best thing.
Morgan Stanley’s analysts have also compared RTX 2080 shader performance (without DLSS) to the 1080 Ti and bemoaned the lack of a differential like the one seen during the 980 Ti to 1080 jump. Of course, they completely ignored the fact that 980 Ti to 1080 was a process node jump (28nm to 16nm FinFET) as well as an architectural upgrade, while 1080 Ti to 2080 is mostly just an architectural upgrade (12nm FFN is essentially an enhanced version of 16nm FF). Even then, had NVIDIA not wanted to disrupt the market with something new and unknown, it could have covered the die in traditional shaders and achieved a similar generational jump.
Of course, accelerating gaming performance isn’t the only thing you can use these tensor cores for. Students and researchers all over the world should be able to use RTX cards to accelerate neural-net inference and training far beyond what Pascal could manage. From their perspective, the RTX series is downright cheap.
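For those wondering what that looks like in practice, here is a minimal sketch of the standard mixed-precision training recipe (PyTorch’s automatic mixed precision, nothing RTX-specific), which is how tensor cores typically get put to work: the matrix math runs in FP16 on the tensor cores while a gradient scaler keeps tiny gradients from underflowing. The model and data are placeholders.

```python
# Minimal mixed-precision training sketch; model and data are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    inputs = torch.randn(64, 1024, device=device)            # placeholder batch
    targets = torch.randint(0, 10, (64,), device=device)
    optimizer.zero_grad()
    # autocast routes eligible ops (matmuls, convolutions) to FP16 -- the format
    # the tensor cores accelerate -- while keeping sensitive ops in FP32.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()   # scale the loss so tiny FP16 grads survive
    scaler.step(optimizer)          # unscale, then apply the optimizer step
    scaler.update()
```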
The real caveats: since MS didn’t do a very good job of it, here are some actual potential downsides for NVIDIA
The fact remains that this is a bleeding-edge product right now and, like the first iPhone, it will be far from perfect. In fact, just as we think it’s sheer idiocy to downgrade NVIDIA without looking at how DLSS and RTX perform in the real world, we also think it’s not advisable for any value-oriented consumer to buy these cards until all the testing is done. Early adopters want to buy the best of tech and take a certain amount of risk by doing so (a risk which, in our judgment and considering NVIDIA’s absolute hold over the gaming market, is lower than you would think), and that will be no different for the RTX series GPUs.
Most buyers who don’t spend big bucks upgrading to the top-of-the-line card every year would do best to wait and see how DLSS performs before making the jump – and only make it if it’s worth it to them. Alternatively, they can go for Pascal-based cards, which are much cheaper now and offer insane value. This is also why I believe the downgrade is very reactionary and not rational at all. What would a gamer who cannot afford RTX turn to? AMD? Nope – most of them would turn to Pascal. The fact that the only real substitute for Turing is also from NVIDIA (Pascal) should tell you all you need to know about its fundamentals. AMD is only competitive at a very specific budget price point (where its RX 580 cards sit), to the point where its presence is mostly negligible for future buying decisions.
The biggest caveat I can see for NVIDIA as a stock is that the wind has probably been taken out of its sails after the cryptocurrency mining bubble burst. While this clearly didn’t affect NVIDIA as much as AMD (Vega cards were god-like at mining), it almost certainly will impact revenue, and it is uncertain whether Jensen can keep handing out record quarters like candy. A lot of the future rides on the RTX series doing well in the market and walling itself off with an RTX- and AI-powered ecosystem – and we can only tell whether that is likely to happen once we take a look at how the first DLSS- and RTX-based titles fare (unfortunately, we don’t have access to a crystal ball, so you will just have to wait until a good number of these titles come out). Peace.