Is the PlayStation 5 more powerful or the Xbox Series X?

Maddox
Well-known member · Joined Dec 11, 2018 · Messages: 1,222
Google Translate:
INTRO
Sony and Microsoft officially announced the hardware specifications of the PlayStation 5 and Xbox Series X a few weeks ago, and Digital Foundry has already taken a deep technical look at what to expect. Although there are few next-generation games yet and we know little about real-world performance and user experience, the two companies keep sparring in technical debates so dense that only engineers and programmers can follow them, and neither side shies away from pushing out ever deeper technical detail.

As we tracked the news, reading the spec sheets started to feel like studying for a computer science degree, so it seemed better to talk to an engineer and programmer at Crytek, one of the most technically capable studios in the world and the maker of a powerful game engine. So I called Ali Salehi, a rendering engineer at Crytek, and asked him, as an expert, to answer our questions about the consoles and the power of their hardware, and to weigh in on which one is stronger. His answers were convincing, explained simply and clearly, and ran contrary to expectations and to the numbers on paper.

In the following, you will read the conversation between Mohsen Vafnejad and Shayan Ziaei with Ali Salehi about the hardware specifications of the PlayStation 5 and Xbox Series X.

INTERVIEW
[Questions bolded,
answers not]
Vijayato: In short, what is the job of a rendering engineer at a game studio?

Ali Salehi: The technical side of each game's visuals is our responsibility. That means supporting new consoles, optimizing current algorithms, troubleshooting existing ones, and implementing new technologies and features such as ray tracing are all things we do.

What is the significance of teraflops, and does a higher teraflops figure mean a console is stronger?

Teraflops indicates how much work a processor could do if it were in the best, most ideal state possible; the figure describes ideal, theoretical conditions. In practice, however, a graphics card or console is a complex system. Many components must work together, each feeding its output to the next, and if any one of them fails to keep up, the efficiency of the others drops. A good example is the PlayStation 3. Thanks to its SPUs, the PlayStation 3 had far more power on paper than the Xbox 360, but in practice, because of its complex architecture, memory bottlenecks, and other problems, you could never reach its peak efficiency.

There is an image here with the following caption:
[Woes of the PlayStation 3
The PlayStation 3 had a hard time running multi-platform games compared to the Xbox 360. Red Dead Redemption and GTA IV, for example, ran at 720p on Microsoft's console, while the PlayStation 3 produced poorer output, rendering at a lower resolution and upscaling it to 720p. Sony's own studios, however, were able to deliver more detailed games such as The Last of Us and Uncharted 2 and 3, thanks to their greater familiarity with the console and purpose-built software.]

That is why you can't put much value on this figure. Even if every part of the Xbox Series X worked optimally, so that the GPU could run at its own peak, that simply isn't achievable in practice. On top of all this, there is also the software layer. We saw an example on PC with the arrival of Vulkan and DirectX 12: the hardware didn't change, but because the software architecture changed, the hardware could be used more effectively.

The same goes for consoles. Sony runs the PlayStation 5 on its own operating system, while Microsoft has put a customized version of Windows on the Xbox Series X. The two are very different. Because Sony develops the PlayStation 5's software specifically for it, it will definitely give developers far more capability than Microsoft, which ships its console with almost the same DirectX it has on PC.

What has your experience been working with both consoles, and how do you evaluate them?

I can't say anything about my own work right now, but I can quote others who have spoken publicly. Developers say the PlayStation 5 is the easiest console they have ever written code for, and that reaching the console's peak performance is straightforward. In software terms, coding on the PlayStation 5 is extremely simple and offers many features that leave the developer free. All in all, the PlayStation 5 is a better console.

If I understood correctly, teraflops is not a measure of how well the different parts of the GPU are being used? And what do these floating-point operations actually mean? How would you describe them for a user who doesn't follow this kind of information?

The problem lies with whoever turned these figures into public talking points that now need explaining. This kind of technical information doesn't matter to the average user and isn't a yardstick for comparison.

A graphics card has, say, 20 different sections, one of which is the Compute Units, which do the processing. If all the other components support them in the best possible way, there are no restrictions, nothing bottlenecks, and the processor always has the data it needs, then in that state the CUs can perform 12 trillion floating-point operations per second. So in an ideal world where we remove every limiting parameter, that's possible, but the world isn't ideal.

A good example is the Xbox Series X hardware itself. Microsoft has split the RAM in two, the same mistake it made with the Xbox One: one part of the RAM has high bandwidth and one part has low bandwidth. Obviously, coding for this console is going to be a story, because the amount of data we have to fit into the fast RAM is so large that it will become a nuisance, and if we also want to support 4K, that's another story again. So there will always be parts that keep the graphics card from reaching that speed.

You talked about shaders. The PlayStation 5 has 36 CUs, while the Xbox Series X makes 52 CUs available to the developer. What difference does that make?

The main difference is that the PlayStation 5's working frequency is much higher; its CUs run at a higher clock. That's why, despite the difference in CU count, the two don't differ as much as it seems. An IGN reporter used a nice analogy: the Xbox Series X is like a big, tidy 8-cylinder engine, while the PlayStation 5 is like a 6-cylinder engine turbocharged to the limit. Raising the clock on the PlayStation 5 has, as I see it, several benefits: the memory subsystem, the rasterizer, and the other parts of the graphics card whose performance scales with this clock all speed up. So the rest of the PlayStation 5's GPU runs faster than the Series X's. That makes it easier for the console to operate close to its announced peak of 10.28 teraflops, whereas the Series X, because the rest of its sections are slower, will generally run well below its teraflops figure and reach 12 teraflops only under highly ideal conditions.
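The headline figures quoted here can be reproduced with the standard peak-throughput formula for RDNA-class GPUs: shader cores × 2 FP32 operations per cycle (a fused multiply-add) × clock speed. A minimal sketch using the publicly announced CU counts and clocks:

```python
# Theoretical peak FP32 throughput, as the marketing numbers are derived.
# TFLOPS = shader cores x 2 ops/cycle (fused multiply-add) x clock in GHz / 1000
def peak_tflops(compute_units, clock_ghz, shaders_per_cu=64):
    return compute_units * shaders_per_cu * 2 * clock_ghz / 1000

print(round(peak_tflops(36, 2.23), 2))   # PlayStation 5 (up to 2.23 GHz): 10.28
print(round(peak_tflops(52, 1.825), 2))  # Xbox Series X (fixed 1.825 GHz): 12.15
```

The interview's point is that these are ceilings: the formula assumes every shader core retires a fused multiply-add on every cycle, which the rest of the system rarely allows.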

Won't this difference show its impact at the end of the generation, once developers become more familiar with the Series X hardware?

No, because the PlayStation software stack generally leaves developers' hands freer, and usually by the end of each generation Sony's consoles produce the more exotic output. For example, early in the seventh generation even multi-platform games performed poorly on the PlayStation 3, but late in the generation Uncharted 3 and The Last of Us came out of that same console. I think the next generation will be the same. Towards the end, though, at higher native resolutions the PlayStation 5 will probably be in a little trouble, and the Series X will be able to display more pixels.

Sony says that the smaller the number of CUs, the easier it is to keep them all busy with work. What does Sony's claim mean?

Using all the CUs at the same time has a cost, because when CUs want to run code they need resources allocated from the graphics card. If the graphics card can't distribute enough resources across all the CUs to execute a piece of code, it is forced to idle a number of them. Instead of 52 CUs it might effectively use only 20, because it doesn't have enough resources for all of them at all times.

Aware of this, Sony chose a faster GPU rather than a larger one, which also reduces production costs. A more striking example of the same thing is in CPUs. AMD has had high-core-count CPUs for a long time, and even Intel's bigger-core-count CPUs didn't necessarily work better: 4-core or 8-core CPUs with much higher per-core performance usually did better in games. Clearly a 16- or 32-core CPU has a higher theoretical throughput, but a CPU with fewer, faster cores will often do a better job, because it's hard for game programmers to use all the cores all the time, so they prefer fewer but faster cores.
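The resource argument above can be sketched numerically. The limits in this sketch are made-up round numbers, not real hardware specifications; the point is only that whichever per-wave resource runs out first caps how much work a CU can keep in flight:

```python
# Illustrative sketch: per-CU resources cap how many wavefronts stay resident.
# All limits here are invented round numbers, not real hardware figures.
def resident_waves(regs_per_wave, lds_bytes_per_wave,
                   reg_file=1024, lds_bytes=64 * 1024, max_waves=20):
    by_regs = reg_file // regs_per_wave
    by_lds = lds_bytes // lds_bytes_per_wave if lds_bytes_per_wave else max_waves
    return min(max_waves, by_regs, by_lds)

print(resident_waves(32, 0))        # light shader: hits the cap of 20 waves
print(resident_waves(128, 16384))   # heavy shader: only 4 waves fit per CU
```

When too few waves are resident, a CU has nothing to switch to while waiting on memory, which is exactly why "more CUs" doesn't automatically mean more work gets done.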

Could the hyperthreading feature included in the Series X become Microsoft's winning card in the last years of the generation?

Technically, hyperthreading has been around on desktop computers since the Pentium 4: the CPU presents each physical core as two virtual cores, and in most cases this helps performance. The Series X lets the developer decide whether to use these virtual cores or to switch them off in exchange for a higher CPU clock. And it's exactly as you say: it's hard to make that call correctly right at the start, so serious use of hyperthreading will probably only arrive towards the end of the generation.

So it's not a clear-cut decision from the outset?

Right; making that call requires very precise profiling of your code, so it's not something anyone can answer right now. There are far more pressing concerns in getting to know the console hardware, and developers will likely start the generation working with fewer cores at a higher clock, and only move to this feature later.

The Xbox Series X's Compute Units contain 3328 shaders. What is a shader, what does it do, and what does having 3328 of them mean?

When developers want to execute code on the GPU, it runs in units called wavefronts. Multiply the number of CUs by the shaders in each and you get figures like these. But it doesn't really matter, and everything I said about the CUs applies here too: there are limitations that keep all of these shaders from being usable at once, and having many of them isn't necessarily good on its own.
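The headline shader counts are just the CU counts multiplied by the 64 ALU lanes in each RDNA CU, and work reaches those lanes in fixed-size wavefronts, so thread counts get rounded up to whole waves. A quick sketch:

```python
def shader_count(cus, lanes_per_cu=64):
    # "3328 shaders" is simply 52 CUs x 64 ALU lanes each.
    return cus * lanes_per_cu

def wavefronts_needed(threads, wave_size=64):
    # Work is dispatched in whole wavefronts, so thread counts round up.
    return -(-threads // wave_size)  # ceiling division

print(shader_count(52))          # Xbox Series X: 3328
print(shader_count(36))          # PlayStation 5: 2304
print(wavefronts_needed(1000))   # 1000 threads still occupy 16 full waves
```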

There is another important point to consider, as Mark Cerny put it: CUs, and even teraflops, are not necessarily equivalent between architectures. That is, teraflops figures from different architectures can't be compared to decide which is actually superior. So you can't trust these numbers at all or use them as your yardstick.

Comparisons between Android devices and Apple iPhones have recently been drawn into the console debate, with internet discussions pointing out that Android phones often have more RAM but poorer performance than iPhones. Is that comparison applicable to the consoles?

The software stack sitting on top of the hardware determines everything; as the software is optimized, performance improves accordingly. Sony has always had the better software, because Microsoft has to use Windows. So yes, the comparison holds.

Microsoft insists that the Xbox Series X frequency stays constant under all circumstances, while Sony takes a different approach: it gives the console a fixed amount of power and lets the frequencies vary depending on the situation. What's the difference between the two, and which will be better for the developer?

What Sony has done is much more logical, because at any moment it can decide, depending on the processing load, whether the graphics card or the CPU gets the higher frequency. On a loading screen, for example, only the CPU is needed and the GPU sits idle; in a close-up of a character's face, the GPU is heavily involved and the CPU plays a very small role. On the other hand, it's good that the Series X has solid cooling and guarantees a constant frequency with no throttling, but the practical freedom Sony has provided is a really big deal.

Doesn't this freedom of action make things harder for the developer?

Not really, because we're already doing similar things on the engine side. The dynamic resolution scaling technique some games use, for example, already measures various metrics to judge how much pressure the graphics card is under and how far the resolution must drop to hold the frame rate. So hooking these systems together is very easy.
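A dynamic-resolution controller of the kind described is essentially a feedback loop on measured GPU frame time. A minimal sketch, with made-up budget and step values:

```python
# Minimal dynamic-resolution-scaling loop: nudge the render scale so the
# measured GPU time stays inside the frame budget. Numbers are illustrative.
def adjust_scale(scale, gpu_ms, budget_ms=16.6, step=0.05, lo=0.5, hi=1.0):
    if gpu_ms > budget_ms:           # over budget: render fewer pixels
        scale -= step
    elif gpu_ms < budget_ms * 0.9:   # comfortably under: claw quality back
        scale += step
    return min(hi, max(lo, scale))

scale = 1.0
for frame_ms in (20.0, 19.0, 18.0, 14.0):
    scale = adjust_scale(scale, frame_ms)
print(round(scale, 2))  # settles at 0.9 after three slow frames and one fast
```

A real engine would smooth the timing samples and change resolution in coarser steps, but the control loop is the same idea.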

What is the use of the Geometry Engine that Sony is talking about?

I don't think it will be very useful for the first year or two. We'll probably see its impact more in the second wave of games released on the console; it won't see much use at the start.

The Series X chipset is built on a 7-nanometer process, and we're told that the smaller the number, the better the chipset. Can you unpack what the nanometers and transistors mean?

A smaller process node means more transistors, and being able to control their heat in larger numbers and smaller spaces. It's the production technology that is better; the nanometer figure itself isn't very important. What matters is the number of transistors.

The PlayStation 5's SSD reaches 8-9 GB/s at peak. Now that we have that kind of speed, what else will change, apart from faster loading and more detail?

The first thing is the removal of loading screens from games. Microsoft has also shown a suspend-and-resume feature that keeps several games running at once and switches between them in less than 5-6 seconds; on the PlayStation that time should be close to zero. Another thing to expect is a change in game menus: when there's no loading, there's no waiting, and you no longer need to watch a video while the game loads in the background.

Where do PC games fit in the meantime? After all, having an SSD is a choice for a PC user.

Consoles have always set the standard. Game developers build their games around the consoles, so someone with a PC and no SSD will either have to put up with long loads or think about buying one.

As a programmer and developer, which do you consider the better console to work and code on: the PlayStation 5 or the Xbox Series X?

Definitely PlayStation 5.

As a programmer, I would say the PlayStation 5 is much better, and I don't think you could find a programmer who would rank the Xbox Series X above the PlayStation 5. For the Xbox, they have to put DirectX and Windows, which are many years old, on the console, whereas with every new console Sony builds, it rebuilds the software and APIs however it wants. That's in their interest and in ours, because there's only one way to do everything, and it's the best way possible.


 
As a programmer, I would say the PlayStation 5 is much better, and I don't think you could find a programmer who would rank the Xbox Series X above the PlayStation 5. For the Xbox, they have to put DirectX and Windows, which are many years old, on the console, whereas with every new console Sony builds, it rebuilds the software and APIs however it wants. That's in their interest and in ours, because there's only one way to do everything, and it's the best way possible.

I don't get this. Do they mean that DirectX and Windows are older on Xbox, but not on PlayStation? How would that make a difference? Would it make a real difference? How so? Is putting on old versions just a way to save on development costs, so as to make more profit?
 
