Once again, we’re entering the fall and winter rushes of video games, which will provide several months of AAA releases. One of the earlier entries, launching October 5th, will be Ubisoft’s Assassin’s Creed: Odyssey.
Ubisoft recently published their requirements for “Minimum” at 720p, “Recommended” at 1080p, and “Recommended 4K” at, as the name suggests, 4K. Each of these tiers assumes 30 FPS. While 30 FPS is not what a lot of PC gamers would consider recommended, I am glad that Ubisoft qualified what “minimum” and “recommended” actually correspond to. They even publish expected clock rates, which leads to a notable scenario…
Here are the specifications. Be sure to read the analysis after! It should be interesting.
Minimum:
- 64-bit Windows 7 SP1 (or later)
- AMD FX-6300 @ 3.8 GHz or Intel Core i5-2400 @ 3.1 GHz or AMD Ryzen 3 1200
- AMD R9 285 (2GB) or NVIDIA GeForce GTX 660
- 8GB of RAM
- 46GB of available storage
Recommended:
- 64-bit Windows 7 SP1 (or later)
- AMD FX-8350 @ 4.0 GHz or Intel Core i7-3770 @ 3.5 GHz or AMD Ryzen 5 1400
- AMD Radeon R9 290X (4GB) or NVIDIA GeForce GTX 970 (4GB)
- 8GB of RAM
- 46GB of available storage
Recommended 4K:
- Windows 10 64-bit
- AMD Ryzen 1700X @ 3.8 GHz or Intel Core i7-7700 @ 4.2 GHz
- AMD Vega 64 or NVIDIA GeForce GTX 1080 (8GB)
- 16GB of RAM
- 46GB of available storage
As I look through this list, a few details pop out at me:
- AMD Ryzen 1700X requires a lower clock rate than the Core i7-7700 at 4K
  - Seems to suggest that Odyssey will meaningfully use more than eight threads.
  - Makes a strong case for higher core counts in consumer PCs going forward.
- 4K only requires a GTX 1080 (or a Vega 64)
  - Suggests that even a single GTX 1080 Ti can run 4K significantly above 30 FPS maxed.
- 4K recommends 16GB of RAM
  - Seems to suggest that Ubisoft will keep higher level-of-detail (LOD) assets loaded at longer draw distances when the resolution is up to 4K. (I could be wrong, though.)
Obviously, the first point is the most interesting to me. Intel could have increased core counts long before now, albeit at the expense of more SKUs, larger dies, and so forth. If Assassin’s Creed is any indication, we’re beginning to see consumer software get more comfortable with parallel code. That said, I expect that, even if Intel had released bigger SKUs earlier, software would still have lagged until around this time anyway. The point is that AMD has an answer for it now, and, unlike their gamble with Bulldozer, it’s well-timed with software trends.
Of course, AMD probably coaxed that to happen with the Xbox One and PS4.
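As an aside, here is a minimal, purely illustrative host-side sketch (nothing from Ubisoft’s engine; the slice-processing function is hypothetical) of the kind of scaling implied above: query the hardware thread count and split each frame’s work across it, rather than hard-coding four threads. That is what would let an 8-core/16-thread Ryzen 7 1700X at 3.8 GHz keep pace with a 4-core/8-thread Core i7-7700 at 4.2 GHz.

```cuda
// Illustrative only: split per-frame work across all available hardware threads.
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for one slice of frame work (AI, animation, culling, ...).
void process_slice(int slice, int total_slices) {
    std::printf("slice %d of %d done\n", slice, total_slices);
}

int main() {
    // Ask the runtime how many logical cores exist; fall back to 4 if unknown.
    unsigned threads = std::thread::hardware_concurrency();
    if (threads == 0) threads = 4;

    // One worker per logical core, each taking one slice of the frame.
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < threads; ++i)
        workers.emplace_back(process_slice, int(i), int(threads));
    for (auto& w : workers)
        w.join();
    return 0;
}
```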
Assassin's Creed: Odyssey launches on Friday, October 5th. Check out the system specs here.
To me this only says they are recommending an AMD or an Intel processor at each stage, min, recommended, 4K.
Personally I wouldn’t run either of their AMD processors at minimum or recommended. A Ryzen 3 or Ryzen 5 would make more sense at those stages in late 2018.
It’s not a recommendation for a new build. It’s letting you know “If you have this, you should be fine to play our game”
No kidding…
“Of course, AMD probably coaxed that to happen with the Xbox One and PS4.”
With Microsoft and Sony paying AMD for its services, then yes, AMD did a lot of coaxing.
And what about that Chinese console that’s using the AMD semi-custom Zen/Vega APU? That’s probably getting more Rapid Packed Math (16-bit FP) tweaking from games developers also. AMD and the console games makers looking to get a jump on the next generation of console APUs will be wanting to tweak for that Chinese console with its AMD semi-custom APU with Zen/Vega IP, even if the Sony console uses Navi based graphics in its next generation, as Navi will have all of Vega’s IP and then some!
AMD’s Zen/Vega and Zen/Navi will both have Rapid Packed Math and other feature sets in common. Some of those gaming features may be back-ported from Navi games to Vega based games also, if there was not wide adoption of some of Vega’s features before AMD moved on to focusing mostly on Navi. And most likely there is not all that much difference between Vega and Navi other than a process node shrink and some additional new Navi-only feature sets.
So even if Explicit Primitive Shaders are not readily adopted by games developers in time on Vega, those same features will be available in Navi, games developers will have more time to work with coding for Explicit Primitive Shaders, and that work will be back-ported from Navi games to Vega games. AMD has continuously added to its GCN generations’ feature sets over the years, but Vega has probably gotten the most new features, like Rapid Packed Math, Explicit Primitive Shaders, and that HBCC/HBC IP also introduced with Vega.
Nvidia is sure going to be very busy tweaking games for its RTX feature sets while still trying to at least get the non-RTX-enabled gaming titles working with some improvements on its newer RTX-branded line of Turing GPUs. Nvidia does not have much of the console gaming market compared to AMD, so AMD and its console APU clients will be tweaking for more low-power gaming optimizations, like Nvidia has to do for its Tegra/Nintendo Switch gaming units.
16-bit math on console gaming SoC/APUs, and other features like it, help get more gaming done with less graphics resources. And AMD’s console APUs, being x86 based, mean that work will be able to be ported over for desktop gaming usage relatively easily also.
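For readers unfamiliar with the term, “Rapid Packed Math” just means doing two 16-bit operations in each 32-bit lane. A rough, illustrative sketch of that idea using CUDA’s __half2 intrinsics (the NVIDIA-side analogue, not AMD’s API; needs a GPU with native FP16 support, sm_53 or newer):

```cuda
// Illustrative packed 16-bit math: one __half2 holds two FP16 values, and
// __hmul2 multiplies both halves with a single instruction.
#include <cstdio>
#include <cuda_fp16.h>

__global__ void scale_pairs(const __half2* in, __half2* out, __half2 scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hmul2(in[i], scale);  // two half-precision multiplies at once
}

int main() {
    const int n = 1024;  // 1024 pairs = 2048 FP16 values
    __half2 *in, *out;
    cudaMallocManaged(&in,  n * sizeof(__half2));
    cudaMallocManaged(&out, n * sizeof(__half2));
    for (int i = 0; i < n; ++i)
        in[i] = __floats2half2_rn(1.0f, 2.0f);  // pack two floats into one __half2

    scale_pairs<<<(n + 255) / 256, 256>>>(in, out, __floats2half2_rn(0.5f, 0.5f), n);
    cudaDeviceSynchronize();

    // Unpack and print the first pair (expect 0.5 and 1.0).
    std::printf("first pair: %f %f\n", __half2float(out[0].x), __half2float(out[0].y));
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```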
AMD really needs to get some Tensor Cores of its own, because that AI-based supersampling and even AI-based upscaling may just be what Microsoft and Sony will want from any new semi-custom console APUs. Maybe some ray tracing cores also, but that AI/Tensor Core IP is very useful for all sorts of image and other accelerated processing that fits in nicely with console gaming and the limited graphics processing resources on gaming consoles. AI/Tensor Cores for rapid image upscaling, supersampling, and anti-aliasing/other filtering tasks, even without ray tracing, is still valuable IP for gaming/console gaming.
I can see Microsoft maybe wanting AMD to provide Tensor Cores and maybe ray tracing IP on some next generation console APUs, what with all of the work that Microsoft is doing with DX12/DXR and working with Nvidia on that IP also. AMD cannot afford to sit still when it comes to graphics and any new IP to go along with graphics and image processing IP.
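For context on what “Tensor Cores” actually accelerate: each operation is a small fused matrix multiply-accumulate, typically a 16x16x16 tile in FP16, and AI upscaling, supersampling, and denoising are built out of huge numbers of them. A rough sketch of one such tile op using NVIDIA’s WMMA API (shown only to illustrate the concept; AMD exposes no equivalent today, and this needs a Volta/Turing-class GPU, sm_70 or newer):

```cuda
// Illustrative only: one 16x16x16 FP16 matrix multiply-accumulate on a tensor core.
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tile_mma(const half* A, const half* B, float* C) {
    // Fragments describing the 16x16 tiles a tensor core consumes and produces.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);            // start the accumulator at zero
    wmma::load_matrix_sync(a, A, 16);          // load a 16x16 FP16 tile of A
    wmma::load_matrix_sync(b, B, 16);          // load a 16x16 FP16 tile of B
    wmma::mma_sync(acc, a, b, acc);            // the tensor-core op: acc = A*B + acc
    wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
}

int main() {
    half *A, *B;
    float *C;
    cudaMallocManaged(&A, 16 * 16 * sizeof(half));
    cudaMallocManaged(&B, 16 * 16 * sizeof(half));
    cudaMallocManaged(&C, 16 * 16 * sizeof(float));
    for (int i = 0; i < 16 * 16; ++i) {
        A[i] = __float2half(1.0f);
        B[i] = __float2half(1.0f);
    }

    tile_mma<<<1, 32>>>(A, B, C);  // one full warp drives one tensor-core tile
    cudaDeviceSynchronize();
    std::printf("C[0][0] = %.1f (expect 16.0)\n", C[0]);  // 16 ones dot 16 ones

    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}
```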
Thx Scott, I always like your articles because you give more than just what was in the press release. The “analysis” part is why I go to outlets like these.
“AMD Ryzen 1700X requires a lower clock rate than the Core i7-7700 at 4K
Seems to suggest that Odyssey will meaningfully use more than eight threads. Makes a strong case for higher core counts in consumer PCs going forward.”
Maybe this time they are running 5 different protections in parallel to make it even more difficult to pirate the game.
“Maybe this time they are running 5 different protections in parallel to make it even more difficult to pirate the game.”
That would then be the “this game runs so awfully slow and unstable, you do not really want to play it, pirated or not” method of piracy protection. 😉
Those clock speeds are just the turbo clocks for all processors.
The 1700X = 7700 though would seem to indicate thread scaling, or at least that Ryzen scales from 4 cores (Ryzen 5 1400) to 1700X sufficiently to recommend that.
I’m currently playing AC Origins on my 8.5-year-old i7-980 XE @ 4.3 GHz with a 1080 Ti FTW3 at 3440x1440, 100 Hz with G-Sync. Max everything, getting around 50 FPS average, which feels smooth due to the adaptive sync. Fingers crossed this will have the same performance.
“AMD Ryzen 1700X requires a lower clock rate than the Core i7-7700 at 4K”
All they’ve done is quote the rated stock speeds of each processor. Your major takeaway from this whole thing is not substantiated at all.
“AMD Ryzen 1700X @ 3.8 GHz or Intel Core i7-7700 @ 4.2 GHz”
It’s just showing Turbo Boost clocks.
“4K only requires a GTX 1080”
So with a 1080 Ti, 32GB of RAM, but an older i7-4770K CPU, I am struggling to hit 30 FPS in the game’s benchmark with maxed settings.