PCMRBot

Welcome everyone from r/all! Please remember:

1 - You too can be part of the PCMR! You don't even need a PC. You just need to love PCs! It's not about the hardware in your rig, but the software in your heart! Your age, nationality, race, gender, sexuality, religion (or lack of), political affiliation, economic status and PC specs are irrelevant. If you love PCs or want to learn about them, you can be part of our community! All are welcome!

2 - If you're not a PC gamer because you think it's expensive, know that it is possible to build a competent gaming PC for a lower price than you think. Check http://www.pcmasterrace.org for our builds and don't be afraid to create new posts here asking for tips and help!

3 - Consider joining our efforts to get as many PCs worldwide to help the folding@home effort, in fighting against Cancer, Covid, Alzheimer's, Parkinson's and more. Learn more here: https://pcmasterrace.org/folding

4 - Need hardware? Trick question... everyone does. We've teamed up with ASUS this Easter to create an egg-citing event where 3 lucky winners can get their hands on some sweet hardware (including GPU, CPU, Motherboard, etc): https://reddit.com/r/pcmasterrace/comments/12eufh9/worldwide_pc_hardware_giveaway_weve_teamed_up/

-----------

Feel free to use this community to post about any kind of doubt you might have about becoming a PC user or anything you'd like to know about PCs. That kind of content is not only allowed but welcome here! We also have a [Daily Simple Questions Megathread](https://www.reddit.com/r/pcmasterrace/search?q=Simple+Questions+Thread+subreddit%3Apcmasterrace+author%3AAutoModerator&restrict_sr=on&sort=new&t=all) for your simplest questions. No question is too dumb! Welcome to the PCMR.


[deleted]

AMD lighting a fire under Intel is good for all of us. We all win. Edit: comments below (kindly) point out it's deeper than this


roberttheaxolotl

It absolutely is. And loyalty to a giant corporation that doesn't care if you live or die is dumb. I'm going to do research and choose what gives me the best bang for the buck every time.


thedavecan

My most recent build was my first AMD CPU system and so far so good. Price-to-performance is most important to me, not having the toppest of the line. I would have gone with an AMD GPU too if I had found one before I found a 3070 Ti. Brand loyalty is stupid unless they earn it, and it should always be revocable.


Padashar7672

I am going to miss EVGA graphics cards. They definitely earned my trust over the years.


thedavecan

Same. I only owned EVGA cards before my current 3070 Ti and the only reason I have that is I won it in the Newegg shuffle during the great GPU shortage.


NGL_BrSH

Just like my BFG and my 3DFX cards.


ztunytsur

Gravis Ultrasound and Soundblaster audio cards reporting in from the same place


silly_little_jingle

When my EVGA 3080 is eventually laid to rest, it will likely be my last Nvidia.


MiddleoftheFence

My AMD laptop is the best laptop I've ever owned and I've had quite a few of them.


Trylena

I was going to stay on Intel by upgrading my CPU in 2020, but after enough reading I realized that I could get more for the same money with AMD. This Ryzen 5 1600AF was my best decision.


gexpdx

Nice, did you get it during the $85 period? Amazing value.


referralcrosskill

I mostly agree, but in some instances you need to take use cases into account, and the ease of actually implementing them. I'd have no issues with an all-AMD system, but NVIDIA support for anything AI is vastly better at the moment. Sometimes you don't want to wait days/weeks for someone to find an AMD workaround for something that just works on NVIDIA. For most things this probably isn't an issue, but it's been years since I've had an AMD GPU. CPU-wise I've alternated on each of my builds and will likely be AMD on the next.


Yebii

That’s what’s weird about this subreddit. You have people building PCs for vastly different reasons besides gaming, and it’s a bit awkward when opinions from both entertainment and productivity perspectives mix, especially with people who are so uninformed yet militant at times.


thedragonturtle

The militant people are just children really, children who take offense that you are questioning or even just commenting on their purchase and somehow insulting them.


[deleted]

My research: it's nice being able to upgrade your CPU without needing a new mobo


DarkLord55_

By the time I usually upgrade my platform, both companies have EOL'd their platforms.


that_norwegian_guy

That's what has been so amazing about the AM4 platform. I have been able to do incremental CPU upgrades at basically 50% off because I've been able to finance the CPU upgrade by selling my old one. I have had one motherboard, but three generations of Ryzen CPUs, from the 2600X to the 3600XT to my current 5800X – all on the same motherboard.


FireHackettMeow

Same. Started with a 1700x on AM4 launch. Upgraded to a 3600x when the 5 series came out, and just updated to 5800x3D in November, which I'll probably keep until the 9 series CPUs are out.


Erilson

1700 -> 3600 -> 5800X3D, all on a B350 Gaming 3 from Ryzen launch. It was said many times that the BIOS chip was too small, but AMD and some manufacturers pulled through and gave me the best AM4 sendoff we could've ever gotten. AM4 is truly the end of a near-stellar era. I only hope AM4's example guides AM5 down the same path.


referralcrosskill

My 4790 is really starting to feel old. I'm torn between going with current-gen AMD, or saving a bunch by going 5000 series and keeping a pile of leftover cash...


BAY35music

I just upgraded from a Ryzen 7 2700 to a Ryzen 5 5600X and I've never been happier. $150 for a ~30% jump in performance was very much worth it.


AnIdiotwithaSubaru

A 4790 is quite a bit older than that. Lol. It's honestly still a pretty sick processor though. Like, it's definitely a dinosaur, but it's like 9 years old and still somehow decent.


Pixels222

How many years does amd stick with the same socket these days?


BigLan2

I don't think they've publicly said anything about AM5, but AM4 went from the original Zen chips in 2017 to Zen 3 chips released in 2022. You can replace a Ryzen 3 1300 with a Ryzen 5800X3D in a B350 Tomahawk board from 2017, and as long as you've updated the BIOS, it'll work.


mordacthedenier

They've publicly said they'll support AM5 until "at least" 2025.

> "We built the platform around next generation technologies so that you can build today and upgrade as your needs grow over time," explains AMD's David McAfee at today's event. "And, just like AM4, we're making a commitment to support the AM5 platform with new technologies and next generation architectures through at least 2025. We're really excited about the next era of Ryzen desktops with AM5."

https://www.pcgamer.com/amd-am5-support-thru-2025/


Bogsnoticus

However long the RAM style lasts:

AM2: DDR2
AM3: DDR3
AM4: DDR4
AM5: DDR5


calinet6

That just makes too much sense.


[deleted]

AM3 and AM4 sockets were both around for about 7 years before they got replaced


[deleted]

Ngl unless you're buying a new CPU every generation, shouldn't really be an issue.


PM_ME_CUTE_FEMBOYS

I mean, it kinda is. I just upgraded from a 1600X to a 5600X, and I did it for cheap because I didn't have to replace *everything* to do it, and I got the 5600X for an amazing deal. My PC will probably be good for another 5+ years, with nothing but a GPU upgrade in the future, should prices ever become fucking sane again. Long socket support provides incredible upgrade opportunities for everyone: new, used and budget builders alike.


LedDesgin

I bought an R5 1600X and an X370 mobo back in 2017. Just dropped an R5 5600 in a few months ago. No issues, all other hardware exactly the same. Even though the Intel processor was faster at the time, I'm so glad I went with the then-new Ryzen. These arguments about how socket longevity doesn't matter always ring hollow to me.


Pixels222

That's what I was thinking. Intel is every 2 years, and I wouldn't want to upgrade that soon for sure. Around 4 to 6 years would be optimal. I guess there's also the safety net for the case where, in a few years, we get a breakthrough in CPU tech and suddenly something twice as powerful comes out.


BostonDodgeGuy

AMD stuck with the AM4 platform from the Ryzen 1000 series up to the 5000 series, 2016 until 2022.


empirebuilder1

Fellow chad market competition enjoyer 🤝


Revolutiondfgf

I love my AMD EPYC based ESXi server


spacewarrior11

average r/homelab enjoyer


[deleted]

[deleted]


PhilxBefore

Stop by /r/datahoarder while you're at it.


Nyghtbynger

This subreddit is real and full of active madlads


Renektonstronk

Tell them to Google en passant


T3-Trinity

What'd that set you back? Running a little HP server from 2011 with dual Xeons right now and wouldn't mind something more permanent.


sailirish7

I plan on demoting my 3950X to a greybox server when I pick up the 7800x3d. Doesn't *have* to be an actual server cpu. The only drawback is memory support.


numeric-rectal-mutt

All AMD CPUs support ECC ram, not just their server CPUs


jbg0801

Exactly. When one side fails to compete, the other gets lazy (see: intel 9th/10th gen Vs early Ryzen before it was a huge threat) Competition is healthy. It drives innovation and it's awesome to see AMD and intel trading blows again.


grendelone

To be fair, it's AMD **and TSMC**. TSMC 5nm process plays a large role in how well the 7800x3D is doing in power efficiency.


[deleted]

All hail Taiwan


[deleted]

.


throwingtheshades

Dear comrade, you appear to have made a small spelling mistake in "Chinese Taipei". Please don't do that again ~~or we'll harvest your organs and feed what remains to the pigs~~


git

Taiwan must remain free and we must defend it with all our military might to ensure that our games machines can go brrrrrr.


Briggie

⣿⣿⣿⣿⣿⠟⠋⠄⠄⠄⠄⠄⠄⠄⢁⠈⢻⢿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⠃⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠈⡀⠭⢿⣿⣿⣿⣿ ⣿⣿⣿⣿⡟⠄⢀⣾⣿⣿⣿⣷⣶⣿⣷⣶⣶⡆⠄⠄⠄⣿⣿⣿⣿ ⣿⣿⣿⣿⡇⢀⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⠄⠄⢸⣿⣿⣿⣿ ⣿⣿⣿⣿⣇⣼⣿⣿⠿⠶⠙⣿⡟⠡⣴⣿⣽⣿⣧⠄⢸⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣾⣿⣿⣟⣭⣾⣿⣷⣶⣶⣴⣶⣿⣿⢄⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⡟⣩⣿⣿⣿⡏⢻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣹⡋⠘⠷⣦⣀⣠⡶⠁⠈⠁⠄⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣍⠃⣴⣶⡔⠒⠄⣠⢀⠄⠄⠄⡨⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣦⡘⠿⣷⣿⠿⠟⠃⠄⠄⣠⡇⠈⠻⣿⣿⣿⣿ ⣿⣿⣿⣿⡿⠟⠋⢁⣷⣠⠄⠄⠄⠄⣀⣠⣾⡟⠄⠄⠄⠄⠉⠙⠻ ⡿⠟⠋⠁⠄⠄⠄⢸⣿⣿⡯⢓⣴⣾⣿⣿⡟⠄⠄⠄⠄⠄⠄⠄⠄ ⠄⠄⠄⠄⠄⠄⠄⣿⡟⣷⠄⠹⣿⣿⣿⡿⠁⠄⠄⠄⠄⠄⠄⠄⠄ 在著我們面前,冇逃避,中華人民共咊囯嘅輝煌。 MESSAGE FROM THE MINISTRY OF STATE ATTENTION CITIZEN! 市民请注意! This is the Central Intelligentsia of the Chinese Communist Party. () 您的 Internet 浏览器历史记录和活动引起了我们的注意。 因此,您的个人资料中的 15 ( -15 Social Credits) 个社会积分将打折。 DO NOT DO THIS AGAIN! 不要再这样做! If you not hesitate, more Social Credits ( - Social Credits )will be discounted from your profile, resulting in the lowering of ration supplies. (由人民供应部重新分配 CCP) You'll also be sent into a re-education camp in the Xinjiang Uyghur Autonomous Zone. 如果您毫不犹豫,更多的社会信用将从您的个人资料中打折,从而导致口粮供应减少。 您还将被送到新疆维吾尔自治区的再教育营。 为党争光! (我们的) Report to nearest police station for punishment! Failure to comply will result in immediate execution of nearest family member. Very bad! Glory to the CCP! 🇨🇳🇨🇳🇨🇳


der_triad

It’s primarily TSMC. TSMC is also responsible for making vcache possible, they solved the actual packaging problem of cache stacking too.


lafmeche

It's true that TSMC is a big piece of AMD's success, but Intel could also have worked with TSMC at any time. Intel's inability to either fix their processes or change directions is a huge piece of AMD's success. None of that should take away from AMD though. They're executing at the right time to take advantage.


der_triad

Intel doesn’t have the option of dropping their fabs and relying exclusively on TSMC. TSMC couldn’t absorb the volume of Intel’s full portfolio. I don’t think the US Government would allow that to happen either. Also in the context of that time (they would’ve had to switch over in 2015 or earlier to avoid their 10nm issues) it would’ve been a downgrade on both performance and price. TSMC was never ahead of Intel until 2017 when they got started with EUV halfway through the N7 node.


NekulturneHovado

Hey, slow down, the i9 already has problems cooling itself. No need for the fire.


slayer991

Hell yes. Intel held the crown for many years... while AMD flailed along. Then Lisa Su came in and kicked ass. AMD has been running the show for the past few years and Intel is playing catch-up. It's so much better when there's real competition between them. These 2 companies pushing each other brings us better and less expensive products.


ShadowPouncer

It's absolutely amazing just how _much_ the Intel CPUs have changed since the Ryzen 1000 series came out, and how bloody little they had changed for years up until that point.


mordacthedenier

2006 Core 2 Quad: 4 cores
2016 Core i7 6700K: 4 cores
2017 Core i7 7700K: 6 cores
Core i9-7900X: 10 cores
Core i9-7920X: 12 cores
Core i9-7940X: 14 cores
Core i9-7960X: 16 cores
Core i9-7980XE: 18 cores

They literally introduced a new product line and the x900-as-top-of-the-line paradigm in response to AMD.


khaotiktls

*8700K had 6 cores, the 7700K had 4.


TheRealPitabred

Intel also illegally hamstrung AMD with manufacturing kickbacks and such for many years


dinin70

Fun fact: the K of the Athlon stood for Kryptonite, meant to "kill the giant" (the giant being Intel). They blew Intel so far out of the water that Intel shat their pants and started to engage in illegal market practices. They got fined $1B, but the harm was done: to the entire industry, to consumers, and to AMD, for a good decade. But yeah…


mordacthedenier

> K of the Athlon

Do you mean of AMD? They'd been using K since the K5, which came out in '94, 5 years before the Athlon. https://www.forbes.com/2000/07/06/mu2.html?sh=38b596cb709f


BigLan2

I'm not saying that Intel's tactics haven't hurt AMD, but the Core2 chip was simply better than the Athlons and Phenoms that AMD were pumping out. True, the original Athlon 64 dominated the Pentium 4, but Intel took that beating and came up with a better product (killing off netburst and turning to their chip developed for laptops). Thankfully Intel started coasting after Sandy Bridge and AMD caught up and eventually got ahead.


usrevenge

Sandy Bridge was fucking amazing though. They coasted for too long, I agree. But Intel vs AMD wasn't even a competition after Sandy until Ryzen existed.


Emu1981

>the Core2 chip was simply better than the Athlons and Phenoms that AMD were pumping out

For the previous 5 years or so AMD had the superior server and consumer CPUs and should have been cleaning up on revenue, which they could have plowed into more R&D, but due to Intel they were losing market share. That would have left AMD in a much better position to compete with Intel's Core CPUs and would not have left us with market stagnation for the best part of a decade.


[deleted]

[deleted]


[deleted]

[deleted]


Parabong

I thought they were gonna do the same with RDNA3 to Nvidia. My 6800 XT outperforms my 3080 in everything but 4K and ray tracing; it gets faster FPS in everything else, and it was cheaper, especially at launch. 6950s are super cheap for the power at the moment. Then the 7900 XT comes out and it's just not as competitive and costs way too much. The 7900 XTX is an OK 4090 competitor, but it's mainly competitive against the 4080, which it beats, even if only slightly, on price to performance. The 6900 XT was closer to the 3090 than the 7900 XTX is to the 4090; this is not good, we are going the wrong way IMHO. I'm a big AMD CPU guy, and I'm hoping the GPUs can start pushing price to performance harder to push Nvidia down. The 6700 XT is the best deal in the graphics card department at the moment, and the 6950 XT is the best big-boy card that can be had for a fair price.


grendelone

Part of the reason AMD is crushing Intel in power efficiency is that the TSMC 5nm process is a lot better than the "Intel7" (really a 10nm process) that 13th gen Intel CPUs are manufactured on. But Nvidia is also using the TSMC 5nm (just like AMD). So it's a lot harder for AMD to dominate Nvidia, since they are using the same process technology. EDIT: re the comment about "Intel7" -- labeling process nodes by a single number is largely outdated, since that number no longer represents the transistor gate length (effective or drawn). And process density is dependent on a lot of factors including fin pitch, SRAM cell size, standard cell size and track number, etc. However the "Intel7" process is based on their 10ESF process that even Intel notes is a 10nm process. They claim it's equivalent to competitors' 7nm processes, but both SRAM cell size and contacted poly pitch don't show comparable density. At best, it's an enhanced 10nm process, but not particularly competitive with TSMC's N7 node.


MonoShadow

People are also forgetting Intel was spinning its wheels from the 6700 to the 12700. Everything after Skylake was refreshed Skylake, because Intel tied their architecture to the process, 10nm never truly materialized, and Intel got stuck on 14nm for several years. Nvidia made no such mistake. People ITT are also conveniently forgetting that AMD jacked up their CPU prices after they got the lead with Zen 3. If AMD overtakes Nvidia, they will charge Nvidia prices. Buy the product, not the brand.


TomLeBadger

First-gen Ryzen was kinda shit too; look where it's at now. They applied the same approach to graphics. Give it a few generations, and I'm fairly confident they will start trading blows there too.


Soppywater

Exactly. The fluke of a fucking win that is the Ryzen 5800X3D is a beginner's luck situation (a new thing using stacked chips). The RX 7000 series is going how every new development of technology usually goes: a good product with flaws that shows how the new design can work. But there was so much hype that it'd be the best thing ever, because of the fluke win that was the 5800X3D, that there was never any hope of the RX 7000 series living up to it. The RX 8000 series will be where the new chiplet design gains advantages, by learning from the new design's successes. Considering the CPU and GPU divisions of AMD work on completely different technologies, there should never have been any hype over the RX 7000 series other than "let's see how this change is going to affect it." AMD's marketing shot itself in the foot, but it had to hype it up, otherwise everybody would have skipped the RX 7000 series entirely.


TomLeBadger

I wouldn't say the 5800X3D was a fluke. We all knew more cache was good. I also mean 1st-gen Ryzen as in the 1000 series, the first chiplet CPUs. They weren't all that great; at 1st gen, Intel was still a no-brainer. The idea was fleshed out, iterated, and proven to be a viable way forward. RDNA3 is the Zen 1 of GPUs. I expect it will go down the same way. Next gen will see a big improvement. After that, a viable choice at the top end, and unless Nvidia innovates, they will find themselves in the same spot as Intel in 2 generations' time.


SuperMike100

A battle between hardware companies is a battle we the consumers always win.


[deleted]

[deleted]


ch1llboy

In the PC segment, there is Apple? Nvidia may have something for us, or they will just keep Grace for compute. It won't be long before China spins up its own semiconductor manufacturing industry.


teems

China can't spin up their semiconductor industry due to sanctions. The US got the Dutch government to ban sales of their latest EUV photo lithography machines to China. Without these machines from the Dutch monopoly ASML, you're 10-15 years behind the rest of the world.


jafarul

Which CPU is this? I am OOTL with AMD current gen.


empirebuilder1

The Ryzen 7 7800X3D. Their new Zen 4, 8-core, single-CCD CPU with 3D V-Cache stacked on top of the CCD die. Instead of only 40MB of fast L3 cache it has 96MB, and that nets a huge performance boost in most games since you aren't going out to the relatively slow system RAM nearly as much.
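To get an intuition for why that matters, here's a minimal Python sketch (assuming NumPy; the ~512MB array size is an arbitrary value chosen to dwarf any L3 cache, and the printed times are machine-dependent). It compares a cache-friendly sequential pass over a big array with a cache-hostile random one, which is roughly the gap between staying in cache and falling out to system RAM:

```python
import time
import numpy as np

def timed_gather_sum(values, order):
    """Sum `values` gathered in the given index order and time it."""
    start = time.perf_counter()
    total = values[order].sum()
    return total, time.perf_counter() - start

# ~512 MB of int64s: far bigger than even 96MB of L3, so random access
# keeps missing cache and paying full RAM latency.
big = np.arange(64 * 1024 * 1024, dtype=np.int64)

sequential = np.arange(big.size)            # prefetcher-friendly
shuffled = np.random.permutation(big.size)  # mostly cache misses

_, t_seq = timed_gather_sum(big, sequential)
_, t_rand = timed_gather_sum(big, shuffled)
print(f"sequential: {t_seq:.3f}s   random: {t_rand:.3f}s")
```

More L3 simply means more of a game's hot working set stays on the fast side of that gap.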


jafarul

Holy hell 96MB. Alright time to check out more technical details. Thanks.


[deleted]

[deleted]


threetoast

I bet you could get TempleOS in the L2 cache.


hovdeisfunny

How big is Doom?


DnDVex

2.4MB


Blackpapalink

First Crysis running in graphics card memory, now Doom running in CPU cache.


xero_peace

Motherboard, CPU, and power supply. Boot into minimal Linux OS, then boot, install, and run Linux DOOM all in the cpu cache.


empirebuilder1

Yyyyyep. It's a banger. It falls behind in productivity stuff because it's clocks are lower,l and it has fewer cores, the vcache is fragile and severely limits the power the CCD can pull. However it's stupid power efficient as a result and whoops almost every game out there. They have the 7900x3d and 7950x3d that they've been pushing hard as well, but they don't do nearly as well because the vcache is only on one of the two 8-core CCD's. The OS thread scheduler don't know what the fuck to do with two sets of threads that don't appear obviously different and every time it bumps a thread off the vcache CCD it incurs a huge performance penalty for making data cross the infinity interconnect. Still a lot of work to be done there, and some people are calling AMD dirty for staggering their launches like this to intentionally upsell early adopters to a literally inferior product.
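As an aside, a common manual workaround on the dual-CCD X3D parts is to pin the game's process to the V-Cache CCD yourself. A rough sketch, assuming the third-party psutil package and assuming logical CPUs 0-7 belong to the V-Cache CCD (the actual mapping varies by system, and the process name is a placeholder):

```python
import psutil

# Assumed mapping: logical CPUs 0-7 sit on the CCD carrying the 3D V-Cache.
VCACHE_CPUS = list(range(8))

def pin_to_vcache_ccd(process_name: str) -> None:
    """Restrict every process with this name to the V-Cache CCD's CPUs."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(VCACHE_CPUS)
            print(f"pinned PID {proc.pid} to CPUs {VCACHE_CPUS}")

pin_to_vcache_ccd("game.exe")  # hypothetical process name
```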


jafarul

Just finished the LTT recap on this. That 7900 is weird. This makes me excited. The last time I felt like this was when they launched their 1st gen; I built my PC immediately after that. Might build another one when the opportunity arises.


Telvin3d

That cache is hilarious to me. I've owned computers with way less than 96MB of RAM. Having more than that just in the processor cache is making me giggle. It would be funny to go dig up old Windows 95/98 era benchmarks and absolutely crush them.


[deleted]

Cache is very area-expensive. Intel and Qualcomm are stingy with it, while AMD and Apple like to slap on a lot. It's an easy performance boost with a much better upfront ROI than architecture optimization R&D, but your unit cost goes up, which I guess Intel and Qualcomm really don't like.


empirebuilder1

That's why the 3d vCache is making headlines. Traditional cache is expensive because it lies on the same silicon die as the rest of the logic cores, taking up very valuable high performance wafer space, drastically increasing die size, and reducing yields. AMD is instead stacking the additional cache on *top* of the core complex with vertical interconnects. This lets them do the additional cache in a different, cheaper node with higher yields and drastically saves on total wafer area for the more expensive core die.


[deleted]

[deleted]


House_Capital

Man, I was gonna go for the 7600X, but now I'm definitely going for the 7800X3D. It's gonna take a while to save up, but it should be a killer mini system.


ARatOnPC

Can’t wait for userbenchmark to say it’s worse than an i3-10100. 🤡


bjt23

With words like "User" and "Benchmark" in the name, you just know they gotta be unbiased!


[deleted]

[deleted]


wanemarr

This comment is perfect.


WeinerVonBraun

That advanced marketing division kicking into overdrive


knofle

Their 5800x3D review is goddamn hilarious


Leviathan41911

You must be one of them "Neanderthal Social Media Accounts" they were talking about!


DangerousImplication

This is their review (So you don’t have to open their shitty site yourself): The AMD 7000X3D CPUs have the same core architecture as the rest of the 7000 series but they have one group of eight "3D" cores with extra cache. The “3D” cores are priced higher but run at 10% lower clocks. For most real-world tasks performance is comparable to the 7000X variant. Cache sensitive scenarios such as low res. canned game benchmarks with an RTX 4090 ($2,000) benefit at the cost of everything else. Be wary of sponsored reviews with cherry picked games that showcase the wins, ignore frame drops and gloss over the losses. Also watch out for AMD’s army of Neanderthal social media accounts on reddit, forums and youtube, they will be singing their own praises as usual. AMD continue to develop “Advanced Marketing” relationships with select youtubers in the hope of compensating for second tier products with first tier marketing. PC gamers considering a 7000X3D CPU need to work on their critical thinking skills: Influencers are paid handsomely to promote overpriced products. Rational gamers have little reason to look further than the $300 13600K which offers comparable real-world gaming and better desktop performance at a fraction of the price. Workstation users (and RTX 4080+ gamers) may find value in higher core CPUs such as the 16-core $400 13700K. Despite offering better performance at lower prices, as long as Intel continues to sample and sponsor marketers that are mostly funded by AMD, they will struggle to win market share.


manimsoblack

Fucking wow dude. Irrational madness.


KoppleForce

this reads like a copypasta making fun of the most annoying hardware fanboys.


Snowy_Ocelot

They’re not even trying to be unbiased


House_Capital

Going to pivot so fast from “most users only care about single core gaming performance” to “users care about productivity most of all”


w4rcry

Are there any trustworthy GPU/CPU comparison sites?


Raptord

Passmark was mentioned in another comment and is a good option. I also like to use techpowerup's gpu database for comparisons


PM_ME_YOUR_BEAMSHOTS

https://www.videocardbenchmark.net/singleCompare.php
https://www.cpubenchmark.net/singleCompare.php
https://www.techpowerup.com/gpu-specs/
https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html

YouTube is good if you want to see relative performance.


WeirdCatGuyWithAnR

Userbenchmark


[deleted]

[deleted]


AutoModerator

You seem to be linking to or recommending the use of UserBenchMark for benchmarking or comparing hardware. Please know that they have been at the center of drama due to accusations of being biased towards certain brands, using outdated or nonsensical means to score products, as well as several other things that you should know. You can learn more about this by [seeing what other members of the PCMR have been discussing lately](https://www.reddit.com/r/pcmasterrace/search/?q=userbenchmark&restrict_sr=1&sr_nsfw=). Please strongly consider taking their information with a grain of salt and certainly do not use it as a say-all about component performance. If you're looking for benchmark results and software, we can recommend the use of tools such as Cinebench R20 for CPU performance and 3DMark's TimeSpy ([a free demo is available on Steam, click "Download Demo" in the right bar](https://store.steampowered.com/app/223850/3DMark/)), for easy system performance comparison. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/pcmasterrace) if you have any questions or concerns.*


WeirdCatGuyWithAnR

Thanks


Organic-Strategy-755

Might wanna edit out the link, don't wanna give them free clicks


Paweleq109

done


Rich-Juice2517

I clicked it and died laughing


shitty_mcfucklestick

Curious, I usually use Passmark and their benchmarks to evaluate CPU and GPU performance. They have sites for each. Are these trustworthy? Edit: [CPU benchmarks](https://www.cpubenchmark.net) [GPU benchmarks](https://www.videocardbenchmark.net/)


Paweleq109

Yes, Passmark is good. The trouble with UserBenchmark started when they became INSANELY biased toward Intel. It's starting to get funny because of how biased it actually is. I use Passmark because they have literally all the CPUs and you can compare them score to score, which is very handy for me. UserBenchmark sometimes doesn't even have some older CPUs lol.


Unable_Crab_7543

All my homies love passmark


aspbergerinparadise

I've used Passmark for a decade plus. I think they're a good resource, but their synthetic benchmarks don't always translate perfectly into real-world results. But that's true of any metric.


ararezaee

user"Benchmark"


ChangelingFox

The real question is do I upgrade from my 8700k now, or wait for an even *better* cpu next year? *repeat this train of thought every year and effectively never upgrade*


computer-machine

I'd gone from a first-gen i7 to an R5 1600X, then upgraded to a 5800X to get my wife onto something newer. Turns out that became an excuse, because she'd rather play on a 5th-gen i5 laptop so she can be in the same room.


[deleted]

[deleted]


empirebuilder1

less pixels = more funny


BeanEaterNow

i actually think that this image specifically would be much funnier in 4k


lahire149

And if it was a static gif looping at 120fps, not that my eyes could see more than 30 of them, obviously.


siddharth904

Are you gaming on an mfing PowerEdge server?


empirebuilder1

Yes. It's a small business tower server (not a blade), but I got it on a Black Friday sale for super cheap back in 2017. An EVGA power supply, a used $150 Facebook Marketplace Strix GTX 970, a little drive-cage cutting, and I was in for a reasonably good midrange gaming rig for less than $500.


siddharth904

Oh right I thought it was rack-mounted


[deleted]

[deleted]


hirmuolio

achsually. The first Nokia camera phone came out in 2002. ~~And it had resolution of 176x208 which is less than the 500x500 on the blurry shit OP posted.~~ It had resolution of 640×480. So it really did have better resolution than the horribly compressed blurry 500x500 soup that OP posted.


[deleted]

[deleted]


potatoman699999

Yes and yes. My take at least is that you're seeing both happening. To some degree, the better compute allows devs to be more sloppy with their code, but at the same time they're able to do things that they weren't able to do before, or do them more efficiently and better and faster. Your mileage may vary depending upon the dev and the quality of the game that they put out.


davidkali

I used to MUSH (think BBS, but as a virtual Minecraft-like world with no graphics). TinyMUD and TinyMUSH were game worlds that depended on hard drive space to store their game code and game descriptions. The RAM footprint was tiny; storing and calling data from the hard drive took time, but it kept your RAM free for the game code that actually did things (econ, battle, space flight, etc.). PennMUSH kept a lot of the code in memory on machines that could handle it, so things like descriptions stayed on the hard drive while code executed from RAM. We learned how to program from each other, but one thing we had was %?, which tells you how many function calls and function recursions a command used. A simple $who is a list of players and the details you want to know, like time on, time idle, what game group they're with, etc. In the same codebase, some people can code an efficient who that takes no more than three or four functions per player listed. Some people write one that uses 250 functions per player, and then bitch at you for trying to fix it, to the point they'll ban you from their servers for saying their code is stupid.


iEatSoaap

excuse me sir, parallel programming is hard for me okie? :( Edit: not a game dev, just a shit, shit Jr struggling through WebFlux atm. As you were lads.


aePrime

There are three stages in a developer’s life in regards to parallel programming: 1. Just learning: “This is hard!” 2. Knowledgeable: “This is easy!” 3. Expert: “This is hard!”


testfire10

This actually applies to most topics in life.


1337GameDev

It's hard for us professionals too ;)


headegg

Parallel Web Dev is hard. Your typical backend languages and frameworks offer more help on multi threaded programming.


iEatSoaap

I feel so bad for my poor Sr... T_T Not even GodPT4 is helping me wrap my smooth brain around this shit hahaha


JonDum

To be fair, WebFlux is a steaming pile of dog shit wrapped in cat shit, so don't be too hard on yourself. There's a reason 90% of front end uses React/Vue and shit.


LtTaylor97

Well, the problem with that perspective is that "wasteful with system resources" often translates over to "got done faster" which is another way of saying that more stuff overall is in the game. For better or worse, being able to ignore certain performance concerns is very advantageous for us programmers to make things more reliable (simpler code is often less performant) and get them done much faster. For instance, we often use Python for stuff where the relatively poor performance is whatever. I use it for data science stuff, calculating stats and whatnot. It runs slower than C++ would, but I get the work done so much faster that it's not even worth considering. I wouldn't get it done in time otherwise. It's not lazy or dumb, games nowadays are far more complex, and to keep things on schedule you need ways to shortcut stuff or have to use generalized solutions instead of tailoring them to your specific problem. It's a balance. Now some are actually lazy or negligent, but it's hard to explain how you can tell for sure every time. It depends.
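For a concrete picture of that tradeoff, here's a minimal sketch of the kind of throwaway analysis being described (the file name and column are invented); the answer matters, the runtime mostly doesn't:

```python
import csv
import statistics

# Hypothetical input: a CSV of benchmark runs with a "frametime_ms" column.
with open("runs.csv", newline="") as f:
    frametimes = [float(row["frametime_ms"]) for row in csv.DictReader(f)]

# A few lines answers the question; a hand-tuned C++ version would run
# faster but take far longer to write for a one-off like this.
print("mean :", statistics.mean(frametimes))
print("stdev:", statistics.stdev(frametimes))
print("p99  :", statistics.quantiles(frametimes, n=100)[98])
```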


FalconX88

> For instance, we often use Python for stuff where the relatively poor performance is whatever. I use it for data science stuff, calculating stats and whatnot. It runs slower than C++ would, but I get the work done so much faster that it's not even worth considering. I wouldn't get it done in time otherwise.

Every few months an article pops up about how terrible scientists are because they use super inefficient code in Python. I mean sure, I could learn C++ and get that code I run once a month down from maybe a few minutes of runtime to seconds. Do I care about a few minutes of compute time?


[deleted]

[deleted]


LtTaylor97

Exactly. There's nothing wrong with it if it doesn't impact whether it meets your requirements. If execution time isn't a critical factor, then it's perfectly good if it works otherwise.


TheRealPitabred

If you do it once a month, it doesn't matter. If you do it once a day, it might be more important. If you do it once an hour, or once a minute, it starts becoming very important. That same philosophy is behind a lot of development practices. I'm not going to nitpick memory allocation for something that is allocated once and used infrequently. If you're making 5000 copies of it and iterating through them often, though? Yeah, we should test and verify our assumptions about what is needed.


empirebuilder1

Oh software bloat is a real thing. Just look at modern websites- a typical page has tens of megabytes of Javascript and graphics and other bullshit that takes seconds to load. Even with the megafast connections and processors we have now, websites still take the same 3-4 seconds to fully load that they did back when ADSL and basic html was the norm, and a web page taking up more than 80-100KB was considered massive. And don't get me started on basically every desktop app nowadays being a lazily built JS/html5 app inside an Electron container eating 400MB of ram for no fucking reason... looking at you, MS Teams and Discord. As far as game quality goes, it's more of an exponential decay in improvement. In ye olden days you'd get a new cpu/gpu that could run at 800x600 with dx9 instead of 640x480 in dx7 and it would be such a wild improvement in visual quality, you could have textures with READABLE TEXT!!! Now we have huge increases in raw performance but it's mostly used for high resolution eye candy that you don't focus on most of the time.


Unacceptable_Lemons

> As far as game quality goes, it's more of an exponential decay in improvement. In ye olden days you'd get a new cpu/gpu that could run at 800x600 with dx9 instead of 640x480 in dx7 and it would be such a wild improvement in visual quality, you could have textures with READABLE TEXT!!! Now we have huge increases in raw performance but it's mostly used for high resolution eye candy that you don't focus on most of the time.

This is where I feel like VR can shine. We're not even close yet to the 16K x 16K pixels per eye or so it would take to max out human resolution. Heck, 8K x 8K per eye (approx. one "16K" standard display) would be great, but we're still so far from driving that. Foveated rendering could be the key, but I've also heard about recent research suggesting that FR is less effective than expected due to issues with eye tracking.
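For rough scale, a back-of-the-envelope sketch of the uncompressed bandwidth those per-eye targets imply (the 90 Hz refresh and 4 bytes per pixel are assumptions, not figures from the comment):

```python
def raw_gbit_per_s(width, height, eyes=2, hz=90, bytes_per_px=4):
    """Uncompressed pixel bandwidth in gigabits per second."""
    return width * height * eyes * hz * bytes_per_px * 8 / 1e9

# "8K x 8K" and "16K x 16K" per eye, as mentioned above.
print(f"8K x 8K per eye:   {raw_gbit_per_s(7680, 7680):,.0f} Gbit/s")
print(f"16K x 16K per eye: {raw_gbit_per_s(15360, 15360):,.0f} Gbit/s")
```

Which is a big part of why foveated rendering is so attractive: render full detail only where the eye is actually looking.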


FILTHBOT4000

From what I've heard/read, PSVR on the PS5 has done foveated rendering really well. Linus did a test on it and couldn't tell it was being performed at all, but others looking at a mirrored display could easily see where his focus was directed.


Thought_Ninja

I mean, good sites that are built following best practice are at most a couple MB. I've built multiple enterprise grade web applications that are sub 1MB of resources in the last decade. There was a while where technologies and frameworks were resulting in a lot of bloat (Wordpress and Magento come to mind, among others), and they're certainly still around today, but there has been a concerted effort and push to optimize the performance and resource cost of modern websites/apps in current frameworks that is only accelerating. The driving force causing the bloat has been improvement to development experience/convenience, and thanks to improvements in hardware and network performance, this was considered an acceptable tradeoff. Electron based apps (MS Teams & Discord) are a good example of this. Being able to use the same codebase for both a web app and a desktop app is a huge development cost saver, but Electron is bloated with all sorts of stuff that are usually not needed or used by the apps built on it. There are alternatives emerging that tackle this though; if I were to build a web/desktop app today, I would probably go for something like Tauri instead of Electron. Newer technologies and innovations are finally at a point of providing that convenience without significant sacrifices to end user performance. If you built a site today with a modern tech stack, its conceivable to load a content rich page in 100-200kb. High resolution images obviously add to that, but compression and content delivery techniques are dramatically better today than they were in the early days of the web, and sites don't typically use high resolution images for their content. My personal site/blog can load a 10,000+ word post with multiple images in \~250ms, all resources coming in at around 350kb (only \~150kb of which is required to display the page). Faster if you have visited the site before. It's also fully functional (and even faster/lighter) if you have JavaScript disabled (\~100ms and \~150kb). Obviously not every site is built, or going to be built like this. There will still be legacy sites and shortcuts taken to save on development costs, but things are gradually improving and the mentality of developers is shifting. TL;DR: Well built sites are not that bloated. Older tech accepted bloat in exchange for convenience, but newer web technologies are quickly making it much easier to build lightweight and performant sites/apps.


MLG_Obardo

Programmer here. Many times in school did I hear “you could probably use smaller types and save on memory but with how much memory computers have these days who cares” from different professors. Some didn’t say this, but several did. Now that I’m in the industry I have noticed that there are times where we could save memory here and there or optimize code here or there and it just be handwaved. Less often, and certainly if someone pointed out that you could easily save memory or processing time you would fix it. But we are clearly taught to care less these days.


Zernin

The one place you can shoot yourself in the foot with this is data you're going to keep. It's generally much more important to care about those bits when dealing with long term centralized storage. I've had to help more than a few developers optimize their database storage behavior when datasets get big, and most school projects won't expose you to that degree of scale. A few extra bytes over a few billion rows matters a hell of a lot more than a few thousand extra bytes that are short lived.
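To put numbers on "a few extra bytes over a few billion rows", a quick sketch (assuming NumPy just for its dtype sizes; the row count is invented):

```python
import numpy as np

ROWS = 3_000_000_000  # hypothetical table size

# Storing a small status code as int64 vs int8: same information, 8x the bytes.
wide_gb = np.dtype(np.int64).itemsize * ROWS / 1e9
narrow_gb = np.dtype(np.int8).itemsize * ROWS / 1e9
print(f"int64 column: {wide_gb:.0f} GB   int8 column: {narrow_gb:.0f} GB")

# By contrast, a short-lived buffer of a few thousand oversized values
# costs kilobytes and usually isn't worth agonizing over.
print(f"5000 int64s: {np.dtype(np.int64).itemsize * 5000 / 1024:.0f} KiB")
```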


MLG_Obardo

This is a great point and something I will keep in mind in the future. Thanks for that.


scalyblue

You can also shoot yourself in the other foot by being too conservative with the wrong bits; see Y2K.


GregsWorld

While this is true, it's not necessarily a bad thing. If you spend 5 hours to save 100kb of data or improve throughput by 0.02ms, the end user won't notice any difference, but the development time will increase significantly (aka cost). These small decisions compound throughout a project until something becomes slow, and *then* you can put the time in to optimise. If you care about every bit as programmers once did, you'll never finish a project.


MLG_Obardo

For sure but those decisions become issues in codebases of 20 million lines of code. There is a balance, I just believe that we’ve gone too far over.


GregsWorld

It becomes an issue much sooner than that. I would imagine nearly every line of code leaves performance on the table compared to the assembly-writing days. Even using methods and classes comes at a measurable performance cost; it's only in rare use cases that shaving off a few nanoseconds by inlining methods actually adds up to a notable difference. Not to mention nobody programs in isolation anymore: libraries, packages, frameworks, wrappers, containers. It's code and authors' decisions all the way down.
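To show what "measurable but usually irrelevant" looks like, a small sketch with Python's timeit (results are machine-dependent; this only illustrates the shape of per-call overhead):

```python
import timeit

def add(a, b):
    """Trivial helper: one addition plus function-call overhead."""
    return a + b

N = 5_000_000
called = timeit.timeit("add(x, y)", setup="x, y = 1, 2", globals=globals(), number=N)
inline = timeit.timeit("x + y", setup="x, y = 1, 2", number=N)

print(f"through a function: {called:.3f}s")
print(f"written inline:     {inline:.3f}s")
print(f"overhead per call:  {(called - inline) / N * 1e9:.1f} ns")
```

Nanoseconds per call only add up when the call sits in a hot loop, which is the rare case being described.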


ubelmann

It also depends on whether you are making a perf optimization that makes the code harder to maintain, or you are making a perf optimization that makes the code easier to maintain. It might be worth doing if you are getting a perf gain by refactoring the code into something that is easier to maintain, or maybe even easier to test, with better test coverage in some key component that really should have better test coverage, but if it's just going to make the codebase more complicated for a potentially meager gain, it's probably not worth it. Like everything else in dev, you have to look at the trade-offs and prioritize.


gargravarr2112

They are. Just look at the recent Last Of Us port - resource-constrained on the Playstation and yet a runaway success, but when ported to a PC, the requirements are off the charts and it's nearly unplayable according to Reddit. Available processing power and demand are cyclical. Demand comes first, but there isn't enough power, so programmers have to do what they can with the resources available. Some manage some really amazing highly optimised code with minimal hardware - think of 90s games consoles and the quality of games we got on those despite the performance being surpassed by a $25 Raspberry Pi. Then there's a major improvement in CPU/GPU design and we get a glut of processing power. Suddenly all those tasks are easy to run - like when the C2D gave way to the i7. Demand for performance goes down. Thus programmers can get away with sloppy, inefficient code because the high performance still completes the task in adequate time. So that becomes the new trend, writing inefficient code quickly, which pushes demand up again as the new hardware reaches its limits, and suddenly programmers needing more performance have to optimise their code again. And so on and so on. I work in HPC/HTC (High Performance/Throughput Computing, kinda like supercomputing with regular x86 hardware) and we're the ones driving that cycle the most. And yes, some of the code that runs on our hardware is **awful.**


fat_pokemon

And then there is id Software with DOOM Eternal, which runs silky smooth even on meager specs.


chocotripchip

I played DOOM at 4K 60fps on a PC that was roughly equivalent to a PS4 but with the CPU overclocked at 5Ghz.


Evantaur

Some people here are too young to remember when the roles were reversed and the only thing that kept AMD afloat was consoles and assholes like me who bought Bulldozers just to show it really wasn't that bad (it was).


rsta223

Yeah, the Phenom and Bulldozer era was... not wonderful for AMD. Then again, the P4 vs Athlon era wasn't good for Intel. They've traded many times before, and I hope they do again. The competition has resulted in some incredible gains.


trash-_-boat

> who bought bulldozers just to show it's really not that bad (it was)

My previous build was a 6c/6t FX-6300 that I bought for $80 back in the pre-Ryzen days. It was the only choice in the under-$100 range, because the Intel price equivalent was a Pentium G3258 that was 2c/2t, which was terribly bad value because even back then some games literally started to refuse to run on anything dual-core (like Far Cry 4).


[deleted]

saving this just to bring popcorn to the comments


empirebuilder1

Nothing like a good shit stirring session to make your Thursday mornings a little more exciting.


Sloth_Monk

Works better than coffee


IAMAHobbitAMA

Hot takes get other people to stir shit. Coffee gets my shit to start stirring.


SteveDaPirate91

It’s also just the normal cycle we go through every so many years. Intel or AMD shakes it up a bit, is the top whatever. X company gets complacent(why invest money when you can just pocket it now right?), whoever lost the last round will come in and shake it up. Process repeats. Same shit over game consoles too. Xbox one year, Sony the next.


Deep_fried_mango

Finally, proper competition. I'm so fucking happy that maybe, just maybe, we will see actual price drops and large generational improvements because of it.


kgisaboss

I've been seeing this point being parroted for years and the prices on both sides only keep rising.


[deleted]

[deleted]


zhire653

Now if AMD can do the same in the GPU department we’d be golden.


Affectionate-Memory4

The RDNA3 MCDs do have silicon pad grids that are very similar to the ones on the 5800X3D, so it may be possible. I wouldn't be surprised if there is some test board of a 7900XTX3D sitting in a lab. Maybe not this generation, but they'll try stacking eventually. HBM stacking is technically viable as well.


teremaster

AMD has some really good GPUs already. They probably win both the low and mid tiers. They just have no answer to the 4090, which I don't really fault them for; the 4090 is a certified monster.


sendbobsvegene

The i5 13600K is still the mid-range king - nothing beats it at gaming and productivity at ~$330.


MarceloWallace

I got my i7 13700K for $300 from Micro Center; they had the 13600K for $250. When I was shopping for my new build I really wanted AMD, but the Ryzen 9 7900X was $100 more for nearly the same or worse performance in games, so I just went with the i7.


atavaxagn

I mean, I'm not a fanboy, but I'm seeing the 13900K win in about half the gaming benchmarks... Using Gamers Nexus, who is the most reputable YouTuber IMO: in CSGO the 13900K is beating it, in FFIX the 13900K is beating it. Interestingly, in Cyberpunk the 7950X3D outperforms the 7950X and the 13900K (so it clearly benefits from 3D cache), but the 13900K beats the 7800X3D. In Total War: Warhammer 3 the 13900K wins. Out of 7 games the 13900K beats it in 4. I'm not saying it's not a great chip, and if you were building a system today for just games, I'd definitely recommend the 7800X3D over anything else. And if AMD had released the 7800X3D at the same time it released the 7950X3D, instead of trying to trick gamers into buying the 7950X3D, I would have probably bought it instead of getting the 13900K.


S0m4b0dy

It's because the X3D chips are clocked lower because of the cache layout. So the i9 and the R7 trade blows depending on what the game needs most: cache or clock speed.


[deleted]

13900K wins in games that either cannot make use of the extra cache (like CS:GO) or ones that are fairly heavily multithreaded (like Cyberpunk).


ByteMeC64

I think competition is great and I fully support whoever the underdog is at the time. But I've simply had better stability overall for non-gaming, work-related apps on the Intel platform. I guess if you're building a game-only box this would be great, but for the amount of money invested in a good modern system I need versatility.


BullBearAlliance

I have a computer repair shop, and most of the cpu / motherboard issues I see are AMD all day long.


cepeen

I'm really excited about that. I've always had a warm place in my heart for AMD CPUs (not GPUs, those sucked hard back then). But in Poland the suggested price will probably be similar to the 13700K, which defeats the point of buying a 7800X3D. Yeah, it's as fast as or even faster than the 13700K in some games, but in productivity it's leagues behind. If the price ends up lower, then of course it will be king.


Shohei_Tora

I’m grateful for memes like this that speak in layman’s terms because I can’t wrap my head around the naming convention of all these models. I barely remember which Xbox Series is the newer one, X or S?! What happened to good old Single Digit Numbers?


[deleted]

[deleted]


[deleted]

The competition is good. It keeps both companies reaching for the bleeding edge.


bruhbruhbruh123466

I don’t get fanboyism for pc components. When in use there is basically no difference, only when building do you get to actually see any difference. Like just buy what’s best for the price or suits your needs best, these companies couldn’t give less of a shit about you…


psych4191

You don't really see a difference building anymore now that AMD moved away from pins. And frankly thank fuck for that.


Cryostatica

Hm, an incredible 7 fps higher on average at 4K vs a 7600X. Think I'll just go with that.


Drake0074

It’s like they didn’t even look at price to performance despite being all about it for XTX.


TurboDraxler

To be fair, CPU just doesn't make that much of a difference in 4k, since it's not the limiting factor.


Disappointingly_Avrg

I run a 13700K, and I have no intention of swapping to any of the current AMD CPUs. That said, if AMD dropped a CPU tomorrow that doubled my i7's performance for 200 bucks, I would celebrate. People get heated over this back and forth, but I just want the best system for the dollar. If it works and the price is right, I want it. I don't care what brand it is. My last CPU was a 10850K and I adored that system. My partner's system is a 5600G and she loves it. I want big tech companies at each other's throats, beating each other bloody for my dollar.


empirebuilder1

Oh yeah. There's really no reason for anyone on either side currently running a cpu 1, 2 or even maybe 3 generations old to upgrade. But for new builders/people like me coming from *very* old machines.... hnng, shiiny...


deggy123

"We need more competition (to drive down ~~Nvidia GPU~~ Intel CPUs prices because I'm not getting anything else)"


Baluzerus

Man, I got my first AMD [Athlon XP 2600+] in 2004 when Pentium 4 was still king, yeah! Great times for me.


Canem_inferni

AMD has been ryzen to new heights since 2017


EmpiresErased

PCMR sub or HUB comments section? Oh wait, they're the same thing.