Semi-Proprietary Flat Pack Memory
Dell CAMM was announced yesterday to mixed reactions: some were impressed that Dell Precision mobile workstations will ship with 128GB of RAM, while many others expressed concerns about the somewhat proprietary nature of this new type of memory module. It is the second point that most techies are focusing on, as Dell’s response to those concerns has been at best interesting and at worst confusing.
Dell claims that the design is not proprietary at all, despite the fact that they are currently the only source for Compression Attached Memory Module replacements or upgrades. They also hold the patents on both CAMM and an interposer which would let you use regular SO-DIMMs on a CAMM-compatible motherboard, so there will be royalties; however, Dell doesn’t want to talk about those royalties at all. They also describe CAMM as the next standard in memory, except that they seem to be presenting it to JEDEC as a fait accompli instead of working with the group on the development of the new memory standard. That could help when it comes to arguing about the reasonable-fee portion of JEDEC’s Reasonable and Non-Discriminatory (RAND) licensing terms; the point that everything in modern computers is cross-licensed is not incorrect, but it has also generally meant that a consortium of companies was involved in the design of a new standard.
That is not to say all is bad about CAMM; it offers some interesting features. As it is single-sided, it allows even thinner laptop chassis to be designed: the 16″ 7670 is a mere 0.98″ thick, while the 17″ 7700 is a hair thicker at 1.13″. The design of the modules should also offer more protection to the chips onboard and act as a heat spreader, which could allow CAMM to run cooler than SO-DIMMs usually do. The existence of an interposer to allow the use of SO-DIMMs on a CAMM motherboard is also far from an awful feature.
There is also the very good point that the SO-DIMM interface is about 25 years old and DDR5 is giving it some trouble. The trace routing on SO-DIMMs currently limits the performance of large pools of DDR5: for instance, a laptop with 128GB of traditional RAM would be limited to DDR5-4000, while CAMM can still hit DDR5-4800.
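To put rough numbers on that gap, here is a quick back-of-the-envelope sketch. It assumes a conventional 64-bit-per-channel, dual-channel laptop configuration and ignores real-world efficiency losses, so treat the figures as illustrative rather than measured.

```python
# Peak theoretical DDR5 bandwidth, per the usual rule of thumb:
# transfers/sec x 8 bytes per 64-bit channel x number of channels.
BYTES_PER_TRANSFER = 8   # one 64-bit channel moves 8 bytes per transfer
CHANNELS = 2             # assumed dual-channel laptop configuration

def peak_bandwidth_gbs(mt_per_sec: int) -> float:
    """Combined peak theoretical bandwidth in GB/s across all channels."""
    return mt_per_sec * BYTES_PER_TRANSFER * CHANNELS / 1000

for speed in (4000, 4800):
    print(f"DDR5-{speed}: {peak_bandwidth_gbs(speed):.1f} GB/s peak")

# DDR5-4000: 64.0 GB/s peak
# DDR5-4800: 76.8 GB/s peak
```

That works out to roughly a 20% bandwidth penalty for the SO-DIMM-limited 128GB configuration.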
There has not been much reaction from other laptop makers, who will have far more impact on the future of CAMM than users will, so for now we will have to wait and see if Dell can get other companies to back their proposed new mobile memory standard.
To achieve a 0.98-inch thinness for the 16-inch 7670 and 1.13 inches for the 17-inch 7700, the laptops' DDR5 memory uses a design Dell hasn't shown before, Compression Attached Memory Module (CAMM).
More Tech News From Around The Web
- Foxconn factories near Shanghai cease operations over COVID-19 cases @ The Register
- Apple Launches Do-It-Yourself Repairs For iPhone 13, iPhone 12 and iPhone SE, But There’s a Catch @ Slashdot
- Apple and Intel likely the first to use TSMC’s 2nm node in 2025 @ The Register
- Microsoft finds Linux desktop flaw that gives root to untrusted users @ Ars Technica
- Nvidia, Intel, others pour $130m into optical chip startup Ayar Labs @ The Register
- Two Largest Marsquakes To Date Recorded From Planet’s Far Side @ Slashdot
- Arm to IoT devs: Go faster with our pre-made chip subsystems @ The Register
Coming from someone with an interest in SBCs (which generally have onboard DRAM and go to great pains with their layout), it has always amazed me that PCs have used removable memory modules without much done to control impedance mismatch and other discontinuities, on top of much longer traces. The challenges PCs must face to make this work, considering that all of these parameters vary with time and temperature as the system runs (probably a much more severe issue for laptops, which do a lot more thermal cycling), are impressive.
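To make the mismatch point concrete, here is a toy calculation of the reflection coefficient at an impedance discontinuity; the 50 and 60 ohm values are made up for illustration, not taken from any real module.

```python
# Fraction of a signal edge reflected at an impedance discontinuity:
# Gamma = (Zl - Z0) / (Zl + Z0)
def reflection_coefficient(z_load: float, z_line: float) -> float:
    """Reflection coefficient at a boundary between two impedances."""
    return (z_load - z_line) / (z_load + z_line)

Z0 = 50.0  # hypothetical nominal trace impedance in ohms

# A connector or via that bumps the local impedance to 60 ohms:
gamma = reflection_coefficient(60.0, Z0)
print(f"Reflected fraction: {gamma:.3f}")  # ~0.091, about 9% of the edge
```

Even a modest bump like that reflects nearly a tenth of every edge back toward the driver, and a removable connector is exactly that kind of discontinuity.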
The complexity this must add to the pin drivers for the DRAM interface must cost quite a bit and introduce many failure modes. While I don’t like the proprietary nature of this solution, it’s good to see someone at least addressing the issue. I’m surprised it took until DDR5’s high frequencies for this to happen. Possibly the “ultrabook” initiative, with all of its soldered-on memory, hid the problem through the last generation. The fact that laptop SO-DIMM memory tends to run almost a whole generation behind its desktop peers in speed probably helped as well.
But with integrated graphics starting to become useful for something other than displaying popular games as slideshows, laptops cannot afford this ongoing handicap; they need desktop-comparable speeds to perform to their full capabilities. It seems like now is a good time to make this change. I’d much prefer to see this come from JEDEC than from Dell, but ‘standards’ often originate with vendors (see Intel with PCI, PCI-E, USB, etc.), so this isn’t unusual. I just hope the patents, other restrictive IP, and the associated greed don’t kill this before it has a chance to be considered (I’m looking at you, RAMBUS).
Now, do desktops! Get us a better connector, one that is denser and has better signal properties! That would let desktops spend less silicon area and power budget on their DRAM interface, which would allow processors to be less expensive or to have more DRAM interfaces.
I’ll point out one problem with this type of connection that helps explain why the modules carry much more memory than an SO-DIMM. The connector takes up a lot more board space, and due to the density of signals, no other signal routing or components can be placed there; in addition, the brackets get in the way. That is why you see much more memory per module: you have to amortize the cost of the board space that huge connector takes.

The high density and the 2D nature of the connector will also make signal routing much harder. With a double-row SO-DIMM connector, you don’t have much trouble routing the signals from the two rows elsewhere on the board while keeping the trace lengths the same; a little squiggle here or there along the path is all it takes to equalize the trace lengths of pairs of signals, or groups of them. With the signals coming in a 2D array, the distance from one signal to another (or within a signal group) can be much larger, which makes trace-length equalization harder and imparts a ‘direction’ to the signal group. This limits the designer’s ability to place the connector somewhere mechanically convenient. You’ll start to see a fixed CPU-to-socket layout pattern routed as one large group, much like on SBCs, where the SoC/DRAM routing is specified by the vendor and board makers *stamp* it onto their designs and work around that combined footprint.
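As a toy illustration of that length-matching step, here is a minimal sketch; the net names and lengths are hypothetical, and real layout tools match propagation delay rather than raw length, but the idea is the same.

```python
# Serpentine ('squiggle') length matching: pad every trace in a signal
# group out to the length of the longest member of the group.
group_mm = {"DQ0": 41.2, "DQ1": 39.8, "DQ2": 43.5, "DQ3": 40.1}

target = max(group_mm.values())
for net, length in group_mm.items():
    meander = target - length  # extra serpentine length needed
    print(f"{net}: add {meander:.1f} mm of meander")
```

With a 2D connector array, the spread between the shortest and longest member of a group gets larger, so those meanders eat more board space, which is exactly the routing problem described above.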
Interesting times ahead.