Open-Source RISC-V: Energy Efficiency of Superscalar, Out-of-Order Execution (arxiv.org)
72 points by PaulHoule 13 hours ago | 19 comments
dkjaudyeqooe 10 hours ago [-]
I feel like an open source RISC-V CPU is very likely to emerge in the high-performance space.

The amount of effort required to design and implement such a device makes it difficult for a single company to invest in, but many interested users of it could band together to create a viable open source implementation.

I guess it's a question of finding a project that such an effort can crystallize around.

kimixa 9 hours ago [-]
Don't forget how much of a "high-performance" implementation comes down to the physical implementation; a lot of engineering effort goes into the post-HDL stages. And much of what sits below the HDL is hard to share, as it relies too heavily on (closed) fab IP libraries and PDK specifics. And then there's the verification of that result.

Which might discourage an open source hardware project with shared ownership on the scale a high-performance implementation would require, as each cooperating company would end up with rather different end products anyway.

I fear it'll become just a "dump an old snapshot over the wall" of a few different companies' work at best, rather than true cooperation.

adgjlsfhk1 3 hours ago [-]
I don't think open source will get anywhere near the leading edge in the near future, but I feel like a really good N12 or N7 chip might be possible. That would be enough to get to ~Zen 1 levels of performance (or maybe a bit higher, since we know Zen 1 had some fairly avoidable mistakes).
zozbot234 5 hours ago [-]
There are open source PDK and IP libraries, though only for nodes far from the leading edge. OTOH, trailing-edge nodes are also the most viable overall for cheaper and smaller-scale fabrication.
SlowTao 2 hours ago [-]
In a way I am not too worried about the ISA; what matters more is having a standard boot system you can target. This is where x86 still wins and ARM has dropped the ball. You can boot something like FreeDOS on an 8086 or on the latest i9 with the exact same code base, thanks to BIOS compatibility. But with ARM you are looking at hundreds of different targets.

The issue with ARM looks to be creeping into RISC-V as well, because anyone can build a processor entirely to their own boot target. For better or worse.

A standard boot target is much more useful to the end user than an open chip hiding behind yet another boot standard. That I am praising the mediocre, closed x86 world for this says something about how bad the situation can be.

wmf 4 hours ago [-]
I don't know if that kind of collaboration has ever worked in chip design. It seems simpler for one company to design the core and license it out (which is the Arm business model).
vFunct 2 hours ago [-]
Unfortunately, a lot of the architecture is decided by your technology node and cell library. Examples include the cache architecture and the performance-power tradeoffs. There are thousands of standard cells in today's libraries, all custom-tuned for each technology node.
almostgotcaught 6 hours ago [-]
> The amount of effort required to design and implement such a device makes it difficult for a single company to invest in, but many interested users of it could band together to create a viable open source implementation.

There are lots of companies that have their own high-performance accelerator cores (though not general purpose), across multiple generations. E.g. every FAANG (except Netflix, as far as I know).

There are exactly zero such OSS cores.

So I think you have this exactly backwards.

Pet_Ant 10 hours ago [-]
> some (e.g. BOOM, Xiangshan) are developed in Chisel with limited support from industrial electronic design automation (EDA) tools

Isn't translating between languages something that LLMs should excel at? I mean, I'm sure it's more than just pasting it into ChatGPT, but if the design has been validated and understood, validating the translated version should be several orders of magnitude easier than starting from scratch.

zozbot234 10 hours ago [-]
Chisel can be compiled to Verilog out of the box, and Verilog itself should have the required support from existing EDA tools. That remark from the paper may be somewhat confused.
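
For concreteness, here's roughly what that flow looks like. This is only a minimal sketch: the module and object names are made up for illustration, and the exact ChiselStage API has moved around between Chisel versions.

    import chisel3._
    import circt.stage.ChiselStage

    // A trivial 8-bit counter, written in Chisel (a Scala-embedded DSL).
    class Counter8 extends Module {
      val io = IO(new Bundle {
        val enable = Input(Bool())
        val count  = Output(UInt(8.W))
      })
      val reg = RegInit(0.U(8.W))
      when(io.enable) { reg := reg + 1.U }
      io.count := reg
    }

    // Elaborate to SystemVerilog that standard EDA tools can consume.
    // (Older Chisel 3.x flows used (new chisel3.stage.ChiselStage).emitVerilog(new Counter8) instead.)
    object Emit extends App {
      println(ChiselStage.emitSystemVerilog(new Counter8))
    }
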
bjourne 4 hours ago [-]
That is not enough. The generated Verilog can be very opaque, which makes it very difficult to analyze in cycle-accurate simulators. It is also (afaik) mostly impossible to automatically correlate an error in the Verilog with a specific line of the Chisel source. And pure Verilog is often not enough either: you need tons of vendor-specific pragmas to ensure the design synthesizes well.
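
You can fight the naming problem by hand-annotating the signals you care about, but that's per-signal manual work. A rough sketch of what I mean (module and signal names are invented, and the exact APIs vary a bit between Chisel versions):

    import chisel3._

    class AccSum extends Module {
      val io = IO(new Bundle {
        val in  = Input(UInt(16.W))
        val out = Output(UInt(16.W))
      })
      // dontTouch keeps the register from being optimized away, and
      // suggestName pins a readable name in the emitted Verilog, so
      // waveforms and reports stay correlatable with this source line.
      val runningSum = dontTouch(RegInit(0.U(16.W)).suggestName("running_sum"))
      runningSum := runningSum + io.in
      io.out := runningSum
    }
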
IshKebab 9 hours ago [-]
This is true, but unless great care is taken to generate nice Verilog, you're going to run into issues when you try to integrate standard tools like functional coverage, formal SVA, etc.

I haven't looked at Chisel's SVA support, but I do recall another HDL touting readable Verilog generation as a feature in response to Chisel's being bad (can't remember which one), so I guess it can't be great.

I think Veryl stands a decent chance of success precisely because it hews so closely to SystemVerilog: you don't lose access to all the features industry uses. It's kind of the TypeScript of SystemVerilog.

https://veryl-lang.org/

eigenform 10 hours ago [-]
I'm not sure this sentence [from the paper] makes a lot of sense. The only thing non-standard is the use of Chisel (and then probably CIRCT to lower it into Verilog) - if you're actually taping these out, you're still feeding that to industry-standard EDA tools.
dkjaudyeqooe 10 hours ago [-]
> Isn't translating between languages something that LLMs should excel at?

No, not at all. Unless there is a large amount of training data relevant to the translation, LLMs are likely just to make up nonsense. Chisel is a very niche hardware description language.

Pet_Ant 10 hours ago [-]
Very niche? That's surprising to hear. I'm not in the space, and I know it's not in the big two or three (is SystemVerilog distinct from Verilog?), but it's been around for 13 years and even DARPA has it on their radar:

> Chisel is mentioned by the Defense Advanced Research Projects Agency (DARPA) as a technology to improve the efficiency of electronic design, where smaller design teams do larger designs. Google has used Chisel to develop a Tensor Processing Unit for edge computing

[0] https://en.wikipedia.org/wiki/Chisel_(programming_language)#...

bee_rider 9 hours ago [-]
I wonder if they just mean niche in the context of languages generally, human or programming? I mean there are, relatively speaking, boatloads and boatloads of open source software projects out there. Open source hardware projects? Well, a few exist…
MobiusHorizons 2 hours ago [-]
I think it is niche in the sense that it is almost completely unused professionally. Most usage tends to be academic or hobbyist. I don't mean to imply that it isn't suitable for professional work, but more that it is not very easy to make it work with the industrial EDA tools necessary for fabrication.
dkjaudyeqooe 6 hours ago [-]
Very niche on the scale of LLM training data.
dlcarrier 10 hours ago [-]
To the contrary, it's something especially suited to being done parametrically. Effectively, you can write a really big pile of regex rewrites to convert one language into a subset of another, then let the optimizer of the second language make the result performant.
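
A toy sketch of the idea in Scala, just to show the shape of it. The patterns here are hypothetical and nowhere near a real translator (a real HDL-to-HDL converter would need an actual parser, not regexes):

    object ToyRewriter {
      // Each rule maps a source-language fragment to a target-language fragment.
      private val rules: Seq[(scala.util.matching.Regex, String)] = Seq(
        raw"val (\w+) = Wire\(Bool\(\)\)".r -> "wire $1;",
        raw"(\w+) := (.+)".r                -> "assign $1 = $2;"
      )

      // Apply every rule to every line of the source text.
      def rewrite(src: String): String =
        src.linesIterator.map { line =>
          rules.foldLeft(line) { case (acc, (re, repl)) => re.replaceAllIn(acc, repl) }
        }.mkString("\n")

      def main(args: Array[String]): Unit = {
        val chiselish = "val ready = Wire(Bool())\nready := a && b"
        println(rewrite(chiselish))
        // prints:
        // wire ready;
        // assign ready = a && b;
      }
    }
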