VLSI Physical Design (ifte.de)
55 points by stefanpie on Sept 27, 2023 | hide | past | favorite | 26 comments


This plus the Verilog to Routing that was posted yesterday showcase the algorithms used in chip design (this post being more about the physical side, VTR covering frontend synthesis as well). The field of Electronic Design Automation (EDA) has some of the most interesting, hardest problems in tech, yet the pay in EDA is kind of mediocre. And EDA is essential to developing new chips -> semiconductors are essential to our economy.

The EDA companies complain about not being able to attract new talent, but maybe if they paid better? The other problem with the EDA companies (as someone who has worked in EDA in the past) is that they're just plain stodgy - they feel like some kind of old boys club. We had C++ code from the early 90s and to a large extent they still coded C++ in a 90s style. C++ Templates? Nope, not allowed in the group I was in.


Wow, people seem to like my EDA submissions :)

Another aspect, at least in my experience, is that these tools are built on top of themselves and layered over older versions of themselves, making the code base fragile and hard to work on. For example, Xilinx's Vitis HLS is based on AutoESL, an academic project that was commercialized and turned into a Xilinx product. You can tell in Vitis HLS that there are many layers on top of simple tool actions that go through custom TCL functions and then down to what is most likely Java/C++ software, which in turn calls out to outdated versions of LLVM tools. I would imagine this is not a pleasant code base to work on and add new features to.

Also, making changes to an EDA tool has large implications for downstream customers who depend on your FPGA chips and are maximizing the utilization of those chips, making them super sensitive to any changes in the EDA algorithms used. This also makes the software development process for these commercial tools a little sticky. It likewise makes it hard to even plug into and build on top of these tools as an academic.

However, this is no excuse for bad software engineering. I agree that a big investment in software engineering for EDA could be a slam dunk. I think Synopsys and Cadence realize this payoff for their use cases (mostly ASIC), while maybe the payoff is not as big for FPGA companies? Who knows what they are thinking.

On the technical side, more open-source EDA tools help even if they are not fully practical in a commercial setting. Even the C++ codebase for VPR is readable and mostly easy to follow. As a thought experiment, a Rust-based port of that would be interesting from an API and program architecture perspective. There are also some DARPA programs that are pushing for better open-source tools for EDA flows as well as better EDA algorithms for large-scale, 3D, and heterogeneous semiconductor design, so I think they see the technical problems we are behind on.


Can you recommend books on the algorithms used in frontend synthesis? And/or FPGA-specific algorithms?


Logic Synthesis and Verification Algorithms by Hachtel and Somenzi. The copy I have is copyright '96; not sure if it's been updated.


It hasn't been updated.

The book is considerably out of date, but a lot of the concepts and definitions are still relevant.


This deck has a great introduction to mapping, and the references at the end are more up to date:

https://www.eng.biu.ac.il/temanad/files/2018/11/Lecture-4-Sy...


Similarly out of date.

A good starting point for more modern techniques would be something like this:

https://people.eecs.berkeley.edu/~alanmi/publications/2006/t...


I came across this survey paper on Combinatorial Optimization in VLSI Design: https://www.or.uni-bonn.de/research/montreal.pdf

I don't know how up-to-date the information is. Two of the authors (Korte, Vygen) wrote a textbook on combinatorial optimization. I haven't read it and I don't know enough about the subject to be able to say how useful the contents are to VLSI design. I think they focus more on the theory than the application: https://link.springer.com/book/10.1007/978-3-662-56039-6


> is that they're just plain stodgy - they feel like some kind of old boys club

I mean just go look at that post from yesterday where I complain that yosys uses C++ to script flows and the response to the complaint - I'm starting to think these people just enjoy being miserable.


I think it’s more likely these people mainly enjoy designing hardware, not tools for designing hardware (aka software). Learn to use a tool, and use it. It’s a hammer; start driving nails.

Software development is unique in that tools for building software are also software. This hammer is great and all, but I make hammers, and I know this one could be better


This is apologetics.

The real reason is the hardware vendors are entrenched and don't make money off the software at all. The situation is exactly the same as it was in the proprietary compiler space before gcc (and then clang) - no incentive to improve or open source the compilers -> no access to good tools -> proprietary tools remain only option.

Yosys though, being an open source attempt, is just making unforced errors, hence my ranting and raving in the other thread.


You're talking about FPGA tools but this thread is about VLSI design where the major EDA vendors (Synopsys, Cadence, and Mentor) don't sell hardware(*) and *do* make a lot of money from software.

(*) Excluding emulators or simulation accelerators.


> You're talking about FPGA tools but this thread is about VLSI design

did you miss the part where the comment i'm responding to is about VTR?


My bad, I thought we were talking about ASIC designers, not VTR developers.


For timing specifically, I recommend "Static Timing Analysis for Nanometer Designs: A Practical Approach" by Bhasker and Chadha.

https://link.springer.com/book/10.1007/978-0-387-93820-2

It doesn't really focus on theory but more the principles of timing analysis and how to wrangle chip design tools into complying with your desired compromises in order to achieve your goals.


That book is really great if it’s your job to write timing constraints. If you’re interested in timing theory I think this is a bit better.

https://books.google.com/books/about/Timing.html?id=80gOBwAA...


This looks like a treasure trove on what it takes in terms of algorithms to enable tools like Cadence Innovus or Synopsys ICC. It’s not a user guide on how to use these tools, but rather a peek behind the curtain.

I’ve worked with Andrew, one of the authors, on occasion in the past, and he and his team of students are among the best academic teams in the world on this topic.

I do think a lot of the secret sauce lives as trade secret with Cadence, Synopsys, Mentor… They see all the real problems in designs from all their customers in bleeding edge nodes like 3nm and beyond.


That book is great. This one is also quite good.

https://books.google.com/books?id=EkPMBQAAQBAJ&printsec=fron...

Handbook of Algorithms for Physical Design Automation edited by Charles J. Alpert, Dinesh P. Mehta, Sachin

The information is available if one looks for it. It's a tough subject though.


I hope the developers of KiCad are taking note. It would be amazing to have these placement/routing capabilities at the PCB level also.


The barrier to automatic placement and routing at the PCB level is not the algorithms. It's the time it takes the user to create the routing constraints on the nets. For PCBs with less than, say, 100 nets, it's probably not worth it. You could wire it up manually faster than you could write and debug a constraints file.

That's why for commercial PCB packages that support designs with 1000s of nets, such as Cadence's Allegro, one does see support for automatic routing of PCBs. And it's quite good.


I'm looking into maybe doing a BGA / MPU through KiCAD soon. Not that I'm confident in my abilities at all, but it'd be a challenge that I'd like to succeed in.

Yeah, you've listed off ~100 nets vs 1000s+ nets. But...

Many MPUs are BGA200 to BGA400 in size, and seemingly designed for 4-layer to 6-layer boards (available from OSHPark), which means I expect maybe 200 nets or so in practice (a lot of the pins are power or ground, but many other pins are in fact connected to "something" and form their own nets). More than your first number, less than your second number.

In your opinion, is that still within the feasibility of a hobbyist laying things out by hand? A lot of SiP MPUs have "internal DDR2" in sufficient quantities to boot Linux, although I'm also thinking about routing my own RAM for maximum flexibility.


It’s definitely doable in one-off quantities. And in fact people do it. Probably the most annoying thing would be dealing with length matching, but honestly modern interfaces are much more tolerant than the specs imply. Power delivery could also be a sore point. So again, if you’re just looking to get a prototype working, it’s workable. But in volume, I wouldn’t go this route. This presentation lists some important things that are lacking in KiCAD when you really need to know the answer. https://archive.fosdem.org/2022/schedule/event/advanced_sim/...


I'm not sure what you are saying here. Would KiCad not benefit from these algorithms? What if the PCB has a lot of wide buses, etc? Why would someone using Allegro have different requirements than someone using KiCad?

Just trying to understand.


It could definitely benefit. And if KiCAD wants to support larger systems then one might even argue they're required, eventually.

What I mean is not all nets are created equal. Is this an edge-sensitive GPIO or a level-sensitive one? Is this a net with a cap integrating a current, or edge-rate control on a clock driver? A person knows because they can read the datasheets. The way for the algorithm to know is routing constraints, and they're fairly tedious to write correctly, which means that for there to be good ROI on the time spent writing them, the algorithm needs to do a lot of work for you.
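To make the "tedious constraints" point concrete, here is a hypothetical sketch of the kind of per-net information an autorouter would need before it could do anything useful. This is illustrative Python only, not the constraint format of Allegro, KiCad, or any real tool; every name and field here is made up, but each one corresponds to something a human has to dig out of a datasheet.

```python
# Hypothetical per-net routing constraints -- illustrative only, not the
# syntax of any real EDA tool. Every field is a decision a human makes
# by reading datasheets; the autorouter can't infer any of it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetConstraint:
    name: str
    min_width_mm: float = 0.15              # trace width floor
    max_length_mm: Optional[float] = None   # cap for timing-critical nets
    match_group: Optional[str] = None       # length-match with group members
    match_tolerance_mm: float = 0.1
    max_vias: Optional[int] = None          # edge-sensitive nets hate stubs
    impedance_ohms: Optional[float] = None  # controlled-impedance routing

constraints = [
    NetConstraint("DDR_DQ0", match_group="ddr_byte0", impedance_ohms=50.0),
    NetConstraint("DDR_DQ1", match_group="ddr_byte0", impedance_ohms=50.0),
    NetConstraint("SPI_CLK", max_vias=2, max_length_mm=60.0),
    NetConstraint("GPIO_LED"),  # relaxed: any route will do
]
```

Even this toy version shows the ROI problem: for a 50-net board, filling this table out carefully takes about as long as just routing the board by hand.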


Ok, good point. Perhaps the role of each pad can be annotated in the symbol libraries. Or perhaps we can use LLM technology to read the datasheets and figure out the role of each net ;)


The developers of KiCad are fully aware of these kinds of algorithms.

There is a huge difference between autorouting in VLSI and autorouting in PCBs. There are particular differences:

1) The number of available layers

VLSI autorouting pretty much sucked until you had 5+ layers. You pretty much need local, horizontal, vertical, global, and pad as a minimum, and the more layers the better the autorouter gets. Note that this is already more layers than generally exist on most PCBs, which are typically 4 layers (upper, gnd, power, lower).

2) Changing layers in VLSI is relatively cheap and creates no obstacles, while changing layers on a PCB is expensive and creates obstacles.

Changing a layer in VLSI is generally a couple of vias and no big deal. The vias are typically smaller than the metal track (but not always) and generally only connect the two layers in question.

Changing a layer on a PCB is generally expensive. PCB vias are generally larger than the metal trace and generally penetrate the entire PCB, creating an obstacle on all layers. Take a look at the PCBs for FPGAs: they are generally 8+ layers and use expensive blind vias for routing breakout. Those kinds of PCBs are relatively expensive, and even they don't lend themselves to autorouting that well.

3) A lesser difference is that for high-speed signals PCB physical dimensions can be on the order of magnitude of the size of the wavelength of the signal being transmitted while VLSI dimensions are typically much smaller.

For example, a quarter wavelength for 6GHz is roughly 1cm which is generally bigger than most chips and significantly smaller than most PCBs. The design considerations for VLSI autorouting generally can ignore differences in routing length or layer changes (but not always).

On the other hand PCBs often need to be length matched fairly carefully for things like RAM buses. They also sometimes need to account for the fact that PCB materials are inhomogeneous--your routing track has different electromagnetic properties depending upon whether it is over glass fiber or epoxy. Thus, your PC motherboard tends to be routed at odd angles like 7 or 10 degrees to average out the effect.
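The via-cost asymmetry in point 2 is easy to see in a toy router. Below is a minimal sketch of a Lee/Dijkstra-style grid maze router in Python; the function name, costs, and grid model are all illustrative, not taken from any real tool. The only knob is `via_cost`: something near 1 models the VLSI case where a layer change is cheap, while a large value models the PCB case, where the router will take long same-layer detours before it ever drops a via.

```python
# Toy maze router: Dijkstra over a 3-D grid of (layer, row, col) cells.
# Illustrative only -- real routers add obstacle modeling per layer pair,
# preferred directions, rip-up-and-reroute, etc.
import heapq

def route(grid_shape, layers, start, goal, via_cost=10, blocked=frozenset()):
    """Return the cheapest path from start to goal as a list of cells,
    or None if unreachable.

    grid_shape: (rows, cols); layers: number of routing layers;
    start/goal: (layer, row, col); blocked: set of forbidden cells.
    Same-layer steps cost 1; a layer change costs via_cost.
    """
    rows, cols = grid_shape
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            # Walk predecessor links back to the start.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist[node]:
            continue  # stale heap entry
        l, r, c = node
        moves = [((l, r + dr, c + dc), 1)
                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        moves += [((l + dl, r, c), via_cost) for dl in (1, -1)]
        for nxt, step in moves:
            nl, nr, nc = nxt
            if not (0 <= nl < layers and 0 <= nr < rows and 0 <= nc < cols):
                continue
            if nxt in blocked:
                continue
            nd = d + step
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return None
```

With the default `via_cost=10`, blocking a cell on layer 0 makes the router detour around it on the same layer rather than hop layers; drop `via_cost` to 1 and layer hops become just another move, which is roughly why adding layers helps VLSI routers so much more than it helps PCB ones.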

Because of all of this, PCB autorouters are HARD and the KiCad developers have (rightfully, IMO) decided that their time is much better spent on the myriad other features and bugs rather than developing a PCB autorouter.

However, I'm sure that if you were willing to take a crack at it, the KiCad developers would be ecstatic to give a pointer to your amazing autorouter. Just be prepared that it's going to be quite difficult and you're going to get LOTS of complaints about how much it sucks.

I look forward to seeing your efforts.



