Launch HN: SiLogy (YC W24) – Chip design and verification in the cloud
115 points by pkkim on March 7, 2024 | 54 comments
Hi everyone! We’re the cofounders of SiLogy (https://silogy.io/). We’re building chip design and verification tools to speed up the semiconductor development cycle. Here's a demo: https://www.youtube.com/watch?v=u0wAegt79EA

Interest in designing new chips is growing, thanks to demand from AI and the predicted decline of Moore’s Law. All these chips need to be tested in simulation. Since the number of possible states grows exponentially with chip complexity, the need for verification is exploding. Chip developers already spend 70% of their time on testing. (See this video on the “verification gap”: https://www.youtube.com/watch?v=rtaaOdGuMCc).

Tooling hasn’t kept up. The state of the art in collaborative debugging is to walk to a coworker’s desk and point to an error in a log file or waveform file. Each chip company rolls out its own tooling and infra to deal with this—this was Kay’s (one of our cofounders) entire job at his last gig. But they want to work on chips, not devtools! The solutions they come up with are often inadequate and frustrating. That’s why we started SiLogy.

SiLogy is a web app to manage the entire digital verification workflow. (“Digital verification” means testing the logic of the design and includes everything before the physical design of the chip. It’s the most time-consuming stage in verification.)

We combine three capabilities:

Test orchestration and running: The heart of our product is a CI tool that runs Verilator, a popular open-source simulator, in a Docker container. When you push to your repo or manually trigger a job in the UI, we install your dependencies, compile your binaries into a Docker image, and run your tests. You can also rerun a single test with custom arguments using the UI.

Test results and statistics: We display logs from each test in the web app. We’re working on displaying waveform files in the app, too. We also keep track of passing and failing tests within each test suite, and we’re working on slick visualizations of test trends, to keep managers happy. :)

Collaboration: Soon you'll be able to send a link to, and leave a comment on, a specific location within a log or waveform file, just like in Google Docs.

Unlike generic CI tools, we focus on tight integration with verification workflows. When an assertion fails, we show you the source code where it happened. We’re hard at work on waveform viewing – soon you’ll be able to generate waves from a failing test, with the click of a button.
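To give a flavor, here's the kind of concurrent SystemVerilog assertion a failure would link back to (a made-up sketch, not from any real design):

    // hypothetical handshake check: req must be granted within 1-3 cycles
    assert property (@(posedge clk) disable iff (!rst_n)
        req |-> ##[1:3] gnt)
    else $error("req was never granted");

When the $error fires in a run, we take you straight to this line, with the log context around it.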

Our roadmap includes support for the major commercial simulators: VCS, Xcelium, and Questa. We’re also working on a test gen framework based on Buck2 to statically declare tests for your post-commit runs, or programmatically generate thousands of tests for nightly regressions.

We plan to sell seats, with discounts for individuals, startups, and research labs (we’re working on pricing). For now, we’re opening up guest registration so HN can play with what we hope is the future of design verification. We owe so much of what we know to this community and we’d be so grateful for any feedback. <3 You can sign up here, just press "Use guest email address" if you don't want to give up your email: https://dash.silogy.io/signup/




I'm a professional chip designer. And as somebody who's had to build this kind of internal tooling myself, I think this sort of product is desperately needed!

It makes sense to start with Verilator, because it's fast, easy, and open source, but it seems like it'll fall short on a lot of metrics. The big challenge is going to be making sure your tool can do everything that chip designers actually need to do. I remember spending almost a week bringing up gate-level simulation, which was a huge pain in the ass because our design used an ILM-based hierarchy. Handling things like X-propagation, timing constraints, mixed-language designs, and non-synthesizable models for mixed-signal IP is going to be really tricky, especially because they work differently in every simulator.

Is there any reason why you're not starting by targeting FPGA developers instead of semiconductor developers? It seems like the Verilator flow will do a much better job of fulfilling their needs, which are generally simpler, than fulfilling the needs of an ASIC team. Obviously in the long run, semiconductor developers are the bigger market, but you need far more features to be able to sell to ASIC teams.


I worked with some chip design and verification folks. They were very tolerant of slow results, waiting for licenses or infrastructure. Their processes were just built around those constraints. They didn't seem to have interest in removing them.

As an impatient software developer it was very interesting.


Project cycles are long. We are typically doing 1 tapeout/quarter. Chips are incredibly complex, and physical design is, by definition, a slow process. Of course everyone wants quicker tools/turnaround, but the bottleneck isn't RTL development; physical design and verification are.


Yeah, our team has experience in both hardware and software. DV engineers have a really hard time and we're trying to make it better.


Thank you! In fact, we'll be helping our first customer with FPGA development. I don't know that we put a lot of thought into FPGAs vs semiconductors in terms of marketing, but you raise good points; we'll discuss it.


Just to clarify though, we are targeting both ASICs and FPGAs.


I use verilator all the time for open source stuff, but it has a real problem not supporting Xs - I use them to detect uninitialised variables and undefined case statements that get taken; with full_case etc., assigning Xs lets me force faults at the same time as giving clues for better synthesis.

I think that FPGA flows are different, because you can test stuff without a tapeout; a lot of debug work gets short-circuited when you can build something and test it on an FPGA in minutes (or, in my rather big case, 12 hours and then testing on an AWS FPGA instance)


+1 on concerns about verilator not supporting X prop. Definitely a big difference between FPGA and ASICs.

The lack of X-prop in verilator is one of the reasons why a lot of OSS hardware designs are risky or unsuitable for ASIC integration... verilator is popular in OSS, and X-prop isn't a problem for FPGA because every bit in an FPGA is guaranteed to start in a known state.

So, Verilator assuming 0 by default instead of X means that a lot of OSS developers get away with bad coding habits that work perfectly in FPGAs, but can't reliably power up in an ASIC.
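A contrived sketch of the habit (made-up names):

    // No reset: every FPGA flop powers up in a known state, and
    // Verilator is 2-state (0 by default), so this "works" there.
    // In a 4-state ASIC sim, count is X until the first write, and
    // the X propagates into everything downstream.
    logic [7:0] count;
    always_ff @(posedge clk)
        if (inc) count <= count + 1;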


it's a little more than that - as I mentioned above, you can assign Xs in a case statement (or anywhere you like) as a signal to your synthesis tool that you don't care what value something is - that allows the synthesis tool to make better gates, and causes simulations to fail if you were wrong about doing that
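a rough sketch of what I mean:

    always_comb begin
        case (state)  // full_case pragma, or unique case in SV
            2'b00: out = a;
            2'b01: out = b;
            2'b10: out = c;
            default: out = 'x;  // "can't happen": synthesis is free to
                                // pick whatever makes the best gates; a
                                // 4-state sim propagates the X and fails
                                // my checks if I was wrong about that
        endcase
    end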


I wonder if FPGA developers are as likely to spend as much on tools? They're kind of used to having most of their tools from the FPGA vendors be free or, if not free, fairly inexpensive.


I'm about to pull a "dropbox" here, but I am aware of many companies that already do this inside their Git infrastructure. It's not that hard to do when you combine verilator, testbenches in software languages, and cloud CI intended for software. This is one of the big advantages of greenfield designs (and FPGA companies): you can set things up for Verilator and/or CocoTB natively, and then you get to use things like Github actions to do continuous verification.

If you can get the commercial simulators and full support for the awful, ass-backwards, industry-standard verification frameworks (eg UVM), there's a great business here, but the trouble is going to be in getting there.


Thanks for the observations. It's true that every company that needs this eventually figures something out.

The difference is (a) our customers don't want to be in the devops business, and for startups especially it's a severe barrier to entry that we can make disappear, and (b) we are going to keep investing in our products (especially collaboration tools and integrations with waveforms, logs, etc) long past the point where a chip company would decide their internal tools are "good enough" (hint: they're generally not).

UVM support is one of the next items on our priority list.


I noticed there are some GitHub repos actively working on enabling UVM on Verilator:

https://github.com/MikeCovrado/GettingVerilatorStartedWithUV...

It would be an amazing leap to enable UVM fully on Verilator. Looking forward to it.


This is my feeling too; it's pretty trivial to do this stuff in any CI infrastructure.

At NYU we have this entire process built into very trivial CMake and GitHub Actions stuff.

Here's an example: https://github.com/NYU-Processor-Design/PurdNyUart

You can see we have 100% test coverage, illustrated by CodeCov, and our CI runs the test suite on each PR. This is very normal in the software world and I guess I don't understand why the hardware world would need a specialized provider just to run Verilator for you.


It's not in gitlab's CI infrastructure, but I have continuous integration set up on a private server for https://gitlab.com/specbranch/r5lite and also for my company's proprietary hardware.


Ya and I've seen similarly basic support in small IP houses that support Verilator alongside whichever proprietary suite the house uses.

Is there a need here? Are there IP design houses that are so bad at CI infrastructure that "we run Verilator for you" is a value add?

I don't mean to denigrate the OP, just wondering what the market is. Undergrads build this stuff and let me tell you my undergrads are not a particularly talented group.


When you are trying to design high performance IP, you are often trying to ensure that your design is mathematically correct and that inputs and outputs match a complicated 100-page specification. You are also trying to parse out the minimum set of workable requirements for "version 1", all while fitting into utilization constraints that are ultimately undefined.

Your mindset is really split. "Building up a software dashboard" to visualize your test results is really the last thing on your mind. You definitely don't want to be building the dashboard for all your customer's platforms.

Having somebody (a company) help on this front is really useful.

As a non-website designer, I used to think the same of tools like netlify, but they seem to be as popular as ever, especially in a collaborative workspace where you need to hand off a project from one team to the next.


The thing is, too: Gitlab and GitHub CI are still kind of crap unless you put a bunch of work into them (gitlab in particular really don't know what they're doing; they're not dumb but they aren't good enough).

Sell a workflow, not a prison.


The functionality on offer here is equivalent to about 30 lines of Github Actions YAML to install verilator, run the tests, and upload the coverage information. [1]

Generating waveforms is free: Verilator already does that if you pass it the appropriate argument, either --trace or --trace-fst. We usually control that with a single CMake option.
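And if you'd rather control dumping from the HDL side, Verilator also honors the standard $dump tasks once tracing is compiled in; a minimal sketch:

    // dump waves only when the sim is run with +trace
    initial begin
        if ($test$plusargs("trace")) begin
            $dumpfile("waves.vcd");
            $dumpvars;  // no args: dump everything
        end
    end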

Complex workflows can get nutty, but what's illustrated here is not a complex workflow.

[1]: https://github.com/NYU-Processor-Design/component-template/b...


I used to work at a fairly large fabless semiconductor company on the team that developed the internal tools for doing almost exactly what you were doing.

> Tooling hasn’t kept up. The state of the art in collaborative debugging is to walk to a coworker’s desk and point to an error in a log file or waveform file. Each chip company rolls out its own tooling and infra to deal with this—this was Kay’s (one of our cofounders) entire job at his last gig.

This is definitely an exaggeration; the tooling has always been a bit "dated", but vManager from Cadence has all the features you are describing (and more). I think nowadays they offer a fully managed cloud service as well. I'm guessing other vendors have similar offerings, but we were primarily a Cadence shop.

On the topic of web viewing, do you know that engineers want this? In my experience, managers liked it because they could get an overview quickly, but the engineers hated it because it involved copy-pasting paths from the browser to their local tools.

We had vManager, but some teams used their own test runner/manager with a web viewer (similar to your tool), and I ended up building a little TUI that the engineers could use instead. It worked almost the same, except it let you directly relaunch failing tests in an interactive simvision session for debugging, open log files in vim, and open the coverage report for the regression.

Anyway, good luck! I always thought that the EDA industry was ripe for disruption, wish you guys all the best!


> On the topic of web viewing, do you know that engineers want this? In my experience, managers liked it because they could get an overview quickly, but the engineers hated it because it involved copy-pasting paths from the browser to their local tools.

In my experience it's the opposite. While some firms issue Linux workstations to their engineers, most use VDIs (and, god forbid, sometimes VNC) hosted in a datacenter that can be far from the office. The already poor latency becomes atrocious when they, say, open Verdi over LSF, which is X-forwarded from the host to the VM before getting encoded and sent to the client.

Web tools bring the interface and rendering over to the client side and allow for a much smoother experience even if the server is physically far away.


That's why I use vscode w/ remote ssh. Can't stand input lag on my typing.


Not sure I am following: what problem is your product trying to solve? Helping to write tests, run the tests, or just organizing tests as part of the CI pipeline? How is it different from just running tests? (Or is it the platform to run tests on?) If you are trying to do CI for silicon, then what is your target market? From my experience, companies that design their own silicon are usually big enough to have their own custom pipeline for testing and verification, and it would be quite difficult to convince them to switch. Smaller companies get help from larger companies in development and verification.

Do you have any tooling that won't require the developer to write tests? (E.g. something that will 'work' with no effort from the developer's POV - kind of a SonarQube for VHDL/Verilog)

In any case, good luck. Glad to see some HW-related startups.


Hey, thanks!

CI is one component of our platform. Most other CI tools are pretty agnostic about how tests are structured, though. We also integrate a way to structure your tests into groups so you can control when each test is called. For example, if one test out of 500 fails, it's super easy to rerun that one test with verbose logging and wave dumping enabled. We then also track test pass/fails over time, have tools to leave comments for coworkers on waveforms and logs in the browser like in Google Docs, etc.

Out of curiosity, what do you mean by "Smaller companies get help from larger companies in development and verification"?


In my experience in two HW companies that developed their own ASICs (one as a startup and one as a publicly traded company), we never developed any chip fully by ourselves. In all of the cases there was another large company who helped make the project work so we would actually end up with wafers.

If you are not at the scale of NVIDIA/Intel, releasing new silicon every other month, it is not worth it to recruit so many people for a relatively short period. I am not fully sure how involved they were in the pre-silicon verification process, but at least in some cases they were very involved in the development.


That's not correct. I've worked everywhere from start-ups to semiconductor giants. The first option is always to develop everything in house, if you can find the talent. This is pretty much industry standard.


What ASIC/semi start up that you know of is developing everything in house? That is absurdly complex and hundreds of millions of dollars...


Pretty much most of them. They might buy a small IP or two here and there, but for the rest everyone develops their design mostly in house. It's not 100s of millions; that's a ridiculous amount of money unless you are designing, like, a huge CPU or TPU. We design (can't give the company name) quite large chips with complex analog and digital in 7nm and 5nm as a start-up, and our seed funding was less than 20 million. This is kind of the bare minimum funding for a semi start-up anyhow.


"design verification powered by Verilator"

Nice, coincidentally saw this as I was about to fire up Verilator for the first time in a long time this morning. Good to see a chip development/EDA type company getting funding from YC. It seems like the sector was seriously underfunded from the social media funding frenzy of the late aughts until recently. Given how things have turned out, I'm pretty sure we would've been a lot better off funding chip development/EDA instead of social media.


Thank you! Yes, we're very excited for the new chips that will be developed in the coming years, and we'd love to help them be born.


I find your company fascinating, having also worked on chips (and chip dev tooling) for much of my career.

> But they want to work on chips, not devtools!

I have long had a gut feeling that there's an entire industry of frustrating tools specifically to keep that industry alive. I once was shocked to learn that my company had bought licenses for a tool specifically to combine multiple IP-XACT specs into one... basically just parsing several XML files and combining their data! Outrageous.

RE orchestration: It's easy-ish now since it sounds like you're starting out with (free) open source tools, but once you start looking at things like license fair-share, you might find yourself starting to build yet-another-Slurm/LSF.

Any reason for buck2 vs bazel? Bazel seems more active (O(thousands) questions on StackOverflow for Bazel vs O(hundreds) for buck).


Yeah, you make some good points, orchestration has been historically painful -- we've personally seen the headaches that come with scheduling on slurm and lsf; I'd guess some of the most thorough bikeshedding in history has been around tinkering with slurm's multifactor scheduling logic. We're trying not to re-invent the wheel with orchestration, and we're in the midst of building interfaces to hook into slurm, instead of replacing it entirely :)

As for buck2, we decided to go with it for a few reasons:

More forgiving with gradual adoption, from our experience -- running non-sandboxed actions in bazel is a pain; buck2 has been much easier to plug into existing flows.

Buck2 installation is easier, and by extension, is simpler to embed into our test runner.

Respectfully, bazel's implementation is a monolith beyond comprehension -- if we want to modify buck2 and package our own fork, we're confident that we could do that.


When do you anticipate supporting UVM based testbenches? Verilator is definitely very fast, and SV assertions and coverage are powerful; however, the lack of native UVM will require existing ASIC teams to rewrite stimulus and checkers that they have already developed.


UVM support is a high priority for us, and our current infrastructure should make it basically drop-in. Drop me an email if you are interested (email in bio); we will be sure to let you know when it's ready! :)
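For anyone following along who hasn't used UVM: even a minimal test pulls in the whole class library, which is part of why simulator support is nontrivial. A rough sketch of the smallest useful test:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class smoke_test extends uvm_test;
        `uvm_component_utils(smoke_test)
        function new(string name, uvm_component parent);
            super.new(name, parent);
        endfunction
        task run_phase(uvm_phase phase);
            phase.raise_objection(this);  // hold the sim open for the test body
            `uvm_info("SMOKE", "stimulus would go here", UVM_LOW)
            phase.drop_objection(this);
        endtask
    endclass

    // in the top module: initial run_test("smoke_test");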


I might be missing something but how does the product reduce verification time? Isn't it just Verilator running on a cloud server? How does it help with debugging?


Hey, good question, thanks.

We speed up developers in two ways: For debugging individual tests, we have a web-based log viewer and web-based waveform viewer, so you can share test results or look at waves via a URL, instead of opening VNC. More holistically, we're building a chip-focused workflow that will make tests first class -- infrastructure to track tests, and by extension measure design health, will be in one place, instead of implemented with a bunch of hand-rolled, bespoke tooling.


Thank you! So will it be a web-based wave viewer and debugger?


Those will certainly be a big part of it. But we'll also be adding a lot more dev-friendly tooling around it.


As someone who's also done a lot of DV and DV tools, I love to see it. This gap was something I took away from attending DVCon last year and seeing all the papers from folks reinventing the same verif tools.

Feeling inspired after DVCon, I was sick of how annoying our test infrastructure was to interact with, so I built a vscode extension as a front end to the existing mature infrastructure. It handles running tests, regressions, opening logs of all kinds, launching vendor tools, jumping to definitions, hover-over documentation on all our custom config files, even breakpoints that export to verdi. I've got about 30 people using it now, which is pretty neat.

My main learnings are 1) boy, it would be tough to make this generic beyond my existing infra without making it lose all the benefits, and 2) switching away from my editor is a huge context loss; having tools directly inside my editor keeps everything moving faster. The thought of adding yet another important web page to my browser isn't pleasant. I specifically added features to my extension so I could avoid going to our internal regression results webpage.


Hey, that sounds really cool, and also sounds like it was a lot of work! As for point 2, we are planning at some point on making everything accessible via API and releasing a CLI, so a VSCode extension should definitely be possible.


Honestly, way less work than you'd think. A couple of months of a few hours per week tinkering. Copilot taught me the typescript and the vscode api is very powerful and well documented. Being able to leverage other extensions already doing the heavy work is a huge benefit (such as DVT for identifying symbol definitions).


This is awesome! As a software person who dabbles in hardware, I cannot believe that HW folks put up with such arcane, slow, and cumbersome tools. I think anyone who's critical of this idea has no idea how good they could have it.


Thanks for the support!


Thank you for tackling this critical problem for logic designers. I think the tools available are much too old for fast-paced workflows.

From my experience attempting to get a similar workflow down for my company:

I tried to use verilator a while back but ultimately I couldn't, because it didn't support the same set of Verilog language features that I was going to use in production. It doesn't even matter who was missing a feature, verilator or the proprietary tool; it was the work of getting them to match that caused the cognitive dissonance I didn't want to deal with.

I ultimately decided to move away from verilator and use the clunky proprietary tools since it was what would be used in production. Getting "verilator compatibility" seemed like a "nice to have".

Second, a winning local-first framework for verilator wasn't really established. You show in your example running a test from the yaml file using what looks like a bash script. Even as an experienced programmer who knows bash and sh well, I still find it very hard to write complex thoughts in it. The last high-level attempt I found to bridge this gap is likely https://www.myhdl.org/ I don't know them personally, but it seemed like they had some very good thoughts on what makes good hardware-level tests. I think it would be worth reaching out to them if you haven't already.

The one thing that was even more critical was a way to run our tests locally. The 10-20 seconds it takes to start a docker image (best case) in the cloud is really frustrating when you are "so close to finding a bug" and you "just want to see if this one line change is going to fix it". Once we got our whole pipeline going, it would take 1-6 minutes to "start a run", since it often had to rebuild previous steps that cache large parts of the design.

So I think you will want to see how you can help bring people's "local-first" workflows slowly into the cloud. Some tools (or just tutorials) that help you take a failing test and run it both locally and in the cloud will be really good, especially as you get people to transition!


I really appreciate the feedback: these are all very valid points.

Bash is just an example; most people should make the test rule call the simulator executable directly or via a thin wrapper script. MyHDL is interesting too. Admittedly this is the first we've heard of it, but we'll take a look.

We are also working on supporting other simulators in addition to Verilator.

Also, we are working on an API and command line tool so you can kick off test runs and view the results from the command line. This CLI tool should also support local test runs at some point.


I am a DV engineer, and I'm going to give some candid feedback: CI tooling is not what I spend most of my non-coding or verification-planning time on, so I wouldn't find much use out of this tool. Now that wasn't always true – I used to work somewhere that had horrible CI tooling. But that's just because the company didn't invest in someone to maintain that infrastructure. Given that, I don't think they'd pay an external vendor for a tool and require someone to maintain that tool as well.

However, I do have some problems that you may want to consider pivoting to or adding in the future:

1. A wrapper that works with all of the tools that EDA vendors offer as a back-end. Basically, CMake for SystemVerilog, where I can just run `make` and have compile, elab, and sim run in order. Every company I've worked for has made its own wrapper program which essentially re-creates this process, and I've had to relearn it several times. If you had this wrapper, then you could easily just use any other CI/CD pipeline which calls this tool. Bonus points if you can integrate it with the VUnit or SVUnit unit testing frameworks.

2. SystemVerilog code generation. Something smarter than just "I wrote a Python script that prints SystemVerilog code to a file based on some config file and then you run your build with the file the Python just printed."

I'm sure there are others that I am not thinking of. But overall, I don't find a lack of CI to be the problem. It's the lack of tools that a CI pipe uses that's the problem.

ETA: Also, was it intentional to launch right on the tail of DVCon? If so, clever planning.


CI is a core component of our product, but it's also a building block that we're building lots of DV-specific features on top of. In fact, one of those features is a build wrapper sort of like Make/Bazel for compiling stimuli, feeding them into simulators, and doing post-processing. Essentially what you describe in point 1 but for verification. We'll likely open-source this so other CI platforms can use it too.

For the other problems you mentioned: we're just handling the verification piece of the puzzle right now; we want to do one thing really well first. We feel your pain on the SystemVerilog code generation front; we've had to interact with similarly primitive code generation mechanisms. You can only go so far with the preprocessor and what gets generated during elaboration.
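(For anyone curious, "what gets generated during elaboration" means generate blocks: they handle regular structure fine, but can't produce, say, a new class or covergroup shape per configuration, which is where external codegen creeps in. A sketch, with made-up names:)

    // elaboration-time replication works well for regular structure
    genvar i;
    generate
        for (i = 0; i < NUM_LANES; i++) begin : lane
            lane_checker #(.ID(i)) u_chk (.clk(clk), .d(data[i]));
        end
    endgenerate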


You should talk with Metrics (metrics.ca), who are walking a similar path and have a few years' head start. They are a solid team and are likely to be open and friendly about their direction and challenges. Metrics has an independently developed mixed-language simulator that claims decent standards compliance with both VHDL and SystemVerilog. It's an impressive feat that puts them in a different class from Verilator (for now - Verilator is moving fast these days, thanks to Antmicro's excellent work).

In my opinion, the extraordinarily poor design productivity associated with RTL designs is unlikely to change much until we can change the languages themselves. Yes, EDA vendors' tendency to extract maximum revenue for minimum tooling is a cherry on top, but solving that problem alone does not resolve the underlying productivity crisis.

For example: when I implement a complex datapath in VHDL, I become responsible for verifying every nook and cranny of both the signal path and the scheduled design that implements it. If I can effectively do design entry in HLS, I no longer need to verify the scheduled design by hand. That's a very big win.


Hey, we'd love to talk to them. We've heard about them a bit but not from their customers so far. If you have an intro we'd appreciate it, otherwise we'll reach out.

Verilog is not the best language, totally agreed on that. Right now we're not in a place to change that, but maybe one day!


As a hardware developer, I love seeing EDA tools getting YC's attention and resources. When hardware designers talk about how terrible EDA tools are, though (myself included), I find that it's a lot of the pot calling the kettle black. Most semiconductor companies have the most ancient IT infrastructure and tooling. Like at my current company, we're still using Perforce. Instead of using SSH, my coworkers VNC into a server to run their terminals. A surprising number of them still use Notepad++!

Encouraging modern practices and enabling developers to migrate to newer development and development-adjacent tools will be the huge value add with a product like this. At my company and some of the other companies I have worked for, we primarily use Synopsys for our tooling, but in reality we use Cadence and Siemens tools occasionally. Being able to be more vendor agnostic, and tool agnostic, would be extremely useful. I noticed that you're using Verilator, but are there plans in the future to support other vendor tools?

Promoting the use of natively running apps (even if they're thin client web apps) is a huge win in my book too! VNC and VDIs are a terrible way to work. I really hate having to deal with the 40ms latency for every key and mouse event and font scaling that never works properly.

Another question I have is about cost - I'm not a cloud billing guy, so I don't know the numbers off hand. But from my understanding, hardware development typically sees pretty high compute resource utilization, which is why I had always assumed that in-housing compute infrastructure made financial sense. Since it's on the cloud, how does it compare to on-prem computing from a cost perspective?


Thanks for your thoughts! We are indeed planning to add support for other vendor tools, including the Synopsys, Cadence, and Siemens simulators.

Cost for cloud vs on-prem is very specific to design complexity. For the designs we're working with, we have seen that compute is cost-effective on the cloud compared to building out on-prem compute. We expect that to change as we support larger designs, and we have support for on-prem compute firmly on our roadmap.


Great! Yeah, I imagine you guys have a lot on your plates. Looking forward to hearing more from you guys!


Greatly needed. Nice


Thanks for the support!



