Hacker News | elham's comments

Based on my understanding of historians: we combine a historian, a real-time data visualizer/processor, and a hardware controller into one. We tried to create a more unified, developer-friendly ecosystem - reducing the need for multiple disparate components and adapters, and abstracting away the low-level work needed to interface with hardware like PLCs.


Thank you! Will look into these.


Not super familiar with the CommandAlkon/construction space, so I'd love to hear more on that end.

To address hardware deeply integrated with proprietary systems: our current approach is to build out middleware that lets these systems integrate with Synnax - or a protocol that can be used to communicate with these devices. But we also try to make Synnax a developer-friendly tool, so that people already familiar with these systems who want a quick way to connect them to Synnax can do so using our Python, TypeScript, or C++ libraries.

Our goal is to replace these cobbled-together systems with a more uniform and airtight one. We hope to do that by starting with sub-scale preliminary integrations on setups/expansions already in development, which can serve as a base for expanding out to the rest of the system.


Not super familiar with Zenoh! At first glance, it seems to focus more on the robotics use case and support higher-level data manipulation.

Synnax also follows a pub-sub model, which enables functionality like having multiple clients/consoles be able to view and monitor their physical systems.
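To be clear, this isn't Synnax's actual API - just a minimal in-process sketch of the pub-sub pattern, with hypothetical names, showing how two consoles can observe the same live channel:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal in-process pub-sub broker (illustrative only)."""

    def __init__(self):
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable) -> None:
        self._subs[channel].append(handler)

    def publish(self, channel: str, sample: float) -> None:
        # Fan the sample out to every subscriber of this channel.
        for handler in self._subs[channel]:
            handler(sample)

broker = Broker()
console_a, console_b = [], []
broker.subscribe("pressure_pt_1", console_a.append)
broker.subscribe("pressure_pt_1", console_b.append)
broker.publish("pressure_pt_1", 101.3)
# Both consoles now hold the same sample.
```

In the real system the "broker" is the Synnax server and the subscribers are networked clients, but the fan-out shape is the same.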

I'd say we try to reach closer to the edge to directly facilitate sensor abstractions. In this vein, another way Synnax seems to differ is that it caters more to the hardware control aspect. For example, we have a panel where users can create schematics of propulsion/electrical systems, which they can then link with automated scripts to command the actual system - and override with manual control when necessary.

Using multiple Synnax nodes in a distributed fashion is still a WIP, but it's a goal of ours as well!

If you use Zenoh, I'd love to hear how you use it and whether my impressions of it are correct.


1. Since we have several customers in aerospace who adhere to ITAR regulations, all our software is hosted on-prem, so we have no connection to the data they're using. The data in the DB can also always be exported if a system switch is desired. Agreed about the large assortment of issues that come up when integrating these kinds of systems - it would be naive of us to try to address them all at once. We've tried to make our system easy to integrate so that we can run pilots that start small (one test stand, one bench, etc.) to demonstrate initial, immediate value and then expand from there. We want our product to grow with our users, incorporating support for new protocols as they become broadly needed.

2. We see a lot of value in providing what is essentially a universal adapter to these protocols and hardware interfaces. Decoupling the data communication/device infrastructure from the control and acquisition workflows is big for us, and this seems essential to that. It's a big endeavor on its own, but our existing integrations have already been really helpful to our users, and as the product matures, we intend to keep expanding them!

Hopefully 1 & 2 address your first question!
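To make the universal-adapter idea in point 2 concrete - a sketch with hypothetical names, not Synnax's real driver interface - each protocol hides behind a common read/write surface, so acquisition and control code never touch protocol details:

```python
from abc import ABC, abstractmethod

class Driver(ABC):
    """Common surface that acquisition/control code programs against."""

    @abstractmethod
    def read(self, tag: str) -> float: ...

    @abstractmethod
    def write(self, tag: str, value: float) -> None: ...

class InMemoryDriver(Driver):
    """Stand-in for a real Modbus/OPC UA/EtherNet-IP driver."""

    def __init__(self):
        self._tags: dict[str, float] = {}

    def read(self, tag: str) -> float:
        return self._tags[tag]

    def write(self, tag: str, value: float) -> None:
        self._tags[tag] = value

def acquire(driver: Driver, tags: list[str]) -> dict[str, float]:
    # Workflow code stays identical no matter which driver is plugged in.
    return {t: driver.read(t) for t in tags}

plc = InMemoryDriver()
plc.write("valve_open", 1.0)
snapshot = acquire(plc, ["valve_open"])
```

Swapping protocols then means swapping the `Driver` implementation, not rewriting the workflow.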

3. Addressing the second question: we've mostly been focused on test & operations use cases (e.g. running real-time control and data acquisition for engine tests). We see a lot of ways we could eventually serve the industrial controls/automation space - similar to Ignition. However, we're also aware of the many reasons people in this space will want to stick with tried-and-true tools that have a larger community and ecosystem.

We're still figuring out how we fit into that space, and how to communicate that we can provide the breadth of functionality and support people there need. Posts like this - and the users who already see the value and are willing to try something newer and still in development like us - have been huge in making progress toward that.

Some questions I have!

  1. What parts of the networking have been the most challenging - work you wish had already been done for you?

  2. For interfacing with protocols - same question as above, but also: which ones were pretty nice to work with, and what made them so? Closely related: which direct integration would you most immediately want if you were considering something like Synnax?

  3. Related to the customer not knowing the mapping of tags to data - are there similar issues that you've experienced that make it hard to use these systems?

This ended up being a long message, but I'd appreciate your insights on any of the above!


Oh, ok, the test and operations case makes sense. As for your questions:

1. It's basic networking tasks such as running a network drop, assigning IPs, making sure the PLCs are on the right subnet, etc. In many cases the PLCs aren't on a network at all and the IT team doesn't really know how to work with the PLCs and the OT team doesn't really know how to work with networks. Sometimes it's been easier to just add external sensors and go over a cellular network and skip the PLC altogether.
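For what it's worth, the subnet half of that check is easy to script - a small sketch using Python's stdlib `ipaddress` module (the addresses are made up):

```python
import ipaddress

def on_same_subnet(host_ip: str, plc_ip: str, netmask: str) -> bool:
    """True if the PLC address falls inside the host's local subnet."""
    # strict=False lets us pass a host address rather than a network address.
    net = ipaddress.ip_network(f"{host_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(plc_ip) in net

# A PLC at 192.168.1.50 is on the host's /24; one at 10.0.0.5 is not.
ok = on_same_subnet("192.168.1.10", "192.168.1.50", "255.255.255.0")
bad = on_same_subnet("192.168.1.10", "10.0.0.5", "255.255.255.0")
```

Of course this only catches the misconfiguration once the PLC is on a network at all - it doesn't help with the drop and cabling work.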

2. We use one of Ignition's modules to interface with the control systems directly. They have drivers for Allen-Bradley, Siemens S7, Omron, Modbus, and a few others. The downside is Ignition doesn't have an API, so we have to configure things using a GUI. Beyond Ignition, the other big provider of drivers is Kepware - they probably have a driver for everything, but again, they aren't really set up for use by developers trying to deploy to a Linux box. If the customer has an OPC-UA server set up, we can connect to that using an open source library.

3. What we've learned is that many customers rely on third parties (e.g. the machine manufacturer or a system integrator) to configure their system, so when it comes to extracting the data they want, you're kind of on your own. We're not industrial system experts, so this creates a unique challenge. Larger and more sophisticated customers will have a much deeper understanding of their systems, but these folks are usually going to be using something like Ignition and will already have the dashboards and reports so it's more a matter of integrating with Ignition.


This all makes sense and is extremely illuminating. Thank you!


Hey! Great question - one we get a lot. We've come from, and talked to, a lot of companies that do what you described with tools like TimescaleDB and InfluxDB. They're useful tools that support a breadth of applications. We thought that by building one specifically to leverage the read/write patterns you'd expect from sensor-heavy systems, we could achieve better data throughput and thus better enable real-time applications. For example, we've been able to get 5x the write performance of InfluxDB for sensor data on our DB.

In general, having built out the core DB has been valuable in letting us expand to other useful features, such as writing commands out to hardware at sufficient control-loop frequencies or creating smooth real-time visualizations.
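On the control-loop point: the usual way to hold a command rate steady is to schedule against absolute deadlines rather than sleeping a fixed interval each cycle - a minimal sketch, independent of any Synnax API:

```python
import time

def run_control_loop(step, hz: float, iterations: int) -> int:
    """Run `step` at a fixed rate using absolute deadlines, so timing
    jitter in each cycle doesn't accumulate into long-term drift."""
    period = 1.0 / hz
    deadline = time.monotonic()
    done = 0
    for _ in range(iterations):
        step()  # e.g. read sensors, compute, write a command out
        done += 1
        deadline += period
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    return done

ticks = run_control_loop(lambda: None, hz=200.0, iterations=5)
```

A real hard-real-time loop would run on a PLC or an RTOS; this pattern is the soft-real-time version you'd use on a general-purpose OS.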

The other thing we think is really powerful is having a more integrated tool for acquiring, storing, and processing sensor data and actuating hardware. One common issue we've experienced is trying to cobble together several tools that weren't fully compatible, which creates a lot of friction in the overall control and acquisition workflow. We want to provide a platform for a more cohesive yet extensible system, and the data storage layer was a good base to build that on.


Thanks for the reply! That all makes sense, and I can totally relate to the "cobbling together several tools that weren't fully compatible" experience. There's enough complexity with having to support or integrate sensors/actuators with a variety of industrial networking protocols. Anything to simplify the software portion of the system would go a long way. Excited to dig into this a bit more, best of luck with ongoing development!


Thank you! Happy to get any feedback after a deeper look


Did you build on any low-level libraries like RocksDB for data persistence etc.? Or did you fully hand-roll the database? Curious about the tradeoffs there nowadays.


The core time series engine, called cesium (https://github.com/synnaxlabs/synnax/tree/main/cesium), is written completely from scratch. It's kind of designed as a "time-series S3", where we store blobs of samples in a columnar fashion and index by time. We can actually get 5x the write performance of something like InfluxDB in certain cases.
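As a toy illustration of that layout (not cesium itself - just a sketch of the idea): samples append into parallel timestamp/value columns, and range reads binary-search the timestamp column, which stays sorted because sensor data arrives in time order:

```python
from bisect import bisect_left, bisect_right

class TinyTimeSeries:
    """Toy columnar store: parallel timestamp/value arrays, indexed by time."""

    def __init__(self):
        self._ts: list[float] = []    # sorted timestamps (the index)
        self._vals: list[float] = []  # column of samples

    def write(self, ts: float, value: float) -> None:
        # Sensor data arrives in time order, so appends are O(1).
        assert not self._ts or ts >= self._ts[-1], "out-of-order write"
        self._ts.append(ts)
        self._vals.append(value)

    def read(self, start: float, end: float) -> list[float]:
        # Binary-search the time index, then slice the value column.
        lo = bisect_left(self._ts, start)
        hi = bisect_right(self._ts, end)
        return self._vals[lo:hi]

db = TinyTimeSeries()
for t in range(10):
    db.write(float(t), float(t) * 2.0)
window = db.read(3.0, 6.0)  # values for timestamps 3..6 inclusive
```

The append-only, already-sorted write path is what makes this workload cheaper than a general-purpose store that must handle arbitrary inserts.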

For other metadata - such as channels, users, and permissions - we rely on CockroachDB's Pebble, a RocksDB-compatible KV store implemented in pure Go.


One thing that might keep me from using this is how well it integrates with other tools I might want for data analysis or historical lookback. For example, I currently use Grafana as a simple, easy way to review sensor data from our R&D tests. Grafana has solid support for Postgres, Timescale, InfluxDB, and a number of other data sources. With a custom database, I'd imagine the availability of tools outside the Synnax ecosystem would be rather limited.


That's a valid concern! As a team of 3 building this out, we're still working on integrations with other tools, hardware, etc. We've been prioritizing direct integrations with the systems our current users are interested in.

We also value enabling developers to build on Synnax, or to integrate the parts they're most interested in into their existing systems. We've tried to serve that end by building Python, TypeScript, and C++ SDKs and device integrations. We're continuing to look into how we can better help developers build and expand their systems with Synnax, so if there are any integrations you think are important, I'd appreciate your take.

