Launch HN: Openbase (YC S20) – reviews and insights for open-source packages
148 points by liorgrossman on July 14, 2020 | 85 comments
Hi everyone! I'm Lior, one of the makers of Openbase (https://openbase.io). We help developers choose the right JS package for any task - through user reviews and insights about packages' popularity, reliability, activity and more.

I started Openbase out of my own frustration as a developer: there are 1.3 million JavaScript packages out there, and I found myself spending an increasing amount of time researching and evaluating packages every time I wanted to accomplish a task - displaying an autocomplete, sending an HTTP request, extracting significant keywords from a page, etc. Each time I did that research, I spent hours reading different blog posts comparing those packages, going over PRs and commit messages to see how active the development and maintainers were, and trying to cross-reference data from npm and GitHub to reach a decision.

We started by gathering data from npm and GitHub - from versions and dependencies to commits, pull requests, and maintainers - and tried to figure out what insights we could surface to help us (and fellow developers) choose the right package. We ended up adding automated insights like star count over time, commit frequency, time between major/minor versions, average time to resolve issues and PRs, percent of commits by the community, dependency insights, and more. Here's what the insights page looks like: https://openbase.io/js/react
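To give a flavor of how these insights can be derived, here's a minimal sketch (not our production pipeline) that computes a median time-to-close for a repo's issues straight from GitHub's REST API. The endpoint and fields are GitHub's real API; the repo and single-page fetch are just for illustration:

```typescript
// A sketch of one insight: median time-to-close for a repo's issues,
// computed from GitHub's REST API (one page of results, for brevity).
interface Issue {
  created_at: string;
  closed_at: string | null;
  pull_request?: unknown; // present when the "issue" is actually a PR
}

async function medianTimeToResolve(owner: string, repo: string): Promise<number> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues?state=closed&per_page=100`,
    { headers: { Accept: "application/vnd.github+json" } }
  );
  const issues: Issue[] = await res.json();

  // The issues endpoint also returns PRs; keep genuine, closed issues only.
  const durations = issues
    .filter((i) => !i.pull_request && i.closed_at)
    .map((i) => Date.parse(i.closed_at!) - Date.parse(i.created_at))
    .sort((a, b) => a - b);

  const mid = Math.floor(durations.length / 2);
  return durations.length % 2
    ? durations[mid]
    : (durations[mid - 1] + durations[mid]) / 2; // milliseconds
}

medianTimeToResolve("facebook", "react").then((ms) =>
  console.log(`median time to close: ${(ms / 86_400_000).toFixed(1)} days`)
);
```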

We decided not to stop there, and to let our users discover the best packages for each task and compare them side-by-side. We've already manually curated several hundred categories, such as CSS frameworks, OAuth packages, and HTTP request libraries: https://openbase.io/packages/best-javascript-css-framework-l... In the long term, we want to build tools that will allow the community to curate and maintain categories, and grow to thousands of categories covering any imaginable task.

Lastly, over the past couple of weeks we decided to try something more ambitious - letting developers rate and review open-source packages. While data and metrics are great, we found ourselves often consulting friends and colleagues about which package to use. We think reviews could reveal a lot of insights that cannot be deduced from metrics alone. To add even more fun to the mix, we added badges like "Great documentation", "Performant", and "Hard to use" (check it out: https://openbase.io/js/vue), and we're looking for ideas for more badges.

We found that many maintainers want to promote their packages, and unfortunately, there aren't good ways of doing so today. This is a problem for new package maintainers who have a hard time getting those first thousand users, for an API/SaaS company that wants to earn developers' mindshare, or for a software firm that wants to showcase its expertise. We've started by allowing package maintainers to claim their package page and giving them tools to promote it for free. In the future, we want to allow paid promotion of packages - limited to a single, clearly-marked promoted package per category. We would only surface the promoted package; the package's ratings, reviews, insights, and metrics would remain untouched. We believe this kind of balanced approach could make Openbase a sustainable company without impairing the user experience.

In the good ol' 90s, when I took my first steps with JavaScript, it was all about small scripts to make web pages a bit more interactive. I can't believe that two decades later, the JavaScript ecosystem has gotten so huge and complicated that we've actually built a startup to help you navigate the mess. Strange world, isn't it?

We would really love to hear your feedback about Openbase, and in particular about the reviews feature. How would you go about making the reviews a force for good, and making sure reviews are helpful for other developers and the community at large?




What’s the business model to get paid for something like this? Alternatively phrased, how is this a sustainable business?

And how do you stop Microsoft from melding this functionality into GitHub / npm? The friction would be so much lower (existing GitHub account), and they could place the information directly on the project pages.


In terms of business model - we're thinking about paid promotion of packages: allowing maintainers (companies and individuals) to promote their packages.

We want to limit it to a single, clearly-marked promoted package for each category. We would only surface the promoted package; the package's reviews, insights, and metrics would remain untouched.

Obviously, users would be happy if GitHub included some of this functionality inside repos. However, I don't think GitHub could simply put user reviews on all repos (technically they could, but business-wise that would not be a smart decision). So even if they did, and user reviews were an opt-in feature on GitHub, you'd have reviews enabled for some packages and not for others, which is suboptimal.


On one hand, this is fantastic and desperately needed. I've wanted something similar for years.

On the other hand, I've been an open-source developer (one of my projects has 7.5k stars on GitHub), and the constant stream of negativity towards work I put hundreds of hours into for free was pretty psychologically draining, to say the least. And it's actually a pretty well-liked project! By building such a service you take on a lot of responsibility here.


I feel you.

I've heard from several maintainers about this phenomenon, where at first all you want is for your package to be popular and have many users, but after a while you notice that having more users becomes a burden, with so many requests, issues, and opinions.

But I can see what you mean. We're still small enough, but I would assume that if Openbase becomes a success, we take on a lot of responsibility - in terms of making sure maintainers are rewarded (not necessarily monetarily) and not punished for their good work.


How are you going to stop nasty people from thinking they have the right to hunt down open-source creators, who work for free and should be under no pressure to get things done? It feels like such a platform could also cause stress, with people rating a package while taking no time to actually contribute commits to it.


That's a great question. I agree this might become a problem down the line, when bad actors might try to pressure package maintainers into fixing problems.

We haven't encountered this problem yet, and we believe most developers are decent enough to honestly rate a package based on their experience, as opposed to using ratings as leverage.

A few things we're thinking about:

* Allowing maintainers to flag such reviews (for our review)

* Allowing users to upvote/downvote reviews, and to flag bad behavior (spam, extortion), to steer away bad actors.

* Allowing maintainers to publicly respond to reviews

* Gathering enough data (about a review and the reviewer's previous reviews) to conceal "bad" or non-useful reviews, in a similar manner to other platforms.

What do you think? Do you have any other suggestions?


I think a very good metric would be the number of maintainers of a package and how well funded they are.

E.g. React has a huge resource pool and people are getting paid for development, while some (most?) packages are single dev with no funding.

Perhaps making these kinds of metrics visible would introduce some humility in people pressuring the latter maintainers.


That's a great point. Showing project funding is something we had in mind (and in the mockups), but it didn't make it into the MVP. We would probably have to integrate with Open Collective and other data sources.


Maybe you could let people put bounties on issues, and pay the developers.


Interesting - that's something we had in mind in the past. I know there are a few good websites doing that, ranging from simple bounties like IssueHunt to more complex models like Tidelift. I'm not sure how well those work, honestly.

One similar thing we were thinking of is giving maintainers a platform to easily provide paid support if they want to. Not sure about that one either, although I know some companies/people make money providing support for OSS.


I've been doing JS development for years and feel the same pain whenever I need to look for the npm package with the right balance of functionality, size, active development, popularity, etc. A dedicated site for package recommendation would definitely save lots of time; currently I use npmtrends.com for this. I quickly checked openbase.io for table-rendering packages and found there were packages only for Vue.js - why exclude React packages? I also understand it's an MVP, but package result sorting should be there. Reviews would definitely help, but I'd also like to see likes/upvotes on those reviews for extra validation.

On a side note, how are you ensuring that all popular packages are included in your database? Otherwise it wouldn't make sense if the user needs to cross-check on Google/npm/GitHub, as in this table-rendering case.

I'll be regularly using Openbase for package discovery. Wish you all the best!


Thank you, we all share the same pain with package search.

Category curation in Openbase includes not just a category but also a framework. So, as you mentioned, under table libraries you could have React, Vue, Svelte libraries, etc.

We currently filter those by framework, but unfortunately, we didn't have the time to fully implement this page prior to the launch.

What should be on this page is a dropdown where you can choose from 10 different frameworks and see the results for that specific framework. That dropdown, along with the URL routing to support it, is still missing from the product. It's one of the most important things on our roadmap, so expect to see it on Openbase soon.

Unfortunately, there's no way to 100% ensure all packages are included in a category, since the curation is manual, but I would say we base it on extensive Google/npm/GitHub searches. In the future, we want to involve the community in the curation process, so anyone can suggest new packages for a category, or categories for a package. Not a simple thing to implement, but we chose this challenge because we feel categorization can provide a lot of value.


Ruby Toolbox has a wonderfully simple way of crowdsourcing this -- categorizing a package is as simple as submitting a small YAML PR, e.g. https://github.com/rubytoolbox/catalog/pull/417/files


I didn't know that (and BTW, Ruby Toolbox is a great source of inspiration).

I think the challenge with that is that there's no standardization of categories. Both GitHub and npm have the concept of "topics"/"keywords" that developers can choose. The problem is that every maintainer chooses slightly different keywords, so they become ineffective as a tool for discovery.

We still haven't fully figured out what the optimal solution might look like.


> We still haven't fully figured out what the optimal solution might look like.

A recommender system based on the kinds of keywords in the description, projects starred by the main contributors, etc.


I think that would be a good start, but it would still require some human vetting. We played with some simple NLP approaches (running TF-IDF on the keywords/description/readme of packages to determine the category), but honestly the results weren't great, so we opted to curate everything manually for now.
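For the curious, that experiment looked conceptually like this toy sketch - purely illustrative, and the sample texts just paraphrase real npm descriptions:

```typescript
// A toy version of the TF-IDF experiment: score keywords/descriptions and
// surface each package's highest-scoring terms as candidate category labels.
type Doc = { name: string; text: string };

const tokenize = (s: string) =>
  s.toLowerCase().split(/[^a-z0-9]+/).filter((t) => t.length > 2);

function topTerms(docs: Doc[], k = 3): Map<string, string[]> {
  const tokens = docs.map((d) => tokenize(d.text));

  // document frequency of each term across the corpus
  const df = new Map<string, number>();
  for (const ts of tokens)
    for (const t of new Set(ts)) df.set(t, (df.get(t) ?? 0) + 1);

  const result = new Map<string, string[]>();
  docs.forEach((d, i) => {
    const tf = new Map<string, number>();
    for (const t of tokens[i]) tf.set(t, (tf.get(t) ?? 0) + 1);
    const scored = [...tf].map(([t, f]) => ({
      t,
      // tf-idf: frequent in this doc, rare across the corpus
      score: (f / tokens[i].length) * Math.log(docs.length / df.get(t)!),
    }));
    scored.sort((a, b) => b.score - a.score);
    result.set(d.name, scored.slice(0, k).map((s) => s.t));
  });
  return result;
}

console.log(topTerms([
  { name: "axios", text: "Promise based HTTP client for the browser and node.js" },
  { name: "got", text: "Human-friendly and powerful HTTP request library for Node.js" },
  { name: "chalk", text: "Terminal string styling done right" },
]));
```

The failure mode we hit is visible even here: generic terms ("library", "node") dominate, so categories came out noisy without human vetting.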


Thank you for creating this site - it looks like an excellent piece of work. I hope you find a way to make some money from the work, enough at least to pay the hosting bills.

A couple of questions:

1. I found a way to add a tutorial link to my package's page, but the wider documentation seems to be limited to whatever is put in the README.md file. Is there a way to add links to additional documentation?

2. You mention elsewhere in the thread possibly adding download sizes to the package page. This worries me a little because NPM appears to list the full size of the entire repository (which for my library is 44.5MB - the repository includes a lot of demos, assets for those demos, inline documentation, etc). Whereas the 'real' size of the minified file - the one which would be served to websites - is only 302KB (78KB zipped). If you do include download sizes in a future version, can I ask that you also include a way for maintainers to flag misleading information?

[Editing to ask] 3. How can I get my package to show up in the search results, when people search for 'canvas'?


Thank you for your feedback!

1. Nope, we currently have no way to edit what's on the overview page, so unfortunately you'll need to edit your README file to add additional links.

2. In general, we're going to use an approach similar to BundlePhobia, where we show the gzipped/uncompressed size of the final file, but it does make sense to allow maintainers to flag this (and other information) as wrong. (There's a rough sketch of the measurement after point 3.)

3. Our search engine is still a work in progress. We started with the npms implementation, added layers of information from our own dataset, and will iteratively add more factors to the search ranking. Unfortunately, we still can't guarantee that a specific package will show up in the search autocomplete. We are, however, adding a "search results page" that will show more than the 9 results you see in the autocomplete.
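Regarding point 2, the measurement itself is conceptually simple once you have a bundled artifact. A rough sketch - the file path is hypothetical, and a real tool like BundlePhobia bundles and minifies the package first:

```typescript
// A sketch of the size measurement: given a package's bundled, minified
// artifact, report raw vs gzipped size.
import { readFileSync } from "node:fs";
import { gzipSync } from "node:zlib";

function bundleSizes(file: string) {
  const raw = readFileSync(file);
  const gzipped = gzipSync(raw, { level: 9 }); // max compression, like CDNs
  return { rawKb: raw.length / 1024, gzippedKb: gzipped.length / 1024 };
}

const { rawKb, gzippedKb } = bundleSizes("dist/library.min.js"); // hypothetical path
console.log(`${rawKb.toFixed(1)} KB minified, ${gzippedKb.toFixed(1)} KB gzipped`);
```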


Does this include TypeScript projects? I couldn't find my own. [0]

Also, I can see my two most popular packages (40 stars [1] and 14 [2], respectively), and I've realized that not updating them frequently probably hurts.

The thing is, what about bots like Dependabot? I have one or two other packages that auto-update and close issues quickly because they have some bots helping out, and I was wondering if you took that into account.

I am also sad that things I worry about, like testing and bundle size, are nowhere to be found.

But overall, great idea. Keep at it!

[0]:https://github.com/ldd/vscode-jq/

[1]:https://openbase.io/js/gatsby-source-github-api

[2]:https://openbase.io/js/react-tech-tree


Thanks!

We currently only cover npm packages (not all GitHub repos) - does vscode-jq have an npm package?

We currently don't have any special handling for bots, but it's in the pipeline. The same goes for consolidating contributors (sometimes the same person shows up as two different GitHub users - you can see that on GitHub too).
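For what it's worth, a first-pass heuristic for bots can be very simple - GitHub app accounts have type "Bot" and logins conventionally ending in "[bot]". A sketch, not our actual implementation:

```typescript
// A sketch of a first-pass bot filter for commit/issue authors.
interface Author {
  login: string;
  type: "User" | "Bot" | "Organization";
}

const isBot = (a: Author) => a.type === "Bot" || a.login.endsWith("[bot]");

const commitAuthors: Author[] = [
  { login: "some-maintainer", type: "User" },
  { login: "dependabot[bot]", type: "Bot" },
];

console.log(commitAuthors.filter((a) => !isBot(a)).map((a) => a.login));
// -> ["some-maintainer"]
```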

Bundle size is a big one for us, highly prioritized.

Testing isn't, though - what would you like to see about testing that isn't covered by badges (e.g. test coverage, passing/failing, etc.)?


Aha! Covering npm packages makes sense. I briefly forgot that I hadn't published that one on npm. I wonder if VS Code extensions are published on npm... umm...

I'm glad you have all those future features planned.

As for testing, I think badges are oftentimes not enough. I'd like to see if tests exist at all in the first place, and the quality or helpfulness of those tests. But maybe I'm in the minority on that, so ask other users for confirmation.

:D


Hehe :)

A few other people asked about testing too, but it's going to be challenging to build something that works reliably at scale across different codebases and testing frameworks.

But I think I get what you're looking for now. This will help us model the feature better if we end up implementing it.

Thanks for the feedback!


Another interesting metric to try to capture would be issue resolution. A lot of the time you might have a very divisive issue on GitHub where the developers basically piss off a lot of their users. I don't think that's generally a good metric to promote as a top-level metric, obviously, but it would be an interesting way to slice and dice projects.

But that analysis could look for thumbs-up, fireworks, or other happy reaction emoji vs. negative reaction emoji tied to the replies of people on the team.
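Concretely, something like this sketch could work - the `reactions` summary on issue comments is part of GitHub's REST API, though the positive/negative bucketing here is just my own assumption:

```typescript
// A sketch of emoji-based sentiment over an issue's comment thread.
interface Reactions {
  "+1": number; "-1": number; laugh: number; hooray: number;
  confused: number; heart: number; rocket: number; eyes: number;
}

function commentSentiment(r: Reactions): number {
  // Bucketing is debatable: "eyes" is left out as ambiguous.
  const positive = r["+1"] + r.laugh + r.hooray + r.heart + r.rocket;
  const negative = r["-1"] + r.confused;
  return (positive - negative) / Math.max(1, positive + negative); // -1..1
}

async function issueSentiment(owner: string, repo: string, issue: number) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues/${issue}/comments`,
    { headers: { Accept: "application/vnd.github+json" } }
  );
  const comments: { reactions: Reactions }[] = await res.json();
  const scores = comments.map((c) => commentSentiment(c.reactions));
  return scores.reduce((a, b) => a + b, 0) / Math.max(1, scores.length);
}
```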

Yes, this goes back to criticizing open-source maintainers for not doing what everyone wants - so perhaps that kind of metric should only be enabled on sponsor-backed repos.


Oh wow, great feedback.

We do have a metric like median time to resolve an issue - and we even display it on a chart over time.

Sentiment analysis (natural language or emojis) could be really interesting for assessing issues. We probably don't have the resources to pull it off right now, but it would be amazing.

And yeah, we should be careful not to incentivize user behavior that rewards criticizing maintainers as a kind of leverage.


This is cool, and I appreciate that software development insights are getting some love. Full disclosure: I wouldn't say I'm working on the same thing, but we definitely have some overlap when it comes to software development insights, which is gaining steam.

Your dashboard gave me some interesting food for thought, so I'll do the same and share some of the things I'm working on, which you are free to incorporate into Openbase.

https://imgur.com/Y06Tk3f

https://imgur.com/Yjk2dzR

https://imgur.com/RJw4ygS

https://imgur.com/xXi9EMd

https://imgur.com/nROCA0O

https://imgur.com/xhb2haD

https://imgur.com/sZ2YpIM

https://imgur.com/HVSNhS0

https://imgur.com/TnInleD

My focus is not on the discovery part, which you guys are working on; rather, I'm focused on providing "business intelligence" for the software development lifecycle.

I do agree with you that code metrics can only tell you so much, as they're not really meant for discovery. Code insights might indicate things like stability, investment, complexity, etc., but like you said, you can't tell from code insights whether the code is easy to use or not. Discovery is a serious problem, and GitHub has certainly not solved it yet, so what you guys are working on is a problem worth solving, in my opinion.


Thank you for sharing this!

As you mentioned, we're focused on the discovery part; package insights are a part of that, but so are reviews, ratings, search, and categorization.

Your tool looks like a great solution for software teams to get insights into their code and dev processes. It also connects with the "developer velocity" trend I've heard about several times in recent years.

Looks like quite a lot of work too, kudos!

Is it in production with some customers?


> Looks like quite a lot of work too, kudos!

Thanks. I don't think people realize how complex it is to generate actionable code insights. Indexing code is easy; indexing history is an order of magnitude more difficult, in my opinion.

> Is it in production with some customers?

No, the product isn't finished yet (I pivoted a bit from my initial idea). My goal is to have it ready by the end of this month, so that I can start indexing and making the indexed information publicly available. I was hoping to have a Docker image / bare-metal install packages available by the end of this month too, but I don't think that will be feasible.

Right now I'm continuously indexing 4,000 repos, but the challenge is really GitHub and its rate limits. When I have the public site up and running, I'll reach out to GitHub to see what can be done, as I would like to make the indexed information freely available for anybody to tap into - including Openbase, if I'm capturing things that you guys aren't.


Yeah, it is pretty darn complicated!

A couple of things that helped us with the rate limits:

* Limiting most of our graphs/reporting to the past 3 years (some repos are pretty old)

* Fetching the data incrementally - namely, the first fetch is kind of large, but after that we only fetch the minimal set of changes/delta for a repo on a regular basis!
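In code, the incremental part looks roughly like this sketch - the `since` parameter on the commits endpoint and the rate-limit headers are GitHub's real API; the threshold is made up:

```typescript
// A sketch of incremental fetching: only ask GitHub for commits newer than
// the last sync, and keep an eye on the rate-limit headers it returns.
async function fetchNewCommits(owner: string, repo: string, lastSync: Date) {
  const url =
    `https://api.github.com/repos/${owner}/${repo}/commits` +
    `?since=${lastSync.toISOString()}&per_page=100`;
  const res = await fetch(url, {
    headers: { Accept: "application/vnd.github+json" },
  });

  // GitHub reports the remaining quota and the reset time (epoch seconds).
  const remaining = Number(res.headers.get("x-ratelimit-remaining"));
  if (remaining < 100) {
    const reset = Number(res.headers.get("x-ratelimit-reset")) * 1000;
    console.warn(`near rate limit; window resets ${new Date(reset).toISOString()}`);
  }
  return res.json(); // only commits authored after lastSync
}

// e.g. fetchNewCommits("facebook", "react", new Date(Date.now() - 86_400_000))
```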

That sounds awesome, I can't wait to see the product when it goes live!


> Limiting most of our graphs/reporting to the past 3 years (some repos are pretty old)

I really need the entire history, so limiting time ranges isn't possible. The biggest issue is really with cloning/fetching, since GitHub doesn't like people syncing thousands of repos on a daily basis. Outside of that, my search and analytics engine doesn't really need GitHub, since GitHub is unable to provide the data I need anyway.

I've talked about it before, but what I'm really doing is moving Git's history into SQL (Postgres in my case) and Lucene, so that I can slice and dice the history to surface developer insights. Hence me calling it "business intelligence" for your code's history.

>That sounds awesome, I can't wait to see the product when it goes live!

Thanks - and I think your product has potential as well; you should look at the enterprise angle. Discovery in large companies is actually a serious problem. Companies really need a single place they can go to see everything that is available.

GitHub, with its "Discussions" feature, gets you part of the way, but companies still need a centralized place that helps them easily discover/discuss what is available internally.


I immediately went looking for how to become a reviewer, or who the current (vetted) reviewers are, but didn't find anything. Clicking into a package, I'm invited to leave a review, which I see is tied solely to my GitHub login. Not bad, but it's hard to see how reviews are approved or curated. The centralization could be nicer than having to read a bunch of blog posts to get the gist, but I'd rather defer to a proven community (like Debian's package maintainers) for guidance on JS packages. That's not a thing today, so maybe this project is a good first step.


You got it right!

Currently the only way to create an account is through GitHub login (we're going to add more options in the future). That means that in order to write a review, you have to log in with GitHub.

Behind the scenes, text reviews do not show up immediately; we go over them and approve them manually to verify there's no bad behavior, spam, offensive content, etc.

We plan on approving each text review ourselves manually until we're stretched too thin and that plan no longer scales.


I was going to ask how you plan to prevent fake reviews, but this mostly answers that. Curious to see where you'll go after being stretched too thin.


Yeah, preventing fake reviews is a tough challenge, faced by much larger companies like Amazon, Google and Facebook.

I don't think anyone has been able to prevent them 100%, so it's still a work in progress.

I guess once we can no longer do this ourselves, we'll bring on additional team members to help, and once that's not enough, we'll have to use some combination of moderators (not very scalable), crowdsourcing (letting users report suspicious reviews), and technology (identifying and surfacing suspicious reviews).

We honestly don't know yet, but we know it's going to be a priority.


How are libraries added to a category? Manually? E.g. https://openbase.io/packages/top-javascript-frontend-framewo... seems very lacking.

Side note: selecting frontend frameworks from the sidebar gives the same list, but the URL stays https://openbase.io/categories - the URL should update.


Indeed, currently it's curated manually! We'd love to get suggestions for more libraries we can add to that category!

You're right about selecting categories from the sidebar not updating the URL - we haven't implemented the routing yet; that's something we'll implement soon.


I think it's a good idea to include ones that stand out from the rest of the pack.

https://github.com/sveltejs/svelte and https://github.com/ryansolid/solid stand out by being the only two that are compiled to standard JS, so no framework is loaded at runtime.

Speed is important too, and https://github.com/infernojs/inferno (and Solid above) is well known for speed, backed by benchmarks.

Not sure if you want to add more beyond this point, since a few more will push the boundary towards just including everything, which is over 100 frameworks nowadays.


Awesome, thanks for the suggestions. Not sure how we missed Svelte (?!). I'll be sure to add those to the frontend framework category.


Beautiful and wonderful, thank you for building and sharing this! I use these sort of websites all the time so this is very useful for me.

Congrats on the launch!


Thanks for the kind words! We built it to solve our own problem (spending so much time trying to find the right package). Happy to hear it's useful for others like yourself!


I would really love it if this displayed the bundle size of each package (à la Bundlephobia [0]).

I'm not quite sure I understand how or why this is a business, but there certainly is some base level of utility value. I have a feeling this will ultimately end up encapsulating a job board.

[0] https://bundlephobia.com


We get this feedback a lot, actually - and we do plan to integrate bundle size (e.g. gzipped/uncompressed) into the product soon.

In terms of business model - we're actually thinking about paid promotion of packages: allowing maintainers (companies and individuals) to promote their packages; we found there's a big need for that. We want to limit it to a single (clearly marked) promoted package for each category. We would only surface the promoted package; the package's reviews, insights, and metrics would remain untouched.

Any thoughts on that approach?




Nice, now we can review someone's open-source contributions that they usually worked on for free.


I feel you, as I have tremendous respect and appreciation for open-source maintainers.

Nevertheless, I think most developers already "review", critique, and share their opinions of open-source libraries loud and proud. They just don't have a dedicated platform for doing so.

Like most developers, I've used dozens of open-source libraries, so it's natural that I form some sort of opinion about them. If I'm like most developers, I'll even have some very strongly held opinions and preferences.

I will like some packages and hate others. I'll find some documentation great, and other documentation unreadable. I will insist one project is complex, and another easy to use. That's all fine.

IMHO, the fact that a piece of software was created for free doesn't mean I shouldn't have the right to express my experience of and opinions about that software, good or bad. With Openbase, we just want to give developers the ability to easily learn from the experience of other developers, and to share their own experience with various libraries.


I'm curious whether your project would cause more events like what happened with the Rust actix-web library.

Short story, if you're not aware: it is the fastest web library out there, but the maintainer quit open source entirely. I fear projects like yours would just make this worse.

This may sound harsh, but your rebuttal doesn't really address the issue; it just amounts to 'live with it'.


Just because someone worked on something for free does not mean it's suddenly free of criticism. I'd say it's healthy to criticize even these projects.


When that criticism describes a lack of support for someone's issue, then yes, it is a problem. It's basically a way to coerce developers into giving free support under the threat of bad reviews.


I don't think software that has major issues should be left uncriticized just because someone made it for free. (Individual negative feedback about missing features will be filtered out as noise in the average ranking, so that case doesn't apply here.)

On the other hand, good software that people like will thrive because of this.


Do you plan to provide an API to build cool stuff around this? Congrats on the launch.


Thank you! We do plan to offer an API in the future - there are tons of cool things that could be built around this data. We were even approached by a couple of companies that were willing to pay for such an API! (That was cool.) However, we're currently a really small team, and it's hard enough wrapping our heads around everything we want to do with the website, so we decided the API is something we can pursue as we grow the team.


It'd be interesting if there was some way to measure user satisfaction per release. It would be nice to know which versions are working well for people and which ones are breaking things.


We were actually toying with the idea of showing vertical lines for releases on the various charts. So, for example, on a chart of open issues over time, you could see whether spikes correlate with a major release.

Identifying user satisfaction automatically is trickier (sentiment analysis would be amazing, but we don't have the resources to do that well).

For now, one of the things we might do is ask people to specify a version whenever they rate/review a package (or somehow deduce it automatically based on the date of the review, which would not be as accurate...).


When a user rates a library, they should really be rating a version of it (even if the latest is always assumed). As time passes and new releases are made, ratings over time could indicate whether the project is getting better or worse.
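Something like this hypothetical shape would do it - store the version with each review, then aggregate (types and data are made up, just to illustrate):

```typescript
// A hypothetical data model: each review records the package version it
// was written against, so ratings can be aggregated per release.
interface Review {
  version: string;
  stars: number; // 1-5
  createdAt: Date;
}

function averageByVersion(reviews: Review[]): Map<string, number> {
  const acc = new Map<string, { total: number; n: number }>();
  for (const r of reviews) {
    const cur = acc.get(r.version) ?? { total: 0, n: 0 };
    acc.set(r.version, { total: cur.total + r.stars, n: cur.n + 1 });
  }
  return new Map([...acc].map(([v, { total, n }]) => [v, total / n]));
}

const sample: Review[] = [
  { version: "2.0.0", stars: 5, createdAt: new Date("2020-05-01") },
  { version: "2.1.0", stars: 4, createdAt: new Date("2020-06-10") },
  { version: "2.1.0", stars: 2, createdAt: new Date("2020-07-01") },
];
console.log(averageByVersion(sample)); // Map { "2.0.0" => 5, "2.1.0" => 3 }
```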


Wow, showing a graph of user ratings over time would be amazing (once we have enough data).


Do you have a plan to expand to other projects, not related to JS?


Yep. We're considering Python, Go, Java and a few others.

We have a waiting list here (there's also a link on the homepage): https://docs.google.com/forms/d/e/1FAIpQLScexiehdwDWJwxZJym8...


This looks like a lot of work. How will Openbase make money?


It is a lot of work.

In terms of making money - we're thinking about paid promotion of packages: allowing maintainers (companies and individuals) to promote their packages; we found there's a big need for that. We want to limit it to a single (clearly marked) promoted package for each category. We would only surface the promoted package; the package's reviews, insights, and metrics would remain untouched.


That seems like an interesting choice.

As an individual, I don't think I'd ever pay for promotion of a package unless I had moved my career to providing support for open source packages.

I can imagine companies using it as a marketing exercise for their open-source efforts. That might not be a bad thing, as packages backed by companies tend to be better, more mature, and better supported.

However, I can see this being abused by SaaS companies as a way to advertise their service - i.e., they provide a package that integrates with their service, so while the package is open source, they make money on its use. I can imagine this significantly degrading the quality of packages.


Yeah, I assume the vast majority of paying maintainers would be SaaS companies (for services like monitoring or logging), API companies, and infrastructure companies (DBs, etc.).

I do think the vast majority of these companies already have packages, since they need developers to integrate with their services.


Why not charge maintainers themselves to advertise validation by your brand (once you have established that trusted brand)?

I mean, I see that you are attacking this problem from the analysis and search perspective, which is great and much needed, but there is an intersecting need for security validation and preventing malicious package updates. There is more liability that way, but IMO there is a greater incentive for enterprise customers and/or maintainers to pay for packages you distribute.


That's a really interesting thought - we've never thought of this direction! Are there any successful products/companies with such a business model (providing testing and certification for money)?


I think the reference examples would be certificate authorities, ratings agencies, Underwriters Laboratories, ISO 9000 or SCAMPI inspectors/auditors, organic/kosher/halal/GMO-free food certification companies, etc.

Selling quality marks can be quite lucrative, but you'd definitely need to think about exactly what people are buying. And definitely think hard about insurance.


Awesome. I'm honestly not deeply familiar with most of those industries (except maybe credit rating agencies), but this is definitely a model we need to explore. Thanks!


The parent comment said it well. This could be a good backup model that could serve as a pivot if it turns out that most developers, like myself, tend to be pretty good at finding a package that fits their needs, but aren't going to read through an entire package to see if it's safe or whether it opens connections to some random IP address in the install script.


> we found there's a big need for that.

Would you mind elaborating on this comment? What are some reasons why one would pay to promote their package?

edit: Great stuff btw. Would love to see this for Python/Java packages


Thank you! We actually have a waiting list for those: https://docs.google.com/forms/d/e/1FAIpQLScexiehdwDWJwxZJym8...

Some use cases where package maintainers might be willing to pay to promote their packages:

* API companies that want to promote their product - mainly newcomers to the market looking for developers' mindshare, but probably incumbents as well

* DB and other infrastructure companies that have packages (drivers, libraries, tooling)

* SaaS companies whose users are developers (e.g. monitoring, logging, analytics)

* Some indie developers who are willing to pay a few bucks to get those first thousand users for their package and get it off the ground

* Software development firms/agencies that build packages as a means of showcasing their expertise, using the package to build reputation and get more clients


Very cool idea - surprised there isn't already someone doing this. Annual surveys are interesting, but a bit of a slow feedback loop.


Thank you! Annual surveys are very interesting for seeing the macro trends, but as you mentioned, they can't keep up with the pace of the JS ecosystem. Furthermore, they cover the major categories (e.g. big frameworks or tooling), but cannot cover the hundreds of different categories we use on a regular basis (HTTP request libraries, logging libraries, tooltip libraries, XML parsing libraries, etc.).


I hate to be that guy, but should I really have to switch to a browser with JavaScript just to see the landing page?


Frontend JavaScript keeps the bubble economy going.


Looks great. Can you add test coverage as one of the metrics?


Thank you. You mean a percentage (e.g. 77%), like the one you usually see on a README badge?


Yup, exactly. It gives confidence in using the library.


Kudos to Lior and the team - great product!


Thanks!


Cool. Does it include who backs a project?


Not yet.

We do hope to integrate with Open Collective and others to fetch some of that data!


Do you provide source code auditing?


No auditing capability at the moment, sorry. We focus on helping developers discover and choose packages.


This is neat. Vetting dependencies is an essential yet understated task, and there are simply no existing tools on the market. I started creating a similar tool on the side (by "starting" I mean I just bought a domain name ;-) depvet.io).

Here are some random ideas I've been thinking about; maybe some of them will be useful to you:

Possible monetization strategies:

You mentioned "paid promotion of packages" as a way to monetize, what I had in mind is a bit different: on-premise installation. You provide tools to scan an entire Github org then build your UI based on that. Besides vetting, there are plenty of features you could add:

- Blocklists: it is common for companies to have a list of packages you cannot use. It could be based on a number of criteria like authors, licenses, collaborators, etc. We have one in Confluence, and I always forget to look it up. (A sketch of this idea follows after the list.)

- Commenting/Rating: When looking at a dependency, I usually do a GitHub search to find other teams that use it and ask them about their experience. Sometimes there are specific caveats you can't find online; sometimes we already have a fork that should be used instead. A tool where I can see all projects using a particular package, internal comments, ratings, and maybe even integration code samples would certainly be welcome.

- Security vulnerabilities: While there are plenty of startups in this area, most of them appeal to the security org rather than the engineers. If you integrate the security component into the vetting process, I think you can reach a bigger audience.

- Fork maintenance: forks are a pain to maintain and keep up to date. Sometimes you forget why you had a fork in the first place. I'm not entirely sure what this would look like, but I think it's worth exploring.
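Here's the kind of blocklist check I have in mind - a sketch with invented rules, where the license map would come from whatever scanner you already run:

```typescript
// A sketch of the blocklist idea: check a project's declared dependencies
// against a company deny-list. Rules and reasons are invented.
import { readFileSync } from "node:fs";

interface Rule {
  package?: string; // block a specific package by name
  license?: string; // or block anything under a given license
  reason: string;
}

const blocklist: Rule[] = [
  { package: "left-pad", reason: "unmaintained; use String.prototype.padStart" },
  { license: "AGPL-3.0", reason: "not approved for our distribution model" },
];

// `licenses` maps dependency name -> SPDX license, e.g. from a license scanner.
function violations(pkgJsonPath: string, licenses: Record<string, string>) {
  const pkg = JSON.parse(readFileSync(pkgJsonPath, "utf8"));
  const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });
  return deps.flatMap((dep) =>
    blocklist
      .filter((r) => r.package === dep || (r.license && licenses[dep] === r.license))
      .map((r) => ({ dep, reason: r.reason }))
  );
}

console.log(violations("package.json", { "left-pad": "WTFPL" }));
```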

Challenges:

- GitHub has been doing an excellent job recently. Building all this out is way simpler for them, since they own the source data. Don't be surprised if they take you out of business by implementing similar features. Have a backup plan for when (not if) this happens.

- GitHub API rate limits: 5,000 requests per hour is not much; it may be enough for one language, but you'll eventually have to add more. If you were thinking of using users' OAuth tokens, plan on having a robust security practice, because it will be scrutinized.

- The single most important metric, in my opinion, is not present in your tool: the total number of dependencies. Flaws in indirect dependencies are just as bad as flaws in direct ones, and while I can quickly look at the package.json to determine the first level, there is no simple way to see/vet all transitive dependencies. (A sketch of counting these follows below.)
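Counting them is straightforward if you read the lockfile - a sketch assuming a package-lock.json in the newer format, where every installed package appears under the top-level "packages" key:

```typescript
// A sketch: total number of installed (transitive) dependencies,
// read from a package-lock.json (newer lockfile format assumed).
import { readFileSync } from "node:fs";

function totalDependencies(lockfilePath: string): number {
  const lock = JSON.parse(readFileSync(lockfilePath, "utf8"));
  // "" is the root project itself; every other key is an installed package.
  return Object.keys(lock.packages ?? {}).filter((k) => k !== "").length;
}

console.log(`transitive dependency count: ${totalDependencies("package-lock.json")}`);
```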

I wish you the best of luck. I think you have something here, but I suggest spending some time on your monetization strategy before you go much further.


You know what they say: it's not a startup until you buy the domain name ;-)

Thanks for sharing your thoughts!

We did give the enterprise route consideration - after all, we have all those insights for all these dependencies, which could be really useful in the context of an organization. However, we decided not to pursue it, since that market seems a bit crowded, with many security/license/dependency monitoring and management tools for the enterprise. I agree we could come up with something unique (e.g. integrating it with insights and ratings, as you said), but it's probably not the first path we'd pursue in terms of monetization; there are other monetization models that I think might be easier to develop and test.

100% agree on GitHub doing an excellent job and being innovative and nimble, even as part of a big company (Microsoft). While I think it would be fairly simple for GitHub to introduce more metrics and insights for repos, I think building a reviews/recommendation website requires a different mindset and focus. Also, it doesn't necessarily fit their business model (hosting repos and providing services around that).

Great point about the dependencies! We currently only display direct dependencies, but when assessing the health of a library, it's important to assess all transitive dependencies too. We've somehow neglected that, but will incorporate it into our roadmap.

Thanks for all the ideas! We're a small startup, and we're still putting a lot of our hypotheses (re: product and business) to the test. Let's see what the future holds for Openbase.


Best of luck, sir!


Thank you!


Good initiative!


Thank you!



