vikeri's comments

Will try it out. Regardless of whether I end up using it, I think this project is a great example of the rare kind of engineering craft that seems to be a prerequisite for building truly great software. A great signal if I were hiring.


This is amazing, I wish it had support for Ecovacs too


Dennis Giese - the guy who did & published many of the actual vacuum rooting methods - gave a talk at 37c3 just yesterday! Together with Braelynn he published rooting methods for Ecovacs. Expect an easy & comfortable root using the UART debug connector.

Valetudo support seems to be coming, however "it's done when it's done"; the necessary work is more involved due to the different protocol Ecovacs uses for cloud communication.

Until then: I would make sure not to update the device firmware. Just be prepared to wait and keep an eye on the release notes and/or Valetudo webpage (unlike with other projects, you can safely assume it to be up to date).


Have you tried Stripe's payment links?


Definitely check them out - they're great. Easyful is just a fulfillment layer that plugs into Stripe payment links for emailing customers your content when they buy something.


I don’t have experience, but I thought Subprime Attention Crisis was an interesting read.


Another lesser-known useful thing about these IDs is that you can double-click on them and the full ID will always be selected.


Also, they are safe to use within filenames and directory names (filesystem paths) without conversion (at least on today's filesystems, not ones limited to e.g. 8.3 names).

Compare that with the otherwise nice ISO 8601 datetime format (e.g. 2023-06-28T21:47:59+00:00): it requires conversion for filesystems that don't allow colons and plus signs.
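
A minimal sketch (my illustration, not from the comment above): ISO 8601 also defines a "basic format" that drops the separators, which sidesteps the problem entirely. In Python:

  from datetime import datetime, timezone

  now = datetime.now(timezone.utc).replace(microsecond=0)
  print(now.isoformat())                  # e.g. 2023-06-28T21:47:59+00:00 (':' and '+' trip up some filesystems)
  print(now.strftime("%Y%m%dT%H%M%SZ"))   # e.g. 20230628T214759Z, safe in file and directory names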


This is a setting in your terminal emulator. For me, plain UUIDs are selected just fine when double-clicking.
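
For example (an illustration on my part, not something the parent described), xterm lets you reclassify characters for double-click selection via X resources, so '-' counts as part of a word and a double-click grabs a whole hyphenated UUID:

  ! ~/.Xresources: put '-' (ASCII 45) into the same double-click
  ! "word" character class as alphanumerics (class 48, the class of '0')
  XTerm*charClass: 45:48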


There's life outside of the terminal. For example, you might want to double-click on part of a URL in your browser.


It's mentioned in the README: "can be selected for copy-pasting by double-clicking".


Curious to know to what degree they are self-hosting. It seems they'll buy their own servers, but I doubt they'll go all the way to buying land and building their own data centers?


When I was there, they rented space in a DC outside Chicago (near the airport), and later near NY for a backup location. The DC in Chicago was fantastic. We had a whole load of racks (about a row in the DC), and we bought our own servers. Running our own DC would have crossed the line; it's a huge undertaking that only makes sense at scale if you want to do it well, and Basecamp used a tiny fraction of the DC's capacity.


>We had a whole load of racks

A whole load of racks for running Basecamp? We are talking about 42U per rack, and a total of 420U of servers?

The scale seems quite massive, at least relative to the idea / perception I had of Basecamp. It would be nice to see those specifications, how much of an improvement is possible 10 years later, and whether we could fit it all into 2-3 racks.


*whoops, this got long*

Basecamp started on a single Rackspace server, before I was there. I joined when we were 20 people and left at ~55.

When I left there were, I don't remember exactly how many racks, but more than 10 and fewer than 30 in the primary location, with 42U of servers in each. There was a mix: a whole load of (Dell, never pay list price!) blade servers, DB appliances (~12 in total across ~6 apps, ISTR), Isilon storage[0][1], F5 kit, Juniper routers, etc. We had some epically fast storage in some of the servers for the time, way faster than SSDs.

Later we added two more sites: one in, I think, Virginia, and one in NY. The one in Virginia was a replica of what we needed to run Basecamp; the one in NY was a half-rack data-replication location (I think I got that the right way round). We had 10G fibre (we rented wavelengths, not the whole fibre) between each location. We could lose one DC and remain RW for our block data; lose 2 DCs and we'd have to drop down to RO. Block data was things like uploads, so DBs, search, etc. wouldn't have been affected. We could lose one of the /main/ DCs and still be RW for everything.

With all this kit we were able to run both main DCs hot. With our Geo DNS you could hit either of our DCs and you'd get served pages; you could even write to both locations. One DC was always the "RO" DC, and it always replicated the databases. If you tried to write to that DC, we proxied your request to the RW DC over our 10G links, proxied any further requests you made for n seconds to the RW DC too, and then reverted you back to the RO DC.
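
A toy sketch of that routing rule as I understand it (the real thing lived in their F5/OpenResty layer; PIN_SECONDS is a placeholder for the unspecified "n seconds"):

  import time

  PIN_SECONDS = 30   # placeholder for the "n seconds" above
  pinned = {}        # client id -> time their RW pin expires

  def route(client_id, method):
      # Routing decision made at the RO datacenter.
      now = time.time()
      if method in ("POST", "PUT", "PATCH", "DELETE"):
          pinned[client_id] = now + PIN_SECONDS
          return "proxy to RW DC"   # writes always cross the 10G link
      if pinned.get(client_id, 0) > now:
          return "proxy to RW DC"   # recent writer stays on RW for read-after-write
      return "serve locally"        # plain reads are answered by the RO DC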

Now, NY to Virginia isn't that far, so why bother with the hot/hot config? Because it played into the rest of our plan, which was DC failover. With some pretty epic voodoo (Juniper/F5/OpenResty etc.) we could fail over the datacentres, swapping the RO and RW locations. We could also do this if one of the locations was unavailable. We could do this in 4 seconds /without losing a single in-flight request/ (we tested it).

This ended up a bit longer than I was intending, but it illustrates a few things:

- I think people underestimate Basecamp. It's /huge/ (money and users, not employees). Not so much in the tech world (anymore), but even with this kit, even with the (at the time 6) sysadmins that maintained it, it still made a shitload of cash. I guesstimated the net worth of the two owners at hundreds of millions of dollars each, entirely because of Basecamp. Cash that, as a privately owned company, all went to the owners (who gave some of it to us; they treated us fairly well). I think it was Patio11 who said that people underestimate the market for software; it's easy to do, as these aren't human-scale numbers.

- The owners are right about keeping it private. You might not make "larger yacht than Larry Ellison" money, but you sure as hell have a much higher chance of making "pretty damn big yacht and never having to work again" sort of money. If I were rolling the dice, I know which I'd gamble on.

- I don't think I ever actually calculated it, but the dollar value of our servers while running was /immense/. When you went to the DC and looked at the row of racks, you could kinda see just how dense the $ value was. The efficiencies that the cloud brings give money to AWS, not to the clients.

- All of this was still /way/ cheaper than running cloud infra. I'm not against the cloud (I use it), but we made more because we had this infra than we would have on the cloud. We were dense when it came to # of customers per $ spent on infra.

- The flexibility we had because we controlled everything was also extraordinary. It came at a cost (we had to do it ourselves), but we had a /lot/ of power to make things work the way we wanted to.

- We had access to everything, from the network (hell, even the light :) ) to the JS. This gave us optimisation options not available to a lot of people. Network topologies, buying the right CPUs (we benchmarked them), configuring the CPUs, etc. etc.

[0] An Isilon storage engineer once dropped one of these on the floor, while it was powered on and the drives were running. It took out a metal floor tile.

[1] They were a PITA; everyone hated them. There was a running joke among the sysadmins that when we decommissioned them we would take one to an Ops meet and use it as target practice.


That is about the amazon.com architecture as late as 2006 (only at slightly higher scale: three entire datacenters scattered around Virginia and, I think, dual 40G links between datacenters, but other than that exactly the same principles).

A lot of people think that DC failover needs to be east coast / west coast, but you can achieve most of your DC redundancy goals with sites separated by 100 miles or less, and get much lower latency, higher bandwidth, and lower costs. You might want to think about different geographic flood plains and different power companies / grids, though.

You could still nerd-fight about EMPs from nuclear war, a sufficiently massive hurricane, or an earthquake out here on the west coast, but at some point you need to accept some risks.


The reason I ask is that I remember DHH stating they were doing 2K RPS with 30 app servers in 2015-2016, and that they were on one primary DB (at least that's what I jotted down in my notes). I was assuming they could fit multiple "app server" nodes inside a single 1U blade. But even at 1U per app server, that would be 30U, plus likely a powerful 4U DB monster, along with probably some cache instances.

Even if the numbers above don't include redundancy, that still only makes it 2 racks of servers with spares.

30 racks is a lot. What am I missing here, apart from storage?


The place I work (CTO for now, looking for opportunities pretty soon) can sustain nearly 400 RPS through a Rails app on a single Performance-L Heroku dyno without breaking a sweat (though we run two minimum), but that's the tip of the iceberg. The infra supporting those web servers is far bigger: Aurora Postgres x 3 for now, a large ES cluster, 2x Redis clusters, memcached, etc.

Bear in mind that we had, I think, 6 apps: Basecamp 1, 2 and 3 (all on separate infra), Highrise, Campfire, and some other internal stuff. Our ES cluster was pretty damn big, Redis and memcached too. Juniper, F5, network switches (rack infra), etc. Storage was pretty big: quite a few 4U servers with spinning rust. The blade servers were, I think, 6 blades in 2U.

Everything was redundant, everything had 2x or more. I honestly can't remember how many racks we had. "More than 10" is my hand-wavey guess.

I don't think DHH was being disingenuous with his "2K on 30 servers" message; it was likely more about the scalability of Rails than about the infra required to run the app.


Do you guys define racks the same way?


I think so, 42U of full-depth server capacity.


It would make little sense when there are plenty of bare-metal providers who are happy to provide this as a service at a reasonable price.


Who told you they need to build a massive data center? They are NOT RENTING SERVERS but buying space for the ones they use. With their budget they can afford super-beefy rack servers that occupy very little space. With a single rack they can have a ton of latest-tech servers.


Having been at a Clojure startup I can second this: despite our nonexistent brand we found great candidates. And teaching a smart junior developer Clojure wasn't harder than teaching people Ruby or Java. Once the initial Lisp shock is over, learning is much faster due to the simplicity (it's just data).


Good point! In my experience, you don't need to hire "Clojure developers". Look for good candidates that have worked with functional programming languages and you'll be fine. They'll get up to speed in a matter of weeks. Any flexible and educated developer can use Clojure, it's not magic.

In other words, it's not about hiring a "Clojure developer", it's about hiring a good developer.


My understanding is that Stripe billing can be managed completely no-code from the Stripe dashboard? And I think it includes automatic tax, prorations etc.


The fact that Apple's developer tooling is described as great makes me doubt the author has used JS tooling in a serious way. VS Code is just so much better than Xcode. My experience developing native apps in SwiftUI is also that the documentation is very, very lacking. And lastly, I wasn't able to get a hover effect without dropping tons of frames. My lack of experience is surely partly to blame for the app's poor performance, but it certainly wasn't performant out of the box. React would have been significantly more performant for my use case.


Web development is almost hell compared to developing with Apple tooling. At least that's been my experience. I primarily do web development when it comes to UI, but I dream of going back to Swift, Xcode and macOS.


I strongly disagree. Xcode is so bad compared to VS Code and JetBrains that I have to believe Apple engineers are forbidden from using anything else, because they don't seem to want to improve it.

My experience was primarily driven by C/C++ development, though. I've heard better things about Swift, but I just can't grok how the hell Xcode projects are supposed to be used at all, and the tool is so slow it's unusable.


I agree that in terms of pure editor and IDE capabilities, JetBrains is better than Xcode. I am rather thinking of the actual Apple ecosystem, with the SDKs, APIs, etc.


Not my experience. I was happy to leave Xcode behind for VS Code after years of iOS development.


Xcode is so busted, and so are all its APIs and documentation. I don't know how people are arguing with you.

I'd prefer Electron even if it weren't cross-platform. I could write the app 4 times in Electron faster than I could install Xcode.


Wrong

People who say such things have no idea what they are talking about.

Debugging with VS Code is a joke, and there is no GUI preview for Cocoa/UIKit/SwiftUI.

Xcode has many flaws, but it's properly integrated and helps you make beautiful native apps.

VS Code is a UX nightmare.


VS Code feels so dependent on third-party extensions and custom config, and I’m sure OP has their VS Code settings _just right_ for them, but that does not make it a good editor for everyone.

I love this community, but the people who are comfy configuring Linux and can’t understand why everyone doesn’t roll their own solutions generally don’t know shit about what good UX is.

VS Code ain’t it, that’s for sure.


Installing the most popular LSP implementation for the language you're using is hardly comparable to setting up, like, an Arch install.


Not really fair to compare the dev experiences _for macOS development_ across VS Code and Xcode. I wouldn't recommend using VS Code for Cocoa/UIKit/SwiftUI dev.

I'll say that working on TS/React in VS Code is head and shoulders above Swift in Xcode from a DX perspective. I had to routinely restart Xcode to verify that type errors were actually there, it was incredibly slow, and the Swift compiler would give up on highly polymorphic types and red-line them.


I'm not going to be objective, but Xcode is the single worst piece of software I have ever used.

12 GB to download, more sluggish than the worst Electron app, terrible UX, undocumented config files messing up git, terrible error messages, an app-upload process so unreliable that even Apple had to create a separate app for it, and it's tied to the OS version.

Developers have no choice; it's clear, and they know it.


JetBrains had a go at an Xcode alternative a few years ago; I wonder if that ever took off? AppCode, I think it was called?


This is a sub-theme in the book "Subprime Attention Crisis", which more broadly argues that because programmatic advertising is based on mechanics similar to those of financial systems, it's vulnerable to bubbles in the same way. An interesting read, but I would have loved some more data on the actual efficacy of programmatic advertising.

