
There are a bunch of UX differences between an iPad and a laptop when connected to a docking station that make using an iPad that way not quite satisfactory. For example, the iPad's screen always has to be on: while you can choose to either mirror or extend your desktop environment, you can't drive only the external monitors and close the lid like you can with a laptop.

Great news: what you're wishing for mostly already exists [1]. The relatively new sysupgrade server and attended-sysupgrade clients automate building a custom image that matches the already-installed packages. After the breaking network config change a few years ago, it's now pretty safe to keep config files and ssh keys while flashing an upgrade image. The end result is that I can seamlessly and painlessly update my OpenWrt boxes with just a few clicks or commands, despite loads of installed packages and config files.

[1] https://sysupgrade.openwrt.org/
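For the command-line route, a minimal sketch (assuming the `auc` attended-sysupgrade CLI client package that ships in recent releases):

    # install the attended sysupgrade CLI client
    opkg update && opkg install auc

    # check whether a newer image is available, then build and flash one
    # that keeps the current package set and configuration
    auc -c   # check only
    auc      # request image from sysupgrade.openwrt.org, flash, reboot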


Thanks, I just used luci-app-attendedsysupgrade to upgrade from 23.05.0-rc2 to -rc3 with minimal pain. The only problem is that it lost my scripts in /root.
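For next time, sysupgrade only preserves paths it's told about; the standard fix (via the stock /etc/sysupgrade.conf mechanism, which lists extra paths to keep across a flash) would be:

    # paths listed here survive a sysupgrade image flash
    echo '/root/' >> /etc/sysupgrade.conf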


Chrome on iOS integrated with the OS password system several years ago; it's just disabled by default, and the setting isn't easy to find.


SQLite would bring simpler installs, reduced administration needs, and a decreased attack surface.
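To illustrate the "simpler installs" point, a quick sketch with the stock sqlite3 CLI (the file and table names are arbitrary): no daemon to run, no accounts to manage, and the whole database is one file.

    # create, populate, and query a database with zero setup
    sqlite3 app.db "CREATE TABLE notes(id INTEGER PRIMARY KEY, body TEXT);"
    sqlite3 app.db "INSERT INTO notes(body) VALUES ('hello');"
    sqlite3 app.db "SELECT id, body FROM notes;"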


These old ESS switches are currently being retired en masse in favor of packet-based softswitches. For example, see the Verizon network disclosures page [1]. Every listing for "Planned Network Change" or "Switch Retirement" is another 5ESS or DMS100 being ripped out.

[1] https://www.verizon.com/about/terms-conditions/network-discl...


Packet-switched voice was such a mistake. Even under great conditions it is ROUTINELY worse than the worst days of circuit switching and TDM.

Kids these days will never know how well phones worked in the 80s and 90s. Fuck.

Give me that circuit-switched reliability over high-res audio with occasional/intermittent/unavoidable packet loss and jitter any day of the week.


The holiday bombing in Nashville provided clear evidence of what we've lost in terms of PSTN resiliency. I don't think there would have been a region-wide loss of service lasting more than 24 hours in, say, 1994.


Who cares about reliability, it is not as if lives depend on it. /s

If I look at the availability of phone service in my home (German metro area, cable), it is way below what anyone would have dared to offer in the TDM/ISDN days. I strongly believe it's only the redundancy of everyone also having a mobile phone that has allowed this to continue without lives being lost or regulators clamping down.


Are there statistics, or is this personal experience?

I've never had a landline phone fail, but I also haven't had one for about 12 years.


I have had a landline phone fail: a tree fell over on the next block down and took the wires with it. Also, many years ago, I was in a building where we found the phones would work without power, but they wouldn't ring. While waiting to go home, we would resort to "polling": pick up a line and see whether anyone was on it.


Yeah, but that was also an era with vastly less inter-city long-haul capacity. Properly implemented Opus on a network with under 1% packet loss sounds great.


Yeah, but how often are two people connected by networks that maintain <1% packet loss and low jitter for the entire length of a call?

I'm thinking of just shipping 79xx phones to my close friends and telling them to plug them directly into their routers, but even that will go to shit around 20:00 local time as their residential networks overload.


The weakest link is absolutely the contended, residential-class last-mile access network. I've regularly seen 0.00% packet loss for weeks at a time on business-class connections between two points in the continental US, but when one end is even slightly flaky, all bets are off.
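If you want to check your own last mile, a quick sketch with standard tools (the target host is arbitrary):

    # long-running loss/jitter sample; mdev in the summary approximates jitter
    ping -c 500 -i 0.2 example.com | tail -2

    # per-hop loss report to spot where the flakiness starts
    mtr --report --report-cycles 300 example.com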


Not quite - there are a bunch of CS2K retirements listed there; the CS2K was the softswitch evolution of the DMS100. I'm personally a little sad about those retirements, as my first professional job was on the small team at Nortel developing the prototype of a DMS100 call server controlling a packet switch fabric. We started with ATM but moved to Ethernet when it became clear that was going to be the packet fabric of choice. Still, it's had a good innings, and they were fun times.


I hadn't heard the name Nortel in a long time - I used to work for a company that had a lot of Nortel phone switches. Wikipedia says that they're now defunct, which makes me a little sad.


The dot-com bust, mismanagement, and accounting shenanigans took it from ~1/3 of the entire value of the Toronto Stock Exchange and an employer of almost 100K people to bankruptcy within the span of a decade.


It is physically impractical to build long-distance underground high-voltage AC transmission lines due to cable capacitance: a buried cable's charging current grows with length until it consumes the conductor's entire current rating: http://www.puc.nh.gov/2008IceStorm/ST&E%20Presentations/NEI%... HVDC wouldn't have that issue, but there's not much experience with HVDC in North America.
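A back-of-envelope sketch of why length is the killer (the cable figures below are illustrative assumptions, roughly in line with a 400 kV XLPE cable):

    awk 'BEGIN {
        f = 60            # Hz
        C = 0.2e-6        # farads per km, assumed cable capacitance
        V = 230e3         # volts phase-to-ground for a 400 kV line
        rating = 1000     # amps, assumed conductor ampacity
        ic = 2 * 3.14159265 * f * C * V   # charging current drawn per km
        printf "charging current: %.1f A/km\n", ic
        printf "length where charging current alone hits the rating: %.0f km\n", rating / ic
    }'

With these numbers the cable carries no useful power at all beyond roughly 60 km, which is why long underground AC runs need reactive compensation stations along the route or simply aren't built.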


Why on earth are you not using HTTPS?


Right now there is nothing to protect because the API is inherently open (on purpose, to make it easy). But next week we will be introducing a security token mechanism to let you "lock" machines. At that point HTTPS will be important and will be required. So yes, it's coming...
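The likely shape of that, purely as a sketch (the endpoint path and header are hypothetical; the actual mechanism is still to be announced):

    # the secret token rides in a header over TLS so it can't be sniffed in transit
    curl -H "Authorization: Bearer $TOKEN" \
         https://api.example.com/v1/machines/42/lock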


It's only available in Canada right now.


So far, all pricing is "to be announced". Also, the only compute location in the US currently available is Virginia.


Even though base64 increases the raw size by about a third, this is largely mitigated by gzip or deflate encoding at the webserver: base64 output uses only 64 symbols, so each 8-bit character carries about 6 bits of entropy, and Huffman coding claws most of the expansion back. The actual transmitted size is only about 5% bigger.


I measure less than that:

    dd if=/dev/urandom bs=1024 count=64 | base64 | gzip | wc -c
    64+0 records in
    64+0 records out
    65536 bytes (66 kB) copied, 0.0127386 s, 5.1 MB/s
    67302
This particular run comes out to ~2.7% overhead, and in fact it's very repeatable. The half dozen runs I did were within 20 bytes of each other.


The overhead is slightly worse when it's compressed alongside normal HTML content (the Huffman trees aren't so favorable).

    $ wget http://en.wikipedia.org/wiki/ASCII
    $ (cat ASCII;dd if=/dev/urandom bs=8000 count=1 | base64) | gzip | wc -c
    50578
    $ (cat ASCII;dd if=/dev/urandom bs=9000 count=1 | base64) | gzip | wc -c
    51621
Around 4% overhead. Either way, it's negligible.


Dead on! The overhead with gzip is very tiny. Payload size also shouldn't be a factor at all once the response is cached.

