LINQ is just an "express your query as an AST" approach. Sure, LINQ presents itself as a DSL on top of that AST ( which is exactly what SQL is at the end of the day ), but there are libraries for MongoDB that allow you to do the same thing. There is even a LINQ driver for MongoDB. What you are missing is that by having the lowest common denominator be the AST instead of a DSL, you have removed a huge burden from both the server and client side of the operation. From that AST you can build any kind of client-side interface without worrying about how to parse or compile the result into a specific textual language; even plain old SQL can be used, if you like your server spending its clock cycles parsing text.
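To make the AST point concrete, here is a rough sketch in Rust (all names are mine, not from any actual LINQ or Mongo driver) of what "the query is just a data structure" looks like: the client builds the tree directly, and the server never has to parse text.

  // Hypothetical query AST: the client constructs this directly, so no
  // server-side parser is needed. A LINQ provider, a Mongo driver, or a
  // SQL generator could all target the same structure.
  enum Expr {
      Field(String),
      Int(i64),
      Eq(Box<Expr>, Box<Expr>),
      And(Box<Expr>, Box<Expr>),
  }

  fn main() {
      // Roughly: WHERE poll_id = 42 AND value = 7
      let _query = Expr::And(
          Box::new(Expr::Eq(
              Box::new(Expr::Field("poll_id".to_string())),
              Box::new(Expr::Int(42)),
          )),
          Box::new(Expr::Eq(
              Box::new(Expr::Field("value".to_string())),
              Box::new(Expr::Int(7)),
          )),
      );
  }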
I like mongo's interface (most of the time)... but when you get to more interesting queries, it gets far weirder than even SQL. I wrote an early node API almost mirroring the JSON query interface (it was easy enough to do filtering on sensitive fields), and it worked really well.
That said, it was really clunky dealing with ordering, since JSON objects don't guarantee key order (which matters for sort fields), and that was cumbersome to work around, IIRC.
Today RethinkDB would be my first choice for a similar solution... better ops interfaces, with replication + failover built in, instead of having to run replicas of a sharded system in mongo to get distribution with redundancy. Also, while the query interface is a little more complicated to get started with, when your queries get more interesting it's not nearly as messy as mongo gets.
From his willingness to post on an internet forum about his faith and about the historical record of the Bible, I imagine this isn't his first time having such an argument, and anyone who has presented and defended their faith in _any_ forum, but especially on the internet, knows that the argument is generally a waste of time. We could rehash the centuries of debate about Jesus here on this thread, but do you think it would change anyone's mind?
The issue is that this is HN. I'm an avid Christian myself, and can surmise that jorangreef probably really knows his stuff. But HN probably isn't the place for this kind of thing.
On the Internet, I've met very few atheists or skeptics who really are able to play equally on the same playing field. Oh, if we debate physics and whatnot, sure, they often do great. But that's a completely different playing field. Debating the historicity of scripture is very different from debating the origin of the universe, just like how ice hockey is completely different from beach volleyball. And besides, if you know your Hebrew, you'll also know that a lot of atheists and skeptics likely misinterpret scripture when it comes to the origin of the universe, so that debate is actually a moot point; but that's OK, so do many Christians who don't know Hebrew. It's like two people arguing about how to grill a steak when what they're actually doing is boiling eggs. But they're convinced they're grilling steak. It makes absolutely no sense. Anyway.
My point is that although having this debate may have an impact on people (hey, can't disagree with you on the possibility), I imagine that if we went all-out to have a debate on faith-related topics, it would not be appreciated by the HN community. The HN community is not here to debate faith. As much as I'd like that, being an avid Christian myself, I have to respect that the community is here to discuss cool technology, software, trends, business, and the odd quirky, geeky factoid/story. This falls under quirky and geeky. But if it turns into an all-out debate about faith, I imagine it will no longer be seen as quirky and geeky cool. It will be seen as an annoyance.
I can't speak for jorangreef, but I imagine his feeling is similar to what I felt when I saw some comments about death.
Here's another thread where I wanted to participate, but I didn't, because right away the tone of the poster told me that the discussion would not be productive. His tone was hyperbolic, overly emotional, and absolute: https://news.ycombinator.com/item?id=10371259
When you get the feeling from a poster's tone that he's not going to be open-minded about stuff, why stir the pot? It's not productive. If anything, it starts to get trollish. And trolls aren't appreciated on HN.
My experience is that discussions of this nature can be good on HN, given participants with a mindset of learning and exploring rather than preaching and fighting.
For example, I suspect that a post describing the major viable positions on the historicity of Jesus, and their strengths and weaknesses, would be well-received. A post asserting one of those positions without giving mention to the others (like the pair that touched off this subthread) would be less well-received. Because the first type of post helps give people the tools they need to evaluate and draw their own conclusions, while the second type of post merely tells us what conclusions someone else has drawn and invites us into a fight.
Most people on HN like to learn, even about religions they don't practice, and this is a great place to find people to learn from. But most people on HN don't want the sort of lame talking-past-each-other and name-calling that you can get on any other website.
I think if you read my post again, you will see it's just a short list of facts intrinsic to the New Testament documents, capped with an invitation for open-minded investigation.
For example, the point about thousands of manuscript copies is not me asserting a position. I don't think it's debatable that there are an overwhelming number of copies available for textual criticism compared to documents of the same period?
Similarly, the point about the testimony of women being relied on at key points in the gospels is something intrinsic to the gospels?
I don't think the date ranges given were particularly subjective either. I am not aware of any evidence supporting a post-1st-century dating of the documents, nor of this being seriously in dispute?
> "it's just a short list of facts intrinsic to the New Testament documents"
It's a list of facts intrinsic to the New Testament documents, followed by an assertion. There's no reference made to any of the potential issues that come up when investigating the historicity of Jesus.
Now, to be clear, I believe in a historical Jesus, Savior, Messiah, second person of the Trinity -- I'm doctrinally quite orthodox. But I think when you're arguing for a historical Jesus, it's important to be clear on what the evidence both does and doesn't say. It's important to point out that, for example, the large number of manuscripts helps us determine that the story wasn't modified over time, but the accuracy of the original stories must be judged using other criteria (including some of the criteria you pointed out.) It's important to note that "unchanged" and "accurate" are independent questions.
As for the date ranges themselves, one of the most interesting arguments I've heard has to do with the distribution of names in the canonical Gospels. Modern archaeology has given us a fairly good idea as to the most common Jewish names in the region in that era, as well as in later and earlier eras and in other regions. And the canonical Gospels show the same pattern as early 1st century Palestine -- there are several characters named Simon, the most common male name in the archaeological record, and the name is treated like a common name that needs to be clarified (so you see things like "Simon Peter", "Simon the tanner", etc.) In short, the people who wrote the Gospels were clearly familiar with Jewish names in first-century Palestine -- people who lived there during the time of Jesus and then scattered before the fall of Jerusalem. Compare this to the various gnostic gospels, which hardly ever use names outside of Jesus and whoever their purported author was (Thomas, Mary, etc.) The lack of ordinary details like names and locations in those writings points toward their being mythology.
All fair points, and ones that I've taken to heart in my years of having discussions on and offline. That said, I personally don't think the comment jorangreef dismissed was particularly abrasive. The remark that caused umbrage, about the historicity of the gospels not being a necessary component for forming belief, is a line that has been touted by many Christians. It could be said that it's unfair to regard Paul as 'slightly insane', but similar claims have been made about Mirabai, Alexander, and Muhammad. From their followup, it seems that penguin82 didn't intend to cause offense.
Even at 300k points per second, PostgreSQL or MySQL can easily service your needs. The scheme that this article presents is really about batching writes to disk to get high throughput.
mysql> CREATE TABLE test.polls_memory ( id INT(11) PRIMARY KEY AUTO_INCREMENT, poll_id INT(11), time DOUBLE, value INT(11)) ENGINE MEMORY;
mysql> CREATE TABLE test.polls_disk (id INT(11) PRIMARY KEY AUTO_INCREMENT, poll_id INT(11), time DOUBLE, value INT(11)) ENGINE MyISAM;
#> for i in `seq 1 500000`; do echo "INSERT INTO test.polls_memory (poll_id,time,value) VALUES ($i,UNIX_TIMESTAMP(),$RANDOM);" ; done | mysql -uroot
mysql> INSERT INTO test.polls_disk SELECT * from test.polls_memory;
Query OK, 510001 rows affected (0.95 sec)
Records: 510001 Duplicates: 0 Warnings: 0
And this was just on my laptop. I know that on enterprise-grade hardware I can get that write rate up to millions per second. The question isn't getting it to disk in batches. It's the read patterns you can support once you decide you are going to batch your writes.
Yet your measurement includes no networking, no parsing of statements, no concurrency...
Repeat the experiment with 200 different clients hitting the server over the network with 1,500 insert statements each, every second. Then you can compare MySQL with the InfluxDB use case cited as an example in the article.
The whole point was that if you are going to batch writes to disk to get high throughput, then PostgreSQL or MySQL can easily sustain the same rates as InfluxDB ( which my little example shows ). I wouldn't have MySQL parse 500,000 insert statements per second, but I would have a proxy in front of it that could take in 500,000 data packets per second and then, at some defined interval, write them to MySQL.
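As a rough illustration of the shape of such a proxy (my own sketch in Rust, with made-up names and the actual MySQL client call stubbed out as a println), the whole trick is just buffering points and emitting one multi-row INSERT per flush interval:

  use std::time::{Duration, Instant};

  // Hypothetical batching proxy: accept points as fast as they arrive,
  // hit MySQL only once per flush interval with a single multi-row INSERT.
  struct Proxy {
      buf: Vec<(u32, f64, i32)>, // (poll_id, time, value)
      last_flush: Instant,
      interval: Duration,
  }

  impl Proxy {
      fn accept(&mut self, poll_id: u32, time: f64, value: i32) {
          self.buf.push((poll_id, time, value));
          if self.last_flush.elapsed() >= self.interval {
              self.flush();
          }
      }

      fn flush(&mut self) {
          if self.buf.is_empty() {
              return;
          }
          let values: Vec<String> = self.buf.iter()
              .map(|&(p, t, v)| format!("({},{},{})", p, t, v))
              .collect();
          // A real version would hand this statement to a MySQL client
          // instead of printing it.
          let stmt = format!(
              "INSERT INTO test.polls_disk (poll_id,time,value) VALUES {}",
              values.join(","));
          println!("{}", stmt);
          self.buf.clear();
          self.last_flush = Instant::now();
      }
  }

  fn main() {
      let mut proxy = Proxy {
          buf: Vec::new(),
          last_flush: Instant::now(),
          interval: Duration::from_millis(100),
      };
      proxy.accept(1, 1444500000.0, 42);
      proxy.flush(); // one INSERT covering everything buffered so far
  }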
I know it's not as sexy as InfluxDB, but the old workhorses are tried and true, and if you use them right they are as good as, if not better than, some of the shiny new toys.
So I added an index on (poll_id,time), since that's a more likely index for various reasons.
mysql> ALTER TABLE polls_disk ADD INDEX `timestamp` (poll_id,time);
mysql> INSERT INTO test.polls_disk SELECT * from test.polls_memory;
Query OK, 510001 rows affected (1.78 sec)
Records: 510001 Duplicates: 0 Warnings: 0
So the index takes me down to 286k points per second. Again, this is just on my laptop, but it illustrates an important point: I can write stuff down to disk super fast ( especially if I can batch it ), but reading it back out is where the problems are.
As far as a proxy in front of the datastore goes, batching up data packets and handing them off to the data store is several orders of magnitude less complex than a really well-designed data store, so the question is where I want to roll custom software and where I want to leverage the broader community. I'm not saying there isn't a place for the new tools, but I think we jump to them much more quickly than we should in a lot of situations.
Creating a time-series table is easy. However, you will quickly experience several challenges:
(1) Table scans to extract just a few columns will be extremely inefficient. It's better to create one table per column, each being a pair [time, value].
(2) B-tree indexes will be inefficient for large extracts. For example, aggregating by month over one year will basically not use the indexes. The index will mostly be useful only for looking up individual metrics, which is not what you need here. The new BRIN indexes in 9.5 should help, however.
(3) You'll end up reaching for something like Postgres' partitioned tables, which allows you to create child tables that are visible from the top-level table. Each table needs to be sharded by something like the current day, or week, or month.
(4) You say you need a proxy to batch data packets. So it's a queue, and it needs persistence, because you don't want to lose data if Postgres is lagging behind — your proxy is now a time series database! And if Postgres is lagging behind permanently, you have a problem: if you're writing at 600k/s and Postgres can only handle 500k/s, you have a never-ending queue.
(5) PostgreSQL can only have one master, and its write capacity has a fixed ceiling (for HDDs you can calculate the max possible TPS from the rotational speed; not sure about SSDs). At some point you'll end up sharding the master (by column, probably) into multiple concurrent masters.
Much of the point of something like InfluxDB is that there's no single bottleneck. You scale linearly by adding more nodes, each of which can write data at top speed. You don't need a "proxy" because it's designed to perform that very same function.
Only reading or only writing is really simple, compared to sustaining high load on reads and writes simultaneously, especially while flushing things to disk in a way that doesn't lose acknowledged data on power loss.
That doesn't mean most people can't do well on relational databases, as most people simply don't need both at the same time.
Try running that benchmark again on a table that already contains 500M records. I'm quite sure you'll see different numbers. Another problem you'll run into is reading those records back, and performing aggregations and window functions on your data. In my experience PostgreSQL is not a bad solution for time series data, but it's certainly not a perfect fit.
If I take the index off, the benchmark will run just about as fast at 500M rows as it did at 0 rows. With the index, at 500M rows MySQL's B-tree starts to fall over, but I have plenty of options to mitigate that. I don't think anything is a perfect fit for time series data, but again, my point was that the traditional relational databases are _really_ good, and they do in fact scale well beyond what most people think they will.
Nothing in that list makes me think Rust won't ever be a good language for game programming. It reads more like these are the current pain points that need to be addressed, and even with that list there are no real showstoppers there. Most are just nuances that they would like to see addressed.
I have fully switched over to Rust for my web backends. I use Nickel.rs and it has worked great. The code is very succinct and fast. The only downside I have found is that there is no "hot reload", which I had with Flask (Python) and Play (Scala).
As the ecosystem grows I only expect it to get better.
I'm using Iron instead of Nickel. Not sure who will become more popular over time...
What was behind your decision to pick Rust? My understanding is that Rust is meant as a replacement for C++ in places where C++ performance is needed (browsers, systems programming (maybe)), etc. A web server does not fall into that category, so why did you pick it?
"A web server does not fall into that category, so why did you pick it?"
That's too categorical a statement. It depends on a lot of things. Sure, there's a place in the world for PHP pages that take two or three full seconds to render, but it doesn't matter because they're doing something useful and don't get hit a thousand times per second. There's also a place for things that render in under a millisecond because they're getting hit at an incredible rate, and if they also have some sort of interdependencies then Rust can make a lot of sense.
Plus as I've said before, the whole "you're stuck in IO anyhow so who cares how fast the render code runs" dogma has sometimes gone too far. (I know you didn't say anything about that, but I'm guessing based on experience it's what lies behind your point.) My experience in switching from slow languages to fast languages, even doing the same IO, is that you do tend to see real performance gains unless your page was absolutely trivial. Slow languages really are slow in practice.
> That's too categorical a statement. It depends on a lot of things.
I didn't mean it to be. Of course there are extreme cases where GC pauses are unacceptable on a web server, but those are relatively rare. Even Google mostly uses Java (although they are heavy C++ users, is it for their web servers?).
There are always extreme cases, which is why I asked the question.
> My experience in switching from slow languages to fast languages, even doing the same IO, is that you do tend to see real performance gains unless your page was absolutely trivial. Slow languages really are slow in practice.
Slow vs. fast is not black and white. Is Go slow? Is Lua slow? Is Java (on a warmed-up JVM) slow?
We're talking about GC vs. non-GC languages here. Do you really see a difference in performance gains?
We're talking specifically about the web application servers here. I wouldn't be surprised that something like Search is C++, it's one of the most visited pages in the world. But I haven't heard specifically that it is.
I don't think it's any secret that C++ is huge at Google. If not web servers, then for what?
That's the amount of code in their repo which covers a lot more than web application servers.
This is an unnecessary tangent though. As I've said throughout this discussion, there are extreme cases, and no doubt Google is likely one of them. But we're talking about the common use case, for which I still haven't seen any evidence that GC pauses are a debilitating factor.
You may already be familiar with this and are specifically concerned with GC. However, for what it's worth, it's not just the absence of garbage collection that makes languages like Rust and C faster. A huge piece of their performance is the level of control that they offer over types' layout in memory and where things get stored. Rust's preference for stack allocations and the absence of a mandatory per-Object size overhead (a la Java) really allow it to shine.
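For example (a toy of my own, not something from the parent comment), a vector of plain structs in Rust is one contiguous allocation with no per-object headers and no pointer chasing:

  // A plain-old-data struct: no vtable, no object header, no GC metadata.
  struct Point {
      poll_id: u32,
      value: i32,
      time: f64,
  }

  fn main() {
      // Stored by value: a million points live in one contiguous heap
      // allocation, rather than a million separately boxed objects.
      let points: Vec<Point> = (0..1_000_000)
          .map(|i| Point { poll_id: i, value: 0, time: 0.0 })
          .collect();

      println!("size of Point: {} bytes", std::mem::size_of::<Point>());
      println!("points stored: {}", points.len());
  }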
Once you start seeing hundreds or thousands of requests coming in per second and your CPU utilization skyrockets, the "slowness" of the interpreted languages really comes into full play. Most apps/websites will never reach that kind of load or scale, though.
> interpreted languages really comes into full play.
There are plenty of non-interpreted languages that don't require GC though. When you pick Rust you are picking manual memory management. Does even Google opt for non-GCed languages for their web servers?
We've been having a debate lately, within Rust, over whether "manual" is really the right term here.
{
    let x = Box::new(5); // malloc
} // free
Is this really _manual_? In a sense it is, but it's very different from what most people think of. "Automatic" is kind of a good word, but Objective-C/Swift's "ARC" already covers that, and we don't do refcounting, so that could be misleading too.
There hasn't been an RFC yet, because we're in the middle of a large amount of compiler internals work (HIR/MIR), which will make overall analysis easier, including these two features.
> (I can't reply there for some reason.)
HN limits responses based on the depth of the comment tree and the length of time since the comment was posted, to discourage flamewars. You can always click on a comment's link directly to sort of bypass that, though.
Borrows may be non-lexical in the future, but deallocation via ownership will always be at the end of a lexical scope. Not only would eager deallocation be bad for performance, it's also a backcompat hazard at this point.
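A small illustration of that distinction, using my own example rather than anything from the thread: the last use of a value may come early, but its destructor still runs when the enclosing lexical scope ends.

  struct Noisy(&'static str);

  impl Drop for Noisy {
      fn drop(&mut self) {
          println!("freeing {}", self.0);
      }
  }

  fn main() {
      let a = Noisy("a");
      {
          let b = Noisy("b");
          println!("last use of {}", b.0);
          // Even if borrows of `b` become non-lexical some day, `b` itself
          // is still deallocated here, at the end of its lexical scope.
      } // prints "freeing b"
      println!("still using {}", a.0);
  } // prints "freeing a"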
I'm thinking Wordpress, or bespoke status pages for internal apps. Of course you can get a PHP page out in under a millisecond if you try hard enough, but it's often not used in that sort of place. (And it often is, but that still doesn't mean it often isn't.)
Rails, for that matter, is known for making it easy to produce multi-second pages too.
It actually amazes me that people find the statement that slow languages are actually slow offensive and worthy of downvoting. As I said, it is my experience that simply switching from slow to fast languages even on "IO dominated" workloads has real, visible effects.
If you haven't tried it, consider trying it before becoming offended at the idea that your tool may not be appropriate for all use cases. Yes, it can matter when your language is 50 times slower than another.
I think that line of thinking ends up not applying to rust terribly much.
I know there have been several articles lately about broader applications of rust, and then "is it worth it?" type responses, with concerns about "manual memory management" and the implication that there's some heavy tax in not using a GC.
I would certainly agree with the presence of this tax while writing C/C++. (Less so with C++, RAII, and especially C++11, but still there.)
With rust, once you're up the learning curve (which, granted, can take a few months), your productivity is not substantially worse than GC'd languages. You pretty much master the borrow checker's semantics, and the best practices to structure things in ways that play nicely with its expectations. And lifetime-related errors don't occupy all that much of your development time.
Source: my team has been re-writing some components of Dropbox's multi-exabyte storage system in rust (from go) over the last 9 months.
I am fascinated that no one uses Objective-C on the back end. Fast, low memory, quick to code in, high-level enough, a mix of static and dynamic typing, and no garbage collection. Also plenty of expertise you can hire for, maybe already within your company. I think if Obj-C had a good story for deploying on Linux (and not losing modern features like ARC), it would make sense to use it, so we'd see sites doing it and open source tools like a good webapp framework. Is it because you would need to run OS X servers? (And is that even true?)
I think being so heavily pushed by Apple is actually a downside for ObjC here. I really enjoyed programming ObjC when I was programming for iOS and Mac. That was mostly because of the very rich API shipped with Apple products.
Can you point me to a list of libraries written in ObjC that work without Apple specific code? (E.g. without using anything starting with 'NS')
I'm aware of GNUstep, so I was hoping someone would mention it. But I was hoping for more than one word. How many people are using it in production? Have they had good experiences or bad? What surprises have they encountered? Does it have ARC support? Can I use Xcode to build with it?
The first big benefit is static typing and compilation, so I catch a lot of errors early; the second big benefit is static linking.
Other languages give you that ( namely Go ), but I found the correctness checking from Rust is _way_ better, and it forces me to have much better designs in my programs.
Another tangential benefit is that I can do lots of great system-level things ( high-performance data collection, clustering, math ), and then have a thread pull in a web server and serve out things like statistics and control interfaces.
Also, it's just nice to program in, and the tooling is much nicer than C++.
Thanks for the reply. Static typing I won't argue with you about, but is static linking important for a web server?
One of the big areas where people choose Go is for cli utilities and static linking is a big reason for that choice. I can see Rust becoming a competitor in this space for the same reason. But for a web server that you control I don't see a huge advantage to static linking.
If I was just working in a SaaS world then maybe static linking isn't a big deal, but I ship and support software across 4 different versions of glibc, libstdc++, and any runtimes for various languages ( Java, python, php, ...). Also I have various versions of MySQL and command line tools. The only things I can guarantee I have are ssh and something that will respond to MySQL connections. In that world being able to push out a single binary and have it just work is priceless.
The one benefit of static linking is reduction of software/library dependencies required for production deployment. In many organizations this means you can develop on any OS/Platform you wish, and then deploy on whatever OS/Platform the ops team is currently running.
Many organizations have yet to adopt devops and templated system deployment methodologies, so it's a legit concern.
> In many organizations this means you can develop on any OS/Platform you wish, and then deploy on whatever OS/Platform the ops team is currently running.
That's true of dynamic linking as well. Static linking just saves you the trouble of having to install the library dependencies on the server. But for ops that's a pretty tiny, if not non-existent gain. So I don't buy it.
When deploying to users it's a huge gain, so I do get that for things like CLI utils.
Depends on your definition of web server. nginx is written in C, and even Ruby and Python web servers call into C because they need the performance of C implementations, so I don't see why Rust wouldn't be a good fit for a web server.
My current project is an interface to get detailed application performance statistics off a clustered system. So it connects to a local database and has a configurable list of system checks that it can run. Then it has a defined API that is consumed by the Polymer front end. Rust is nice in that I can do all sorts of great clustering and network programming, and then have a thread that exposes the WebUI and all of that is wrapped up in a single binary. That I can just push to systems and execute.
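A bare-bones sketch of that shape (std only, no real framework, hypothetical stats): one thread serves HTTP while the rest of the program does the collection and clustering work, all in a single binary.

  use std::io::Write;
  use std::net::TcpListener;
  use std::thread;

  fn main() {
      // Background thread exposing a trivial stats endpoint. A real app
      // would use a framework (Nickel, Iron, ...) and real metrics.
      thread::spawn(|| {
          let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
          for stream in listener.incoming() {
              if let Ok(mut stream) = stream {
                  let body = "{\"checks_run\": 42}";
                  let resp = format!(
                      "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
                      body.len(), body);
                  let _ = stream.write_all(resp.as_bytes());
              }
          }
      });

      // Meanwhile the main thread would run the system checks, talk to the
      // local database, coordinate with the rest of the cluster, etc.
      loop {
          thread::sleep(std::time::Duration::from_secs(5));
          // run_checks(); // hypothetical
      }
  }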
I know it's not an option for everyone, but a good cyclocross bike will do wonders for that kind of commute. I picked up a hybrid bike and got a set of tires with some tread on them, and I can now handle gravel fairly well and still get low rolling resistance on the rest of my ride.
Double? I doubt that. Typically, I ride ~20km/h on gravel and ~23km/h on well-paved roads at commute speeds (i.e., no sweating). Maybe the gravel roads are terrible in SV, but around here, it hardly makes a difference in speed as gravel roads are well-maintained.
The trail right along the shore is hard dirt, not quite asphalt, but passable. So that one is okay-ish.
But the connecting trail from SJ downtown along the creek towards the bay at some point changes from asphalt to literally a trough full of gravel. You'll lose a lot of speed there, no matter what kind of tires you're using.
I applaud this effort. I think many people discount how capable they are of making the switch to commuting to work by bike. Almost any commute less than 10 miles can easily be done by bike, and making a "safe" route to connect the metro area is a great first step.
The unfortunate situation is that if you are over 10 miles from your office ( which many are ) you will have a hard time making that commute in an acceptable amount of time, unless you actually make an effort at going fast.
Even in Copenhagen, 10km (6 miles) tends to be about the furthest most people bike. A few do bike further, but it really drops off past that. For the longer commutes, people do a hybrid bike/rail trip. If you're 5km from work, often it's easiest and fastest to just bike directly. If you're 20km, it's more common to bike to the S-train. Depending on your job location, you either take the bike with you on the S-train and continue biking from the other side, or you park it at the station and just take the train in.
Absolutely, same here. In Amsterdam, for example, most distances are about 4-5 km. Inside the 'ring', which is a highway running in a circle around the centre of Amsterdam, is considered Amsterdam the city; outside is mostly considered the suburban area. The ring has a diameter of about 8 km, which is probably a good measure of what people are willing to bike. Up to 10 km still makes sense too, but beyond that it tapers off really quickly.
It really depends on how long that takes, though, which comes down to a few factors. For example, in the Netherlands you can separate two distinct groups.
One group is those who cycle between cities: usually either old people riding for recreation or, a much bigger group, teenagers (12-18) who cycle from a small village to a nearby larger village or town to go to secondary school. These people have two things in common: 1) they have nice bikes with gears, and 2) they have very long stretches of fully separated cycling paths without any traffic or stop lights. They can pretty easily hold speeds of 20 km/h and do a 10 km trip in half an hour.
The other group is mostly inner-city people who don't take their bicycle outside of the city and use it for shopping, seeing friends, cycling to work, to a station, etc. They usually have what are called 'opa' and 'oma' bikes (grandpa and grandma bikes) without any gears. Their trips consist of mixed traffic (quite a lot of separate lanes, but not everywhere) with traffic stops every minute or so. They get speeds of around 14 km/h, and with the constant stops some effectively get more like 12 km/h. Their 10 km trip suddenly takes 50 minutes, and that's just not really acceptable for a lot of people, particularly when you're in your suit for a morning meeting and it either rains or gets really warm.
So it really depends. In big cities, though, you'll rarely see 10 km trips; 8 km is the reasonable limit for most, and 1-5 km is typical (for example, 1 km for shopping, 2 km for sports, 2-5 km for friends, work, or a station).
You're welcome to your opinion. I bike 8 miles each way in Silicon Valley with a backpack on my back. I ride in a t-shirt and jeans (unless it's really hot, in which case I wear shorts).
I ride moderately hard, so I arrive with a bit of sweat on my body and on my face. Sometimes I wipe the sweat off my face with a paper towel.
10 mins in my air-conditioned cube and I'm back to normal temp, all dried off, and merino t-shirts don't smell.
Driving a car is convenient, sure. My bike commute to work takes about twice as long as driving on 101, though my commute home is often about the same, time-wise due to traffic.
I enjoy the bike ride more, for the most part. Occasionally I wish I had a car. Once I had vertigo so I called an Uber to take me to the urgent care. If I had a car, I would have driven, but that's probably not the best shape to be in while driving a motor vehicle.
So I suppose you'll trade sitting in traffic for the convenience of having a car on the rare occasion you need one. You're welcome to the choice.
I welcome the addition of more bike infrastructure.
I like having the mass attached to my body, as opposed to having an awkwardly-weighted bike. Reduced unsprung weight for more maneuverability :) Much easier to hop speedbumps and the like, which is pretty useful for urban cycling.
I have a light 20L pack with an internal frame and mesh for my back so I still get decent ventilation.
I like the backpack better, as I can bring it with me for the last 100 ft and into my building where I shower and change, whereas with panniers I either need a backpack inside of them, or I need to carry my stuff around.
Also people steal stuff, so I don't like leaving things on my bike.
There are pannier bags with backpack straps, or shoulder straps, etc.
Although I have the more basic "back roller" standard detachable panniers. They take a second to attach/detach, nothing remains to be stolen. Best £50 I ever spent! They still look new after 2 years of daily use.
That really depends on your speed. Like dhenry says, if you're going at a moderate pace, a 5 mile commute ends up being about 30 minutes door to door, and you don't really work up a sweat, definitely not in the winter, maybe sometimes in the summer. If you're going full-tilt you can get that down to 20 minutes door-to-door and then you sweat and you have to take a shower. So really it's a question of if you're trying to get some aerobic exercise during your commute or not.
I am of course speaking for Seattle where it's usually cool but not cold and damp but not really raining.
> The unfortunate situation is that if you are over 10 miles from your office ( which many are ) you will have a hard time making that commute in an acceptable amount of time, unless you actually make an effort at going fast.
OR, unless you get an eBike. Which, while more expensive than a regular bike, are still vastly cheaper to purchase and operate than a car. A good pre-built eBike will probably sell in the $1000-2000 range and uses a negligible amount of electricity.
I think it's tragic that ebikes have become the domain of drunks and losers. We have the perfect solution to global warming and urban intensification right there, and no self-respecting person will use it because it's a tool of the underclass.
And if the infrastructure is good enough, a lot of people will deal with some clumsiness. Lots of people in Denmark and the Netherlands use bike trailers or cargo bikes, and having had a chance to use them, believe me, those are DEFINITELY clumsy.
And if you're over 30 miles from work, forget it unless you're in fairly good shape. My 45+ mile commute to Silicon Valley laughs at any attempt to get me on a bike. Better to support improved mass transit.
There's no single policy that solves the problem. It's a combination.
Building public transit covers the longer commutes. Building out local cycling options extends the area that the public transit options can serve. It also relieves congestion on those same public transit options, since some of those living 5 or 10 miles away are more likely to bike.
No, $119 would buy him one year of updates, and he could choose not to update in the future if he didn't want to. If he chose to update on a regular basis, then he would still need to pay $119 every year.
Do you have anything that says that's specifically what happens? Cause their FAQ says that when your subscription expires, you can continue using the last version that was released when your sub expired.
Where did you read that? The closest I could find was "Does the new model demand that I have Internet access?", in which it states that the software requires an internet connection every 30 days to authenticate, and if it cannot connect, it will close the application. I'm guessing that also means if it does authenticate but sees an expired license, it would also close the application.
> Cause their FAQ says that when your subscription expires, you can continue using the last version that was released when your sub expired.
Would you mind mentioning the question in the FAQ which states this? Everything I've read so far (notably this question: https://i.imgur.com/u7Y7otq.png) suggests the software cannot be used when a subscription is not being actively paid for each month.
It's not really new, and it's not really powerful. From what I can see, all this does is give you an estimate based on the square footage of your rooftop. There is a lot more that goes into a real estimate than how big your roof is, namely how you can arrange the panels to cover the roof. That is a neat trick, and this company does it: http://www.modsolar.net/
I worked with their stack for a bit and from my understanding their tool has been doing what this site does and then some extra stuff to actually get the panels installed.