kankroc's comments | Hacker News

Behold Canada: I pay about twice as much for 120/20, Internet only.


Beanfield in Toronto: 1000/1000 for 100 cad / month.


Can you elaborate on why using a requirements.txt file is "manual hell"?

I rely on it for pretty much everything and I haven't run into any game-breaking problems.


The only problem I know of with requirements.txt is that many people pin exact versions there even though later versions work perfectly fine. Every time I clone someone's Python project to work on, I have to manually replace all the `==`s with `>=`s to avoid downloading obsolete versions of the dependencies, and I have never encountered a problem.
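As a sketch (package names and versions here are purely illustrative), the difference is between the two styles below:

```
# Exact pins: reproducible, but silently go stale
flask==1.0.2
requests==2.19.1

# Minimum bounds: always pulls the newest release, at the risk of breakage
flask>=1.0.2
requests>=2.19.1
```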

Anyway, for me the most annoying thing about Python project architecture (and perhaps about Python as a whole) is that you can't split a module across multiple files, so you have to either import everything manually all over the project while trying to avoid circular imports, or put everything in a single huge source file. I usually choose the latter, and I hate it. The way namespaces and scopes work in C# feels so much better.
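For what it's worth, a package whose `__init__.py` re-exports from its submodules gets close to this: the code is split across files, but importers see a single flat namespace. A minimal sketch (`mypkg`, `shapes`, and `colors` are made-up names), building the package on disk so the example is self-contained:

```python
import os
import sys
import tempfile

# Lay out a tiny package: mypkg/ with two submodules plus an
# __init__.py that re-exports their public names.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)

with open(os.path.join(pkg, "shapes.py"), "w") as f:
    f.write("class Circle:\n    radius = 1\n")
with open(os.path.join(pkg, "colors.py"), "w") as f:
    f.write("RED = 'red'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .shapes import Circle\nfrom .colors import RED\n")

# Callers import one namespace and never see the file split.
sys.path.insert(0, root)
import mypkg

print(mypkg.Circle.radius, mypkg.RED)  # 1 red
```

This doesn't remove the circular-import hazard entirely (submodules importing each other can still bite), but it avoids repeating the imports all over the project.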


> have never encountered a problem.

Oh, so you weren't around when Requests went 2.0 backward-incompatible (because they changed .json() to .json, or the other way around, I can't remember) and half of PyPI, with its happy-go-lucky ">=1.0", broke...?

Since then, most people have learnt that you pin first and ask questions later.


Indeed. I just hate version hell (as well as dealing with old versions of a language, although I happen to love old hardware) so much that I ignored Python entirely until the 3.6 release, waiting for the time when one could use all the Python stuff without bothering to learn anything about Python 2.x. It took 10 years of waiting, but we're finally there, and now I enjoy Python :-)


I just encountered the fun fact that Pip 18.1 broke Pipenv whereas Pip 18.0 worked just fine.


It takes a lot of work to produce reproducible/secure builds; see the original Pipfile design discussion for the gory details:

https://github.com/pypa/pipfile


The problem requirements.txt doesn't solve is "what I want" versus "what I end up with".

There's no concept of explicit versus implicit dependencies. You install one package and end up with five dependencies locked at exact versions when you run `pip freeze`. Which of those is the one you installed, and which are just dependencies-of-dependencies?

If you're consistent and ALWAYS update your requirements.txt first with explicit versions and NEVER use `pip freeze`, you might be okay, but that's more painful than the alternatives that let you separate those concepts.
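To illustrate (the package names are real, the versions are made up): installing a single package and then freezing might leave a file like this, with no record of which line you actually asked for:

```
# After `pip install requests` followed by `pip freeze > requirements.txt`:
certifi==2018.8.24
chardet==3.0.4
idna==2.7
requests==2.19.1    # the only one you installed, indistinguishable here
urllib3==1.23
```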


Because if you pin stuff in requirements.txt, it either never gets updated, or you have to go through, check which packages have new releases, and manually edit requirements.txt. The combination of Pipfile and Pipfile.lock was designed to solve this in a much better way (briefly: distinguishing standard deps from development deps, and using the Pipfile.lock file for exact/deployment pinning vs. general compatibility pinning in the Pipfile).
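A minimal Pipfile along those lines (package names and version ranges illustrative); the exact resolved versions then live in the generated Pipfile.lock rather than in this hand-edited file:

```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
requests = ">=2.19"
flask = "*"

[dev-packages]
pytest = "*"
```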


That is not my experience. Until recently I pinned versions in requirements.txt, then from time to time removed the pins, reinstalled everything, tested, and wrote the new versions back to requirements.txt. Most of the work was testing for incompatibilities, and no package manager will help you there.

Recently I switched to pipenv because zappa insists on having virtualenv (as an app dev I never had any need for it, but it seems my case is an exception, as I almost never work on multiple apps in parallel). Pipenv does make version management a bit easier, but it wasn't difficult (for me) to begin with.

From talking with other developers I know my view is somewhat unorthodox, but I haven't encountered the problems they describe, or the pain hasn't been big enough for me to embrace all the issues that come with virtualenvs.


Btw, it is possible to use the compatible-release operator ~= (PEP 440) within requirements.txt.
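For example, in a requirements.txt (the release numbers are just for illustration):

```
# ~=2.19 is shorthand for >=2.19, ==2.* -- any 2.x from 2.19 on, but not 3.0
requests ~= 2.19

# ~=2.19.1 is tighter: >=2.19.1, ==2.19.* (patch releases only)
# requests ~= 2.19.1
```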


Or just use pip-tools to automatically update dependency versions.
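With pip-tools, the loose constraints live in a hand-edited requirements.in and `pip-compile` regenerates a fully pinned requirements.txt from it (package names and versions here are illustrative):

```
# requirements.in -- edited by hand
flask
requests>=2.19

# requirements.txt -- generated by `pip-compile requirements.in`, never hand-edited;
# it pins everything, including dependencies-of-dependencies, e.g.:
#   flask==1.0.2
#   requests==2.19.1
#   urllib3==1.23        # via requests
```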


Presenting "not using Electron" as a significant way to reduce our impact on the environment has to be the most HN thing I ever read.


Javascript is ruining the environment. Everyone should switch to Rust, and compile all apps to web assembly. Also, blockchain.


No, blockchains are bad for the environment.


I don't want to jump the gun and call it creepy just yet, but what is it for exactly?


The most common end-user use of face detection is probably adding filters and decorations to photos.

This isn't too hard to do in JavaScript, but I assume a native implementation is much faster.

I'm not sure why it would be creepy; face detection is easily done locally. Every camera implements it, for example, to focus on faces.


My guess would be that it's intended to be used as some kind of second factor for authentication.


As a foreigner, I have an Alipay account and didn't need a Chinese bank account.


I believe those days are long gone. I once opened Alipay and WeChat accounts without binding my bank card to them. Eventually, I couldn't use them unless I linked a bank account.


Bureaucracy is a very real problem in many fields, but when we are talking about launching potentially untrackable satellites into space, having some form of oversight is probably good.


Can someone give me a use case for this, compared to running Jupyter directly inside VS Code/Atom with vscodeJupyter/Hydrogen?


The idea is that you can edit your notebooks with your favorite text editor, track them with git, and never need to mess with a web browser at all.


In the medium term, it's likely that Hydrogen will be able to read and write .ipynb files. It can currently export .ipynb files but has some bugs [0].

[0]: https://github.com/nteract/hydrogen/issues/1296


The name is Academic Torrents; I can assure you that this is P2P.


To anyone reading this who wants to try DBSCAN, give its more recent sibling HDBSCAN a go. The H stands for hierarchical, and it's way better at dealing with clusters that aren't very similar (think big vs. small).

There's an excellent pip package that integrates seamlessly with scikit-learn too!


People have wildly different views on what GitHub stars are. Some see them as a way to bookmark projects they find interesting, while others simply star whenever they feel a project is impressive.

Bottom line: it's not a good metric; it's simply a way to see how "popular" something is and how likely it is that some random developer has seen your project.

While anecdotal: I starred Vue because I see it as a cool project, and I starred React as another cool project. In both cases I did toy projects with the frameworks and never used them in production. On the other hand, I've done a lot of projects with apache/httpd and haven't given it a star yet.

In my opinion stars are not endorsements; they're a questionable way to measure how likely it is that your coworkers have heard of a given project.


I use 'em like bookmarks.


Also, having created, promoted, and contributed to repositories with >100 stars, it seems that ~10% of people who view a repository will star it. Not sure how reliable that is, but I've always taken stars to indicate how many people have seen the project.


I have a couple projects that are getting a decent amount of stars recently and I'm looking at the stats for the last two weeks:

- One has 3750 unique visitors, and 680 stars in 2 weeks

- Another has 80 unique visitors, 20 stars in 2 weeks
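For what it's worth, both repos come out well above the ~10% rule of thumb from the parent comment:

```python
# Star-to-visitor ratios for the two repos above (2-week window).
repo_a = 680 / 3750
repo_b = 20 / 80
print(f"{repo_a:.0%} {repo_b:.0%}")  # prints "18% 25%"
```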


Ohhhhh, now I understand why that one ex-coworker I 'follow' stars about 20 projects a week. I always used them as bookmarks and couldn't believe they were ever going to use that many things.


Yeah, a more interesting article would be "GitHub Stars === x", because I've never been sure what they're for.

