I'm in a tough spot with these new RTO guidelines (similar ones were issued by my company). I live ~2-2.5 hours from my assigned office because of a two-body problem. I travel to the office once or twice a week, but that's the max I can manage; three days per week is impossible. I'm more productive WFH, but I use my office time to meet as many people in person as I can. Lately even that has shrunk to mostly video calls from the office, since most of my team is on a different coast altogether. For me, staying remote works best going forward, but these blanket rules just don't work for edge cases like mine.
With all my recent ML PhD knowledge, I cannot even explain how the model is able to do this. Surely the training distribution doesn't contain responses to every possible Linux command? A full Python REPL? I'm stumped!
I'm just a layman, but I don't think anyone really expected, or knows _why_, just stacking a bunch of attention layers works so well. It's not immediately obvious that doing well at predicting the next (or a masked-out) token would somehow "generalize" into giving coherent answers to prompts. You can sort of squint and try to handwave it, but if it were obvious that this would work, you'd think people would have experimented with it before 2018.
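For concreteness, here's a minimal toy sketch (PyTorch, made-up sizes, position embeddings omitted, no resemblance to any production model) of what "stacking attention layers and predicting the next token" actually amounts to:

```python
# Toy stack of causal self-attention layers; all sizes are arbitrary.
import torch
import torch.nn as nn

vocab, d_model, n_layers = 1000, 64, 4

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        T = x.size(1)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + a
        return x + self.ff(self.ln2(x))

embed = nn.Embedding(vocab, d_model)
blocks = nn.Sequential(*[Block() for _ in range(n_layers)])
head = nn.Linear(d_model, vocab)

tokens = torch.randint(0, vocab, (1, 16))   # a toy "sentence"
logits = head(blocks(embed(tokens)))        # shape (1, 16, vocab)
# The whole training objective: logits at position t vs. the token at t+1.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab), tokens[:, 1:].reshape(-1))
```

That minimizing the loss above at scale yields coherent conversation is exactly the non-obvious part.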
A bit of a controversial opinion: to those defending Copilot, saying it "boosted my productivity" and that they'd miss it if it were discontinued: maybe you were not a productive developer to begin with. I fail to see how searching for the same snippets on Google, or saving commonly used macros in your favorite editor, would not yield the same productivity. I used Copilot for several months and then actively stopped, because I was afraid I would become dependent on it, and that it would actually erode my ability to think critically while building code. I'm happy without it; sure, it takes a few seconds more to type out my code instead of having it autogenerated, but I feel much more confident in my own coding skills.
Copilot is great research work; it is indeed spectacular to see how pre-training can achieve such impressive code completion results. However, in my honest opinion, it should not be a tool for a serious developer.
Yes, it is my second year running with Emacs after ditching VSCode. While I miss VSCode's remote editing capability, working on a shared HPC cluster is what finally made me adopt Emacs. I have since shifted most of my workflows into Emacs, especially using Org mode, and I expect it to stick for the next several years.
Not great, but I think it is an acquired taste. TRAMP makes it _feel_ as if you are working locally, with some caveats. I like that I can use dired to quickly navigate my projects on the cluster, open a vterm session to run my scripts, and open my Python files, where saving automatically syncs them to the remote; magit works seamlessly (with a slight lag). What I wish for is better LSP support (lsp-mode/Eglot) for remote development; it's kind of a miss for me. lsp-mode straight up does not work, and Eglot keeps me waiting until it can connect, which degrades the editing experience. I have mostly given up on LSP for remote work: nowadays I just use dumb-jump for quick navigation, and edit locally for more serious development.
> I'm not sure how that compares to VSCode's capability.
Not favorably. Perhaps there's a magic combination of SSH and TRAMP settings that can make the experience lag-free, but I can't find it. VSCode's remote editing was setup-free and close to seamless when I tried it.
TRAMP supports many, many more remote protocols, though.
I suspect good SSH support is just much more necessary for VSCode than it was for Emacs when TRAMP was developed. I do think TRAMP was also full of generality toward things that are very uncommon these days (various different protocols, SSH workarounds, baud rates, …).
Lag-free? What are you doing where lag is problematic? Sure, it takes a couple hundred milliseconds longer to open or save a file, but the actual editing is lag-free, and that's what matters.
It's probably been a decade since I used TRAMP, so maybe it's gotten worse, or, more likely, people expect more of it now.
ControlMaster auto. Most distributions don't ship with it turned on because it needs somewhere to store the socket files, and nobody has pushed for it to be the default. And then you still have to know to set up key-based authentication so that it doesn't ask for a password every time it needs to reconnect.
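For reference, this is roughly what that looks like in `~/.ssh/config`; the socket directory and timeout here are just one reasonable choice, not the only one:

```
# Reuse a single SSH connection so TRAMP doesn't pay the handshake every time.
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p   # run: mkdir -p ~/.ssh/sockets
    ControlPersist 10m                    # keep the master alive after the last session
```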
There are also non-trivial bugs that cause headaches when typing out a TRAMP path to open a file, possibly caused by interactions with the various autocomplete packages people may or may not be using. For example, if you want to edit a file on a different machine via ssh but also change to root using sudo, you can enter a multi-hop path like `/ssh:othermachine|sudo:root@othermachine:/etc/whatever.conf`, which is very, very useful. However, if you mistype and put a colon instead of a pipe, or have to backspace to edit something, it will usually just break. Some types of connection errors can break it as you type, leaving it unable to accept further input. All you can really do is hit C-g and try again.
That said, I use TRAMP every day of the week and it is amazingly useful. I could do my job without it, but it would suck.
If you, like me, subscribe to Jim Browning's channel, you knew this technology would eventually be misused by scammers. Does Mullvad have any plans to counter it?
The technology is the same as any other gift card: a cash-like instrument, identified by a code, that can be transferred over the internet or phone. Scammers also use regular bank transfers, wire transfers, cryptocurrencies, and payment services like Zelle and Venmo. Gift cards are convenient because they're cash-like, but they don't enable scams.
ATM, it doesn't seem like Mullvad is selling these in stores. If a scammer wants a quick payout with less chance of being found out, they will get the gift cards from a physical store.
That is not what I asked, though. Every experience is subjective. I'm asking how they envision this being a good business model.
Every bootcamp grad who can whip up a database-backed web app can build a note-taking application that covers the major features. There is literally no defensible moat here, and the incumbent is giving theirs away for free. What feature will you compete on? There are limits to how much innovation you can add to "storing rich text in the cloud".
I suspect it's a combination of the intense passion people feel for working on this problem and the small user base needed to support a small team doing the work.
I'm not so good at ballparking operational expenses, but Supernotes 2 probably needs somewhere around 2,000 users to break even, which shouldn't be hard to achieve.
After that, the founders are fully supported in pursuing their passion. That sounds like a great business model for someone who wants to do this, and I think it's why you see a profusion of utility apps with ~$10/mo pricing on the market today.
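For the curious, the back-of-envelope version; every number below is my own assumption, not Supernotes data:

```python
# Rough break-even sketch with made-up numbers.
price_per_month = 10        # the ~$10/mo utility-app price point
paying_users = 2000
monthly_revenue = price_per_month * paying_users   # $20,000/mo gross
monthly_costs = 2 * 8000 + 1000                    # say, two founders plus hosting
print(monthly_revenue - monthly_costs)             # 3000: roughly break-even
```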
I'm always amazed at the rampant patent trolling that happens around deep learning papers and ideas. In this dump, if you search for the names of famous ML researchers (such as Yoshua Bengio [1] or Yann LeCun [2]) you will find hundreds of troll patents citing their work. Not all of them are trolls, though. Maybe this corpus could be used to identify them automatically, perhaps by merging it with data from arXiv?
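As a rough illustration of the merging idea, here is a sketch that checks a citation title from a patent against the public arXiv API; the patent-parsing side is omitted, and `citation_title` is an assumed input:

```python
# Look up a cited title on arXiv (export.arxiv.org Atom API).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def arxiv_lookup(citation_title: str) -> list[str]:
    query = urllib.parse.urlencode({
        "search_query": f'ti:"{citation_title}"',
        "max_results": 5,
    })
    url = f"http://export.arxiv.org/api/query?{query}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    return [entry.findtext("atom:title", namespaces=ns)
            for entry in feed.findall("atom:entry", ns)]

# A match suggests the patent's "prior art" is a public research paper.
print(arxiv_lookup("Generative Adversarial Networks"))
```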
The post glosses over Canada by not mentioning Quebec. If you are in Quebec, your chances of getting Permanent Residency are slim to none without French language proficiency. Thus, despite living in Quebec for five years now as a grad student, I qualify for neither PR nor citizenship by naturalization.
My point was mainly directed at the PR and naturalization aspects of Canada mentioned in this post. Even if you are not a student, the road to PR in Quebec is long if you do not meet the language requirement. Typically you apply for a Quebec Selection Certificate (CSQ), which is a prerequisite for PR. Nowadays this prerequisite has a prerequisite of its own: getting an invitation to apply for the CSQ through ARRIMA (which has a yearly cap). In total, even if you are not a student, if you do not meet the language requirement it can take 3-4 years just to get PR, and 3 more for citizenship. So my point was against the classic misconception of an "easy path to citizenship" in Canada made by this post (and frankly by a lot of people in general, who tend to overlook Quebec).
People choosing to immigrate to Quebec must be either fluent in French or stupid. Quebec's immigration system is perhaps one of the most arcane and politically influenced. For comparison, it takes only 4 to 6 months to get PR in the rest of Canada if you have the right skills. In Quebec, even with the right skills and fluent French, it takes a minimum of 2 years if the red tape allows; with the right skills but no French, 3 to 4 years.
For some people the choice is governed by the educational institution. Sure, I could have applied to UofT, but my research led me to McGill (and I'm frankly glad that happened).
While I'm a student, I can't. Also, if your intention is to move to Quebec, you are technically not eligible for PR in Ontario. Many folks these days opt for this path anyway: apply for PR in Ontario from outside Canada using Express Entry, land in Ontario, stay for 6 months, and then move to Quebec.
Curious about this as well. The portal shows target delivery by 2022 when I try to pre-order for an address in Kolkata. They will probably have to revisit the pricing model based on the local economy. But yeah, I'm worried whether the current government will even allow such devices to be imported, given that it wants tighter control over the internet.
On the other hand, I believe this kind of connectivity would be extremely beneficial for cities such as Kolkata, which suffer from a lack of proper infrastructure (and, of course, a lack of political will). I would love to see a future where I can move back to Kolkata and work with the same latency to servers in North America.