For me there is one thing that works so much better than anything else: new goals. Toward the end of my PhD, that meant just sucking it up and finishing the work despite low motivation and, frankly, workweeks far below 40h.
Later in life, motivation came naturally, I worked more, and I am really happy with the outcomes. Sorry that this is no advice that is immediately helpful, but I hope it can be reassuring that streaks of low motivation as a student are kind of "normal" and not necessarily something that needs complex solutions.
I agree with this! Honestly, I felt a lot of the same emotions the OP felt in the week or two after exams finished or when large projects wrapped up. For me, the sudden drop from being 100% on and focused on accomplishing a goal to having it all finished had a huge impact on my motivation. Finding a new goal helped bring me back out of that low.
If I think of myself as a user, that is spot on for information/entertainment, but far from the truth for products. The most extreme example being that everyone uses "$THING Wikipedia" but nobody I know uses "$THING Amazon" in a Google search.
How exactly could this be done reliably? With shorts you have to commit to a timeframe, not only for finding evidence to reject the claim but also for the news to break and convince other investors.
Meanwhile you can be targeted by financial actors who play your positions on the stock market, at times even disregarding the actual company behind them. Further, you have to mind monetary policy: if faith in the stock suffers compared to other stocks but the overall market just goes up and up, your shorts have a problem again.
Personally, I have no informed opinion on EmDrive, but I have been very pessimistic about companies in the past and I am about some right now. Still, I do not think that shorts are at all appropriate for me. I'd much rather buy a competitor's stock than burn myself with shorts or waste hundreds of hours getting into the financial markets that I could instead use to create value in my actual job/profession.
There may be a lot of uncertainty in Data Science and ML projects. However, recently I started feeling like I actually have it better than someone on the pure software engineering side of things:
For either, there is often a function from time spent to quality. 100% perfection is basically impossible, and well before that point the function increases only very slowly, seemingly logarithmically.
For SWE, expectations are often close to perfect solutions. Overly optimistic effort estimates cause a lot of trouble. For DS/ML, however, perfect is usually off the table and this fact is widely (though not universally) accepted. When it is accepted to give estimates in this way, suddenly there is no harm in being quoted on them, and I really don't mind giving estimates anymore: I just make a guess at a good 80/20 point. If I am wrong about that point, chances are nobody on the outside/higher up ever knows.
This may be different in domains where very clear targets have to be met (e.g., "self-driving cars that pass lawmakers' requirements for use on the streets"), and then I'd guess it is a true nightmare.
Because of this, I never felt overly pressured by ML/DS deadlines over the last few years. Some things were great successes; sometimes the quality wasn't good enough and projects were stopped or customers left. But there never really was a case where anyone thought that working extra long might have been an option to meet higher expectations.
I don't really have a solution for SWE. I don't really see how one would sell something like "I can do it in X time and it will only crash / stop working / make mistakes / become too slow / have vulnerabilities so often. More time will lead to fewer problems". This just isn't what's expected. But at least for complex systems and security vulnerabilities, I'd argue it is actually quite true. Guarantees of 100% perfection just aren't realistic. Avoiding the most obvious pitfalls is done rather quickly, and the more time is already spent, the more is needed for further improvements.
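To make the 80/20 point above a bit more concrete, here is a toy sketch; the saturating curve and its constants are made up purely for illustration, not taken from any real project:

    # Toy illustration only: assume quality saturates as 1 - exp(-t / tau).
    # The curve and constants are invented; the point is the shape, not the numbers.
    import math

    def quality(t, tau=1.0):
        """Fraction of achievable quality after t units of effort."""
        return 1 - math.exp(-t / tau)

    def effort_to_reach(q, tau=1.0):
        """Effort needed to reach quality fraction q (inverse of the curve above)."""
        return -tau * math.log(1 - q)

    print(f"effort to 80%: {effort_to_reach(0.80):.1f}")  # ~1.6
    print(f"effort to 99%: {effort_to_reach(0.99):.1f}")  # ~4.6, i.e. roughly 3x more

Under a curve like this, stopping at the 80/20 point costs a bit of quality but saves most of the effort, which is exactly the trade-off that is easy to sell in DS/ML and hard to sell in SWE.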
I really wonder about the charging infrastructure. I guess it is doable and a necessary transition, but I am a bit afraid it may be the next thing some countries are sleeping on.
As a German, there are serious subsidies for homeowners to install one right now. However, I just moved into a new rental apartment and visited quite a few places that were all built in 2020. All of them had very nice parking spaces allocated to the flat, but zero wallboxes for the entire apartment building. I also looked into buying a flat, and often it would have been difficult, sometimes even impossible, to install one on my own behalf without checking with all other buyers (and these kinds of changes often lead to tedious legal fights, afaik). The place I'm moving to doesn't have one either, but the landlord will install one once needed.

At the moment I still have a car with a diesel engine and no plans to change soon (I go almost everywhere by bike, even have a different one for rainy days and to carry groceries, and do 0-2 longer trips per month and ~1 very long trip for vacation per year; bike + diesel seems to fit that quite well), but the next car will be electric, I guess.
To make things worse: the overall power consumption should not be too much of a problem, but if almost every vehicle were electric and charged where people live, the power infrastructure could be in serious trouble. If improving it in remote areas goes anywhere near as well as FTTC/FTTH internet did, we're headed for disaster. There are a lot of interesting ideas, e.g. decentralized batteries in people's homes and renewables. But if all the focus is on changing the cars on the road, I have little hope that the other transitions will be quick enough.
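A back-of-envelope sketch of the kind of local peak I worry about; every number below is an assumption picked for illustration, not real grid data:

    # Back-of-envelope sketch: simultaneous home charging behind one local transformer.
    # All numbers are illustrative assumptions, not measurements.

    wallbox_kw = 11        # assumed rating of a typical home wallbox
    households = 150       # assumed households behind one local distribution transformer
    ev_share = 0.8         # assumed share of households with an EV
    simultaneity = 0.5     # assumed fraction charging at the same time (e.g. after work)

    charging_peak_kw = households * ev_share * simultaneity * wallbox_kw
    print(f"extra peak load from charging: {charging_peak_kw:.0f} kW")   # 660 kW

    transformer_kva = 400  # assumed rating of the local transformer
    print(f"that alone is {charging_peak_kw / transformer_kva:.0%} "
          f"of an assumed {transformer_kva} kVA transformer")            # 165%

Even if my assumed numbers are off by a lot, the shape of the problem stays the same: it is the local, simultaneous peak that hurts, not the total energy over the year.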
Zalando seems really nice. I have no connections there at all, but they really won me as a loyal customer because I really prefer the experience over all alternatives (especially including Amazon and local stores, for everything except really pricey clothing).
More importantly though, I am really impressed and surprised by some of the research coming out of Zalando. Flair[1] has some great ideas (especially at the time of release, when there weren't pretrained huggingface models for everything to build upon) and a really well-written paper. Colleagues have also had a good experience using the software and achieved very good results on adapted NED tasks.
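For anyone who hasn't seen it, this is roughly what basic usage looks like, as far as I remember the API (the model name and example sentence are just placeholders, and exact model names may differ between Flair versions):

    # Rough sketch of basic Flair usage: tagging named entities with a pretrained model.
    from flair.data import Sentence
    from flair.models import SequenceTagger

    tagger = SequenceTagger.load("ner")                      # pretrained English NER tagger
    sentence = Sentence("Zalando is headquartered in Berlin.")
    tagger.predict(sentence)

    for entity in sentence.get_spans("ner"):
        print(entity)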
Mostly doing several kinds of NLP:
My actual setup is a Windows laptop that I use to SSH into Linux machines with tmux sessions. However, I really appreciate WSL for working offline, etc.
My main reason: it is the most convenient way to have Unix tools (grep/sort/cut/sed/less/...) and bash available. Cygwin was always a pain, MinGW / Git Bash felt much better, but ultimately WSL just feels best.
These tools are incredibly valuable to my workflow. Sure, stuff like pandas can be nice for small datasets, and some data sits in some DB/Kafka/distributed system. But there have been countless cases where unix tools allowed me to take xx GB zipped files of text and do a basic examination, or even build baseline models, within a few hours.
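The kind of thing I mean is the classic zcat | cut | sort | uniq -c pipeline for a first look at a huge file. A rough Python equivalent of that first look, with a made-up file name and column index, would be something like:

    # Rough Python equivalent of: zcat data.tsv.gz | cut -f2 | sort | uniq -c | sort -rn
    # Stream a large gzipped TSV and count label frequencies, e.g. for a
    # majority-class baseline. File name and column index are made up for the example.
    import gzip
    from collections import Counter

    counts = Counter()
    with gzip.open("data.tsv.gz", "rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) > 1:
                counts[fields[1]] += 1   # assume the label sits in the second column

    for label, n in counts.most_common(10):
        print(f"{n:>10}  {label}")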
Sure, there are always alternative ways to use these tools and there are many equivalents. But I would always prefer WSL + conda for Linux over a typical "Windows Conda" installation with that weird GUI and the need to install so many different applications just to look at the first or last few lines of a huge text file.
EDIT: That said, of course I can/could always just run a Jupyter notebook under Windows using Windows CUDA + GPU and share files with a WSL bash where I do my modifications. But again, everything within the same system just feels better (IPython shell magic, no worries about whether paths to the same file are really identical, etc.), and while this is by no means a game-changer, it is just nicer that way.
While there may be several things missing for many productive use cases (especially inserts/updates), I think QLever (https://github.com/ad-freiburg/QLever) fits that description very well. There's also a public endpoint linked there.
We're actually using fastai in production and will happily switch to v2. Sure, there are serious questions about long-term stability and we know these projects will be high maintenance.
However, they would be anyway: Core models and algorithms are quickly outdated and any change that allows us to achieve similar or better results with less effort in creating training data is easily worth the engineering work.
That said, I really hope v2 feels a bit more like other libraries: extending v1 models has been pretty painful on several occasions. E.g., making some changes to the underlying PyTorch models was very straightforward, but still using all the goodies for training built into fastai (in particular all the stuff based on the work of Leslie Smith, tuned for best practices inside the fastai universe) was pretty painful. It is awesome to have a library actually implement best practices from the latest research, but sometimes all this greatness was pretty hard for me to transfer to changed models.
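Roughly the pattern we ended up with in v1, sketched from memory (so treat the exact imports and signatures as an assumption): wrap the custom PyTorch model in a Learner so we still get the LR finder and one-cycle training.

    # Sketch of "custom PyTorch model + fastai v1 training loop", from memory;
    # exact imports/signatures may differ slightly between fastai versions.
    import torch.nn as nn
    from fastai.basic_data import DataBunch
    from fastai.basic_train import Learner
    import fastai.train  # attaches fit_one_cycle / lr_find to Learner in v1

    class TinyClassifier(nn.Module):
        """Stand-in for whatever custom model we actually changed."""
        def __init__(self, n_features, n_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_classes)
            )

        def forward(self, x):
            return self.net(x)

    def train(data: DataBunch, n_features: int, n_classes: int):
        learn = Learner(data, TinyClassifier(n_features, n_classes),
                        loss_func=nn.CrossEntropyLoss())
        learn.lr_find()               # LR range test
        learn.fit_one_cycle(5, 1e-3)  # one-cycle policy (Leslie Smith)
        return learn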
That said, it has worked for us in v1 and the benefits outweighed the problems by far.
We have had "good" (maybe not BERT/XLNet'ish levels of quality) results using ULMFit. I.e., on almost all problems we got better results than our previous best approaches (mostly LSTM/CNN and self-attention à la https://www.cs.cmu.edu/~./hovy/papers/16HLT-hierarchical-att...).
Thus, we've seen real value from transfer learning that doesn't require all that much compute power (and could actually even be run on free Colab instances, I think).
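For reference, the recipe we followed is essentially the standard ULMFiT setup in the fastai v1 text API, sketched here from memory (so the exact names are an assumption): fine-tune the pretrained language model on your corpus, then train a classifier on top of its encoder.

    # ULMFiT-style transfer learning with the fastai v1 text API, sketched from memory;
    # data_lm / data_clas are DataBunches built elsewhere from your own texts and labels.
    from fastai.text import (AWD_LSTM, TextClasDataBunch, TextLMDataBunch,
                             language_model_learner, text_classifier_learner)

    def ulmfit(data_lm: TextLMDataBunch, data_clas: TextClasDataBunch):
        lm_learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
        lm_learn.fit_one_cycle(1, 1e-2)       # fine-tune the pretrained language model
        lm_learn.save_encoder("ft_enc")       # keep the fine-tuned encoder

        clas_learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
        clas_learn.load_encoder("ft_enc")     # reuse the encoder for classification
        clas_learn.fit_one_cycle(3, 1e-2)     # train the classifier head
        return clas_learn

(The full recipe also does gradual unfreezing of the classifier, but this is the core of it.)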
That said, I agree that the problem is still very far from being "solved". In particular, I fear that most recent advances might be traced back to gigantic models memorizing things (instead of doing something that could at least vaguely be seen as some sort of understanding of text) to slightly improve GLUE scores.
Still, I am highly optimistic about transfer learning for NLP in general.