The advisability of the idea aside, the presentation here feels exceedingly and unnecessarily risky
FunctorFlow only works on Repl.it, not on your machine!
Currently making tutorials and assignments for the Repl.it community
- Strings are not names. Names help tooling, strings make tooling harder.
- A function that switches on a string argument, where behavior radically differs between cases, is an anti-pattern.
- Imports should be explicit and well organized, not magic and scattered throughout the code.
Please don't give these functions simple names like "f". I often use names like that as generic local function names. I know I can use "import ... as", but that shouldn't be necessary, and it would result in everybody using different names, which hurts the consistency, readability, and searchability of code.
While making a good package manager/community around it is hard, there's plenty of hard earned knowledge that's been completely ignored here.
This seems to make dependency management harder (imports spilled all over the code base, versioning seems non-existent, documentation for this/plugins? Twitter is not documentation).
How does plugin authoring work? How does distribution and trust work?
What problem is this supposed to solve that isn't addressed by other package managers (or what does this do better than those package managers)?
Thanks for understanding. ;)
How would one use this with existing Python packages? Or is this a parallel universe to pip where libraries need to be written explicitly for FF?
The website doesn’t mention how FF packages are versioned or how version restrictions between dependencies are resolved.
I see this as quickly becoming full of “left-pad” sized packages, can you analyze a full project to fetch everything ahead of time?
Looks convenient for notebook/REPL-style use (assuming the editing environment still supports autocomplete, autocorrect, and other niceties). I might just be stodgy and change-averse, but I'd be a bit less willing to try things like this in production.
Landing page or FAQ could possibly benefit from an explanation along these lines.
Yes, do not try this in production yet.
Currently, this library works only on Repl.it for exactly this reason.
This is why you see the headline with "an attempt". :)
1. What's the difference between this and just packaging a function as a pip package? The only one I see is shaving off a few seconds and one fewer command.
2. Where is the code for each plugin running from? Eval'ing random code just seems like a bad idea. You can argue that it's the same as blindly installing a pip package, but this apparently happens every time at runtime, and I see no option to inspect the code beforehand.
3. f and ff are pretty bad function names. I see what you're doing, but why?
4. Dep management issues aren't as common as you think they are.
5. Lots of these plugins are already in the standard library. XML to JSON is already possible quite trivially with a pip package. Not only that, but this seems like a less flexible, more obscure version of that (how do you control how XML maps to JSON?).
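On the XML-to-JSON point: here's a hedged, stdlib-only sketch (the document and tag names are invented) showing how little code a simple XML-to-JSON conversion takes without any new ecosystem. Real projects would more likely reach for a pip package like xmltodict, which handles attributes and repeated tags.

```python
# Minimal XML -> JSON for simple documents, stdlib only (illustrative sketch;
# ignores attributes and repeated sibling tags).
import json
import xml.etree.ElementTree as ET

def to_dict(elem):
    """Recursively turn an Element into nested dicts; leaves become text."""
    children = list(elem)
    if not children:
        return elem.text
    return {child.tag: to_dict(child) for child in children}

xml = "<root><name>ff</name><lang>python</lang></root>"
print(json.dumps({"root": to_dict(ET.fromstring(xml))}))
# prints {"root": {"name": "ff", "lang": "python"}}
```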
I'm giving this a hard pass for me, but looking forward to seeing my doubts cleared. Cheers.
I'm still open to the possibility this could be brilliant.
I'll keep you updated.
A better-suited name that encodes the core feature, "installation-less helper routines", could really help this library become recognized for what it actually provides.
Here's another comment about the API: I would prefer a syntax like
Helping others learn while being rewarded for doing so.
This tool was clearly designed in a vacuum.
This tool is clearly not ready for any use outside of a sandbox or toy.
Think of it like Lego blocks for your next tech toys and, hopefully (in the future), for your next big thing ^_^.
If you love tinkering with new toys then give it a try with Repl.it: https://dev.to/t/functorflow.
If you want to meet cool kids like you, then join our Discord club by subscribing to our email list.
...and follow updates at https://twitter.com/functorflow
Together, we will re-imagine and re-invent how Python apps should be developed.
Lemme know what you think!
It's not until I get to the twitter account that I see that it looks like you're trying to create not just a different python package manager, but a new ecosystem that's not backwards compatible. I don't think this is a good idea. The Python package ecosystem is gigantic and one of the largest, most mature language ecosystems out there. If you're trying to make the way Python apps are developed better, try taking a look at alternate Python package managers like `pipenv`, `poetry` and `anaconda` and figure out what you like and dislike about them. Try making changes to them before you reinvent the wheel.
As it stands right now, this isn't even a wheel. It's an uninspired marketing page on twitter that is not doing a great job of convincing anyone it's better than the alternatives. In fact, it's not even clear that you're even aware of the alternatives.
Even if it were about modules, if anything FunctorFlow makes it worse, because you now have X+1 things creating modules (FunctorFlow has to live somewhere, after all, so it doesn't remove the previous system).
Rather than abuse them, we might perhaps use them as an example of the Dunning-Kruger effect.
With the mass migration of Python developers into Go, this is the last thing that Python needs.
I think that's honestly its killer app. I don't do data science, but I often use jupyter to write short prototypes, and make plots and such.
I've been looking for something that lets me:
1. Set up a jupyter notebook.
2. Run some tests or experiments, pulling in libraries as needed.
3. Ignore it for a few years.
4. Come back and run the same notebook without everything breaking.
If this could patch into Python's native import mechanism and let me specify the repo version as today's date, it'd be great.
from ff.package_name.whatever import foo
The only issue is that it'd be really space-inefficient: my DS venvs clock in at 400+ MB each, so having one per sheet would probably quickly become unusable. That's why I thought of some sort of smart system-wide caching akin to Maven/Ivy. But I'd forgotten how complicated Python dependencies (binaries, C code, etc.) are and how little API support pip has.
I think to do this stuff, you need hooks in Jupyter to setup and teardown the venv before it runs the kernel. (And generally Jupyter would want to clean up unused venvs to mitigate the teardown hook not firing.)
> Only issue is that it'd be really space inefficient
A venv is overkill. You can just run the kernel in a regular directory and `pip install --target kernel_dir foo-bar-pkg` to put packages directly in it. As long as Python's import path sees it, third-party libraries will work. This technique is used in the Serverless project to bundle dependencies for use on AWS Lambda.
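The `--target` trick above can be sketched like this. To keep the sketch runnable offline, a hand-written module (`fakepkg`, invented here) stands in for the pip-installed package; in real use you'd run `pip install --target=kernel_dir <pkg>` instead.

```python
# A plain directory instead of a venv: packages installed into it become
# importable once the directory is on sys.path.
import os
import sys
import tempfile

kernel_dir = tempfile.mkdtemp(prefix="kernel_dir_")
# Real use: subprocess.run(["pip", "install", "--target", kernel_dir, "foo-bar-pkg"])
# Offline stand-in for an installed package:
with open(os.path.join(kernel_dir, "fakepkg.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, kernel_dir)  # what the kernel's startup would do
import fakepkg
print(fakepkg.VALUE)  # prints 42
```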
> Not sure if you'd need to find older versions of new direct dependencies to avoid conflicts.
Curation is a solution to this. Stackage is popular in the Haskell community; they build a consistent version set of everything every night and curate stable releases periodically.
With curation, a date and a set of top-level packages is enough to pin your dependencies.
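That claim can be illustrated with a toy model (all package names and version numbers here are invented): because a dated snapshot maps every package in the set to exactly one blessed version, a date plus your top-level packages fully determines the pin set.

```python
# Toy model of Stackage-style curation: one consistent version set per date.
SNAPSHOTS = {
    "2024-01-15": {"requests": "2.31.0", "urllib3": "2.1.0", "numpy": "1.26.3"},
}

def pin(date, top_level):
    """Date + top-level package names -> exact pins, no resolver needed."""
    snapshot = SNAPSHOTS[date]
    return {pkg: snapshot[pkg] for pkg in top_level}

print(pin("2024-01-15", ["requests", "numpy"]))
# prints {'requests': '2.31.0', 'numpy': '1.26.3'}
```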
Then the language itself offers trivial iteration, possibly in real time with tools like Jupyter, and requires substantially less bootstrapping knowledge than something like C/C++, which creates a low barrier to entry for non-programmers, who are less likely to shoot themselves in the foot with memory management and such. And if you're concerned about performance, most of the standard data science packages are just wrapped C/C++ anyway.
Sure, it's not perfect, but it's practically the lingua franca of data science right now, and it fits the role quite well.
Did you read my comment?