I think a model like Dash would work a lot better: rather than opting in to having my code sent up to Kite's servers, I opt in to the packages I'm interested in having indexes for. In some cases, as with node, there are ways to detect which packages my project depends on, and just those bits of indexed data could be sent to me.
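As a rough sketch of the node case (package.json is real; everything else here is just illustration, not how Kite or Dash actually works):

    import json
    from pathlib import Path

    # Sketch only: list a node project's dependencies from package.json so a
    # tool could download indexes for just those packages.
    def project_dependencies(project_root="."):
        manifest = json.loads(Path(project_root, "package.json").read_text())
        deps = dict(manifest.get("dependencies", {}))
        deps.update(manifest.get("devDependencies", {}))
        return deps

    for name, version in project_dependencies().items():
        print("would fetch index for %s@%s" % (name, version))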
My computer has a 512GB SSD. I could devote 10GB to indexes of libraries and my own code without blinking. The argument that it's too much data and therefore belongs in the cloud doesn't seem to hold up.
Also, there are cases where I'm not online... this is one of the great things about Dash. I have fast access to library docs anywhere I happen to be.
To see why: when you type "x.foo()" we need to run type inference on the complete data-flow chain that produced the value "x", so that we know which particular "foo" you're using. Throughout this analysis we may also need to know a lot about the Python libraries you're using, since you may be passing values into and out of arbitrary third party libraries. If each of the steps in this chain triggered an SSD read, you'd often have a multi-second lag between hitting a key and seeing the result.
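For example (a toy sketch, not our actual analysis):

    # Toy example: which `foo` this call resolves to depends on inferring
    # types through every step that produced `x`.
    class CsvReader:
        def foo(self):
            return "csv-specific behaviour"

    class JsonReader:
        def foo(self):
            return "json-specific behaviour"

    def open_reader(path):
        # Inferring the return type means analysing this branch...
        return CsvReader() if path.endswith(".csv") else JsonReader()

    def passthrough(reader):
        # ...and following the value through intermediate calls, which in
        # real code may cross into third party libraries.
        return reader

    x = passthrough(open_reader("data.csv"))
    print(x.foo())  # the engine must infer x is a CsvReader to pick the right foo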
I didn't notice more than a 100-200ms delay in seeing the result (which happened on other threads, so it had no effect on my editing). That's about the same as I'd expect from a round trip over the internet. It shows both help at the cursor and definitions and references in another pane in that time.
It doesn't look as slick as Kite, but it does suggest it's possible to do all of this locally. If nothing else, you at least have some context (the language), so you don't have to search all the data, only the data relevant to that language. You even know where in the code I am, so you know when to search identifiers and which subset of identifiers to search.
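Something like this crude sketch, with made-up data, is all I'm imagining:

    # Crude sketch: knowing the language and the enclosing scope means
    # searching only a small subset of identifiers, not the whole index.
    index = {
        "python": {
            "module:os": ["path", "getcwd", "listdir"],
            "module:json": ["dumps", "loads", "JSONDecoder"],
        },
    }

    def complete(language, scope, prefix):
        candidates = index.get(language, {}).get(scope, [])
        return [name for name in candidates if name.startswith(prefix)]

    print(complete("python", "module:os", "li"))  # ['listdir']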
On top of that, you're basically going to have me send a gig of source up to you to index something like Chromium, which will take hours on my crappy connection.
Let me be clear: I think Kite looks amazing and I'd be happy to pay for it if it were local. Maybe you download the DB to my machine. I'm not nearly as comfortable with you reading my terminal, though. I'm sure you can turn that feature off, but it's a feature I liked. Turning features off = less interesting.
1. Wouldn't multiple HTTP requests be just as slow? (Especially on a slow cellular connection, as I am at a coffee shop.) Are there any benchmark tests you've done?
2. Is there a reason you couldn't select libraries to download at the start of a project? It's not like you switch libraries that often, especially after initial exploration. I don't think it'd break the workflow too much to check some boxes or pick a few packages; I already do that in Sublime.
3. To allay concerns about storage for some, couldn't you dump the data after some point? It seems like, to be useful, you'd only need it while someone is actively coding, and certainly not long term (a rough sketch below).
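Even a dumb idle-time eviction policy would keep the footprint bounded; a sketch with arbitrary numbers:

    import time

    # Sketch: drop package indexes that haven't been touched recently, so
    # disk is only used while someone is actually coding against them.
    class IndexCache:
        def __init__(self, max_idle_seconds=7 * 24 * 3600):  # a week, arbitrary
            self.max_idle = max_idle_seconds
            self.entries = {}  # package -> (data, last_access_time)

        def put(self, package, data):
            self.entries[package] = (data, time.time())

        def get(self, package):
            if package not in self.entries:
                return None
            data, _ = self.entries[package]
            self.entries[package] = (data, time.time())  # refresh on use
            return data

        def evict_stale(self):
            cutoff = time.time() - self.max_idle
            for package in [p for p, (_, t) in self.entries.items() if t < cutoff]:
                del self.entries[package]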
Just some thoughts; I'd love to get a response. Like others have said, I love the idea, but especially at my work I don't think I could actually get permission, which puts me in a grey area.
Computational power is so insanely cheap these days that it doesn't make any sense not to throw as much power at your programming environment as you can, IMO.
But you're right, that kind of computing power is cheap compared to what you can do with it. If the $1000 in lifetime cost that separates your system from a baseline enables you to be a few percent more productive, it will quickly pay for itself.
I don't understand. The libraries I'm using need to be on my machine anyway. And if you're doing type inference, there's no need to redo the whole process each time: you incrementally add type information and keep it in an index in RAM and on disk. Couldn't there be a hybrid approach, where your servers are contacted only for libraries that are searched but not yet imported?
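Roughly what I have in mind, with entirely made-up names (nothing here is Kite's actual API):

    # Hypothetical hybrid lookup: imported libraries are answered from a
    # local index; only libraries that aren't indexed yet trigger a
    # network fetch, once.
    local_index = {"os.path.join": "(*paths) -> str"}

    def fetch_from_server(symbol):
        # Stand-in for a network call, purely illustrative.
        return "<remote signature for %s>" % symbol

    def lookup(symbol):
        if symbol in local_index:
            return local_index[symbol]          # fast, offline path
        signature = fetch_from_server(symbol)   # network only on a miss
        local_index[symbol] = signature         # cache incrementally
        return signature

    print(lookup("os.path.join"))   # served locally
    print(lookup("requests.get"))   # fetched once, then cached locally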
Hell, I'm using 50% of my CPU to run 60fps animated 3D fractals as my wallpaper. Surely I can spare some for parsing.
I'd personally prefer not having to manage the indexes on any of my computers.
Regardless of where the info is stored, it should be possible for the client to run locally and access your code without actually sending anything more revealing than what you're already sending to, e.g., Google.
I have a feeling a lot of the functionality of Kite will eventually make its way into Zeal-type local tools.
Zeal definitely works great for popular third party library code and integrates with Sublime, but I haven't seen options for it to integrate with your own libraries or third party libraries that aren't quite big enough to be supported by Zeal directly.
Far too many companies will never allow employees to do this.
Mine has a 128GB SSD, with less than 10GB free.
At consumer SSD prices (roughly £0.20/GB), that puts the cost of 10GB at about £2.
Given I don't otherwise need extra storage, getting 10GB more would mean paying for a whole storage upgrade, so the cost to me would be $300. Ouch.
While the RAM in many Macs is soldered onto the motherboard, the SSDs are not (though they're often proprietary).