
First of all, this is a cool looking tool. Useful, well-organized, easily-accessed information is always a wonderful thing.

I think a model like Dash's would work a lot better: rather than opting in to having my code sent up to Kite's servers, I opt in to the packages I want indexes for. In some cases, as with Node, there are ways to see which packages my project depends on, and those bits of indexed data could then be sent to me.
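A sketch of what I mean for a Node project (hypothetical; the per-package downloadable indexes and the helper below are my own illustration, not Kite's actual design): read the dependency list out of package.json and fetch pre-built indexes for just those packages, so no source ever leaves the machine.

    # Hypothetical sketch: derive the set of indexes to download from the
    # project's own dependency manifest, instead of uploading any code.
    import json

    def wanted_indexes(package_json_path):
        with open(package_json_path) as f:
            manifest = json.load(f)
        deps = {**manifest.get("dependencies", {}),
                **manifest.get("devDependencies", {})}
        return sorted(deps)  # e.g. ['express', 'lodash']

    # assumes a package.json exists in the current directory
    print(wanted_indexes("package.json"))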

My computer has a 512GB SSD. I could devote 10GB to indexes of libraries and my own code without blinking. The argument that it's too much data and therefore belongs in the cloud doesn't seem to hold up.

Also, there are cases where I'm not online... this is one of the great things about Dash. I have fast access to library docs anywhere I happen to be.




Yes, but how much CPU and memory are you willing to spare? Parsing is quite CPU-intensive (hence the CPU drag you often see when IDEs start indexing), and type inference involves a lot of unpredictable lookups all over the index, so much of it needs to be in memory to get reasonable performance (yes, this is still true if we're talking SSD).

To see why: when you type "x.foo()" we need to run type inference on the complete data flow chain that produced the value "x", so that we know which particular "foo" you're using. Throughout this analysis we may also need to know a lot about the Python libraries you're using, since you may be passing values into and out of arbitrary third-party libraries. If each of the steps in this chain triggered an SSD read, you'd often have a multi-second lag between hitting a key and seeing the result.
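To make that concrete, here's a toy Python chain (all names made up) where resolving the final attribute requires inferring types back through every step, including a library call whose return type the engine has to know:

    # Toy example: to complete on `x.`, the engine must infer the type of x
    # through the whole chain that produced it.
    import json

    def load(text):
        return json.loads(text)        # library call: returns a dict here

    def normalize(data):
        return {k.lower(): v for k, v in data.items()}  # dict -> dict

    x = normalize(load('{"Name": "kite"}'))
    print(x.keys())  # resolving `keys` means inferring `dict` three steps
                     # back; each step is a potential index lookup (RAM vs SSD)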


My editor had no problem indexing all of Chromium (a fairly large project). It also indexed external libraries. It indexes new code as you write it, so if you type a function foo, the next time you type foo you get help immediately. It added the standard libraries by default, and you can add any other library (for example, I have it indexing Unity3D's Mono libraries).

I didn't notice more than a 100-200ms delay in seeing the result (and that happened in other threads, so it had no effect on my editing). About the same as I'd expect from a round trip over the internet. It shows both help at the cursor and definitions and references in another pane in that time.

It doesn't look as slick as Kite, but it does suggest it's possible to do this all locally. If nothing else, you at least have some context (the language), so you don't have to search all the data, only the data relevant to that language. You even know where in the syntax I am, so you know when to search identifiers and which subset of identifiers to search.
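The incremental part is cheap to approximate, too. Here's a rough sketch (a toy illustration, not my editor's actual implementation) that re-parses only the changed file with Python's stdlib ast module and refreshes just its symbols, which is why a newly typed foo can be completable immediately:

    # Rough sketch of incremental local indexing: re-parse just the file
    # that changed and replace only its entry in the index.
    import ast

    def index_file(path, index):
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        index[path] = {node.name for node in ast.walk(tree)
                       if isinstance(node, (ast.FunctionDef, ast.ClassDef))}

    index = {}
    index_file("example.py", index)  # assumes example.py exists
    print(index)  # e.g. {'example.py': {'foo', 'SomeClass'}}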

On top of that, you're basically going to have me send a gig of source up to you to index something like Chromium, which will take hours on my crappy connection.

Let me be clear: I think Kite looks amazing and I'd be happy to pay for it if it were local. Maybe you download the DB to my machine. I'm not nearly as comfortable with you reading my terminal, though. I'm sure you can turn that feature off, but that's a feature I liked. Turning features off = less interesting.


What editor?


I'm not the OP, but Eclipse does quite well indexing fairly large projects.


This argument seems decent enough, but a couple of things come to mind.

1. Wouldn't multiple HTTP requests be just as slow? (Especially if you're on a slow cellular connection, which I am at a coffee shop.) Are there any benchmark tests you've done? See the rough timing sketch at the end of this comment.

2. Is there a reason you couldn't select libraries to download at the start of your project? It's not like you switch libraries that much, especially after the initial exploration. I don't think it'd break the workflow too much to check some boxes or pick a few packages; I do that already in Sublime.

3. To allay concerns about storage for some, couldn't you dump the data after some point? It seems like to be useful you'd only need it while someone is coding, and certainly not long term.

Just some thoughts, would love to get a response. Like others have said, I love the idea, but especially at my work, I don't think I could actually get permission, which puts me in a grey area.
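On point 1, here's the kind of quick-and-dirty comparison I mean (numbers will obviously depend on your connection, and the remote half needs network access):

    # Crude latency comparison: a local in-memory lookup vs. one HTTP
    # round trip per completion request.
    import time, urllib.request

    index = {"foo": "def foo(x): ..."}

    t0 = time.perf_counter()
    index.get("foo")
    local = time.perf_counter() - t0

    t0 = time.perf_counter()
    urllib.request.urlopen("https://example.com").read(1)  # needs network
    remote = time.perf_counter() - t0

    print(f"local: {local * 1e6:.1f} us, remote: {remote * 1e3:.1f} ms")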


In addition to what the others said, I'll note that PyCharm is already doing these things, and there isn't a multi-second lag between hitting a key and seeing the completions. It is also doing type inference. And sure, there's a bit of lag when it needs to do a full index, but that shouldn't happen often.


All of it. That's why I run an X99 system with 32GB of 3200MHz RAM at home, and an i7-6700 with 16GB of 2866MHz RAM in my laptop.

Computational power is so insanely cheap these days that it doesn't make any sense not to throw as much power at your programming environment as you can, IMO.


Those aren't insanely cheap computers - those are top-of-the-line machines - so you can't really design an application to require that kind of hardware to work effectively and expect the user response to be good.

But you're right, that kind of computing power is cheap compared to what you can do with it. If the $1000 in lifetime cost that separates your system from a baseline enables you to be a few percent more productive, it will quickly pay for itself.


That hardly seems necessary. Only so many objects have a foo method to find. With free cores, you could search from both ends and meet in the middle.
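A minimal sketch of what I mean (all names hypothetical): intersect the set of indexed types that define foo with the coarse set of types a shallow forward analysis allows for x, instead of fully inferring x first.

    # Hypothetical sketch: resolve `x.foo` by intersecting two cheap sets
    # computed from opposite ends of the problem.
    def candidates_with_method(index, method):
        # backward: every indexed type that defines `method`
        return {t for t, methods in index.items() if method in methods}

    index = {"Response": {"json", "foo"}, "dict": {"keys"}, "Widget": {"foo"}}
    hints = {"Response", "dict"}  # forward: coarse types inferred for x

    print(candidates_with_method(index, "foo") & hints)  # {'Response'}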


> To see why: when you type "x.foo()" we need to run type inference on the complete data flow chain that produced the value "x", so that we know which particular "foo" you're using. Throughout this analysis we may also need to know a lot about the Python libraries you're using, since you may be passing values into and out of arbitrary third-party libraries.

I don't understand. The libraries I'm using need to be on my machine anyway. If you're doing type inference, there's no need to redo the whole process each time: you incrementally add type information and keep it in an index in RAM and on disk. Couldn't there be a hybrid approach, where your servers are contacted only for libraries that are searched over but not yet imported?
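Something like this hybrid cache is what I have in mind (entirely hypothetical, not Kite's API): completions are served from a local incremental index, and the remote service is consulted once per library that isn't indexed locally yet.

    # Hypothetical hybrid index: local and incremental, with a one-time
    # remote fallback per unindexed library.
    class HybridIndex:
        def __init__(self, fetch_remote):
            self.local = {}              # module -> {symbol: type_info}
            self.fetch_remote = fetch_remote

        def add(self, module, symbols):
            # incremental update as files are parsed; no full re-index
            self.local.setdefault(module, {}).update(symbols)

        def complete(self, module, prefix):
            if module not in self.local:
                self.add(module, self.fetch_remote(module))  # remote, once
            return sorted(s for s in self.local[module]
                          if s.startswith(prefix))

    idx = HybridIndex(lambda m: {"get": "function", "post": "function"})
    idx.add("mylib", {"parse": "function"})
    print(idx.complete("mylib", "pa"))    # ['parse']  (local hit)
    print(idx.complete("requests", "g"))  # ['get']    (fetched once, cached)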


> Yes but how much CPU and memory are you willing to spare?

Hell, I'm using 50% of my CPU to have 60fps animated 3D fractals as my wallpaper. Surely I can spare some for parsing.


I think this really makes a case for this being best as an open source API, i.e. something anyone could access and something to which anyone could contribute.

I'd personally prefer not having to manage the indexes on any of my computers.

Regardless of where the info is stored, it should be possible for the client to run locally, and access your code, but not actually send anything more revealing than what you're already sending to, e.g. Google.


I use Zeal. Local. Open source. Uses Dash's docsets. Has basic integration with Sublime.

I have a feeling a lot of the functionality of Kite will eventually make its way into Zeal-type local tools.


Which component of Zeal allows you to pull in your own local code for completion?

Zeal definitely works great for popular third party library code and integrates with Sublime, but I haven't seen options for it to integrate with your own libraries or third party libraries that aren't quite big enough to be supported by Zeal directly.


100% agree. I don't think this will ever take off if there isn't an opt-out for pushing local code up.

Far too many companies will never allow employees to do this.


Actually, Kite does not index any directories that you haven't explicitly enabled, so it's really easy to enable it for some but not all of your codebases.


The link to Dash: https://kapeli.com/dash


> My computer has a 512GB SSD. I could devote 10GB to indexes of libraries and my own code without blinking

Mine has 128GB, less than 10GB free.


A 512GB SSD costs somewhere around £100.

That puts the cost of 10GB at about £2.


Assuming you're using a MacBook Pro, it's going to cost at least $300 to upgrade to the 512GB SSD. Compared with £100 = $140 for a 2.5" SSD.

Given I don't need that extra storage, the cost for 10GB to me would be $300. Ouch.


One alternative is a Nifty MiniDrive for ~$30 plus a MicroSD card. I put an extra 128GB in my rMBP that way.

http://minidrive.bynifty.com/


Maybe you should opt for a more serviceable notebook, then?


Rolls eyes. Thanks for the snark. - The many iOS and Mac developers on HN


I wasn't being snarky, really. I was suggesting that if upgradeability is on your list of priorities, then you would be better off with another notebook rather than a MacBook.


Someone using Apple products really shouldn't complain about prices, though. It's their choice to pay three times the market price.


Well, you bought into Apple's system. They almost always overcharge for hardware.


Do you work as a developer and make money with your machine?


That's for 2.5" SATA SSDs. Some people have embedded PCIe drives, and on some Macs they are soldered. Regardless, reserving 10GB for offline help isn't that bad.


AFAIK the point about Macs is untrue.

While the RAM in many Macs is soldered into the motherboard, SSDs are not (they are often proprietary, though).


These days they're mostly (possibly always) flash on the motherboard. AFAIK, chances are the only way you'll find a removable SSD in a Mac is if its stock spinning-platter drive was replaced.


That's only true for the MacBook, which isn't a suitable development machine anyway. In both the current MacBook Pro and MacBook Air the storage is a PCIe SSD.


I said some Macs (thinking about the 12" retina) but I think that's where they are moving.


Not really willing to change the SSD that came with my 2015 Mac.


Just wait for this summer: 3D NAND is shaking things up in the SSD world. Platter drives might even become obsolete by next year.


That's assuming you use all of the 512GB. If I didn't otherwise need the extra space, then that 10GB would cost me the whole $100.


And are you a developer or a consumer?


Developer. I don't care about space because most of my stuff is cloud-based.



