I remember hearing about a Xerox research lab that had a whole room where the walls were all whiteboard.
They had a ceiling-mounted camera that would scan the walls.
They also had a markup language that the camera's OCR software would pick up. So, for example, if they drew a P in a square, it would print that wall to the printer.
If they wrote an e-mail address in a square, it would e-mail a photo of the whiteboard to that address.
This was 7 or 8 years ago. I wonder if they still use this system.
I heard about it 7 or 8 years ago from people who had been to Xerox PARC and the Cambridge (UK)-based Xerox research centres.
I can find this research paper online, although it appears to be dated 1995. It's not exactly what I was told about, but it seems to be an earlier version.
This is my first time hearing about DirectedEdge: very cool stuff!
I watched the video and it looks like you guys have a great thing going on. I'd like to make a suggestion, if I may: prove that this isn't just for "similar users also bought". Prove that everybody should be doing recommendations for practically anything.
Here's one idea: call the Reddit guys. Use your recommendation engine to recommend articles, subreddits, and users I might like based on my up-votes. Ask them if they would be willing to show "Recommendations Powered by DirectedEdge.com" on there. Hit it out of the park with one partner and you'll have other people lining up to use your platform!
Reddit's initial appeal was supposed to be the recommendation engine. But for me, it never got past being a version of the front page with a different decay algorithm (even to the point that it included stuff I'd read and/or downvoted).
So this would be a little like the Yahoo/Bing deal.
Photographing the whiteboard and emailing the photo(s) to everyone who was at the meeting is a great tool for remembering what was discussed and decided. I'm always surprised at how few people do it. If your whiteboard isn't big enough and you have to erase it halfway through, you do need the presence of mind to take a picture before rubbing it out. Leave the camera by the eraser.
There's software to clean up these pictures (remove glare, correct perspective).
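The core of that cleanup is surprisingly approachable, by the way. A minimal sketch, assuming OpenCV and that you already know the four corners of the board in the photo (the commercial tools detect those automatically):

    import cv2
    import numpy as np

    # Hypothetical corner coordinates of the board in the photo,
    # ordered top-left, top-right, bottom-right, bottom-left.
    corners = np.float32([[120, 80], [1480, 60], [1510, 990], [90, 1010]])

    img = cv2.imread("whiteboard.jpg")
    w, h = 1400, 1000  # output size, roughly the board's aspect ratio

    # Correct perspective: map the skewed quadrilateral to a rectangle.
    target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(corners, target)
    flat = cv2.warpPerspective(img, M, (w, h))

    # Flatten glare and uneven lighting: adaptive thresholding compares
    # each pixel to its local neighborhood instead of a global cutoff.
    gray = cv2.cvtColor(flat, cv2.COLOR_BGR2GRAY)
    clean = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 21, 10)

    cv2.imwrite("whiteboard_clean.png", clean)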
I've thought about software that goes further... I'm thinking, for instance: vectorize the image, OCR the text, convert sloppily drawn lines into actual arrows. Whiteboard-photo-to-Visio, so to speak. It won't be possible to do this fully automagically, of course, because writing and drawings are often unclear, but a tool could be made that helps the user do it rapidly. Not easy, but not undoable either.
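The sloppy-lines-into-arrows part is where it gets hard, but the first pass is classical computer vision. A rough sketch, again assuming OpenCV, that pulls out the straight-ish strokes a later pass could snap to connectors:

    import math
    import cv2

    # Assumes the cleaned-up, deskewed image from the previous step.
    gray = cv2.imread("whiteboard_clean.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough transform: finds straight segments while
    # tolerating small gaps where the marker skipped.
    segments = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180,
                               threshold=60, minLineLength=80, maxLineGap=15)

    for seg in segments if segments is not None else []:
        x1, y1, x2, y2 = seg[0]
        print("candidate connector from (%d, %d) to (%d, %d)" % (x1, y1, x2, y2))

Deciding which end gets an arrowhead, and which boxes a connector actually joins, is exactly where the assisted part would come in.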
Would you think there's actually a business for software that does this? Or is it just a silly geek "solution looking for a problem"?
At my last startup, I used to be a heavy user of "Whiteboard Photo" software, and I'd definitely pay for software that does what you suggest (assuming the right price/features/value proposition.)
To be perfectly honest, I can't see a use for it. You WANT to have it in handwriting because it reminds you who wrote what, which helps you remember what they said while writing it. Sure, you can't really show these things to outsiders, but I don't think automated tidying would help that much.
For my mISV I wrote PhotoNote to clean up whiteboard images. I wrote it to scratch a personal itch, but I think it's useful and competitive in the market.
I've used this stuff and can heartily recommend it. It's convenient for living spaces and the like where you might not want (or be allowed by the SO/Landlord etc) to put up "real" whiteboard.
Directed Edge is pretty cool. So far, though, I think we've been hearing a lot about their graph database, and not much about how they're actually going to do recommendations. Personally, I think the latter is the harder problem.
To be honest, that's because I'm comfortable dissecting our graph database publicly, but less so our ranking algorithms. There's been far more work put into the recommendations engine than the graph DB, and the beginnings of the engine predate the DB. The original prototypes worked with our Store class, which is the DB abstraction that the engine sees, and were purely in-memory; then there were a number of things that I swapped in to replace the storage layer, eventually settling on our own DB.
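To give a feel for the pattern (a toy Python illustration, not our actual API):

    from abc import ABC, abstractmethod

    # The engine only ever talks to this interface, so the backing
    # store can be swapped without touching the ranking code.
    class Store(ABC):
        @abstractmethod
        def neighbors(self, node):
            """Return the set of nodes linked from node."""

        @abstractmethod
        def add_edge(self, source, target):
            """Record a link from source to target."""

    class MemoryStore(Store):
        """Pure in-memory backend, like the original prototypes."""
        def __init__(self):
            self.edges = {}

        def neighbors(self, node):
            return self.edges.get(node, set())

        def add_edge(self, source, target):
            self.edges.setdefault(source, set()).add(target)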
That said, I do have one big blog post half-written on "the problems of sparse data" that I'll eventually roll out.
That would be very interesting (to me, at least). In particular, I'd be curious to know how you test your inferences. I guess you could run it on the Netflix/GitHub prizes ... but in general, I'm finding that getting reliable data on which to validate algos is one of the biggest challenges.
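One trick that helps when you don't have labeled data: hide a slice of the edges you do have and check whether the engine rediscovers them. A rough sketch, where recommend() is a stand-in for whatever engine is under test:

    import random

    def hit_rate_at_k(likes, recommend, k=10, seed=0):
        # likes: dict of user -> set of liked items.
        # recommend(user, visible_likes, k): the engine under test.
        # For each user, hide one liked item and see whether the
        # engine puts it back into its top-k list.
        rng = random.Random(seed)
        hits = trials = 0
        for user, items in likes.items():
            if len(items) < 2:
                continue  # nothing left to predict from otherwise
            hidden = rng.choice(sorted(items))
            visible = {u: (s - {hidden} if u == user else s)
                       for u, s in likes.items()}
            if hidden in recommend(user, visible, k):
                hits += 1
            trials += 1
        return hits / trials if trials else 0.0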
In any case, I'd be interested to hear about the math, since there are about 50 different ways of doing this stuff. Even just vague stuff like "we might use a Boltzmann machine".
There's some interesting work done on optimal stimulus selection (MacKay has a paper on it, and there's one in a neuroscience setting by Paninski). The idea is that you generate data for which the response will give you maximum information. So you figure out which edges would be most valuable to your algo's predictions (using information theory) and then make recommendations based on hypotheses about those edges, which are later confirmed/denied by user behavior. This gives you an optimal learning loop.
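In its crudest form that reduces to uncertainty sampling: if the model assigns each candidate edge a probability that the user will like the item, the outcome that teaches you the most is the one you're least sure about. A toy version, with made-up probabilities standing in for a real model's output:

    import math

    def entropy(p):
        # Shannon entropy (bits) of a Bernoulli outcome; maximal at p = 0.5.
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    # Hypothetical model output: candidate item -> predicted like-probability.
    predictions = {"item_a": 0.95, "item_b": 0.50, "item_c": 0.10}

    # Recommending item_b is most informative whichever way the user goes.
    # The MacKay/Paninski criteria generalize this to expected information
    # gain about the model's parameters, not just the single outcome.
    best = max(predictions, key=lambda item: entropy(predictions[item]))
    print(best)  # -> item_b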
The vision sounds a lot like Loomia... though it looks like they've come to focus on content/publishing sites; retail, or indeed "anything", seemed to be the earlier focus.