
Or perhaps it is like the case of youngsters and mp3s, where studies have found that they actually prefer the "sound" of compressed music, compression artifacts and all. If a hi-fi enthusiast (~tunesmith) said that uncompressed music sounds better, you wouldn't shoot his argument down by saying that improved fidelity and dynamics just allow you to hear that the music was recorded and post-processed in a studio.

What I'm trying to say with this mp3 analogy is that it is very much a matter of habit and preference. Technically lower-quality video does not lead to better immersion unless it is something you have learned to expect.

-----


I did, a year and a half ago. I decided upon the FX-8320, as for me more cores were better. In my PhD I'm doing automatic parameter tuning of optimization algorithms, and I thought it would be handy to have as many cores as possible at home as well (the actual runs are of course done on a monstrous computation server somewhere in the basement of the university). Having 8 separate cores has proven handy every once in a while. Just yesterday I did some parameter tuning on a machine vision side project.
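
To give a concrete idea of why the core count mattered to me (a toy sketch, not my actual tuning code; the parameter names and objective below are made up), independent parameter evaluations parallelize almost for free across cores:

    # Toy example: evaluate parameter configurations in parallel,
    # one worker per core of the FX-8320.
    from multiprocessing import Pool
    from itertools import product

    def evaluate(params):
        # Placeholder for one run of the optimization algorithm with
        # the given (made-up) parameters; returns (params, score).
        mutation_rate, population_size = params
        return params, (mutation_rate - 0.2) ** 2 + 1.0 / population_size

    if __name__ == "__main__":
        grid = list(product([0.1, 0.2, 0.4], [50, 100, 200, 400]))
        with Pool(processes=8) as pool:
            results = pool.map(evaluate, grid)
        print(min(results, key=lambda r: r[1]))  # best (lowest) score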

Back then the difference in bang-per-buck was even more striking than it is now: http://paulisageek.com/compare/cpu/

Also, by choosing AMD I avoided the minefield of missing virtualization and overclocking support on some mid-level Intel models.

-----


Greg Anderson [1] from the Arcticstartup summed it up nicely: "My Summer Car ... is better described as bored Finnish guy in the 90's simulator."

[1] http://arcticstartup.com/2015/04/14/my-summer-car-looks-like...

-----


The idea of a digital assistant does not require it to be tied to any specific device. It can live in the cloud and follow you anywhere, the entry point being whatever device you happen to be using (and logged on to). Think of Jarvis from the Iron Man movies. If I understood the aim of Viv correctly, they are trying to do this by not tying the service to any given platform but by being the platform. This is what makes it so powerful and intriguing (and scary if you do not have control of the assistant).

-----


I've been pondering this for some time, and even more after the introduction of Siri, Google Now, and others. I've come to the conclusion that in the near future (artificially) intelligent personal digital assistants will become THE way the majority of people interact with the digital world. It will of course depend on how good the assistants are and how quickly they evolve, but the potential in productivity gains and in transforming the way we live is too big an opportunity to miss. Imagine having a (real) person who would take care of simplifying and helping you with your digital and real life (calendar, meetings, notes, email, flights and other travel, anniversaries, even small research tasks like whether to buy this or that). It would help you immensely by saving time on the fluff of our daily lives and by allowing you to focus on what you deem important.

Therefore I think one of the most important open source projects of the next decade will be to build such a learning, intelligent personal digital assistant. A Linux of our generation, if you will. Otherwise this opportunity will be lost to advertisers and others who have an interest in steering the movements and behavior of the masses. Does anyone know if such a FOSS initiative already exists?

-----


Another possibility is new, more efficient ways to interact with your computer.

Direct brain interface maybe.

Pretending that your computer is a human is actually NOT an efficient way of communicating with it. Moreover, it is unreliable and dangerous because those "intelligent" assistants lack intelligence and may do something a human would never do. So one cannot (should not) use them for any mission-critical tasks.

For now, it would be nice to see at least a non-painful way of editing text on a tablet, or something better than pen and paper (or an electronic pen and a tablet) for quickly expressing your thoughts in graphical form.

-----


A direct brain interface requires invasive surgery and is still a long way off. Its constant need for recalibration basically precludes any useful utilization, much less commercialization on a massive scale.

Even if the AI part were mature, I do not think speaking at length (in a public setting) to tell your computer what to do is desirable.

I think our best bet is a device that reads neural signals going to our voice box. We can "speak" without vocalization and still have the device pick up our words. I think they call it EEG sensing of imagined speech.

-----


Well, that may improve in the future. The main property of inventions is that they are often completely unexpected.

Maybe there is a wireless way.

Maybe it will be compulsory to have a DBI (direct brain interface) connector in your skull in the future (I hope not).

-----


Videos have been successfully reconstructed from fMRI of people's brains. This will certainly improve as fMRIs improve and more advanced machine learning is used to decode the data.

There is also fNIR, which can potentially do the same job, if I understand correctly.

-----


What makes you think that it would take less sophisticated AI to parse the signals from your brain directly than it takes for it to parse natural language?

-----


Actually, a direct brain interface and AI are kind of orthogonal. There could be a direct brain interface that replaces mouse and touch interaction, for example, in which case the interaction method would be purely mechanical. Passing non-spoken words or thoughts to the computer instead of speaking is a different thing.

-----


Because the brain contains extremely useful representations that are difficult to learn from scratch. Data from brain imaging has been used to improve natural language processing algorithms.

-----


Great. Does anyone actually use Siri? I still don't see people talking to their phone while walking around.

I like the thought of having a smart assistant, but to me, the input mechanic just doesn't work.

Maybe a brain interface would really kick usage off, but I don't think we should expect that within the next 5 years.

-----


I use Siri daily to send hands-free text messages. If you use and state a lot of punctuation when dictating, Siri gets it right 94 percent of the time.

I've been using Siri for this since it first became available, and with iOS 8 I noticed a huge step up in accuracy when dictating text messages.

I also frequently ask Siri, hands free, to play a given iTunes Radio genre, among other things.

-----


I use it where the overhead of dealing with possible speech recognition errors is less than the overhead of doing the same thing manually:

- Setting a reminder

- Setting a timer

- Asking what the weather is today

Particularly when combined with the plugged-in, voice-activated mode, these sorts of operations are pretty convenient.

-----


Re: open source initiative: great idea

There are huge problems though. When I worked at Google in 2013 I used the Knowledge Graph, and the amount of resources behind it, both very talented people and computing power, was enormous. Structured knowledge is a foundation for Google Now, Siri, Wolfram Alpha, Viv, and others. Maintaining ontologies and ingesting data is an expensive endeavor.

That said, there are great resources like DBPedia that could be built on. It is possible that a high-profile (Apache?) project with a lot of corporate support might produce a system that anyone could use. I would like to participate in such a project :-)
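
For instance (a toy sketch; it assumes the SPARQLWrapper Python package and DBpedia's public SPARQL endpoint, and the query is purely illustrative), pulling structured facts an assistant could build on is already quite easy:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Query DBpedia's public endpoint for a couple of facts about a resource.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbr: <http://dbpedia.org/resource/>
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?birthDate ?abstract WHERE {
          dbr:Alan_Turing dbo:birthDate ?birthDate ;
                          dbo:abstract ?abstract .
          FILTER (lang(?abstract) = "en")
        }
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["birthDate"]["value"], row["abstract"]["value"][:80], "...")

The hard (and expensive) part is everything around this: keeping the ontology consistent, ingesting new data, and resolving entities.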

-----


Totally agree. With Siri, Google Now, and Cortana soon controlling most devices, we are in desperate need of a FOSS AI assistant, one that "thinks" outside the "corporate AI scheme". BTW, that would also benefit those using GNU/Linux, *BSD, and other Unices, where the corporate AIs will probably never land anyway.

-----


I'm a bit more pessimistic. I've tried Siri and Google Now, but they just seem to be more involved, less accurate ways to do things I could have done by touch. A good (human) personal assistant should know what I want and execute without involvement on my part, and that requires social cues that are difficult to encode as inputs to machine learning. Until we reach that point, all we have are speech-to-text engines with some services attached, which is not that useful imo.

-----


This used to be absolutely true but is now only mostly true. Setting reminders by saying "Google, remind me to do X at 3pm tomorrow", like you would to a normal person, now works much better than tapping around in a calendar.

Same with quick searches. It's easier to do a voice search than type the terms on a phone, even with suggestions from your keyboard and Google.

-----


Me too, please share on Pastebin or somewhere.

-----


Automating a fleet of trucks has come up a few times in these comments. Many commenters have raised concerns about difficult and special situations.

I have worked with machine vision, and currently I'm doing my PhD in computational logistics (mainly working on automating the deployment of vehicle routing systems). With this background in mind I have given this some thought: what would be needed, at least in the transition phase, is technology that would allow remote drive-by-wire control of trucks in difficult situations (platforms, urban traffic) and weather conditions.

Imagine a system not entirely unlike the unmanned UAVs the US is using, but for the civilian purpose of remotely controlling trucks. One driver could probably handle a dozen or so trucks, because they would drive under full automation at least 90% of the time. In addition, the truck driver could have a normal 9-to-5 job.
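
As a rough sanity check on that ratio (all numbers below are illustrative assumptions, not from any study), a simple binomial estimate shows how often interventions would overlap:

    from math import comb

    # Assume each truck independently needs a remote driver 10% of the time
    # and one operator is assigned n trucks.
    p, n = 0.10, 12

    # Probability that more than one truck demands attention at the same
    # instant, i.e. the single operator becomes a bottleneck.
    p_overload = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (0, 1))
    print(f"P(>1 of {n} trucks need manual control at once) = {p_overload:.2f}")

With these toy numbers the overlap happens roughly a third of the time, so the operators would probably work as a pooled dispatch center rather than each minding a fixed set of trucks.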

Of course there are technical challenges like communication delays, but I'd like to hear your opinion on the feasibility of such a system.

-----


Delay in the control loop is always critical to accurate control. But in the case of a large heavy truck, we already have to live with that (slow acceleration/braking/steering response built in).

-----


> the system tries every setting and graphs the results so it's easy to pick out the best setting. How would that happen? How does the system know what "good" is?

In the research field of parameter tuning we try to answer that question. The field is more focused on optimizing algorithm parameters, but it could be applied to the physical world of robots etc., if it is feasible to automatically repeat the experiment a few dozen to around a hundred times.

If we did as Bret proposed, by logging each experiment we would already have done some "probing" of the parameter space. This, in turn, would allow us to build a statistical model of the phenomenon and then minimize/maximize on that. The resulting parameter configuration would then be evaluated, the model updated, and the process repeated until a satisfactory level of performance was reached.

See for example the recent work from Hutter et al. [1], where they use random forests in parameter tuning to make parameter "goodness" predictions (in order to reduce the number of actual experiments on the target algorithm/robot/whatever).

[1] http://www.cs.ubc.ca/labs/beta/Projects/SMAC/
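
To make the loop above concrete, here is a toy sketch in Python (scikit-learn's RandomForestRegressor as the surrogate, a made-up objective standing in for the real experiment, and plain random search on the model instead of a proper acquisition function):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def run_experiment(params):
        # Placeholder for the expensive real experiment (algorithm run,
        # robot trial, ...); returns a performance score to maximize.
        a, b = params
        return -(a - 0.3) ** 2 - (b - 0.7) ** 2

    rng = np.random.default_rng(0)

    # Initial "probing" of the parameter space, e.g. from logged experiments.
    X = rng.uniform(0, 1, size=(10, 2))
    y = np.array([run_experiment(p) for p in X])

    for _ in range(20):
        # Build a statistical (surrogate) model of the phenomenon.
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

        # Maximize on the model: here just cheap random search.
        candidates = rng.uniform(0, 1, size=(1000, 2))
        best = candidates[np.argmax(model.predict(candidates))]

        # Evaluate the suggested configuration and update the data.
        X = np.vstack([X, best])
        y = np.append(y, run_experiment(best))

    print("Best configuration found:", X[np.argmax(y)])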

-----


Also, I recently found out that the Android app from LinkedIn extracts your Gmail contacts. From what I could gather, you cannot opt out. I was quite annoyed by this. I also see this as a more probable explanation for the contact harvesting than hacking into email accounts.

-----


Do you know this because installing the app requested the permission? The Android security model is all or nothing, up front. If the app would ever want access to anyone's contacts, it is required to demand that permission from every user at installation.

-----


In fact, I just discovered that pumped-storage hydroelectric plants (PSPS) exist, because one was being planned for Finland as well, in an old mine (with a mile-long shaft). PSPS could solve the power storage problem for wind and solar, as long as there are enough suitable locations to use as PSPS reservoirs.
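
As a quick back-of-envelope (the figures below are illustrative guesses, not from the actual plans), a mile-deep shaft stores a surprising amount of energy per reservoir of water:

    # Energy stored by pumping water up a ~1600 m (mile-long) mine shaft.
    rho, g = 1000.0, 9.81      # water density kg/m^3, gravity m/s^2
    head = 1600.0              # m of elevation difference (assumed)
    volume = 2.0e5             # m^3 of upper-reservoir water (assumed)
    efficiency = 0.75          # typical round-trip efficiency for pumped storage

    energy_joules = rho * volume * g * head * efficiency
    print(f"about {energy_joules / 3.6e9:.0f} MWh recoverable")  # J -> MWh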

-----
