
Apparently touch responsiveness is improved in 2.3. However, this article doesn't really explain what's wrong with it. I've been doing latency benchmarks for touch recently, and the highest latency I've seen for touch delivery is 10ms. Now 10ms is pretty terrible for what should be a very straightforward thing, but on the other hand it's only about 2/3 of a frame at 60 FPS.

The argument in favour of a GPU-accelerated GUI which is not mentioned here is battery life. I haven't been testing this side of things, but just because a CPU can do a 60 FPS GUI doesn't mean it should. Deferring the work to the GPU would certainly reduce power consumption while navigating a UI. Whether that saving is non-negligible in terms of overall battery life, I'm not sure.




User: I think this feels laggy.

Developer: No you don't. I have the benchmarks to prove it.


You're seeing sub-10 ms response times on projected-capacitive screens? Care to share what you're analyzing? A Cypress PSoC needs a good 20-30 ms to get a lock.


That's interesting. I'm not measuring touches; I'm injecting events into the Linux kernel by writing to /dev/input/event? and measuring how long it takes for them to reach the application. That's because I'm working on software optimisation, so I'm interested in the overhead the framework adds to event processing rather than the hardware latency.
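
The injection side looks roughly like the sketch below (a minimal sketch: /dev/input/event2 and the coordinates are placeholder assumptions, not anything specific to my setup). The receiving application records its own timestamp on delivery, and the difference between the two is the framework's processing overhead.

    /* Minimal sketch: inject a touch event into an evdev node and
     * timestamp it. Assumes /dev/input/event2 is the touchscreen and
     * uses placeholder coordinates; adjust for your device. */
    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/input/event2", O_WRONLY);  /* assumed node */
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev[3];
        memset(ev, 0, sizeof(ev));
        ev[0].type = EV_ABS; ev[0].code = ABS_X; ev[0].value = 100;
        ev[1].type = EV_ABS; ev[1].code = ABS_Y; ev[1].value = 200;
        ev[2].type = EV_SYN; ev[2].code = SYN_REPORT;  /* flush packet */

        struct timespec t0;
        clock_gettime(CLOCK_MONOTONIC, &t0);  /* send-side timestamp */
        write(fd, ev, sizeof(ev));

        printf("injected at %ld.%09ld\n", (long)t0.tv_sec, t0.tv_nsec);
        close(fd);
        return 0;
    }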


> The argument in favour of a GPU-accelerated GUI which is not mentioned here is battery life. I haven't been testing this side of things, but just because a CPU can do a 60 FPS GUI doesn't mean it should. Deferring the work to the GPU would certainly reduce power consumption while navigating a UI.

Both Charles Ying of Satine and wzdd seem sympathetic to this notion that the GPU is a power-saving device. Surely it is when doing complex work. On the other hand, most smartphone usage consists of relatively simple screens and basic transitions, if any. If it's only going to take 3ms of CPU work to run a 150ms animation, by all means figure out how to keep the OS from interrupting the high-priority graphics task and make the CPU do it, particularly if the GPU would have to be on across the entire 150ms. If not, what is the cost of copying the graphical content between the GPU and the CPU? How much CPU does it take to initialize and set up the GPU for this extremely simple task you ask of it? Is the CPU going to be able to sleep while the GPU is running the animation, and if not, what kind of context-switching and control costs are going to be imposed on the CPU to manage the graphics state?
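
To put rough numbers on that tradeoff, a sketch with made-up, purely illustrative power draws (not measurements from any real SoC):

    /* Illustrative comparison: 3 ms of CPU work vs. keeping the GPU
     * powered for a full 150 ms animation. The mW figures are invented
     * for the sake of the arithmetic, not measured. */
    #include <stdio.h>

    int main(void)
    {
        const double cpu_mw = 400.0, cpu_ms = 3.0;    /* assumed draw */
        const double gpu_mw = 200.0, gpu_ms = 150.0;  /* assumed draw */

        printf("CPU path: %.1f mJ\n", cpu_mw * cpu_ms / 1000.0);  /* 1.2 mJ */
        printf("GPU path: %.1f mJ\n", gpu_mw * gpu_ms / 1000.0);  /* 30.0 mJ */
        return 0;
    }

Even if the GPU draws half the power, having to stay awake for the whole animation can swamp a brief burst of CPU work.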

We've barely begun to see multi-core cell phone GPUs, but the point of the GPU is basically to run lots of work at a nice low clock rate where voltages can be dropped. If you don't have a lot of work to do, there's really no sense waking up the GPU and having its extremely dumb, poorly-branching cores chug away at figuring out, say, the SVG animations that a semi-competent ARM core could knock off right quick. You're just wasting power turning on the GPU.

Smoothness and polish, I suspect, are much more a question of resource allocation than CPU capability, particularly when you've got a sub-640x480 surface and a 1GHz core.


Are those 10ms measured in a realistic app/usage scenario? I wouldn't think users would complain about 10ms; isn't that below the threshold of perception?


A delay 2/3 of a frame long means there's a 2/3 chance that you'll draw a frame that doesn't match the current finger position, followed by a frame that catches up more than it should have had to.

The ~40ms input lag shown by most desktop IPS LCD monitors is noticeable even though it falls below human reaction time. On a touch display, it should be even easier to notice input lag, because you can see your finger moving and the screen around it failing to keep up. 10ms of sensor lag may be below the threshold of perception in circumstances like this, but probably not by much, and if you add even a few milliseconds of processing lag after your app receives the touch event, you'll be behind by a frame.
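
To make the frame arithmetic concrete, a back-of-envelope sketch, assuming touch events land uniformly at random within the frame period:

    /* If an event arrives within `latency` ms of the next vsync, it
     * misses that frame, so the miss probability is roughly
     * latency / frame_period (assuming uniform arrival times). */
    #include <stdio.h>

    int main(void)
    {
        const double frame_ms = 1000.0 / 60.0;  /* 60 FPS -> ~16.7 ms */
        const double latency_ms = 10.0;         /* measured delivery latency */

        printf("frame period: %.1f ms\n", frame_ms);
        printf("miss probability: %.0f%%\n", 100.0 * latency_ms / frame_ms);
        /* prints ~60%, i.e. roughly the 2/3 figure above */
        return 0;
    }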


I work with digital audio/MIDI, and a latency of 15-20 ms between pressing a key and hearing the sound is easily audible, making it near impossible to play expressive music. 10ms can be felt and is annoying; under that is acceptable. I usually keep latency down to 5ms, at most 7ms, only letting it increase when responsiveness is not utterly essential, e.g. during mixing.
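
For anyone wondering where numbers like these come from: audio latency is dominated by buffer size divided by sample rate. A sketch with illustrative settings (48 kHz is an assumption; real chains add driver and converter overhead on top):

    /* Buffer latency = frames / sample rate. These settings are
     * illustrative, not from any particular audio interface. */
    #include <stdio.h>

    int main(void)
    {
        const double rate_hz = 48000.0;
        const int buffers[] = { 64, 128, 256, 512, 1024 };
        const int n = sizeof(buffers) / sizeof(buffers[0]);

        for (int i = 0; i < n; i++)
            printf("%4d frames @ 48 kHz -> %5.1f ms\n",
                   buffers[i], 1000.0 * buffers[i] / rate_hz);
        /* 256 frames is ~5.3 ms, playable; 1024 is ~21.3 ms,
         * tolerable only when mixing */
        return 0;
    }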


Actually, I wrote about battery life at the very end of the article, and I already discussed it in a previous post. I certainly feel that battery life is one of the most important issues in mobile; hence the focus on "efficiency".

As for touch responsiveness, I don't have enough current knowledge of Android's situation and touch hardware in general to write on that topic.


This may be related to Linux's overall interactivity scheduling. There has been a running debate about this on the Linux kernel mailing list for years: the Linux scheduler is efficient for server workloads and compiles, but it is not as good for interactivity.

Very recent kernels have a new scheduler with which people are anecdotally reporting better interactivity. Perhaps this will help the situation.
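
Independent of which scheduler wins that debate, one common mitigation on Linux is to move the render/UI thread into a real-time scheduling class so background work can't preempt it mid-frame. A sketch, assuming root or CAP_SYS_NICE, with an arbitrary priority value:

    /* Move the calling thread to SCHED_FIFO so SCHED_OTHER background
     * tasks can't preempt it. Priority 50 is an arbitrary example;
     * requires root or CAP_SYS_NICE. */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };

        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        /* ... render loop would run here, now at real-time priority */
        return 0;
    }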



