It would be relatively straightforward to build a custom chip for browsing the web that doesn't use memory or hard disk.
This sentence really needs a plan to back it up.
1) Building a custom chip is not ever straightforward. I say this as someone who has designed ASICs using appropriate high-level tools such as Verilog.
2) I don't understand how one could customize a chip for browsing the web. The modern web is inseparable from JavaScript, which is a general-purpose programming language and thus requires a general-purpose CPU somewhere. At best one could integrate, say, the NIC with the processor, but this is already commonly done.
3) I fail to see how it's possible for a web browser to operate without access to RAM. Page layout involves complex algorithms which are slow. Therefore their output (graphics) must be cached, or else the user would have to wait several seconds to scroll the page. Perhaps the author means to use a very large (tens of megabytes) on-chip cache, suitable for browsing a single web page, but that would inflate the cost of the chip beyond feasibility.
Heh, I came here with the exact same sentence already in the clipboard.
I agree ... As far as I know, "browsing the web" with modern expectations is Turing-complete, i.e. it needs a device able to execute general purpose programming.
I really wonder what they think they can do to somehow simplify this using a "fixed-function" design. Whatever that is, in this context.
For the Javascript part, you would need a programmable function block. One option would be not to support Javascript (OK in certain parts of the world). You could organize your chip so that there are several fixed function blocks for things like image decoding, video decoding, text rendering and graphic composition, coupled with a programmable function block for the Javascript. The point here being that instead of doing all your computation through a program, you do only as much as you absolutely have to. The parts that you can hardcode, hardcode, to make your chip more green.
Web browsing has been given as an example because it's a commonly used function and on its own it can justify building a gadget like the Chromebook. So, if you have a narrow function that the gadget is going to be used for, you're better off going full custom than programmable function. I'm doing a more complete analysis of what it would take to build such a chip. I'll publish it when it's ready.
One option would be not to support Javascript (OK in certain parts of the world)
Where?
So, if you have a narrow function that the gadget is going to be used for, you're better off going full custom than programmable function.
How much per-unit inefficiency from building small batches of a custom chip, as opposed to many units of a single design, would it take to overcome whatever gains you get when running the device?
If you could identify a large population of users that could be serviced with a narrow usage scenario and they agreed to stick with their function for say a year, you could build a viable solution the fixed-function way. For example, in India, PC penetration is only a few percent. Most of the population is completely technology agnostic. If you could identify the top three uses for a computing gadget for say the farmers of India, you could build a fixed function chip at low enough cost and high enough volume for them which would give them some solution instead of no solution because PCs are too expensive or consume too much power. Users such as these would probably not ask to change their gadget for years.
You can get an ARM system with a 700 MHz CPU, 256 MB of RAM, an HD-capable GPU and USB support that runs off a 300 mA power supply for $25. How low could you realistically get with a custom chip?
For the application that I have in mind, which is low cost computing alternatives for India's tech-isolated millions, I think we could get by with a 320x240 screen, which means each frame is pretty small and 2 frames of total buffering should suffice. At 6 gates per bit and a 16 bit color depth, that's 6 x 16 x 2 x 320 x 240, or about 15 million gates. Assuming the rest of the chip is much smaller, and that one gate occupies roughly a feature-size square on a 180 nm process, chip area is about 15 x 10^6 x 0.18 x 0.18 x 10^-6 mm^2, or roughly 0.5 mm^2. That's a pretty small chip. The only cost we would have to control is the design cost, and over time we could amortize that across, say, a product line.
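To sanity-check that arithmetic, a quick Python sketch (assuming, as above, 6 gates per bit and that one gate occupies a feature-size square; real standard-cell gates are considerably larger, so treat the area as a lower bound):

    # Frame-buffer gate count and die-area estimate from the figures above.
    GATES_PER_BIT = 6        # assumed SRAM cell cost, from this comment
    COLOR_BITS = 16
    FRAMES = 2
    WIDTH, HEIGHT = 320, 240

    gates = GATES_PER_BIT * COLOR_BITS * FRAMES * WIDTH * HEIGHT
    # Assume one gate occupies a 0.18 um x 0.18 um square (optimistic).
    area_mm2 = gates * 0.18 * 0.18 * 1e-6   # um^2 -> mm^2
    print(f"{gates:,} gates, ~{area_mm2:.2f} mm^2")
    # -> 14,745,600 gates, ~0.48 mm^2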
And what evidence do you have for this? We aren't talking about off-the-shelf, pennies per chip PICs here. These things would not only have to recoup the design costs (not cheap for this level of complexity), they would be expensive due to the previously mentioned huge caches (basically RAM on chip) and the fact that you really aren't going to get by with web browsing without something Turing complete.
So when the next new web tech comes out, you've just obsoleted the chip and have to throw them all out. When a new image format arrives, or a new video codec, or a new version of HTML, all those chips are trash. Yeah, that's green.
You would refresh your technology at some rate, say once a year, and try to match it with the rate of the other technology suppliers, just like in the microprocessor case. Your cadence is a little shorter.
My iPad from over two years ago browses the web with the same capabilities as the iPad 4, just a bit slower and lower-resolution. That I should have to throw it away and buy a new one because the web changes is the antithesis of "green."
General purpose computer hardware is an investment in a toxic industry to enable the cheap and clean process of disposable software.
Maybe you do. Some of us who are well beyond "power user" are still operating on hardware from many years ago. It's all about picking (or making) software that doesn't suck.
The longer you stick with your hardware the greener the process gets. Still, software has spoilt us for choice and I agree few people write software that lasts.
The Chromebook is only a reasonable appliance precisely because of the sophisticated programmability of Javascript. So your "one option would be not to support JS" proposal is directly contradictory with your selected example (the Chromebook) of a single-function computing gadget.
More generally, I'll echo the concerns of everyone else here: we seem to be getting greener by using more-general-purpose computing, which necessitates fewer devices and less frequent upgrading. Especially because we now see that once devices reach a certain level of power, we actually buy new ones far less often. (Many people now happily own 5-year-old desktops, not because they cannot afford more, but because they're too LAZY to buy a new one for the small marginal benefits... and laptops are headed slowly in that direction. Cell phones and tablets are still far from that transition.)
On the other hand... your observation about the 3% penetration in India, assuming it's factually true, does sound like SOME kind of an opportunity. Making things radically cheaper is always an important disruption (c.f. the original sense of "disruptive innovation", which doesn't just mean "we beat the competition"). And certainly if you can sell to 0.01% of Indian farmers (say 100k households?), that's a hell of a lot of units over which to amortize the high fixed cost of circuit design and tape-out. Although only if you really can come up with a single design that meets the needs of such a huge and probably diverse market. Sounds very hard.
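To put rough numbers on that amortization (the $1M design-plus-tape-out figure below is purely a placeholder assumption, not a quote from anyone):

    # Per-unit NRE at different volumes; nre_usd is a made-up placeholder.
    nre_usd = 1_000_000
    for units in (10_000, 100_000, 1_000_000):
        print(f"{units:,} units -> ${nre_usd / units:,.2f} of NRE per unit")
    # 10,000 units -> $100.00 of NRE per unit
    # 100,000 units -> $10.00 of NRE per unit
    # 1,000,000 units -> $1.00 of NRE per unit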
Plus, of course, as many sages (e.g. pg) observe repeatedly, if it were obviously a good idea then your competition would already have done it, which means that IF it's ACTUALLY a good idea, the crowd of naysayers here (myself included) is a good sign for your project. ;)
In short, I'm really sceptical about your fundamental hypothesis, but good luck with figuring it out.
While I don't agree with everything you say(+), I'd like to thank you for taking the time to write all this up (your initial post and your comments in this thread). Definitely one of the more interesting reads on HN today.
(+) Web browsing is extremely complex: We have video, audio, js, plugins (flash, java, silverlight), changing standards for html, css etc.
Because of this, it's not very well suited to custom chips, which is why modern smartphones contain both custom chips for the "phone" functionality and a flexible microprocessor for Angry Birds.
There might be other areas where a few custom chips might save a lot of resources in the long run, though.
Maybe it's just me, but I could use some explanatory text along with these links (which I certainly find interesting, but I don't see directly how they relate to what colanderman said).
It's as if he forgets the re-use part of being green. A custom Google Chromebook ASIC will work for that one task and then cannot be repurposed into anything else. To add new features would mean a new fabrication run for the ASIC, which is expensive and IMO insane. A tape-out is rarely done for less than $500K USD.
Whereas a more general purpose design allows someone to reuse the hardware in a different way just by changing the software. Furthermore, with a more general purpose design, you gain economies of scale: the general purpose chips become cheaper as they get produced in larger quantities and used in a variety of hardware, not just Chromebooks (as an example). A custom ASIC would only get used in one design and have to be re-done for all others. I believe this to be much more wasteful and less 'green' than widespread use of general purpose SoCs or microprocessors.
Today's microprocessors use a billion gates and have to be produced on advanced 22nm processes to fit. The energy spent in producing each chip is pretty high. In the fixed function case, the gate count is a lot lower, so you can use an older and cheaper, and hence greener, process. On average, people use their PCs for 2 years before replacing them. If a fixed function chip had to be replaced say every year to update it with newer technology, it would be in the same ballpark. The change you have to make is that instead of promising your customer a gadget that is infinitely programmable through software apps, you have to ask them to decide what they want up front and stick with that decision for one year. This is a paradigm change from the user's viewpoint.
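As a sketch of the break-even condition behind that "same ballpark" claim (the energy number here is invented purely to show the structure of the comparison, it is not data):

    # Embodied energy per year of service.
    e_micro_kwh = 100.0    # placeholder fab energy for a billion-gate CPU
    micro_life_yr = 2      # PCs replaced every ~2 years (figure above)
    ff_life_yr = 1         # fixed-function chip refreshed yearly

    # The fixed-function chip wins per year of service whenever its
    # embodied energy is below this threshold.
    breakeven = e_micro_kwh * ff_life_yr / micro_life_yr
    print(f"fixed-function chip breaks even below {breakeven:.0f} kWh per chip")
    # -> fixed-function chip breaks even below 50 kWh per chip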
Economies of scale are orthogonal to how you decide to implement the function.
If I use my computer for 20 different things, then that's 20 different fixed-function chips to replace 1 CPU. It sounds like the opposite of green to me.
Besides, you can already produce CPUs on old processes. There's a CPU in the remote control for your TV, and another one in your computer keyboard, and in your microwave. These have low gate counts and are built with off-the-shelf components. Replacing them with fixed-function chips would be an enormous waste.
I'm looking at this going forward. In India, PC penetration is 2-3%. For the 97% remaining, if we can build something that satisfies their top 3 information requirements cheaply enough, we can really make a difference in people's lives.
Blah, blah, blah... "software bloat = CPU cycles = energy consumption". Another boring blog post about the same old crap.
I really was hoping to see an article about the harsh chemicals, heavy metals and exotic compounds used in chip fabrication plants and printed circuit board production. Not to mention all the batteries attached to them.
...or maybe at least how we consume all these devices like candy, compared with statistics gathered on actual recycling practices employed by end users.
But no. Just another head-in-the-clouds post on Hacker News about software and conflated theories about chips from some blogger who probably hasn't ever even bothered to look at the assembly produced by a compiler (or even a disassembler).
Yeah, I think this was a sub-par post. I understand that maybe the author had a different point (even though the author's grammar and spelling are competent, I suspect English isn't their first language), but I suspect that they don't have a very good grasp of electronics design, or electronics manufacturing. And this is coming from a software weenie.
Sure, you could make some good points about e-waste, and the bloat of software, the inefficiency of modern processors (up until recently), or even critique the whole disposable culture, but suggesting that producing locked down chips that are practically designed for planned obsolescence is the solution? No, sorry. That argument was lost when they started putting general purpose microchips in microwaves.
We are actually planning to build gadgets based on fixed function chips which are significantly greener than microprocessor-based solutions. Intel et al, at the same time, are taking microprocessor-based design to its logical next steps. We want to highlight that there's an inherent flaw in the assumption that programmable function == more utilitarian == makes more sense, just because that's the way it's been done for the last 40 years. There is a lot of bloat in today's gadgets which fixed function alternatives will completely get rid of.
Is the power consumed by the CPU in efficient web browsing hardware actually a problem? I would have thought that radios and screens took up far more power. My e-ink Kindle will browse the web nearly forever between charges. The effective power consumption of half the devices I browse the web on is substantially lower than the ambient lighting I use with them.
The Intels of the world are trying to reduce the power consumption of their microprocessor based solutions and therefore claim greenness. The point of this write up is to highlight the fact that fundamentally, the microprocessor based architecture is a more general but less green way of achieving a given function. If you can pre-determine what functions your gadget is going to need, you can always build a greener solution by going fixed function.
> If you can pre-determine what functions your gadget is going to need, you can always build a greener solution by going fixed function.
How are you defining fixed function? For example, does eval [1] count as a function? If you have a gadget which only executes one function, eval, is it still a fixed function device? Presumably it is not, since eval can perform any function. If eval is a function which is not allowed in a fixed function device, how do you determine which functions may be used in a fixed function device?
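To make that concrete, here's a toy "single function" device in Python (eval here standing in for whatever [1] pointed at; the point is that one function can subsume all functions):

    # A gadget exposing exactly one entry point -- yet that one function
    # can compute anything expressible in the host language.
    def gadget(program: str):
        return eval(program)  # the device's single, "fixed" function

    print(gadget("2 + 2"))                         # arithmetic: 4
    print(gadget("sum(x * x for x in range(5))"))  # a loop: 30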
The point of the write-up is to highlight the fact that you don't have to assume programmable function is a better alternative to fixed function under all circumstances, an assumption we have made for the last 40 years.
The wealth of specialized chips in a modern smartphone, from audio decoding to cell signal handling to graphics acceleration, would seem to belie that supposed assumption.
And by ignoring the whole cradle-to-grave logistics of the situation, you miss that it's not "always" a greener solution. As was mentioned earlier, what happens when a new capability arrives that supplants the old chips? Throw them all out? We already have problems handling the waste from general purpose hardware upgrade cycles!
As for power consumption, the GP is correct: screens and radios consume far more power than general purpose CPUs. Not to mention that a simple switch to greener power (solar, wind, hydro) neatly sidesteps the whole problem.
Yes, but they're not doing it to optimize power consumption for web browsing. Web browsing consists of a typically very short burst of work (parsing and rendering) followed by long periods of idleness while the user actually looks at stuff.
CPU power consumption is important when you're using it constantly, but most web browsing doesn't do that.
An Intel desktop is a bad strawman to compare against. The thing to beat is an iPad running Safari, which uses around 0.1 watts in the CPU & memory when browsing.
Oops, our custom chip has a huge security bug. Never fear, we'll distribute a firmware update... Oh wait, we baked our algorithm into hardware. Looks like you'd better junk the device and buy version 1.1, the "doesn't give your credit card to the Mafia" release.
I would summarize the post as 'computing is not green', or even 'human interactions are not green'. Flexibility is not green. Look at brains: they are resource hogs. But.
While I agree with your general feelings regarding this article, I would like to point out the difference between "not green" versus simply "resource intensive".
Brains are certainly resource intensive, but the benefits derived from them far outweigh the cost. You could say that as an investment, wetware pays off. A different issue is whether they are sustainable, but given that they have existed for hundreds of millions of years across multiple species, we can say they are.
Human interactions have not been around for such a long time, but it can be argued that since human populations can survive (and have survived) for centuries within the restrictions of resources available to local ecosystems, those are potentially "green" too, though not necessarily at the scale we see in today's world. I will not try to discuss the benefits of human interaction or civilization, but I believe most of us will agree that the benefit is so big that they will continue to exist for as long as human beings have any resources beyond the bare minimum for physical survival.
Computing is a tricky one, though. I do not think that in its current form it is sustainable in the long term, regardless of the benefits we may get from it. But this has less to do with per-unit cost than with the economies of scale and huge capital investment needed for semiconductor production to be cost effective. Even if operating costs keep dropping, the race to sustain Moore's Law calls for ever more expensive and environmentally unfriendly manufacturing processes.
For me, this is the biggest flaw in the article. A call for manufacturing hordes of dissimilar purpose-specific chips would break the economies of scale needed to make the production of electronics viable, quite apart from the energy consumption issue.
The qualifier here is that "computing" is an abstract capability, that in this day and age we happen to realize by the application of semiconductor technology. For the long term future, I hope that we will eventually move to a different, leaner and cleaner form of technology that will fulfill the same functions.
Microprocessors have been adding specialized components for a long time. A simple integer processor isn't very good at working with decimal numbers, so we have dedicated floating point units. Floating point units aren't very good at massively parallel tasks like image and video processing, so we have SIMD units. Encryption still wasn't fast or efficient enough, so now we have dedicated encryption instructions in our microprocessors.
Maybe one day we'll have dedicated IDCT or wavelet transform instructions in our microprocessors. However, you could argue that we already do, in the form of SoCs with on-chip GPUs and DSPs. Maybe we've already reached the point you want us to reach.
The original idea to do this came from extrapolating the trend of adding custom processing features to general purpose microprocessors in the form of SoCs. If you extrapolate out from the SoC, over time, the role of the microprocessor becomes less and less significant and that of the special function circuits more so. Khitchdee's approach was to start at the other end of the continuum.
Surely the least green thing is computers that are not flexible enough?
For example, software closely linked to hardware that is not upgradeable, and locked bootloaders that don't allow older hardware to be repurposed?