> Car Wash Y/N? (wait)
> Loyalty (wait)
> Fuel Rewards (wait)
> Alt ID (wait)
> 0005550000 (wait for the numbers to appear on screen)
I bet they run Java.
Hopefully not all of the machines are that bad. I wish that common point-of-sale interfaces focused on simplicity and ease of use (especially accessibility). Maybe one day!
It was so slow and I had no idea why. The display was a standard text screen of 40 x 80. The math is dead simple and the only other thing it did was open the cash drawer. The computer was a 486-40MHz, so better than the one I had at home that ran Ultima 7 just fine.
It makes no sense at all. As soon as you input payment it should output gas. There's really no need for buttons or a screen at all (well, maybe except for entering a zip). If they really have to show all that stuff, it can come after I have my gas.
Note: languages aren't slow, it's the programs that are slow.
Here in Belgium I don't even get asked if I want a receipt; I can just ask for it afterwards if I like (same for the rewards card). The worst I've had were at French supermarket stations where for a while they had audio ads playing while you were filling up; that was absolutely horrible and I stopped going there. Eventually those things went away.
But the interface itself always asks for the least possible things. Diesel/gasoline 95/gasoline 98 then the card PIN, and you can fill up.
In Finland, they don't even ask what you're going to pump. You just grab the right nozzle and go.
This used to work for probably better than half of them, due to the fact that they had pay-after-pumping policies for cash customers. It hardly ever works anymore, but I keep doing it and am sometimes surprised, while on a road trip out in the country, when it works. OTOH, I also tend to stop at old crotchety gas stations on principle and have been known to make a U-turn if a station has pumps that look like they are older than a couple decades. A few years ago, I stopped at one where there was a $3 taped over one of the digits. I asked the owner about it; apparently his pumps could only charge $0.01-$1.50 (or some range like that), so every time gas prices went over or under some threshold he had to physically flip some mechanical gear in the pump and get the state to come back out and recertify the pump. This of course apparently cost more than the poor guy was making in gas, so he was saying that likely the next time it happened he would simply shut down.
I wonder if it would work here... I live in Minnesota and I'm pretty limited on where I can fill up with 98 RON (called 93 Pump here in the US for some reason...). A lot of stations play video ads with sounds while you stand there.
Wouldn't surprise me if the ad-based gas stations saw a 10-20% drop in customers if there was an alternative nearby without the ads.
This strikes me as a reasonable solution, in that it removes the need to guess how much I need beforehand.
According to this blog they also collect the data for "marketing purposes".
The trick, which isn't obvious, is that you're supposed to drop the letters from your postal code, leaving three digits, and then append two zeros to the end. For example: K9Z 2P7 -> 92700.
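For the curious, the rule is simple enough to express in a couple of lines (a purely illustrative sketch; the pump obviously isn't running Python):

    # Drop the letters from a Canadian postal code, keep the three digits,
    # and append two zeros, as described above.
    def postal_to_pump_zip(postal_code: str) -> str:
        digits = [c for c in postal_code if c.isdigit()]
        return "".join(digits) + "00"

    print(postal_to_pump_zip("K9Z 2P7"))  # -> 92700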
In Canada we don't have to deal with this, since pretty much all our pumps have supported EMV chip cards for years.
So, I don't think it's based on the age of the pump. I'm guessing that there is a pretty standard architecture that they all adhere to. So yeah, I bet they run Java.
BTW: I've noticed this same thing with ATM software. I'm pretty sure they run Java as well.
"Please wait" is a deliberate feature, not a bug.
I mean, a replacement capacitive touch screen for my Samsung mobile is like 2000 Rs. (30 USD).
It's frustrating how far we have come and yet how behind we are.
I don't think this is the reason.
As an aside... are fuel pumps with embedded computers and payment terminals a US thing? Here it is necessary to pay at the cashier after filling (and that is the most annoying part, because they try to upsell you a hotdog/coffee or washer fluid and encourage you to join a loyalty program).
"No, I'm just naturally loyal. Runs in the family."
In general, for 99% of tasks, if it takes more than a few milliseconds there's a network involved.
I mean, an ESP32 module runs at about 160 MHz, and using it with a capacitive touch screen is almost as smooth as on a mobile.
Probably Electron apps.
Here's another great article on the hand-welds on the F1. In only a short amount of time we've outsourced so much of these production tasks to other software/machines. But it's really amazing to contemplate and appreciate what a work of human hands Apollo was.
It's hard for me to grasp it, but I can't help but think it beautiful when I meditate on all the _people_ involved in the moon landing, each person playing a small part in a very complex symphony.
I think at some point we have to start talking about how far from optimized we are. https://en.wikipedia.org/wiki/MenuetOS comes to mind when talking about what can be done in fasm. Are we ever going to get compilers to the point where we can squash things down to that size and efficiency? Is this yet another thing that AI can promise to solve someday?
Compilers are already that good.
The problem is economics. If you have an OS that uses up 1% of the available CPU budget, there will be pressure from management & product to spend the additional 99% on additional bells & whistles. Maybe you add fancy animations and transitions to window opening. Maybe you add full-text search and spend additional CPU cycles indexing. Maybe you add transparent cloud storage and spend the additional CPU cycles backing up files. Maybe you add new frameworks for developer productivity, all of which suck up CPU time.
And the thing is - from a business perspective, these are all the right thing to do. Because having a slick, glossy UI and a long list of useful features will sell more copies, while being able to run it on a USB-C charger or 25 year old computer will probably not sell more copies.
- Approximately which year's average desktop PC is equivalent to a current Raspberry Pi?
- What year's worldwide computing power is equivalent to a current Raspberry Pi?
I'm sure folks can come up with other comparisons to make. I know it's not very useful to the world, but it's fun to think of just how much astonishing compute power we have these days.
This guy puts the rpi 4's GPU at 8 GFLOPS:
Looks like the X1600 was around 6 GFLOPS:
>After the release of Doom II on the PC, the original three episodes were released to some 16-bit consoles that used special 32-bit enhancement hardware. For the Sega Genesis, the 32X allowed for additional address space to enable Doom to run its demanding resources of the time which a 16-bit system wouldn't have handled. Whereas with the SNES, the Super FX 2 chip inside the DOOM cartridge allowed for an internal co-processor of the game cartridge to eliminate the need for bulky addons for the aging SNES.
Would it count if you somehow built in a 32-bit coprocessor for your 15-bit Apollo Doom port?
The bigger problems are more pedestrian like complete lack of suitable IO.
HN is a community and we want it to remain one. For that, users need some identity for others to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?query=by:dang%20community%20identity...
You needn't use your real name, of course.
You can still use C on 6502, it's just not very fast.
I guess you could define another language that mostly looks like C, but built around particular CPU architecture quirks.
A compiler/linker combo for the 6502 should, for example, analyse the call graph of the entire program to figure out which return addresses it needs to store on the stack at all. Alternative 1: store the return address at a fixed address in main memory (works for single-threaded code that isn't recursive). Alternative 2: instead of RTS, jump back to the caller (works for functions that get called from only one place; aggressively inlining those is an alternative, but may turn short branches into long ones, with their own problems).
Similarly, a compiler/linker for the 6502 probably should try really hard to move function arguments to fixed addresses, and to reuse such memory between different functions (if foo isn’t (indirectly) recursive and doesn’t (indirectly) call bar and vice versa, their function arguments can share memory)
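To make that sharing rule concrete, here's a toy sketch of the call-graph analysis (the call graph is hypothetical, and a real compiler would also have to handle recursion, interrupts, and calls through function pointers):

    from itertools import combinations

    # caller -> callees (hypothetical whole-program call graph)
    calls = {
        "main": {"foo", "bar"},
        "foo":  {"baz"},
        "bar":  {"baz"},
        "baz":  set(),
    }

    def reachable(f, graph):
        seen, stack = set(), [f]
        while stack:
            for callee in graph.get(stack.pop(), ()):
                if callee not in seen:
                    seen.add(callee)
                    stack.append(callee)
        return seen

    reach = {f: reachable(f, calls) for f in calls}

    # Two functions can never be on the call stack at the same time if
    # neither (indirectly) calls the other, so their fixed argument
    # slots can overlap.
    for a, b in combinations(calls, 2):
        if a not in reach[b] and b not in reach[a]:
            print(f"{a} and {b} can share argument memory")
    # -> foo and bar can share argument memory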
And yes, self-hosting such a compiler would be even more challenging, given the 64kB memory limit.
It's running the (custom/not from TI) open-source firmware openchronos-ng-elf, which is on github. When I get around to it, there are a few features I'd like to add, but it's nice just to know that I can.
Now, if you calculate clock speed per watt, you'll see that the Google charger easily beats the Huawei and the Anker.
I think it's interesting not just if you're interested in Google, but to contemplate what the limitations of talent and intelligence are, and how they can be utilized in unexpected ways. And what synergies exist between the people at the very top of a field and those who aren't.
The CPU was throwing an "I'm overloaded" signal, but dealing with it, as the process priorities put the docking radar at a lower priority than the landing sequence.
So the CPU was complaining but still doing its job.
All this is from memory :P
I was (partially) wrong about the un-synced frequency though. The synchros for the docking radar ran from a different power circuit than the AGC, and whilst they were frequency-locked they weren't in phase, resulting in timings getting thrown off and the priority issues.
Edit: Superb deep dive on AGC in Ars from last week
Side note: that is partly what the 90's term “Network Operating System” is about. Both Cisco and Novell managed to market the hell out of the fact that both IOS and Netware are not real multitasking OSes but essentially giant event driven state machines (nice related buzzword: “run-to-completion scheduler”), which they managed to market as being a good thing.
We've actually lost the ability to go. No one understands how the engines on the Saturn V work anymore. It would take years to tear them apart and analyze them, and most of the original engineers are dead.
And the guidance systems would have to be built back up from scratch.
Kind of sad if you think about it.
There are parts of the F-1 engine that would be difficult to exactly replicate because, while the engineering specs are known, the exact fabrication processes weren't all documented. An exact workalike could be made for those parts that fits the specification without a problem. That new engine would need to be entirely requalified for use.
The F-1 is big and expensive and rebuilding them has long been outside of NASA's budget. If they were going to rebuild the F-1 it would be the same expense and effort to just design an entirely new engine.
This is why, for instance, the SLS is using the SSME as its first-stage engines. It's a known system with a long and successful flight history (for the engine itself). It's been iterated upon and is well understood.
The design philosophy of rockets has also changed from the 60s when the Saturn was designed. Solid fuel boosters were not considered useful for Saturn but have been proven since then and have a long flight history. There's also the SpaceX/Soyuz model of an array of smaller/simpler engines. A small number of monster engines is just not needed like it was for Saturn.
The expense of the F-1 and the lack of real need has kept NASA from using them, not some loss of technology. The guidance systems...are a long long solved technology. Modern guidance systems are better than the Apollo and Saturn systems by orders of magnitude. They're far smaller which allows redundancy and still lower weight than old systems.
Since Saturn NASA has launched successful probes to every planet in the solar system and several dwarf-planets, landed several extremely long lived rovers on Mars, launched hundreds of satellites, over a hundred Shuttle missions, and assembled one of the most complex machines ever in orbit.
What's sad is the mistaken belief that Apollo was some pinnacle of capability and technological prowess for NASA.
"The largest solid rocket motors ever built were Aerojet's three 260 inch monolithic solid motors cast in Florida. Motors 260 SL-1 and SL-2 were 261 inches in diameter, 80 ft 8in long, weighed 1,858,300 pounds and had a maximum thrust of 3.5M pounds. Burn duration was two minutes. The nozzle throat was large enough to walk through standing up. The motor was capable of serving as a 1-to-1 replacement for the 8-engine Saturn 1 liquid-propellant first stage but was never used as such. Motor 260 SL-3 was of similar length and weight but had a maximum 5.4M pounds thrust and a shorter duration."
"Between Sept. 25, 1965 and June 17, 1967, three static test firings [of the AJ-260 rocket] were done. SL-1 was fired at night, and the flame was clearly visible from Miami 50 km away, producing over 3 million pounds of thrust. SL-2 was fired with similar success and relatively uneventful. SL-3, the third and what would be the final test rocket, used a partially submerged nozzle and produced 2,670,000 kgf thrust, making it the largest solid-fuel rocket ever."
For more see:
That's not really true. Although the project is dead AFAIK, a fair bit of work went into reviving the F-1 engine in the early 2010s.
>And the guidance systems would have to be built back up from scratch.
As with many things, you can't easily replicate complex old artifacts because you don't even have the tools to make the tools to make the tools... No, we couldn't build an Apollo Guidance Computer today. I doubt you could easily get things like rope core memory or just about any of the components. But you wouldn't want to anyway. (The guidance computer would probably be the least of your challenges; this is a pretty well-understood technology and the company that made it is even still around.)
"Any of the components" is an exaggeration. For example, bipolar transistors are still here, here's the schematics  of the switched-mode power supply in the AGC - still perfectly understandable, rebuilding it can a fun weekend project. Also, discrete NOR gates are still around, although the underlying technology is different.
> As with many things, you can't easily replicate complex old artifacts because you don't even have the tools to make the tools to make the tools... No, we couldn't build an Apollo Guidance Computer today.
I think there's a difference between a 1:1 faithful replication and a replication based on the same architecture and/or principle of operation. While it's difficult to build a 1:1 faithful replication, rebuilding a new one based on the same architecture in general is much easier.
So when the original commenter said "no one understands how the engines on the Saturn V work anymore", I think the point is whether the ideas from the Space Age, e.g. the Saturn V architecture, have survived; it's not about whether rebuilding it makes sense (it probably does not).
> The guidance computer would probably be the least of your challenges
>whether the Saturn V architecture has survived
We understand very well the basic approach that was taken to land on the moon and the hardware we used to do so. We'd have to re-engineer a lot of things that aren't just off-the-shelf to do it again. But we could do so pretty quickly if there were any compelling reason to do so.
As I understand it, current technologies that could enable a moon landing are being worked on in the broader context of going to Mars.
Yes. I think this is the perspective that people tend to overlook - every engineering project is unique, in the sense that it's optimized to do the job under very specific constraints - costs, production capabilities, available materials, etc. When the circumstances change, sometimes you simply can't pull the same blueprint out of the archive to make another one - redesigning it is sometimes unavoidable, even if there's nothing wrong with the older design from a technical point of view. I think the same applies to software development.
The scientific case for the moon was always weak, and most anything they could come up with has been done. Just look at the useless lets-grow-salad-in-space-oh-look-its-just-salad timesinks the ISS has been busying itself with.
So, in a way, we've become much smarter. And SpaceX, a commercial outfit with solid financials, a good track record on safety, and rather unlimited ambition, is probably the most exciting thing happening since the first moon landing.
This is called basic research. It's geared towards building greater knowledge of a study area without specific concerns towards application. That's how science works. We don't just fund the stuff that's immediately profitable (though things are shifting that way).
More than doubling the number of visible man-made objects (~200 until the start of the SpaceX Starlink project).
And with FCC approval for another 12,000 and plans for a further 30,000.
It excites me alright, but certainly not in a good way...
Ditto guidance systems. Why try to replicate something designed around the constraints of hardware that seems ancient by today's standards? No reason to think we couldn't build something new with equivalent or better features on modern hardware.
In fact, if you take various "high tech" designs from the middle of the 20th century, you will see a pattern of things that cannot work if built as designed, but work because of random effects that are certainly outside of what the original designers thought of. (I have seen a large amount of instrumentation electronics where voltage sag on the supply voltage is part of the feedback loop and it will not work without it. I highly doubt that the original engineers consciously designed that.)
The budget plan for the James Webb Space Telescope was $500m with completion in 2007. We're now at $9.6b with completion predicted in 2021. Some of the cost overruns and delays were caused by the unexpected complexity of fabricating the mirrors and assembling them in space, but that underscores my point. The Apollo mission was orders of magnitude more ambitious at the time.
I'm not pointing any fingers because the causes are complex. NASA is underfunded and relies on unreliable funding from Congress, which itself is a source of inefficiency. Defense/aerospace contractors are greedy, sure, but there's a much bigger pie to be had if past successes encouraged even larger-scale projects like moon colonization they could bid for.
I don't know what's gone wrong, but we are failing on these big projects as a species for some reason.
Other than all the advances that will give you much better performance at a lower cost enumerated in other comments, there is another problem.
When you dive a bit deeper into the history and design of the Saturn V (the NASA history website is a good source) you kinda get the feeling it was hacked together in many places to meet a deadline. Many of its systems were very complicated, labor-intensive, or hard to upgrade or extend, just so that they could meet the deadline and beat the Soviet Union to the lunar surface. The unlimited money pass they got for this for a while definitely did not help matters.
This, in my opinion, is also why Saturn V ended production so quickly - it did meet the deadline, but was just too expensive and inflexible, due to all the hacks, for regular budget-conscious "production" use.
On the same tech we did 50 years ago.
There are things to do on the Moon, right now. Basic science ought to be enough; but applied things like material extraction and tourism would add to that. However, we consider those things lesser priority than some others - and we're getting a lot of short-term bang for our bucks, just not so much of foundation for future developments.
So, today we don't see the reason to apply efforts to get there. Those few who could justify going "in small" are too few and far between to mount a concerted effort. So we're slowly developing such things as cheap LEO launchers and lunar landers - the precursors to a return to the Moon. We'll get back there - in the future, not in the present. In the present we still can't.
Interestingly, China germinated plants on the moon this year, but that news seems to have been buried in the UK. So getting a payload to the moon, guidance systems and such, isn't the problem.
Every starry eyed AI/ML piece has me thinking, yes but where's the power going to come from? And where's the power on top to encrypt all this?
And quantum computing? Super-cooling at scale? Aye right pal.
The elegance of the AGC and its approach to computing could provide a good starting point for a solution out of our mess.
To quote from a 2004 paper by Don Eyles, one of the AGC programmers:
"When Hal Laning designed the Executive and Waitlist system in the mid 1960's, he made it up from whole cloth with no examples to guide him. The design is still valid today. The allocation of functions among a sensible number of asynchronous processes, under control of a rate- and priority-driven preemptive executive, still represents the state of the art in real-time GN&C computers for spacecraft."
The trend is to use microprocessors for increasingly trivial things. You use that power to implement a UI, or a light bulb, or a charger. That is far less important than driving a spaceship. But isn't that true of most things we develop?
I agree that the trend towards browser based apps is not always efficient. But the browser does a surprising amount in the background to even out hardware differences. It helps you take advantage of the hardware (like a powerful GPU) without having to worry about specifics. That was never exactly easy in developing a conventional native app on a traditional OS.
I'll never cease to be amazed by technology. I was born in 1985 and our first computer in the house (aside from an Atari 2600 and NES) was a machine in 1995, although my computer usage began in 1990 with Apple II variants. My first portable (Game Boy aside) ran Windows CE around 1998ish and my first remotely 'smart' phone was a Treo 650 about 14 years ago. I still get plenty of value out of 8-bit Atari computers, yet the USB charger sitting on my desk I'll discard without a thought if it breaks, and it has a clock speed 5.5x that of my beloved Atari 800xl computers - but the 800xl did come with 64k of memory from the factory, so at least there is that, ha.
Just looking at how far the technology has come from 1979 (800xl release date) to 2002 is mind boggling. Had you described an Android or iOS phone to me in 1995 I'd have asked "are you writing a science fiction novel?" and now I have 5 of them I use on a daily basis (multiple accounts on a freemium game) and 2 that I carry on my person. It's crazy and wonderful and terrifying and even a little unbelievable.
Thankfully device-side microchips that interface with USB chargers come with overvoltage protection. For example: http://www.ti.com/lit/ds/symlink/tps65983b.pdf
Devices which charge from wall chargers have to undergo testing for power glitches to obtain FCC certification. I forget the name of the test. Somebody else will know what I'm talking about!
They are usually sporadic and no surge protection test will provide a guarantee against them.
Basically, they seem to do a pretty good job isolating high- and low-voltage circuits. As far as I understand it, it is air-gapped, and physically impossible to induce dangerously high voltage on the device side without some mechanical failure that bridges the gap.
On the other hand there are documented cases of USB-C PD between two device-ish things (ie. things that can be both power source and sink) failing in variously spectacular ways.
Maybe that's an example of what you mean?
AI in EVERYTHING. Because it'll be so cheap in 20 years.
Like maybe those useless automatic taps or hand dryers will finally actually know if hands are under them properly because they'll have the equivalent of a modern day supercomputer of AI power trying to figure it out.
1950: $1'048'576 (1 μs)
1960: $32'768 (100 ns)
1970: $1024 (100 ns)
1980: $32 (20 ns)
1990: $1 (3 ns)
2000: $0.033 (0.5 ns)
2010: $0.001 (0.2 ns)
2020: $0.0005 (0.1 ns)
This is pretty inaccurate, but close enough to explain my point, which is that most of the humans have lived their entire lives and their parents' entire lives in this Moore's-Law regime, and it's coming to an end. But their cultural expectations, formed by three human generations of Moore's Law, have not yet adjusted; that will take another generation.
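Putting rough numbers on that slowdown, using the figures from the list above (back-of-envelope only):

    import math

    # Implied cost-halving time between two of the decades listed above.
    def halving_time(cost_start, cost_end, years):
        return years / math.log2(cost_start / cost_end)

    print(f"1950-2010: halved every ~{halving_time(1_048_576, 0.001, 60):.1f} years")
    print(f"2010-2020: halved every ~{halving_time(0.001, 0.0005, 10):.1f} years")
    # -> roughly every 2 years historically, but only once in the last decade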
But who knows? Maybe some other similar exponential economic trend will come along and make AI cheap. But it isn't going to happen just by virtue of Moore's Law the way it would have in the 1980s or 1990s.
Big implications that I'm not sure the tech industry fully appreciates.
I think it's a Bill Gates quote (badly paraphrased, probably): "People overestimate what can be achieved in a year and underestimate what can be done in a decade."
So 2 decades ought to be something well outside our current expectations.
I hope it will be at least!
Some of that sort of happened, though mostly later (the electric guitar dates from 1932). But think about what you'd be missing: atomic energy (the Cockcroft–Walton generator wouldn't split the lithium atom until 1932(?)). Manchukuo and the rest of World War II. Alcoholics Anonymous. Mechanical refrigerators. Widespread washing machines and consequently women's liberation. The Little Prince. The jake walk. The Long March and Communist China. Looney Tunes. Airlines. Mass nonviolent civil disobedience. Independent India. The end of the British Empire. Comic books (outside Japan). Neoprene and nylon. The end of dirigibles. Vaginoplasty. The Hoover Dam and the Tennessee Valley Authority. Bugs Bunny. Computers and the theory of computable functions, and the death of Hilbert's decidability program. Universal censorship of mass-market films in the US under the Hays Code. Liquid-fueled rockets and ballistic and cruise missiles. The discovery of the Lascaux cave paintings. The discovery that the universe was less than 20 billion years old. The Moonies. The Great Depression. Perón. Merchandising agreements for children's characters such as Pooh. Radar. Antibiotics. Nineteen Eighty-Four. The end of the gold standard after 2500 years. Pluto. Maybe even talkies — you might have predicted in 1925 that talkies would become a mass phenomenon, but it might have been unimaginable that they would entirely displace silent film.
Maybe you would have predicted one or two of these. But nobody imagined very many of them.
So, what are the major developments you're missing over the next 20 years, which will change the world into which AI-powered hand dryers could potentially be born?
Therein lies the rub. Even today, computers rated for space are relatively underpowered compared to consumer hardware. Comparing the microcontrollers used in USB-C wall warts to the AGC is like saying your car has a higher top speed than a tank. You wouldn't be wrong, but you'd also be purposely ignoring some key differences in design goals.
Is it possible to update their firmware? I.e., is there an equivalent of an "update firmware" button anywhere? If so, Richard Stallman would not approve of using these non-free chargers. We should not even mention them, lest anyone would think it's acceptable to use them for anything.
He makes a distinction on the update-firmware level — if the microwave has no such functionality, then he does not consider it a computer. Look for "microwave" at http://stallman.org/stallman-computing.html:
> As for microwave ovens and other appliances, if updating software is not a normal part of use of the device, then it is not a computer. In that case, I think the user need not take cognizance of whether the device contains a processor and software, or is built some other way. However, if it has an "update firmware" button, that means installing different software is a normal part of use, so it is a computer.
You see, the problem with app developers, especially in big companies, is that they are given top-notch smartphones, so they naturally target such hardware as the benchmark for performance, which moves every year. That's why a 2016 Moto Z Play smartphone ran smoothly in 2016 but works really slowly in 2020 - because all the regular apps (Gmail, Chrome, FB, etc.) were updated with newer flagship smartphones in mind. "The new build works really fast on my new Pixel 4", the Gmail developer thinks, "ship it to production!"
On the CPU side, you can figure out how to position a line of text in less than 100 instructions if you store it right. But even if you use 10 million instructions to lay out a handful of lines, you'll never have to hitch.
On the GPU side, the pre-retina A5 had a fill rate of 2 billion pixels per second, and a brand-new high-DPI phone has 100-200 million pixels per second to draw at 60fps. Let alone comparing like to like. A high-end Qualcomm chip from 2013-2014 would match that, and a low-end phone would have half the power and half the pixels.
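Rough arithmetic behind those numbers (back-of-envelope; the panel resolution is an assumption, the A5 fill rate is the figure quoted above):

    # How many pixels per second a modern high-DPI phone actually needs
    # at 60 fps, versus the fill rate of the old pre-retina A5.
    width, height, fps = 1080, 2340, 60        # assumed FHD+ panel
    pixels_per_second = width * height * fps   # ~152 million pixels/s

    a5_fill_rate = 2_000_000_000               # ~2 Gpixel/s, per the comment
    print(f"{pixels_per_second / 1e6:.0f} Mpixel/s needed at {fps} fps")
    print(f"headroom vs. the A5: ~{a5_fill_rate / pixels_per_second:.0f}x")
    # -> about 152 Mpixel/s, leaving roughly 13x headroom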
We are in a wonderful age of overpowered GPUs for 2D work. There are no excuses for dropped frames.
What I'm arguing is that the problems have little to do with one another, and I'm guessing so do the means. That's like saying building a viaduct is easy because the pyramids exist.
Because it's the CPU stuff that's the problem here, and that's not fundamentally different. The display differences are a very separate thing and not the cause of the problem.
The smaller your transistors, the more susceptible they are to impact from a particle.
NASA's also working on a hardened A53 at ~800MHz, and you can get hardened POWER chips at similar frequencies.
I wonder about the hardware selected for, say, a charger: is it maybe just that the amount of power (CPU, memory) is a factor of that being the most cost-effective choice, and that it's far more than a USB charger actually needs?
I'm updating my laptop; it'd probably be fully sufficient to have just 16GB of DDR4 (I have just 4GB right now, and it ain't that bad), but I can get a 32GB stick for only about 110 USD, so I might as well go for 32GB to max out the slot and not have to worry about it later on. If the price for 32GB sticks was more like $500, I'd probably not bother (although it'd arguably still be a good investment in our profession; it's just a little harder to justify when you know the price will come down relatively soon and you don't quite need that much RAM in the immediate future anyway).
Most laptops have two memory channels, so if both are not populated with same-size memory modules, at least part of the memory range will provide only 50% of the bandwidth.
For memory bandwidth sensitive tasks it can halve performance, although I guess for most workloads the effect is way less, perhaps just 15-20% slowdown.