I use the Inkplate 6Color for some similar projects. Built-in ESP32 controller, so no Pi required. Can be fully battery-powered, so if you make intelligent use of the deep-sleep/wake schedule, you can stretch that battery charge for a couple of weeks. My favorite one is a red "On Air" sign that hangs on my office door (used for meetings and recordings). I just trigger a shortcut on my Mac and it automagically updates my status and mutes my devices. They have example open source projects for weather displays, news, Google Calendars...
I have a similar project too! I have an e-ink display connected to a Pi Zero that shows just a "Yes" or "No" telling me if it's going to rain outside in the next three hours. (On second thought, maybe I don't need a 7 color display...)
What was the name of the eink computer that was on here last week? I can't find it again for the life of me. I desperately want a flat white display for work. Reading technical grant proposals on a regular screen is killing my eyes.
This is a case where ideas are "all around" - it's the kind of example that's wonderful for a new entrepreneur - to see your idea isn't unique (very few are). Execution matters.
Y'all are like the Uber and Lyft of eink weather - but I've tagged others like this from HN.
Edit: as soon as I find the time I'm building one too - all are inspirational for ESP32 dev + key for FOSS home automation
Cool hardware project and idea, but I'm not touching an OpenAI API with a ten-foot pole, especially for an always-on device
Maybe when I need a fun project I'll make a version that uses OpenStreetMaps + LAN-hosted Stable Diffusion tho, so worth a bookmark to remember to do that
Tricky cost center whose price is set by a corp that's already betrayed me multiple times. Also I have a hardline policy that I won't interact with Microsoft in any capacity for less than $80/hr, and that's the friends and family discount
Ah, you're afraid of sudden price changes? That makes sense, though (at least for my frame) I cache the responses, so after a few days I don't make any more API calls, everything comes from the cache.
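To make the caching idea concrete, here's a minimal sketch of that pattern (the `weather_cache.json` filename and `fetch_fn` callback are made up for illustration, not the project's actual code): after the first call for a given key, everything comes from the local file and no further API calls are made.

```python
import json
from pathlib import Path

CACHE_FILE = Path("weather_cache.json")

def cached_fetch(key, fetch_fn):
    """Return the cached response for `key`, calling `fetch_fn` only on a miss."""
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    if key not in cache:
        cache[key] = fetch_fn(key)  # the only place an API call ever happens
        CACHE_FILE.write_text(json.dumps(cache))
    return cache[key]
```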
Limiting contamination is a good first step, but I'm of the mind that anything that doesn't need a cloud service shouldn't use one. To each their own though
Thank you for sharing – I love that the second one also has a microscope closeup of the display while it's changing. Happy hacking, hope to see your projects on here soon!
I've been to Barcelona a couple of times and it's hard to imagine a day where you'd hit both of those temperatures?
Is the weather data in any way reliable? Or is it all just hallucinated stuff and I should just enjoy the animations? It could also be that I know Barcelona weather less than I thought and it's possible to hit 26 high and 2 low?
Yeah, this is Dall-E being incapable of cleanly writing text. The 2 C is just a “hallucination” of the 26 C. This is a hobby project, and as such I prioritized the text looking unique every time (and usually more artistic than in that image) rather than putting traditionally rendered text on there. But image generation models are not good enough for this use case if you were to make a serious product out of it.
The recommendation from Waveshare is "refresh at least once every 24 hours"[1]. It's probably not such a big deal in reality, but maybe there's some risk of the pixels getting stuck if they are active for too long?
Nope, it's a completely physical process and state, so you can disconnect it and the image will remain the same. So you could use it as a reusable post card if you wanted, but for $80 maybe it's not such a good idea...
I don't really know why they recommend to clear the screen after leaving the same image in for 24 hours, but I think it's mostly a precaution and most likely not a real issue. But for my own sanity, I do clear it at 2am and then it's already drawn a new image at 8am, before I'm out of bed.
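That schedule is just two cron jobs; the script paths here are hypothetical, not the project's actual file names:

```crontab
# clear the panel at 02:00, draw the fresh image at 08:00 (script names are made up)
0 2 * * * /usr/bin/python3 /home/pi/weather/clear_screen.py
0 8 * * * /usr/bin/python3 /home/pi/weather/draw_weather.py
```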
I had some “burn-in” on a 4 in waveshare display that lasted for about 10 refreshes. I bought it used so who knows how long it was stuck on that one screen
Is there a difference in how these displays work compared to a monochrome one that requires the caveat of “don't leave the same image on the display for long”?
If memory serves, standard monochrome e-ink displays don’t suffer from burn-in.
I'm just parroting Waveshare's own recommendations[1], but yeah probably it's not such a big deal. I'll link their docs in the README to help people make a more informed decision.
Does anyone know why projects like this always seem to specify using a particular type of tiny, low-power computer (usually a Raspberry Pi or something similar) to drive the display?
I already have plenty of non-tiny computers that run Debian GNU/Linux. Suppose I wanted to run an e-paper display from one of those computers, using this code, just via a normal USB cable. I could do that, right? There's no reason I would have to use a Raspberry Pi or something similar?
Small computers like the RPi make it easy to access low-level peripherals such as SPI, which this small screen uses, and others like GPIOs. If your big computer has such peripherals available to the OS, you can use them too. Before small computers, you could use the parallel port (and some small program) to talk to your own peripherals via the same low-level signalling.
The other extreme would be nice. Something very low powered that can spend 99% of its time in standby. Then you could run the whole thing on a battery for months. For a weather display, waking up for a few milliseconds per minute should be enough.
The 7" E-Ink display is US$86, which is not too bad.
There's no reason at all. RPis come with lots of bootstrap documentation and code so it's comfortable for someone that's played with Linux to get one running, install some packages, and make it do something.
You could do this with a tiny microcontroller if you had the time and knowledge to do it. There's nothing magical about the displays other than strange supply voltages at times.
The more common problem is that they don't listen to USB. They take SPI or parallel digital interfaces to set the pixels. So you need some kind of intermediate interface and software to draw the display. Which is why people just slap an RPi into the mix and talk to that over more common protocols.
Thank you. My idea was more the opposite: do it with a normal laptop or desktop computer driving the display, rather than a tiny microcontroller. I guess I'm assuming that either the display's USB input supplies enough voltage to run the display, or that the display has a separate power supply -- i.e., that there's nothing magical about a Raspberry Pi that makes it supply special bits or special voltages to these displays that can't be supplied by, say, my desktop computer.
AHHHH, that's the key thing I didn't know (I have a Raspberry Pi sitting in a drawer and have played with it embarrassingly little -- I didn't realize how important having the SPI or other special interface is in this context). Thank you again.
With that said, though, there are also tons of inexpensive ways to output SPI or various other serial protocols from just about any device with a USB port, like your full-sized computer: https://www.adafruit.com/product/2264
The RPis and friends just optimize the workflow - there's nothing particularly magical about the way they implement SPI or GPIO, they just have it out of the box because it's such a common way to extend hobby computer boards.
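To make the "software to draw the display" part less abstract: regardless of which board drives the bus, the host code mostly boils down to packing pixels into whatever byte format the panel controller expects. Here's an illustrative sketch (not any particular controller's real format - check the datasheet) that packs two 4-bit palette indices per byte, high nibble first, which is a common layout for 7-color panels:

```python
def pack_pixels(indices):
    """Pack a flat list of 4-bit palette indices (0-6 on a 7-color panel)
    into bytes for an SPI transfer, two pixels per byte, high nibble first."""
    if len(indices) % 2:
        indices = indices + [0]  # pad odd-length data with palette entry 0
    return bytes((a << 4) | b for a, b in zip(indices[::2], indices[1::2]))
```

The resulting `bytes` would then be shipped to the panel over SPI (via `spidev` on a Pi, or a USB-to-SPI bridge on a desktop).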
My first thought for a project like this (grab photos/data from the internet, display them on a device) would be a Pi Zero 2 W or a Pi Pico W, for the reasons you stated.
I'm not particularly up to date with the tiny microcontroller ecosystem - if I wanted to execute this at lower cost and/or lower power, what would be some better options to consider?
Neat, thanks! What a great project, and like any excellent project it helps get somebody on a path to customization. The hardware recommendation and even just epaper.py[1] is nice to see as a reference. Thanks for sharing!
It uses AI for some things you can't really use free APIs for:
- Turn a colloquial name for a neighborhood into the exact latitude and longitude (I'm sure there's some API that can kind of do this but I don't know a free one that is as accurate as GPT)
- Adding to the previous point, if you say "Hogwarts" or "Tatooine" it will dutifully give you the weather of those too thanks to AI [1]
- Most importantly, write a Dall-E compatible prompt to generate an image with the actual weather conditions (and sun/moon etc)
From your first point, I'd expect you to be using a service that provides the weather for a given latitude and longitude.
From your second point, I'd have expected that to fail for Hogwarts or Tatooine.
Taken together, it sounds like you're blindly trusting the coordinates generated by AI, and will happily generate correct-looking results for any input. I'd call that hallucinating the weather. If I type in "Springfield" I'm gonna get results but I'm not gonna know if they're mine.
This may be an unreasonable level of concern for a fun app.
Yeah the second part I consider a bonus. Especially because I can use street names or slang for locations that no rigid API supports.
> Taken together, it sounds like you're blindly trusting the coordinates generated by AI
Actually I guess I do "blindly" trust the AI's coordinates more than a traditional location search service. Not sure how much you've tested it yourself, but I haven't gotten a single false latitude/longitude pair from GPT-4 unless the location was truly ambiguous, in which case it would be one of them. But as a European, I've definitely experienced rigid APIs happily giving me the weather for "Stockholm, NY" which apparently does exist, but wouldn't have been my first option...
But yeah, it is just a fun app, so I wouldn't want to make it more rigid either way!
I think that people underestimate how much non-AI technology fails in these cases. And the recovery path is to retry with more specificity in either case.
Of these, only the first one actually has any real value. And as you've mentioned there are likely less "expensive" ways to do so. By expensive, I mean this is essentially using a sledgehammer to crack a nut (especially the Dall-E call).
I don't understand how I would get a unique but accurate prompt that makes Dall-E produce an image that besides the weather/lighting conditions also has a randomly picked iconic scene for the location, people in the picture that have the correct clothing and doing reasonable activities, etc.
I'm trying to understand the criticism here, so please give the GPT a shot with a few different locations (you can also do neighborhoods or fictional locations) and describe how this is a nut that can be cracked with non-LLM solutions.
If your goal was wanting to use AI image generation to create such an image, this makes sense.
I think people are just questioning the assumption that such a feature is a desirable or necessary feature of a weather display.
I definitely was a bit surprised to open the repo to a weather display and see the need for an OpenAI key without an explanation of which features were AI-powered.
But of course this is a personal project, so "Generate a representative AI image using the location and weather information" is a perfectly valid, and cool, challenge to set yourself and achieve.
My favorite is the weather station consisting of a rock sitting in a dish suspended by a chain. If rock is wet, it is raining. If rock is moving it is windy. If you can't see rock, it is night. If rock casts a shadow, it is sunny
Thanks! I should have known it would have a wiki entry. Although I think some of the rules are a bit meh: "if rock is warm" defeats the purpose, since you're not meant to need to touch it - it's meant to be a quick-glance kind of tool. Also, as it mentions, the rock is a finely tuned instrument, so thank you for not touching.
I appreciate that it's just a nice picture that happens to update with the weather, but there's no reason to only use this for your current physical location. It accepts any place in the world (and also fictional places), so you could see a picture of what it looks like where your distant family member or friend is, among many other things.
"Stocckholm". Also Santa Monica has at least one giant man, and on the beach there's a centaur, but I guess these little glitches add to the entertainment value.
I suppose I don't understand how colors work on e-paper at all. Can these 7 colors not be dithered? Or even mixed more properly, for something like true color functionality?
I use Floyd-Steinberg dithering, and if you look at the "More images" section you can see it does a reasonable job at representing colors. Also worth noting that 2 of those 7 colors are black & white, so questionable naming perhaps.
I'm pretty sure real production use cases of these screens don't tend to use dithering though, focusing on contrast and readability.
The naming is a hard problem - black/white/orange and black/white/red screens are commonly called 3-color screens, to distinguish them from normal black/white screens. Calling them 1-color screens would be an awful choice of name, but "7-color screens" is a name consistent with the "3-color screens" name and calling them 5-color would be horribly confusing (especially since current 5-color would become 3-color).
Also, black and white are their own specific primary colors, they're not made out of a combination of other colors, so it's rather appropriate to call them colors in that context.
If using the Raspberry Pi 5 I wouldn't go for a battery-powered solution because it's pretty power hungry. But you can run this screen just fine off of a Raspberry Pi Zero, or ideally a microcontroller if you want basically no power draw while it's sleeping. I was just lazy.
The similar project I'm working on is looking like it'll get about a year of battery life with a typical 18650, using the 5.65" version of the display and running off an ESP (refreshed once a day).
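The back-of-the-envelope math for that kind of duty cycle is simple; the current figures below are made up for illustration (real numbers depend on the board, regulator, and panel), not measurements from my build:

```python
def battery_life_days(capacity_mah, sleep_ma, active_ma, active_s_per_day):
    """Rough battery-life estimate for a deep-sleeping ESP + e-paper setup.

    Ignores self-discharge, regulator losses and usable-capacity limits,
    so treat the result as an optimistic upper bound.
    """
    active_h = active_s_per_day / 3600
    sleep_h = 24 - active_h
    mah_per_day = sleep_ma * sleep_h + active_ma * active_h
    return capacity_mah / mah_per_day

# Illustrative numbers: 3000 mAh 18650, ~0.1 mA asleep,
# ~100 mA awake, one 60-second refresh per day -> about 740 days.
```

Real-world results land well below the idealized figure once self-discharge and cold-weather capacity loss are factored in, which is consistent with "about a year" in practice.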
I’m sorry I was not very clear with my question. The review is negative- saying the ‘white’ is more like a dark purple color. I’m not asking how well it shows purple- but the general quality of the display itself.
My bad, I understand now! I think it's at least as good as a Kindle. I understand why someone who is comparing to bleached paper or OLED/modern LCD screens would say it looks purple, but yeah, I have no complaints.
Edited to add: In the picture you can also see the white material at the bottom that is not part of the screen, I would say that is "true white". And the picture also has white pixels on the screen. That's the "dark purple" the review refers to in that case.
The black does have a purpleish tinge on my panels, but the other colours don't. Overall it's pretty low contrast and saturation, and unable to reproduce cyan or fuchsia through any kind of blending, which makes certain things like sky and water come through poorly.
Since GPTs only have the one prompt I had to specialize it for a multi-step process with tool use (since it has to call an endpoint and draw an image in one go). Here's the prompt that I use in the GPT:
> Get the weather data for {location}. If the location is not on Earth (fictional
or otherwise), pick the place on Earth that is most visually similar to the
location. Before you get the weather, reason about which temperature unit is
most likely to be used at the location in question. That is the temperature unit
you should use when getting the weather.
> Here is the weather for the location "{location}":
> {weather["status"]}
> Use Dall-E to generate a beautiful illustration of the location in the style of
a wide post card filling up the entire image. Try to describe a scene that is
iconic and aesthetically pleasing for the location specified above, but also one
that can showcase the weather. (…)
Totally off topic because this is v cool, but what’s the use case for things showing the current weather? Forecasts I get, but current? Lots of phones and even Windows 11 prominently display this information but who is it for? People who work in windowless offices or something?
I mean it’s a bit like a restaurant menu app that only shows you pictures of what you just ordered, right?
Many taller buildings don't have a way for you to step outside in an outfit to see if it works for the weather. Or to step outside for a few minutes to try to manually gauge the temperature with our squishy body thermometers.
If you know the temperature, you don't need to do this exercise at all, you simply know based on experience, and it's more accurate, too. A few degrees difference can mean different clothes, different materials, different layering, etc.
I mean, cool and all, but as an electrical engineer literally shaving microamps off my design I can't help but wonder what the energy use behind the AI-generated backdrops is. After all, we have kind of a situation as a planet.
https://soldered.com/product/inkplate-6color-color-e-paper-b...
Cheap e-ink projects are super fun and useful.