A few months ago my team moved to Azure for capacity reasons. We were constantly dealing with 429 errors and couldn't get in touch with OpenAI, while Azure offered more instances.
Eventually we got more capacity from OpenAI, so now we load balance across both. The only difference is that the GPT-3.5 Turbo model on Azure is outdated.
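The fallback part of that setup is simple: try one deployment, and on a 429 move to the other. A minimal sketch of the idea (everything here is a hypothetical stand-in, not the actual OpenAI or Azure SDKs):

```python
import random

class RateLimitError(Exception):
    """Stand-in for the SDK exception you'd see on an HTTP 429."""

def call_with_fallback(backends, prompt):
    """Try the deployments in a random order, falling back on a 429."""
    last_error = None
    for backend in random.sample(backends, len(backends)):
        try:
            return backend(prompt)
        except RateLimitError as err:
            last_error = err
    raise last_error  # every deployment was rate limited

# Two fake deployments standing in for the OpenAI and Azure endpoints:
def primary(prompt):
    raise RateLimitError("429 Too Many Requests")

def secondary(prompt):
    return f"echo: {prompt}"

result = call_with_fallback([primary, secondary], "hello")  # "echo: hello"
```

Randomizing the order spreads steady-state traffic over both deployments instead of only using the second one as overflow.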
As usual it really depends on what you individually are referring to when you use the words "self-awareness" or "free will". Even "illusion" in this context could refer to something other than what you might expect. It really muddies the waters when seeking clarity on these questions, as it often seems like people don't even agree on the referent in the first place.
Based on your description this view sounds like it would be in the wheelhouse of the likes of Dan Dennett and Keith Frankish, the latter of whom argues for a view called Illusionism [0]. Just note that in this case "illusion" doesn't mean "not real", but more like "isn't what you think it is".
Tailwind is a system for design. It constrains what options the designer has to work with. This means both fewer chances to get things wrong and a smaller domain to wrap your head around.
Tailwind is configured out of the box by some excellent designers. The color palette, fonts, spacing, etc. are all crafted to work together harmoniously. You have to go out of your way to configure it poorly.
The recommended way to handle something like a card component is with template partials or JS components rather than CSS classes. [1]
That said, you can still use BEM with Tailwind CSS through the @apply directive. Adam calls the directive a kind of "trick" to get more devs on board with utility-first, but it does help a lot if you're used to reasoning about CSS with BEM.
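To illustrate the template-partial approach, here's a toy sketch in Python (the helper function is mine and hypothetical; the class names are real Tailwind utilities). The point is that reuse happens at the markup level, so the utility classes only need to be written once:

```python
def card(title: str, body: str) -> str:
    """A 'card' partial: the Tailwind utility classes live in one place,
    and reuse happens by calling the partial, not via a .card CSS class."""
    return (
        f'<div class="rounded-lg shadow p-4 bg-white">'
        f'<h2 class="text-lg font-bold">{title}</h2>'
        f'<p class="text-gray-600">{body}</p>'
        f"</div>"
    )

html = card("Hello", "Reusing markup, not CSS classes.")
```

The same idea applies whether the partial is a Rails/Django template include, a React component, or anything else that lets you stamp out markup.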
They aren't looking at all visual illusions, they are only looking at the double-drift illusion. In the paper they say the goal of the study was to use the position shift from this particular illusion to investigate where that perceived position emerges in the processing hierarchy.
They say that the double-drift illusion reveals an integration of motion signals over a second or more, which they say makes it unlikely that early visual areas are responsible for the accumulation of position errors because they have short integration time constants.
It seems more like supporting evidence that some illusions are in the conscious domain? It would be interesting to see if there have been any studies on the double-drift illusion and animals.
But why couldn't it be later visual areas that still operate at an unconscious level? Where does the implication come from that something late, or with a long integration time, is conscious?
I might not be understanding what you mean, or I am misinterpreting the paper, or this article is just doing a poor job communicating the research. I don't believe they are assuming or implying that consciousness is required for or detected by this illusion, if that's what you mean.
The authors don't outline what their working definition of consciousness is, but it seems to me that they are using a higher-order theory of consciousness, because they refer to a conscious percept as the end result of some hierarchy of information processing. Would you agree that you consciously perceive all illusions, even if the illusions are caused by some unconscious processes (i.e. outside the conscious domain)?
It makes sense to me that if I were investigating what a conscious percept consists of, I'd take a look at what feeds into it. They say this particular illusion has properties that make it a useful probe which they use to find evidence for WHERE in the flow from sensory representation to conscious percept this unique illusion emerges. It turns out we would not necessarily require consciousness for this illusion (that's not the claim being made), but it's still part of the neural correlates of conscious perception, hence the title of the paper: "Neural correlates of conscious visual perception lie outside the visual system: evidence from the double-drift illusion."
The long integration time in this case really only means that we are unlikely to find emergence in early visual areas, which they confirm with the first experiment that showed an illusory path doesn't share any activation patterns with a matching Gabor path (that has no internal drift) in early visual areas. Then they explored other areas with a whole-brain searchlight analysis and found a shared representation in anterior regions of the brain associated with higher-order processing. That does mean that the representation is stored outside what is usually classified as the visual system, so this evidence suggests the illusion emerges somewhere after the visual system but before the conscious percept.
This is really cool! I built something almost exactly like this, though we used the ESP32 instead. [1]
What I find interesting is that I never had an issue with dropped columns over the network using the ESP32 as an AP. Did you connect the ESP8266 up to an existing network?
I didn't see a mention of how you designed the brushes in this write-up. Were they all just images on an SD card? I experimented with a palette app to design the brushes (solids, gradients, images, manual) and to send the frames to the brush. Curious what your solution was!
This year I am adding a gyroscope to the device to experiment with 3D space and holographic content. Also trying different LED attachments (like a circular or matrix display) for different effects. There's a lot more to explore!
I'm happy to see this today, the denser version looks very nice!
Ooh, yours looks amazing as well! Yes, the ESP8266 was the AP and it was dropping (or maybe not displaying? I doubt that) packets. Maybe the ESP32 is just beefier, or maybe it's the second core (the ESP8266 probably had to put the wifi chip on hold while sending data to the LEDs).
The brushes were just images, yes. It's interesting that you'd ask that, because I didn't have a concept of a brush (it's all just images), whereas you do, since you use them :) In my case, I have a PNG with the pixels I want, and then select the minimum time step and duplicate the columns in the PNG as I want them, so I run through each PNG column to generate the "brush".
I really like how your example "fans out" by activating more LEDs in time, I should try that as well. I think you'd get much better results with some electrical tape as a diffuser (unless you like the stripes!) too.
Haha I've been so absorbed by the brush metaphor in my take I didn't think about what other terms to use. The fanning out is from a brush size slider in the app. My goal was to make it performance friendly for artists so the app has a bunch of real-time things like that.
You're probably right about the dropped packets, though it makes me concerned I'll eventually run into the same problem and my whole workflow depends on the network not sucking lol
Do you set your time step arbitrarily? I haven't implemented a solution for stabilizing the time step (until the gyroscope is added) and found it very difficult to get non-skewed results on images. Yours look really nice though, was that just patience and a steady hand?
I really want to improve the density on mine after seeing your results. For sure I'll work on better diffusion as well, we had one that blurred the results too much so we bailed on the idea but I think a denser strip and a tighter diffusion would be awesome.
If you want to discuss further, send me a message on Keybase (or something else, whatever is convenient for you).
The ESP32 is pretty beefy, can you not do things on-device? I wouldn't rely on the network after what I've seen, but I haven't tried the ESP32.
My time step is constant, I have a parameter for it but I rarely change it. It's mostly a steady hand, yeah.
Are you talking about horizontal or vertical density? Vertically, the 60-pixels-per-meter strip is the best you can do (there are denser ones, but they need a lot of current), though a diffuser will make it look much better. Horizontally, you can get very fine resolution, up to the refresh rate of the strip.
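Back-of-the-envelope for that horizontal limit: the number of distinct columns you can paint in one pass is just the strip's refresh rate times the sweep time. The numbers below are illustrative assumptions, not measurements of any particular strip:

```python
def max_columns(refresh_hz: float, sweep_seconds: float) -> int:
    """Upper bound on distinct image columns in one sweep of the light stick."""
    return int(refresh_hz * sweep_seconds)

# e.g. a strip you can update 400 times per second, swept over 2 seconds:
cols = max_columns(400, 2.0)  # 800 columns across the image
```

In practice the real ceiling also depends on how fast the controller can push frames out, but this gives a sense of why horizontal resolution is so much less of a constraint than vertical.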
Good to know. Maybe if I get back to it I'll find the core is more functional now.
My impression is that it's so easy to publish to npm, and the ecosystem is newbie-friendly. This is good, but it also creates a lot of noise, and popularity/usage is not a valid indicator of reliability/quality (compared to other platforms). And because the core was so barebones, you need to reach for third-party solutions a lot more often, which compounds the issues.
If anyone is interested in some foundational reading Jeff Hawkins and Sandra Blakeslee published a fantastic book in 2004 called "On Intelligence: How a New Understanding of the Brain will Lead to the Creation of Truly Intelligent Machines".
There is also an audiobook version if that's your thing. I enjoyed it very much. Happy to see more progress from this group.