
This is a really good discussion of density in different forms. I've always thought mobile UIs could have a density renaissance; I'd love to see folks question some of the assumptions of these devices. Especially when the trend with LLMs is "wait a long time for a potentially incredibly wrong output," it feels like we're going the wrong way.

When we first released our Chat+RAG feature, users had to wait up to 20 seconds for the response to show, with only a loading animation.

And then we fake-streamed the response (so you're still, technically, waiting 20 seconds for the first token, but now you're also waiting maybe 10 additional seconds for the stream of text to be "typed")...

And, to my enormous surprise, it felt faster to users.

(Of course after several iterations, it's actually much faster now, but the effect still applies: streaming feels faster than getting results right away)
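The mechanics are tiny, for what it's worth. A minimal sketch, with an illustrative chunk size and delay rather than whatever we actually shipped:

```typescript
// Fake streaming: the full response has already arrived, but we reveal it
// word by word so the user sees progress instead of a frozen screen.
async function fakeStream(
  fullResponse: string,
  onChunk: (text: string) => void,
  delayMs = 30, // illustrative; tune to taste
): Promise<void> {
  for (const word of fullResponse.split(" ")) {
    onChunk(word + " ");
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}

// Usage: append each chunk to the chat bubble as it "arrives".
// fakeStream(response, (chunk) => bubble.append(chunk));
```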


Mobile apps are constrained by accessibility (touch target minimum size), so you probably won't see the density renaissance you're hoping for.

This is the unspoken secret of all these mods. They're built to be built, not played, so if subtle instabilities are introduced that would impact practical gameplay, they will likely never be discovered, because the final creation just sits unused on a shelf.


To the same end, when you hire human beings, anticipate them standing up for what they believe in and occasionally inconveniencing your immoral business practices. There are humans on both sides; opinions, and the right to voice them, exist on both sides.


The thing with masterpieces is that they don’t hold that status because they can’t be replicated, they hold it because they were novel, innovative, and unrivaled at the time of their creation.


> The thing with masterpieces is that they don’t hold that status because they can’t be replicated

Copying the old masters is often an important part of developing one's own skills as a painter.

https://www.sightsize.com/old-masters-copying-older-masters-...

> The training of painters in past centuries regularly involved copying old master drawings and paintings. To that end, most museums happily allowed students into their buildings for that purpose (some still do). But the practice was not limited to students. Even fully trained old masters copied older masters.

Why copying old master paintings is useful - https://youtu.be/91UXW_hSpnU

The Art of the Copyist - https://www.metmuseum.org/perspectives/videos/2023/3/copyist...

Art: France’s long history of copying Old Masters at the Louvre - https://www.connexionfrance.com/article/Mag/Culture/Art-Fran...

> The practice of copying and recreating paintings by the Old Masters at the Louvre goes back to when the museum first opened in 1793, when any artist could turn up and use a freely available easel to copy a masterpiece.

> ...

> Not all artists copied works to improve their skills. Some took up the practice professionally, since the demand for copies of masterpieces in the Louvre was high throughout the nineteenth century.

> ...

> These days only 250 copyists are permitted to install themselves in front of the museum’s art works, and a two-year waiting list shows that there are plenty of hopefuls waiting in the wings to take up a palette and brush.

> Those granted access have up to three months to work on their copy.


That this exists as a pedagogical exercise does not disprove the original point in any way.

Source: I spent a lot of time in the library copying sketches of the renaissance masters as a kid.

AI is the pencil, not the artist. As cool and capable as large models are, they are not even remotely close to replacing self-directed human intent. If you do not understand this, you do not understand art.

I don't believe there's some magical quality to human intelligence, just that the things we are making today with AI are still orders of magnitude short of the real thing, and that there are still very difficult open questions in that gap.


There are certain jobs whose holders we consider artists, but which are very close to someone entering text into a prompt. Consider a director for theater/film. They are prompting their "tools" (to be reductive) to produce the art they want, and sometimes have to accept that they just can't get the results they want from their tool.

I've kept thinking about the term "hand crafted" while reading this thread about what is considered valued art, since that's what applies to the gemstone in TFA directly. Then the discussion went to painters and brush strokes, and that too keeps the hand-crafted idea. That's when I jumped to directors. To step further away from art, switch to sportsball: while current managers might once have been players, now they are essentially entering text into prompts to get their "tools" to provide the result they are looking for, with varying degrees of success. The managers/coaches can't kick or throw the ball themselves to get the results; they just have to get their "tool" to perform better by constantly tweaking the text entered into the prompt. Hell, now I'm thinking parents are constantly tweaking their prompts to get their kids to do something.

Okay, at this point, I'm convinced we're all just part of the matrix.


> Consider a director for theater/film. They are prompting their "tools" (to be reductive) to produce the art they want, and have to sometimes accept when they just can't get the results they want from their tool.

Bluntly, it's clear you have no personal understanding of such productions and did not understand the most important point of my comment and how different it is from piloting a generative model.


Bluntly? You clearly have no idea who I am or what my work experience is like. I have no idea what your response has to do with anything, but I hope you feel better for getting it off your chest.


Humans working on a creative team are not automatons given commands, and this is a pretty basic understanding even if you are super impressed by what large models can do.


I am not super impressed by what LLMs can do, and think the current hype wave is ridiculous. I find them slightly more useful than NFTs.

But a director using phrases like "I see what you're doing, and it's interesting. But let's try saying the actual lines a few more times, and then we'll let you play with it some more", or "okay, that was great. let's do it one more time", or "this time with more energy/angrier/etc", or "that was great everyone! this time, we're going to do the same thing but with..." or any other variation of director speak is exactly like a user tweaking their prompt: looking for something entirely different, or keeping parts of it while changing another part.

If you can't see how that kind of feedback loop is similar to using a GPT, then you're really being obtuse; it's as blatant as the nose on your face.


incredibly well said.


I never understand why people consume content like this accelerated. For me, if I feel the need to rush through something, then it's not worth consuming at all.


Personally it helps me understand it more because it requires more concentration. I've always needed to be doing something while listening or watching things, but the problem is that I can distract myself doing that and not actually pay attention to the actual thing at all. Turning the speed up forces me to concentrate more and it's harder to get distracted


This. I have ADHD. If I watch at normal speed, I get bored and my mind wanders.


For me it's mostly how slowly some people talk. This is especially true for content I can understand at a higher speed, for example Dota replays: I feel like I get more out of the time, watching two in the time of one, when I can still understand 95% of the nuance. I also have ADHD, so wanting it to be faster could easily be part of that.


Honestly, not everyone has the same mental clockspeed. I have friends who seem to quite literally think 7x faster than everyone else. But to be fair, they don't do as much side-band emotional processing and tangential thought as me. I process "off the beaten path" of the recorded narrative, so I like realtime speed, bc it lets me relate other thoughts to what I'm hearing. But that's just my slow relational mind :)


It's all about time. I can consume, enjoy, and take in content like this much faster than it is presented. Speeding it up lets me enjoy more content.


So you don't believe scientists when they say that the conscious mind has a low, fairly constant bandwidth? Like what, 50 b/s?


I think it's safe to say that not all people take in information at the same rate. Some people also process different forms of information at different rates: some can read a book and understand it clearly, others see a thing done once and learn the skill. So no, I don't fully believe what scientists have claimed; that may be the average, or the norm, but it isn't true in every case.

Furthermore, how do you actually measure the rate at which that video presents information? It definitely wasn't a steady rate; there were lots of pauses and dead spots, and speeding the video up removes some of that dead space. There's also the depth of the information: we're not working out space-time curvature equations, we're being shown precious stone cutting. If I were trying to learn the skill, maybe even normal speed would be too fast, and slowing it down or repeating sections might be needed. But as I was just trying to get an idea of the subject, a faster speed was fine. I guess what I'm thinking is there are lots of factors to consider.


Sounds like a great reason to watch that 30 b/s video at at least 1.5x speed.


Most of it is pretty long sequences of him cutting the same facets or polishing, so not really much to glean from many of those sequences.


TikTok and YT shorts have conditioned us all to have shorter attention spans.


10 minutes watching YouTube is 10 minutes of your life you'll never get back. I 2x everything unless it needs to be experienced in realtime.


Boomers complained about how Gen X rotted their brains by channel flipping.


What does that prove? They might well both be right.


Every new generation faces some form of peril touted to be the coming of end times by the outgoing generation.

From a 50,000-ft view of the collective, it looks gross and seems like there’s no end in sight. However, individuals can still take responsibility to find a balance between excesses that society provides and complete avoidance of any forms of entertainment.


Each generation saying that the next is having their attention span diminished by new media may seem "gross" to you, but that feeling hardly proves any of them wrong.


what'd you say? i got distracted.


tldr


No, the Silent Generation did that about Boomers.


No, the Greatest Generation did that about the Silent Generation.


I worked at Firebase for many years, and concerns around security rules have always plagued the product. We tried a lot of approaches (self-expiring default rules, more education, etc.), but at the end of the day we still see a lot of insecure databases.

I think the reasons for this are complex.

First, security rules as implemented by Firebase are still a novel concept. A new dev joining a team and adding data into an existing location probably won't go back and fix the rules to reflect that the privacy requirements of that data have changed.

Second, without the security-through-obscurity that random in-house backend implementations used to provide, scanning en masse becomes easier.

Finally, security rules are just hard. Especially for Realtime Database, they are hard to write and don't scale well. This comes up a lot less than you'd think, though, as any time automated scanning is used it's just looking for open data; anything beyond "read write true," as we called it, would have prevented this.
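For readers who haven't seen the format, here is roughly what the two extremes look like in Realtime Database rules (illustrative snippets, not any particular app's):

```json
// Wide open ("read write true"): what automated scanners look for.
{
  "rules": { ".read": true, ".write": true }
}
```

```json
// Even a minimal per-user rule defeats that kind of scan.
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid == $uid",
        ".write": "auth != null && auth.uid == $uid"
      }
    }
  }
}
```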

Technically there is nothing wrong with the Firebase approach but because it is one of the only backends which use this model (one based around stored data and security rules), it opens itself up to misunderstanding, improper use, and issues like this.


To be honest I've always found the model of a frontend being able to write data into a database highly suspect, even with security rules.

Unlike a backend, where the rules for validation and security are visible and part of the specification, Firebase's security rules are something one can easily forget, since they're a separate process and have to be reevaluated as part of every new feature developed.


Yeah, I've never understood how this concept can work for most applications. In everything I build I always need to do something with the input before writing it to a database. Just security rules are not enough.

What kind of apps are people building where you don't need backend logic?


Many apps where every user has his own data, which just needs to be synced between devices.


Curious which apps, if any, you can point to?


A typical notes app.


I think I missed where writing to the database precludes backend logic. Databases have triggers and integrity rules, but beyond that, why can't logic execute after data is written to a database?


Because once it is written to the database, it can be output somewhere before you execute your logic: explicit language, child porn, etc. You generally want to check for that BEFORE you write the data.


You're saying it's impossible to have public write access to a table without also providing public read access?

"it can be output somewhere before you execute your logic" is a design choice that is orthogonal from whether you execute your logic before or after input into the database.


You generally don't want to write child porn to disk, if you can help it.


First of all, most database records couldn't fit child porn, unless it was somehow encoded across thousands of records, in which case you couldn't realize it was child porn until after you've stored 99% of it.

Sure though, by putting "child porn" in a sentence, you can make anything seem bad. Tell me this, would you rather your application middleware was in the "copying child porn" business? ;-)

Actually, the more I think about it, the crazier this seems. You're going to store all the "child porn" you receive in RAM until you've validated that it is child porn?


I don’t get your tone or why you seem shocked that binary data can be stored in a database. Postgres and MySQL both have column sizes for binary data that can hold gigabytes.

Second, you generally need to hold the entire image in RAM to create the perceptual hash needed to check that the image is/isn’t child porn.


> I don’t get your tone or why you seem shocked that binary data can be stored in a database. Postgres and MySQL both have column sizes for binary data that can hold gigabytes.

My tone is shocked, because what you're describing seems totally removed from any system I've seen, and I've implemented a ton of systems. For performance reasons, you want to stream large uploads to storage (web servers, like nginx, are typically configured to do this even before the request is sent to any application logic). You invariably want to store UGC data that conforms to your schema, even if you're going to reject it for content. There's a whole process for contesting, reviewing and reversing decisions that requires the data be in persistent storage.

I think you misunderstood what I said. Yes, Postgres, MySQL and a variety of other databases have column sizes for binary data that can hold gigabytes. What I wouldn't agree with is that most database records can hold gigabytes, binary or otherwise. Heck, most database records aren't populated from UGC sources, let alone UGC sources where child porn is a risk.

But okay, let's assume, for argument's sake, that most database records are happily accepting 4TB large objects, and that you're accepting up to 4TB uploads (where Postgres's large objects max out). Do all your web & application servers have 4TB of memory? What if you're processing more than one request at once; do you have N*4TB of memory?

At least all the systems I've implemented that receive data from users enforce limits on request sizes, and with the exception of file uploads, which are typically directly streamed to the filesystem before processing, those limits tend to be quite small, often less than a kilobyte. Maybe someone could write some really terse child porn prose and compress it down to fit in that space, but pretty much any image would have to be spread across many records. By design, almost any child porn received would be put in persistent storage before being identified as such.

> Second, you generally need to hold the entire image in RAM to create the perceptual hash needed to check that the image is/isn’t child porn.

This is one of many reasons that you generally want to stream file uploads to storage before performing analysis. Otherwise you're incredibly vulnerable to a DoS attack on your active memory resources. Even without a DoS attack, you're harming performance by unnecessarily evicting pages that could be used for caching/buffering for bytes that won't be served at least until you've finished receiving all the file's data.

[Note: Many media encodings tend to store neighbouring pixels together, so you can, conceptually, compute a perceptual hash progressively, without loading the entire file into active memory, which is often desirable, particularly with video content.]
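For what it's worth, streaming an upload to disk before any analysis is only a few lines in Node. A sketch, where enqueueForAnalysis is a hypothetical handoff to a queue worker:

```typescript
import { createServer } from "node:http";
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";
import { join } from "node:path";
import { tmpdir } from "node:os";
import { randomUUID } from "node:crypto";

// Stream the request body straight to a temp file. Analysis (perceptual
// hashing, moderation, etc.) happens afterwards, reading from disk, so
// memory use stays flat no matter how large the upload is.
const server = createServer(async (req, res) => {
  const tempPath = join(tmpdir(), `upload-${randomUUID()}`);
  try {
    await pipeline(req, createWriteStream(tempPath));
    // enqueueForAnalysis(tempPath); // hypothetical queue handoff
    res.writeHead(202);
    res.end("queued\n");
  } catch {
    res.writeHead(500);
    res.end("upload failed\n");
  }
});

server.listen(8080);
```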


Thought about it some more... this whole scenario makes sense in only the narrowest of contexts. Very few applications directly serve UGC to the public, and a lot of applications are B2B: you're authenticated, and there's a link to your employer (or to you, if you're self-employed). Uploaded data isn't made visible to the public. Services are often limited to a legal jurisdiction. If you want to upload your unencrypted child porn to a record in Google's Firebase database, go ahead. The feds could use some easy cases.


There's little point in not writing it to disk; the question of holding it in RAM vs writing a file to disk is moot. You've got to handle it, and the best way of handling that kind of thing at scale is to write it to temporary disk and then have a queue process work over the files doing the analysis.

No serious authority is going to hang you for having illegal UGC in storage while you process it. Heck, you can even allow content to go straight to publicly accessible if you have robust mechanisms for matching and reporting. The authorities won't take a hard line against a platform which is open to the public as long as it has the right mitigations in place, and they won't immediately blame you unless you act as a safe haven.

A sensible architectural pattern for binary UGC upload data would plan to put it in object storage and then deal with it from there.


I have never in my life written a "child porn validator" that restricts files uploaded by users to "non child porn". That sounds nontrivial and futile (every bad file can also be stored as a zip file with a password). It sounds like an example of a "think of the children" fallacy.

I also find the firebase model weird (but I didn't use it yet), but not for the child porn reasons.


Writing directly to Firebase is rarely done past the MVP stage. Normally it's the reading which is done directly from the client. Generally writes are bounced through Cloud Functions or a traditional server of some form. Some also "fan out" data, where a user has a private area to write to (say a list of tweets), which then gets "fanned out" to followers' timelines via an async backend process that does any verification / cleansing as needed.
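A sketch of what that fan-out step can look like as a Cloud Functions trigger; the collection layout and the 280-character check are illustrative, not anyone's actual schema:

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// The client may only write into its own private collection (enforced by
// security rules); this trigger validates server-side, then fans out.
export const fanOutTweet = functions.firestore
  .document("users/{uid}/tweets/{tweetId}")
  .onCreate(async (snapshot, context) => {
    const tweet = snapshot.data();

    // Verification / cleansing happens here, out of the client's reach.
    if (typeof tweet.text !== "string" || tweet.text.length > 280) {
      return snapshot.ref.delete();
    }

    const followers = await admin
      .firestore()
      .collection(`users/${context.params.uid}/followers`)
      .get();

    // Copy the tweet into each follower's timeline.
    return Promise.all(
      followers.docs.map((follower) =>
        admin
          .firestore()
          .doc(`users/${follower.id}/timeline/${context.params.tweetId}`)
          .set(tweet),
      ),
    );
  });
```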


Sadly, most developers don't know this and continue to write from the frontend; almost all of the apps and websites we found did this.


It's a really good question

context: I have a near-100% naive perspective. Mobile dev who's built out something approximating Perplexity on Supabase. I have to use edge functions for e.g. CORS, but by and large, the logic is all in the app.

Probably because the client is in Flutter, and thus multiplatform & web in one, I see manipulating the input on both the client and server as code duplication and error prone.

I think if I was writing separate native apps, I'd push everything through edge functions, approximating your point: better to have that sensitive logic of what exactly is committed to the DB in one place.


Our experience has been very different. Our Firebase security rules are locked down tight, so any new properties or collections need to be added explicitly for a new feature to work — it can't be "forgotten". Doing so requires editing the security rules file, which immediately invites strict scrutiny of the changed rules during code review.

This is much better than trying to figure out what are the security-critical bits in a potentially large request handler server-side. It also lets you do a full audit much more easily if needed.


Are you suggesting that it's essentially too easy for a dev to just set and forget? That's a pretty interesting viewpoint. Not sure how any BaaS could solve that human factor.


Say you add a super_secret_internal_notes field. If you're writing a traditional backend, some human would need to explicitly add that to a list of publicly available fields somewhere (well, hopefully). For systems like Firebase, it's far too easy to have this field be created by frontend code that's just treating this as another piece of data in a nested part of a payload. But this can also happen on any system, if you have any JSON blob whose implicit schema can be added to by frontend development alone.
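To make that concrete: Firestore rules can pin writes to an explicit field allow-list, so a frontend-invented field gets rejected instead of silently becoming schema. A sketch with hypothetical collection and field names:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /posts/{postId} {
      allow read: if true;
      // Writes may only contain these fields; anything like
      // super_secret_internal_notes is refused at the door.
      allow write: if request.auth != null
        && request.resource.data.keys().hasOnly(['title', 'body', 'authorId']);
    }
  }
}
```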

IMO implicit schema updates on any system should be consolidated and lifted to an easily emailed report - a security manager/CSO/CTO should be able to see all the super_secret_internal_notes as they're added across the org, and be able to immediately rectify security policies (perhaps even in a staging environment).

AFAIK Firebase doesn't do this - while there are pretty audit logs, there's not an automatic rollup of implicit schema changes: https://firebase.google.com/support/guides/cloud-audit-loggi...

(Also, while tongue in cheek, the way that the intro to a part of Firebase's training materials https://www.youtube.com/watch?v=eMa0hsHqfHU implicitly centers security as part of the launch process, not something ongoing, is indicative of how pervasive the issue is - and not at all something that's restricted to Firebase!)


Generally agreed on improved audit logs of some form helping.

Re training materials, this is one of the mitigations we launched to attempt to pull security to front of mind. I do not really think this is a Firebase problem, I think average developers (or average business leaders) just don't, in general, think much about security. As a result, Firebase materials have a triple burden - they need to get you to think about security, they need to get you to disrupt the most "productive" flow to write rules, and they need to get you to consistently revisit your rules throughout development. This is a lot to get into someone's head.

For all the awesomeness of Firebase's databases, they're both ripe footgun territory (Realtime Database specifically). Our original goal was to make the easiest database to get up and running with, which I think we did, but that initial ease comes with costs down the road which may or may not be worth it, that's a decision for the consumer.


You could do away with the model of the frontend writing to the DB and ask customers to implement a small backend with a serverless component like AWS Lambda or Google Cloud Functions.

Barring that, perhaps Firestore could introduce the concept of a "lightweight database function hook" akin to Cloudflare workers that runs in the lifecycle of a DB request, thus formalizing the security requirements specific to the business requirement and causing the development organization to allocate resources to its upkeep.

So while a security rule usually gets tested very lightly, you'd see far more testing in a code component like the one I'm suggesting.


> Barring that, perhaps Firestore could introduce the concept of a "lightweight database function hook" akin to Cloudflare workers that runs in the lifecycle of a DB request, thus formalizing the security requirements specific to the business requirement and causing the development organization to allocate resources to its upkeep.

Firebase has triggers.


I think it's more that there's more surface area to forget when humans are handling so many concerns, and since the rules aren't the part that changes most often, they're a likely candidate for being "pushed out of the buffer" (of the human).

In a more typical model, backend devs focus more on security, while not needing to know the frontend, and vice versa.


Eventually, humans will forget, set or not.


I agree, but I also disagree.

The concept with Firebase DBs is flawed IMO. I never got the point of directly accessing a DB from the frontend, or allowing that even with security rules; it just seems like it would cause problems.


We tried to contact Google via support, to help or to have them help disclose the issues to the affected websites. We got no response other than one telling us that they would create a feature request on our behalf if we wanted, instead of helping us. Which is fair, as I think we'd have to escalate pretty far up in Firebase to get the attention of someone who could alert project owners.


One of the things we fought for, for years after the acquisition, was maintaining a qualified staff of full-time, highly paid support people who were capable of identifying and escalating issues like this with common sense.

This is a battle we slowly lost. It started with all of support being the original team, then went to 3-4 full-time staff plus some contractors, then to entirely contractors (as far as I'm aware).

This was a big sticking point for me. I told them I did not believe we should outsource support, but they did not believe we should have support for developer products at all, so I lost to that "compromise." After that I volunteered myself to do the training of the support teams, which involved traveling to Manila, Japan and Mexico regularly. This did help, but like support as a whole, it was a losing battle and quality has declined over time.

Your experience is definitely expected and perhaps even by design. Sadly this is true across Google, if you want help you’d best know a Googler.


I suspect it is going to end up being Google's downfall, or at least, be part of it.

They simply don't know humans. Their repeated failures at building social networks are good enough evidence. They always try to keep the human out of the loop, which, to be fair, worked for them in the early days, as their search engine was better than those that relied on human-made directories. But now it is becoming ridiculous. It is a company of bots, for bots. And when they do need humans for some reason, they take away most of the value those humans could add with rigid frameworks, basically treating them like bots. They pay hundreds of thousands not for people who are competent and trustworthy to provide the best service, but for people who write bots to provide mediocre service.

I believe that at some point, a startup who understand humans will eat them up, bit by bit, by feeding on dissatisfied customers who don't want to deal with stupid bots.


I really doubt this will be Google's downfall; they're too big to fail right now. I think it will be laws.


Thank you, and good job for isolating the root cause and solution.

Deregulation is an op designed to prevent the people from toppling dragons.


"The bigger they are, the harder they fall" is a saying for a reason. There is no such thing as "too big to fail"; otherwise the East India Company would still be in operation.


Sometimes. IBM was still considered big when Buffett invested in them early in the 2010s, and it took almost a decade of bad performance for him to finally exit. It might be slowly sliding into irrelevance, but its stock hasn't completely tanked, during or after that period.


> they did not believe we should have support for developer products at all

That explains a lot.


What'd you expect, it's Google!


> which is fair as I think we'd have to escalate pretty far up in Firebase to get the attention of someone who could alert project owners.

This raises the question: isn't this a security vulnerability after all?


It is for the sites, not Firebase.


Looking at https://firebase.google.com/docs/rules/basics, would it be practical to have a "simple security mode" where you can only select from preset security rule templates? (like "Content-owner only" access or "Attribute-based and Role-based" access from the article) Do most apps need really custom rules or they tend to follow similar patterns that would be covered by templates?

A big problem with writing security rules is that almost any mistake is going to be a security problem so you really don't want to touch it if you don't have to. It's also really obvious when the security rules are locked down too much because your app won't function, but really non-obvious when the security rules are too open unless you probe for too much access.

Related idea: force the dev to write test case examples for each security rule where the security rule will deny access.
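A sketch of what such a "must deny" test could look like with the official rules testing library (it needs the Firestore emulator running; the project ID and paths here are made up):

```typescript
import { readFileSync } from "node:fs";
import {
  assertFails,
  initializeTestEnvironment,
} from "@firebase/rules-unit-testing";

async function main() {
  const testEnv = await initializeTestEnvironment({
    projectId: "demo-rules-test", // made-up emulator project ID
    firestore: { rules: readFileSync("firestore.rules", "utf8") },
  });

  // The rule under test must DENY: an unauthenticated client trying to
  // read another user's private document.
  const anon = testEnv.unauthenticatedContext().firestore();
  await assertFails(anon.doc("users/alice/private/profile").get());

  await testEnv.cleanup();
}

main();
```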


One simple trick helped us a lot: we have a rules transpiler (fireplan) that adds a default "$other": {".read": false, ".write": false} rule to _every_ property. This makes it so that any new fields must be added explicitly, making it all but impossible to unknowingly "inherit" an existing rule for new values. (If you do need a more permissive schema in some places you can override this, of course.)
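The transpiled output ends up looking roughly like this (an illustrative shape, not our actual rules):

```json
{
  "rules": {
    "users": {
      "$uid": {
        "name": { ".read": true, ".write": "auth.uid === $uid" },
        "$other": { ".read": false, ".write": false }
      }
    }
  }
}
```

Any field not explicitly listed falls through to "$other" and is denied.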

Our use of Firebase dates back 10+ years so maybe the modern rules tools also do this, I don't know.

What would really help us, though, would be:

1. Built-in support for renaming fields / restructuring data in the face of a range of client versions over which we have little control. As it is, it's really hard to make any non-backwards-compatible changes to the schema.

2. Some way to write lightweight tests for the rules that avoids bringing up a database (emulated or otherwise).

3. Better debugging information when rules fail in production. IMHO every failure should be logged along with _all_ the values accessed by the rule, otherwise it's very hard to debug transient failures caused by changing data.


I've been an advocate for Firebase and Firestore for a while — but I will agree with all of the points above.

It's a conceptual model that is not sufficiently explained. How we talk about it on our own projects is that each collection should have a conceptual security profile, i.e. is it public, user data, public-but-auth-only, admin-only, etc., and then use the security rule functions to enforce these categories — instead of writing a bespoke set of conditions for each collection.

Thinking about security per-collection instead of per-field mitigates mixing security intent on a single document. If the collection is public, it should not contain any fields that are not public, etc. Firestore triggers can help replicate data as needed from sensitive contexts to public contexts (but never back.)

The problem with this approach is that we need to document the intent of the rules outside of the rules themselves, which makes it easy to incorrectly apply the rules. In the past, writing tests was also a pain — but that has improved a lot.


It's not that difficult to build the scanner into the Firebase dashboard. Ask the developer to provide their website address, do a basic scan to find the common vulnerability cases, and warn them.


Firebase does that, the problem is "warning them" isn't as simple as it sounds. Developers ignore automated emails and they rarely if ever open the dashboard. Figuring out how to contact the developers using the platform (and get them to care) has been an issue with every developer tool I've worked on.


It also makes portability a pain. Switching from an app with Firebase calls littered through the frontend and data consistency issues to something like Postgres is a lengthy process.


Firebase attracts teams that don't have the experience to stand up a traditional database - which at this point is a much lower bar thanks to tools like RDS. That is a giant strobing red light of a warning for what security expectations should be for the average setup. No matter what genius features the Firebase team may create, this was always going to be a support and education battle that Google wasn't going to fully commit to.


At Steelhead we use RLS (row-level security) to secure a multi-tenant Postgres DB. The coolest check we do: create a new tenant, run a DB dump with RLS enabled, and ensure the dump is empty. That validates all security policies in one fell swoop.
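A sketch of the pattern, with illustrative table and setting names rather than our actual schema:

```sql
-- Every tenant-owned table gets RLS plus a policy keyed on a
-- per-connection setting.
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_setting('app.current_tenant')::uuid);

-- The check: set app.current_tenant to a brand-new tenant's id, run
-- pg_dump --enable-row-security, and verify the data section is empty.
-- Any rows in the dump mean some table's policy is missing or wrong.
```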


The security rules are where I fell out of love with Firebase. Not that there is anything wrong with the security itself, but up until the point of having to write those security rules, the product experience felt magical: so easy to use, only one app to maintain, pretty much.

But with the Firebase security rules, I now pretty much have half of a server implemented just to get the rules working properly, especially for more complex lookups. And for those rules, the tooling simply isn't as good as using TypeScript or the like.

I haven't used Firebase in years though, so I don't know if it has gotten easier.


Firebase needs something like RLS (row-level security). It needs to be real easy to write authorization rules in the database, in SQL (or similar), if you're going to have apps that directly access the database instead of accessing it via a proxy that implements authorization rules.


I agree! Supabase does it pretty well.


Very well-spoken arguments for a fundamental need for structural diversity, not monoculture, on the net.


I don't see the comment arguing for that at all, and I don't think the analogy to crop monocultures being more vulnerable to pests really holds.

There are good reasons we deride treating "security through obscurity" as valid, and just because "structural diversity" makes automated scanning harder doesn't mean it can't be done. See Shodan.


The idea as I (who is not GP) see it is not that diversity makes scanning harder, it’s that it makes the blast radius smaller. Notably, though, that means we have to be talking about diversity of implementations, not just deployments—numerous deployments of just a few pieces of software can be problematic in their own ways, and of course there have been bugs with huge consequences in Apache, MSRPC, or—dare I say it—sendmail since the very earliest days.


"security through obscurity" is red team trash talk mostly.


I view the issue as more of a poor UX choice than anything else. Firebase's interface consists entirely of user-friendly sliders and toggles EXCEPT for the security rules, which are just a flimsy config file. I can understand why newer devs might avoid editing the rules as much as possible and set the bare minimum required to make warnings go away, regardless of whether they're actually secure or not. There should be a more graphical and user-friendly way to set security rules, and devs should be REQUIRED to recheck and confirm them before any other changes can be applied.


I've done a handful of fun hardware + LLM projects...

* I built a real life Pokedex to recognize Pokemon [video] https://www.youtube.com/watch?v=wVcerPofkE0

* I used ChatGPT to filter nice comments and print them in my office [video] https://www.youtube.com/watch?v=AonMzGUN9gQ

* I built a general purpose chat assistant into an old intercom [video] https://www.youtube.com/watch?v=-zDdpeTdv84

Again, nothing terribly useful, but all fun.


Oh hey I just watched that pokedex video. It was so impressive! Deserves way more attention


Indeed! Such a beautiful project!


Great job on that Pokedex and the video entirely. So freakin cool!


I recently did my own case build around a Framework motherboard in a retro style: https://www.youtube.com/watch?v=YsOAE5c7YXs. It worked pretty well, and I actually met up with the Framework CEO, Nirav, when I was recently in SF to discuss ways to better enable this type of project. I truly believe they want to support creative re-use.


Really funny project and video!


Yes! I've used them dozens and dozens of times. Not the cheapest if you want quick turnaround, but I used to cut down costs by walking over to their Oakland warehouse for pickup. It was a nice small operation; smelled like burnt acrylic from a block away though, haha.


It's not mind-reading if you look at someone's actions and words then speculate about what their motivations and future decisions will look like - that's just deductive reasoning.


Well, inductive reasoning. But yes, that.

