> My fantasy was that I was playing this weird game many years later on a lazy Sunday afternoon; I play a couple of runs, enjoy my time for about an hour, then set it down and continue the rest of my day. I wanted it to feel evergreen, comforting, and enjoyable in a very low-stakes way.
I will pay money for more games like this. I want more games like this.
I could write an essay on this beautiful breath of fresh air. Balatro, like many beautiful pieces of software, is defined by what it is and isn't. No ads, no screwy Skinner box mechanics. Just wholesome gameplay.
> This algorithm is much more efficient than typical ray tracing acceleration methods that rely on hierarchical acceleration structures with logarithmic query complexity.
!?
This is a wild claim to just slip in. Can anyone expand?
It's faster because each cell contains only a constant number of faces to check to find where the ray exits. You can then traverse from cell to cell this way, without the hierarchical bounding-box checks you'd normally need.
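A minimal sketch of that kind of cell walk, with a made-up data layout (the paper's actual structures will differ): each step only tests the current cell's handful of faces to find the exit, then hops to the neighboring cell.

```python
# Minimal sketch of cell-to-cell ray traversal. The Face/Cell layout
# here is hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Face:
    normal: tuple             # outward plane normal (nx, ny, nz)
    offset: float             # plane equation: dot(normal, p) = offset
    neighbor: "Cell | None"   # cell on the other side, None at the boundary

@dataclass
class Cell:
    faces: list               # constant-size list of Face

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def traverse(origin, direction, cell, max_steps=10_000):
    """Walk the ray from cell to cell until it leaves the mesh.

    Each step checks only the current cell's few faces (O(1) per step),
    instead of descending a hierarchy (O(log n) per query)."""
    t = 0.0
    for _ in range(max_steps):
        # Find the nearest face the ray exits through.
        t_exit, exit_face = float("inf"), None
        for face in cell.faces:
            denom = dot(face.normal, direction)
            if denom > 1e-12:  # ray is heading outward through this face
                t_hit = (face.offset - dot(face.normal, origin)) / denom
                if t <= t_hit < t_exit:
                    t_exit, exit_face = t_hit, face
        if exit_face is None or exit_face.neighbor is None:
            return cell        # ray left the mesh (or hit the boundary)
        t, cell = t_exit, exit_face.neighbor  # step into the next cell
```

The per-step cost is constant in the number of faces per cell, which is where the claimed advantage over logarithmic hierarchy queries would come from.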
I don't know much about the details of how it got started (i.e., how much Palmer Luckey self-funded it in the early stages), but the fact that he'd already built and sold Oculus was always going to make it easier to open doors and close funding deals (even if the politics/FB drama was a complication).
Also, building tech for the US military has some advantages. You have one customer with very high stakes and very deep pockets. Obviously it's hard to get started, and you need to be building something that's uniquely useful and valuable for defence purposes, but once you achieve that and you're in the door of the military industry, you're on pretty solid ground, and that's attractive to investors. The one hardware company I know in Australia that's doing well is also making defence tech.
It's much harder if all your potential customers (in my case, farmers) are small family businesses spread thinly in non-urban areas all over the world. Investors know that makes distribution much more challenging.
> We hope that TinyStories can facilitate the development, analysis and research of LMs, especially for low-resource or specialized domains, and shed light on the emergence of language capabilities in LMs.
This part interests me the most. I want to know how small yet functional we can get these models. I don't want an AI that can solve calculus; I just want a dumb AI that pretty consistently recognizes "lights off" and "lights on".
It's actually pretty hard to design a non-LLM system that can detect all the possible variations:
Lights on. Brighter please. Turn on the light. Is there light in here? Turn the light on. Table lamp: on. Does the desk lamp work? It's a bit dim here, anything you can do? More light please. Put the lights on for the next 5 mins. Turn the light on when I come home. Turn all the lights off together. Switch the lights off whenever it's daytime or quiet at home unless I say otherwise. etc.
If you don't support every possible way of saying a command, then users will get frustrated because they effectively have to go and learn the magic incantation of words for every possible action, which is very user-unfriendly.
I suspect ModernBERT can also be very helpful with these sorts of tasks, if you decompose them into an intent classification step and a named entity recognition step.
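A rough sketch of the intent-classification half, using ModernBERT via Hugging Face transformers. The intent labels are made up, and the classification head starts untrained, so you'd fine-tune on your own labelled utterances first:

```python
# Sketch only: the INTENTS label set is invented, and the sequence
# classification head is randomly initialized until you fine-tune it.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INTENTS = ["lights_on", "lights_off", "dim", "query_state", "other"]

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base", num_labels=len(INTENTS)
)

def classify_intent(utterance: str) -> str:
    inputs = tokenizer(utterance, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return INTENTS[logits.argmax(dim=-1).item()]

# Meaningful only after fine-tuning:
print(classify_intent("It's a bit dim in here, anything you can do?"))
```

A named-entity recognition pass over the same utterance would then pull out which room or device the intent applies to.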
That entity extraction is where it actually gets really, really difficult, even for LLMs, since people will use 10 different names for the same thing and you'll have to know them ahead of time to handle them all properly. Whether BERT-based or LLM-based, the system needs some way to pick up and learn those new names, unless you require users to enter them all ahead of time. That said, I've seen LLMs handle this a lot better when given a list of aliases in the prompt for each room and each type of device, when playing with Home Assistant + an LLM.
It isn't unreasonable to imagine one recognizable intent being "teach new terminology". That would allow dialogues where the machine doesn't understand the object of the command, the human then says something like "when I say x, I mean the y", and the computer updates the training set for the named-entity recognizer and does a quick fine-tuning pass.
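Something like this toy flow, where the alias table, the trigger-phrase regex, and the device IDs are all illustrative rather than from any real assistant:

```python
# Toy sketch of the "teach new terminology" flow described above.
import re

# Canonical device IDs are made up for illustration.
aliases = {"table lamp": "light.living_room_lamp"}

TEACH = re.compile(r"when i say (.+?), i mean the (.+)", re.IGNORECASE)

def resolve(name: str):
    return aliases.get(name.strip().lower())

def handle(utterance: str):
    m = TEACH.match(utterance)
    if m:
        new_name, known_name = m.group(1), m.group(2)
        target = resolve(known_name)
        if target is None:
            return f"I don't know '{known_name}' either."
        aliases[new_name.strip().lower()] = target
        # In a BERT-based system you'd also append this utterance to the
        # NER training set here and schedule a quick fine-tuning pass.
        return f"Got it: '{new_name}' now means {target}."
    return "…normal intent handling…"

print(handle("When I say the reading light, I mean the table lamp"))
```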
Your examples include complex instructions and questions, but for simple ON/OFF commands you can go far by pulling key words and ignoring sentence structure. For example, picking out "on", "off", and "light" will work for "turn the light on", "turn off the light", "light on", "I want the light on", etc. Adding modifiers like "kitchen" or "all" can help specify which lights (your "Table lamp: on" example), regardless of how they're used. I'm not saying this is a great solution, but it covers pretty much all the basic variations for simple commands and can run on anything.
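A bare-bones version of that matcher (the vocabulary is invented for illustration) might look like:

```python
# Ignore sentence structure entirely: just look for an action word,
# a device word, and optional modifiers. Word lists are illustrative.
ACTIONS = {"on": True, "off": False}
DEVICES = {"light", "lights", "lamp"}
MODIFIERS = {"kitchen", "desk", "table", "all"}

def parse(utterance: str):
    words = utterance.lower().replace(":", " ").replace(".", " ").split()
    action = next((ACTIONS[w] for w in words if w in ACTIONS), None)
    device = next((w for w in words if w in DEVICES), None)
    scope = [w for w in words if w in MODIFIERS]
    if action is None or device is None:
        return None  # admit we didn't understand rather than guess
    return {"device": device, "on": action, "scope": scope or ["default"]}

for cmd in ["Turn the light on", "turn off the light",
            "I want the light on", "Table lamp: on"]:
    print(cmd, "->", parse(cmd))
```

Note that it fails closed: if it can't find both an action word and a device word, it reports not understanding instead of taking a guess at an action.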
That is certainly a hard problem, but how do you know such a system is better than simpler command-based ones? Systems like the one you describe have a much higher chance of taking the wrong action. Even humans do, when given ambiguous instructions. I know everyone loves to hate on Siri because it doesn't do a good job of understanding anything complex, but I have always found it to be very reliable once you find the right command. As a result I use it much more often than Google Assistant (I was team Android/Pixel until 2020). I especially use it for timers, reminders, and navigation, and, if my hands are not free, texts. Taking the wrong action breeds distrust, which I also think is not user-friendly.
I find this work much more exciting. They're not just teaching a model to hallucinate given WASD input. They're generating durable, persistent point clouds. It looks so similar to Genie2 yet they're worlds apart.
None to share here. I was at the boring places; my parents had the interesting experiences, but those aren't my stories to share.
It had lots of small industry-specific solutions and lots of tied-together hardware/software solutions (so a large 'tech' community, where 'tech' was the title for hardware technicians who hand-built/designed/produced the hardware devices, circuit boards, etc.). I'm surprised you never heard of the Santa Cruz software scene. Some examples:
Victor Technologies (specifically Victor 9000 Computer)
Texas Instruments had a fab/facility there.
Digital Research had a presence (CP/M, PDP)
Seagate
SCO
Borland
E-mu Systems (hardware music samplers) (lots of cool musicians used to visit their facilities)
(current) Antares (Auto-Tune/music tools)
(current) Plugin Alliance (music tools)
(current?) Plantronics
But mainly it was smaller, obscure companies. For example, Parallel Computers working on parallel computing early on, IDX working on radio tags in the '80s, and Triton doing sonar imaging with SGI boxes.
It was very much its own sub-scene, with people who picked it for the lifestyle, so a bit of a different mindset (more hippie, schedules arranged around the tides so people could surf, everyone going to the Wednesday night sailing races together). I was just a kid, but it seemed cool. And it was open and friendly (I always had summer jobs as a kid, starting with duplicating floppies for CP/M or PDP software as a 10-year-old). Plus just a cool vibe. I remember, next door to where my mom worked, going and talking to Lorenzo Ponza (inventor of the pitching machine) when I got bored of playing Trek or Rogue on the VAX at her office (she was a workaholic, always working weekends and dragging me along), or skating the ramp (for the stoner tech(nician) crowd) in the parking lot.
Thanks for the response! No, the only one that rings a bell is Plantronics (recently changed hands/title) since you pass it on the way to the Santa Cruz Costco.