
Shamelessly sharing my own writeup on how to set this up on your Mac

https://www.greghilston.com/post/tiling-window-manager-on-os...


No shame at all should be felt for sharing something you worked on that can help people (including me)


I appreciate the sentiment :)


Same, I had neovim. Which maybe isn't a Webster's dictionary word, but is funny nonetheless


Can you explain why that sounds boring?

I ask, as my preference would be to do the boring thing and install a binary locally. Like how one generally uses git for example.


I meant in a SQL context. I don’t want to manage extra binaries to deploy across environments, especially if the same result can be achieved via standard SQL


I used to love Second Life back in the day. Can you tell us more about your "game", and less about the technical details? I know this is HN, but what would you tell a prospective player about your game?


I'm interested in metaverses that work. The NFT industry has crashed and burned [1], and Facebook/Meta has bombed in that space [2]. With the clown car out of the way, it's now possible to make progress. The lesson of these failures is that there's not much of a role for ads or brands in the metaverse. They're distracting and don't fit in. The successes, from Roblox to Fortnite to Second Life, charge users a modest fee each month.

A metaverse is not a game. Games can be built within it by users. This has been done in Second Life, but the games are 1) sluggish, and 2) space-constrained, because land is expensive. Those are scaling problems which can be solved.

This requires solving the scaling problems that led to the original article here. You can't just download everything in advance. There's too much stuff in the larger games.

Open Simulator is an open source re-implementation of Second Life servers, written in C#. It's been around for a while, and now it's getting a bit more developer attention. There are multiple federated grids of Open Simulator servers, and content stores where you can buy items, all under different management. Land is much cheaper than in Second Life, but servers tend to be under-resourced and slow. There are some people working quietly on trying to improve the Open Simulator technology to work better.

Other attempts to solve this problem include Improbable's system. Improbable managed to blow through $400 million on the scaling problem, producing a system that's too expensive to run.[3] (They're funded by SoftBank.) Some good indie games tried to use their system, but the server bill was too high. Their approach is a general-purpose distributed object manager, which seems to be the wrong tool for the job. Otherside uses Improbable, and they only turn on their world maybe twice a year for a few hours for special events.

[1] https://web3isgoinggreat.com/

[2] https://www.wsj.com/articles/meta-metaverse-horizon-worlds-z...

[3] https://www.improbable.io/

[4] https://www.ft.com/content/3508bec7-a2f8-414e-8059-7b96b2700...


My website has given me a place to share projects I've done or small snippets I've learned about. I am always writing as if I am the audience, as I'm referring to it quite often.

Over time I've noticed that readership has increased and I've started to get comments from readers either asking for additional help or offering advice. With that also comes a ton of companies offering their paid services to improve my SEO ranking...

Overall, it's a nice stress-free place to write.

https://www.greghilston.com/


It would be incredible if a ChatGPT competitor came into the self-hostable, open source niche like Stable Diffusion did.

The options I've seen today are nowhere near as good as ChatGPT


https://www.greghilston.com/

I'm pretty torn on having a separation between the notion of "posts" and "projects". I don't keep the project section up to date very often, and probably should remove it. Any thoughts/advice on this, HN?


Briefly looking through your posts, I'm not 100% sure what differentiates the project posts from the others. I like the idea of the project page being visually distinct from the lists of posts. One way to sort out the project page is to provide a short blurb describing each post.


> Briefly looking through your posts, I'm not 100% sure what differentiates the project posts from the others

Yeah, that's exactly what I think the problem is. I arbitrarily consider one thing a project and another thing a post.

Understood, thanks for your advice!


Don't quite understand why the images are links? Also highly recommend moving away from Disqus


Do you have any writeups on what prompt engineering you've done to get GPT-J to behave like a chat?


I haven't because I don't need it or want it


Hi there! Can you talk about your little life simulations?


I haven't worked on them in a while to be honest, but for many years I worked on a small snail simulation in my spare time and blogged about it[0]. The project had gone through JS, PHP, and later Go iterations. I also spent a few weeks playing around with implementing an approach described in this paper on open ended simulations[1]. Go was not the best language to choose for this project[2], but it definitely counted as fun hobby coding for me.

[0] https://liza.io/categories/snails/

[1] https://www-users.cs.york.ac.uk/susan/bib/ss/nonstd/ecal11-1...

[2] https://liza.io/roee-self-modifying-go-simulation-experiment...


Yes, you can download the trained model and run it on your machine. The article has a link to a Hugging Face model, where you can play with it in the web browser as a toy example, then download it locally and use it with code.


Another noob here - does this Hugging Face model expose an API? I have a light classification use case I might wanna try it out on, but I think running it on my machine / a beefy cloud machine would be overkill


All Spaces do[0], but please don’t abuse it: it is just for demo purposes. If you hammer it, it will be down for everyone, and they might not bring it back up.

It can be run locally on a GPU with ~16GB of VRAM; you might be able to configure it at a lower precision to run it on GPUs with half that RAM (a rough sketch of this is below).

[0]: https://huggingface.co/spaces/togethercomputer/GPT-JT/blob/m...
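
For reference, here's roughly what loading it in half precision with the Hugging Face transformers library might look like. Treat it as an untested sketch: the exact model id is an assumption, so check the Space's files for the real checkpoint name before using it.

    # Rough sketch: load GPT-JT in fp16 so it fits on a ~16 GB GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "togethercomputer/GPT-JT-6B-v1"  # assumed id, verify before use
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # halves memory vs. the default float32
        device_map="auto",          # requires `accelerate`; offloads layers if the GPU is too small
    )

    prompt = "Is the following review positive or negative? 'I loved it.' Answer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=5)
    print(tokenizer.decode(out[0], skip_special_tokens=True))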


There are also a number of commercial services that offer GPT-J APIs (and surely in a couple days GPT-JT APIs) on a pay-per-token or pay-per-compute-second basis. For light use cases those can be extremely affordable.


Can't one do inference using a CPU?


thank you!

