raviqqe42's comments

We can still represent graphs by expressing references explicitly, for example, using hash maps. Here, I just wanted to point out that there are no circular references "in memory" in the language. It might be interesting for you to take a look at the documentation of Rust or Koka and see how they represent those data structures!
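
For example, here is a minimal sketch in Rust of what I mean by explicit references (the names are just for illustration): nodes refer to each other by ID through a hash map rather than by pointer, so the graph can be cyclic while the memory representation is not.

    use std::collections::HashMap;

    // Nodes are identified by plain integer IDs, and edges are stored as ID
    // lists. No node holds a reference to another node, so there are no
    // cycles in memory even if the graph itself is cyclic.
    type NodeId = u32;

    #[derive(Default)]
    struct Graph {
        edges: HashMap<NodeId, Vec<NodeId>>,
    }

    impl Graph {
        fn add_edge(&mut self, from: NodeId, to: NodeId) {
            self.edges.entry(from).or_default().push(to);
        }

        fn neighbors(&self, node: NodeId) -> &[NodeId] {
            self.edges.get(&node).map(Vec::as_slice).unwrap_or(&[])
        }
    }

    fn main() {
        let mut graph = Graph::default();
        graph.add_edge(0, 1);
        graph.add_edge(1, 0); // a cycle in the graph, but not in memory
        println!("{:?}", graph.neighbors(1));
    }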


free time haha


Fair enough! Sorry, I didn't intend that to be rude; I just genuinely didn't understand, as it seems to merge two very conflicting ideologies.


I think this project is a great idea! A minimalist runtime and language (like Go) but with an emphasis on functional programming seems like the best of both worlds! However, will the language get cumbersome without advanced FP features? See the Elm debates…


Honestly, my main issues are with the lack of generics and the "dynamically typed effect system". The idea of a minimalist runtime and language reminds me of Clean a little bit.


IMO, dependency injection tries to solve problems similar to those traditionally tackled by purely functional programming languages, such as side effect management, reliable unit tests, segregation of application logic from implementation details, and so on.

This is a rough way to put it, but in other words, dependency injection is simply a less strict, untyped version of an effect system or purely functional programming.
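
As a rough illustration of what I mean (the names here are hypothetical), injecting a clock interface instead of reading the system clock directly plays the same role as an IO effect would, except that nothing in the types forces you to do it:

    use std::time::{SystemTime, UNIX_EPOCH};

    // A "dependency" that abstracts over a side effect: reading the time.
    trait Clock {
        fn now(&self) -> u64;
    }

    // The real implementation performs the side effect.
    struct SystemClock;

    impl Clock for SystemClock {
        fn now(&self) -> u64 {
            SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
        }
    }

    // A fixed clock makes unit tests deterministic.
    struct FixedClock(u64);

    impl Clock for FixedClock {
        fn now(&self) -> u64 {
            self.0
        }
    }

    // The application logic only sees the interface, never the side effect.
    fn greeting(clock: &dyn Clock) -> &'static str {
        if clock.now() % 86_400 < 43_200 {
            "good morning"
        } else {
            "good evening"
        }
    }

    fn main() {
        println!("{}", greeting(&SystemClock));
        println!("{}", greeting(&FixedClock(0)));
    }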


I agree. It's not for everyone but there are many different kinds of developers.

If you are, and can hire, smart people who can use generics without introducing extra tech debt and complexity, I definitely recommend languages with generics like Rust and Haskell. But the problem is that I'm not smart enough.


Thank you for your feedback! Can you open an issue on the GitHub repository? I would appreciate it if you could add some concrete use cases, as then it'll be clear what kinds of options need to be implemented.


Actually, I totally agree with you. I decided on the number based on the default maximum number of open files on Linux because I was not sure about common limits on concurrent connections between clients and HTTP servers. Alternatively, it should probably regulate the number of requests per second sent to the same host. If someone suggests other options, I would adopt them.
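
The per-host idea would be roughly something like this sketch (just an illustration, not how the tool currently works):

    use std::collections::HashMap;
    use std::thread::sleep;
    use std::time::{Duration, Instant};

    // A minimal per-host rate limiter: it waits until at least `interval`
    // has passed since the previous request to the same host.
    struct RateLimiter {
        interval: Duration,
        last_request: HashMap<String, Instant>,
    }

    impl RateLimiter {
        fn new(requests_per_second: u32) -> Self {
            Self {
                interval: Duration::from_secs(1) / requests_per_second,
                last_request: HashMap::new(),
            }
        }

        fn wait(&mut self, host: &str) {
            if let Some(last) = self.last_request.get(host) {
                let elapsed = last.elapsed();
                if elapsed < self.interval {
                    sleep(self.interval - elapsed);
                }
            }
            self.last_request.insert(host.to_string(), Instant::now());
        }
    }

    fn main() {
        let mut limiter = RateLimiter::new(10); // at most ~10 requests/second/host
        for _ in 0..3 {
            limiter.wait("example.com");
            println!("request sent at {:?}", Instant::now());
        }
    }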


A good default might be based on current browser behavior. Keep in mind http2 might make everything use one connection but allow 100 concurrent requests.


I think your approach is fine for local dev servers.

Maybe just introduce something like an exponential backoff algorithm if you start getting too many 5xx errors or the requests are hanging.
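
Something along these lines, purely as a sketch (send_request here is a stand-in for whatever HTTP client the tool uses):

    use std::thread::sleep;
    use std::time::Duration;

    // Retry a request with exponential backoff on server errors.
    fn fetch_with_backoff(url: &str, max_retries: u32) -> Result<u16, String> {
        let mut delay = Duration::from_millis(100);

        for attempt in 0..=max_retries {
            let status = send_request(url);

            if status < 500 {
                return Ok(status);
            }

            if attempt < max_retries {
                eprintln!("got {status}, retrying in {delay:?}");
                sleep(delay);
                delay *= 2; // double the wait after every failed attempt
            }
        }

        Err(format!("{url} kept returning server errors"))
    }

    // Stand-in for a real HTTP client call.
    fn send_request(_url: &str) -> u16 {
        503
    }

    fn main() {
        println!("{:?}", fetch_with_backoff("https://example.com", 3));
    }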


Both Chrome and Firefox limited the number of connections to a server to six, if memory serves. I'm not certain whether those limits have changed or whether the number differs between HTTP/1.1 and HTTP/2.


The limit is per _hostname_ not per _server_ (unless things have changed in the last 10 years).

This is why you'll see assets1.foo.com, assets2.foo.com, etc. all pointing to the same IP address(es). Server-side code picks one based on a modulus of a hash of the filename, or something similar, when rendering the HTML to get additional pipelines in the user's browser. Not sure how, or if, this is done in SPAs.
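
A tiny sketch of that trick (the hostnames are placeholders, as above):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Hash the asset path and use it to pick one of several hostnames, so
    // the same asset always lands on the same shard.
    fn asset_host(path: &str, shard_count: u64) -> String {
        let mut hasher = DefaultHasher::new();
        path.hash(&mut hasher);
        format!("assets{}.foo.com", hasher.finish() % shard_count + 1)
    }

    fn main() {
        for path in ["/css/site.css", "/js/app.js", "/img/logo.png"] {
            println!("{} -> {}", path, asset_host(path, 3));
        }
    }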


Why don't you say Attack On The Titan?


There might be a trademark issue with using that phrase.

