
> I mostly use royalty free music of Japanese origin

Like what? I’d like to hear what royalty free Japanese music sounds like.


Here's the music that I used in my most recent game (which I call: Fore! Track): https://twitter.com/gingerbeardman/status/173255553386375169...

This track is by an amazing, long-standing musician who goes by the name watson. I listened to their entire catalogue and made a playlist of my favourites to be able to easily select a piece that fits any game I'm working on. Their umbrella website MusMus is https://musmus.main.jp and whilst you'll need to run browser translation on the site, you will see all their music sorted across many categories. That said, it is definitely easier to listen to their music through the albums they've released on streaming platforms: Spotify and Apple Music for certain, and probably others. They're also on YouTube.

Of course, there are many more Japanese services and people writing great royalty free music. Another of my games used royalty free music by a young Japanese musician named YuyakeMonster, again picked specifically to fit a particular game: https://soundcloud.com/mac-vogelsang/sets/sparrow-solitaire


It would be nice if I stopped needing to update the fsproj file for imports and hierarchy. If this could be dynamically built using a topological graph approach, that would be a huge improvement. I don't use a heavy IDE, so it's kinda tedious to need to update the fsproj file when I want to add a new file.


Personally I strongly prefer F#'s way, and I always enable it manually in C# by setting `<EnableDefaultCompileItems>` to `false`. It's so much easier to debug e.g. compiler deficiencies/bugs when you can just binary chop to find the file that's causing a problem, and the reason you can do this is the linear ordering of explicitly listed files. (In C# I generally just give up and hope that someone else will do it, because it takes so much longer; in F# it's trivial.)
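
For anyone who hasn't seen one, here's a minimal fsproj sketch (file names and target framework invented) showing the explicit, ordered file list; in a csproj you'd get the same behaviour by setting `<EnableDefaultCompileItems>` to `false` as above and listing the files yourself:

    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>net8.0</TargetFramework>
      </PropertyGroup>
      <ItemGroup>
        <!-- Order matters: a file may only use definitions from files listed above it. -->
        <Compile Include="Types.fs" />
        <Compile Include="Domain.fs" />
        <Compile Include="Program.fs" />
      </ItemGroup>
    </Project>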

What are you using as your development environment? Personally I don't find the overhead in Vim to be too onerous (`yypf"ci"NewFile.fs`), and even in Rider I routinely edit fsproj files manually. (I even create new projects mostly by hand.)


Are there any dev kits to tinker with this? How does one get started?


Read the linked article? https://docs.xilinx.com/v/u/en-US/microblaze-quick-start-gui.... It's a Xilinx FPGA design, so a Xilinx dev board is where to start ...


You will find the entry-level boards (ranging from $150 to ~$4000) here:

https://www.xilinx.com/products/boards-and-kits/cost-optimiz...


I'd go with any cheap[0] Xilinx one that supports MicroBlaze soft CPUs.

If all you need is a RISC-V softcore, then there are plenty of options outside of the Xilinx ecosystem. I like anything yosys/nextpnr supports well.

0. https://www.joelw.id.au/FPGA/CheapFPGADevelopmentBoards


This might give businesses the ammo they need to require the employees to come into the office.


This law has been in effect for years.


It's been in effect for office work. CA is one of the few states expanding this to remote work (according to the article).


No, the article is pointing out that it's always applied to remote work, which was specifically clarified with additional laws in 2022.


According to the judge, which was news to Amazon. Otherwise they wouldn't have tried to dismiss.


They are required to pay all the necessary expenses in the office too.


Yes, but they have established processes, cost structures and real estate commitments to support that.


Am curious about QUIC and HTTP3 as well.


QUIC requires the initial handshake packets to be at least 1200 bytes, and sets the anti-amplification limit of 3x [0]. This means that the server can typically send up to 3600 bytes in response (unless the client's handshake message exceeds one packet, which usually only happens if there is a post-quantum key share in it). 3600 bytes is usually enough, unless your certificate chain is too large, in which case you'd need to compress it. [1] is a nice overview of the problem.

(full disclosure: I worked on some of this stuff)

[0] https://datatracker.ietf.org/doc/html/rfc9000#name-address-v...

[1] https://www.fastly.com/blog/quic-handshake-tls-compression-c...
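
To make the arithmetic concrete, here's a rough sketch (the names and helper below are mine, not from the RFC) of the bookkeeping a server does for a not-yet-validated address:

    // Before the client's address is validated, a QUIC server may send at most
    // 3x the number of bytes it has received from that address (see the
    // Address Validation section linked above).
    let antiAmplificationFactor = 3

    // Bytes the server may still send to an unvalidated address.
    let sendBudget (bytesReceived: int) (bytesAlreadySent: int) =
        antiAmplificationFactor * bytesReceived - bytesAlreadySent

    // A client Initial is padded to at least 1200 bytes, so the server's first
    // flight is typically capped at 3 * 1200 = 3600 bytes.
    let firstFlightCap = sendBudget 1200 0   // 3600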


QUIC explicitly mentions that it is vulnerable to amplification attacks in the RFC. It suggests sending an extra packet to mitigate, at which point I believe the advantage of QUIC is lost as far as establishing connections is concerned.


I believe you're referring to the stateless retry mechanism? It's designed to be used only when the server is actually under attack (either as an amplification vector or exhaustion of its own connection capacity). The idea is that the server establishes some threshold for pending QUIC connections, and only if that is exceeded does it start requiring clients to complete the extra stateless, SYN-cookie style roundtrip to validate their source address. So it maintains the advantage of one less roundtrip than TCP+TLS unless the server is receiving large amounts of connections that are not progressing in a timely manner.
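
In other words, something like this hypothetical sketch (the threshold and names are invented):

    // Only require the extra stateless round trip (a Retry token, SYN-cookie
    // style) once pending, unvalidated connections exceed some threshold.
    let pendingConnectionLimit = 10_000   // made-up number for illustration

    let shouldSendRetry (pendingUnvalidated: int) =
        // Normal load: accept directly, keeping the round-trip advantage over TCP+TLS.
        // Suspected attack: validate the source address before committing state.
        pendingUnvalidated > pendingConnectionLimit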


If the startup is tiny (less than 10) then I would give the benefit of the doubt to the company. At that size, it’s mostly tapping your network and not having any full time recruiters who can prioritize things other than “I need somebody, anybody! Help!”


Sure, but why would the founders not know any women?


Another analogue would be dating. It’s hard to network or build a lasting relationship with people you’ve never met in person.


> The only group not benefitting from RTO are the actual workers.

That depends on the worker’s goals. If your goal is to have work/life balance or “personal” productivity then, yeah, I empathize. But there is more investment I put back into my team when I’m in person. It’s not just my code output. I find it very hard to mentor young engineers remotely.

Also, if you are a stockholder, like many engineers, and if teams do in fact have more “synergy” (bleh) then it’s also good for the workers via stock based comp.


When I was young and learning my trade I had a group of older co-workers who decided they liked me enough to invite me to join their lunch group. It was a very informal mentorship: they showed me the ropes, provided guidance and generally made me better at my job. This was all because they knew me enough and liked me enough to put in the effort; they got little out of it other than enjoying watching another guy come up in our field. Now I'm a grey beard with less than a decade left and I'd like to return the favor, but to whom? I don't work in an office and I have no relationship with most of my co-workers; I do my job and they do theirs. Management could force me to mentor one or more of the engineers, but that probably wouldn't work out very well for anyone. There's no relationship with my co-workers anymore because we never see each other, and while Slack and Zoom are great for a 20-something, they just don't work for me. In the end I'm the one with the institutional knowledge that I won't be passing along, to the loss of both the employee and the company.


> That depends on the worker’s goals.

If the worker's goals aren't pro-worker then the business certainly won't be. Your goal of training others is admirable, but that can often be done remotely and won't stop them from getting rid of you the moment it is convenient.


I prefer working in person so from my PoV requiring RTO is pro-worker.


An often ignored part of the standards is how to represent durations (time spans).

See http://xml.coverpages.org/ISO-FDIS-8601.pdf (section: 5.5.4.2 Representation of time-interval by duration only, pg. 21)

I would like to see more JSON parsers in static languages allow me to define a field as a time span and serialize it into the valid format.

For an example, see this proposal in Crystal: https://github.com/crystal-lang/crystal/issues/11942

    Example:  A duration of 15 days, 5 hours, and 20 seconds would be:

      P15DT5H0M20S

    A duration of 7 weeks would be:

      P7W


> ISO/FDIS 8601 (section: 5.5.4.2 Representation of time-interval by duration only, pg. 21)

Alternatively, simply refer to the ABNF definition of “duration” in RFC 3339, Appendix A.


Why do you want a string format for data that can be structured?

    "duration": {
        "days": 15,
        "hours": 5,
        "seconds": 20
    }
Now it's not the job of your JSON parser to understand the semantics of your data. It's your input validator's - which is how it should be, imo.

Note you will have to have some fallible conversion step from whatever JSON chooses to represent that data as anyway.


Because there can be benefits to having it compact and yet human readable?

    2023-08-24T20:30:00Z
versus

    "time" : {
        "year" : 2023,
        "month" : 8,
        "day" : 24,
        "hour" : 20,
        "minute" : 30,
        "second" : 0,
        "timezone" : "UTC",
    }
Call me lazy, but typing the latter took me 20 times longer. I totally would use it for internal state, but I would not store it like that or expose that to users ever.


But you're talking about JSON, not a concise string that is exposed to users, and not a data format optimized for size. If you want more than a basic type you give it structure - and durations are not trivial types nor common enough to deserve their own representation.

Durations optimized for storage without ambiguity would be the tuple (u64, u32, u8) where the first value is seconds, second value is nanoseconds (if precision is needed) and final value is epoch.

Durations displayed to a user wouldn't ever be stored so the point is kind of moot.

Optimizing for "time it takes someone to write it once" is kind of dumb since it happens once while reading it happens often, and parsing even more likely.


The ISO format can be used for a lot of things which your JSON format could never be used for. ISO works great in file names, for example.

And it's really nice in some circumstances (such as the file names case) that alphabetical sort is identical to chronological sort.

And just... why would you spend 10x the space to represent timestamps when ISO timestamps (or, well, RFC 3339) works just fine?


It just depends on how you are using the JSON and where it fits in an application. If I'm using a static language, it is common for me to have a TimeSpan (or similar) type and standard rules for how it can be serialized (e.g. toString, toJson, etc). In this scenario, I don't care how it is represented in JSON, I just want convenient (de)serialization.

With your example, I would need to create custom parsing logic and handle a dynamic number of JSON fields. But if a TimeSpan class had JSON deserialization built in based on the ISO 8601 format, then I wouldn't need to do anything special. That's the benefit of using the standards. Same if I wanted to convert the JSON stringified format into a Postgres time span: there isn't any special parsing logic I need to do.

Yes, it's just a string in JSON, so it's not semantically special. But other languages that have a TimeSpan type could take advantage of the standard serialized format.

Here is an example of what it could look like in F#, no special logic for deserializing into a custom type:

    open System
    open System.Text.Json

    type MyEvent = {
      myDuration: TimeSpan;
      createdAt: DateTimeOffset;
      eventId: string;
    }

    let rawJson = 
      """
        {
          "myDuration":"P15DT5H0M20S",
          "createdAt": "2023-08-24T20:30:00Z",
          "eventId": "e_1234"
        }
      """
      
    let myEvent = JsonSerializer.Deserialize<MyEvent>(json = rawJson)


I see your point, but I guess I don't see the difference between it being structured as a string (with a second deserialization step after JSON deserialization) and structuring it as an object. I do see the convenience of a standard way to deserialize the timestamp.

But either way you need multiple pieces of data for the duration to be correct and useful. Without the created-at time in your example the duration will be invalid in the presence of leap years/seconds.

If you want to unambiguously encode a duration of time, it needs to be in the smallest unambiguous unit that is meaningful (usually seconds/nano seconds). That will allow your duration to be correctly used without any additional logic/metadata packed along with it.


I actually used this in my F# code yesterday. The forall function does the same and it was exactly what I needed.

https://fsharp.github.io/fsharp-core-docs/reference/fsharp-c...
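
For anyone unfamiliar with it, a trivial usage sketch (values invented):

    // List.forall is true only if the predicate holds for every element.
    let allEven = [2; 4; 6] |> List.forall (fun x -> x % 2 = 0)   // true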

