Yes, we use the same technique for our larger apps. We treat the Protobuf layer as a distinct layer tied to the API; we have two packages, "protofy" and "unprotofy", so in the server implementation you do something like:

  func (srv *Server) GetFoo(
    ctx context.Context,
    req *proto.GetFooRequest) (*proto.GetFooResponse, error) {
    foo, err := srv.db.getFoo(req.GetId())
    if err != nil { ... }
    pFoo, err := protofy.Foo(foo)
    if err != nil { ... }
    return &proto.GetFooResponse{
      Foo: pFoo,
    }, nil
  }
    
Obviously, a bit less pretty in reality, but the principle is the same. Having two packages makes it easier to read: e.g. protofy.Foo() is always about taking a "native" Foo and turning it into a *proto.Foo, and unprotofy.Foo() is the reverse.
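
To make that concrete, here's a minimal sketch of what one of the protofy functions can look like. The import paths and the Foo fields (ID, Name, CreatedAt) are invented for illustration; the real ones depend on your schema:

  // Package protofy converts native model types into their
  // generated Protobuf counterparts. Paths and field names
  // here are made up for the sake of the example.
  package protofy

  import (
    "fmt"

    proto "example.com/myapp/gen/foopb"
    "example.com/myapp/model"
  )

  // Foo turns a native *model.Foo into a *proto.Foo. Returning
  // an error lets us reject values that can't be represented on
  // the wire instead of silently zeroing them.
  func Foo(f *model.Foo) (*proto.Foo, error) {
    if f == nil {
      return nil, fmt.Errorf("protofy: nil Foo")
    }
    return &proto.Foo{
      Id:        f.ID,
      Name:      f.Name,
      CreatedAt: f.CreatedAt.Unix(), // time.Time -> int64 on the wire
    }, nil
  }

unprotofy.Foo() is just the mirror image, mapping a *proto.Foo back to a *model.Foo.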

For TypeScript, I'm using ts-protoc-gen [1]. The weird part, which I don't fully understand, is that the serialization code is emitted as plain JavaScript: all the type definitions end up in a .d.ts file, but the client itself is a .js file. You still get type safety and autocompletion, just as with plain TS, but it still seems weird to me.

[1] https://github.com/improbable-eng/ts-protoc-gen




I believe that means you're more or less using it as intended. Protobuffers are intended for serialization. In Go, serialization is often handled at a separate layer from the business logic.




