• 6 Posts
  • 29 Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • There is no way legacy projects are going to switch to Deno. Even if Deno were 100% compatible, the only advantage Deno provides is slightly higher performance. Node’s complexity problem? All those configs need to be supported for compatibility anyway. TypeScript? The project already has tsconfig.json set up, so they might as well continue to use tsx. Security? I bet users will just get tired and use -A all the time.
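    For example, this is what the permission model buys you when you don’t reach for -A (a minimal sketch; the file name and env var are made up, the flags are Deno’s real permission flags):

    ```typescript
    // server.ts: needs env and network access at runtime.
    const port = Number(Deno.env.get("PORT") ?? "8000"); // requires --allow-env
    Deno.serve({ port }, () => new Response("ok"));      // requires --allow-net

    // deno run --allow-env --allow-net server.ts   <- scoped permissions
    // deno run -A server.ts                        <- grants everything up front
    ```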

    To benefit from Deno, Node’s legacy needs to be shed.

    Wine is a different case. Wine makes sense because Windows is so much worse than Linux that even with spotty game compatibility, Linux offers a better experience. For Linux users, the alternative to Wine is not switching to Windows; it is not being able to play games at all. Legacy Node projects, on the other hand, have a very easy alternative… just continue to use Node.

    And btw Bun is making the same mistake.


  • “Through compatibility, Deno established an upgrade path.”

    Sure, but Node compatibility needs to work, and it needs to work reliably. That means every last detail of Node needs to be supported.

    This is what I am trying to convey… the engineering effort needed to build an objectively better JS runtime while staying Node compatible is likely too great. Many popular Node projects already have issues with Deno. Now imagine what the compatibility landscape will look like with every single proprietary Node project out there.

    So instead of trying to replace Node.js or offer an upgrade path for existing Node projects, incentivize the formation of an ecosystem around Deno.



  • That advertisement would be interpreted as Node C’s advertisement.

    The plan is to treat public keys as a node’s identity, with a trust mechanism similar to OpenPGP’s (e.g. treat any node whose key is signed by a master key as a cluster member).

    None of the encryption work is done yet, and it is not a priority right now. I first need to implement transitive node detection, actually forward packets between nodes, and some way to store and manage routes, and only then the trust and encryption mechanisms, before I’d dare to test this stuff on a real network.
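    For the curious, here is the kind of membership check I have in mind (a minimal sketch assuming Ed25519 via WebCrypto; all names are hypothetical and none of this is implemented yet):

    ```typescript
    // A node counts as a cluster member only if its public key carries a
    // valid signature from the cluster's master key.
    async function isClusterMember(
      masterPublicKey: CryptoKey, // imported with algorithm { name: "Ed25519" }
      nodePublicKey: Uint8Array,  // candidate node's raw public key bytes
      signature: Uint8Array,      // master key's signature over those bytes
    ): Promise<boolean> {
      return await crypto.subtle.verify("Ed25519", masterPublicKey, signature, nodePublicKey);
    }
    ```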


  • Thank you… I had to learn Kubernetes for work, and it was around 2 weeks of time investment. Then I figured out I could use it to fix my docker-compose pains at home.

    If you run a lot of services, I can attest that Kubernetes is definitely not overkill; it is a good tool for managing complexity. I have 8 services on a single-node Kubernetes cluster, and I like how I can manage the configuration for each service independently of the others and of the underlying infrastructure.
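    For example (a minimal sketch; the service name and setting are made up), each service gets its own namespace, so its configuration lives next to it and changing one service never disturbs another:

    ```typescript
    // Plain objects in the manifest shape kubectl accepts as JSON/YAML.
    const namespace = {
      apiVersion: "v1",
      kind: "Namespace",
      metadata: { name: "redmine" },
    };

    const config = {
      apiVersion: "v1",
      kind: "ConfigMap",
      metadata: { name: "redmine-config", namespace: "redmine" },
      data: { RAILS_ENV: "production" }, // hypothetical per-service setting
    };
    ```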




  • If one service needs to connect to another, I have to add a shared network between them. The services then essentially share a common DNS namespace, and DNS resolution would routinely leak from one service to another and cause outages. E.g. if I connected the Gitlab and Redmine containers to the OpenLDAP container, sometimes Redmine’s nginx container would reach the Gitlab container instead of the Redmine container, and the Gitlab container would access Redmine’s DB instead of its own.

    I maintained some workarounds: starting Gitlab after Redmine worked fine, but starting them the other way round triggered the issue. Switching to Kubernetes and replacing the cross-service connections with network policies solved it for me.
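    A minimal sketch of the kind of policy I mean (labels and namespace names are made up): only Redmine pods may reach OpenLDAP, so a stray DNS answer can no longer turn into a cross-service connection:

    ```typescript
    // NetworkPolicy as a plain object (kubectl accepts JSON): only pods
    // labeled app=redmine in the "redmine" namespace may reach OpenLDAP on 389.
    const openldapPolicy = {
      apiVersion: "networking.k8s.io/v1",
      kind: "NetworkPolicy",
      metadata: { name: "allow-redmine-to-openldap", namespace: "openldap" },
      spec: {
        podSelector: { matchLabels: { app: "openldap" } }, // targets OpenLDAP pods
        policyTypes: ["Ingress"],
        ingress: [{
          from: [{
            namespaceSelector: {
              matchLabels: { "kubernetes.io/metadata.name": "redmine" },
            },
            podSelector: { matchLabels: { app: "redmine" } },
          }],
          ports: [{ protocol: "TCP", port: 389 }], // LDAP
        }],
      },
    };
    ```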


  • akash_rawal@lemmy.world to Selfhosted@lemmy.world · Should I move to Docker? · 2 years ago

    As someone who has been running Kubernetes on my home server for 2 years, I find containers much more maintainable than installing everything directly on the server.

    I tried docker-compose first to manage my services. It works well for 2–3 services, but as the number of services grew they started to interfere with each other, and at that point I switched to Kubernetes.