• melfie@lemy.lol · 1 day ago

    It’s really all about using Conway’s Law to your own benefit.

    If adding features or fixing bugs consistently requires one person on a fairly small team to open PRs across multiple repos, and changes can only really be verified in a staging environment where everything runs together, then it’s an anti-pattern.

    However, if 100 developers or more are working in a single repo, it’s past time to split it up into appropriate bounded contexts and allow smaller teams to take ownership.

    I worked at a place where hundreds of developers worked on a single Rails monolith / monorepo, and enterprise architects insisted that 100,000+ RSpec tests requiring PostgreSQL had to run in CI for every PR merge. Every build took 45 minutes and used ungodly amounts of cloud compute. The company ended up building its own custom CI system to reduce its seven-figure CI spend so it could keep ignoring the underlying problem.

    • marcos@lemmy.world · 1 day ago

      • You should really not need to do a PR across multiple repos. If you do, you are splitting your code up wrong. Some functionality may require multiple PRs, but you should always be able to do those at different times and test them separately.

      • Monorepo tools are precisely software that emulates the features of a multi-repo so that you can have thousands of people working in the same repository. There are also multi-repo tools that emulate the features of a monorepo, but people don’t hype those online because they are simple and free.

      • Pup Biru@aussie.zone · 6 hours ago

        You should really not need to do a PR across multiple repos.

        different ways of treating PRs… it’s a perfectly valid strategy to say “a PR implements a specific feature”, in which case you might work in a backend, a frontend, and a library… of course, those PRs aren’t intrinsically linked (though they do have dependencies between them… heck i wouldn’t even say it’d be uncommon or wrong for the library to have schemas that require changes in both the frontend and the backend)

        if you implement something in eg the backend, and then get retasked with something else, or the feature gets dropped, then sure, it’s still “working”, but leaving unused code like that would be pretty bad… backend and frontend PRs tend to be fairly closely tied to each other

        a monorepo does far more than i think you think it does… it’s a relatively low-infrastructure way of sharing internal libraries across different parts of your codebase, deduplicating external libraries (and keeping their versions consistent where required), coordinating changes, and plenty more (rough sketch below)
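
        to make the shared-library point concrete, here’s a rough sketch of what that could look like in a typescript monorepo (three files shown in one block)… the @acme/schemas package name and the User type are made up for illustration, and i’m assuming something like npm/pnpm workspaces wiring the packages together:

        ```typescript
        // packages/schemas/src/user.ts — internal library shared by both sides
        export interface User {
          id: string;
          displayName: string;
          createdAt: string; // ISO-8601 timestamp
        }

        // packages/backend/src/users.ts — backend maps a DB row onto the shared shape
        import type { User } from "@acme/schemas";

        export function serializeUser(row: {
          id: string;
          display_name: string;
          created_at: Date;
        }): User {
          return {
            id: row.id,
            displayName: row.display_name,
            createdAt: row.created_at.toISOString(),
          };
        }

        // packages/frontend/src/api.ts — frontend consumes the exact same shape
        import type { User } from "@acme/schemas";

        export async function fetchUser(id: string): Promise<User> {
          const res = await fetch(`/api/users/${id}`);
          if (!res.ok) throw new Error(`failed to load user ${id}`);
          return (await res.json()) as User;
        }
        ```

        change the schema in one place and the compiler flags every backend and frontend call site that needs to follow, instead of the mismatch only surfacing in staging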

        can these things be achieved with build systems and deployment tooling? absolutely… but if you’re just a small team, a monorepo could be the right call

        of course, once the team grows in size it’s no longer the correct option… real tooling is probably going to be faster and better in every way… but a monorepo allows you to choose when to replace different parts of the process… it emulates an environment where everything is fully separated