As teams and applications experience growth, it’s critical to adopt architectures that optimize for clear code ownership and build isolation, and that provide efficient delivery of code. While many projects start small with just one or two repositories (for example, frontend and backend), this approach often becomes difficult to maintain as the codebases expand. At LinkedIn, we develop many applications that receive regular contributions from a multitude of teams, with each team owning distinct products or features. Our infrastructure teams enable developers to work effectively within these large applications without being impacted by the sheer scale of each codebase. In the face of challenging productivity problems, our LinkedIn Talent Solutions (LTS) teams recently adopted yarn workspaces, unlocking a 97% improvement in lead time for delivering commits to our deployment pipeline, reduced from 39 hours to 125 minutes.

LinkedIn Talent Solutions is the central piece of our hiring ecosystem, housing a broad spectrum of products including LinkedIn Recruiter, Jobs, Talent Hub, Career Pages, Talent Insights, and more. We own the foundations of this ecosystem and build distributed, highly scalable products that connect talent with opportunity at massive scale. These product suites enable recruiters, job seekers, and enterprises to source, connect, and hire talent from LinkedIn’s economic graph, generating eight hires a minute on LinkedIn. This monumental task is made possible by our ongoing efforts to invest in building consistent, quality code at scale.

When we first began developing what is now our largest Talent Solutions product suite, the frontend codebase was structured as a classic monolithic application. As we built out features, the repository grew organically according to product needs, much like most projects. Over time, however, the monolith outgrew its usefulness as unclear ownership, increasing build times, and other pain points cropped up in the ever-growing application. Maintenance work, such as migrations and upgrades, became difficult to conduct and required multiple teams to closely coordinate in order to land fixes across the codebase. Changes made to any part of the application required execution of our full test suite, even for unaffected features. Despite having been a reasonable architecture for launching the project, our needs had eclipsed the monolithic approach.

To solve these problems, we began extracting portions of our code into separate repositories, each aligned with an area of functionality. These codebases were owned by the team responsible for that part of the product, and each could be built and tested fully in isolation from the overall application. Because each repository contained only a portion of the code shipped to production, our engineers experienced improved build times and faster feedback cycles during local development. Each repository could be versioned and published independently, decoupling unrelated product areas. This approach also enabled the separation of foundational infrastructure from our core application, as well as code sharing between applications as we expanded into new ventures.

The multi-repo architecture served us well for several years, but our continued growth led to a rapid expansion of code. Four years later, we had over 70 distinct repositories housing frontend code exclusively for Talent Solutions applications. While we still reaped many of the benefits intended by this approach, several pain points had cropped up over the years.

With our code spread across so many repositories, developers relied heavily on tools like yarn link to aid local development and to test features end-to-end within our application. Yarn link is a command that connects local packages to one another, enabling developers to run their code across projects with unmerged changes. We found that the complexity of our dependency graph made such tooling unreliable; instead of linking one package to another, we’d often be linking three or four codebases together at once. Dependency management also became difficult, with version upgrades and migrations requiring boilerplate changes to be repeated upwards of 70 times, depending on how many packages were impacted. We also relied heavily on automated tooling to upgrade our packages as they were published, but with dozens of commits being merged every day, even automation could not keep up with our pace. Given the scale of the problem, we would have needed to invest much more in custom automation tooling to keep up with our growth trajectory.
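For readers unfamiliar with workspaces, a minimal sketch of what a consolidated repository’s root `package.json` could look like (the package name and directory layout here are hypothetical illustrations, not our actual structure):

```json
{
  "name": "talent-solutions",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

With a layout like this, a single `yarn install` at the repository root resolves every package under `packages/` together, symlinking them into a shared `node_modules` so that cross-package changes are picked up immediately, without running `yarn link` between separately cloned repositories, and so that a shared dependency can be upgraded in one place rather than once per repository.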